ok so i got something working WITHOUT using depth, just clever alpha blending.
https://vimeo.com/135236338
I used command buffers for the first time! They were AMAZING. I’ve got some weird perspective warping on the 3d geo from Unity’s rendering… i think i’m getting my wspos from the depth buffer incorrectly. N00b statement, but man, the depth buffer is so lame. Why does it have to be in some bizarre format that doesn’t just represent the distance to the pixel? i spent much of my time just converting the Unity depth buffer into a “distance” buffer to work with my raytracer.

It still feels like a huge hack, and i still have one big problem: i’d wanted to render my atmospherics AFTER rendering everything else, but since there will be no depth buffer available at that point, i’m a bit screwed. However, if i can figure out the whole writing-to-the-G-Buffer thing (failed once… but… i’m sure i can get it), then maybe i can embed my distance buffer into one of the unused G-Buffer channels.
I’ll probably keep updating this thread in case someone comes up against similar stuff. I really appreciate the comment, Zuntaos, it kept me going!
So almost as a blog of sorts…
for the command buffers, i just procedurally draw a fullscreen quad. So let’s imagine you’ve got a little class that constructs a quad, makes it big, and attaches it to the camera:
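something like this rough sketch (the internals here are my own guess, but it matches the renderQuad fields used below: obj, mesh, mesh_renderer):

using UnityEngine;

public class RenderQuad
{
    public GameObject obj;
    public Mesh mesh;
    public MeshRenderer mesh_renderer;

    public RenderQuad(Camera cam)
    {
        // grab unity's built-in 1x1 quad primitive
        obj = GameObject.CreatePrimitive(PrimitiveType.Quad);
        mesh = obj.GetComponent<MeshFilter>().sharedMesh;
        mesh_renderer = obj.GetComponent<MeshRenderer>();

        // parent it to the camera just past the near plane, scaled to fill the view
        obj.transform.SetParent(cam.transform, false);
        float dist = cam.nearClipPlane + 0.01f;
        obj.transform.localPosition = new Vector3(0f, 0f, dist);
        float h = 2f * dist * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
        obj.transform.localScale = new Vector3(h * cam.aspect, h, 1f);
        // (depending on which way the Quad primitive faces, you may need a 180° flip here)
    }
}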
so make a little function that builds your command buffers, because we’re going to clear and re-add them each frame (the per-frame wrapper is sketched after this walkthrough):
// assumes "using UnityEngine;" at the top (CommandBuffer lives in UnityEngine.Rendering)
var commandBufferTerrain = new Rendering.CommandBuffer();
var commandBufferComposite = new Rendering.CommandBuffer();
commandBufferTerrain.GetTemporaryRT(0, (int)(Screen.width/1.8f), (int)(Screen.height/1.8f)); // GetTemporaryRT wants ints, so cast the downsampled size
commandBufferTerrain.SetRenderTarget(0);
so the 0 is the RT’s ID, which is a nice simple system. and now i’m downsampling the terrain to keep performance in check.
commandBufferTerrain.DrawRenderer(renderQuad.mesh_renderer,material);
commandBufferTerrain.GetTemporaryRT(1,-1,-1);
commandBufferTerrain.Blit(0,1);
commandBufferTerrain.ReleaseTemporaryRT(0);
so here i draw my fullscreen quad with the raytracing, then i get a new render target the size of the screen (the -1,-1 means camera pixel width/height), blit the small one into it, and release the first one. The temporary render targets seem to default to nearest-neighbor (point) filtering, so in the end i’ll probably just construct a render texture the old way in that code and use it with the command buffer.
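(though looking at the docs, there seems to be a GetTemporaryRT overload that takes a FilterMode, so something like this might be the cleaner fix:)

commandBufferTerrain.GetTemporaryRT(0, (int)(Screen.width/1.8f), (int)(Screen.height/1.8f), 0, FilterMode.Bilinear); // 0 = no depth buffer bits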
commandBufferTerrain.SetGlobalTexture("_Terrain",1);
commandBufferComposite.DrawMesh(renderQuad.mesh,renderQuad.obj.transform.localToWorldMatrix,compositeMaterial);
so i set the global “_Terrain” texture to the up-sampled render target, which is what my compositing material reads.
notice i’m using “DrawMesh” here because… uhh… yeah, i should probably switch it to DrawRenderer for consistency.
so you’ve set up the command buffers’ actions; now tell them when to execute (this is so cool):
camera.AddCommandBuffer(Rendering.CameraEvent.AfterEverything,commandBufferTerrain);
camera.AddCommandBuffer(Rendering.CameraEvent.AfterEverything,commandBufferComposite);
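and since this all gets rebuilt every frame, the wrapper around it looks roughly like this (my own structure and naming; the important bit is removing last frame’s buffers before re-adding):

// rough shape of the per-frame rebuild; structure and names are mine, not gospel.
// "camera" is the same Camera reference used in the AddCommandBuffer calls above,
// and the two buffers are stored as fields so we can remove them next frame.
void OnPreRender()
{
    if (commandBufferTerrain != null)
    {
        camera.RemoveCommandBuffer(Rendering.CameraEvent.AfterEverything, commandBufferTerrain);
        camera.RemoveCommandBuffer(Rendering.CameraEvent.AfterEverything, commandBufferComposite);
    }
    BuildCommandBuffers(); // everything shown above, ending with the two AddCommandBuffer calls
}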
so inside the shader i’ll need to do some compositing to lay the raytracing on top of the rest of the scene:
float3 getSceneWsPos(float3 dir, float4 uvproj){
    // uvproj is a projective coordinate, so sample the depth texture with the _PROJ macro
    float sceneDepth = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(uvproj));
    // remap the raw depth into a linear distance (_Near/_Far are my own uniforms fed from the camera)
    sceneDepth = Linear01Depth(sceneDepth) * _Far + _Near;
    // push out from the camera along the ray by that distance
    float3 sceneWspos = _WorldSpaceCameraPos.xyz + (dir * sceneDepth);
    return sceneWspos;
}
so dir is a normalized vector shooting out from the camera through the fullscreen quad (the basis for the raytracing). then i convert it into a world space position… it currently doesn’t work 100%. there is some weird shearing going on… i’m not sure why, might make a separate post asking the community if they know what’s going on.
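(in case anyone hits the same shearing: my current best guess is that linear depth is measured along the camera’s forward axis, not along each ray, so multiplying a normalized ray direction by it squashes everything off-center. a sketch of the fix under that assumption; getSceneWsPos_fixed and _CamForward are my own hypothetical names, with _CamForward fed camera.transform.forward from script:)

float3 getSceneWsPos_fixed(float3 dir, float4 uvproj){
    float raw = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(uvproj));
    float eyeDepth = LinearEyeDepth(raw);             // distance along camera forward, in world units
    float rayDist = eyeDepth / dot(dir, _CamForward); // stretch it to the distance along the ray itself
    return _WorldSpaceCameraPos.xyz + dir * rayDist;  // dir assumed normalized, as above
}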
so ok now you’ve got a wspos from raytracing and from the scene.
float sceneDistance = distance(_WorldSpaceCameraPos.xyz, sceneWspos) / _Far; // normalized so the far plane lands at 1.0
now this took me a while.
float mask = 0.0;
float sceneStencil = 1.0;
float skymask = hit.alpha; // not actually used below (leftover from an experiment)
// sceneDistance maxes out at 1.0 where nothing was rendered (sky), so zero both out there
if (sceneDistance >= 1.0){
    sceneStencil = 0.0;
    sceneDistance = 0.0;
}
// the raytraced terrain is closer than the scene geo: let it through, scaled by its own alpha
if (sceneDistance > hit.depth){
    mask = 1.0 * hit.alpha;
}
// flip the stencil so it reads 1.0 over the sky and 0.0 over scene geometry
sceneStencil = 1.0 - sceneStencil;
float alpha = mask + sceneStencil;
that creates the “alpha” you need to drop the terrain with grass and shit on top of your geo.
so in the compositing shader, assuming you’re doing
Blend One OneMinusSrcAlpha
then DON’T FORGET to pre-multiply your input. took me an hour of head-banging. here’s the output of the compositing shader, which is rendered as a fullscreen quad (so unity did all of ITS rendering, now i’m doing mine):
return float4(terrain.rgb * terrain.a, terrain.a); // pre-multiplied so it matches Blend One OneMinusSrcAlpha
boom.
so sadly it leaves me without a complete depth buffer of the terrain + objects. but since i struggle to understand how to correctly create a depth buffer, i’d be hosed anyways. i do LOVE the distance buffer, because it’s so simple to understand. anyways, i’ll try soon to see if i can get the G-Buffer writing done. Honestly… rasterization is a pain.