Modifying the G-buffer / depth buffer before rendering begins

Hey guys!
So I’m developing a new terrain system for my game that doesn’t use any polygons. It’s a raytracer, similar to the old Delta Force / Comanche games. I’d like to get it rendering like a first-class citizen in Unity but am a bit lost for information. I’ve been reading over the Unity docs a lot but need some help from the pros.
Here it is running in Unity:

  1. How do I modify the depth buffer from a full-screen quad’s fragment program before rendering of normal geometry begins?

It looks like I can use a command buffer at this stage: CameraEvent.BeforeDepthTexture.

At that event, I’m going to render a fullscreen quad where my terrain is rendered, and it’s at this stage that I want to place my depth from the terrain into the depth buffer. Can I do that directly from inside a fragment program? Also, what format is the depth in? Just a world-space distance from the camera? Or is it compressed somehow?
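For context, here’s roughly how I’m planning to hook it up on the C# side. Just a sketch; TerrainDepthHook, terrainDepthMaterial, and quadRenderer are placeholder names, not real project code:

    using UnityEngine;
    using UnityEngine.Rendering;

    public class TerrainDepthHook : MonoBehaviour
    {
        public Material terrainDepthMaterial; // fullscreen-quad material that raytraces the terrain
        public MeshRenderer quadRenderer;     // the fullscreen quad parented to the camera

        CommandBuffer buffer;

        void OnEnable()
        {
            buffer = new CommandBuffer { name = "Terrain depth" };
            // draw the quad before Unity resolves the depth texture
            buffer.DrawRenderer(quadRenderer, terrainDepthMaterial);
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeDepthTexture, buffer);
        }

        void OnDisable()
        {
            GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeDepthTexture, buffer);
            buffer.Release();
        }
    }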

  2. How do I fill out the rest of the G-buffer with information generated from a full-screen quad’s fragment program?

I’ll need to insert normals, diffuse, roughness, etc. I’m just unclear on how to write to the G-buffer and what format things are supposed to be in.

So I’m looking through the Deferred Decal example and I see this in one of the fragment programs, but it’s commented out:

//void frag(
// v2f i,
// out half4 outDiffuse : COLOR0, // RT0: diffuse color (rgb), --unused-- (a)
// out half4 outSpecRoughness : COLOR1, // RT1: spec color (rgb), roughness (a)
// out half4 outNormal : COLOR2, // RT2: normal (rgb), --unused-- (a)
// out half4 outEmission : COLOR3 // RT3: emission (rgb), --unused-- (a)
//)

Is this how I would write into the G-buffer from a fragment shader?

Anyone from Unity Technologies? Is this a really dumb question?

Yes, that is the way to write to the G-buffers. However, I have no idea if it’s possible to write to the camera depth texture. You can read the depth texture from shaders by declaring a sampler2D _CameraDepthTexture, which will be set by Unity.

There is this post on Stack Overflow: unity game engine - Cg: omit depth write - Stack Overflow

It seems to claim you can output depth from a frag shader like this:

void frag(v2f IN, out float4 color : COLOR, out float depth : DEPTH) // fragment shader

So maybe it can be combined with the COLOR0–COLOR3 outputs from the G-buffer.

Man, I’m not sure if out float depth : DEPTH works on a Mac… which blows. I tried it, but it just immediately makes my object invisible, no matter what depth I write.

Anyone have experience with this?

The hack I’m planning is a terrible one… there are all these cool features of rasterization that I want to take advantage of!

OK, so I got something working WITHOUT using depth, just clever alpha blending.

https://vimeo.com/135236338

I used command buffers for the first time! They were AMAZING. I’ve got some weird perspective warping on the 3D geo from Unity’s rendering… I think I’m getting my wspos from the depth buffer incorrectly. N00b statement, but man, the depth buffer is so lame. Why does it have to be in some bizarre format that doesn’t just represent the distance to the pixel? I spent much of my time just converting the Unity depth buffer into a “distance” buffer to work with my raytracer.

It still feels like a huge hack, and I still have one big problem: I’d wanted to render my atmospherics AFTER rendering everything else, but since there will be no depth buffer available at that point, I’m a bit screwed. However, if I can figure out the whole writing-to-the-G-buffer thing (failed once… but… I’m sure I can get it), then maybe I can embed my distance buffer into one of the unused G-buffer channels.

I’ll probably keep updating this thread in case someone comes up against similar stuff. I really appreciate the comment, Zuntaos; it kept me going!

So almost as a blog of sorts…

For the command buffers, I just procedurally draw a fullscreen quad. So let’s imagine you’ve got a little class that constructs a quad, makes it big, and attaches it to the camera:
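Something like this, just a sketch of the idea (the scale and offset numbers are placeholders you’d tune for your own near plane):

    using UnityEngine;

    // builds a quad, scales it up, and parents it to the camera so it always fills the view
    public class RenderQuad
    {
        public GameObject obj;
        public Mesh mesh;
        public MeshRenderer mesh_renderer;

        public RenderQuad(Camera cam)
        {
            obj = GameObject.CreatePrimitive(PrimitiveType.Quad);
            mesh = obj.GetComponent<MeshFilter>().sharedMesh;
            mesh_renderer = obj.GetComponent<MeshRenderer>();

            // park it just in front of the camera and make it big enough to always cover the screen
            obj.transform.SetParent(cam.transform, false);
            obj.transform.localPosition = new Vector3(0f, 0f, cam.nearClipPlane + 0.01f);
            obj.transform.localRotation = Quaternion.identity;
            obj.transform.localScale = Vector3.one * 10f;

            // it only gets drawn through the command buffers, so keep the normal renderer off
            mesh_renderer.enabled = false;
        }
    }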

So make a little function for your command buffers, because we’re going to clear and re-add them each frame (I’ll paste a rough version of the whole function a bit further down):

    var commandBufferTerrain = new Rendering.CommandBuffer();
    var commandBufferComposite = new Rendering.CommandBuffer();

    commandBufferTerrain.GetTemporaryRT(0, (int)(Screen.width / 1.8f), (int)(Screen.height / 1.8f));
    commandBufferTerrain.SetRenderTarget(0);

So the 0 is the RT’s ID, which is a nice, simple system. And now I’m downsampling the terrain to keep performance in check.
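Side note: instead of magic numbers like 0 and 1, the ID can also come from a shader property name, which reads a bit nicer. Rough idea, with a made-up name:

    // same setup, but with a named ID instead of a raw 0
    static readonly int terrainHalfResID = Shader.PropertyToID("_TerrainHalfRes");

    commandBufferTerrain.GetTemporaryRT(terrainHalfResID, (int)(Screen.width / 1.8f), (int)(Screen.height / 1.8f));
    commandBufferTerrain.SetRenderTarget(terrainHalfResID);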

    commandBufferTerrain.DrawRenderer(renderQuad.mesh_renderer, material);
    commandBufferTerrain.GetTemporaryRT(1, -1, -1);
    commandBufferTerrain.Blit(0, 1);
    commandBufferTerrain.ReleaseTemporaryRT(0);

So here I draw my fullscreen quad with the raytracing, then get a new render target the size of the screen (-1, -1), blit the half-res result into the full-screen render target, and release the first one. The temporary render targets seem to default to nearest-neighbor filtering, so in the end I’ll probably just construct a render texture the old way in that code and use it with the command buffer.
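Though it turns out GetTemporaryRT has overloads that take a FilterMode, so something like this might save me from building my own RT (haven’t verified it in my setup yet):

    // same half-res temporary RT, but asking for bilinear filtering instead of the default point filtering
    commandBufferTerrain.GetTemporaryRT(0,
        (int)(Screen.width / 1.8f), (int)(Screen.height / 1.8f),
        0,                    // no depth buffer on this RT
        FilterMode.Bilinear);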

    commandBufferTerrain.SetGlobalTexture("_Terrain", 1);
    commandBufferComposite.DrawMesh(renderQuad.mesh, renderQuad.obj.transform.localToWorldMatrix, compositeMaterial);

So I set my compositing material’s _Terrain texture to the upsampled render target.

Notice I’m using DrawMesh here because… uhh… yeah, I should probably switch it to DrawRenderer for consistency.

So you’ve set up the command buffers’ actions; now tell them when to execute (this is so cool):

    camera.AddCommandBuffer(Rendering.CameraEvent.AfterEverything, commandBufferTerrain);
    camera.AddCommandBuffer(Rendering.CameraEvent.AfterEverything, commandBufferComposite);
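Pulled together, the little per-frame function from earlier looks roughly like this (just a sketch; it assumes commandBufferTerrain and commandBufferComposite are member fields rather than locals, and it omits the calls already shown above):

    // called once per frame to rebuild the buffers, since sizes and materials can change
    void RebuildCommandBuffers(Camera camera)
    {
        if (commandBufferTerrain == null)
        {
            commandBufferTerrain = new Rendering.CommandBuffer();
            commandBufferComposite = new Rendering.CommandBuffer();
        }
        else
        {
            // throw away last frame's commands before re-recording them
            camera.RemoveCommandBuffer(Rendering.CameraEvent.AfterEverything, commandBufferTerrain);
            camera.RemoveCommandBuffer(Rendering.CameraEvent.AfterEverything, commandBufferComposite);
            commandBufferTerrain.Clear();
            commandBufferComposite.Clear();
        }

        // ... re-issue the GetTemporaryRT / DrawRenderer / Blit / SetGlobalTexture / DrawMesh calls from above ...

        camera.AddCommandBuffer(Rendering.CameraEvent.AfterEverything, commandBufferTerrain);
        camera.AddCommandBuffer(Rendering.CameraEvent.AfterEverything, commandBufferComposite);
    }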

Inside the shader I’ll need to do some compositing to lay the raytracing over the top of the rest of the scene:

float3 getSceneWsPos(float3 dir, float4 uvproj){
    // raw depth from Unity's depth texture
    float sceneDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uvproj);
    // remap the non-linear depth to a 0-1 linear range, then scale it out into world units
    sceneDepth = Linear01Depth(sceneDepth) * _Far + _Near;
    // walk along the view ray by that depth to get a world-space position
    float3 sceneWspos = _WorldSpaceCameraPos.xyz + (dir * sceneDepth);
    return sceneWspos;
}

So dir is a normalized vector shooting out from the camera through the fullscreen quad (the basis for the raytracing), and I convert it into a world-space position… it currently doesn’t work 100%; there is some weird shearing going on. I’m not sure why; I might make a separate post asking the community if they know what’s going on.

So OK, now you’ve got a wspos from the raytracing and from the scene:

    // normalized distance from the camera (divided by the far plane distance)
    float sceneDistance = distance(_WorldSpaceCameraPos.xyz, sceneWspos) / _Far;

Now, this took me a while:

    float mask = 0.0;
    float sceneStencil = 1.0;
    float skymask = hit.alpha;

    // treat anything at or past the far plane as empty sky (no scene geometry there)
    if (sceneDistance >= 1.0){
        sceneStencil = 0.0;
        sceneDistance = 0.0;
    }

    // terrain wins wherever the scene geometry is further away than the terrain hit
    if (sceneDistance > hit.depth){
        mask = 1.0 * hit.alpha;
    }
    sceneStencil = 1.0 - sceneStencil;
    float alpha = mask + sceneStencil;

That creates the “alpha” you need to drop the terrain with grass and shit on top of your geo.

So in the compositing shader, assuming you’re doing

    Blend One OneMinusSrcAlpha

then DON’T FORGET to pre-multiply your input. Took me an hour of head-banging. Here’s the output of the compositing shader, which is rendered as a fullscreen quad (so Unity did all of ITS rendering, now I’m doing mine):

        return float4(terrain.rgb * terrain.a, terrain.a); // pre-multiplied so Blend One OneMinusSrcAlpha composites correctly

boom.

So sadly it leaves me without a complete depth buffer of the terrain + objects. But since I struggle to understand how to correctly create a depth buffer, I’d be hosed anyway. I do LOVE the distance buffer, because it’s so simple to understand. Anyway, I’ll try soon to see if I can get the G-buffer writing done. Honestly… rasterization is a pain.

This is super cool! I started digging through GitHub to see if there were any other examples of the same trick, and I found one: rendering fractals in Unity5 - primitive: blog

It seems like they are using the same quad-in-front-of-the-camera trick to render to the G-buffer. As far as I can tell, it integrates well with both the depth buffer and the lighting system… which is super exciting. The download needed a little tweaking to get running right, but neat.

Oh dude, rad! Thanks for posting this. My project has come A LONG way since this post… which I kind of thought no one would care about.

Maybe they figured out the one thing I couldn’t: writing a depth value from the fragment shader. The Unity docs claimed this could be done on Windows, but not OS X, and I’m on my little MacBook Pro. If anyone hears about how to do this, PLEASE let me know. I’ve solved the problem of compositing it pixel-perfect, but man, was it an annoying hack. Plus I lose early Z-culling of rasterized geo.

latest:

https://vimeo.com/150130322

compositing solution:

https://vimeo.com/144382155
