In many first person shooter games built on deferred rendering engines, a very common way of rendering the first person mesh(es) is to divide the scene into two "layers": foreground and background.
Commonly, when vertices are projected, they get mapped into a particular znear … zfar range (in OpenGL, for instance, this is -1 … 1, while in DirectX I believe it's 0 … 1). The idea then is that each layer gets its own partition of this depth range: the foreground layer gets a very thin slice, for instance 0 … 0.0000001, and the background layer gets the rest, 0.0000001 … 1.
Now the z-depth values of the gun and the background are guaranteed to be separate from each other, preventing the gun from clipping into the world, while the gun can still be written to the G-buffers and properly receive light and shadows.
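As a rough sketch (not taken from any actual engine), the vertex shader of a foreground object might squash its projected depth like this. The shader name and the 0.01 slice factor are just illustrative, and it assumes a platform with a 0 … 1 depth range:

Shader "Custom/ForegroundDepthSlice" // hypothetical name
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f { float4 pos : SV_POSITION; };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                // Scale clip-space z: since depth = z / w, this squashes
                // the mesh into a thin slice near the camera while keeping
                // the depth ordering within the mesh itself intact.
                o.pos.z *= 0.01;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return fixed4(1, 1, 1, 1); // flat white; lighting omitted
            }
            ENDCG
        }
    }
}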
My question is: will this be (or is it already) possible in Unity 5, perhaps via command buffers?
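If command buffers do end up making this possible, I'd imagine the hookup would be something roughly like this. This is pure speculation on my part; the material that remaps the weapon's depth and the exact camera event are guesses:

using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class FirstPersonGBuffer : MonoBehaviour
{
    public Renderer weaponRenderer;         // the first person mesh
    public Material depthPartitionMaterial; // hypothetical material that squashes depth

    void OnEnable()
    {
        var buf = new CommandBuffer { name = "First person into G-buffer" };
        // Re-draw the weapon into the G-buffer after the normal geometry
        // pass, using a material that remaps its depth into the near slice.
        buf.DrawRenderer(weaponRenderer, depthPartitionMaterial);
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterGBuffer, buf);
    }
}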
That is probably the most requested "feature" from FPS developers using Unity.
I say “feature” because it’s probably something that is not engine related.
I've tried many different ways to achieve this, but never found the perfect way to do it.
---- Please note that I don't have much knowledge about creating shaders ----
The most commonly used method is just using 2 cameras with different depths, but that removes the shadows between the layers.
Another way is to have 2 layers, where the second layer duplicates every object from the first with a shadow caster shader. This isn't really optimal, because you need duplicates of everything (unless you can make a shader that renders differently in different layers), and for objects that animate or have physics (trees, leaves, etc.) it's basically impossible to keep both layers correctly synchronized.
I did manage to make this effect, but it failed on the performance side under specific circumstances.
What I did to achieve this effect was to use 2 cameras with different depths (the first contains everything except the first person objects, and the second contains everything you want to cast shadows plus the first person objects). I then rendered the second camera with a replacement shader, which switches every single shader you want into a shadow-caster-only one.
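For reference, the switch itself is just Unity's built-in replacement mechanism; something like this, where the shadow-caster-only shader is whatever replacement shader you wrote:

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class ShadowCameraSetup : MonoBehaviour
{
    public Shader shadowCasterOnly; // your replacement shader

    void Start()
    {
        // Every object's shader gets swapped for the subshader in
        // shadowCasterOnly whose RenderType tag matches its own.
        GetComponent<Camera>().SetReplacementShader(shadowCasterOnly, "RenderType");
    }
}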
I posted about this in my game’s thread:
The problems with this method are the following:
One gigantic shader (used as the replacement shader), with many subshaders for each type of shader we want to replace;
Every other shader must have a tag (or something like that, I don't remember exactly) so the replacement shader can match it with the shadow caster;
I got it to work with shadows from leaves and other alpha-based shaders, but it was way too slow and I never figured out how to fix it;
Every shader would have the same inputs, because a replacement shader requires it (I don't know if that was a big problem, but it kind of sounds like a big problem).
I also handled many little things to make it work with all the other camera-related stuff (post processing effects, etc.).
(If anyone wants this method and wants to try to make it work with good performance, I'll try to find the project I did this in and post a package with it here.)
One other method is to use the Stencil Buffer: a first pass renders the first person models into the stencil buffer before any geometry is rendered, then the scene is rendered, and then the first person models are rendered on top of all that, but only where the stencil test passes.
EDITED:
One other method is to use the Stencil Buffer: a first pass renders the first person models into the stencil buffer (and renders the first person objects normally) before any geometry is rendered, then the scene is rendered everywhere except where the first person objects marked the stencil.
I think I managed to avoid any clipping, but I don't think I got any shadows from it; I never figured out why.
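Roughly, the stencil states looked something like this (a sketch from memory, not the exact shaders from my project):

// On the first person objects, rendered before the scene:
Pass
{
    Stencil
    {
        Ref 1
        Comp Always
        Pass Replace // mark every pixel the gun covers
    }
    // ...the gun's normal shading goes here...
}

// On the scene geometry, rendered afterwards:
Pass
{
    Stencil
    {
        Ref 1
        Comp NotEqual // skip pixels the gun already claimed
    }
    // ...the scene's normal shading goes here...
}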
The usual games out there probably don't use any of these methods (obviously). They probably all use just one camera, with a separate pass for the first person character stuff.
Wouldn’t you just have the second camera draw all the layers? That way you get shadows and don’t have to duplicate any objects. The only drawback is that the objects in the first camera would get drawn twice.
There was a graphics forum somewhere where a guy from Gearbox was talking about what they did for Colonial Marines (I know, ugh, but still) as well as Borderlands.
Forward renderers of course have the luxury of just clearing the depth buffer and rendering the weapon, but a deferred game engine doesn't get that luxury. Sure, you could render your weapon in either a whole separate forward pass or even a whole separate deferred pass, but either approach screams "inefficient" and makes post processing a pain in the ass (particularly if your post processing requires depth and/or normals, for instance SSAO).
By doing this “depth partition” you basically give yourself a way to render foreground objects right into the scene, ensuring they don’t intersect scene geometry, with the benefit of having a single G-buffer for lighting, a single rendering pass, only one camera to worry about for post processing, etc.
For example, how would you guess the Battlefield games (and others) do it?
I've seen many issues with their method, but maybe those are a consequence of how they do it (going for the most efficient, performance-friendly approach).
For example, first person models are not drawn on top of transparent geometry and other types of objects.
Some examples:
When parachuting at high velocity, the player models will clip through other geometry;
If the player is too close to an object/wall, the weapon sight (transparent dot) will clip through geometry.
@Eric5h5
Which method would you use there?
How would you make sure that what's drawn on the first camera wouldn't be overdrawn by the second camera? I don't think I understand what you're suggesting there, tbh.
Just use a higher depth value for the first camera, as usual. All it is, is this:
Except you don’t disable the “gun” layer in the second camera. That way it casts shadows. It gets drawn on top of everything because it’s also drawn by the first camera.
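In script form it's just something like this (the "FirstPerson" layer name is only an example):

using UnityEngine;

public class FpsCameraRig : MonoBehaviour
{
    public Camera sceneCamera; // renders first
    public Camera gunCamera;   // renders on top

    void Start()
    {
        int gunLayer = LayerMask.NameToLayer("FirstPerson");

        // The scene camera draws everything, including the gun layer,
        // so the gun still casts shadows into the world.
        sceneCamera.depth = 0;
        sceneCamera.cullingMask = ~0; // all layers

        // The gun camera renders afterwards, clears only the depth
        // buffer, and redraws just the gun, so it lands on top.
        gunCamera.depth = 1;
        gunCamera.clearFlags = CameraClearFlags.Depth;
        gunCamera.cullingMask = 1 << gunLayer;
    }
}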
Here's a working package using the Stencil Buffer. Unfortunately I only got it to work with forward rendering. (I also didn't test everything, so I don't know if it works in every case.)
Feel free to try to improve it.
Ah, yeah I see what you’re doing. Rendering the weapon first, and then rendering the scene geometry, discarding any pixels that overlap the gun.
And the reason it only works in forward is that the deferred rendering pipeline already uses the stencil buffer internally.
Although, now something occurs to me.
In the new Standard shader, isn’t the deferred version basically just a vertex/fragment shader internally? I wonder if I couldn’t just create a modified version of the vertex shader which squashes the z-position of the vertices after multiplying with the projection matrix?
Just posting here another solution if using Stencil Buffers is out of the question.
It’s similar to stencil buffers, but only the first person objects need a custom shader, so all in all it should be less painful.
So, this time, I made a ColorMask pass before the first person objects are rendered, and changed the "Queue" to "Geometry+50". This way the scene geometry is rendered first, then a pass punches out everything where the first person objects are, and then the objects themselves are rendered.
I won’t post any images because the result is exactly the same as above. Unfortunately, it only works with Forward Rendering…
BTW, does anyone know how to change the perspective (e.g. a different FOV and camera position) of objects through surface shaders? I think I know how to do it in vertex shaders, but not in surface shaders…
EDIT:
Well, actually the shader only needs a pass with:
Pass {
    ZTest Always
}
Don’t know if it’s faster with ColorMask 0 or nothing…
EDIT2:
Also remember that this should only be used on opaque objects, not transparent ones.
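Putting it together, the first person objects' shader ends up looking something like this (the name is just an example; the lit pass is whatever the object normally uses):

Shader "Custom/FirstPersonPunchThrough"
{
    SubShader
    {
        // Render after the scene's opaque geometry.
        Tags { "Queue" = "Geometry+50" "RenderType" = "Opaque" }

        // Depth-only pre-pass: force the depth test and stamp this
        // mesh's depth over whatever the scene wrote there.
        Pass
        {
            ColorMask 0
            ZTest Always
        }

        // ...the object's normal lit pass(es) follow here...
    }
}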
Does anyone know if this would be possible to do in Unity 5? (With the new deferred renderer, new features, etc.)
It would be great to get a response from Unity, so we know whether this is in fact possible with the tools we have or whether we're just wasting our time trying to figure out how to do it.