My material needs to read from the depth texture (generated by: grabpass), and it also needs to receive shadows. This combination seems not to work?
Experimentally…
Unity refuses to process shadows for anything above queue 2500 (i.e. anything outside the Opaque queues)
Grabpass’s depth doesn’t work properly except in queues below 2500 (argh!) … it acts as though it rendered the object BEFORE doing the grabpass that it hands to that object
None of this seems to be documented in the manual, but a comment from @bgolus in an old thread suggests to me that grabpass might be incompatible with shadows:
“the depth texture used by post effects is actually a separate rendering of the entire scene using the shadowcaster pass of your shader”
So … I thought I could do this:
Remove fallback (which does shadowcasting by default)
Implement shadow collection
DO NOT implement shadow casting
Use my grabpass’s _CameraDepthTexture … and my shadow collection … and everything will work fine
But … Unity doesn’t allow shadow collection without shadow casting (ARRRGH!), and Unity’s processing of Grabpass’s _CameraDepthTexture appears to ignore MeshRenderer’s “Cast Shadows” setting (Bug??).
The grab pass does not generate a depth texture. They’re completely unrelated.
The grab pass makes a copy of the color values of the screen to a texture that can be sampled by the shader immediately afterward.
The depth texture is either generated as a pre-pass of the entire scene by rendering all opaque objects using their shadow caster pass and copying the resulting depth, or by copying the depth after the deferred gbuffers are filled.
If you’re using the deferred rendering path, you can indeed not sample the depth texture in the opaque queues, as it has not been created yet. If you’re using the forward rendering path you can access the depth texture at any time, but its contents won’t change depending on when it’s sampled like a grab pass’s will, since it’s been pre-filled with all opaque objects beforehand.
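As a side note, what you read back from that depth texture is non-linear device depth; converting it to an eye-space distance is what Unity’s LinearEyeDepth helper does. A minimal sketch of that math (assuming a conventional, non-reversed [0,1] depth buffer; reversed-Z platforms flip the mapping):

```python
def linear_eye_depth(d, near, far):
    """Convert a non-linear [0,1] device-depth sample to an eye-space
    distance. Assumes a standard (non-reversed) perspective depth buffer."""
    return near * far / (far - d * (far - near))

# d = 0 is the near plane, d = 1 is the far plane:
print(linear_eye_depth(0.0, 0.3, 1000.0))  # 0.3
print(linear_eye_depth(1.0, 0.3, 1000.0))  # ~1000.0
```

Unity packs the equivalent coefficients into _ZBufferParams so the shader only needs a multiply-add and a reciprocal.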
Also, you are correct, Unity does not support shadow receiving on objects with a queue over 2500 in the built-in rendering paths. There are some workarounds for supporting the main directional light’s shadows, but there’s no way to match up other lights’ passes with their shadow maps. The newer SRPs sort of have support for shadow receiving on transparent objects, but they also don’t have any support for grab pass at all.
Exactly :). That is why I’m trying to do this without SRP’s.
(although also: this is part of an asset I’m using across multiple projects, and SRP packaging for export to other projects and/or sharing on Asset Store looked weak last time I investigated it. Maybe fixed by now?)
I had thought that - in forward rendering - the presence/absence of the grabpass triggered Unity to fill-in the _CameraDepthTexture automatically. My mistake - I need both (depth + colors), so … probably I found that disabling one “seemed” to break the other, but only because my rendering code broke completely at that point.
Yep. For anyone else reading the thread in future, to be clear it “seems” to change contents based on queue:
Low queue (opaque) the object is in the depth texture
High queue (transparent) the object is NOT in the depth texture
…because the choice of queue is causing Unity to globally ignore (or honour) the shadow-related passes, with the side-effect that it is present or absent in the depth generation phase.
So … what’s the path towards a solution here? (or is it simply: “Unity cannot combine depth-textures with shadows”, which seems too big a limitation to be true :)).
Everything I try so far becomes a dead-end. And then I ran into an @arasp comment from a few years ago where he essentially said “yeah, [non-trivial stuff with shadows] doesn’t work, we’re going to add a feature with customisable lighting-pipelines in the future, so you can fix this yourself!” (which sounds like the thing that became SRPs :)).
The biggest hurdle right now is that Unity appears to have one or more bugs in how it creates a shadow-map. Objects that DO NOT cast shadows are being included in the _CameraDepthTexture, and Unity’s various options for removing them are all broken/disabled since Unity 5.0. I’ve tried ShadowCollector (broken/disabled); I’ve tried the Tag to force shadow-casting off (broken: it simply wipes ALL shadows, both collect and cast); I’ve tried implementing shadows but performing clip(-1) on every fragment (broken: no shadow, but the depth texture is still corrupted); etc.
There’s no way I can replace the GrabPass; whatever I do has to be compatible with the existing Unity implementation of this (because making your own Grab implementation requires multiple Camera objects in scene - which can’t be achieved purely inside the shader - or requires SRP support, which doesn’t exist yet)
I don’t care which queue I’m in. I should be in the Geometry queue, but I’m rendering large volumetric objects that sit close to the camera and effectively obscure any transparent objects. So if being in a Transparent queue made Unity’s features work, I could live with that (although see next note)
There is no way to make Unity’s shadows work CORRECTLY (i.e. for all lighting in the scene) for a shader that sits in a Transparent queue. But according to @bgolus above, you can make it work for the main directional light (what should I search for to find info on that? I haven’t seen that at all in my searching so far)
There is no way to implement shadow-mapping purely within a shader. Like with grab-passes, it requires changes to your SRP and/or using multiple cameras.
There is no way to directly access Unity’s shadowmaps and do the CORRECT calculations (instead of Unity’s half-working calculations). For instance: Unity simply isn’t indexing the shadow-map for lights correctly, it’s ?optimizing? by using its own broken/changed-since-Unity-5 logic for “is this shader a shadow-caster?”. I can see that Unity’s lookup is incorrect: if I could override that and do the lookup myself, purely for shadow-collection, then everything would be fine.
Switching to deferred path won’t help in any way: you can’t access the depth texture at all (it doesn’t exist yet) when you’re in a queue that supports shadows.
Unless there is a way to do item 5 above, it feels like it’s impossible to fix/workaround in Unity shaders. Either you need to fix the depth-texture generation code (requires multiple cameras / renders in SRP), or you need to fix the shadowing code (needs multiple cameras / renders in SRP). Surely I’m missing something?
Technically you can kind of replace the grab pass, at least in the very narrow time frame of the command buffer camera events. Basically that’s what Unity’s LWRP/URP/HDRP support with their “camera opaque texture”.
Correct. This was a choice made by Unity more than a decade ago to not pass the necessary information about a light’s shadows to transparent objects. And while it is possible to get access to each light’s shadows, and shadow matrices, there isn’t any efficient way to link those to the actual light passes on transparent objects. There are similarly old threads on the topic asking how to do it and when it would be added, with basically no answers. The only way to do it with the built in rendering paths is to not use any of Unity’s built in lighting or shadowing systems and replace them all with your own. The Valve Labs renderer did this, and others have done similar. The SRP system exists as a way to do it more easily.
As for the main directional light’s shadows, since the base pass is always the main directional light, that one you can easily match up with the correct shadow map. See this project for an example: https://github.com/Gaxil/Unity-InteriorMapping
Shadow mapping requires rendering the depth of the scene from the view of a light. You can fake a little bit with screen space shadows, but that relies on the camera depth texture and is very expensive for anything more than a few pixels of shadow casting. Or raytracing.
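To make that concrete: a shadow map is just the scene’s depth rendered from the light’s point of view, and the shadow test compares each receiver point’s light-space depth against it. A toy sketch of that comparison, with a made-up one-texel “map” and a directional light looking straight down (all names here are hypothetical, not Unity API):

```python
# Toy shadow map for a directional light looking down -Y:
# key = (x, z) texel, value = depth of the nearest occluder from the light.
LIGHT_HEIGHT = 10.0
shadow_map = {(0, 0): LIGHT_HEIGHT - 5.0}  # an occluder at y = 5 over texel (0, 0)

def in_shadow(point, bias=0.01):
    """Shadow test: is this point farther from the light than the
    stored occluder depth for its texel (plus a small bias)?"""
    x, y, z = point
    occluder_depth = shadow_map.get((round(x), round(z)), float("inf"))
    point_depth = LIGHT_HEIGHT - y  # light-space depth of the receiver
    return point_depth > occluder_depth + bias

print(in_shadow((0, 2, 0)))  # True: underneath the occluder
print(in_shadow((0, 7, 0)))  # False: above the occluder
print(in_shadow((3, 2, 3)))  # False: no occluder over this texel
```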
See my answer to 3. Basically Unity is indexing the lights and shadows internally; that index just isn’t exposed to user-land C# or shaders. Also, transparency tends to be fairly expensive to begin with, so it is a perfectly valid optimization to not cast shadows on transparent objects. Indeed many game engines still don’t do this for the same reasons. Again, the SRP system was written to solve this by writing whole new rendering paths that do all the work in public C# rather than hidden behind C++ code, with more of it being exposed over time.
I’m not entirely sure what you’re trying to do, but if you’re trying to do something that’s fully opaque, then you don’t need to sample the depth buffer or even do a grab pass to get correct results. If it’s something that’s semi-transparent or blends with the existing scene, then you kind of do need to use multiple cameras at some point, or at least some amount of manual rendering of scene objects. For example what @flogelz did for this effect:
Yeah, good question - what am I doing that meant I ran into this apparent limitation of Unity’s rendering?
In a word: water.
But … not fake, low-quality water. High-accuracy water, for everything except oceans. When you have a visible seabed/riverbed and waves are small, the rendering is very different, and it’s something I’ve not seen any solution on the Asset Store come even close to getting visually accurate enough.
A few key requirements/challenges that are relevant to the “grab + depth + shadows” situation:
Unlike most transparent materials, water actually receives shadows - volumetrically, it receives them at every depth into it until you can’t see any further.
Water is neither metal nor non-metal; it has a unique way of being colored (not directly supported by Unity’s PBR system).
…which is heavily dependent upon the water-depth (vertical) and water-raypath (camera-dependent).
And, of course: transmitted light that reflects off the seabed/riverbed
Transparent objects generally go invisible when under water, so it’s OK to discard/ignore Transparent queue materials.
The way water reflects and refracts light is more complex than can be supported by Unity’s use of Schlick’s approximation; real Fresnel equations are needed here (Unity doesn’t support them)
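On that last point, a quick comparison of Schlick’s approximation against the full unpolarized dielectric Fresnel equations (function names here are mine; n = 1.33 for water). The two agree at normal incidence, but Schlick can’t reproduce total internal reflection when looking up from under the water:

```python
import math

def fresnel_dielectric(cos_i, n1=1.0, n2=1.33):
    """Exact unpolarized Fresnel reflectance at a dielectric interface."""
    sin_t = (n1 / n2) * math.sqrt(max(0.0, 1.0 - cos_i * cos_i))
    if sin_t >= 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    r_s = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    r_p = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (r_s + r_p)

def schlick(cos_i, n1=1.0, n2=1.33):
    """Schlick's approximation, as used by most realtime PBR pipelines."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_i) ** 5

# Normal incidence: both give ~2% reflectance for air -> water.
print(fresnel_dielectric(1.0))  # ~0.0201
print(schlick(1.0))             # ~0.0201
# From underwater at 50 degrees (past the ~48.6 degree critical angle):
print(fresnel_dielectric(math.cos(math.radians(50)), n1=1.33, n2=1.0))  # 1.0
```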
One option I have is that I could stop using a grab-pass - I can fake-texture the water-bottom - but it would create some hairy problems for dealing with objects part-immersed in the water. If the grab-pass were the only problem, I’d consider going down that route, but “shadows” + “depth” are critical, and they are the real problem / incompatibility right now.
EDIT: long-term, I was planning to add an option to use shadow-volumes instead (at the cost of no longer working in a single shader), but early render tests showed that I could get a very good approximation using just shadow-mapping, and I really really wanted to make this all work within one shader.
EDIT2: I’m using custom vertex-attributes already (need to store a lot of per-vertex data for other aspects of accurate water rendering), so I can store SOME of the depth information into the meshes themselves - e.g. vertical depth - but obviously that doesn’t work for camera-angle-dependent “depth” (ray-length / eye-depth).
If I understand flogel’s work correctly, this matches my “next best alternative” plan: add a script that pre-renders the correct depth every frame and exposes it as a texture.
This would mean I could go down the route of “Use Unity’s shadowing system, render my non-shadow-casting object as a shadowcaster (yuck! But it at least works in receiving shadows), and then ignore their _CameraDepthTexture entirely (has the shadowcaster in there even though it shouldn’t), and sit in a high value Opaque queue, with grabpass still working OK”.
It’s intensely annoying that - as far as I can tell - a bug in Unity’s _CameraDepthTexture makes it a requirement for this extra script. I have many bad experiences of keeping such scripts working and integrated in larger Unity projects (mostly from 3rd party assets, but also my own) - problems with SceneView being inaccurate, problems with remembering to configure every new camera with the Magic Script ™, etc. I was really hoping that I’d misunderstood, and there was a way to get _CameraDepthTexture giving the correct values :(.
Really your problem is you’re trying to avoid scripting and trying to do everything in a single shader. There’s a reason why no high quality water shader does that, and it’s because Unity doesn’t do everything you need out of the box, and there’s a lot of stuff you can do in multiple passes to make things much more efficient for various techniques. The other problem you’re stuck on is trying to get anything but the main directional light & shadows to work. Those are really, really hard problems that even many AAA games just punt on because they’re not worth the trouble. At best they might have a handful of “hero” lights (ie: a flashlight, or a few key lights on a boat) affect the water, but usually not with any kind of shadow support.
Plus, the way Unity does shadows for the main directional light, you do not want to use Unity’s built in shadows as it uses the depth texture to cast shadows on, passing a screen space shadow mask to opaque objects, meaning you only get the surface shadows and can’t do the volumetric shadows you’re wanting to do. So the whole issue of having your water be part of the _CameraDepthTexture to receive shadows goes away if you stop trying to get Unity’s built in opaque shadow system to work.
So now what you do is:
Step 1: Copy off the directional light’s shadow maps like in the example project I posted above. Basically attach a script to the main light.
Step 2: Draw your water in the transparency queue, sampling shadows like in the project I posted above.
Step 3. There is no step 3, you’re done.
Don’t bother with non-directional lights, or add support for flagging “hero” lights with a script. You’d need special handling for them anyway as you’d need to do all lights in a single pass rather than using Unity’s multi-pass lighting system since that doesn’t play nice with transparent objects.
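The single-pass idea, reduced to its essence: rather than one additive pass per light, a single pass loops over all the lights’ parameters and accumulates their shadowed contributions. An illustrative sketch with scalar light “colors” and a diffuse term only (hypothetical data layout, nothing Unity-specific):

```python
def shade_single_pass(n_dot_l_per_light, color_per_light, shadow_per_light):
    """Accumulate every light's diffuse contribution in one pass:
    clamped N.L * light color * shadow attenuation, summed."""
    total = 0.0
    for ndl, color, shadow in zip(n_dot_l_per_light, color_per_light,
                                  shadow_per_light):
        total += max(0.0, ndl) * color * shadow
    return total

# Two lights: one fully lit, one twice as bright but half-shadowed.
print(shade_single_pass([0.8, 0.5], [1.0, 2.0], [1.0, 0.5]))  # 1.3
```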
Agreed. But it comes so close to working that I had hope!
My plan was to get it working with core situations (e.g. main directional light as you said, e.g. if possible: single shader with multiple subshaders/passes as needed) … and then do optional features at higher quality that require the stuff that makes it more difficult to integrate with a project (scripts, special project settings, a custom SRP).
As I understand it:
Shadows are disabled by default on all FORWARD_ADD passes
I can expressly turn them on by specifying multi_compile_fwdadd_fullshadows (but I haven’t tried this. Does it cause new problems?)
I’m not doing anything that requires/uses/interacts with transparent objects
Also … if it helps, I’m very happy to implement full lighting model from scratch myself (I’ve written 3D engines from scratch before, so I’ve been through most of the learning steps involved). If I have to go to multiple cameras/renders anyway, and break easy deployment, it seems there’s no reason for me to shy away from building a whole lighting model?
(ditto: I expected I’d have to write my own shadow-volume implementation eventually. I’m OK with doing that, I just wanted first to try and make something that “plays nice” with core Unity and is super simple to deploy).
But shadowmapping tends to be really fast, right? So it felt worth implementing anyway, as a fallback for performance-critical situations (and the visual effect is surprisingly good, once all the other rendering is enabled).
The difference between multi_compile_fwdadd and multi_compile_fwdadd_fullshadows is indeed the difference between supporting shadows on the forward add passes and not. However, the way Unity’s lighting system works, this just changes whether or not the variants for the various light types’ shadows are generated. This doesn’t do anything in the transparency queues, because Unity will never use those variants on transparent objects regardless of whether they exist.
You are doing stuff that requires interaction with transparent objects. The water itself is transparent, and you do not want it to be treated as opaque if you want to see the stuff underneath.
If you want the depth texture to show only the stuff that’s under the water, you should not have the water be part of the opaque queue, or not have a shadow caster pass for the water shader. If it doesn’t have a shadow caster pass, but is part of the opaque queue, any post processing that is applied only to the opaque queues (like AO) will also apply on top of the water, but since the water didn’t write to the depth texture it’ll have the wrong values and you’ll get the AO calculated for stuff under the water rendered on top of the water. The main directional light shadows for objects under the water will also be “on top” of the water unless you explicitly disable sampling the screen space shadows in the water shader or use the custom transparency shadow system.
Basically there’s a lot of assumptions Unity makes about opaque objects that aren’t true as soon as you make that object transparent, so you need to be very careful and understand the limitations that brings, or don’t do it.
It’s generally more expensive than sampling a normal color texture, especially when you’re talking about cascaded shadow maps like what Unity uses for the main directional light. And something that’s really fast done 100 times is still slow, which is the problem you get into with transparent objects. As for whether or not Unity should have added support for transparent objects receiving shadows into the built-in rendering paths: I would have loved for them to have done that, but they didn’t, and they chose instead to stop all development on the built-in rendering paths in favor of working on the SRPs. I understand that choice, as the SRPs are much more likely to be the way forward, eventually replacing the built-in rendering paths entirely. For now, and likely for the next few years, it’ll be a bit of a pain.
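To illustrate part of that cost: with cascaded shadow maps, every shadow sample first has to pick which cascade the fragment falls into before it can even fetch, on top of the comparison and filtering. A sketch of just the cascade-selection step (made-up split distances, a typical 4-cascade setup):

```python
import bisect

# Hypothetical cascade far-split distances in eye space, nearest first.
CASCADE_SPLITS = [10.0, 30.0, 80.0, 200.0]

def select_cascade(eye_depth):
    """Index of the first cascade whose far split covers this depth."""
    return bisect.bisect_left(CASCADE_SPLITS, eye_depth)

print(select_cascade(5.0))    # 0: nearest, highest-resolution cascade
print(select_cascade(50.0))   # 2
print(select_cascade(150.0))  # 3: farthest, coarsest cascade
```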
I’d need to at least itemise every part of the rendering system and figure out “do I want this to run, how will it break, and can I override/disable/correct it?”, and hope there was an answer for each of them.
Do you have a preferred reference for everything that happens in Unity’s rendering stack? You seem to have encyclopedic knowledge of it :), and I find myself flicking back and forth between many different Unity docs pages and notes just trying to keep track of how it works. It feels like there should be something like the CommandBuffers page diagram (https://docs.unity3d.com/Manual/GraphicsCommandBuffers.html) or the MonoBehaviour lifecycle page (https://docs.unity3d.com/Manual/ExecutionOrder.html).
e.g. there’s nothing in the official docs about AO and the render pipeline / shaders - there’s merely an offsite link to the postprocessing package, which HAHAHAHA links directly back to the official doc page recursively. Sob.
I fully support the idea and ideal of SRPs. But they seem to be taking the stance “maybe in 3+ years time we’ll allow people to share their SRP stuff. Until then, screw code-reuse and screw the Asset store”. Which makes it seem to me to still be little more than someone’s personal research project! (I mean, SRP’s are genuinely a great thing, and very much fit the traditional Unity mindset of “let developers customise all the things!”, but … a lot seems to be getting thrown away right now, from reading / following the list of “will never be supported” features, and the ongoing lack of response to people complaining about the unshareability of projects/code using SRPs)
To be clear: Water is rarely actually transparent. In a lot of my use-cases, a non-transparent setup would work fine! (although I obviously want to get all cases working eventually)
No site, no. It’s all from a lot of time playing with stuff myself, plus validating it with things like the frame debugger window or RenderDoc in the cases that the frame debugger lies or is broken. (And even then sometimes Nvidia Nsight for cases where RenderDoc fails.)
In the Geometry queue, this works as expected: With no shadowcaster, I get simultaneous depth info + accurate shadow-receiving for the main directional light in scene. (which is pretty much what I expected Unity to do with shadow-collectors: sample the shadow map(s) correctly at the 3d point!).
I think this is the closest we can get to solving the original thread (combine _CameraDepthTexture with Unity’s shadowmap) - it only works for one light, and it requires a C# script be added to that light, but I don’t see how we could get closer.
The code from the InteriorMapping project looks good, makes sense. Ultimately you only need one line from the main shader, since the shadow-map reading is neatly sectioned off in its own cginc (Shadows.cginc). You just need to do:
“GetSunShadowsAttenuation_PCF5x5(shadowPosition, i.screenPos.z, 0).x; // Access shadow map” (along with bgolus’s other points, i.e. attach a script to the main scene light and capture its shadows into a globaltexture)
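For context, the PCF5x5 in that helper’s name is percentage-closer filtering: do the binary depth comparison at each of 5×5 neighboring shadow-map texels and average the results, giving a soft attenuation in [0,1] instead of a hard shadow edge. A toy version on a plain 2D depth array (hypothetical code, borders clamped):

```python
def pcf_5x5(shadow_map, x, y, receiver_depth, bias=0.01):
    """Average a 5x5 neighborhood of binary shadow tests.
    Returns attenuation in [0,1]: 1 = fully lit, 0 = fully shadowed."""
    lit = 0
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            sy = min(max(y + dy, 0), len(shadow_map) - 1)
            sx = min(max(x + dx, 0), len(shadow_map[0]) - 1)
            if receiver_depth <= shadow_map[sy][sx] + bias:
                lit += 1
    return lit / 25.0

# An 8x8 map where the left half occludes (depth 0.4) and the right doesn't:
shadow_map = [[0.4] * 4 + [1.0] * 4 for _ in range(8)]
print(pcf_5x5(shadow_map, 1, 4, 0.7))  # 0.0: deep inside the occluded half
print(pcf_5x5(shadow_map, 6, 4, 0.7))  # 1.0: fully lit
print(pcf_5x5(shadow_map, 4, 4, 0.7))  # 0.6: straddling the shadow edge
```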
In Transparent queue this is now failing for me (no other change), the depth texture goes to solid 0, but I’ve not investigated why yet. Probably something I changed accidentally in passing, but it’s strange that in the Geometry queue everything now works as expected. Once I’ve played with it a bit more I’ll post a followup.
Are multiple lights possible with this approach? I guess so, but too tired to try it now :).
How many things will this break if it stays in the Geometry queue? I might try it and see, just for fun.
PS: bgolus - do you have a patreon or something where we can give you more concrete thanks for your tireless help on shaders?
Yep, Jasper/CLC’s work on breaking down the rendering stack has been invaluable to me in the past. I highly recommend it, especially for worked examples on what’s going on under the hood!
EDIT: e.g. the one on Unity’s shadows (Rendering 7) has a lot of useful info that’s tricky to find in the main docs
But, again, it’s piecemeal: there’s no single line-by-line explanation, and for a lot of stuff you have to read the whole doc to find one item. That’s appropriate for his work, because he’s specifically making “follow along with me” tutorials that are great at teaching. But I think we could all benefit from also having an overall reference, which Unity doesn’t seem to have.
I do just scan and copy-paste notes into a single “cache file”, I guess I should organize that and put it online then lol
I need to do it for the scriptable render pipeline tutorial to break down where all the stuff goes, so I can maybe write a tool that makes adding stuff into the various places faster.