In forward rendering with 2017.2, I have a few thousand objects with instancing enabled, but in the frame debugger the DepthNormals pass renders each object in a separate draw call. I'm using the built-in depth+normals shader. Is this normal?
I ran into this issue a while ago and reported the following bug:
Unity Technologies replied with:
The case was closed after that. I don't expect them to fix or implement it.
Alright, thanks! We are going to switch over to deferred rendering anyway, but I was still curious. I understand Unity needs to deprecate certain things, but many post-processing plugins use that camera pass for forward rendering, and the issue seems easily solvable on their end.
PS: while computing normals from depth would work, at one point I was using a custom depth+normal shader that sampled the material's normal map to improve the results, and that information cannot be recovered from a plain depth map.
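To illustrate the point about depth-derived normals, here is a minimal NumPy sketch (not Unity's actual implementation) of reconstructing normals from a depth map by finite differences. On a flat tilted plane every reconstructed normal comes out identical, which shows why per-pixel detail from a normal map cannot be recovered this way:

```python
import numpy as np

def normals_from_depth(depth):
    """Reconstruct per-pixel normals from a depth map via finite differences.
    Illustrative only: a real shader would work in view space with the
    camera projection taken into account."""
    dz_dy, dz_dx = np.gradient(depth)
    # Normal is the cross product of the surface tangents
    # (1, 0, dz_dx) x (0, 1, dz_dy), then normalized.
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

# A flat, tilted plane: depth increases linearly along x.
ramp = np.linspace(0.0, 1.0, 8)
plane = np.tile(ramp, (8, 1))
normals = normals_from_depth(plane)
# Every pixel gets the same constant normal -- any bumpy detail a
# normal map would add in a true depth+normals pass is lost here.
```

Geometry-level slopes survive, but the high-frequency shading detail a normal map encodes does not, since it never perturbs the depth values.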
Sorry for necro-posting, but I just have to say: this was not the best response from Unity on the issue. Reason: Ambient Occlusion in Unity's Post-Processing Stack automatically enables the Depth&Normals texture on the main camera, resulting in a huge performance loss (it happens only in "Scalable Ambient Obscurance" mode, but the other available mode just isn't useful for my project). In other words, it's not just some custom shader usage; it's a very commonly used effect in the default render pipeline. And calling it "legacy" is bold, especially in 2017, considering that the "non-legacy" renderers don't seem to be ready yet in 2019.
Man, what a sad way to end my search for how to fix this issue.