I have two cameras: A for the main 3D scene, B for rendering GUIs behind the main 3D scene. The reason for using two cameras is that GUITexture and GUIText are always drawn on top of the main scene.
For A, I set Clear Flags to Depth Only, Depth to -1, and the Culling Mask to the main 3D scene layers.
For B, I set Clear Flags to Solid Color, Depth to -3, the Culling Mask to the layer I defined for the GUIs, and Rendering Path to Forward.
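In script form the setup is roughly this (just a sketch of what I described above; "GUIBackground" is only the name I use for the GUI layer, and the cameras would be assigned in the Inspector):

```csharp
using UnityEngine;

// Rough equivalent of the two-camera setup described above.
// "mainCamera", "guiCamera" and the layer name "GUIBackground" are my own names.
public class TwoCameraSetup : MonoBehaviour
{
    public Camera mainCamera; // camera A: the main 3D scene
    public Camera guiCamera;  // camera B: GUIs rendered behind the scene

    void Start()
    {
        // Camera A: clears depth only, so camera B's colour output should remain visible.
        mainCamera.clearFlags = CameraClearFlags.Depth;             // "Depth Only"
        mainCamera.depth = -1;                                      // renders after B
        mainCamera.cullingMask = ~(1 << LayerMask.NameToLayer("GUIBackground"));
        mainCamera.renderingPath = RenderingPath.DeferredLighting;  // works if set to Forward

        // Camera B: renders the background GUIs first.
        guiCamera.clearFlags = CameraClearFlags.SolidColor;
        guiCamera.depth = -3;                                       // renders before A
        guiCamera.cullingMask = 1 << LayerMask.NameToLayer("GUIBackground");
        guiCamera.renderingPath = RenderingPath.Forward;
    }
}
```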
The problem is that if I set A's rendering path to Forward, everything works fine. If I set it to Deferred Lighting, it seems I can only see the 3D scene: every pixel without a depth value becomes black, and I cannot see the GUIs behind. It seems Depth Only clearing does not work in Deferred Lighting mode. Is that a bug or something?
If I turn off camera A, I can see the GUIs rendered by camera B. So I guess camera A's contents simply occlude camera B's contents because the Depth Only clear flag does not work on deferred lighting cameras.
You can try. The reason I haven't responded (can't speak for anyone else) is that I can't really tell what's wrong, so I have nothing useful to add except that I haven't noticed this myself.
Thanks. Has anybody run into similar problems before? Is this really a bug, i.e. does Unity have problems with depth rendering in Deferred Lighting mode? I'm using 3.2.0f4 Pro.
Hey azuretttc,
the problem you’re describing is a known issue. It is caused by the fact that the final pass of prepass rendering (the pass that renders the geometry a second time and applies the result of the lighting pass to it) is rendered into a render texture. That texture is then simply blit-copied into the back buffer, overwriting everything that was rendered there before.
Fixing that problem requires some changes in other subsystems, so it won’t be a quick one, but it has been placed on our roadmap.
A workaround should be possible: have the second camera (the deferred one) render into a render texture, and handle compositing it with the first camera’s output yourself.
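Roughly, something like the following (an untested sketch; "compositeMaterial" is a placeholder for a blend shader you would write yourself to mask out the pixels the deferred camera never touched):

```csharp
using UnityEngine;

// Sketch of the workaround: the deferred camera renders into a RenderTexture,
// and we composite it over the forward GUI camera's image ourselves.
public class DeferredComposite : MonoBehaviour
{
    public Camera deferredCamera;       // camera A (deferred lighting)
    public Material compositeMaterial;  // custom blend shader, supplied by you
    RenderTexture deferredRT;

    void Start()
    {
        deferredRT = new RenderTexture(Screen.width, Screen.height, 24);
        deferredCamera.targetTexture = deferredRT;  // A no longer writes to the back buffer
    }

    // Attach this script to the forward GUI camera; after it has rendered,
    // blend the deferred result on top of it.
    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        Graphics.Blit(src, dst);                            // GUI layer first
        Graphics.Blit(deferredRT, dst, compositeMaterial);  // deferred scene on top
    }
}
```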
I know this thread was started back in 2011, but it appears to still be an issue. I have exactly the same problem: two cameras set up as deferred, with the ‘top’ layered camera set to Depth Only. In Forward rendering I get the composited result I’d expect, but in Deferred I only see the top layer’s result.
As this was raised in 2011, do you have any idea when it will be resolved? Failing that, I will look at going down the render texture route.
Try layering your cameras. Say, for example, you use a UI/GUI camera to render: you need a new layer for the UI render, with the correct depth, so it overlays properly. If both cameras are on the Default layer, the new camera will always take pole position.
Hey ShadowK, the issue is not with the ordering of my render ‘layers’. With all my cameras set to Forward rendering I get exactly the result I expect by layering the cameras’ depths to overlay and ‘see through’ the top-level ones.
The issue is that the depth ordering does not work in deferred rendering mode, or more specifically, the ‘transparent/empty pixels’ between depths.
Sadly I am not layering for the GUI, otherwise I could just set the GUI camera to Forward rendering. Instead, my actual 3D view is made up of a couple of cameras layered on top of each other to get a very specific effect. Ideally I’d like to use deferred rendering to achieve the visual identity I am after, but unless I can solve this I will have to fall back to Forward.
Not really, and it could be a bug as Kuba says; as I understand it, lower-depth cameras render first. There is a specific issue I’ve come across (I use deferred) that seems to completely ignore what depth you have: with Daikon Forge for UI you have to explicitly layer the UI, or it renders over the main camera even though its depth is higher.
Not sure how applicable it is to what you’re trying to do, but I thought it worth a mention.
It is unlikely that they are going to address this issue, because with 4.6 (currently in beta) you will be able to do this thanks to the new GUI solution.
Is this issue going to be solved for the final release of Unity 5?
With 4.6 everything stays the same, and to my dismay the same is true for the Unity 5 beta.
This is a really crippling issue, the kind of problem you simply can’t work around, and it’s hard to believe that it was first reported at least four years ago and is still here.
Now that Unity 5 is going to put Unity very close to UDK/UE4 as far as graphical fidelity goes, it would be really, really frustrating not to be able to do something as simple as using two deferred cameras.
Please, Unity team, tell me this is on the short-term TODO list.
“Placed on our roadmap” in 2011. Unity 5 just came out, and it’s still a bug… I’m going to be very upset if I can’t use deferred lighting simply because I have two cameras with different depths that both need to use it.
I downloaded Unity 5 yesterday and tested the new deferred system, and it seems to work fine with this setup. That said, I still have to test it in our huge application.
I’m yet to test it. In the meantime I had to resort to a workaround that consisted of using a very low camera near plane, with the guns, hand models and main game all rendered by the same camera. It broke a couple of image effects due to the low near range, forced us to use a bigger collision box for the player, and forced us to make one additional animation for each weapon type.
Regardless, I’m REALLY happy to know it’s all working as it should now.
Bumping this thread because I’m having the same issue with the latest Unity version, 5.2.3.
It only works when HDR is disabled; it does not work when HDR is enabled on either of the cameras (it doesn’t matter which one).
So this bug is unfortunately not completely solved, or is there a reason why it won’t work with HDR enabled?
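For now I’m just forcing HDR off until the compositing works with it enabled. A minimal sketch of that stopgap (assuming Unity 5.x, where the property is Camera.hdr):

```csharp
using UnityEngine;

// Stopgap: disable HDR on every camera so the depth-only compositing
// between the two cameras keeps working.
public class DisableHdrWorkaround : MonoBehaviour
{
    void Start()
    {
        foreach (Camera cam in Camera.allCameras)
            cam.hdr = false;  // Camera.hdr in Unity 5.x (later renamed allowHDR)
    }
}
```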