That might help. Is it possible to clear the depth buffer after these objects have been rendered? Otherwise it will only work without z sorting I guess?
We actually have a package sample showcasing a 3D skybox setup. This uses camera stacking to achieve it plus some additional setup, it is worth a look.
A more complex setup that avoids a second camera can be achieved with some custom passes and stencil overrides in the UniversalRenderer. This way you only pay for culling once and can share shadow data and frame setup; it requires more finesse, but the payoff is a much more performant and cleaner approach.
Something like this:
URP Opaque pass executes as usual but ignores a '3D_Skybox' layer and writes a stencil value
Inject a pass that only draws the opaque '3D_Skybox' layer, provides a different camera matrix (or whatever), and stencil tests to avoid overdraw
URP Skybox draws
Inject a pass that only draws the transparent '3D_Skybox' layer, providing a different camera matrix (or whatever)
URP Transparent pass draws over the 3D skybox transparents
The only thing here is that if you need a correct depth buffer you have to do some trickery, like how the Source engine squishes its 3D skybox. I'm currently working on a sample that will show this use case, as it's a good example of some more advanced rendering customisations, and also a great way to make VR/mobile worlds look large and full of life without actually being large and a pain to render efficiently.
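For anyone who wants to try this before the sample lands, here is a rough, unofficial sketch of the injected opaque pass from the steps above. The layer name, stencil value and skybox-origin handling are assumptions for illustration, and the simple matrix override used here is exactly the part that needs extra care in stereo, as discussed further down the thread.

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Rough sketch of the injected opaque '3D_Skybox' pass from the steps above.
class SkyboxOpaquePass : ScriptableRenderPass
{
    static readonly ShaderTagId k_ForwardTag = new ShaderTagId("UniversalForward");

    FilteringSettings m_Filtering;
    RenderStateBlock m_StencilBlock;
    readonly Vector3 m_SkyboxOrigin;

    public SkyboxOpaquePass(LayerMask skyboxLayer, Vector3 skyboxOrigin)
    {
        renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
        m_SkyboxOrigin = skyboxOrigin;
        m_Filtering = new FilteringSettings(RenderQueueRange.opaque, skyboxLayer);

        // Only draw where the main opaques did not write stencil reference 1.
        var stencil = StencilState.defaultValue;
        stencil.enabled = true;
        stencil.SetCompareFunction(CompareFunction.NotEqual);
        stencil.SetPassOperation(StencilOp.Keep);
        m_StencilBlock = new RenderStateBlock(RenderStateMask.Stencil)
        {
            stencilState = stencil,
            stencilReference = 1
        };
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        var cmd = CommandBufferPool.Get("3D Skybox Opaques");

        // Hypothetical matrix override: render the skybox layer as if it were
        // authored around m_SkyboxOrigin. This plain override is what breaks
        // in stereo, as discussed later in the thread.
        var skyboxView = renderingData.cameraData.GetViewMatrix() * Matrix4x4.Translate(m_SkyboxOrigin);
        cmd.SetViewProjectionMatrices(skyboxView, renderingData.cameraData.GetProjectionMatrix());
        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();

        var drawing = CreateDrawingSettings(k_ForwardTag, ref renderingData, SortingCriteria.CommonOpaque);
        context.DrawRenderers(renderingData.cullResults, ref drawing, ref m_Filtering, ref m_StencilBlock);

        // Restore the regular camera matrices for the passes that follow.
        cmd.SetViewProjectionMatrices(renderingData.cameraData.GetViewMatrix(), renderingData.cameraData.GetProjectionMatrix());
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}

A second pass at RenderPassEvent.BeforeRenderingTransparents with RenderQueueRange.transparent would cover the transparent skybox step, and both would be added to the renderer from a ScriptableRendererFeature.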
Thanks, the custom pass solution seems to be what I was looking for
The 3D skybox setup from the sample (+additional transparency camera) is what I am using atm.
But it causes too many problems (e.g. camera stacking not working on Quest).
Any guess when the sample will be released?
Just wondering if I should wait or start my own implementation now.
If you know your way around, I would definitely start your own. The sample will be embedded within the Boat Attack project, which is undergoing some extensive updates at the moment; that could take quite some time, and the timeline for the different parts is not clearly defined.
Probably looking at sometime early next year, so I think you will be better off having a go yourself with the provided information.
I got it to work with multiple render features.
It works nicely with the added bonus of rendering correctly in scene view too.
I have trouble getting it to work properly in VR though.
The multiple render layers and stencil buffer work but I am not sure how to properly set custom position and clip planes for the VR matrices.
It looks like the standard implementation of the RenderObjectsFeature didn't even bother to implement it, as it just logs this message when in VR mode:
"RenderObjects pass is configured to override camera matrices. While rendering in stereo camera matrices cannot be overridden."
My attempt at setting the matrices seems to be wrong as it makes the background vanish:
I also tried calling cameraData.xr.UpdateGPUViewAndProjectionMatrices(cmd, ref cameraData, false); which should just update the matrices with correct data but it also comes out wrong.
Any clues what I am doing wrong here?
Does it even make sense to adjust the existing matrices or would it be better to reconstruct the matrices from scratch?
The code that sets up the XR stereo matrices looks correct to me. That should update the UnityStereoViewBuffer using the passed-in view & proj.
Checking out the code snippet, it looks like the viewMatrices are derived from cameraProjectionMatrices; maybe cameraData.GetViewMatrix should be used here?
You are right @ThomasZeng. I made a copy-paste error there.
But the problem remains.
I cannot get the correct stereo view and projection matrices after calling SetViewAndProjectionMatrices, because it just adds commands to the command buffer instead of updating the matrices immediately. UpdateGPUViewAndProjectionMatrices also just grabs the current (unmodified) matrices and applies them.
Is there any way to create the proper stereo view and projection matrices from the mono versions?
There does not seem to be any exposed unity method to do so. The only ones I can find lead to closed C++ code.
I would really love to look inside Camera.GetStereoViewMatrix and Camera.GetStereoNonJitteredProjectionMatrix and see how they are implemented.
My current implementation for recreating the view matrix is pretty naive and only works if the camera position is at 0:
public static Matrix4x4 CreateStereoViewMatrix(Matrix4x4 view, float stereoSeparation, int eyeIndex)
{
    var stereoViewMatrix = view;
    // [12] is m03, the x component of the view matrix translation:
    // shift it by half the eye separation (-x for eye 0, +x for eye 1).
    stereoViewMatrix[12] += stereoSeparation * 0.5f * (eyeIndex * 2f - 1f);
    return stereoViewMatrix;
}
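For reference, the same kind of eye offset can be written as an explicit head-space translation composed onto the mono view matrix. This is only a sketch: the eye index convention (0 = left eye at -x) is an assumption, so flip the sign if yours differs.

public static Matrix4x4 CreateStereoViewMatrixAlt(Matrix4x4 view, float stereoSeparation, int eyeIndex)
{
    // Eye offset in head/camera-local space: -x for eye 0 (left), +x for eye 1 (right).
    var eyeOffset = new Vector3(stereoSeparation * 0.5f * (eyeIndex * 2f - 1f), 0f, 0f);
    // world -> eye = (head -> eye) * (world -> head); pre-multiplying keeps the
    // offset in head space rather than world space.
    return Matrix4x4.Translate(-eyeOffset) * view;
}

The per-eye projection matrices still have to come from the XR system, since real HMDs use asymmetric (and sometimes canted) projections that cannot be derived from the mono matrix.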
Hi! From what I understand, it means all environment shaders need to be customized to add stencil support, right? Any way to avoid this? Also, why doesn't default URP have a stencil option? Does this option have a significant cost on shader performance?
I thought about setting a skybox shader directly with a stencil option (to avoid changing all environment shaders) and tried that with no luck. I guess it could not work since the URP skybox draws after opaques, right? The last solution would be a gigantic reversed cube/sphere to act as a stencil mask, but it's probably not the most efficient or practical.
I'm not an expert in rendering, but I wonder why we need a stencil at all. Is there no solution to draw the 3D skybox meshes first somehow?
No, you don't need to set this up in the individual shaders.
Render features (RenderObjects in this case) allow you to render specific layers with specific parameters such as depth and/or stencil buffer settings.
You just need to set up a RenderObjects render feature in your URP Renderer Data Scriptable Object and configure it to render one layer while writing to the stencil buffer, and then another render feature rendering another layer while comparing against the stencil buffer.
You will probably end up using more than two render features, as you will have to render transparency separately, etc.
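For example (purely illustrative values; the field names are the ones exposed on the Renderer asset and the RenderObjects feature):
Renderer asset: remove the 3D_Skybox layer from the default Opaque/Transparent Layer Masks and enable the Stencil override (Value 1, Compare Function Always, Pass Replace) so everything the default passes draw marks the stencil.
RenderObjects "Skybox Opaque": Event AfterRenderingOpaques, Queue Opaque, Layer Mask 3D_Skybox, Stencil override with Value 1, Compare Function Not Equal, Pass Keep, so it only fills pixels the main scene left empty.
RenderObjects "Skybox Transparent": the same idea with Queue Transparent and Event BeforeRenderingTransparents.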
Oohhh I understand now!
After some tries I finally found a configuration that works: I used the default override to set all the default layers to use a stencil, then did the operation on the 3D skybox layer in the renderer feature, and it worked.
Now the only issues remaining are the 3D skybox disappearing when I look around (I think it's related to frustum culling, as it must use the player camera and not the matrix I use) and specular lighting or something else that changes relative to player position.
PS: Did you figure out the StereoViewMatrix? Because I think I resolved this on my side by using another camera to get its viewMatrix (disabled so it doesn't take any resources).
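In case it helps anyone later, a minimal sketch of that disabled-helper-camera idea might look like the following. The component and field names are made up, and it assumes Camera.GetStereoViewMatrix returns usable per-eye matrices for the disabled camera, which is what the post above reports.

using UnityEngine;

public class StereoViewMatrixProvider : MonoBehaviour
{
    // Hypothetical helper camera, kept disabled so it never renders on its own,
    // positioned wherever the 3D skybox should be viewed from.
    [SerializeField] Camera helperCamera;

    public Matrix4x4 GetViewMatrix(Camera.StereoscopicEye eye)
    {
        return helperCamera.GetStereoViewMatrix(eye);
    }

    public Matrix4x4 GetProjectionMatrix(Camera.StereoscopicEye eye)
    {
        return helperCamera.GetStereoProjectionMatrix(eye);
    }
}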
Apologies for the semi-necro, but I’d also be interested to know any details regarding StereoViewMatrices and how the workarounds described here are implemented. I’ve written a few shaders before (and a bunch more shadergraphs), but graphics isn’t exactly my strongest area of expertise.
I have exactly the same problem as OP in that I need a 3D skybox and can't afford camera stacking, probably in part because I'm stuck with a tile-based renderer. I implemented the render feature strategy instead and the results are promising, but I'm hung up on getting it to behave correctly stereoscopically. It would be useful if I could modify the skybox FoV, but I don't desperately need to. It would be enough to ensure the skybox always renders behind the main scene. That works in my current implementation, just not in stereo. Instead it renders once and is just placed in both eyes. Anyone have any tips on how I might accomplish that? Do I need to re-implement Render Objects from scratch and build stereo support in?
Camera stacking seems a bit overkill, wouldn’t it be mostly a matter of assigning a background render queue to an object?
Or would one need a custom shader in order to still avoid writing to the Z buffer?
Because if the Z buffer is shared between background and non-background objects, the background object would actually have to be physically behind everything else, and that would cause issues with the camera view frustum, since it has to be closer than the far plane (and this can be tricky to get right).
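As a tiny illustration of the render-queue idea (a sketch only; as the post says, it does not by itself solve the shared depth buffer problem):

using UnityEngine;
using UnityEngine.Rendering;

public class BackgroundQueue : MonoBehaviour
{
    void Start()
    {
        // Draw this renderer's material before the rest of the scene
        // (RenderQueue.Background = 1000, ahead of Geometry = 2000).
        GetComponent<Renderer>().material.renderQueue = (int)RenderQueue.Background;
        // Depth writes are still controlled by the shader, so without a
        // ZWrite Off variant the object keeps occluding / being occluded as usual.
    }
}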