I am not a graphics engineer, but I want to understand how Unity performs rendering and what we can expect in the future, especially in light of the upcoming render pipelines, so I want to sum up my questions here. If there are any resources explaining this in detail, I would be thankful; so far I couldn't find answers to these questions.
Isn't it possible to render certain things only once per frame instead of once per camera? Shadow maps in particular: a spotlight's shadow map, for instance, is not camera dependent and should be computed once, not twice, in VR. What is the current state in Unity here?
This is especially critical with a single directional light and shadow cascades: the cascades are fitted to the view frustum, but the two eyes are so close together that they could plausibly share one cascade set.
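To make it concrete, here is a minimal sketch of what I imagine, assuming the Scriptable Render Pipeline API of recent Unity versions (the shadow helper is hypothetical, just standing in for whatever fills the shadow atlas):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch only: render view-independent shadow maps once per frame and reuse
// them for every camera (and both eyes), instead of re-rendering per camera.
public class SharedShadowPipeline : RenderPipeline
{
    int lastShadowFrame = -1;

    protected override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
        // Only refill the shadow atlas the first time we are called this frame.
        if (lastShadowFrame != Time.frameCount)
        {
            lastShadowFrame = Time.frameCount;
            // RenderViewIndependentShadows(context); // hypothetical helper
        }

        foreach (var camera in cameras)
        {
            context.SetupCameraProperties(camera);
            // ... cull and draw as usual, sampling the shared shadow atlas ...
        }
        context.Submit();
    }
}
```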
Culling could also be performed just once, using a single frustum that encloses both eyes. Is this done already?
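For illustration, this is roughly how I picture building one conservative set of culling planes from the two eye matrices (my own sketch, assuming the eyes differ mainly by a horizontal offset; none of this is an existing Unity feature):

```csharp
using UnityEngine;

public static class CombinedFrustum
{
    // Builds a single set of culling planes that encloses both eye frustums,
    // so visibility can be tested once instead of once per eye.
    public static Plane[] Build(Camera cam)
    {
        Matrix4x4 leftVP = cam.GetStereoProjectionMatrix(Camera.StereoscopicEye.Left)
                         * cam.GetStereoViewMatrix(Camera.StereoscopicEye.Left);
        Matrix4x4 rightVP = cam.GetStereoProjectionMatrix(Camera.StereoscopicEye.Right)
                          * cam.GetStereoViewMatrix(Camera.StereoscopicEye.Right);

        Plane[] left = GeometryUtility.CalculateFrustumPlanes(leftVP);
        Plane[] right = GeometryUtility.CalculateFrustumPlanes(rightVP);

        // Plane order: [0] left, [1] right, [2] bottom, [3] top, [4] near, [5] far.
        // Take the outer side plane from each eye and share the rest, which
        // gives a slightly conservative union of the two eye frustums.
        return new[] { left[0], right[1], left[2], left[3], left[4], left[5] };
    }
}
```

Each renderer's bounds could then be tested once with GeometryUtility.TestPlanesAABB instead of once per eye.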
If any of that is not the case, I wonder whether there will be a pipeline that optimizes these things for VR, or whether we could "simply" make our own, e.g. the community comes up with one.
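As far as I understand, the new pipelines would let us wire something like the sketches above in ourselves; assuming the recent SRP API, roughly:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: an asset the Graphics settings can point to, so a custom
// VR-oriented pipeline (e.g. the SharedShadowPipeline above) gets used.
[CreateAssetMenu(menuName = "Rendering/VR Optimized Pipeline")]
public class VROptimizedPipelineAsset : RenderPipelineAsset
{
    protected override RenderPipeline CreatePipeline()
    {
        return new SharedShadowPipeline();
    }
}
```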
The thing is just that we always run into issues with VR: we have to render realistic environments and often end up sacrificing things like anti-aliasing for other graphical features such as crisper shadows, even though I would actually like to render at 2x or 3x resolution to get a crisper image.
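For reference, the per-eye render scale is already exposed, e.g. via XRSettings.eyeTextureResolutionScale in recent versions; what I would like is the performance headroom to actually afford something like this:

```csharp
using UnityEngine;
using UnityEngine.XR;

// Render each eye at a higher resolution and let the compositor downsample:
// effectively supersampling, but the GPU cost grows with the square of the
// scale (2x scale is roughly 4x the fill rate).
public class EyeResolutionScale : MonoBehaviour
{
    [Range(0.5f, 2f)] public float scale = 1.5f;

    void Start()
    {
        XRSettings.eyeTextureResolutionScale = scale;
    }
}
```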
I want to know whether we are already at the limit here or whether there is still room for optimization.
Is there anything else that you think could be improved for VR rendering?