So I have a space game with the Solar System at real scale. I need to render very distant objects, like the other planets of the Solar System. For example, here I render the Moon using a Draw Renderers Custom Pass with the Before Rendering injection point:
This works great because the Moon is relatively close to the Earth (300 000 km). But the other planets are far, far away. I tried using a different scale for them, like 1/60000, but then the Moon renders in front of the Earth. I can't use a second camera, because that is very expensive. So I need a way to render these objects in a Custom Pass even when they are beyond the camera's far clip plane. Is this possible with a Custom Pass? Could you please point me in the right direction?
Right now I don't think it's possible without manually hacking the camera matrices to change the camera position when rendering the custom pass (which is what I do here: GitHub - alelievr/HDRP-Custom-Passes: A bunch of custom passes made for HDRP, though it only works in the After Post Process injection point right now).
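The core of that hack is just overriding the matrices on the command buffer before you draw. A minimal sketch, assuming the current Execute signature (the disabled `distantView` Camera here is only a matrix provider, it never renders itself, so it doesn't have the cost of a real second camera):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

class DistantViewPass : CustomPass
{
    // Disabled Camera placed at the distant viewpoint with suitable clip
    // planes; we only read its matrices, it is never rendered itself.
    public Camera distantView;

    protected override void Execute(ScriptableRenderContext renderContext,
        CommandBuffer cmd, HDCamera hdCamera, CullingResults cullingResult)
    {
        // Make all subsequent draws in this pass use the distant viewpoint.
        cmd.SetViewProjectionMatrices(distantView.worldToCameraMatrix,
                                      distantView.projectionMatrix);

        // ... draw the distant renderers here ...

        // Restore the matrices of the camera actually being rendered.
        cmd.SetViewProjectionMatrices(hdCamera.camera.worldToCameraMatrix,
                                      hdCamera.camera.projectionMatrix);
    }
}
```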
We're working on a more complete custom pass API that will let you render objects from any point of view in your scene during the rendering of your camera. I think this can solve your issue, though it may not be optimal: rendering thousands of these objects will be costly.
Hey @antoinel_unity, I recently updated to Unity 2020.2 and HDRP 10.
I saw the new utility functions like CustomPassUtils.RenderFromCamera and looked at the FPSForeground sample. But I still don't quite understand how to render anything beyond the far clip plane of the first camera.
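Here is roughly what I'm doing right now (a sketch, assuming I understood the RenderFromCamera overload that takes explicit target color/depth buffers; `distantCamera` and `planetLayer` are my own names):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

class PlanetsPass : CustomPass
{
    // Disabled camera with clip planes big enough for the planets;
    // RenderFromCamera only borrows its transform and projection.
    public Camera distantCamera;
    public LayerMask planetLayer;

    protected override void Execute(CustomPassContext ctx)
    {
        // Render the planet layer from distantCamera's point of view
        // into the custom color and depth buffers.
        CustomPassUtils.RenderFromCamera(ctx, distantCamera,
            ctx.customColorBuffer.Value, ctx.customDepthBuffer.Value,
            ClearFlag.All, planetLayer);
    }
}
```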
My results so far look like this:
Here is the first camera (clip planes 0.1 - 15)
Thanks for the advice. Unfortunately, it won't solve the original problem with the planets, because the distances between them are huge. With a large far clip plane and a small near clip plane at the same time, there will be depth buffer precision issues (e.g. with a 0.1 near plane and an interplanetary far plane, almost all depth precision is spent right next to the camera), plus other problems like missing cascaded shadows.
OK, I think I solved it. First, I render the objects from the other camera into the custom color/depth buffers using the pass above. Then I copy the data from the custom color buffer into the camera color buffer with a second pass using a LEqual ZTest; at the same time I write depth values just inside the far clip plane into the camera depth buffer for those pixels (sketched below).
Obviously, this will only work at the Before Rendering and After Opaque injection points.
I don’t know if it is the best solution, but at least it is working.
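For anyone trying to reproduce it, here is a rough sketch of the C# side of the second pass (the fullscreen shader itself is not shown, only described in the comments; the `_DistantColor`/`_DistantDepth` property names and the material are my own):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

class CompositeDistantPass : CustomPass
{
    // Fullscreen material. Its shader should sample _DistantColor/_DistantDepth,
    // use "ZTest LEqual" and "ZWrite On", discard pixels the distant pass
    // didn't touch, and output an SV_Depth value just inside the far plane
    // so the planets stay behind every regular object in the scene.
    public Material compositeMaterial;

    protected override void Execute(CustomPassContext ctx)
    {
        // Expose the custom buffers to the shader.
        ctx.propertyBlock.SetTexture("_DistantColor", ctx.customColorBuffer.Value);
        ctx.propertyBlock.SetTexture("_DistantDepth", ctx.customDepthBuffer.Value);

        // Fullscreen draw into the camera color + depth buffers; the ZTest
        // in the shader keeps already-drawn opaque geometry in front.
        CoreUtils.SetRenderTarget(ctx.cmd, ctx.cameraColorBuffer, ctx.cameraDepthBuffer);
        CoreUtils.DrawFullScreen(ctx.cmd, compositeMaterial, ctx.propertyBlock);
    }
}
```

Note that HDRP uses a reversed-Z depth buffer, so "just inside the far plane" means writing a depth value slightly above 0.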
This looks like a good solution to a similar situation I’m trying to address. I understand how the first pass copies the secondary camera’s color/depth buffers to the respective custom buffers. But I’m a bit new to SRP/HDRP, and am stuck trying to figure out how to replicate the function of the second pass.
I can’t find any really useful examples of combining the custom color/depth buffers with an existing camera’s buffers. Does anyone know any basic tutorials or docs that explain this?