Hello everybody.
In the project I am working on, I render textures with path tracing: both simple "2D" images and 360° cubemap images. Because Unity's camera.RenderToCubemap() doesn't work with path tracing (it doesn't wait for the rays to be accumulated into the final image), I wrote my own method: I rotate the camera, render each of the 6 faces from its direction, and then combine the 6 images into the final cubemap.
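Roughly, the single-camera version looks like the sketch below (simplified; WaitForConvergence() is just a placeholder for however you wait until the path tracer has finished accumulating, and the per-face up vectors / flips may need adjusting to match Unity's cubemap orientation convention):

```csharp
using System.Collections;
using UnityEngine;

// Simplified sketch of the single-camera approach.
public class PathTracedCubemap : MonoBehaviour
{
    public Camera cam;
    public int faceSize = 1024;

    // Face order matches CubemapFace: +X, -X, +Y, -Y, +Z, -Z.
    static readonly Vector3[] Directions =
        { Vector3.right, Vector3.left, Vector3.up, Vector3.down, Vector3.forward, Vector3.back };
    static readonly Vector3[] Ups =
        { Vector3.up, Vector3.up, Vector3.forward, Vector3.back, Vector3.up, Vector3.up };

    public IEnumerator RenderCubemap(Cubemap target)
    {
        cam.fieldOfView = 90f;   // each cubemap face covers 90 degrees
        cam.aspect = 1f;
        var rt = new RenderTexture(faceSize, faceSize, 24);
        var face = new Texture2D(faceSize, faceSize, TextureFormat.RGBA32, false);
        cam.targetTexture = rt;

        for (int i = 0; i < 6; i++)
        {
            cam.transform.rotation = Quaternion.LookRotation(Directions[i], Ups[i]);
            yield return WaitForConvergence();   // placeholder: let the path tracer accumulate

            // Read the converged frame back and copy it into the matching cubemap face.
            RenderTexture.active = rt;
            face.ReadPixels(new Rect(0, 0, faceSize, faceSize), 0, 0);
            face.Apply();
            target.SetPixels(face.GetPixels(), (CubemapFace)i);
        }

        target.Apply();
        RenderTexture.active = null;
        cam.targetTexture = null;
        rt.Release();
    }

    IEnumerator WaitForConvergence()
    {
        // Placeholder: replace with whatever logic waits for the accumulation to
        // reach the target sample count; here it just waits a fixed number of frames.
        for (int f = 0; f < 256; f++)
            yield return null;
    }
}
```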
Now I am trying to reduce the rendering time for these images. My idea was to instantiate more than one camera, render different faces at the same time, and then combine them in a separate script; essentially, to parallelize the work as much as the resources (GPU memory and so on) allow. But this didn't work: it actually takes more time with multiple cameras than with just one (almost double). Any idea why? Does the Unity engine not let me issue multiple render requests at the same time, so the work isn't actually parallel? Or is it just how the GPU itself works?
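For reference, the multi-camera attempt was essentially this (simplified sketch; cameraPrefab and faceSize are placeholders):

```csharp
using UnityEngine;

// Simplified sketch of the multi-camera attempt. All six cameras stay enabled
// with their own render targets, so they all accumulate at the same time;
// a separate script then reads the six targets back and combines them.
public class ParallelFaceCameras : MonoBehaviour
{
    public Camera cameraPrefab;
    public int faceSize = 1024;

    // Same face order and (approximate) orientations as the single-camera version.
    static readonly Vector3[] Directions =
        { Vector3.right, Vector3.left, Vector3.up, Vector3.down, Vector3.forward, Vector3.back };
    static readonly Vector3[] Ups =
        { Vector3.up, Vector3.up, Vector3.forward, Vector3.back, Vector3.up, Vector3.up };

    void Start()
    {
        for (int i = 0; i < 6; i++)
        {
            Camera c = Instantiate(cameraPrefab, transform.position,
                                   Quaternion.LookRotation(Directions[i], Ups[i]));
            c.fieldOfView = 90f;
            c.targetTexture = new RenderTexture(faceSize, faceSize, 24);
        }
    }
}
```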
Otherwise, any other ideas on how I could reduce the rendering times? I can't optimize the scene geometry much, so I have to find another way.
Thanks for any help or ideas