Optimization options for Helper Cameras?

Hi,

Let’s say I have a second helper camera under the ground plane. It renders only one helper layer and saves the whole render into a 1024x1024 texture. I then use this texture with a custom shader and material to make some cool ground VFX.

My question is: how can I optimize the second camera that writes to the texture? It still has a noticeable impact on FPS even though it only renders 2-3 primitive objects with an unlit white shader on them. I thought that would be much cheaper on the GPU. Maybe I’m doing something wrong. I’ve tried changing the output format of the image from RGBA32 to other formats, but it didn’t help much.

The trick is to not use a camera to render anything. Instead, use command buffers or Graphics calls to render directly into the render texture.

The cost is likely not from rendering on the GPU, but from the CPU side, where the camera does a lot of unnecessary work. For my most recently released project we have a map which used an extra camera to render 3-4 quads, most of them tiny, but this still took nearly 4 ms on the CPU. I swapped to using DrawMeshNow calls and now it sometimes profiles as “0.0 ms”.
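Roughly what that looks like in code (a sketch assuming the built-in pipeline; names like dummyCam and meshesToRender are my placeholders, and the disabled camera exists purely to supply view/projection matrices):

```csharp
using UnityEngine;

// Sketch: draw a few meshes straight into a RenderTexture, skipping
// a live Camera's per-frame culling and render-loop overhead.
public class DirectRenderToTexture : MonoBehaviour
{
    public RenderTexture renderTexture;
    public Camera dummyCam;           // disabled camera, only used for its matrices
    public Mesh[] meshesToRender;
    public Matrix4x4[] meshMatrices;  // one localToWorld matrix per mesh
    public Material unlitMaterial;

    void Update()
    {
        Graphics.SetRenderTarget(renderTexture);
        GL.Clear(true, true, Color.clear);
        GL.PushMatrix();
        // Note: worldToCameraMatrix is on the Camera, not its Transform.
        GL.modelview = dummyCam.worldToCameraMatrix;
        GL.LoadProjectionMatrix(GL.GetGPUProjectionMatrix(dummyCam.projectionMatrix, true));
        for (int i = 0; i < meshesToRender.Length; i++)
        {
            unlitMaterial.SetPass(0);
            Graphics.DrawMeshNow(meshesToRender[i], meshMatrices[i]);
        }
        GL.PopMatrix();
        Graphics.SetRenderTarget(null);
    }
}
```

You could equally record the draws into a CommandBuffer once and replay it, but the immediate-mode version above is the simplest to try first.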


Thanks, that looks helpful, but I can’t replicate it without more documentation. Can you share a specific example? Also, from your example code I don’t understand what meshesToRender[] is. Is it an array of mesh renderers or of meshes? And on the line “GL.modelview = dummyCam.transform.worldToCameraMatrix”, the compiler can’t find worldToCameraMatrix.

Oh, I’m also trying to make this work in HDRP.

UPDATE:

I’ve somehow got it at least working; here is my code. I’m using worldToLocalMatrix, because there is no worldToCameraMatrix on the transform.

public class BottomCameraCustomDraw : MonoBehaviour
{

    public RenderTexture renderTexture;
    public Camera dummyCam;
    public MeshRenderer[] meshesToRender;

    void Update()
    {
        Graphics.SetRenderTarget(renderTexture);
        GL.PushMatrix();
        GL.modelview = dummyCam.transform.worldToLocalMatrix;
        GL.LoadProjectionMatrix(GL.GetGPUProjectionMatrix(dummyCam.projectionMatrix, true));
        for (int i = 0; i < meshesToRender.Length; i++)
        {
            meshesToRender[i].material.SetPass(0);
            // localToWorldMatrix is from a disabled Renderer component
            Graphics.DrawMeshNow(meshesToRender[i].gameObject.GetComponent<MeshFilter>().mesh, meshesToRender[i].localToWorldMatrix);
        }
        GL.PopMatrix();
        Graphics.SetRenderTarget(null);
    }
}

But the results are quite wrong. Here is what it’s supposed to look like (http://prntscr.com/o6sv0d) and here is what I got (http://prntscr.com/o6svcv).

It’s whatever you want it to be. Having it be a list of MeshRenderer components like you have is perfectly fine (though in that case I would use meshesToRender[i].sharedMaterial.SetPass(0); to avoid the memory leak that calling .material on a mesh renderer can cause). You could also use a custom struct array with the data pre-filled from arbitrary sources, so it can come from a mesh renderer, or from purely “virtual” mesh objects with no mesh renderer or game object backing them.
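A sketch of what such a struct could look like (all names here are made up for illustration):

```csharp
using UnityEngine;

// Sketch of the "pre-filled data" variant: a draw request that doesn't
// need a live Renderer or GameObject behind it at draw time.
public struct DrawRequest
{
    public Mesh mesh;
    public Matrix4x4 localToWorld;
    public Material material;
}

public static class DrawRequestUtil
{
    // Capture the data once from a scene renderer...
    public static DrawRequest FromRenderer(MeshRenderer r)
    {
        return new DrawRequest
        {
            mesh = r.GetComponent<MeshFilter>().sharedMesh,
            localToWorld = r.localToWorldMatrix,
            material = r.sharedMaterial,
        };
    }

    // ...or build one purely "virtually", with no GameObject backing it.
    public static DrawRequest Virtual(Mesh m, Vector3 position, Material mat)
    {
        return new DrawRequest
        {
            mesh = m,
            localToWorld = Matrix4x4.TRS(position, Quaternion.identity, Vector3.one),
            material = mat,
        };
    }
}
```

The draw loop then just iterates the array, calls material.SetPass(0), and hands mesh plus localToWorld to Graphics.DrawMeshNow.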

That’d be my typo. It should just be “dummyCam.worldToCameraMatrix”. That matrix is a property of the camera itself, not its transform. (Fixed now in the original post too.)

Hopefully that’ll fix the problems you’re having.
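For reference, with both changes applied (worldToCameraMatrix from the camera, and sharedMaterial/sharedMesh instead of the instantiating accessors), your Update would look roughly like this (untested sketch):

```csharp
void Update()
{
    Graphics.SetRenderTarget(renderTexture);
    GL.PushMatrix();
    // worldToCameraMatrix lives on the Camera, not its Transform.
    GL.modelview = dummyCam.worldToCameraMatrix;
    GL.LoadProjectionMatrix(GL.GetGPUProjectionMatrix(dummyCam.projectionMatrix, true));
    for (int i = 0; i < meshesToRender.Length; i++)
    {
        // sharedMaterial avoids the per-access material copy that .material creates.
        meshesToRender[i].sharedMaterial.SetPass(0);
        Graphics.DrawMeshNow(
            meshesToRender[i].GetComponent<MeshFilter>().sharedMesh,
            meshesToRender[i].localToWorldMatrix);
    }
    GL.PopMatrix();
    Graphics.SetRenderTarget(null);
}
```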