Understanding OnPreRender and OnPostRender to discard fragments

RenderTexture mainRenderTexture;

void OnPreRender()
{
    mainRenderTexture = RenderTexture.GetTemporary(width, height, 16);
    Camera.main.targetTexture = mainRenderTexture;
}

void OnPostRender()
{
    Camera.main.targetTexture = null;
    Graphics.Blit(mainRenderTexture, null as RenderTexture, postProcessMaterial);
    RenderTexture.ReleaseTemporary(mainRenderTexture);
}

I would really appreciate it if someone could explain this snippet of code. If I set a target texture for the main camera in OnPreRender(), does that mean the whole scene is rendered into the target texture every frame with the standard shader? And when I do Graphics.Blit inside OnPostRender(), is the result of that previous rendering stage read again and re-processed with the post-processing material's shader? It seems like I am doing double the rendering work to get an output! In my case, I discard a lot of pixels in my postProcessMaterial fragment shader, so it looks like I first render everything in the pre-rendering stage and then discard those pixels in a full-screen post-processing pass! Am I misunderstanding this process? Is there a way to discard those pixels earlier than this?

If you set the camera to use a render texture as the target texture in OnPreRender, it means it renders exactly the same stuff as it would if you hadn’t done that, but to the render texture you assigned.

That’s it. Normally Unity renders to the frame buffer, or to an internally created render texture, depending on whether you’re using post processing or not (or more specifically, whether the post processing turns on forceIntoRenderTexture on the camera). You’re just specifying one yourself. In the case above it’ll be an ARGB32 format with a 16 bit depth buffer and no MSAA, at whatever resolution you’ve chosen, since ARGB32 and no MSAA are the defaults. If you have HDR enabled on the camera, or MSAA enabled in the quality settings and on the camera, those will be ignored since you’re stomping on them, just like you are the resolution.
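
Side note, since you’re stomping on those settings: if you did want the temporary texture to follow the camera’s HDR setting and the quality-settings MSAA instead of those defaults, you could request them explicitly when grabbing the texture. A minimal sketch, assuming your existing width and height fields:

RenderTextureFormat format = Camera.main.allowHDR
    ? RenderTextureFormat.DefaultHDR
    : RenderTextureFormat.Default;
int msaa = Mathf.Max(1, QualitySettings.antiAliasing); // GetTemporary expects 1, 2, 4, or 8
mainRenderTexture = RenderTexture.GetTemporary(width, height, 16, format, RenderTextureReadWrite.Default, msaa);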

But otherwise it’s exactly the same as rendering normally.

I’m not sure I understand exactly what you’re worried is happening. You’re reading the render texture, which holds the result of the scene render, and applying some shader to it, outputting back to the camera’s default render target (either the frame buffer or the internal render texture) at the default resolution. If you clip, you won’t be updating those pixels of the target render texture with a new value. But you’re not really doubling the work. The scene is already rendered, and will eventually need to be copied to the frame buffer anyway to be displayed. Using a Blit is what Unity would do anyway for any other post processing, so doing that with your own material saves a Blit (and an additional render texture).
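
For comparison, when Unity manages the render texture itself (the usual built-in pipeline image effect route), the equivalent is roughly this sketch, assuming the script sits on the camera and reuses your postProcessMaterial field:

void OnRenderImage(RenderTexture src, RenderTexture dest)
{
    // Unity has already rendered the scene into src; whatever ends up in dest is what gets displayed.
    Graphics.Blit(src, dest, postProcessMaterial);
}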


Thank you for your response, @bgolus ! :slight_smile:
I was hoping that I could apply the effect while rendering into the render texture in the first place, instead of first writing the scene into the render texture and then running the shader filter with that render texture as input. Do you think a replacement shader would allow me to render into the mainRenderTexture with a custom fragment shader in OnPreRender() itself, so that I only need to blit the result in OnPostRender()? Do you think that is a better approach?

void OnPreRender()
{
    mainRenderTexture = RenderTexture.GetTemporary(width, height, 16);
    Camera.main.targetTexture = mainRenderTexture; // replacement shader modifying the results while rendering!?
}

void OnPostRender()
{
    Camera.main.targetTexture = null;
    Graphics.Blit(mainRenderTexture, null as RenderTexture); // no need for postProcessMaterial!?
    RenderTexture.ReleaseTemporary(mainRenderTexture);
}

If you want to use a replacement shader, then use a replacement shader. However, I wonder if that is actually what you want. If you’re getting the result you want from the current code, then it’s most likely already doing everything correctly.

The comments you have in the above script don’t make any sense. They might as well be:

Camera.main.targetTexture = mainRenderTexture; // add two eggs and mix until smooth

The comment you have is exactly as related to that line as the egg one is.

Imagine you have several different sheets of paper. By default, if you do nothing, Unity is going to paint the scene onto a regular letter size sheet and then show that to you. Using that metaphor, this is what’s happening with your code.

// Hey Unity, before you paint anything I have some things I want you to do
void OnPreRender()

// take an older piece of paper and cut it down to the size you want
mainRenderTexture = RenderTexture.GetTemporary(width, height, 16);

// tell Unity to use your piece of paper rather than the one it has
Camera.main.targetTexture = mainRenderTexture;

// Unity then paints the scene to your paper

// Hey Unity, after you finish painting everything, I have some things I want you to do
void OnPostRender()

// go back to your original piece of paper
Camera.main.targetTexture = null;

// while the paint is wet, just slap the paper you just painted onto your original paper to make a copy
// if they're not the same size, just kind of smear the paint around a bit to get it to fit
Graphics.Blit(mainRenderTexture, null as RenderTexture);

// now throw away the paper I gave you
RenderTexture.ReleaseTemporary(mainRenderTexture);

// Unity now shows the original piece of paper to you

At no point in any of that did you tell Unity to paint the objects differently, just to a different piece of paper.

If you want to change the shader the objects render with, then you want to use a replacement shader, or manually override the material / shader on your objects.
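
The second option is just assigning a material that uses your shader to the renderers you care about. A quick sketch, with a hypothetical customMaterial field:

public Material customMaterial;

void ApplyOverride(Renderer[] renderers)
{
    // Every renderer passed in now draws with customMaterial's shader.
    foreach (Renderer r in renderers)
        r.sharedMaterial = customMaterial;
}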

If you want to render to a different render texture, and use a replacement shader, you can do that. But you don’t need to use OnPreRender / OnPostRender (though you can). You could just do:

void Update()
{
  tempTexture = RenderTexture.GetTemporary(width, height, 16);
  camera.targetTexture = tempTexture;
  camera.RenderWithShader(replacementShader, "RenderType");
  camera.targetTexture = null;
  RenderTexture.ReleaseTemporary(tempTexture); // release it so a new temporary texture isn't leaked every frame
}

Though in that example the output of the replacement shader will never be seen by you, as it’s not ever getting copied to the frame buffer (which is what your screen is actually displaying, and what that Blit to a null in your code is doing).
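
If you do want to see that result and still render into your own texture, one way is to combine a replacement shader with the OnPreRender / OnPostRender structure from your earlier snippet. A sketch, assuming this script lives on the main camera and that width, height, and replacementShader are fields you provide:

RenderTexture mainRenderTexture;

void Start()
{
  // The camera now renders every object with the replacement shader,
  // so the "filtering" happens during the scene render itself.
  Camera.main.SetReplacementShader(replacementShader, "RenderType");
}

void OnPreRender()
{
  mainRenderTexture = RenderTexture.GetTemporary(width, height, 16);
  Camera.main.targetTexture = mainRenderTexture;
}

void OnPostRender()
{
  Camera.main.targetTexture = null;
  // Plain copy to the screen; no post process material needed here.
  Graphics.Blit(mainRenderTexture, null as RenderTexture);
  RenderTexture.ReleaseTemporary(mainRenderTexture);
}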

You could also just do:

void Start()
{
  camera.SetReplacementShader(replacementShader, "RenderType");
}

And nothing else. But I highly doubt that’s what you want.


Thank you for your time and the great explanation @bgolus ! I got it! :smile: