At its core, what I’m trying to achieve is to downscale the output of a (non-main) camera onto a texture (RenderTexture, Texture2D, etc.; it doesn’t matter) by averaging the colors in the portion of the screen that corresponds to each pixel’s position in the downscaled texture.
In other words, if my downscaled texture is 2x1, the first pixel would contain the average of all colors in the left half of the camera output, and the second pixel the average of all colors in the right half.
This is easy to do in OnRenderImage() directly in C# but is highly inefficient. Does anyone have an idea of how to do this in a way that performs well?
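For context, the naive CPU approach looks roughly like this (a sketch, not my exact code; the 2x1 output size and the variable names are just illustrative):

void OnRenderImage(RenderTexture src, RenderTexture dest)
{
    // Read the frame back to the CPU -- this readback is the expensive part.
    RenderTexture.active = src;
    var frame = new Texture2D(src.width, src.height, TextureFormat.RGBA32, false);
    frame.ReadPixels(new Rect(0, 0, src.width, src.height), 0, 0);
    frame.Apply();
    RenderTexture.active = null;

    int outW = 2, outH = 1; // downscaled size
    int blockW = src.width / outW, blockH = src.height / outH;
    var pixels = frame.GetPixels();
    var result = new Color[outW * outH];

    // Average every block of the source into one output pixel.
    for (int oy = 0; oy < outH; oy++)
    for (int ox = 0; ox < outW; ox++)
    {
        Color sum = Color.clear;
        for (int y = 0; y < blockH; y++)
        for (int x = 0; x < blockW; x++)
            sum += pixels[(oy * blockH + y) * src.width + ox * blockW + x];
        result[oy * outW + ox] = sum / (blockW * blockH);
    }

    // ...write `result` into the downscaled texture here...
    Destroy(frame); // avoid leaking a Texture2D every frame
    Graphics.Blit(src, dest);
}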
I wanted to look into doing this with a shader where each pass would average the colors in 2x2 pixel quads and halve the resolution, then repeat the operation as many times as necessary. By my estimates, it would take at most 8 passes and touch about as many pixels in total as a single pass over a roughly 550x550 image. But I don’t know how to programmatically run n passes where n is only known at runtime, and I would rather not call Graphics.Blit() n times if possible (though that may still be better than my current solution).
To clarify, I’m trying to accomplish something like this inside a single shader if possible (pseudocode):
for (int i = 0; i < n; i++) {
    Graphics.Blit(src, dest);
    src = dest;
    dest.xy /= 2; // reduce dest resolution by half on each pass
}
I have a working version of the above, but I feel like the overhead is still too high, since reducing the resolution of dest (a RenderTexture) actually means creating a new one each time, on top of the overhead from Blit().
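A version of that loop built on RenderTexture.GetTemporary() (which pools the intermediate textures instead of allocating brand-new ones every frame) might look something like this; downsampleMaterial, targetWidth and targetHeight are placeholders:

RenderTexture current = src;
while (current.width > targetWidth || current.height > targetHeight)
{
    int w = Mathf.Max(current.width / 2, targetWidth);
    int h = Mathf.Max(current.height / 2, targetHeight);

    // Each pass averages 2x2 quads of the previous level into one pixel.
    RenderTexture next = RenderTexture.GetTemporary(w, h, 0, current.format);
    Graphics.Blit(current, next, downsampleMaterial);

    if (current != src)
        RenderTexture.ReleaseTemporary(current);
    current = next;
}
// `current` now holds the fully downscaled result; release it when finished.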
A “cheap” method would be to either render to or blit to a render texture that is something like 512x256 and has automatic mip maps enabled, then sample from the 8th mip level (which will be a 2x1 texture). If you blit your higher-resolution render to that render texture you’ll lose some of the pixels, but this is often how bloom and depth-of-field type effects work.
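For reference, setting that up might look something like this (untested sketch; averageMaterial is a placeholder for whatever samples the high mip level). Mip 8 of a 512x256 texture is 2x1 because 512 >> 8 == 2 and 256 >> 8 == 1.

// Create a small render texture with an auto-generated mip chain.
RenderTexture lowRes = new RenderTexture(512, 256, 0);
lowRes.useMipMap = true;         // must be set before Create()
lowRes.autoGenerateMips = true;  // mips regenerate whenever the texture is rendered to
lowRes.Create();

// Each frame: fill mip 0, then let a material read the 2x1 level (mip 8).
// Graphics.Blit(cameraOutput, lowRes);
// Graphics.Blit(lowRes, downscaledTarget, averageMaterial); // material samples with tex2Dlod at lod = 8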
I have a tiny bit of leeway: it’s possible for me to sample only some of the pixels, i.e. it’s OK for some small items (in the distance) to occasionally be missed. But as a first step I would like to find a solution that doesn’t ignore any pixels (or close to it).
I am trying to implement this to test it out, but I must be missing some crucial piece of information, or I’m going about it wrong. Below is my attempt, but it looks like tex2Dlod() only ever samples from mip level 0.
This is a script I apply to the camera:
using UnityEngine;

public class Test : MonoBehaviour
{
    [SerializeField] public RenderTexture someText;
    [SerializeField] public Material ShaderMaterial;

    private Camera cam;

    void Start()
    {
        cam = GetComponent<Camera>();
        if (cam.targetTexture != null)
        {
            cam.targetTexture.Release();
        }
        cam.targetTexture = new RenderTexture(256, 256, 24);
        cam.targetTexture.Release(); // unsure about the double Release here, but I'm getting a correctly sized camera viewport
        cam.targetTexture.useMipMap = true;
        cam.targetTexture.autoGenerateMips = true;
        cam.targetTexture.Create();
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        Graphics.Blit(src, someText, ShaderMaterial);
    }
}
And my frag shader section is simply:
sampler2D _MainTex;

fixed4 frag (v2f i) : SV_Target
{
    float4 col = tex2Dlod(_MainTex, float4(i.uv, 0, 3)); // any number here results in the same
    return col;
}
Any idea where I’m going wrong? I’m assuming .targetTexture and src are simply not the same texture. Should I instead read from the buffer in Update() or something? I’m still digging through the documentation, but any help is appreciated. Thanks a ton!
Edit: if I want auto mip maps enabled on the source that OnRenderImage() receives, should I add a command buffer to the camera and SetTargetTexture() that way at AfterEverything?
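In case it helps, one thing worth trying is sketched below. It assumes the problem is simply that the src texture handed to OnRenderImage() has no mip chain, so tex2Dlod() has nothing above level 0 to read: blit src into a temporary render texture that does have mip maps, let them auto-generate, and run the material on that instead. someText and ShaderMaterial are the fields from the script above.

void OnRenderImage(RenderTexture src, RenderTexture dest)
{
    // Copy src into an intermediate RT that has an auto-generated mip chain.
    var desc = src.descriptor;
    desc.useMipMap = true;
    desc.autoGenerateMips = true;
    RenderTexture withMips = RenderTexture.GetTemporary(desc);

    Graphics.Blit(src, withMips);                       // writing mip 0 triggers mip regeneration
    Graphics.Blit(withMips, someText, ShaderMaterial);  // tex2Dlod can now read higher mip levels
    Graphics.Blit(src, dest);                           // pass the normal camera image through

    RenderTexture.ReleaseTemporary(withMips);
}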