RenderTexture merging issues for a custom checkerboard rendering shader

I want to implement a simple checkerboard rendering technique at a 4K (3840x2160) target resolution.

I have two 1920x2160 RenderTextures that I want to merge with a checkerboard (or scanline) pattern in my fragment shader.
The first RenderTexture comes straight from the camera; the other is just the previous rendered frame.
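For what it's worth, the intended merge can be sketched on the CPU in plain C. The layout here is my assumption, not from the post: the current frame owns the even checkerboard cells ((x + y) % 2 == 0), the previous frame owns the odd ones, and half-width column sx feeds the 2x1 pixel pair at target columns 2*sx and 2*sx + 1:

```c
#define W 4            /* target width  (3840 in the real setup) */
#define H 2            /* target height (2160 in the real setup) */

/* Merge two half-width frames (each W/2 x H) into one full-width frame.
   Assumed layout: current frame = even cells, previous frame = odd cells. */
void checkerboard_merge(const int cur[H][W / 2], const int prev[H][W / 2],
                        int out[H][W])
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int sx = x / 2;  /* half-resolution source column */
            out[y][x] = ((x + y) % 2 == 0) ? cur[y][sx] : prev[y][sx];
        }
}
```

With a still camera, cur and prev hold the same image, so the merged output should be indistinguishable from either input; any visible checkerboard pattern means the two samples really do differ.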

The Problem
I get awful merging artifacts (img. 3) that make the final image look much worse than native (img. 2).

It’s not that the pattern doesn’t work or that it samples the wrong pixels: if I feed the same texture into both inputs, I get the expected 2x1 pixel pairs that you see in img. 1.

The camera is standing still and so is the geometry. Even if I disable the alternating camera offset for testing purposes, I still get these artifacts, caused, apparently, by sampling two different images.

The problem persists if I disable any other camera effect.

  1. Same image as input
  2. 4K native
  3. 4K checkerboard

Shader code:

fixed4 frag (v2f i) : SV_Target
{
    // Pixel coordinates at the target (4K) resolution
    int x = (int)(i.uv.x * _texWidthTarget);
    int y = (int)(i.uv.y * _texHeight);

    bool even = ((x + y) % 2) == 0;

    //float cord = (int)((i.uv.x + _pixelSizeX * 0.25) / _pixelSizeX); // _pixelSizeX = 1/_texWidth
    //float realX = cord * _pixelSizeX;

    // Map the target column back into the half-width source texture
    float x2 = x * 0.5;
    float realX = (float)x2 / (float)_texWidth;

    float realY = i.uv.y; //+ _pixelSizeY * 0.5;

    fixed4 col = tex2D(_MainTex, float2(realX, realY));
    fixed4 oldcol = tex2D(_oldTex, float2(realX, realY));

    // (rest of the function omitted in the post; presumably: return even ? col : oldcol;)
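One detail worth checking in the mapping above: realX = (x * 0.5) / _texWidth lands exactly on a texel edge for even x, so bilinear filtering blends two adjacent source columns. A plain-C sketch of a mapping that samples texel centers instead (the function name is mine, not from the shader):

```c
/* Map a target-resolution column x (0 .. 2*srcW - 1) to a u coordinate
   at the center of the corresponding half-width source texel:
   u = (floor(x / 2) + 0.5) / srcW. Both pixels of a 2x1 pair then hit
   the same texel center, so bilinear filtering cannot bleed columns. */
float source_u(int x, int srcW)
{
    int sx = x / 2;
    return (sx + 0.5f) / (float)srcW;
}
```

Alternatively, setting the RenderTextures to point filtering sidesteps the edge ambiguity entirely.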

Some updates after having adjusted the UV coordinates:

I saved two frames (without any camera change between them) and used them as input to the checkerboard algorithm; the result matches what I expect: almost a stretched image, because both frames are almost the same.

But if I do it at runtime with the last two frames, I still get this horrible result:

What’s different between saving the two render textures to disk and reading them back as assets, versus accessing them directly at runtime?

It’s possible the textures are getting flipped vertically. There’s a lot of cases where this can happen due to oddities with graphics APIs that Unity tries to take into account behind the scenes. Try checking if _MainTex_TexelSize.y is negative in one case or the other. If it’s negative in one case, but not the other, you may need to adjust the order of the checkerboard to account for it.
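To make that check concrete, the flip can be folded into the v coordinate before sampling. A plain-C sketch (fix_v is a hypothetical helper; in the shader you would test _MainTex_TexelSize.y directly):

```c
/* If a texture has been flipped vertically (Unity signals this with a
   negative _MainTex_TexelSize.y), mirror its v coordinate before sampling
   so both inputs are read in the same orientation. */
float fix_v(float v, float texelSizeY)
{
    return (texelSizeY < 0.0f) ? 1.0f - v : v;
}
```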


Thanks,
but it doesn’t seem to work.

I may have a lead:
I first use Graphics.Blit(source, target, material), where source is the current rendered frame and target is the output 4K render texture, and I pass the old render texture to the material.

Then I do Graphics.Blit(source, oldrt); to store the current frame for the next pass.
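One thing to watch with this sequence: the timing of that copy determines which frame the next composite sees, and sampling a RenderTexture while it is also the active render target is undefined. A common alternative is to ping-pong two history buffers and swap references each frame instead of copying; sketched in plain C (swap_history and the Tex struct are illustrative stand-ins, not Unity API):

```c
/* Ping-pong two buffers: swap the references each frame so the texture
   being sampled is never the one being written in the same pass. */
typedef struct { int frame; } Tex;  /* stand-in for a RenderTexture */

void swap_history(Tex **current, Tex **history)
{
    Tex *tmp = *current;
    *current = *history;
    *history = tmp;
}
```

In Unity terms this would mean swapping two RenderTexture references rather than blitting the current frame over the history texture.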

Could gamma have something to do with it? The rendering is not in linear color space; I use the default settings when I create the RenderTextures.

It looks like you have the same image twice, but one of them is offset by one pixel vertically. I made a quick test, isolated every second pixel, and both images are exactly the same.
(attached test image: lines.jpg)


Thanks.
I have no idea why; they should be offset by one pixel horizontally.

Please check this updated code:

float4 frag (v2f i) : SV_Target
{
    int x = (int)(i.uv.x * _texWidthTarget);
    int y = (int)(i.uv.y * _texHeight);

    bool even = ((x + y) % 2) == 0;

    // Quarter-texel offset to keep the sample inside the intended source texel
    float x2 = x * 0.5;
    float realX = (float)x2 / (float)_texWidth + _pixelSizeX * 0.25;

    float realY = i.uv.y;

    float4 col = tex2D(_MainTex, float2(realX, realY));
    float4 oldcol = tex2D(_oldTex, float2(realX, realY));

I have discovered another thing:

In the first image I return the sample from the source RenderTexture (1920x2160) in OnRenderImage in every case, and somehow I end up with different samples.

In the second image I return the sample from the previous RenderTexture (_oldTex), already blitted to another RenderTexture of the same size without any shader, and I get the expected result.

This is the code:

Graphics.Blit (source, target, mat);

where source is a 1920x2160 RT and target is a 3840x2160 RT.

So why is the sample different?

EDIT: the problem persists even with plain sampling, such as float4 col = tex2D(_MainTex, float2(i.uv.x, i.uv.y));

If I may ask, how are you selecting which alternating pixels to render in the first place? If I understand correctly that you are rendering to swapped 1920x2160 render targets, then with a still camera and static geometry, wouldn’t the resulting composition be the same as a single 1920x2160 rendering, since both past frames produce exactly the same image?


Yes, it should, but it doesn’t; that’s the problem.

Any progress on this? I’m very curious about checkerboard rendering