# Issue with image rotation in Compute Shader

Hey everyone,
I have implemented an image-rotation compute shader. The image rotates fine, but for some reason pixels from the right side of the image show up on its left side.

I use this image as an input (source) image.

(I re-scaled the original image down for this website, so it wouldn’t take so much space)

The image width is 652 and height is 817

When I process this image in my Compute Shader I get the following result:

This is the Compute Shader code I use:

```hlsl
// Kernel declaration and numthreads were omitted from the original post;
// (8, 8, 1) matches the dispatch described below.
#pragma kernel CSMain

Texture2D<float4> source;
RWStructuredBuffer<float4> pixels;

int angle;
int width;
int height;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    float radians = angle * (3.14159274 * 2 / 360);
    float s = sin(radians);
    float c = cos(radians);

    int w = width;
    int h = height;

    float2x2 r2 = float2x2(c, -s, s, c);

    float2 center = float2(w, h) / 2;
    float2 pos = mul(r2, id.xy - center) + center;

    if (min(pos.x, pos.y) < 0 || max(pos.x - w, pos.y - h) >= 0)
    {
        pixels[id.x + id.y * w] = float4(0, 0, 0, 0);
    }
    else
    {
        pixels[id.x + id.y * w] = source[pos.xy];
    }
}
```
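For reference, the same inverse-mapping idea can be sketched in plain Python (this is not the shader itself: `rotate_image` is a hypothetical helper, lists stand in for the texture, and sampling is nearest-neighbour): each destination pixel is rotated around the image center to find its source position, and anything landing outside the source is written as 0, like the shader's black/transparent branch.

```python
import math

def rotate_image(src, angle_deg):
    # src is a list of rows. Each destination pixel (x, y) samples src at
    # the position found by rotating (x, y) around the image center.
    h, w = len(src), len(src[0])
    rad = angle_deg * math.pi / 180.0
    c, s = math.cos(rad), math.sin(rad)
    cx, cy = w / 2.0, h / 2.0
    dst = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Apply the 2x2 rotation matrix [[c, -s], [s, c]] to (x, y) - center.
            sx = c * (x - cx) - s * (y - cy) + cx
            sy = s * (x - cx) + c * (y - cy) + cy
            ix, iy = int(sx), int(sy)
            if 0 <= ix < w and 0 <= iy < h:
                dst[y][x] = src[iy][ix]
            # else: the pixel stays 0, like the shader's zero-fill branch.
    return dst
```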

And this is the C# code:

```csharp
void Start()
{
    pixels = new Color[source.width * source.height];

    pixelsBuffer = new ComputeBuffer(pixels.Length, sizeof(float) * 4);
    pixelsBuffer.SetData(pixels);

    UpdateRender();
}
```

I changed the condition a bit, adding a bounds check on `id` (marked with a comment):

```hlsl
if (min(pos.x, pos.y) < 0 || max(pos.x - w, pos.y - h) >= 0
    || id.x >= width || id.y >= height) // added
{
    pixels[id.x + id.y * w] = float4(0, 0, 0, 0);
}
else
{
    pixels[id.x + id.y * w] = source[pos.xy];
}
```

And now those pixels are mostly gone. However, I'm still not sure exactly what is happening: it seems the entire image is shifted by 4 pixels to the right. I rotated the image 90 degrees, and as can be seen in the image below, the left-edge pixels are gone. There's something wrong with my code, but I can't find the source of it…

Well, I found the issue. Given the image size, the width can't be divided evenly by the thread group size: if I ceil the x thread group count, I get 4 threads more than the actual width; if I floor it, 4 fewer. So that's why there's that gap. Not sure how to fix this, though.
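The arithmetic above can be checked directly. A plain-Python sketch, using the width (652) and thread-group size (8) from the post:

```python
import math

width = 652       # image width from the post
group_size = 8    # numthreads x dimension

# Ceiling the group count makes the dispatch cover the whole image,
# but the last group runs threads past the right edge.
groups_ceil = math.ceil(width / group_size)   # 82 groups
extra = groups_ceil * group_size - width      # 4 threads past the edge

# Flooring instead leaves part of the image unprocessed.
groups_floor = width // group_size            # 81 groups
missing = width - groups_floor * group_size   # 4 columns never written

print(groups_ceil, extra, groups_floor, missing)  # 82 4 81 4
```

This is why the ceiled dispatch needs a bounds check in the shader: the 4 extra threads per row otherwise write past the intended row, which shows up as the shifted/wrapped pixels.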

Changing numthreads to (1, 1, 1) and setting the thread group counts to width and height does fix the problem, although it's not a very efficient way to do it…

Hi, I had the same issue, though caused by a different problem. I don't know if you have found another solution, but what I have seen solve this is limiting the compute shader to the width and height of your image.

You can keep numthreads at (8, 8, 1) and keep the `Mathf.CeilToInt(source.width / 8f)` and `Mathf.CeilToInt(source.height / 8f)` group counts; that part is correct.

You can also use GetKernelThreadGroupSizes to be more flexible, if you are interested.

Here is the code:

```hlsl
// Kernel declaration and numthreads were omitted from the original post;
// (8, 8, 1) matches the dispatch described above.
#pragma kernel CSMain

Texture2D<float4> source;
RWStructuredBuffer<float4> pixels;

int angle;
int width;
int height;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // This limits the compute shader to the size of your image.
    if (id.x > width - 1 || id.y > height - 1)
        return;

    float radians = angle * (3.14159274 * 2 / 360);
    float s = sin(radians);
    float c = cos(radians);

    int w = width;
    int h = height;

    float2x2 r2 = float2x2(c, -s, s, c);
    float2 center = float2(w, h) / 2;
    float2 pos = mul(r2, id.xy - center) + center;

    if (min(pos.x, pos.y) < 0 || max(pos.x - w, pos.y - h) >= 0)
    {
        pixels[id.x + id.y * w] = float4(0, 0, 0, 0);
    }
    else
    {
        pixels[id.x + id.y * w] = source[pos.xy];
    }
}
```

I don't know if you have to keep the `- 1`, but try with and without.
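For what it's worth, with whole-number thread IDs the `- 1` form and a `>=` comparison are equivalent, so either bounds check works. A quick check in plain Python (just verifying the comparison logic, not shader behavior):

```python
width = 652  # image width from the post

# For integer ids, "id > width - 1" and "id >= width" always agree.
for i in range(width + 16):
    assert (i > width - 1) == (i >= width)
print("equivalent")
```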

I hope it will work for you.