how to rerun the shader based on the last output

Hi all, I wrote a shader which turns a white pixel transparent if any of the four pixels around it is transparent.
First I need to set the pixel at the corner to transparent,
then the shader turns a white pixel transparent if any of the four pixels around it is transparent, as the pic below shows.

So my questions are:

  1. I need to rerun the shader to turn all the white pixels transparent based on the last output. How do I do that?
  2. How do I do these two things one after another on the same texture?

Very new to Unity shaders. Thanks in advance!!

4390867--398995--comp4.jpg

You cannot read from and write to the same texture outside of some very specific situations. On iOS devices you can read the specific pixel being rendered to, but no others, and with compute shaders you can use RWTextures, but there are lots of gotchas there too.

You also can’t render to a Texture2D directly; you have to use a render texture, usually via a Blit(). Call Blit() with your original texture, a material using your custom shader, and a render texture with the same resolution as the original texture. To run it multiple times you’d need a second render texture to render to, and “ping pong” between them. Unfortunately there’s no easy way to know when you’re done, as this is all happening on the GPU and it’s expensive to read data back to the CPU. So you’d have to calculate the number of passes beforehand, which would likely require just as much work as doing all of this on the CPU to begin with, or guess at the max number of passes you might ever need and always do that many. At that point it might be cheaper to do it on the CPU anyway!
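As a rough sketch, the ping-pong setup could look something like this in C# (the material name, pass count, and component layout are all assumptions, not something from this thread — and the fixed pass count is exactly the “guess the max” workaround described above):

```csharp
using UnityEngine;

public class TransparencyDilate : MonoBehaviour
{
    // Material using the custom shader that turns white pixels
    // transparent when a neighbour is transparent (assign in inspector).
    public Material dilateMaterial;
    public Texture2D sourceTexture;
    // Guess at the maximum number of passes ever needed (assumption).
    public int passCount = 64;

    public RenderTexture Run()
    {
        int w = sourceTexture.width, h = sourceTexture.height;
        var rtA = RenderTexture.GetTemporary(w, h, 0);
        var rtB = RenderTexture.GetTemporary(w, h, 0);

        // First pass reads the original texture.
        Graphics.Blit(sourceTexture, rtA, dilateMaterial);

        // "Ping pong": each pass reads the previous result and
        // writes to the other render texture.
        for (int i = 1; i < passCount; i++)
        {
            Graphics.Blit(rtA, rtB, dilateMaterial);
            (rtA, rtB) = (rtB, rtA); // swap read/write targets
        }

        RenderTexture.ReleaseTemporary(rtB);
        return rtA; // caller releases this when done with it
    }
}
```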

The other way to go about it is using a Compute Shader and an RWTexture like I mentioned before. This might be faster than doing it on the CPU. Just realize that flood-fill-like calculations are very difficult to implement efficiently in parallel. The GPU’s biggest strength is its ability to do the same thing many thousands of times, with the assumption that the output of one pixel is not affected by the output of another in the same execution. Flood filling is inherently a serial operation, as each pixel’s result affects the next pixel’s.
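For what it’s worth, a single “relaxation” step of that idea as a compute kernel might look roughly like this (kernel and texture names are made up, and border handling is ignored); you’d still have to dispatch it many times, which is exactly the serial-dependency problem described above:

```hlsl
#pragma kernel CSMain

// One relaxation step: a white pixel with a transparent
// 4-neighbour becomes transparent. Dispatch repeatedly,
// swapping Source/Result each time.
Texture2D<float4> Source;    // previous iteration's output
RWTexture2D<float4> Result;  // this iteration's output

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    float4 c = Source[id.xy];
    bool isWhite = all(c.rgb > 0.99) && c.a > 0;
    bool neighbourClear =
        Source[id.xy + uint2(1, 0)].a == 0 ||
        Source[id.xy - uint2(1, 0)].a == 0 ||
        Source[id.xy + uint2(0, 1)].a == 0 ||
        Source[id.xy - uint2(0, 1)].a == 0;
    Result[id.xy] = (isWhite && neighbourClear) ? float4(c.rgb, 0) : c;
}
```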

TL;DR: this is not something you should be doing on the GPU. Do it on the CPU, or find another way to mark the pixels you want to be transparent beforehand.

Thanks so much for making everything so clear!! I think I now have a rough idea of what to do on the GPU vs. the CPU. I used Blit() to implement it, and used a timer to tell it when to stop replacing the texture, which sounds like a stupid way. lol
The thing I want to do is to remove the white pixels in the background. Since it needs to find the contour, flood fill is the only way I can think of. Do you have any ideas about other methods that do something similar?

Use a different background color? I don’t know the specifics of your scenario, but if the assets are created offline, then using an image with alpha to start with is much faster. If this is something you’re rendering at runtime, use a camera with a clear background.

I need to scan a paper with some patterns on it and, in Unity, turn the background color (the paper color) transparent, so as to keep only the patterns. Because the patterns may contain white, I can’t just turn all the white pixels transparent. Therefore I need an algorithm that recognizes the contour of the pattern, finds all the pixels outside that contour, and turns them transparent.

Using Blit() to swap the texture while the shader does the flood fill works well in this case. I was just wondering if there is a way to do it entirely in the shader instead of flood filling on the CPU. :stuck_out_tongue:
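In case it helps anyone else, the per-pass fragment shader can be sketched roughly like this (a hedged sketch, not my exact code — `v2f_img` comes from UnityCG.cginc, and the white threshold is an assumption):

```hlsl
// Fragment shader for one Blit() pass: a white pixel becomes
// transparent if any of its four neighbours is already transparent.
sampler2D _MainTex;
float4 _MainTex_TexelSize; // xy = 1/width, 1/height (set by Unity)

fixed4 frag(v2f_img i) : SV_Target
{
    fixed4 c = tex2D(_MainTex, i.uv);
    bool isWhite = all(c.rgb > 0.99); // "white" threshold (assumption)
    float2 t = _MainTex_TexelSize.xy;
    bool neighbourClear =
        tex2D(_MainTex, i.uv + float2(t.x, 0)).a == 0 ||
        tex2D(_MainTex, i.uv - float2(t.x, 0)).a == 0 ||
        tex2D(_MainTex, i.uv + float2(0, t.y)).a == 0 ||
        tex2D(_MainTex, i.uv - float2(0, t.y)).a == 0;
    if (isWhite && neighbourClear) c.a = 0;
    return c;
}
```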