Looking at UnityCsReference, SetPixel calls straight into a native function, with no mention of GetRawTextureData. Internally I'd expect it to access the system memory buffer and change the value with pointer math, which is similar to how you'd work with GetRawTextureData in C#. For a single pixel I'd expect the impact to be negligible.
The SetPixel documentation mentions that SetPixels is faster when "regenerating a texture every frame, especially large textures".
SetPixel also needs to convert the pixel from a float format (Color is 4 floats) to the format of the texture. Potentially, it could be faster to write directly to the texture memory in the correct format (8 bits per channel, for example); see the sketch below.
But if you call SetPixel only a few times per frame, the performance impact will be negligible, so it's best to pick the option that reduces the amount of code you need to write.
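For illustration, a minimal sketch of that direct write might look like this. It assumes a CPU-readable texture created with TextureFormat.RGBA32, so the raw buffer maps cleanly onto Color32; the class and method names are made up for the example:

```csharp
using UnityEngine;
using Unity.Collections;

public class DirectPixelWrite : MonoBehaviour
{
    // Hypothetical texture reference; assumed to be RGBA32 and readable.
    public Texture2D texture;

    void WritePixel(int x, int y, Color32 color)
    {
        // GetRawTextureData<T> returns a NativeArray view over the texture's
        // CPU-side buffer. Writing a Color32 (8 bits per channel) avoids the
        // float-to-byte conversion that SetPixel would perform.
        NativeArray<Color32> data = texture.GetRawTextureData<Color32>();
        data[y * texture.width + x] = color;

        // Upload the modified buffer to the GPU, same as after SetPixel.
        texture.Apply(false);
    }
}
```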
You can find some additional info on the difference between GetPixels and GetRawTextureData here.
We are also improving the documentation of these methods for 2021.2. This should be available soon.
I just wonder how much GetRawTextureData costs in terms of the overhead of setting the data up for scripts to access. Is there a performance or memory difference if I use GetRawTextureData on textures of different sizes?
I can just use SetPixel to set one pixel, but on a large texture, such as a 2K texture, I could also use GetRawTextureData to get all the data and set one value. I wonder which one is faster… does getting a managed reference to the texture data with GetRawTextureData cost differently per texture size?
The cost of GetRawTextureData should be independent of the texture size; it just returns a wrapper around the native pointer. It's something you could measure if it is really important for your use case. Let us know what you find!
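If you do want to measure it, a minimal sketch could look like the following. It assumes RGBA32 textures (so the raw data maps onto Color32) and uses a plain Stopwatch; the sizes and iteration count are arbitrary choices for the example:

```csharp
using System.Diagnostics;
using UnityEngine;
using Unity.Collections;

public class RawDataCostProbe : MonoBehaviour
{
    void Start()
    {
        // Hypothetical sizes to compare: 256, 1024, and 2048 square.
        foreach (int size in new[] { 256, 1024, 2048 })
        {
            var tex = new Texture2D(size, size, TextureFormat.RGBA32, false);

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < 10000; i++)
            {
                // If this only wraps the native pointer, the timing
                // should stay flat regardless of the texture size.
                NativeArray<Color32> data = tex.GetRawTextureData<Color32>();
                var first = data[0]; // touch the data so the call isn't dead code
            }
            sw.Stop();

            // Fully qualified to avoid the System.Diagnostics.Debug conflict.
            UnityEngine.Debug.Log($"{size}x{size}: 10000 calls took {sw.ElapsedMilliseconds} ms");
        }
    }
}
```

If the claim above holds, the logged timings should be roughly the same for all three sizes.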