Hi, I would like to do some intense array math in my game, but I would love to utilize the GPU.
Would it be possible to create a dummy shader that accepts two textures as inputs, does some operations on them (e.g. averages them together) and then stores the result in another texture that is accessible by the Unity game logic?
The textures would most likely be larger than 1080p, and I would need to perform many operations on the arrays, so I figured the GPU might be better suited than multithreading on the CPU.
Any thoughts on how to create a simple shader to average two images and retrieve the result for every frame?
Hey, this is what compute shaders are for! You can use them to run pretty much arbitrary computation on the GPU, and the outputs don’t have to be textures either (compute buffers work too). Definitely possible.
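Here’s a minimal sketch of what that could look like. The kernel name `CSMain`, the property names (`InputA`, `InputB`, `Result`), and the 8×8 thread-group size are all my own choices, not anything required by Unity — adapt them to your setup. The `.compute` file first:

```hlsl
// Average.compute (hypothetical file name)
#pragma kernel CSMain

Texture2D<float4> InputA;
Texture2D<float4> InputB;
RWTexture2D<float4> Result;   // UAV: the shader writes directly into this

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // One thread per pixel: average the two inputs.
    Result[id.xy] = (InputA[id.xy] + InputB[id.xy]) * 0.5;
}
```

And a C# component that dispatches it each frame. `enableRandomWrite` must be set on the RenderTexture or the `RWTexture2D` binding will fail:

```csharp
using UnityEngine;

public class AverageDispatcher : MonoBehaviour
{
    public ComputeShader shader;    // assign Average.compute in the Inspector
    public Texture inputA, inputB;  // assumed to have identical dimensions
    RenderTexture result;
    int kernel;

    void Start()
    {
        kernel = shader.FindKernel("CSMain");
        result = new RenderTexture(inputA.width, inputA.height, 0);
        result.enableRandomWrite = true;  // required for RWTexture2D access
        result.Create();
    }

    void Update()
    {
        shader.SetTexture(kernel, "InputA", inputA);
        shader.SetTexture(kernel, "InputB", inputB);
        shader.SetTexture(kernel, "Result", result);
        // One thread per pixel, in 8x8 groups as declared in the shader.
        // This assumes the dimensions are multiples of 8; otherwise round up.
        shader.Dispatch(kernel, inputA.width / 8, inputA.height / 8, 1);
    }
}
```

The resulting RenderTexture can then be used directly by materials or other shaders without ever leaving the GPU.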
One caveat is the “retrieve” part of the question. Compute shaders are great, but getting data back to the CPU can be very slow. Depending on what you’re doing, how much data you’re transferring back, and when you do it, it can take several milliseconds to finish, and by default that stalls the CPU main thread until it completes. Unity has async readback that you can use to retrieve data without stalling, and in some cases work dispatched at the start of the frame can be read back before rendering starts, but generally you should expect the result to be delayed by a frame or two.
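The async path uses Unity’s `AsyncGPUReadback` API. A sketch of how the callback form might look (the method name `RequestReadback` and the choice of `Color32`/`RGBA32` here are my own assumptions; pick a format matching your RenderTexture):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical helper: request the averaged result without blocking the main thread.
void RequestReadback(RenderTexture result)
{
    AsyncGPUReadback.Request(result, 0, TextureFormat.RGBA32, request =>
    {
        if (request.hasError)
        {
            Debug.LogError("GPU readback failed");
            return;
        }
        // The callback typically fires a frame or two later, on the main thread.
        var pixels = request.GetData<Color32>();
        // ... consume pixels here ...
    });
}
```

If you truly need the data on the same frame, `AsyncGPUReadbackRequest.WaitForCompletion()` exists, but it reintroduces exactly the stall described above.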
Usually, if you’re doing work on the GPU, you want to keep that work (and its results) on the GPU.