Hi All,
I have a model that was trained on RGB images with values in the (0, 255) range.
When I create a Tensor from the RenderTexture this way and set it as the input for inference, all colors are distorted:
Tensor input = new Tensor(renderTexture, channels: 3);
This is because the colors in the renderTexture are in the (0, 1) range.
My model/app works as expected when I convert the colors explicitly to (0, 255): basically, I loop over all pixels, read their colors, multiply by 255 in preprocessing (or divide by 255 in postprocessing), store them in a float array, and create the input Tensor for the Barracuda model from that array. But I do this preprocessing/postprocessing on the CPU, and it is very costly.
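For reference, my current CPU-side preprocessing looks roughly like this (a simplified sketch, not my exact code; the readback and scaling steps are the expensive part):

```csharp
using UnityEngine;
using Unity.Barracuda;

public static class Preprocessing
{
    // Sketch of the costly CPU path: read the RenderTexture back to the CPU
    // and scale every channel from (0, 1) to (0, 255) before building the Tensor.
    public static Tensor ToScaledTensor(RenderTexture renderTexture)
    {
        // GPU -> CPU readback (slow).
        var tex = new Texture2D(renderTexture.width, renderTexture.height,
                                TextureFormat.RGBAFloat, false);
        RenderTexture.active = renderTexture;
        tex.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
        tex.Apply();
        RenderTexture.active = null;

        Color[] pixels = tex.GetPixels();
        var data = new float[pixels.Length * 3];
        for (int i = 0; i < pixels.Length; i++)
        {
            // Scale each channel from (0, 1) to (0, 255).
            data[i * 3 + 0] = pixels[i].r * 255f;
            data[i * 3 + 1] = pixels[i].g * 255f;
            data[i * 3 + 2] = pixels[i].b * 255f;
        }

        // NHWC tensor: batch 1, height, width, 3 channels.
        return new Tensor(1, renderTexture.height, renderTexture.width, 3, data);
    }
}
```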
Is there any other way to create a Tensor from a RenderTexture and specify that the values should be scaled to RGB (0, 255)?
Is there any example of how I can do this in shaders? Should I do this in a Unity compute shader?
Any advice would be helpful.
Thanks!