Hi,
I was converting some OpenGL code to Unity and ran into an issue. I have a setup where I do some calculations in the fragment shader, and I just realized that all output values are being clamped to a maximum of 1.
So for instance, in OpenGL my frag shader has:

out vec4 fragColor;
fragColor = vec4(5.0, 6.0, 7.0, 8.0);
What is the format of the buffer you’re rendering to? Is HDR not enabled on the camera, or have you assigned a non-float/half render texture as the target?
How do you render to your texture, and when do you read the data? I’m pretty sure it’s the default tonemapping that mangles your texture by clamping values to the 0–1 range.
What format is the Texture2D you’re copying data to? Is it RGBA32 or RGBAHalf/Float? Is the render texture also ARGBHalf/Float? If they’re not both float or half formats, the values will be clamped.
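For illustration, a matching float pair might look like this (sizes and variable names are just placeholders):

```csharp
// Both the render target and the readback texture use full-float formats,
// so values outside the 0-1 range survive the copy.
RenderTexture rt = new RenderTexture(256, 256, 0, RenderTextureFormat.ARGBFloat);
Texture2D myTex = new Texture2D(256, 256, TextureFormat.RGBAFloat, false);
```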
The camera has HDR enabled so that I can get values greater than 1. This is called once per Update and written into a RenderTexture. Once every agent has rendered into the RT, the values are then read back into a buffer at the end of the Update:
Color[] h_idata = myTex.GetPixels();
I’m not sure about the tone mapper, or how to read the data before it gets converted back to LDR.
There has to be a second texture. You can’t read a RenderTexture using GetPixels(). You have to copy the RenderTexture’s contents to a Texture2D using ReadPixels() before you can use GetPixels() to read the values of the Texture2D.
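Roughly, that readback sequence looks like this (a sketch, assuming rt and myTex are the same size):

```csharp
// Make the RenderTexture the active target, then copy its pixels
// into the Texture2D's CPU-side data.
RenderTexture.active = rt;
myTex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
RenderTexture.active = null;

// Now the Texture2D holds the copied values and can be read.
Color[] h_idata = myTex.GetPixels();
```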
Graphics.DrawMeshInstanced doesn’t draw the mesh instantly; instead, it enqueues it to be rendered during the normal rendering loop (after Update, LateUpdate, and so on). So you’re actually reading data from the previous frame, after tonemapping.
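One way to read only after this frame’s rendering has finished is to wait for the end of the frame in a coroutine (a sketch; rt and myTex are assumed fields on the same MonoBehaviour):

```csharp
using System.Collections;
using UnityEngine;

IEnumerator ReadBackAtEndOfFrame()
{
    // Wait until the camera has rendered this frame's queued draws.
    yield return new WaitForEndOfFrame();

    RenderTexture.active = rt;
    myTex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    RenderTexture.active = null;

    Color[] h_idata = myTex.GetPixels();
}
```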
Or maybe bgolus is right and it’s simply the format of your Texture2D.
Finally got it. I had to set the TextureFormat on the Texture2D to RGBAFloat as well. Thanks for helping me out, guys; it was a small error, but I probably would have been tearing my hair out next week trying to figure it out.