I want to write a float buffer into a texture. Packing the value across RGBA plus one extra channel works (five channels, because the value is not in ]0,1]), but the accuracy is poor.
I could also write bytes into a Color32, but the binary operators in shaders require shader model 4.
And in the end I need too many textures just to encode the floats.
The last option is to write into a RenderTexture; RenderTexture supports float values.
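Something like this is what I mean by the last option (a rough C# sketch only, assuming the platform supports RenderTextureFormat.ARGBFloat; the size is just a placeholder):

using UnityEngine;

public class FloatRenderTextureExample : MonoBehaviour
{
    RenderTexture m_Texture;

    void Start()
    {
        // Only create the float render texture if the platform supports the format.
        if (SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.ARGBFloat))
        {
            m_Texture = new RenderTexture(256, 256, 0, RenderTextureFormat.ARGBFloat);
            m_Texture.Create();
        }
    }
}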
I don’t understand what you’re trying to do on lines 7 and 8. Can you elaborate? Normally you’d just pass m_Texture.colorBuffer to SetRenderTarget, or just pass m_Texture as the only argument to get the same effect.
Is the third step trying to get the colorBuffer to point to different backing storage (aBuffer)? Why? Just pass your m_Texture to Graphics.SetRenderTarget, then do your rendering, and finally read the data out using m_Texture.colorBuffer.
If you really want to break through the protection and rewrite the private members then you can do it using reflection, but I doubt it will help as colorBuffer is probably just a read-only C# view of the underlying C++ model. You can make the view point to the wrong bit of memory, but the rendering doesn’t use it anyway, that all happens in the C++ code and won’t care where the C# view is pointing.
So if you’re using immediate mode to issue your rendering, you should be able to just do:
Graphics.SetRenderTarget(m_Texture);
GL.Clear(...whatever...);
// draw stuff
// later on, when the rendering is all done
RenderTexture.active = m_Texture;
m_2DTexture.ReadPixels(new Rect(0, 0, width, height), 0, 0);
RenderTexture.active = null;
This is just a sketch, a lot depends on what you’re rendering (immediate mode, full screen quad, or selected meshes from a manually-controlled camera). I also don’t know how much latency you should expect before being able to lock the render target for reading.
And of course there’s a conversion taking place here if your render target is using half precision. I don’t know what the cost is, nor whether it can be mitigated by matching the texture format to the render target format. Presumably ReadPixels is a GPU operation…
I haven’t seen any really good documentation on what any of these functions do, though, nor examples of different types of off-screen rendering. Most of what I know has been figured out by experimentation and by reading other forum or answers posts… not the best way.
Sorry gfoot, my explanation was bad (my English too), but this is what I want:
I read a file → extract the float data → convert it into a float buffer → convert that into a float texture → and finally the Cg shader uses the float data from the texture.
I have reached the same conclusion. If I want to write floating-point values into a RenderTexture, I would have to draw a 3D representation of them... and that is too much.
Conclusion:
You can't fill a RenderTexture's data from your own buffers. You can use Texture2D.SetPixels with a Texture2D, but there is no RGBA_FLOAT or RGBA_HALF texture format.
So for now I pack my floating-point data into five channels, roughly as sketched below... it's not perfect, but it's the only way in Unity.
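A rough C# sketch of that five-channel packing, assuming a positive value whose integer part fits in one byte (one channel for the integer part, four channels for the fraction in base 256; the names are just for illustration):

using UnityEngine;

public static class FloatPackingExample
{
    // Pack a value into 5 bytes: one byte for the integer part,
    // four bytes for the fractional part in base 256.
    // Assumes 0 <= value < 256.
    public static byte[] Pack(float value)
    {
        var bytes = new byte[5];
        bytes[0] = (byte)Mathf.FloorToInt(value);
        float frac = value - bytes[0];
        for (int i = 1; i < 5; i++)
        {
            frac *= 256f;
            bytes[i] = (byte)Mathf.FloorToInt(frac);
            frac -= bytes[i];
        }
        return bytes;
    }

    // Reverse the packing above.
    public static float Unpack(byte[] bytes)
    {
        float value = bytes[0];
        float scale = 1f / 256f;
        for (int i = 1; i < 5; i++)
        {
            value += bytes[i] * scale;
            scale /= 256f;
        }
        return value;
    }
}

The bytes then go into the texture channels (for example via Color32 and SetPixels32), and the shader has to rebuild the value with the matching scale factors.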
[PS for the Unity devs: your EncodeFloatRGBA/DecodeFloatRGBA functions from UnityCG.cginc are wrong... there are 256 values in a channel, not 255. You can test it: dividing by 256.0 is more accurate.]