Hi
Lately I’ve had quite a lot of ideas for things I could do with shaders, but many of them share one issue: I’d need 32-bit floats as input. As far as I know, you can only use vectors for that, which limits you to 4 values. So I thought, “shouldn’t be that hard to pack them into the color channels of a texture and decode them on the GPU.” Turns out it is.
Sadly, it looks like Unity doesn’t support the IntBitsToFloat function (or rather, it ships an older Cg library that doesn’t include it), so that approach doesn’t work. Another way I thought of was doing the whole decoding process myself, but due to the lack of bitwise operations in shaders, that would take way too many instructions (Edit: I just realised I could get away with a lot fewer instructions than I originally thought, but still too many). The last idea was simply storing a single float across two pixels instead of one. Though that’s an easy and quick way, two lookups per value doesn’t seem reasonable to me and might even be slower than the instruction-heavy approach.
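For clarity, here’s roughly what I mean by “doing the decoding myself” — a CPU-side sketch in Python (the channel order, with r as the most significant byte, is just an assumption) that reconstructs an IEEE 754 single from four 8-bit channels using only arithmetic:

```python
def decode_float_rgba(r, g, b, a):
    """Rebuild a 32-bit IEEE 754 float from its four bytes
    (r = most significant) without reinterpreting bits directly."""
    bits = (r << 24) | (g << 16) | (b << 8) | a
    sign = -1.0 if (bits >> 31) else 1.0
    exponent = (bits >> 23) & 0xFF   # 8-bit biased exponent
    mantissa = bits & 0x7FFFFF       # 23-bit fraction
    if exponent == 0:                # subnormal: no implicit leading 1
        return sign * (mantissa / 2.0**23) * 2.0**-126
    if exponent == 255:              # inf/NaN -- ignored for simplicity
        return sign * float("inf")
    return sign * (1.0 + mantissa / 2.0**23) * 2.0**(exponent - 127)
```

In a shader, the shifts and masks would have to become floor() calls and multiplications by powers of two (and the inputs arrive as channel values scaled by 255), which is exactly where the instruction count blows up.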
So my question is: does anyone know a good way to do this without spending an absurd number of instructions or texture lookups on getting a single float value?
Thanks
Chicken