32-bit floating point RenderTexture support needed / pack float to half

Hi community,

I'm currently working on a GPU-based fluid simulation based on the GPU Gems article Fast Fluid Dynamics Simulation on the GPU: http://http.developer.nvidia.com/GPUGems/gpugems_ch38.html.

It seems that the RenderTextureFormat.ARGBHalf format is not accurate enough for at least some steps (I can notice a bit of aliasing when I output the pressure texture, for example). The simulation works quite well, but I have problems maintaining a consistent viscosity at different render target sizes. Since I'm not completely sure whether it is a mistake somewhere in my code or a precision issue:

Is there a way to force higher precision render targets (like D3DFMT_A32B32G32R32F, modern graphics cards are perfectly capable of it) to at least prove that the artifacts are caused by too low precision? Or does anybody know a way to pack a float into two halves to work around that limitation?
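For what it's worth, one generic way to emulate extra precision in a half-float target (not a Unity API, just the idea on the CPU) is to store the value as a coarse half plus a scaled residual half. Here's a numpy sketch; the function names and the scale factor 2048 are my own arbitrary choices, and it only works while the scaled residual stays inside the half-float range:

```python
import numpy as np

SCALE = np.float32(2048.0)  # arbitrary residual scale (2^11); an assumption, tune per value range

def pack_float_to_two_halves(f):
    """Split a float32 into (coarse, residual) halves for extra mantissa bits."""
    f = np.float32(f)
    high = np.float16(f)                      # coarse part, half precision
    residual = f - np.float32(high)           # what the half rounding lost
    low = np.float16(residual * SCALE)        # scale residual up so a half can hold it
    return high, low

def unpack_two_halves_to_float(high, low):
    """Recombine the two halves into an approximation of the original float32."""
    return np.float32(high) + np.float32(low) / SCALE
```

In a shader you'd do the same arithmetic and write the two halves into two channels of the ARGBHalf target, then recombine on read.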

edit: Already tried RenderTexture.DefaultHDR, which causes "Unknown render texture format" errors.

Feeding all gridscale (1 / texturesize) related parameters (alpha, beta, rBeta) with a constant value (besides the texture coordinates) helps to maintain consistent behavior across all render texture sizes, but I suppose it's not the way it's meant to be solved. So I've got a workaround, but the problem still exists… I'd like to use full precision floating point render textures to prove that there are precision issues :slight_smile:
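For context, the alpha/beta/rBeta parameters come from the Jacobi iteration in the GPU Gems chapter, and the diffusion alpha depends on the grid scale dx = 1 / texturesize, which is why behavior drifts with render target size. A rough numpy sketch (wrap-around boundaries via np.roll are a simplification, not the article's boundary handling):

```python
import numpy as np

def jacobi_step(x, b, alpha, r_beta):
    """One Jacobi relaxation step: xNew = (xL + xR + xB + xT + alpha*b) * rBeta."""
    x_l = np.roll(x, 1, axis=1)   # left neighbor
    x_r = np.roll(x, -1, axis=1)  # right neighbor
    x_b = np.roll(x, 1, axis=0)   # bottom neighbor
    x_t = np.roll(x, -1, axis=0)  # top neighbor
    return (x_l + x_r + x_b + x_t + alpha * b) * r_beta

def diffusion_params(texture_size, viscosity, dt):
    """Diffusion parameters per the article: alpha = dx^2 / (viscosity * dt), rBeta = 1 / (4 + alpha).
    Note alpha changes with texture_size, so results depend on render target resolution."""
    dx = 1.0 / texture_size
    alpha = dx * dx / (viscosity * dt)
    return alpha, 1.0 / (4.0 + alpha)
```

Pinning alpha and rBeta to constants (my workaround above) removes that resolution dependence, at the cost of the simulation no longer scaling physically with dx.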

May I ask what your workaround was? We're currently trying to build a texture in Unity using full 32-bit floats. Thanks! Or wait - are you saying that is the workaround? :slight_smile: