InterlockedAdd for floats in compute shader

Hi everyone. This is my first time posting here. Thanks in advance for reading and for any help.

I’m trying to simulate deformable objects and need an atomic add for floating-point values, but HLSL’s InterlockedAdd intrinsic only supports int/uint. I have searched a lot and found a workaround like this one:
https://www.gamedev.net/forums/topic/613648-dx11-interlockedadd-on-floats-in-pixel-shader-workaround/
But I’d still like to find a better or more official way.
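
For context, the workaround from that thread boils down to a compare-exchange loop that reinterprets the float bits with asuint/asfloat. A minimal sketch of that idea (the buffer name and byte address are just placeholders I made up, not from the original post):

```
// Sketch of the compare-exchange workaround for atomic float add.
// Assumes a RWByteAddressBuffer named _Accum; 'addr' is a byte offset.
RWByteAddressBuffer _Accum;

void AtomicAddFloat(uint addr, float value)
{
    uint oldBits = _Accum.Load(addr);   // current value, reinterpreted as bits
    uint expected, newBits;
    [allow_uav_condition]
    do
    {
        expected = oldBits;
        // Add in float space, then reinterpret back to bits for the CAS.
        newBits = asuint(asfloat(expected) + value);
        // Try to swap; oldBits receives whatever was actually in memory.
        _Accum.InterlockedCompareExchange(addr, expected, newBits, oldBits);
    } while (oldBits != expected);
}
```

It works, but the loop can retry many times under heavy contention, which is why I’m hoping for something better.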

I also saw that NVIDIA supports a float InterlockedAdd through its HLSL extensions, like this:

But it seems to be written for native DirectX, and I don’t understand much of it.

So does anyone know how to do this in a compute shader in Unity?

Again, thanks for any help.

@SiqiLi hello and welcome,

I’m not a compute shader expert, but I’ve dabbled with them a bit and ran into the InterlockedAdd issue too. From what I read in the Microsoft HLSL documentation, it does indeed state that only int and uint are supported (InterlockedAdd function (HLSL reference) - Win32 apps | Microsoft Learn).

You could convert your floats to ints (or uints) by multiplying them by a large scale factor so they still fit in the required range (int: -2,147,483,648 to 2,147,483,647; uint: 0 to 4,294,967,295). This fixed-point approach might of course introduce some inaccuracy into your simulation, but it’s one workaround, something like the sketch below. It depends a lot on what you’re doing and whether small deviations in the values matter or not.
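
Here is a minimal sketch of that fixed-point idea. The buffer name, function names, and scale factor are just illustrative assumptions; pick the scale to match the range and precision your simulation needs:

```
// Sketch of atomic float add via fixed-point conversion.
// SCALE trades range for precision (here ~16 fractional bits).
#define SCALE 65536.0

RWStructuredBuffer<int> _AccumFixed;   // accumulated values in fixed point

void AddValue(uint index, float value)
{
    // Convert to fixed point and add atomically; round to reduce bias.
    InterlockedAdd(_AccumFixed[index], (int)round(value * SCALE));
}

float ReadValue(uint index)
{
    // Convert back to float when reading the result.
    return _AccumFixed[index] / SCALE;
}
```

Just keep an eye on overflow: the sum of all contributions times SCALE has to stay inside the int range.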

As for using floats directly, I’m not sure whether there’s a way. Someone more knowledgeable will have to answer that.

Thank you so much for your quick reply!
That’s a good idea! I will try it.

However, I’m still wondering whether there’s any way to do this through NVIDIA’s API, which would also bring many other useful functions.