I ALMOST got this working. A few questions/issues:
1) How can I access the texture's width in a compute shader, so that I can index the buffer by y*width + x? Do I need to pass it in as a uniform myself, or is it provided?
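(For what it's worth, both options exist: you can pass the width from C# with ComputeShader.SetInt, or query it inside the kernel, since RWTexture2D provides a GetDimensions method. A sketch, with names matching the shader below:)

```hlsl
RWTexture2D<float> Result;

[numthreads(32,32,1)]
void DepthFrameCompute (uint3 id : SV_DispatchThreadID)
{
    uint width, height;
    Result.GetDimensions(width, height);  // built-in; no extra uniform needed
    // ...then index a buffer with id.y * width + id.x...
}
```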
2) If I pass in a ushort[] array, like this:
_Data = new ushort[LengthInPixels];
frame.CopyFrameDataToArray(_Data);
computeBuffer.SetData(_Data);
then my compute shader has to write the value via:
Result[id.xy] = depthBuffer[id.y * 256 + id.x/2.0];
This is because I'm writing to an RFloat texture from ushort buffer data, and each ushort is exactly half the size of a 32-bit buffer element (16 vs 32 bits), so each index location has to be at half the coordinates.
I have a feeling this is the correct way to index the location, but I think the value needs to be byte-swapped, because the numbers in the texture seem… off.
3) This version gives correct values (my numbers for each pixel are right): if I cast from ushort[] to float[] in C# and pass the float[] array to the compute buffer, I can simply use
Result[id.xy] = depthBuffer[id.y * 512 + id.x];
but I don't want to convert the ushort[] array; avoiding that is the whole point of using compute shaders. Even though 512x424 casts per frame from ushort to float don't appear to slow anything down, it's the principle of the matter.
(full shader here)
// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel DepthFrameCompute

// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float> Result;
RWStructuredBuffer<float> depthBuffer : register(u0);

[numthreads(32,32,1)]
void DepthFrameCompute (uint3 id : SV_DispatchThreadID)
{
    Result[id.xy] = depthBuffer[id.y * 256 + id.x * 0.5];
}
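(A possible way to keep the ushort[] upload and still get correct values: declare the buffer as uint and unpack the two 16-bit samples per element with bit operations instead of halving the index. An untested sketch, assuming a 512-pixel-wide frame:)

```hlsl
#pragma kernel DepthFrameCompute

RWTexture2D<float> Result;
StructuredBuffer<uint> depthBuffer;  // each 32-bit element holds two packed ushorts

[numthreads(32,32,1)]
void DepthFrameCompute (uint3 id : SV_DispatchThreadID)
{
    uint index = id.y * 512 + id.x;             // linear pixel index
    uint word  = depthBuffer[index >> 1];       // element holding this pixel
    uint depth = (index & 1) ? (word >> 16)     // odd pixels: high 16 bits
                             : (word & 0xFFFF); // even pixels: low 16 bits
    Result[id.xy] = (float)depth;
}
```

On the C# side this assumes the ComputeBuffer is created with half as many elements and a 4-byte stride, e.g. new ComputeBuffer(LengthInPixels / 2, sizeof(uint)), with SetData still fed the raw ushort[] array.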
Here's a picture of the 'almost': the plane on the left is the raw depth image; the skeleton and image on the right are other data that's coming in fine.