Kinect v2, writing a ushort[] to a ComputeBuffer to a RWStructuredBuffer

Hey there- I'm trying to write some code to support the Kinect v2 (it's in alpha, so I'm required to say "This is preliminary software and/or hardware and APIs are preliminary and subject to change").

The Kinect depth camera gives me a ushort[] array. I understand that ComputeBuffers can accept any sort of data, so presumably I can pass that array straight to a ComputeBuffer if my stride, count, and RWStructuredBuffer type are correct.

From the ComputeBuffer docs:

ComputeBuffer(count: int, stride: int, type: ComputeBufferType)

**count** Number of elements in the buffer.
**stride** Size of one element in the buffer. Has to match size of buffer type in the shader.
**type** Type of the buffer, default is ComputeBufferType.Default.

So, should it be new ComputeBuffer(ushortArrayLength, sizeof(float), ComputeBufferType.Raw), and then in the shader declare my RWStructuredBuffer as a float type?
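
For concreteness, here's roughly the setup I have in mind (just a sketch; the stride and buffer type are exactly the parts I'm unsure about, and depthWidth/depthHeight/kernelIndex are placeholder names):

    // Sketch only: one buffer element per depth pixel, stride still an open question.
    ushort[] depthData = new ushort[depthWidth * depthHeight];
    // ... fill depthData from the Kinect depth frame ...
    var depthBuffer = new ComputeBuffer(depthData.Length, sizeof(float), ComputeBufferType.Default);
    depthBuffer.SetData(depthData);
    computeShader.SetBuffer(kernelIndex, "depthBuffer", depthBuffer);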

Thanks much,
Brian Chasalow

I ALMOST got this working. A few questions/issues:

1) How can I access the texture's width in a compute shader, so that I can index the buffer by y*width + x? Do I need to pass in the uniform myself, or is this provided?

2) If I pass in a ushort[] array, like this:

    _Data = new ushort[LengthInPixels];
    frame.CopyFrameDataToArray(_Data);
    computeBuffer.SetData(_Data);

then my compute shader has to write each output value via:

    Result[id.xy] = depthBuffer[id.y * 256 + id.x/2.0];

This is because I'm writing to an RFloat texture from ushort buffer data, and each 16-bit ushort is exactly half the size of a 32-bit buffer element (16 vs 32 bits), so each index has to be at half the coordinates.
I have a feeling this is the right way to index the location, but the value needs to be byte-swapped or unpacked somehow, I think, because the numbers in the texture seem… off. (I've sketched what I suspect is needed right after the full shader below.)

3) This version gives correct values; the numbers for each pixel are right. If I cast from ushort[] to float[] in C# and pass the float[] array to the ComputeBuffer, I can simply use

 Result[id.xy] = depthBuffer[id.y * 512 + id.x];

but I don't want to convert the ushort[] array; that's the whole point of using compute shaders. Even though 512x424 operations per frame, where I simply cast each pixel from ushort to float, doesn't appear to slow anything down… it's the principle of the matter.
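
Concretely, the CPU-side conversion I'd like to get rid of looks roughly like this (a sketch of what I'm doing now, using the same names as above):

    // One ushort-to-float cast per pixel, every frame; this is the part that feels wasteful.
    float[] floatData = new float[LengthInPixels];
    for (int i = 0; i < LengthInPixels; i++)
        floatData[i] = _Data[i];
    computeBuffer.SetData(floatData);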

(full shader here)

// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel DepthFrameCompute

// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float> Result;
RWStructuredBuffer<float> depthBuffer : register(u0);

[numthreads(32,32,1)]
void DepthFrameCompute (uint3 id : SV_DispatchThreadID)
{
    Result[id.xy] = depthBuffer[id.y * 256 + id.x*0.5];
}
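
What I suspect I actually need in the kernel is something along these lines: an untested sketch that declares the buffer as uint and unpacks the two 16-bit depth values from each 32-bit element (assuming they land low-half-first, i.e. little-endian):

    #pragma kernel DepthFrameCompute

    RWTexture2D<float> Result;
    RWStructuredBuffer<uint> depthBuffer;   // two ushort depth samples packed per uint

    [numthreads(32,32,1)]
    void DepthFrameCompute (uint3 id : SV_DispatchThreadID)
    {
        uint pixel  = id.y * 512 + id.x;               // linear pixel index (512-wide depth frame)
        uint packed = depthBuffer[pixel >> 1];         // each uint holds two 16-bit depth values
        uint depth  = (pixel & 1) ? (packed >> 16)     // odd pixel: high half
                                  : (packed & 0xFFFF); // even pixel: low half
        Result[id.xy] = (float)depth;
    }

On the C# side that would presumably mean a uint-sized stride and half the element count, but I haven't verified that part.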

Here's the picture of the 'almost': the plane on the left is the raw depth image; the skeleton and image on the right are other data that's coming in fine :wink:

You can use GetDimensions on texture objects.
For example, I'm using:

float simulationWidth, simulationHeight;
FlowMapIn.GetDimensions(simulationWidth, simulationHeight);
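
In your case that would presumably look something like this (untested):

    // Query the size of the RWTexture2D you're writing into.
    uint width, height;
    Result.GetDimensions(width, height);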

Your other two points seem more like statements than questions, although I might just not be fully awake right now.

Thanks for the info. As for #2 and #3, the question was more like: if I copy ushort[] data into a float-typed compute buffer, how would I access that data properly, via bit shifting or some byte-swappy, union-style stuff in the compute shader?

I guess you could try using min16uint for the type used by StructuredBuffer, since you’re on Windows 8.

Alternatively, I think the easiest thing to do would be to keep the array as-is, but set the stride as if it’s made of regular uints (assuming the stride value is just used for writing the data to the GPU, not for reading the array), so you could just access the data as normal in your compute shader. But I think someone with more low-level memory experience might have a better solution for you.
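
Something like this is what I have in mind on the C# side (a rough sketch; I haven't tried SetData with a ushort[] into a uint-stride buffer, so treat it as a guess):

    // Two 16-bit depth values end up packed into each 4-byte element.
    var depthBuffer = new ComputeBuffer(LengthInPixels / 2, sizeof(uint));
    depthBuffer.SetData(_Data);   // _Data is still the ushort[] straight from the Kinect SDK

with the shader-side buffer declared as uint and the two halves pulled apart with shifts/masks, as you suspected.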

The stride has to match the RWStructuredBuffer element size or I get a SUCCEEDED(hr) error in the editor. I tried min16uint, but I still get the SUCCEEDED(hr) error if I use sizeof(ushort) as the stride; it requires a float-sized (4-byte) stride for some reason. Dunno if that's a bug… it would appear that any of the minimum-precision (min16*) types still map to a 32-bit stride in the shader.

Full code repo here:
https://bitbucket.org/brianchasalow/fast_kinect_v2_unity_public