
Kinect v2, writing a ushort[] to a ComputeBuffer to a RWStructuredBuffer

Discussion in 'Shaders' started by brianchasalow, May 18, 2014.

  1. brianchasalow

    Joined:
    Jun 3, 2010
    Posts:
    204
    Hey there - I am trying to write some code to support the Kinect v2 (it's in alpha, so I'm required to say “This is preliminary software and/or hardware and APIs are preliminary and subject to change”).

    The Kinect depth camera gives me a ushort[] array. I understand that ComputeBuffers can accept any sort of data, so presumably I can pass that array straight to a ComputeBuffer if my stride, count, and RWStructuredBuffer type are correct.

    From the ComputeBuffer docs:

    ComputeBuffer(count: int, stride: int, type: ComputeBufferType)

    count: Number of elements in the buffer.
    stride: Size of one element in the buffer. Has to match size of buffer type in the shader.
    type: Type of the buffer, default is ComputeBufferType.Default.

    So, should it be
    Code (csharp):
    new ComputeBuffer(ushortArrayLength, sizeof(float), ComputeBufferType.Raw)
    and then in the shader, declare my RWStructuredBuffer as a float type?
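    For reference, here's roughly the C# setup I have in mind. This is just a sketch, not the real code: depthComputeShader, depthBuffer and the other names are placeholders, and 512x424 is the Kinect v2 depth resolution.
    Code (csharp):
    // Kinect v2 depth frame: 512x424, one ushort per pixel.
    int width = 512;
    int height = 424;
    int lengthInPixels = width * height;

    // One element per depth pixel. The open question is the stride, since the
    // source data is ushort (2 bytes) but the shader would read floats (4 bytes).
    ComputeBuffer depthBuffer = new ComputeBuffer(lengthInPixels, sizeof(float));

    // Output texture the compute shader writes into.
    RenderTexture depthTexture = new RenderTexture(width, height, 0, RenderTextureFormat.RFloat);
    depthTexture.enableRandomWrite = true;
    depthTexture.Create();

    int kernel = depthComputeShader.FindKernel("DepthFrameCompute");
    depthComputeShader.SetBuffer(kernel, "depthBuffer", depthBuffer);
    depthComputeShader.SetTexture(kernel, "Result", depthTexture);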

    Thanks much,
    Brian Chasalow
     
  2. brianchasalow

    Joined:
    Jun 3, 2010
    Posts:
    204
    I ALMOST got this working. A few questions/issues.

    1) How can I access the texture's width in a compute shader, so that I can index the buffer by y*width + x? Do I need to pass in the uniform myself, or is this provided?


    2) If I pass in a ushort[] array, like this:
    Code (csharp):
    _Data = new ushort[LengthInPixels];
    frame.CopyFrameDataToArray(_Data);
    computeBuffer.SetData(_Data);
    then my compute shader has to return a value via:
    Code (csharp):
    Result[id.xy] = depthBuffer[id.y * 256 + id.x/2.0];
    This is because I'm writing to an RFloat texture from ushort buffer data: the buffer elements are 32 bits but each depth value is only 16, so each index into the buffer has to be at half the texture coordinates (hence 256 instead of 512, and x/2). I have a feeling this is the correct way to index the location, but I think the value also needs to be byte-swapped or unpacked, because the numbers in the texture seem... off.

    3) This version gives correct values: the numbers for each pixel are right. If I cast from ushort[] to float[] in C# and pass the float[] array to the ComputeBuffer, I can simply use
    Code (csharp):
    Result[id.xy] = depthBuffer[id.y * 512 + id.x];
    But I don't want to convert the ushort[] array on the CPU; avoiding that is the whole point of using compute shaders. Even though doing 512x424 ushort-to-float casts per frame doesn't appear to slow anything down, it's the principle of the matter.
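    (For clarity, the CPU-side conversion I'm talking about is roughly the following. It works, I just don't like doing it; _FloatData and floatBuffer are placeholder names.)
    Code (csharp):
    // Working-but-unwanted fallback: widen the depth frame to float on the CPU
    // before uploading, so the shader can read a plain float buffer.
    _Data = new ushort[LengthInPixels];
    frame.CopyFrameDataToArray(_Data);

    float[] _FloatData = new float[LengthInPixels];
    for (int i = 0; i < LengthInPixels; i++)
    {
        _FloatData[i] = _Data[i]; // implicit ushort -> float conversion per pixel
    }
    // floatBuffer was created as new ComputeBuffer(LengthInPixels, sizeof(float))
    floatBuffer.SetData(_FloatData);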






    (full shader here)
    Code (csharp):
    // Each #kernel tells which function to compile; you can have many kernels
    #pragma kernel DepthFrameCompute

    // Create a RenderTexture with enableRandomWrite flag and set it
    // with cs.SetTexture
    RWTexture2D<float> Result;
    RWStructuredBuffer<float> depthBuffer : register(u[0]);

    [numthreads(32,32,1)]
    void DepthFrameCompute (uint3 id : SV_DispatchThreadID)
    {
        Result[id.xy] = depthBuffer[id.y * 256 + id.x*0.5];
    }
    Here's a picture of the 'almost': the plane on the left is the raw depth image; the skeleton and image on the right are other data that's coming in fine ;-) (attached: fastDepthRendering.png)
     
    Last edited: May 18, 2014
  3. RC-1290

    Joined:
    Jul 2, 2012
    Posts:
    639
    You can use GetDimensions on TextureObjects.
    For example, I'm using
    Code (csharp):
    float simulationWidth, simulationHeight;
    FlowMapIn.GetDimensions(simulationWidth, simulationHeight);
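    (And if you'd rather pass the width in yourself from script, I'd expect something like this to work too; depthComputeShader and DepthWidth are just example names, 512 being the depth width you mentioned.)
    Code (csharp):
    // Set the width as a shader uniform from the C# side...
    depthComputeShader.SetInt("DepthWidth", 512);
    // ...and declare a matching "int DepthWidth;" in the compute shader.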
    Your other two points seem more like statements than questions, although I might just not be fully awake right now.
     
  4. brianchasalow

    Joined:
    Jun 3, 2010
    Posts:
    204
    Thanks for the info. As for #2 and #3, the question was more like: if I copy ushort[] data into a float compute buffer, how would I access that data properly, via bit shifting or some byte-swappy, union-style stuff in the compute shader?
     
    Last edited: May 18, 2014
  5. RC-1290

    Joined:
    Jul 2, 2012
    Posts:
    639
    I guess you could try using min16uint for the type used by StructuredBuffer, since you're on Windows 8.

    Alternatively, I think the easiest thing to do would be to keep the array as-is, but set the stride as if it's made of regular uints (assuming the stride value is just used for writing the data to the GPU, not for reading the array), so you could just access the data as normal in your compute shader. But I think someone with more low-level memory experience might have a better solution for you.
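    Something like this is what I have in mind (completely untested, and it assumes SetData only cares about the total byte size rather than the element type):
    Code (csharp):
    // Upload the raw ushort[] into a uint-strided buffer: two 16-bit depth
    // values end up packed into each 32-bit element.
    ComputeBuffer packedBuffer = new ComputeBuffer(LengthInPixels / 2, sizeof(uint));
    packedBuffer.SetData(_Data); // _Data is still the ushort[] from the Kinect frame

    // In the compute shader (RWStructuredBuffer<uint>), you'd read element (i / 2)
    // and unpack: low pixel = value & 0xFFFF, high pixel = value >> 16.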
     
  6. brianchasalow

    Joined:
    Jun 3, 2010
    Posts:
    204
    The stride has to match the size of the RWStructuredBuffer<type> or I get a SUCCEEDED(hr) error in the editor. I tried using min16uint, but I still get SUCCEEDED(hr) if I try to use sizeof(ushort) as the stride; it requires a float-sized stride for some reason. Dunno if that's a bug... it would appear that any of the min_x types still must map to a float-sized stride in the shader.
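    Concretely, what I'm seeing boils down to roughly this (just restating the observation, variable names are placeholders):
    Code (csharp):
    // Shader declares: RWStructuredBuffer<min16uint> depthBuffer;
    ComputeBuffer bad  = new ComputeBuffer(LengthInPixels, sizeof(ushort)); // 2-byte stride -> SUCCEEDED(hr) error
    ComputeBuffer good = new ComputeBuffer(LengthInPixels, sizeof(float));  // 4-byte stride -> no error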

    Full code repo here:
    https://bitbucket.org/brianchasalow/fast_kinect_v2_unity_public
     
    Last edited: May 18, 2014