
Shader Transparency over KinectMesh

Discussion in 'Shaders' started by Bduinat, Dec 5, 2018.

  1. Bduinat

    Bduinat

    Joined:
    Nov 26, 2018
    Posts:
    6
    Hi,

    I've been working with a Kinect v2, from which I get a point-cloud style object that is included in my virtual scene. It works like a charm and I can play with occlusions depending on the position of the character (behind or in front of a tree).
    My problem is with transparency: I want my Kinect point cloud to be inside an ice cube, but I can't figure out why the transparency of my ice cube works for everything except my Kinect point cloud.


    Occlusion works correctly


    Behind the glass shader, the background is visible but my Kinect mesh is not.

    I'm using Kinect scripts like KinectMesh and KinectBody, with Unilit_KinectMeshVisalizer and KinectTextureCS.

    The glass shader is https://assetstore.unity.com/packages/vfx/shaders/mk-glass-100711
    But since the background is displayed correctly, my guess is the problem comes from the KinectMesh not being able to be 'Transparent'.
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    5,319
    I've never used Kinect with Unity, so I have no idea exactly how the Kinect mesh gets rendered, but my guess is that the glass material's queue is lower than or the same as the Kinect mesh material's queue.
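One quick way to compare the two queues is to log `Material.renderQueue` at runtime. A minimal sketch (the material field names are placeholders, assign your own assets):

```csharp
using UnityEngine;

// Minimal sketch: compare the render queues of the two materials.
// glassMaterial and kinectMaterial are placeholder names for your own assets.
public class QueueChecker : MonoBehaviour
{
    public Material glassMaterial;
    public Material kinectMaterial;

    void Start()
    {
        // Built-in queue values: Geometry (opaque) = 2000, Transparent = 3000.
        Debug.Log("glass queue: " + glassMaterial.renderQueue +
                  ", kinect queue: " + kinectMaterial.renderQueue);
    }
}
```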
     
  3. Bduinat

    Bduinat

    Joined:
    Nov 26, 2018
    Posts:
    6
    That was my first guess, but the glass material's queue is 3000 (transparent) and the Kinect mesh's is 2000 (opaque).
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    5,319
    Depending on how the Kinect mesh is rendered, the queue on the material may be irrelevant. If the KinectMesh is a Mesh Renderer and Mesh Filter on a game object in the scene hierarchy, then it should "just work". However, if either of those two doesn't exist on the object that's rendering the KinectMesh, then it likely won't.

    Try using the Frame Debugger. Step through the rendering and confirm the KinectMesh is rendered during the opaque rendering, or at least prior to the ice. More explicitly, make sure the KinectMesh is visible when you select this event:
    upload_2018-12-6_8-53-10.png
     
  5. Bduinat

    Bduinat

    Joined:
    Nov 26, 2018
    Posts:
    6
    Thanks for the advice. Looking at the Frame Debugger, I got this:

    Looks like the Kinect mesh is a DrawProcedural call that comes after the render queues.
    Any idea how to move it earlier?
     
  6. Bduinat

    Bduinat

    Joined:
    Nov 26, 2018
    Posts:
    6
    The shader used by the Kinect is this one:

    Code (CSharp):
    Shader "Unlit/KinectMeshVisalizer"
    {
        Properties{
            _EdgeThreshold ("edge max length", Range(0.01, 1.0)) = 0.2
            _WireWidth ("wireframe width", Range(0.0,2.0)) = 1.0
        }
        CGINCLUDE
        #include "UnityCG.cginc"
        #define Size_X 512
        #define Size_Y 424

        struct v2f
        {
            half2 uv : TEXCOORD0;
            half3 bary : TEXCOORD1;
            half3 wPos : TEXCOORD2;
            half3 normal : TEXCOORD3;
            uint bodyIdx : TEXCOORD4;
            uint idx : TEXCOORD5;
            float4 pos : SV_POSITION;
        };
        struct VertexData
        {
            float3 pos;
            float2 uv;
        };
        StructuredBuffer<VertexData> _VertexData;
        sampler2D _ColorTex;
        sampler2D _BodyIdxTex;

        float _EdgeThreshold;
        float _WireWidth;

        v2f getVertexOut(uint idx) {
            VertexData vData = _VertexData[idx];
            v2f o = (v2f)0;
            o.pos = UnityObjectToClipPos(vData.pos);
            o.wPos = vData.pos;
            o.uv = vData.uv;
            o.idx = idx;
            return o;
        }

        v2f vert (uint idx : SV_VertexID)
        {
            return getVertexOut(idx);
        }

        float edgeLength(float3 v0, float3 v1, float3 v2) {
            float l = distance(v0, v1);
            l = max(l, distance(v1, v2));
            l = max(l, distance(v2, v0));
            return l;
        }

        [maxvertexcount(6)]
        void geom(point v2f input[1], inout TriangleStream<v2f> triStream)
        {
            v2f p0 = input[0];
            uint idx = p0.idx;

            v2f p1 = getVertexOut(idx + 1);
            v2f p2 = getVertexOut(idx + Size_X);
            v2f p3 = getVertexOut(idx + Size_X+1);

            if (edgeLength(p0.pos.xyz, p1.pos.xyz, p2.pos.xyz) < _EdgeThreshold) {
                p0.normal = p1.normal = p2.normal = cross(normalize(p2.wPos - p0.wPos), normalize(p1.wPos - p0.wPos));
                p0.bary = half3(1, 0, 0);
                triStream.Append(p0);
                p1.bary = half3(0, 1, 0);
                triStream.Append(p1);
                p2.bary = half3(0, 0, 1);
                triStream.Append(p2);
                triStream.RestartStrip();
            }

            if (edgeLength(p1.pos.xyz, p3.pos.xyz, p2.pos.xyz) < _EdgeThreshold) {
                p1.normal = p3.normal = p2.normal = cross(normalize(p2.wPos - p1.wPos), normalize(p3.wPos - p1.wPos));
                p1.bary = half3(1, 0, 0);
                triStream.Append(p1);
                p3.bary = half3(0, 1, 0);
                triStream.Append(p3);
                p2.bary = half3(0, 0, 1);
                triStream.Append(p2);
                triStream.RestartStrip();
            }
        }

        fixed4 frag (v2f i) : SV_Target
        {
            half3 d = fwidth(i.bary);
            half3 a3 = smoothstep(half3(0, 0, 0), d*_WireWidth, i.bary);
            half w = 1.0 - min(min(a3.x, a3.y), a3.z);

            half l = dot(i.normal, float3(0.5, 1.0, 0.0));
            l = l * 0.5 + 0.5;

            float2 depthUV = float2(i.idx % 512 / 512.0, i.idx / 512 / 424.0);
            fixed bodyIdx = tex2D(_BodyIdxTex, depthUV).r;

            //if (bodyIdx == 1) discard;

            fixed4 col = tex2D(_ColorTex, i.uv);
            return col;
        }
        ENDCG
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            LOD 100

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma geometry geom
                #pragma fragment frag

                ENDCG
            }
        }
    }
     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    5,319
    This isn't an issue of shaders anymore; it's purely about when the object gets rendered, and for DrawProcedural that's in a script someplace.

    If the KinectMesh script is in C#, I would guess there's a command buffer being added at CameraEvent.AfterEverything or CameraEvent.AfterForwardAlpha or something like that. Look for the AddCommandBuffer function call; you'll want to change that to use CameraEvent.AfterForwardOpaque. That assumes that's how it's being rendered. If you don't find an AddCommandBuffer, it might be using OnPostRender or OnRenderImage; those are harder to change, as they'll require more significant rewriting of the code.

    If this is a native plugin ... you're boned. There's no way to solve this I can think of.
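For illustration, a hypothetical sketch of the kind of call to look for; the class and event names here are guesses, not the plugin's actual code:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical sketch: what a plugin's command buffer registration might look
// like, and the one-line change that moves the draw before the transparents.
public class CommandBufferEventSketch : MonoBehaviour
{
    public Camera cam;
    CommandBuffer buffer;

    void Start()
    {
        buffer = new CommandBuffer { name = "KinectMesh" };
        // If the plugin does this, the mesh draws after everything else:
        //   cam.AddCommandBuffer(CameraEvent.AfterEverything, buffer);
        // Change the event so it draws right after the opaques instead:
        cam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
    }
}
```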
     
  8. Bduinat

    Bduinat

    Joined:
    Nov 26, 2018
    Posts:
    6
    There is an OnRenderObject, looks like I'm screwed :(

    Code (CSharp):
    using System.Collections.Generic;
    using System.Linq;
    using System.Runtime.InteropServices;
    using UnityEngine;
    using Windows.Kinect;

    public class KinectMesh : MonoBehaviour
    {
        struct VertexData
        {
            public Vector3 pos;
            public Vector2 uv;
        }

        KinectSensor kinect;
        MultiSourceFrameReader reader;

        byte[] colorData;
        byte[] bodyIndexData;
        ushort[] depthData;
        CameraSpacePoint[] cameraSpacePoints;
        ColorSpacePoint[] colorSpacePoints;
        Body[] bodyData;
        [SerializeField] Windows.Kinect.Vector4 floorClipPlane;

        int depthDataLength;

        [Header("Compute Shader")]
        public ComputeShader pointCloudCS;
        [SerializeField] Texture2D colorTex;
        [SerializeField] Texture2D bodyIndexTex;
        ComputeBuffer colorSpacePointBuffer;
        ComputeBuffer cameraSpacePointBuffer;
        ComputeBuffer vertexBuffer;

        public Material meshVisalizer;
        public Camera cameraToRender;
        public KinectBody[] kinectBodies;
        public bool useKinectRot = true;

        void Start()
        {
            kinect = KinectSensor.GetDefault();

            if (kinect != null)
            {
                reader = kinect.OpenMultiSourceFrameReader(FrameSourceTypes.Depth | FrameSourceTypes.Color | FrameSourceTypes.Body | FrameSourceTypes.BodyIndex);

                var colorDesc = kinect.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Rgba);

                var colorPixels = (int)colorDesc.LengthInPixels;
                var colorBytePerPixel = (int)colorDesc.BytesPerPixel;
                colorData = new byte[colorPixels * colorBytePerPixel];
                colorTex = new Texture2D(colorDesc.Width, colorDesc.Height, TextureFormat.RGBA32, false);

                var depthDesc = kinect.DepthFrameSource.FrameDescription;
                depthDataLength = (int)depthDesc.LengthInPixels;

                depthData = new ushort[depthDataLength];
                colorSpacePoints = new ColorSpacePoint[depthDataLength];
                cameraSpacePoints = new CameraSpacePoint[depthDataLength];

                colorSpacePointBuffer = new ComputeBuffer(depthDataLength, Marshal.SizeOf(typeof(ColorSpacePoint)));
                cameraSpacePointBuffer = new ComputeBuffer(depthDataLength, Marshal.SizeOf(typeof(CameraSpacePoint)));
                vertexBuffer = new ComputeBuffer(depthDataLength, Marshal.SizeOf(typeof(VertexData)));

                var bodyIndexDesc = kinect.BodyIndexFrameSource.FrameDescription;
                bodyIndexData = new byte[bodyIndexDesc.LengthInPixels * bodyIndexDesc.BytesPerPixel];
                bodyIndexTex = new Texture2D(bodyIndexDesc.Width, bodyIndexDesc.Height, TextureFormat.R8, false);
                bodyData = new Body[kinect.BodyFrameSource.BodyCount];
                if (kinectBodies == null || kinectBodies.Length != kinect.BodyFrameSource.BodyCount)
                {
                    kinectBodies = bodyData.Select((b, idx) =>
                    {
                        var kinectBody = new GameObject(string.Format("body.{0}", idx.ToString("00"))).AddComponent<KinectBody>();
                        kinectBody.transform.SetParent(transform);
                        return kinectBody;
                    }).ToArray();
                }

                if (!kinect.IsOpen)
                    kinect.Open();

                pointCloudCS.SetInt("_CWidth", colorDesc.Width);
                pointCloudCS.SetInt("_CHeight", colorDesc.Height);
                pointCloudCS.SetInt("_DWidth", depthDesc.Width);
                pointCloudCS.SetInt("_DHeight", depthDesc.Height);
                pointCloudCS.SetVector("_ResetRot", new UnityEngine.Vector4(0, 0, 0, 1));
            }
        }

        private void OnApplicationQuit()
        {
            new[] { colorSpacePointBuffer, cameraSpacePointBuffer, vertexBuffer }
            .ToList().ForEach(b => b.Release());
            reader.Dispose();
            if (kinect != null)
                if (kinect.IsOpen)
                    kinect.Close();
        }

        void Update()
        {
            if (reader != null)
            {
                var frame = reader.AcquireLatestFrame();
                if (frame != null)
                {
                    var colorFrame = frame.ColorFrameReference.AcquireFrame();
                    var depthFrame = frame.DepthFrameReference.AcquireFrame();
                    var bodyFrame = frame.BodyFrameReference.AcquireFrame();
                    var bodyIndexFrame = frame.BodyIndexFrameReference.AcquireFrame();

                    if (colorFrame != null)
                    {
                        colorFrame.CopyConvertedFrameDataToArray(colorData, ColorImageFormat.Rgba);
                        colorFrame.Dispose();

                        colorTex.LoadRawTextureData(colorData);
                        colorTex.Apply();
                    }
                    if (depthFrame != null)
                    {
                        depthFrame.CopyFrameDataToArray(depthData);
                        depthFrame.Dispose();

                        kinect.CoordinateMapper.MapDepthFrameToColorSpace(depthData, colorSpacePoints);
                        kinect.CoordinateMapper.MapDepthFrameToCameraSpace(depthData, cameraSpacePoints);

                        colorSpacePointBuffer.SetData(colorSpacePoints);
                        cameraSpacePointBuffer.SetData(cameraSpacePoints);
                    }
                    if (bodyFrame != null)
                    {
                        var temp = floorClipPlane;
                        floorClipPlane = bodyFrame.FloorClipPlane;
                        if (floorClipPlane.W == 0)
                            floorClipPlane.W = temp.W;
                        var kinectRot = Quaternion.FromToRotation(new Vector3(floorClipPlane.X, floorClipPlane.Y, floorClipPlane.Z), Vector3.up);
                        var kinectHeight = floorClipPlane.W;
                        if (!useKinectRot)
                        {
                            kinectRot = Quaternion.identity;
                            kinectHeight = 0f;
                        }
                        pointCloudCS.SetVector("_ResetRot", new UnityEngine.Vector4(kinectRot.x, kinectRot.y, kinectRot.z, kinectRot.w));
                        pointCloudCS.SetFloat("_KinectHeight", kinectHeight);

                        bodyFrame.GetAndRefreshBodyData(bodyData);
                        for (var i = 0; i < bodyData.Length; i++)
                            kinectBodies[i].SetBodyData(bodyData[i], kinectRot, kinectHeight);

                        bodyFrame.Dispose();

                        if (cameraToRender != null)
                        {
                            cameraToRender.transform.position = Vector3.up * kinectHeight;
                            cameraToRender.transform.rotation = kinectRot;
                        }
                    }
                    if (bodyIndexFrame != null)
                    {
                        bodyIndexFrame.CopyFrameDataToArray(bodyIndexData);
                        bodyIndexFrame.Dispose();

                        bodyIndexTex.LoadRawTextureData(bodyIndexData);
                        bodyIndexTex.Apply();
                    }
                }

                var kernel = pointCloudCS.FindKernel("buildVertex");
                pointCloudCS.SetBuffer(kernel, "_ColorSpacePointData", colorSpacePointBuffer);
                pointCloudCS.SetBuffer(kernel, "_CameraSpacePointData", cameraSpacePointBuffer);
                pointCloudCS.SetBuffer(kernel, "_VertexDataBuffer", vertexBuffer);
                pointCloudCS.Dispatch(kernel, depthDataLength / 8, 1, 1);
            }
        }

        private void OnRenderObject()
        {
            if (meshVisalizer == null)
                return;

            meshVisalizer.SetTexture("_ColorTex", colorTex);
            meshVisalizer.SetTexture("_BodyIdxTex", bodyIndexTex);
            meshVisalizer.SetBuffer("_VertexData", vertexBuffer);
            meshVisalizer.SetPass(0);
            Graphics.DrawProcedural(MeshTopology.Points, depthDataLength);
        }
    }
     
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    5,319
    Not screwed, it just requires more than changing a single line to fix. The short version is you need this to use a command buffer and not OnRenderObject, which explicitly renders after everything else has finished rendering.

    Step 1: Set the material properties in the Start function. There's really no reason not to do this: as is, it's setting the same values over and over again for every camera, every frame, which is pointless as they don't change. Optionally you can create and set a MaterialPropertyBlock during Start so you're not constantly modifying the material asset on disk. Remember to do this after the textures and buffers have been created within that if (kinect != null) condition.

    Step 2: Create a command buffer during Start that calls CommandBuffer.DrawProcedural() with the existing pre-setup material (and MaterialPropertyBlock if you use that instead).
    buffer.DrawProcedural(Matrix4x4.identity, meshVisalizer, 0, MeshTopology.Points, depthDataLength);

    Step 3: On start, register with Camera.onPreRender and Camera.onPostRender delegates. Use these functions to add and remove the command buffer to the current camera during the AfterForwardOpaque event.
    // onPreRender
    cam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);

    // onPostRender
    cam.RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);


    Step 4: Profit! ... er, delete the OnRenderObject function.
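Putting the first two steps together, a minimal sketch of what the additions to the class might look like; the field names mirror the KinectMesh class posted above, and the exact placement is an assumption:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hedged sketch of steps 1 and 2. Field names mirror the KinectMesh class
// above; this is not the plugin's actual code.
public class KinectMeshCommandBufferSketch : MonoBehaviour
{
    public Material meshVisalizer;
    Texture2D colorTex, bodyIndexTex;   // created in Start(), as in the original
    ComputeBuffer vertexBuffer;         // created in Start(), as in the original
    int depthDataLength;
    CommandBuffer drawBuffer;

    // Call this at the end of Start(), after the textures and buffers exist.
    void SetupDrawBuffer()
    {
        // Step 1: set the material properties once instead of every frame.
        meshVisalizer.SetTexture("_ColorTex", colorTex);
        meshVisalizer.SetTexture("_BodyIdxTex", bodyIndexTex);
        meshVisalizer.SetBuffer("_VertexData", vertexBuffer);

        // Step 2: a command buffer that draws the procedural point mesh.
        drawBuffer = new CommandBuffer { name = "KinectMesh" };
        drawBuffer.DrawProcedural(Matrix4x4.identity, meshVisalizer, 0,
            MeshTopology.Points, depthDataLength);
    }
}
```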
     
    Last edited: Dec 7, 2018
  10. Bduinat

    Bduinat

    Joined:
    Nov 26, 2018
    Posts:
    6
    Thanks a lot for the detailed solution.
    I'm not sure what the buffer variable should be.
    Do I need to create a new one, like
    Code (CSharp):
    CommandBuffer buffer = null;
    or should I use one of the buffers created in the Start function, like
    Code (CSharp):
    vertexBuffer
    Code (CSharp):
    1. if (meshVisalizer != null){
    2.            
    3.  
    4.             meshVisalizer.SetTexture("_ColorTex", colorTex);
    5.             meshVisalizer.SetTexture("_BodyIdxTex", bodyIndexTex);
    6.             meshVisalizer.SetBuffer("_VertexData", vertexBuffer);
    7.             meshVisalizer.SetPass(0);
    8.             }
    9.            
    10.             buffer.DrawProcedural(Matrix.identity, meshVisalizer, 0, MeshTopology.Points, depthDataLength);
    Then I create those two functions, right?
    Code (CSharp):
    1. public void onPreRender()
    2.     {
    3.         cameraToRender.AddCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
    4.     }
    5.    
    6.     public void onPostRender()
    7.     {
    8.         cameraToRender.RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
    9.     }
     
  11. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    5,319
    There are surprisingly few tutorials online showing how to use command buffers, especially given how long they've been available at this point. I'm not entirely sure why; perhaps because it's considered the domain of programmers who "know enough" to be able to muddle through them on their own ... but I digress.

    The command buffer example project in Unity's manual has some relatively straightforward code examples.
    https://docs.unity3d.com/Manual/GraphicsCommandBuffers.html

    But this site is the only one that I've found that actually shows some basic code without needing to download a project.
    http://colourmath.com/2018/tutorials/adventures-in-commandbuffers-an-epic-pt-1/

    Overall I'd say you're close, but write it all out and see if it works. If it doesn't and you can't figure out why, post the entire modified class's code here again.

    The SetPass() call is also not needed. SetPass() is a function that was required for the Graphics.DrawProcedural() function: SetPass() tells the GPU "for the next thing you render, use the first pass of this material", and then Graphics.DrawProcedural() uses whatever was last set. The command buffer version of the function takes a material and pass index, as it does this for you later.

    Look at the documentation for those delegates. You need to register your custom functions with the delegate, and the delegate then passes the camera currently rendering into your function as a parameter; you don't need to supply your own. The benefit here is it'll keep working in the editor and show in the scene view.
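For reference, a minimal sketch of what that registration might look like; the class name is made up, and drawBuffer is assumed to be the command buffer created earlier:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hedged sketch: add/remove the command buffer on whichever camera is
// currently rendering, including the editor's scene view camera.
public class CommandBufferRegistrar : MonoBehaviour
{
    CommandBuffer drawBuffer;   // assumed created elsewhere, e.g. in Start()

    void OnEnable()
    {
        Camera.onPreRender += AddBuffer;
        Camera.onPostRender += RemoveBuffer;
    }

    void OnDisable()
    {
        Camera.onPreRender -= AddBuffer;
        Camera.onPostRender -= RemoveBuffer;
    }

    // The delegate passes in the camera about to render; no field needed.
    void AddBuffer(Camera cam)
    {
        if (drawBuffer != null)
            cam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, drawBuffer);
    }

    void RemoveBuffer(Camera cam)
    {
        if (drawBuffer != null)
            cam.RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, drawBuffer);
    }
}
```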