
How can we wait for Camera.RenderWithShader to finish?

Discussion in 'General Graphics' started by Dreamback, Apr 8, 2019.

  1. Dreamback

    Dreamback

    Joined:
    Jul 29, 2016
    Posts:
    220
    I'm calling Camera.RenderWithShader - the shader is a simple one that writes solid colors to GBuffers 0 and 2. When that camera is done rendering, I need to call a ComputeShader that reads from those GBuffers. But if I call ComputeShader.Dispatch immediately after the RenderWithShader call, the GBuffers aren't filled yet. I've tried adding a GL.Flush before the Dispatch, but it makes no difference. If I put some wasteful, time-consuming code before the Dispatch, it works.
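
    For reference, here's a minimal sketch of the call order I'm describing (the field names, the "CSMain" kernel name, and the 8x8 thread-group size are just placeholders, not my exact code):

    Code (CSharp):
    using UnityEngine;

    // Minimal sketch: render with the replacement shader, then immediately dispatch the compute shader.
    // Field names and the "CSMain" kernel name are placeholders.
    public class GBufferReadback : MonoBehaviour
    {
        public Camera gbufferCamera;
        public Shader replacementShader;
        public ComputeShader computeShader;

        void Update()
        {
            // Render with the replacement shader that writes solid colors into GBuffers 0 and 2.
            gbufferCamera.RenderWithShader(replacementShader, "");

            // Attempted workaround: flush the GL command stream before the dispatch (makes no difference).
            GL.Flush();

            // Dispatch the compute shader that reads from the GBuffer globals.
            int kernel = computeShader.FindKernel("CSMain");
            computeShader.SetTextureFromGlobal(kernel, "_ColorTexture0", "_CameraGBufferTexture0");
            computeShader.SetTextureFromGlobal(kernel, "_ColorTexture2", "_CameraGBufferTexture2");

            // Assumes [numthreads(8,8,1)] in the compute kernel.
            computeShader.Dispatch(kernel, gbufferCamera.pixelWidth / 8, gbufferCamera.pixelHeight / 8, 1);
        }
    }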

    Any ideas? I can't use OnPostRender, because that isn't called with RenderWithShader.

    What's disturbing is that even if I set the Dispatch as a CommandBuffer for the camera on AfterGBuffer or AfterEverything, the GBuffers still haven't been filled by my shader.
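
    Roughly what I mean by that, as a sketch (again, the kernel name and thread-group counts are placeholders):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch of attaching the dispatch to the camera as a command buffer.
    public class GBufferDispatchCommandBuffer : MonoBehaviour
    {
        public ComputeShader computeShader;

        void Start()
        {
            var cam = GetComponent<Camera>();
            int kernel = computeShader.FindKernel("CSMain");
            var cb = new CommandBuffer { name = "GBuffer readback dispatch" };

            // Bind the built-in deferred GBuffers to the compute kernel.
            cb.SetComputeTextureParam(computeShader, kernel, "_ColorTexture0", BuiltinRenderTextureType.GBuffer0);
            cb.SetComputeTextureParam(computeShader, kernel, "_ColorTexture2", BuiltinRenderTextureType.GBuffer2);

            // Assumes [numthreads(8,8,1)] in the compute kernel.
            cb.DispatchCompute(computeShader, kernel, cam.pixelWidth / 8, cam.pixelHeight / 8, 1);

            // Tried both CameraEvent.AfterGBuffer and CameraEvent.AfterEverything here.
            cam.AddCommandBuffer(CameraEvent.AfterGBuffer, cb);
        }
    }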
     
  2. joelv

    joelv

    Unity Technologies

    Joined:
    Mar 20, 2015
    Posts:
    203
    Where do you call Camera.RenderWithShader and what is the active render target bound at that point? Are you using SRP?

    If you do not set up the camera target to be the G-Buffer, there is a big chance you are rendering with replacement shaders to the camera target texture. Is that the one you are reading from in the compute step?

    These calls should execute immediately, so you should be able to do the compute dispatch directly afterwards and see the result there.
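
    In other words, something along these lines (just a sketch, with made-up formats, kernel name, and thread-group size) is what I mean by setting the targets up explicitly and reading back those same textures in the compute step:

    Code (CSharp):
    using UnityEngine;

    // Sketch only: bind your own MRT color targets plus a depth buffer before the replacement render,
    // then give those same textures to the compute shader instead of relying on the built-in globals.
    public class ExplicitReplacementTargets : MonoBehaviour
    {
        public Camera cam;
        public Shader replacementShader;
        public ComputeShader computeShader;

        RenderTexture target0, target1, target2, depthTarget;

        void Start()
        {
            // target1 is a dummy so the shader's SV_TARGET0 / SV_TARGET2 indices line up with the bound slots.
            target0 = new RenderTexture(cam.pixelWidth, cam.pixelHeight, 0, RenderTextureFormat.ARGB32);
            target1 = new RenderTexture(cam.pixelWidth, cam.pixelHeight, 0, RenderTextureFormat.ARGB32);
            target2 = new RenderTexture(cam.pixelWidth, cam.pixelHeight, 0, RenderTextureFormat.ARGBHalf);
            depthTarget = new RenderTexture(cam.pixelWidth, cam.pixelHeight, 24, RenderTextureFormat.Depth);
            target0.Create(); target1.Create(); target2.Create(); depthTarget.Create();
        }

        void RenderAndDispatch()
        {
            // The replacement shader now writes into these textures instead of the camera target.
            cam.SetTargetBuffers(new[] { target0.colorBuffer, target1.colorBuffer, target2.colorBuffer },
                                 depthTarget.depthBuffer);
            cam.RenderWithShader(replacementShader, "");

            int kernel = computeShader.FindKernel("CSMain");
            computeShader.SetTexture(kernel, "_ColorTexture0", target0);
            computeShader.SetTexture(kernel, "_ColorTexture2", target2);
            computeShader.Dispatch(kernel, cam.pixelWidth / 8, cam.pixelHeight / 8, 1);
        }
    }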
     
  3. Dreamback

    Dreamback

    Joined:
    Jul 29, 2016
    Posts:
    220
    I call Camera.RenderWithShader in a loop in Update (using the standard Deferred renderer): basically I render the camera, then Dispatch the ComputeShader, then rotate the camera and repeat a few times. On every other iteration through the loop, the ComputeShader reads the correct values from the GBuffers.
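
    As a sketch of that loop (the rotation step and iteration count are just placeholders):

    Code (CSharp):
    // Sketch of the Update loop: render, dispatch, rotate, repeat.
    // gbufferCamera_ and replacementShader_ are placeholder fields; computeShader_ and kernelHandle_ are the same as below.
    void Update()
    {
        for (int i = 0; i < 4; i++)
        {
            gbufferCamera_.RenderWithShader(replacementShader_, "");
            computeShader_.Dispatch(kernelHandle_, gbufferCamera_.pixelWidth / 8, gbufferCamera_.pixelHeight / 8, 1);

            // Rotate for the next view; only every other iteration reads back the correct GBuffer values.
            gbufferCamera_.transform.Rotate(0f, 90f, 0f);
        }
    }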

    As for the render target, I didn't think I needed to set the target to be the G-Buffer, since the shader is writing directly to the existing GBuffers. I'm not doing anything with the camera target at all. This is my replacement shader:

    Code (CSharp):
    SubShader
    {
        Tags { "Queue" = "Geometry" "LightMode" = "Deferred" "ShadowSupport" = "true" }
        Lighting Off
        ZWrite On

        Pass
        {
            CGPROGRAM

            #pragma vertex vert
            #pragma fragment frag
            #pragma target 5.0
            #include "UnityCG.cginc"

            struct v2f
            {
                float4 vertex : SV_POSITION;
                half3 worldNormal : TEXCOORD0;
            };

            struct Output
            {
                float4 gbuffer0 : SV_TARGET0;
                float4 gbuffer2 : SV_TARGET2;
            };

            float4 Debug_Color;
            float4 Release_Color;

            v2f vert(float4 vertex : POSITION, float3 normal : NORMAL)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(vertex);
                o.worldNormal = UnityObjectToWorldNormal(normal);
                return o;
            }

            Output frag(v2f i)
            {
                Output outp;

                outp.gbuffer0 = float4(i.worldNormal, 1);
                outp.gbuffer2 = Release_Color;    // gbuffer0 alters the color, gbuffer2 does not
                return outp;
            }
            ENDCG
        }
    }

    FallBack "Diffuse"
    My ComputeShader gets the GBuffers using:
    Code (CSharp):
    computeShader_.SetTextureFromGlobal(kernelHandle_, "_DepthTexture", "_CameraDepthTexture");
    computeShader_.SetTextureFromGlobal(kernelHandle_, "_ColorTexture0", "_CameraGBufferTexture0");
    computeShader_.SetTextureFromGlobal(kernelHandle_, "_ColorTexture2", "_CameraGBufferTexture2");
    and reads them using
    Code (CSharp):
    output0Color = _ColorTexture0[viewPos.xy];    // Normal
    output2Color = _ColorTexture2[viewPos.xy];    // Output color (RGB) and intensity (A)
    depth = _DepthTexture[viewPos.xy].r;
    Note, the depth texture *is* being written correctly, just not the two GBuffers.
     
    Last edited: Apr 9, 2019