HDRP Multi-camera rendering color / depth / motion to textures errors

Discussion in 'High Definition Render Pipeline' started by emfinger, Mar 11, 2020.

  1. emfinger

     Joined: Aug 23, 2017
     Posts: 9
    Background:
    We are collecting simulation data in Unity using the HDRP scriptable render pipeline. From multiple cameras, we would like to serialize the color, depth, and motion vector textures to the file system.

    Here is a sample (good) output from a single camera (https://imgur.com/eiJnxcq):


    Approach:
    We have a camera script (attached) which uses three RenderTextures (for color, depth, and motion vectors) and the RenderPipelineManager.endCameraRendering callback: it blits the camera's active texture into the render textures, then uses ReadPixels to copy them into Texture2D data for serialization to file. The camera is configured with SetTargetBuffers and has the Depth and MotionVectors flags set in its depthTextureMode. During the blit, a material / shader pair copies _CameraDepthTexture and _CameraMotionVectorsTexture into the target render textures. The associated DepthShader and MotionShader are also attached.
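    For reference, a trimmed sketch of that setup (names are placeholders; the full script is attached):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch of the capture setup described above.
    public class CameraCapture : MonoBehaviour
    {
        public RenderTexture colorRT, depthRT, motionRT; // pre-allocated targets
        public Material depthMaterial, motionMaterial;   // DepthShader / MotionShader

        void OnEnable()
        {
            var cam = GetComponent<Camera>();
            cam.depthTextureMode = DepthTextureMode.Depth | DepthTextureMode.MotionVectors;
            cam.SetTargetBuffers(colorRT.colorBuffer, colorRT.depthBuffer);
            RenderPipelineManager.endCameraRendering += OnEndCameraRendering;
        }

        void OnDisable()
        {
            RenderPipelineManager.endCameraRendering -= OnEndCameraRendering;
        }

        void OnEndCameraRendering(ScriptableRenderContext context, Camera camera)
        {
            if (camera != GetComponent<Camera>()) return;
            // Blit the pipeline's depth / motion textures into our render textures...
            Graphics.Blit(null, depthRT, depthMaterial);
            Graphics.Blit(null, motionRT, motionMaterial);
            // ...then ReadPixels each one into a Texture2D and serialize (omitted).
        }
    }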

    Problems Found:
    1. The script / shaders / materials (essentially the whole pipeline) currently work only on macOS (Metal). On Linux (Vulkan) and Windows (DX11 or DX12 experimental), the motion vectors and depth come out all black.

    2. Planar Reflection Probes in the scene corrupt the depth / motion vector output (example attached); the only fix we have found is to disable the planar reflection probes. The first RGB frame (both what is shown in the Game window and what is saved) is corrupted in the same way.

    initial frames messed up (https://imgur.com/AVlYbbC):

    subsequent depth / motion vector frames are still messed up, but RGB is good now (https://imgur.com/plPmmB7):


    Of course, if we disable the planar reflection probes, then all frames are good, even the initial ones. We do have other (non-planar) reflection probes in the scene, and everything works fine with them.

    3. If we do not use SetTargetBuffers but instead set the camera's targetTexture, we get no depth or motion vectors at all (and of course, in this case we ensure the targetTexture is created with a depth buffer); see the sketch after this list.

    4. The cameras do not render the correct views: all views (front, back, left, right) are rendered, but they are not associated with the correct image data; e.g. the front image shows the back view, the right image shows the front view, etc. In the Scene view each camera is oriented properly, and its preview shows the correct rendering.

    imgur post with some images: https://imgur.com/a/GeW9YYj
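
    For problem 3 above, the two configurations being compared look roughly like this (sketch; the render texture is created with a depth buffer either way):

    Code (CSharp):
    // Works for us: explicit color + depth buffers.
    cam.SetTargetBuffers(colorRT.colorBuffer, colorRT.depthBuffer);

    // Yields no depth / motion vectors for us, even though the
    // render texture is created with a 24-bit depth buffer:
    var rt = new RenderTexture(1920, 1080, 24, RenderTextureFormat.ARGBFloat);
    cam.targetTexture = rt;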
     

    Attached Files:

    Last edited: Mar 12, 2020
  2. emfinger

     Joined: Aug 23, 2017
     Posts: 9
    As a note - I've tried this on multiple scenes, and with multiple versions of Unity and HDRP. For reference here are the versions I've tried:

    Unity:
    * 2019.3.1f1
    * 2019.3.2f1
    * 2019.3.3f1
    * 2019.3.4f1
    * 2020.1.0a25

    HDRP:
    * 7.1.8
    * 7.2.1
    * 8.0.1
     
  3. emfinger

     Joined: Aug 23, 2017
     Posts: 9
    Here is an updated file which consolidates the code into a single script to show how multiple cameras are handled; the gist is sketched below.
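    The idea is one shared endCameraRendering subscriber that dispatches on the camera instead of one script per camera; roughly (captureTargets is a hypothetical per-camera lookup):

    Code (CSharp):
    // One shared callback for all capture cameras (sketch).
    void OnEndCameraRendering(ScriptableRenderContext context, Camera camera)
    {
        // captureTargets: a Dictionary<Camera, CaptureTargets> built at startup
        if (!captureTargets.TryGetValue(camera, out var targets)) return;
        Graphics.Blit(null, targets.depthRT, depthMaterial);
        Graphics.Blit(null, targets.motionRT, motionMaterial);
    }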
     

    Attached Files:

  4. emfinger

     Joined: Aug 23, 2017
     Posts: 9
    Update:

    I've determined that the _CameraDepthTexture and _CameraMotionVectorsTexture on Windows have no valid data.

    When I use this output for my fragment shader:
    Code (CSharp):
    return float4(0.2, 0.4, 0.6, 1.0);
    I get this data when reading the serialized files (assuming it's [Depth, Motion X, Motion Y, UNUSED]):
    Code (CSharp):
    Loading newer format _depth_motion file
    Depth (min, max): (0.2, 0.2)
    Motion x (min, max): (0.4, 0.4)
    Motion y (min, max): (0.6, 0.6)
    However, when I use this as my shader:
    Code (CSharp):
    sampler2D_float _CameraDepthTexture;
    sampler2D_half _CameraMotionVectorsTexture;

    struct v2f {
        float4 pos : SV_POSITION;
        float4 scrPos : TEXCOORD1;
    };

    // Vertex Shader
    v2f vert(appdata_base v) {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.scrPos = ComputeScreenPos(o.pos);
        return o;
    }

    // Fragment Shader
    float4 frag(v2f i) : SV_TARGET {
        // Perspective divide to get screen UVs (w is 1 for a full-screen quad)
        float2 coords = i.scrPos.xy / i.scrPos.w;
        // Linear01Depth maps device depth to [0,1]; scaling by the far plane
        // (_ProjectionParams.z) gives view-space depth in world units
        float depth = Linear01Depth(tex2D(_CameraDepthTexture, coords).r) * _ProjectionParams.z;
        half2 motion = tex2D(_CameraMotionVectorsTexture, coords).rg;

        return float4(depth, motion.r, motion.g, 1.0);
    }
    Then I get this from ALL of my serialized files (even in cases where I am very close to objects and where I am moving):
    Code (CSharp):
    Loading newer format _depth_motion file
    Depth (min, max): (0.046333157, 0.046333157)
    Motion x (min, max): (0.21582031, 0.21582031)
    Motion y (min, max): (0.21582031, 0.21582031)
    I've attached my updated shader code and data collector script for reference.
     

    Attached Files:

  5. emfinger

     Joined: Aug 23, 2017
     Posts: 9
    Further update: I've determined that the approach I was using (RenderPipelineManager.endCameraRendering) is not viable within HDRP, whereas creating a custom HDRP post process per the docs here: https://docs.unity3d.com/Packages/c...s.high-definition@8.0/manual/Custom-Pass.html looks viable. As it stands, I am able to create a CustomPostProcessVolumeComponent which applies the shader to all cameras in the scene (which I can then filter as needed), and I can see the shader working in the editor on Windows (progress!). Currently looking through the docs on the RTHandle API so that I can copy the textures produced by the HDUtils.DrawFullScreen calls back into Texture2Ds.
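    For anyone following along, a minimal custom post process along the lines of those docs looks roughly like this (the shader name, menu path, and parameter are placeholders; the component also has to be added to the Custom Post Process Orders list under HDRP Default Settings):

    Code (CSharp):
    using System;
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;

    [Serializable, VolumeComponentMenu("Post-processing/Custom/DepthMotionCapture")]
    public sealed class DepthMotionCapture : CustomPostProcessVolumeComponent, IPostProcessComponent
    {
        public BoolParameter enable = new BoolParameter(false);
        Material m_Material;

        public bool IsActive() => m_Material != null && enable.value;

        public override CustomPostProcessInjectionPoint injectionPoint =>
            CustomPostProcessInjectionPoint.AfterPostProcess;

        public override void Setup()
        {
            // "Hidden/DepthMotion" stands in for the attached depth/motion shader.
            m_Material = new Material(Shader.Find("Hidden/DepthMotion"));
        }

        public override void Render(CommandBuffer cmd, HDCamera camera, RTHandle source, RTHandle destination)
        {
            // HDRP has _CameraDepthTexture / _CameraMotionVectorsTexture bound here.
            m_Material.SetTexture("_InputTexture", source);
            HDUtils.DrawFullScreen(cmd, m_Material, destination);
        }

        public override void Cleanup() => CoreUtils.Destroy(m_Material);
    }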
     
  6. emfinger

     Joined: Aug 23, 2017
     Posts: 9
    Update: now that I'm actually using an HDRP custom post process, it behaves the same on Windows and macOS. This means I'm able to get motion / depth / color data on all platforms, but I still have the issue that the cameras are not rendering the right data:

    This IMGUR album shows the scene config (HDRP sample scene with minimal config) as well as the resultant data (after moving a little bit to get different images in frame):
    https://imgur.com/a/Lwj7cFM

    Note: I believe the main camera and the depth configuration of the cameras are affecting the rendering output.

    Here is an example of what the attached shader and script produce now (note: this should be from the front camera, but the same image is duplicated for both the left and right cameras):
     
  7. emfinger

     Joined: Aug 23, 2017
     Posts: 9
    I did manage to get it fixed: it turns out Graphics.Blit should not be used; cmd.Blit should be used instead. The order of blitting and when ReadPixels is called also matters. I've attached the updated script / shader for completeness; the relevant change is sketched below.
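
    Concretely, inside the post process the copy now looks roughly like this (sketch; m_CaptureRT is a pre-allocated RenderTexture), with ReadPixels deferred until after the frame has been submitted (e.g. in a WaitForEndOfFrame coroutine):

    Code (CSharp):
    public override void Render(CommandBuffer cmd, HDCamera camera, RTHandle source, RTHandle destination)
    {
        m_Material.SetTexture("_InputTexture", source);
        HDUtils.DrawFullScreen(cmd, m_Material, destination);
        // Queue the copy on the command buffer; Graphics.Blit here executes
        // outside the SRP frame and produced the garbage output we saw.
        cmd.Blit(destination, m_CaptureRT);
    }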

    This album (https://imgur.com/a/X3BKIXO) has some example images showing the proper data, here is one example image:
     

    Attached Files:
