Question Rendering Motion Vectors from an additional Camera to a render Texture

Discussion in 'General Graphics' started by parameterpollution, Apr 16, 2023.

  1. parameterpollution

    Joined:
    Mar 6, 2021
    Posts:
    4
    Is it possible to have an additional camera in the scene that is not the main game output camera and which renders motion vectors to a render texture?

    I have an additional camera (that is not the main camera used for the "normal" game output rendering) in my scene that I use to render a depth map to a render texture, which is then used in a fluid simulation shader (running on a custom render texture) to figure out where the dynamic obstacles are in the scene.
    This works perfectly.

    But for proper momentum transfer from those objects into the fluid simulation I could use motion vectors for those obstacles.
    I found the Camera.depthTextureMode property and the DepthTextureMode.MotionVectors flag, and I set it in the Awake() function of that camera object:
    Code (CSharp):
    depthCam.depthTextureMode |= DepthTextureMode.Depth | DepthTextureMode.MotionVectors;
    But I am not getting any motion vectors in the render texture this camera renders to.
    I also tried different options for the "Color Format" and "Depth Buffer" settings on that render texture, but no luck so far.
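    For context, the relevant part of my setup looks roughly like this (simplified; the class and field names are just for illustration, not my exact code):

```csharp
using UnityEngine;

// Simplified sketch of the setup described above
// (component and field names are illustrative).
public class DepthCamSetup : MonoBehaviour
{
    public Camera depthCam;        // the extra (non-main) camera
    public RenderTexture depthRT;  // render texture read by the fluid sim

    void Awake()
    {
        // Request depth and motion vector textures for this camera.
        depthCam.depthTextureMode |= DepthTextureMode.Depth | DepthTextureMode.MotionVectors;
        depthCam.targetTexture = depthRT;
    }
}
```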

    Am I barking up the wrong tree here or should this be possible?

    Edit: forgot to add that I am using the built-in render pipeline.
     
    Last edited: Apr 16, 2023
  2. c0d3_m0nk3y

    Joined:
    Oct 21, 2021
    Posts:
    560
    The RenderTexture that you assign to Camera.targetTexture only has a color buffer and a depth buffer, so there is no target texture for motion vectors. My guess is that you do get motion vectors, but you have to copy them from the temporary render target to a persistent one in a camera event.

    Have you checked with RenderDoc to see if there is a motion vector pass?
     
  3. parameterpollution

    Joined:
    Mar 6, 2021
    Posts:
    4
    I am a bit of a noob with graphics debugging (well, also with game engines in general), but you motivated me to take a look at RenderDoc (I had heard about it but never used it).
    I think you are right. When I capture a frame with DepthTextureMode set to MotionVectors, I see a MotionVector.RenderJob pass, and it points to a "TempBuffer" texture.

    I will try to figure out how I can get this temp buffer into a render texture.

    Thank you for your help!
     
  4. c0d3_m0nk3y

    Joined:
    Oct 21, 2021
    Posts:
    560
    I can think of a couple of ways to do it, but I would have to try them to see which ones work:

    - Write a shader that uses the motion vectors directly, without copying them out: just sample _CameraMotionVectorsTexture (declared as sampler2D_half). According to this, you can do it in any opaque image effect. Not entirely sure how to write image effects, though; maybe with Camera.OnRenderImage.
    - Use a command buffer and a camera event to copy the motion vectors to your render texture.
    - Copy the texture via Graphics.CopyTexture in OnPostRender, fetching it via Shader.GetGlobalTexture("_CameraMotionVectorsTexture").
    - Copy the texture via CommandBuffer.CopyTexture(BuiltinRenderTextureType.MotionVectors, ...) and execute the command buffer immediately in OnPostRender via Graphics.ExecuteCommandBuffer.
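    A rough sketch of the camera-event variant (untested; the component name, the target texture, and the choice of CameraEvent.AfterEverything are my assumptions):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Untested sketch: blit the camera's motion vector texture into a
// persistent render texture after the camera finishes rendering.
public class CopyMotionVectors : MonoBehaviour
{
    public RenderTexture motionRT;  // persistent target, e.g. RGHalf
    CommandBuffer cmd;

    void OnEnable()
    {
        var cam = GetComponent<Camera>();
        cam.depthTextureMode |= DepthTextureMode.MotionVectors;

        cmd = new CommandBuffer { name = "Copy motion vectors" };
        cmd.Blit(BuiltinRenderTextureType.MotionVectors, motionRT);
        cam.AddCommandBuffer(CameraEvent.AfterEverything, cmd);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterEverything, cmd);
        cmd.Release();
    }
}
```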

    PS: You can probably also check with the built-in frame debugger but RenderDoc is so much better.

    PPS: By the way, do you really need a second camera? This will probably triple your draw calls, which is super slow. Can't you do this on your main camera?
     
  5. parameterpollution

    Joined:
    Mar 6, 2021
    Posts:
    4
    Thank you for those ideas, I will try to go through them.

    I did some more testing with RenderDoc and I can see that the TempBuffer shows up, and I can see the right output in the other buffer (the depth buffer), but so far that TempBuffer has always been just black (even though I added movement animations to an object). So I will have to play with this some more.

    The additional constraints of what I am actually trying to do might make getting access to that motion vector data impossible, though, because I want this to work in a VRChat world. That means I am currently stuck on Unity 2019.4 (though they will probably switch to Unity 2021/2022 soon-ish), and I can't use all of the Unity C# API, only what is exposed in their "UdonSharp" language (but they recently allowed the Blit() function, so maybe that changes things).

    @ cameras: I do need that additional camera because it is orthographic, at a fixed position (looking up from the floor of the horizontal 2D fluid simulation), and aligned with the simulation space. To get dynamic obstacles (including the player's avatar) I need the depth buffer, and I am hoping the same camera (with that additional motion vector pass) can also give me the motion vectors.
    But that camera only renders at 512x512, so the overhead should not be too much, and I think having an interactive fluid simulation (you can touch it with your avatar's hands in VR) is worth that small performance hit.

    Worst case, I will have to write a simple (it doesn't need to be perfect) motion estimation shader based on a double-buffered custom render texture, comparing the current frame with the previous one.
    I am not looking forward to that, though; the Eulerian fluid simulation was already enough of a mind bender on its own ;-)
     
    Last edited: Apr 17, 2023
  6. c0d3_m0nk3y

    Joined:
    Oct 21, 2021
    Posts:
    560
    Don't remember, but it is possible that the motion vector texture only contains motion vectors for dynamic objects (objects with MotionVectorGenerationMode.Object), since the motion vectors for rigid objects can be calculated from the current and last Model-View-Projection matrices. For static objects it's even easier: there you only need the current and last View-Projection matrices.

    That's not how motion vectors are calculated. Maybe you are confusing it with optical flow?

    PS: You can download the built-in shaders and take a look at Internal-MotionVectors.shader to see how Unity does it.
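    The core of that math is just projecting the same position with this frame's and last frame's matrices and differencing the results. Roughly, in C# terms (a sketch of the idea, not Unity's actual shader code):

```csharp
using UnityEngine;

// Sketch of the core motion-vector math (not Unity's shader code):
// project the same point with the current and previous frame's
// matrices and take the screen-space difference.
public static class MotionVectorMath
{
    public static Vector2 ScreenVelocity(
        Vector3 worldPos, Matrix4x4 currVP, Matrix4x4 prevVP)
    {
        Vector4 p = new Vector4(worldPos.x, worldPos.y, worldPos.z, 1f);

        Vector4 currClip = currVP * p;
        Vector4 prevClip = prevVP * p;

        // Perspective divide to normalized device coordinates.
        Vector2 currNdc = new Vector2(currClip.x, currClip.y) / currClip.w;
        Vector2 prevNdc = new Vector2(prevClip.x, prevClip.y) / prevClip.w;

        // Motion vector: how far the point moved on screen this frame.
        return currNdc - prevNdc;
    }
}
```

    In the real shader this happens per vertex in HLSL, and for per-object motion the previous frame's model matrix has to be applied on the previous-frame side as well, which is why Unity needs MotionVectorGenerationMode.Object to track it.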
     
    Last edited: Apr 17, 2023
  7. parameterpollution

    Joined:
    Mar 6, 2021
    Posts:
    4
    I checked the standard cube I added for testing, and its "Motion Vectors" setting on the mesh renderer was set to "Per Object Motion". Since the only other options are no motion or camera-motion-only, I think this should be the right setting.
    But this made me think a bit more about how Unity calculates those vectors. An avatar in VRChat is of course a skinned mesh renderer, and I am not sure whether they have motion vectors enabled for those. Since it seems to cause overhead, they probably didn't enable them.

    Yes, my plan B is to do a quick & dirty optical flow analysis if I can't figure out how to get the actual motion vectors.

    But thank you for pointing me to how Unity calculates the motion vectors internally. I thought my only options were to either get the "magically" created motion vectors from Unity or to do optical flow analysis. But maybe I can write my own shader based on this internal one. I will read through the code and see if my brain can comprehend what is done in there.
     