
Question Path Tracing wait until finished

Discussion in 'High Definition Render Pipeline' started by lkaesberg, May 20, 2021.

  1. lkaesberg

    lkaesberg

    Joined:
    Dec 13, 2018
    Posts:
    6
    Hello everybody.
    I am programming a tool that produces images of objects in different locations and rotations to train a neural network. I had the idea that it would be nice to use ray/path tracing to generate better-looking images. But the sampling process for path tracing takes quite a while, and when I save the image immediately it is very noisy. Does anybody know how I could wait for the rendering process to finish and then save the screenshot?
    Greetings, Lars.
     
  2. PavlosM

    PavlosM

    Unity Technologies

    Joined:
    Oct 8, 2019
    Posts:
    31
    We have a scripting API for that:
    https://docs.unity3d.com/Packages/c...high-definition@12.0/manual/Accumulation.html

    The idea is that in your script you have to call:
    • renderPipeline.BeginRecording() when you want to start capturing the path tracing image.
    • renderPipeline.PrepareNewSubFrame() at the start of every frame that you want to capture.
    • renderPipeline.EndRecording() when you want to stop capturing the path tracing image (in your case, you probably want to call this after N frames, where N is the number of samples in the path tracer settings).
    The documentation includes an example script that shows how to use this properly. If the camera or the objects move between the BeginRecording and EndRecording calls, then you will also get nice "motion blur" without any additional cost.

    When this API is _not_ used, the path tracer resets the accumulation (and you get a noisy image) if the camera moves or the scene changes. In your case, I guess something marks the scene as changed right before you capture and the accumulation resets. If you use this API, the path tracer will not reset the image, and you will have the chance to grab a properly converged screenshot.
     
  3. lkaesberg

    lkaesberg

    Joined:
    Dec 13, 2018
    Posts:
    6
    Thank you, I found this API too. This is what I tried, but it didn't produce better images.
    Code (CSharp):
    var renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
    renderPipeline?.BeginRecording(128, 1F, 0.25F, 0.75F);
    for (int i = 0; i < 128; i++)
    {
        renderPipeline?.PrepareNewSubFrame();
    }
    renderPipeline?.EndRecording();
    ScreenCapture.CaptureScreenshot("data/image" + nextView + "-" + _number + ".jpg");
     
  4. PavlosM

    PavlosM

    Unity Technologies

    Joined:
    Oct 8, 2019
    Posts:
    31
    The way you use the API in this snippet will not work, because PrepareNewSubFrame() does not actually render the image, if that is what you were expecting.

    The general idea is that you have to let Unity render N frames (so Update() in your script will be called N times) and keep track of the state (recording or not) with some extra variables, like in the example script in the docs.
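
    For illustration, here is a minimal sketch of that pattern, assuming a single converged capture is enough (the class name, field names, and output filename are placeholders, not from the docs):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;

    public class AccumulationCaptureSketch : MonoBehaviour
    {
        public int samples = 128;   // should match the path tracer's sample count
        int _subFrames = 0;
        bool _recording = false;

        void Update()
        {
            var renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
            if (renderPipeline == null)
                return;

            if (!_recording)
            {
                // Start accumulating; shutter parameters as in the snippet above.
                renderPipeline.BeginRecording(samples, 1F, 0.25F, 0.75F);
                _recording = true;
            }

            if (_subFrames < samples)
            {
                // Prepare the next sub-frame; Unity renders it during this frame.
                renderPipeline.PrepareNewSubFrame();
                _subFrames++;
            }
            else
            {
                // All sub-frames have been rendered, so the image is converged.
                ScreenCapture.CaptureScreenshot("converged.png");
                renderPipeline.EndRecording();
                enabled = false; // stop after a single capture
            }
        }
    }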
     
  5. PavlosM

    PavlosM

    Unity Technologies

    Joined:
    Oct 8, 2019
    Posts:
    31
    If you want to use your approach, you could manually call camera.Render() inside the for loop, but you also need to account for the fact that Render will be called one extra time by Unity.

    For example, if you attach this script to your camera, you will only see converged frames in the game window with path tracing:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;

    public class WaitForConvergence : MonoBehaviour
    {
        [Tooltip("Number of samples for convergence. Should be the same as the maximum samples in path tracing.")]
        public int Samples = 128;

        [Tooltip("The number of frames to capture. Set to -1 to continue capturing frames until the application exits.")]
        public int FramesToCapture = -1;

        bool _StopRecording = false;
        int _RecordedFrames = 0;
        Camera _camera;

        void Start()
        {
            _camera = GetComponent<Camera>();

            if (_camera == null)
                Debug.Log("You should attach this script to a camera game object");
        }

        void Update()
        {
            HDRenderPipeline renderPipeline = RenderPipelineManager.currentPipeline as HDRenderPipeline;
            if (renderPipeline != null && !_StopRecording)
            {
                if (_RecordedFrames == 0)
                    renderPipeline.BeginRecording(Samples, 1F, 0.25F, 0.75F);

                // Render Samples - 1 sub-frames manually; Unity renders the
                // final one itself at the end of this frame.
                for (int i = 0; i < Samples - 1; i++)
                {
                    renderPipeline.PrepareNewSubFrame();
                    _camera?.Render();
                }

                ScreenCapture.CaptureScreenshot($"frame_{_RecordedFrames++}.png");

                if (_RecordedFrames == FramesToCapture)
                {
                    _StopRecording = true;
                    _RecordedFrames = 0;
                    renderPipeline.EndRecording();
                }
            }
        }
    }
     
    Last edited: May 20, 2021
  6. lkaesberg

    lkaesberg

    Joined:
    Dec 13, 2018
    Posts:
    6
    Okay, that kind of worked for me. I ended up using this:
    Code (CSharp):
    for (int i = 0; i < (rayTracingToggle.isOn ? samplesRaytracing : 0); i++)
    {
        activeCamera.Render();
    }
    ScreenCapture.CaptureScreenshot("data/image" + nextView + "-" + _number + ".jpg");
    Sometimes the whole image/camera view just turns completely black, but I hope I get this figured out. Thank you for your help.
     

  7. lkaesberg

    lkaesberg

    Joined:
    Dec 13, 2018
    Posts:
    6
    When I use the renderPipeline.BeginRecording/PrepareNewSubFrame/EndRecording methods, the frame is very overexposed. If I only use camera.Render() it looks good, but I think somewhere in the process I messed up either the render pipeline config or the Volume config, or my cameras don't work properly. I have two render pipeline assets so that I can swap between ray tracing and no ray tracing. But in ray tracing mode the cameras randomly turn black, and I have to move them to get a normal image again. I have 5 cameras and deactivate the ones not used for the current image.
     
  8. PavlosM

    PavlosM

    Unity Technologies

    Joined:
    Oct 8, 2019
    Posts:
    31
    Indeed, this seems to be a bug when using automatic exposure modes. I will keep this in my notes to fix in the next release. For now, I would recommend using the "Fixed" exposure option (or "Use Physical Camera") when capturing with these methods:
    [Screenshot: the Exposure override settings, with Mode set to Fixed]
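
    If you would rather set this from a script, a minimal sketch might look like the one below (the volume reference and the EV value are placeholder assumptions):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.HighDefinition;

    public class ForceFixedExposure : MonoBehaviour
    {
        public Volume volume;       // the Volume that holds the Exposure override
        public float fixedEV = 12f; // placeholder exposure value

        void Start()
        {
            // Switch the Exposure override to Fixed before capturing.
            if (volume != null && volume.profile.TryGet(out Exposure exposure))
            {
                exposure.mode.overrideState = true;
                exposure.mode.value = ExposureMode.Fixed;
                exposure.fixedExposure.overrideState = true;
                exposure.fixedExposure.value = fixedEV;
            }
        }
    }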

    Edit: As you have noticed, the accumulation API is not needed if the scene/camera is completely static. If something moves, then you will have to use the extra functions.
     
    Last edited: May 20, 2021
  9. lkaesberg

    lkaesberg

    Joined:
    Dec 13, 2018
    Posts:
    6
    Okay yeah, that is good to know. But I think I also found a bug with path tracing. Here is a video to show what I mean, because it happens very unpredictably and only if path tracing is enabled. The cameras just turn black after some changes, and I have to change something in the inspector to get the picture back. But if I try disabling and enabling the camera via script, it doesn't refresh the picture.

     
  10. PavlosM

    PavlosM

    Unity Technologies

    Joined:
    Oct 8, 2019
    Posts:
    31
    Yeah, this looks like a bug. It's hard to debug over the forum, so I suggest filing a bug ticket with your project attached.
     
  11. lkaesberg

    lkaesberg

    Joined:
    Dec 13, 2018
    Posts:
    6
    Okay, I created a bug ticket.
     
  12. chadfranklin47

    chadfranklin47

    Joined:
    Aug 11, 2015
    Posts:
    229
    Hello, has any progress been made on this issue? I am using Unity 2021.2.19 and I am still experiencing this behavior. I am not using accumulation with path tracing, though; I am using it to converge TAA and related post effects. I have attached some screenshots with and without accumulation. Thanks.

    Edit:
    Using the timeline, fixed exposure seems to fix the issue when using a single VCam. However, when blending between two VCams that use the same volume profile (with fixed exposure), the exposure problem reappears and increases throughout the blend to the second VCam. It doesn't occur with a lower sample count such as 4, but it does occur at 6-10+.

    Edit 2:
    Enabling "Anti-ringing" under the Temporal Anti-Aliasing options on the camera seems to eliminate this problem.
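
    In case it helps anyone doing this from code: HDAdditionalCameraData exposes the TAA settings, so (assuming HDRP 12+, where I believe this property was added) the checkbox can also be toggled like this:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering.HighDefinition;

    public class EnableTaaAntiRinging : MonoBehaviour
    {
        void Start()
        {
            // Toggle the TAA "Anti-ringing" checkbox from script.
            var cameraData = GetComponent<HDAdditionalCameraData>();
            if (cameraData != null)
                cameraData.taaAntiHistoryRinging = true;
        }
    }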
     

    Attached Files:

    Last edited: Apr 29, 2022
  13. Cascho01

    Cascho01

    Joined:
    Mar 19, 2010
    Posts:
    1,347
    Maybe a bit off-topic:
    I have a toggle that enables and disables path tracing at runtime.
    Everything works fine, but after disabling the path tracer I get a heavily reduced framerate compared to before enabling it, from approximately 100 down to 20 fps.

    I'm using this simple code:

    Code (CSharp):
    private PathTracing pathtracing;

    private void Awake()
    {
        volume.profile.TryGet(out pathtracing);
    }

    void EnablePathtracer(bool enable)
    {
        pathtracing.enable.value = enable;
    }

    Any idea what causes the framerate drop?
    The profiler shows a big "Gfx.WaitForPresentOnGfxThread", but this seems unsolved.

    Meanwhile I found out that:
    - it apparently appears only when the vertex count of my scene exceeds a certain amount
    - it happens rarely in the editor but consistently in the executable
    - it does not happen when the path tracer is already enabled when the app starts

    That's all a bit strange to me.
    Since I would like to implement this in my current archviz app, I would really like to see a runtime description of how to:
    (0. Disable DLSS on the current camera)
    1. Properly enable the path tracer at runtime
    2. Get an event or bool when the progress is done
    3. Get an event or bool when the denoising is done (save a screenshot then)
    4. Properly disable the path tracer to continue with regular rendering
    (5. Re-enable DLSS on the current camera)
     
    Last edited: May 25, 2022