
Camera capture at deterministic intervals

Discussion in 'Scripting' started by serhanguel, Feb 2, 2021.

  1. serhanguel
    Joined: Jan 7, 2019
    Posts: 7
    Hi. I have a camera rotating around a dynamic (moving) object for a full cycle, then moving a fixed distance closer to the object and continuing to rotate. Every 0.5 s, I capture the camera view, render it into a RenderTexture, and write it to a PNG file. My goal is to run this multiple times with different texture widths and heights and obtain sets of images at different resolutions, from different angles and distances.

    The relevant part of my code looks like this:

    Code (CSharp):
    using System.IO;
    using UnityEngine;

    public class CaptureCamera : MonoBehaviour
    {
        private RenderTexture rt;
        private float nextActionTime = 0.0f;  // start capturing at 0s
        public float period = 0.5f;           // capture interval in seconds
        public int texWidth = 1920;
        public int texHeight = 1080;
        public string capturePath;            // output folder for the PNG files
        private int fileCounter = 0;

        private void LateUpdate()
        {
            // Capture the object periodically
            if (Time.time > nextActionTime)
            {
                nextActionTime += period;
                Capture();
            }
        }

        public void Capture()
        {
            Camera Cam = GetComponent<Camera>();

            // A fresh RenderTexture is allocated for every capture
            rt = new RenderTexture(texWidth, texHeight, 24, RenderTextureFormat.ARGB32);
            rt.Create();
            Cam.targetTexture = rt;

            // Render into the RenderTexture and read the pixels back on the CPU
            RenderTexture currentActiveRT = RenderTexture.active;
            RenderTexture.active = rt;
            Cam.Render();

            Texture2D tex = new Texture2D(Cam.targetTexture.width, Cam.targetTexture.height, TextureFormat.ARGB32, false);
            tex.ReadPixels(new Rect(0, 0, Cam.targetTexture.width, Cam.targetTexture.height), 0, 0);
            tex.Apply();
            RenderTexture.active = currentActiveRT;

            // Encode to PNG and write to disk
            byte[] bytes = tex.EncodeToPNG();
            Destroy(tex);
            File.WriteAllBytes(capturePath + fileCounter + ".png", bytes);
            fileCounter++;
        }
    }
    For each "experiment", I set texWidth and texHeight to new values and start Play mode. Another script attached to the camera rotates it and moves it closer to the object until a minimum distance is reached, while the script above captures the camera view every 0.5 s.

    My issue is that when I inspect the captured images after the experiments, they do not match exactly, although, according to my setup, they should have been taken at the same sampling times. The timing of the snapshots seems to vary between runs (i.e. between restarts of Play mode). My best guess is that this is caused by a slow GPU-to-CPU copy due to the use of Texture2D.ReadPixels; however, I'm not entirely sure whether this could be mitigated by using AsyncGPUReadback to copy the data from the GPU to the CPU. I would quickly try it out, but it seems that AsyncGPUReadback is not supported on OpenGL and might only work with a 3rd-party plugin (such as this one).
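
    On platforms where SystemInfo.supportsAsyncGPUReadback is true, I imagine the readback would look roughly like this (just an untested sketch; rt, capturePath and fileCounter are meant to be the same fields as in the script above):

    Code (CSharp):
    using System.IO;
    using UnityEngine;
    using UnityEngine.Rendering;

    public class AsyncCaptureSketch : MonoBehaviour
    {
        public RenderTexture rt;      // the camera's target RenderTexture
        public string capturePath;    // output folder
        private int fileCounter = 0;

        public void CaptureAsync()
        {
            // Queue a GPU-to-CPU copy; the callback fires a few frames later,
            // so the main thread is not stalled the way ReadPixels stalls it.
            AsyncGPUReadback.Request(rt, 0, TextureFormat.ARGB32, request =>
            {
                if (request.hasError)
                {
                    Debug.LogError("Async GPU readback failed");
                    return;
                }

                // Wrap the raw pixels in a Texture2D so EncodeToPNG can be used
                var tex = new Texture2D(rt.width, rt.height, TextureFormat.ARGB32, false);
                tex.LoadRawTextureData(request.GetData<byte>());
                tex.Apply();
                File.WriteAllBytes(capturePath + fileCounter + ".png", tex.EncodeToPNG());
                fileCounter++;
                Destroy(tex);
            });
        }
    }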

    Another possibility is that such deterministic sampling is simply not achievable across different runs. In that case I'd probably have to add multiple cameras and save multiple textures of different sizes during a single run. I'm not sure, though, how many textures I could write to files simultaneously with this approach.
     
  2. Kurt-Dekker
    Joined: Mar 16, 2013
    Posts: 38,686
    How are you moving the camera around? If you're computing it with anything related to Time.time or Time.deltaTime, you won't be able to replicate the behavior perfectly.

    Instead you must drive the camera to the same angle and distance at each shot to get it to line up.

    In other words, don't orbit at X degrees per second and take shots every Y seconds.

    Instead, identify the angles you want, move the camera to those positions, and take the N shots you want.
     
  3. serhanguel
    Joined: Jan 7, 2019
    Posts: 7
    Thanks, that's exactly what I'm doing. I move the camera around using Time.deltaTime at a fixed speed. Then, when I complete a full cycle, I move it 1 unit closer to the object. See the code below.

    I will try the approach with the incremented angles, but just out of curiosity: why do the completion times of the frames (as given by Time.deltaTime) differ between runs?

    Code (CSharp):
    private float nextMoveTime = 6.0f;  // time of the first move (after one full cycle)
    public float movePeriod = 6.0f;     // seconds per full 360-degree cycle at 60 deg/s

    void MoveCameraCloser()
    {
        targetCenter = targetRenderer.bounds.center;
        transform.position = Vector3.MoveTowards(transform.position, targetCenter, 1.0f);
        transform.LookAt(targetCenter);
    }

    void LateUpdate()
    {
        // Spin the camera around the target at 60 degrees/second
        transform.RotateAround(target.transform.position, Vector3.up, 60 * Time.deltaTime);

        // Cover 360 degrees at each distance, then move the camera forward
        if (Time.time > nextMoveTime)
        {
            MoveCameraCloser();
            nextMoveTime += movePeriod;
            distCamToObject = transform.InverseTransformPoint(targetCenter).magnitude;
        }

        // Don't get too close to the object; stops Play mode (Editor-only API)
        if (distCamToObject < 0.5f)
            UnityEditor.EditorApplication.isPlaying = false;
    }
     
  4. Kurt-Dekker
    Joined: Mar 16, 2013
    Posts: 38,686
    It's because that's the point of Time.deltaTime: as your frame rate goes up and down over time, it lets you keep the speed of gameplay constant.

    And by the very nature of general-purpose, multi-process, multi-user computers running complex modern operating systems connected to the network, no two runs will ever be the same timing-wise.

    If you want views from every 45 degrees, I would just do that explicitly:

    Code (csharp):
    for (int range = 10; range >= 5; range--)
    {
        for (int angle = 0; angle < 360; angle += 45)
        {
            // somewhere out to the right and up
            Vector3 position = new Vector3(1, 0.5f, 0).normalized * range;

            // spin it around the Y axis
            position = Quaternion.Euler(0, angle, 0) * position;

            // TODO: use the position to position the camera
        }
    }
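
    One way to fill in that TODO (just a sketch; target, cam and the commented-out Capture call are assumed stand-ins for the orbited object, the capture camera and the capture routine from the earlier posts) could be:

    Code (csharp):
    using UnityEngine;

    public class FixedAngleShots : MonoBehaviour
    {
        public Transform target;   // object to orbit (assumed)
        public Camera cam;         // capture camera (assumed)

        public void TakeAllShots()
        {
            for (int range = 10; range >= 5; range--)
            {
                for (int angle = 0; angle < 360; angle += 45)
                {
                    // somewhere out to the right and up, at the current range
                    Vector3 position = new Vector3(1, 0.5f, 0).normalized * range;

                    // spin it around the Y axis
                    position = Quaternion.Euler(0, angle, 0) * position;

                    // place the camera relative to the target and aim it at the target
                    cam.transform.position = target.position + position;
                    cam.transform.LookAt(target.position);

                    // Capture(cam);  // plug in a capture routine like the one above
                }
            }
        }
    }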
     
  5. serhanguel
    Joined: Jan 7, 2019
    Posts: 7
    Thanks. I'm now using multiple cameras to record textures of different sizes synchronously, and I increment the angles and distances in a coroutine. I attach the script to all cameras, i.e. the Main camera (which renders to the display) and the Recorder cameras, and call my Capture function only if the script is attached to a Recorder camera. Capturing images now works perfectly without any sync problem, because all views for the different sizes are recorded in a single run.

    The only remaining issue is that the Main camera, which should only render to the display, does not rotate at all, even though the script is attached to it too. If I disable the Capture call for the Recorder cameras (see the if-clause in MoveCamera below), it rotates without any problem. Am I using the yield statement incorrectly somehow, so that it blocks my Main camera?

    Code (CSharp):
    using System.Collections;
    using System.IO;
    using UnityEngine;

    public class RecorderCamera : MonoBehaviour
    {
        public GameObject target;
        public int texWidth;
        public int texHeight;
        public float rangeStep;
        public int angleStep;
        public float rangeMin;
        public string capturePath;

        private Renderer targetRenderer;
        private Vector3 targetCenter;
        private float currentRange;
        private RenderTexture rt;
        private int fileCounter = 0;

        void Start()
        {
            Camera cam = GetComponent<Camera>();
            targetRenderer = target.GetComponent<Renderer>();
            currentRange = transform.InverseTransformPoint(targetCenter).magnitude;
            StartCoroutine(MoveCamera(cam, currentRange, rangeStep, angleStep));
        }

        void Capture(Camera Cam)
        {
            rt = new RenderTexture(texWidth, texHeight, 24, RenderTextureFormat.ARGB32);
            rt.Create();
            Cam.targetTexture = rt;

            RenderTexture currentActiveRT = RenderTexture.active;
            RenderTexture.active = rt;
            Cam.Render();

            Texture2D tex = new Texture2D(Cam.targetTexture.width, Cam.targetTexture.height, TextureFormat.ARGB32, false);
            tex.ReadPixels(new Rect(0, 0, Cam.targetTexture.width, Cam.targetTexture.height), 0, 0);
            tex.Apply();
            RenderTexture.active = currentActiveRT;

            byte[] bytes = tex.EncodeToPNG();
            Destroy(tex);
            File.WriteAllBytes(capturePath + fileCounter + ".png", bytes);
            fileCounter++;
        }

        IEnumerator MoveCamera(Camera cam, float currentRange, float rangeStep, int angleStep)
        {
            do
            {
                targetCenter = targetRenderer.bounds.center;
                transform.LookAt(targetCenter);
                transform.position = Vector3.MoveTowards(transform.position, targetCenter, rangeStep);

                for (int angle = 0; angle < 360; angle += angleStep)
                {
                    transform.RotateAround(target.transform.position, Vector3.up, angleStep);

                    // Wait until the frame has finished rendering before capturing
                    yield return new WaitForEndOfFrame();

                    if (transform.parent.name == "RecorderCameras")
                        Capture(cam);
                }

                currentRange -= rangeStep;

            } while (currentRange > rangeMin);
        }
    }