EncodeToPNG super slow, ReadPixels also quite slow. Any faster ways to capture camera shots?

Discussion in 'Editor & General Support' started by Dreamwriter, Apr 5, 2017.

  1. Dreamwriter

    Dreamwriter

    Joined:
    Jul 22, 2011
    Posts:
    472
    I need to record large screenshots from multiple cameras at once - one full-screen screenshot per camera. The only way I can think of to do that is to tell each camera to:

    Render to texture
    ReadPixels from that RenderTexture into another texture
    EncodeToPNG from that other texture
    Save PNG to disk

    Saving to disk is actually quite fast, but the ReadPixels call and especially EncodeToPNG really slow everything down - like, 0.3 frames per second slow. I can't use JPG because I need the higher quality of PNG, and I can't use Application.CaptureScreenshot because that only captures what is rendered to the screen, not individual cameras. The only step that could be multithreaded is saving to disk, and as I said that's already quite fast, so it would hardly save any time.
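
    In case it helps, here is a stripped-down sketch of that per-camera path (placeholder names and sizes, no error handling):

    Code (CSharp):
    // Simplified sketch of the per-camera capture path described above.
    RenderTexture rt = new RenderTexture(width, height, 24);
    Texture2D tex = new Texture2D(width, height, TextureFormat.RGB24, false);

    cam.targetTexture = rt;
    cam.Render();                                         // 1. render the camera into the RenderTexture

    RenderTexture.active = rt;
    tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);  // 2. copy GPU -> CPU (slow)
    RenderTexture.active = null;
    cam.targetTexture = null;

    byte[] png = tex.EncodeToPNG();                       // 3. encode (very slow)
    System.IO.File.WriteAllBytes(outputPath, png);        // 4. write to disk (fast)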

    Anybody have any ideas?
     
  2. greg-harding

    greg-harding

    Joined:
    Apr 11, 2013
    Posts:
    524
    We are currently doing exactly what you're doing for rendered-to-texture high-res screenshots and cubemap screenshots. We've tried various iterations of it, messed with command buffers, and also tried capturing raw and encoded frames piped via a thread to external ffmpeg processes etc.

    Nothing we have done in C#/scripting has been close to realtime, or even particularly fast. You might have some luck with a native plugin from the Asset Store (sorry, nothing specific to point at).

    Copying data from GPU -> memory is the first bottleneck, i.e. the GPU RenderTexture -> CPU-readable texture step. When Unity released Adam we asked what technique they used to capture footage in realtime and whether it would be made publicly available - it was some custom low-level DirectX plugin buffer magic, and they were unsure if it would ever be released.
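
    If you want to confirm where the time goes on your end, timing the two stages separately is straightforward (rough sketch; rt and tex set up as in the snippet above - numbers vary a lot with resolution and platform):

    Code (CSharp):
    // Rough timing of the two expensive stages: GPU -> CPU readback vs PNG encode.
    var sw = System.Diagnostics.Stopwatch.StartNew();

    RenderTexture.active = rt;
    tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);   // includes the GPU sync
    RenderTexture.active = null;
    Debug.Log("ReadPixels: " + sw.ElapsedMilliseconds + " ms");

    sw.Restart();
    byte[] png = tex.EncodeToPNG();
    Debug.Log("EncodeToPNG: " + sw.ElapsedMilliseconds + " ms");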
     
    Last edited: Apr 6, 2017
  3. liortal

    liortal

    Joined:
    Oct 17, 2012
    Posts:
    3,562
    Is this something that needs to be done in realtime? If you have the texture object reference, why do you need to store it to disk? And why convert it to a PNG?
     
  4. Dreamwriter

    Dreamwriter

    Joined:
    Jul 22, 2011
    Posts:
    472
    It doesn't need to be in realtime, but better than 3 seconds per frame would be nice :) At that speed, rendering 10 minutes of 30fps data (18,000 frames) would take around 15 hours. I can't get into details about why we are doing this, but it's basically because we have different cameras applying special filters, and we are analyzing the results on a per-frame basis, comparing the different camera outputs to each other.
     
  5. grizzly

    grizzly

    Joined:
    Dec 5, 2012
    Posts:
    357
  6. Kronnect

    Kronnect

    Joined:
    Nov 16, 2014
    Posts:
    2,905
    Try EncodeToJPG. It's faster.
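
    For reference, it takes an optional quality parameter (default 75) - a one-line sketch:

    Code (CSharp):
    byte[] jpg = tex.EncodeToJPG(95);   // quality 1-100; higher = larger file, fewer artifacts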
     
  7. Dreamwriter

    Dreamwriter

    Joined:
    Jul 22, 2011
    Posts:
    472
    Unfortunately, JPEG is lossy, and we need lossless images. I tried Graphics.CopyTexture, but I just get gray images - I think Graphics.CopyTexture is really for copying from GPU to GPU, whereas we need to copy the RenderTexture's contents to the CPU.
     
  8. greg-harding

    greg-harding

    Joined:
    Apr 11, 2013
    Posts:
    524
    The docs state that CopyTexture can also copy CPU-side data when both textures are readable on the CPU - but a RenderTexture only exists on the GPU, while the readable data that EncodeToPNG uses lives on the CPU side of the destination texture.
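
    So, as far as we can tell, that is exactly why you get gray images: the copy only updates the destination's GPU-side data, while EncodeToPNG reads its CPU-side data. Roughly (sketch; rt and tex as before):

    Code (CSharp):
    // Likely reason for the gray images: the RenderTexture only exists on the GPU,
    // so CopyTexture only updates the destination's GPU-side data. EncodeToPNG
    // reads the Texture2D's CPU-side data, which the copy never touched.
    Graphics.CopyTexture(rt, tex);    // GPU-side copy only
    byte[] png = tex.EncodeToPNG();   // encodes stale CPU data -> gray image

    // Pulling the pixels onto the CPU still requires a readback:
    RenderTexture.active = rt;
    tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    RenderTexture.active = null;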
     
  9. karay

    karay

    Joined:
    Aug 29, 2013
    Posts:
    1
    I know I'm late to the party, but this discussion is the first result on Google.

    I'm working on a hobby project where I want to stream the camera view over a socket. I haven't found a solution yet, but here is a good article about this problem:

    ... ReadPixels() will trigger a GPU flush... "Flushing the GPU" means waiting for all remaining commands in the current command buffer to execute. The CPU and GPU can run in parallel, but during a flush the CPU sits idle waiting for the GPU to become idle as well, which is why this is also known as a "synchronization point."
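
    Newer Unity versions (2018.1+) add an AsyncGPUReadback API that schedules the GPU -> CPU copy without that synchronization point - the data arrives via a callback a few frames later. A minimal sketch, assuming a RenderTexture the camera is already rendering into (the PNG encode is still CPU-heavy, but the readback no longer stalls):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch of an asynchronous readback (Unity 2018.1+): the copy is scheduled and
    // the callback fires a few frames later, so the CPU never stalls waiting on the GPU.
    public class AsyncCapture : MonoBehaviour
    {
        public RenderTexture rt;   // the camera's target RenderTexture (set up elsewhere)

        public void Capture()
        {
            AsyncGPUReadback.Request(rt, 0, TextureFormat.RGBA32, OnReadback);
        }

        void OnReadback(AsyncGPUReadbackRequest request)
        {
            if (request.hasError) { Debug.LogError("GPU readback failed"); return; }

            var tex = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
            tex.SetPixels32(request.GetData<Color32>().ToArray());
            byte[] png = tex.EncodeToPNG();   // still slow, but no longer blocking on the GPU
            // ... write the png bytes to disk or push them over the socket ...
            Destroy(tex);
        }
    }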
     
    Last edited: Feb 12, 2018