
RenderTexture to Texture2D too slow?!

Discussion in 'General Graphics' started by andyz, Jun 12, 2019.

  1. andyz

    andyz

    Joined:
    Jan 5, 2010
    Posts:
    2,269
    Turns out copying a RenderTexture to a texture is a bit slow! With deep profiling, it's just Texture2D.ReadPixels, which takes tens of milliseconds for a 1024x1024 texture in the editor.
    It seems other people have posted about this too — can you avoid ReadPixels and do it another way?

    Code (CSharp):

        Texture2D target = new Texture2D( Width, Height, TextureFormat.RGBA32, false );

        RenderTexture renderTex = RenderTexture.GetTemporary( Width, Height, 0 );
        cam.targetTexture = renderTex;
        cam.Render(); // render stuff
        cam.targetTexture = null;

        // copy to Texture2D - slow bit!!
        RenderTexture.active = renderTex;
        target.ReadPixels( new Rect( 0, 0, Width, Height ), 0, 0 );
        target.Apply();
        RenderTexture.active = null;

        RenderTexture.ReleaseTemporary( renderTex );
     
    Last edited: Jun 12, 2019
    power0verwhelming likes this.
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    Yes. The solution is to never use ReadPixels() and to do everything on the GPU.

    A RenderTexture only ever really exists on the GPU, which is why you need to copy the data to a Texture2D asset, which is something that exists both on the GPU and CPU (though the two versions are separate, and they don’t always match!). ReadPixels asks the GPU to copy the data back to the CPU. This is slow in part because GPUs were designed to receive data from the CPU, not to send it back, but also because the GPU may be busy when the request is made. Either way, getting that data back can be quite slow in many situations.

    So the solution is often to simply not use ReadPixels. If you are looking to copy the render texture into a Texture2D to use it on something else in the scene, you can either use Graphics.CopyTexture(), which does the copy on the GPU, or just use the render texture itself directly.
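    As a minimal sketch of that GPU-side copy (assuming renderTex and target already exist with matching dimensions and compatible formats):

    ```csharp
    // Copies renderTex into target entirely on the GPU - no CPU stall.
    // Note: target's CPU-side data will NOT reflect this copy.
    Graphics.CopyTexture( renderTex, target );
    ```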

    If you’re looking to process the image in some fashion, you may want to do that work on the GPU using Graphics.Blit() and custom shaders, or a compute shader, instead. If the data really does need to be on the CPU, like for using the information in it for some gameplay code reason, or for taking a screenshot, then you want to look into using AsyncGPUReadback.
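    A rough sketch of the async readback route (the class and field names here are illustrative; renderTex is assumed to be an existing RenderTexture):

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;

    public class ReadbackExample : MonoBehaviour
    {
        public RenderTexture renderTex;

        void RequestReadback()
        {
            // Queue the readback; the callback fires a few frames later
            // instead of stalling the pipeline the way ReadPixels does.
            AsyncGPUReadback.Request( renderTex, 0, TextureFormat.RGBA32, OnReadback );
        }

        void OnReadback( AsyncGPUReadbackRequest request )
        {
            if ( request.hasError )
            {
                Debug.LogError( "GPU readback failed" );
                return;
            }
            var pixels = request.GetData<Color32>();
            // ... use pixels for gameplay logic, saving a screenshot, etc.
        }
    }
    ```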
     
  3. andyz

    andyz

    Joined:
    Jan 5, 2010
    Posts:
    2,269
    Just use the render texture... I never thought of that!!
    What's the difference, behind the scenes, between a RenderTexture and the plain Texture it inherits from?
    All helpful, thanks.
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    In what context?

    The base Texture class is basically a dummy class with a bunch of shared properties. The main thing in common, beyond the obvious set of shared properties, is they exist as a way to link the CPU side of things with the actual GPU side asset. When you assign a texture to a material, it’s just a reference number of sorts that’s being attached. When the CPU tells the GPU to render an object, it says “render this mesh ID, with this shader ID, with these settings and texture IDs”.

    Each of the child classes of Texture implements some set of unique features on top of that. Most of them, like Texture2D and TextureCube, etc., have an optional CPU side data storage for that image used either temporarily when initially loading the texture asset from disk and then cleared once it’s been uploaded to the GPU, or kept around for CPU side reading and manipulation. If you modify a Texture2D on the CPU, the GPU side image used for rendering will not show those changes until you call Apply(), which copies the data from the CPU to the GPU. Alternatively if you use CopyTexture() to copy one texture to another, if the texture you’re copying from doesn’t exist on the CPU (like textures which have had their CPU side data cleared, or render textures), then only the GPU data has changed and the CPU side data won’t show the results of the copy.
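    A tiny sketch of that CPU-to-GPU flow:

    ```csharp
    // SetPixel modifies only the CPU-side copy; rendering won't see
    // the change until Apply() uploads the data to the GPU.
    Texture2D tex = new Texture2D( 64, 64, TextureFormat.RGBA32, false );
    tex.SetPixel( 0, 0, Color.red ); // CPU-side data only
    tex.Apply();                     // upload to the GPU
    ```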

    RenderTexture doesn’t ever have a CPU side version of the data. It’s assumed it exists purely on the GPU at all times. Render textures have the distinction of being something that the GPU can render to. Specifically something that can be bound as a render target, and the output of a fragment shader can write to. Normal textures cannot do this. But both can be read from in the shader when set as a texture property on a material equally*, and the shader doesn’t know there’s any difference.

    * Some old mobile hardware has some render texture formats available that it can render to, but not read from, or at least not read from very well. But this isn’t an issue anymore.
     
  5. andyz

    andyz

    Joined:
    Jan 5, 2010
    Posts:
    2,269
    OK yes, I was confused: 'Texture' is only the basis for all the others, and I was thinking more of Texture2D.
    I suppose the only RenderTexture downside may be the depth buffer when it's required (luckily I can avoid it in my use case).
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    The depth buffer has no effect on sampling performance vs. an identically sized and formatted Texture2D. The depth buffer is only "a thing" when using it as a render target. Really, a render texture is a combination of a color render buffer and optional depth render buffer which are separate elements.
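    In code terms, the depth buffer is just a creation-time parameter:

    ```csharp
    // The third argument is the depth buffer precision in bits;
    // 0 means no depth buffer at all - a color buffer only.
    RenderTexture colorOnly      = new RenderTexture( 1024, 1024, 0 );
    RenderTexture colorPlusDepth = new RenderTexture( 1024, 1024, 24 );
    ```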
     
    AshwinMods likes this.
  7. john-mighty

    john-mighty

    Joined:
    Jan 17, 2019
    Posts:
    1
    So I have a similar problem where I need to save screenshots on a regular basis. I am working in mobile VR, and ReadPixels kills my frame rate. I can't use AsyncGPUReadback because it does not support OpenGL. Other threads recommended CopyTexture, but it sounds like there is no way to access that information in order to save it to disk? Any recommendations?
     
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    There's not really anything built into Unity that can help here. I believe some people have written native plugins that do this via OpenGL's Pixel Buffer Object interfaces, but I don't know of anyone who's released that code.
     
  9. arielfel

    arielfel

    Joined:
    Mar 19, 2018
    Posts:
    33
  10. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    It’s not overcoming ReadPixels(), it’s not using it and simply providing a slightly easier way to do the built in async read back that already existed and was discussed above. The problem is it still won’t work on devices that don’t support that. Otherwise it’s fine.
     
  11. DavidSWu

    DavidSWu

    Joined:
    Jun 20, 2016
    Posts:
    183
    It is worth mentioning that RenderTextures are typically not swizzled in memory (unlike Texture2D), or they are swizzled in a way that optimizes writes.
    You may not notice a difference, but if your target supports CopyTexture and you are going to read the texture a lot, it's best to copy it to a Texture2D for improved read-cache utilization.
    Then again, this is very platform dependent. It may not make a difference on some platforms.
     
    georgerh likes this.
  12. WookieWookie

    WookieWookie

    Joined:
    Mar 10, 2014
    Posts:
    35
    I wish Unity would prioritize this issue. I work on mobile games and want to use blurred screenshots for UI panels. But I can't capture a screenshot at a reduced resolution, and using CaptureScreenshotIntoRenderTexture() throws a Metal-related error.
     

  13. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,924
    Which issue are you talking about? ReadPixels reads data back from the GPU to the CPU. That's very slow, even if you do it asynchronously. CaptureScreenshotIntoRenderTexture is designed to take screenshots that will be used by CPU as raw byte data (as a file to store, as data to send over the network, etc). It's not made to be used as part of your render pipeline, just like ReadPixels wasn't.

    The solution is simple: don't bring data back to the CPU unless you need it. And for blurred UI panels, you don't need this at all. Use a camera to render your background image into a RenderTexture, and feed the texture directly into a panel shader that blurs it.
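    Sketch of that all-GPU setup (the backgroundCamera and panelMaterial references, and the blur shader they imply, are hypothetical):

    ```csharp
    // Render the background into a half-res RenderTexture and let the
    // panel's blur shader sample it directly - nothing returns to the CPU.
    RenderTexture rt = new RenderTexture( Screen.width / 2, Screen.height / 2, 0 );
    backgroundCamera.targetTexture = rt;
    panelMaterial.SetTexture( "_MainTex", rt );
    ```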
     
    Last edited: Jul 14, 2022
    Walter_Hulsebos and Kurt-Dekker like this.
  14. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    38,697
    Walter_Hulsebos likes this.
  15. WookieWookie

    WookieWookie

    Joined:
    Mar 10, 2014
    Posts:
    35
    Nope. Using Cameras is a shortsighted solution. That method only serves to blur the environment behind a UI, and is isolated to a single Camera. I build large scale social mobile games. In my architecture, there is a Camera in every UI scene, and all scenes get loaded async as modals. In a UI-heavy game like this, with a camera stack, any solution that uses Camera references will not work. It's simply not possible to have a Camera that sees all UI Canvases along with the world, aka a composite of all things... aka a screenshot. If I could make a Camera see all things, I'd just pop it on for 1 frame with a blur post effect attached. This method is old news.

    Also, I do want this to land in the Assets folder and continually update a RenderTexture file, which would be referenced by RawImages on my UI panels.

    My prototyped solution works very well, but it generates a full resolution screenshot that is then downsized in the same frame to a very small texture before being blurred. The only issue is the temporary full resolution screenshot (16MB). So if CaptureScreenshotIntoRenderTexture worked, then I'd be done. Instead I'm having our Rendering Engineer do essentially the same thing using a render pass.

    Code (CSharp):

        private IEnumerator WaitThenUpdateScreenshot()
        {
            yield return new WaitForEndOfFrame();
            runtimeTexture = ScreenCapture.CaptureScreenshotAsTexture();
            Debug.LogError("Size: " + Profiler.GetRuntimeMemorySizeLong(runtimeTexture) + " Bytes before downsize.");
            TextureResize.Bilinear(runtimeTexture,
                Mathf.RoundToInt(runtimeTexture.width * resolution),
                Mathf.RoundToInt(runtimeTexture.height * resolution),
                false,
                false);
            runtimeTexture.Apply();
            Debug.LogError("Size: " + Profiler.GetRuntimeMemorySizeLong(runtimeTexture) + " Bytes after downsize.");
            if (blurriness > 0) BlurTexture();
            else ApplyTextureToSprites();
        }
     
     
    Last edited: Aug 3, 2022
  16. joshuacwilde

    joshuacwilde

    Joined:
    Feb 4, 2018
    Posts:
    727
    For Metal on mobile (iOS, iPad, M series Mac), you can use a framebuffer fetch trick. Basically requires writing into a buffer in a fragment shader that only touches certain pixels (basically at runtime create a mesh that is a bunch of 1 pixel wide squares across the screen, with the distance between the pixels being larger for a smaller final scaled down image). Also you will want to enable earlydepthstencil on the shader (actually not sure if supported in Unity, so RIP if not) to prevent breaking depth optimizations.

    This will avoid the memory cost of a full screen texture, as well as the cost of copying all those full res pixels.

    EDIT :

    Misunderstood what you are saying I guess. The above will work but is overkill. Instead what you can do that will work on all platforms is just blit your full res texture straight into a low res texture, or if the aliasing is too high for that to look good, you can at least blit to a half res texture (and get bilinear filtering smoothing from that) then blit into subsequent lower mips for a lot smoother looking result (at the cost of more memory operations of course).
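    Sketch of that progressive downscale (assuming src is the full-res RenderTexture):

    ```csharp
    // Each Blit samples with bilinear filtering, so halving repeatedly
    // looks much smoother than one large downscale in a single step.
    RenderTexture half    = RenderTexture.GetTemporary( src.width / 2, src.height / 2, 0 );
    RenderTexture quarter = RenderTexture.GetTemporary( src.width / 4, src.height / 4, 0 );
    Graphics.Blit( src, half );
    Graphics.Blit( half, quarter );
    RenderTexture.ReleaseTemporary( half );
    // ... use quarter, then ReleaseTemporary it as well.
    ```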
     
  17. WookieWookie

    WookieWookie

    Joined:
    Mar 10, 2014
    Posts:
    35
    Interesting about the framebuffer fetch, thank you. Yes, these are the things my Rendering Engineer is mentioning. I just wish I could do it more directly by my little self. I'm a UI Lead and just prototyping the tool, as I do all our tools. So the real SDEs haven't really been involved yet.
     
  18. Zoodinger

    Zoodinger

    Joined:
    Jul 1, 2013
    Posts:
    43
    On my main computer it works fine, so I thought everything was working great until I got a complaint after releasing the first demo version. I verified the problem persists on another PC of mine, but only in the released build; it works OK in the editor. This is such a basic feature that it's unthinkable something this important has been so problematic for so long. Is there any update on this? I have clients and deadlines to deal with, and I don't want to wait an indeterminate amount of time for Unity to fix the problems they created.

    All I'm doing is render with another camera to a render target texture that is then displayed on an in-game monitor. The scene isn't even big with either camera so even if I rendered the whole thing twice there shouldn't have been a problem.
     
  19. Kurt-Dekker

    Kurt-Dekker

    Joined:
    Mar 16, 2013
    Posts:
    38,697
    Well, you better start thinking about it! See below.

    And please, don't necro-post. Start your own post, it's FREE.

    In your NEW post, here is...

    How to report your problem productively in the Unity3D forums:

    http://plbm.com/?p=220

    This is the bare minimum of information to report:

    - what you want
    - what you tried
    - what you expected to happen
    - what actually happened, log output, variable values, and especially any errors you see
    - links to documentation you used to cross-check your work (CRITICAL!!!)

    The purpose of YOU providing links is to make our job easier, while simultaneously showing us that you actually put effort into the process. If you haven't put effort into finding the documentation, why should we bother putting effort into replying?

    Do not TALK about code without posting it. Do NOT post unformatted code. Do NOT retype code. Do NOT post screenshots of code. Use copy/paste and post code properly. ONLY post the relevant code, and then refer to it in your discussion. Do NOT post photographs of code.

    If you post a code snippet, ALWAYS USE CODE TAGS:

    How to use code tags: https://forum.unity.com/threads/using-code-tags-properly.143875/

    DO NOT OPTIMIZE "JUST BECAUSE..." If you don't have a problem, DO NOT OPTIMIZE!

    If you DO have a problem, there is only ONE way to find out. Always start by using the profiler:

    Window -> Analysis -> Profiler

    Failure to use the profiler first means you're just guessing, making a mess of your code for no good reason.

    Not only that but performance on platform A will likely be completely different than platform B. Test on the platform(s) that you care about, and test to the extent that it is worth your effort, and no more.

    https://forum.unity.com/threads/is-...ng-square-roots-in-2021.1111063/#post-7148770

    Remember that optimized code is ALWAYS harder to work with and more brittle, making subsequent feature development difficult or impossible, or incurring massive technical debt on future development.

    Don't forget about the Frame Debugger either, available right near the Profiler in the menu system. Remember that you are gathering information at this stage. You cannot FIX until you FIND.

    Notes on optimizing UnityEngine.UI setups:

    https://forum.unity.com/threads/how...form-data-into-an-array.1134520/#post-7289413

    At a minimum you want to clearly understand what performance issues you are having:

    - running too slowly?
    - loading too slowly?
    - using too much runtime memory?
    - final bundle too large?
    - too much network traffic?
    - something else?

    If you are unable to engage the profiler, then your next solution is gross guessing changes, such as "reimport all textures as 32x32 tiny textures" or "replace some complex 3D objects with cubes/capsules" to try and figure out what is bogging you down.

    Each experiment you do may give you intel about what is causing the performance issue that you identified. More importantly let you eliminate candidates for optimization. For instance if you swap out your biggest textures with 32x32 stamps and you STILL have a problem, you may be able to eliminate textures as an issue and move onto something else.

    This sort of speculative optimization assumes you're properly using source control so it takes one click to revert to the way your project was before if there is no improvement, while carefully making notes about what you have tried and more importantly what results it has had.
     
  20. arkano22

    arkano22

    Joined:
    Sep 20, 2012
    Posts:
    1,924
    This thread is about copying RenderTexture data to a Texture2D, and the reasons why it's slow. You posted on a years-old thread about something completely unrelated to the issue you're talking about. In my book, that's a necro.

    As Kurt suggested, you should start your own post about your particular issue.

    No one is treating you like an idiot, just trying to offer helpful advice.

    Profile your build. It will tell you exactly what's slow and why. In case it's due to Unity and not your own scene setup/code, report a bug and provide a project that can be used to reproduce the issue.
     
  21. Zoodinger

    Zoodinger

    Joined:
    Jul 1, 2013
    Posts:
    43
    No I actually am an idiot, I posted in the wrong thread which is why my comment was out of place and why I was baffled by the response. I didn't mean to post it here.