
Graphics.Blit vs GrabPass vs RenderTexture?

Discussion in 'Shaders' started by mahdiii, Mar 27, 2018.

  1. mahdiii

    mahdiii

    Joined:
    Oct 30, 2014
    Posts:
    856
    Hi
     I would like to capture the camera screen and apply effects like noise, hue change, distortion, etc.
     Which method is more suitable performance-wise on mobile devices:
     a GrabPass, or post-processing with the OnRenderImage function and Graphics.Blit?
     And why is your suggested way cheaper? Thanks
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Don't use a grab pass. You're better off using OnRenderImage.

    But also don't use OnRenderImage.

    I believe the "correct" way to do image effects on mobile is to have your main camera render to a render texture, then have a dummy camera that is set to render basically nothing but a quad with that render texture. Alternatively, I think you can use a CommandBuffer and Blit on that dummy camera. OnRenderImage does a screen copy, which can be expensive on mobile, and which is also what the GrabPass you're trying to avoid does. For modern high-end phones it likely doesn't matter too much and you can probably just use OnRenderImage, but it makes a difference on older devices.
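
    That camera-to-render-texture setup might look roughly like this. This is a minimal sketch, not the exact method described above: the component and field names (RenderToTextureSetup, postMaterial) are made up, and only the Camera/RenderTexture calls are real Unity API.

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class RenderToTextureSetup : MonoBehaviour
{
    public Material postMaterial; // shader that applies noise/hue/distortion (hypothetical)
    private Camera mainCam;
    private RenderTexture target;

    void OnEnable()
    {
        mainCam = GetComponent<Camera>();
        target = new RenderTexture(Screen.width, Screen.height, 24);
        mainCam.targetTexture = target;    // main camera now renders into the RT
        postMaterial.mainTexture = target; // the dummy camera's quad samples this RT
    }

    void OnDisable()
    {
        mainCam.targetTexture = null;
        target.Release();
    }
}
```

    A second dummy camera (higher depth, culling mask limited to the quad's layer) then draws a fullscreen quad with postMaterial into the backbuffer.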
     
  3. Kumo-Kairo

    Kumo-Kairo

    Joined:
    Sep 2, 2013
    Posts:
    343
    Just to add to bgolus's wonderful answer: keep in mind that Unity needs at least one camera in the scene that is not rendering to a render texture, just to send something to the backbuffer (even if its culling mask is set to Nothing). Unity effectively ignores you if you just want to do some general Graphics/GL stuff while no camera is rendering into Unity's backbuffer.

    Generally the OnRenderImage callback works fine if you don't need any sort of "master" downscale of the 3D scene (rendering your 3D content to a lower-res texture, at say 0.8 or 0.6 scale, and compositing full-res UI on top of it).

    You can find more info here
    https://forum.unity.com/threads/pos...atives-to-graphics-blit-onrenderimage.414399/
    https://forum.unity.com/threads/pos...shnow-problems-with-render-to-texture.500534/
    https://forum.unity.com/threads/render-textures-vs-image-effects-on-mobile-why.502469/

    Also note that generic distortion based on an overlay normal map won't work efficiently on older devices because it involves dependent texture reads. What you really need is to render a tessellated fullscreen quad with some vertex-shader tweaks done to the w coordinate. This is how it's done in Fetty Wap's Nitro Nation Stories and in the old Shadowgun game (they even have a Unite presentation somewhere)
     
    zhuhaiyia1, nbaris, rcd123 and 2 others like this.
  4. mahdiii

    mahdiii

    Joined:
    Oct 30, 2014
    Posts:
    856
    Thank you Kumo-Kairo.
    Thank you bgolus. Why is a RenderTexture better and cheaper? OnRenderImage and GrabPass copy all the pixels of the screen, while a render texture renders the objects in the scene with fewer pixels. I have a lot of full-screen UIs on top.

    Does the Post Processing Stack use command buffers to reduce draw calls and passes?
     
    Last edited: Mar 28, 2018
    jimmyjamesbond likes this.
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Render textures are used no matter what option you use.

    GrabPass is creating a render texture behind the scenes, copying the screen contents (or more accurately the camera’s current render target, which could be the backbuffer or a render texture) to that new render texture, and setting it as the _GrabTexture for the remaining shader passes. Those shader passes render directly to the camera’s target.

    OnRenderImage is also creating a render texture and copying the screen contents to it, but a reference to that render texture is passed as the first parameter of the OnRenderImage function, with a second render texture passed in that's expected to be the "output", or destination. The benefit of OnRenderImage over a grab pass is that you don't have to place anything in the scene, which removes some cost, and the copy happens at the end of rendering everything else that camera sees, which prevents some potential GPU stalls. It also more easily allows for multi-pass effects, as you can pass the output of one Blit as the input of the next, where a grab pass shader would have to rely purely on blending or on doing additional screen copies. However, OnRenderImage has one bit of additional overhead that a grab pass does not: the "output" render texture eventually needs to be rendered back into the backbuffer with a hidden Blit. Overall the added flexibility is usually a win over a grab pass.
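
    A minimal sketch of that multi-pass chaining inside OnRenderImage (the two materials here are made-up placeholders; Graphics.Blit and RenderTexture.GetTemporary are real Unity API):

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class MultiPassEffect : MonoBehaviour
{
    public Material noiseMat; // hypothetical pass 1 material
    public Material hueMat;   // hypothetical pass 2 material

    // src is the hidden copy of the camera's target; whatever ends up in dest
    // is what Unity blits back to the backbuffer (the extra hidden Blit).
    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        var tmp = RenderTexture.GetTemporary(src.width, src.height, 0, src.format);
        Graphics.Blit(src, tmp, noiseMat); // pass 1: the output of this Blit...
        Graphics.Blit(tmp, dest, hueMat);  // pass 2: ...is the input of the next
        RenderTexture.ReleaseTemporary(tmp);
    }
}
```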

    Setting a render texture as a camera’s target removes the expensive backbuffer to render texture copy as the camera is already directly rendering to a render texture. You can also then render directly to the backbuffer as the final step of your post process chain rather than letting Unity handle it as the additional hidden Blit call like with the OnRenderImage.

    Command buffers are just a tool for letting you control when to do things, as well as avoid the OnRenderImage copy.
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Here’s a basic breakdown of the above:

    Grab pass
    1. Copy backbuffer to render texture _GrabTexture. (Hidden)
    2. Run shader reading from _GrabTexture, draw to backbuffer.
    * Repeat steps 1&2 for every post process pass.

    OnRenderImage
    1. Copy backbuffer to render texture SRC. (Hidden)
    2. Run shader with Graphics.Blit reading from SRC, draw to another render texture.
    3. Blit reading from DST, draw to backbuffer. (Hidden)
    * Repeat only step 2 for every post process pass, just make sure the last render texture rendered to is the output DST render texture.

    Set render texture as camera target
    1. Run shader with CommandBuffer.Blit reading from target, draw to another render texture or backbuffer.
    * Repeat step 1 while drawing to a render texture for additional post process passes.
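
    A sketch of how a CommandBuffer version can be wired up. The component name, material, and event choice are illustrative (CommandBuffer.Blit, GetTemporaryRT, and CameraEvent are real Unity API); this variant copies into a temporary first, since a target generally can't be read and written in the same Blit:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class CommandBufferPost : MonoBehaviour
{
    public Material postMat; // hypothetical post-process material
    private CommandBuffer cb;

    void OnEnable()
    {
        var cam = GetComponent<Camera>();
        cb = new CommandBuffer { name = "Post" };
        int tmp = Shader.PropertyToID("_PostTemp");
        // -1/-1 means "camera pixel width/height" for the temporary RT.
        cb.GetTemporaryRT(tmp, -1, -1, 0, FilterMode.Bilinear);
        // Copy the camera target into the temporary, then draw it back with the effect.
        cb.Blit(BuiltinRenderTextureType.CameraTarget, tmp);
        cb.Blit(tmp, BuiltinRenderTextureType.CameraTarget, postMat);
        cb.ReleaseTemporaryRT(tmp);
        cam.AddCommandBuffer(CameraEvent.AfterEverything, cb);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterEverything, cb);
        cb.Release();
    }
}
```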
     
    Aenyyezi, tinyant, JMota7 and 13 others like this.
  7. mahdiii

    mahdiii

    Joined:
    Oct 30, 2014
    Posts:
    856
    Awesome dude
    You helped me, but one question: I had thought the back buffer and front buffer are swapped quickly by changing a pointer, not by copying all the pixel values.
    I got it. It copies from the backbuffer to a render texture so it can be changed before it reaches the front buffer.
     
    Last edited: Mar 28, 2018
  8. Kumo-Kairo

    Kumo-Kairo

    Joined:
    Sep 2, 2013
    Posts:
    343
    That's not really true if you look at native profilers. Unity doesn't make a pre-copy before supplying the source to you in OnRenderImage (at least that's the case in all versions 5.3+). You still have to manage further passes yourself, but that's up to the developer. In my case the first pass inside OnRenderImage actually makes a downsampled antialiased copy of the supplied source render texture (the current contents of Unity's backbuffer, which is not a real backbuffer but just a regular framebuffer like any other render texture). And this pass really registers itself like that: grabbing that framebuffer's contents directly and rendering them into a smaller one without any additional copies.

    One thing OnRenderImage really requires you to do, though, is populate that destination framebuffer, and that may be time-consuming if you are rendering at full res. But it's still not the backbuffer yet. There is one hidden framebuffer copy, and it happens after the frame is completely done (all 3D objects and overlays (uGUI) are rendered): it's really just an identity shader that copies Unity's main framebuffer into the backbuffer so it can eglSwapBuffers right after that. It's a shame we can't control that last step, as in some cases it could be beneficial to customize that final pass.
     
  9. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,792
    Are there benefits to CommandBuffer.Blit as opposed to just doing:
    Code (CSharp):
    Camera camera;       // cached Camera component
    Material postMat;    // post-process material
    int postPassNo;      // pass index within postMat
    RenderTexture myRenderTexture;

    void OnPreRender()
    {
        myRenderTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 24);
        camera.targetTexture = myRenderTexture;
    }

    void OnPostRender()
    {
        camera.targetTexture = null;
        // Blit to the backbuffer (null destination) with the post material.
        Graphics.Blit(myRenderTexture, null as RenderTexture, postMat, postPassNo);
        // Whatever other blits you may need
        RenderTexture.ReleaseTemporary(myRenderTexture);
    }
    ?
     
  10. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Good to know! I think I last looked around 5.1, but I might be misremembering from Unity 4 (or just wrong, that is of course an option).

    Not that I can think of, apart from being able to inject stuff after opaque passes and before transparent ones.
     
    AcidArrow likes this.
  11. benthroop

    benthroop

    Joined:
    Jan 5, 2007
    Posts:
    263
    In VR, does Unity warp the resulting RenderTexture before or after OnRenderImage?
     
  12. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    After. Unity knows nothing of the warping, that all happens on the Oculus / OpenVR / PSVR side outside of Unity. Immediately after OnRenderImage the destination target gets passed to the current VR system's compositor to do any additional external UI layers, reprojection, and warping.
     
    Covfefeh likes this.
  13. DavidSWu

    DavidSWu

    Joined:
    Jun 20, 2016
    Posts:
    183
    On mobile platforms we have had the most luck with the following:
    - Render all 3D to a lower-res render texture with MSAA
    - Apply post effects while copying this to the back buffer (no MSAA)
    - Draw UI at higher res without MSAA (most UI doesn't benefit from MSAA anyway)
     
  14. TheWiseKodama

    TheWiseKodama

    Joined:
    Oct 15, 2018
    Posts:
    12
    @DavidSWu
    Could you break that down a bit (with some code if possible)? I'm trying to achieve the same thing but can't seem to find any info.
     
  15. DavidSWu

    DavidSWu

    Joined:
    Jun 20, 2016
    Posts:
    183
    I apologize for the delay on this.
    As it turns out, LWRP has a solid implementation of this system by default.
    I would encourage you to look at the LWRP source files (or URP, they were the same last I checked).
    You turn on dynamic scaling and set a ratio (like 0.707 for half the pixels), and everything else is taken care of for you.
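
    In URP terms, the knob being described is the pipeline asset's render scale. A minimal sketch, assuming the active pipeline asset is a UniversalRenderPipelineAsset (under LWRP the class was named differently):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public static class RenderScaleHelper
{
    // Scales the 3D render target; screen-space UI stays at native resolution.
    public static void SetHalfPixelCount()
    {
        var urp = GraphicsSettings.currentRenderPipeline as UniversalRenderPipelineAsset;
        if (urp != null)
            urp.renderScale = 0.707f; // ~sqrt(0.5): roughly half the pixels
    }
}
```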
     
  16. TheWiseKodama

    TheWiseKodama

    Joined:
    Oct 15, 2018
    Posts:
    12
    What about the UI? How do you make it render at a higher resolution? Does the dynamic scaling not affect the UI?
     
  17. DavidSWu

    DavidSWu

    Joined:
    Jun 20, 2016
    Posts:
    183
    This may have changed with newer versions of URP, but here is how it last worked:
    - Draw 3D at lower res to an MSAA buffer
    - Blit with post processing to the "frame buffer", usually at full res
    - Draw UI on top of that
    This works for Screen Space canvases, but World Space canvases render into the 3D scene (which makes text harder to read), and Camera Space canvases are just world space canvases that get moved and rebuilt each frame.
     
  18. resetme

    resetme

    Joined:
    Jun 27, 2012
    Posts:
    204
    In all my mobile tests, the fastest, in order, are:

    Draw directly to a render texture set as the camera target
    OnPreRender / OnPostRender
    CommandBuffer
    OnRenderImage
    GrabPass - very slow and uses a lot of gmem

    Regarding MSAA: if you do depth of field effects using the GPU depth pass, you can't use MSAA at all on iOS and Android, confirmed by our Unity masters.
    The only way to use MSAA with depth is with vertex depth.
     
  19. DavidSWu

    DavidSWu

    Joined:
    Jun 20, 2016
    Posts:
    183
    What blocks MSAA and depth of field? We use depth of field and we are starting to use depth effects.
    There are plenty of gotchas, but we haven't been blocked yet.
    I worry that we may get blocked when we start testing on more devices and iOS...?
     
  20. resetme

    resetme

    Joined:
    Jun 27, 2012
    Posts:
    204

    With MSAA turned on, the native depth texture comes back blank (all 0).
    You can get MSAA and depth if you use Unity's camera depth (render all meshes twice, lol)
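
    For anyone looking for that fallback (Unity's own camera depth pass, which does re-render the geometry), the toggle is just this, sketched as a small component:

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class EnableCameraDepth : MonoBehaviour
{
    void OnEnable()
    {
        // Ask Unity to generate _CameraDepthTexture via an extra depth pass
        // (geometry drawn again), which coexists with MSAA on the color target.
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
    }
}
```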
     
  21. DavidSWu

    DavidSWu

    Joined:
    Jun 20, 2016
    Posts:
    183
    Have you tried setting depthDescriptor.bindMS?
     
  22. DavidSWu

    DavidSWu

    Joined:
    Jun 20, 2016
    Posts:
    183
  23. JMota7

    JMota7

    Joined:
    Nov 15, 2017
    Posts:
    6
    Regarding this: I have a shader using a "_BackgroundTexture" via GrabPass, but I want to use it with URP (where GrabPass is not supported). What should I do? I have read about using the Opaque Texture, applying CommandBuffer.Blit, etc., but I don't think I get how I can keep my shader working in URP.

    I'm including a gist, maybe someone can give me a hint... :rolleyes:

    Thanks!

    https://gist.github.com/jmota77/ae217c90d72f59e8d8ab0dc9776b6767

    P.S.: Most of the interesting information I've read and from where I'm learning came from your posts @bgolus !
     
  24. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Modifying that shader to work with URP requires enough changes that you're effectively rewriting it from scratch. So I'd recommend remaking it with Shader Graph. At that point it's an Unlit graph: add a Scene Color node, a Saturation node, and you're "done".*

    * Though unlike doing this with GrabPass, any transparent objects in the scene will not be visible. There's no easy workaround for that. You also won't be able to set
    ZTest Off
    on a Shader Graph shader.

    Depending on what you're using this for, the solution might be a custom Renderer Feature, which may or may not need a custom C# script to get working, and which will likely require a handwritten shader. And depending on exactly how you're using that shader, the answer might still be "no, this is impossible".
     
    SudoCat and JMota7 like this.
  25. JMota7

    JMota7

    Joined:
    Nov 15, 2017
    Posts:
    6
    I'm using this shader just for a loading screen while the asset bundles are being downloaded in a mobile app.

    I'll try to recreate it using Shader Graph; otherwise, I'll have to remove some cool effects related to animated outlines and use Post Processing V2 instead of URP.

    Thanks for the info and the quick response!