
Graphics.Blit(...) behaving differently in URP?

Discussion in 'Universal Render Pipeline' started by LB_Chris, Feb 19, 2020.

  1. LB_Chris

    Joined:
    Jan 29, 2020
    Posts:
    33
    Hey folks,

    I have a seemingly simple problem regarding the Universal Render Pipeline, specifically the Graphics.Blit() function call. My ultimate goal is to stream the screen of an Oculus Quest to a tablet via WLAN. I am currently limited by the performance cost on the Quest when capturing a screenshot.

    I have tried various ways to limit the cost of that screenshotting process. Instead of having a camera render into a custom render texture, I found that using the main camera's internal render texture, which is ultimately shown on the screen, might be useful, since it would save me a whole camera.Render() pass (which is especially costly on the Quest).

    This "default" render texture's resolution is too high to call Texture2D.ReadPixels(...) on, since that would take several milliseconds. So the plan is to create a second RenderTexture (customRenderTexture) with a lower resolution and somehow copy and scale down the data from RenderTexture.active into my customRenderTexture.

    Using the built-in RP, none of this was a problem. I used Graphics.Blit(RenderTexture.active, customRenderTexture), which does the copying and conveniently also scales the picture down from RenderTexture.active's resolution to my desired resolution. So far so good.

    BUT, using URP, some problems arise and this approach doesn't work anymore. The core problem is that Graphics.Blit(...) no longer scales the source RT down to the target RT's resolution.

    The target RT only receives the lower left part of the source RT, with the size of my target resolution. That behaviour seems similar to calling Texture2D.ReadPixels(Rect rect(...), 0, 0) with a lower rect resolution than the active RT's resolution.

    That means I am now either stuck with a high-resolution RT, which is too big to read back performantly with Texture2D.ReadPixels(...), OR with a low-resolution RT that only contains the lower left part of the source RT.

    I could be wrong, but I have identified the different behaviour of the Graphics.Blit(...) function in URP as the culprit. I would rather not go back to a method that involves a second camera.Render() pass, especially since everything was working fine in the built-in RP.

    Does anyone have insight into what the problem might be, or can anyone propose another approach or workaround?


    Additional Notes:
    - I am testing my code on an Oculus Rift S in Unity play mode.
    - In the built-in RP I hook into the OnPostRender() call; in URP I hook into the RenderPipelineManager.endCameraRendering event by adding RenderPipelineManager.endCameraRendering += CustomOnPostRender;. CustomOnPostRender() does the Graphics.Blit(...) operation.
    - Current behaviour of Graphics.Blit(...) in URP (1): The target RT only receives the lower left part of the source RT, with the size of my target resolution. That behaviour seems similar to calling Texture2D.ReadPixels(Rect rect(...), 0, 0) with a lower rect resolution than the screen.
    - Current behaviour of Graphics.Blit(...) in URP (2): In addition to (1), the target RT is inverted on the y axis (it's upside down), but I already have a workaround for that, so it's not a problem at all.
    - When using this approach in the built-in RP, the target RT contains a picture with fish-eyed versions of the two eyes. In URP, the target image is a "normal" flat version of what a regular (non-VR) camera would produce.
    - In URP, RenderTexture.active is null, even if camera.forceIntoRenderTexture is set to true. So calling Graphics.Blit(null, customRenderTexture) ends in the same result.
    - Setting the resolution of the target render texture to the source's resolution results in correct, expected behaviour, but then the target resolution is too high again.
    - Graphics.Blit(...) can be called with a Vector2 scale and a Vector2 offset, but changing those values does not change the result in any way. (Does anyone know why?)


    Here is an improvised code snippet:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    public class Example : MonoBehaviour
    {
        private RenderTexture _customRenderTexture;
        private Vector2Int _customResolution = new Vector2Int(576, 324); // mock values
        private Texture2D _targetTexture;

        void Awake()
        {
            RenderPipelineManager.endCameraRendering += CustomOnPostRender;
            _customRenderTexture = new RenderTexture(_customResolution.x, _customResolution.y, 16, RenderTextureFormat.ARGB32);
            _targetTexture = new Texture2D(_customResolution.x, _customResolution.y, TextureFormat.RGB24, false);
        }

        private void OnDestroy()
        {
            RenderPipelineManager.endCameraRendering -= CustomOnPostRender;
        }

        public void CustomOnPostRender(ScriptableRenderContext context, Camera camera)
        {
            RenderTexture source = RenderTexture.active;
            // effectively, source is null here

            Graphics.Blit(source, _customRenderTexture);
            // _customRenderTexture now contains data representing an image of the lower left
            // 576x324 pixels of the rendered image of the camera.
            // I would like _customRenderTexture to be a scaled-down version of the whole
            // screen of the source RT's image.
            // In the built-in RP that would now be the case; in URP it's not.

            // Afterwards I write the _customRenderTexture's content into a Texture2D, but
            // _customRenderTexture doesn't contain the image I expect it to have.
            RenderTexture prevRenderTexture = RenderTexture.active;
            RenderTexture.active = _customRenderTexture;
            _targetTexture.ReadPixels(new Rect(0, 0, _customResolution.x, _customResolution.y), 0, 0, false);
            RenderTexture.active = prevRenderTexture;

            // Encode _targetTexture and send it to the tablet
            byte[] sendData = _targetTexture.EncodeToJPG(75);
            //SendBytes(sendData);
        }
    }
     
  2. weiping-toh

    Joined:
    Sep 8, 2015
    Posts:
    186
    Graphics.Blit does not work in SRP. You would need to use a CommandBuffer and use the RenderContext to execute it, or add it to the camera queue.
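    For example, something along these lines could work from the same endCameraRendering hook used above (a rough, untested sketch; the class name and the hard-coded resolution are made up, and whether the commands land at the right point in the frame depends on the URP version, so the render-feature approach further down in this thread is the more reliable option):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Rough sketch: record the downscaling copy into a CommandBuffer and let the
    // render context execute it, instead of calling Graphics.Blit directly.
    public class CommandBufferBlitExample : MonoBehaviour
    {
        private RenderTexture _customRenderTexture;

        void OnEnable()
        {
            _customRenderTexture = new RenderTexture(576, 324, 0, RenderTextureFormat.ARGB32);
            RenderPipelineManager.endCameraRendering += OnEndCameraRendering;
        }

        void OnDisable()
        {
            RenderPipelineManager.endCameraRendering -= OnEndCameraRendering;
        }

        private void OnEndCameraRendering(ScriptableRenderContext context, Camera camera)
        {
            CommandBuffer cmd = CommandBufferPool.Get("Downscale blit");

            // CommandBuffer.Blit scales the source down to the destination's resolution,
            // similar to what Graphics.Blit did in the built-in RP.
            cmd.Blit(BuiltinRenderTextureType.CameraTarget, _customRenderTexture);

            // Register the buffer with the render context; it runs when the context is submitted.
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }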
     
  3. LB_Chris

    Joined:
    Jan 29, 2020
    Posts:
    33
    I looked into what you proposed and found a solution that fixed all my problems :)

    I ended up creating my own CustomRenderPassFeature, where I blit BuiltinRenderTextureType.CameraTarget to my custom render texture after all rendering is done. CommandBuffer.Blit() also scales down the image.

    Thanks a lot for mentioning CommandBuffers! :)
     
  4. Clonze

    Joined:
    Dec 15, 2012
    Posts:
    26
    Any chance you could post simplified code for accomplishing the blit with the CustomRenderPassFeature?
    (I don't have much experience with this stuff and I'm really confused.)
     
  5. LB_Chris

    Joined:
    Jan 29, 2020
    Posts:
    33

    Sure. To sum it up: when right-clicking in your project folder -> Create -> Rendering -> URP -> Render Feature, a template is created where you can add your custom code. Below is my whole script. Note that in the line "m_ScriptablePass.renderPassEvent = RenderPassEvent.AfterRendering;", you can set the point in the render pipeline where you want to hook in. In my case it's after all rendering is done, though I may change that in the future. Also note that CommandBuffer.Blit(...) sets the destination RT as the new render target by default. I did not want that to happen, so after the blit I set the render target back to whatever it was before.

    You will also have to tell the render pipeline to actually use that custom render feature. To do that, look for your render pipeline renderer asset (I think it is created when creating your URP asset and its default name is "UniversalRenderPipelineAsset_Renderer") and under Render Features, add your class (in my case "CustomRenderPassFeature").

    I don't have much experience with this topic either, so use it with caution.

    Code (CSharp):
    using System.IO;
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    public class CustomRenderPassFeature : ScriptableRendererFeature
    {
        class CustomRenderPass : ScriptableRenderPass
        {
            private RenderTargetIdentifier _customRenderTargetIdentifier;
            private Vector2 _scale;
            private Vector2 _offset;

            // This method is called before executing the render pass.
            // It can be used to configure render targets and their clear state, and to create temporary render target textures.
            // When empty, this render pass will render to the active camera render target.
            // You should never call CommandBuffer.SetRenderTarget. Instead call <c>ConfigureTarget</c> and <c>ConfigureClear</c>.
            // The render pipeline will ensure target setup and clearing happens in a performant manner.
            public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
            {
                /* MY CODE START */
                // some internal conditions
                bool takeScreenShot = StreamingManager.Instance != null &&
                                      StreamingManager.Instance._renderType == StreamingManager.RenderType.Blitting &&
                                      StreamingManager.Instance.TakeScreenshotThisFrame();

                if (takeScreenShot)
                {
                    if (StreamingManager.Instance._squeezeForDoubleFishEye)
                    {
                        _scale = new Vector2(0.3f, 0.8f);
                        _offset = new Vector2(0.1f, 0.1f);
                    }
                    else
                    {
                        _scale = new Vector2(1f, 1f);
                        _offset = new Vector2(0f, 0f);
                    }
                    _customRenderTargetIdentifier = StreamingManager.Instance._customRenderTextureIdentifier;
                }
                /* MY CODE END */
            }

            // Here you can implement the rendering logic.
            // Use <c>ScriptableRenderContext</c> to issue drawing commands or execute command buffers
            // https://docs.unity3d.com/ScriptReference/Rendering.ScriptableRenderContext.html
            // You don't have to call ScriptableRenderContext.Submit; the render pipeline will call it at specific points in the pipeline.
            public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
            {
                CommandBuffer cmd = CommandBufferPool.Get();

                /* MY CODE START */
                bool takeScreenShot = Application.isPlaying &&
                                      StreamingManager.Instance != null &&
                                      StreamingManager.Instance._renderType == StreamingManager.RenderType.Blitting &&
                                      StreamingManager.Instance.TakeScreenshotThisFrame();

                if (takeScreenShot)
                {
                    RenderTargetIdentifier prev = BuiltinRenderTextureType.CameraTarget;
                    cmd.Blit(BuiltinRenderTextureType.CameraTarget, _customRenderTargetIdentifier, _scale, _offset);
                    // cmd.Blit sets the destination as the current render target, so restore the camera target afterwards.
                    cmd.SetRenderTarget(prev);
                }
                /* MY CODE END */

                // execution
                context.ExecuteCommandBuffer(cmd);
                CommandBufferPool.Release(cmd);
            }

            /// Cleanup any allocated resources that were created during the execution of this render pass.
            public override void FrameCleanup(CommandBuffer cmd)
            {
            }
        }

        CustomRenderPass m_ScriptablePass;

        public override void Create()
        {
            m_ScriptablePass = new CustomRenderPass();

            // Configures where the render pass should be injected.
            m_ScriptablePass.renderPassEvent = RenderPassEvent.AfterRendering;
        }

        // Here you can inject one or multiple render passes in the renderer.
        // This method is called when setting up the renderer, once per camera.
        public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
        {
            renderer.EnqueuePass(m_ScriptablePass);
        }
    }
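    In case it helps: the _customRenderTextureIdentifier above is just a RenderTargetIdentifier wrapping a low-resolution RenderTexture. A simplified sketch of that setup and the readback afterwards (the class and member names here are made up, not the actual StreamingManager):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Simplified sketch: create the low-resolution RenderTexture once, expose it as a
    // RenderTargetIdentifier for the render pass to blit into, and read it back afterwards.
    public class StreamingManagerSketch : MonoBehaviour
    {
        public RenderTexture CustomRenderTexture { get; private set; }
        public RenderTargetIdentifier CustomRenderTextureIdentifier { get; private set; }

        private Texture2D _readbackTexture;

        void Awake()
        {
            CustomRenderTexture = new RenderTexture(576, 324, 0, RenderTextureFormat.ARGB32);
            CustomRenderTexture.Create();
            CustomRenderTextureIdentifier = new RenderTargetIdentifier(CustomRenderTexture);
            _readbackTexture = new Texture2D(576, 324, TextureFormat.RGB24, false);
        }

        // Call this after the render pass has blitted into CustomRenderTexture,
        // e.g. from the streaming code, to get a JPG for sending over WLAN.
        public byte[] ReadBackAndEncode()
        {
            RenderTexture previous = RenderTexture.active;
            RenderTexture.active = CustomRenderTexture;
            _readbackTexture.ReadPixels(new Rect(0, 0, 576, 324), 0, 0, false);
            RenderTexture.active = previous;
            return _readbackTexture.EncodeToJPG(75);
        }
    }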
     
  6. chrismarch

    Joined:
    Jul 24, 2013
    Posts:
    465
    Is this in the documentation? If so, I haven't found it yet.
     
  7. weiping-toh

    Joined:
    Sep 8, 2015
    Posts:
    186
    I meant that it does not work in the same fashion as one might expect from the default render pipeline. Graphics.Blit still works, but not in the per-camera fashion of the default render pipeline. It works according to whatever the custom renderer applies to it. The usual camera hooks simply do not get called.
    https://docs.unity3d.com/Packages/c...l/universalrp-builtin-feature-comparison.html
     
    chrismarch likes this.
  8. Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    My editor shows me a bunch of errors when creating the renderer feature. The namespace UnityEngine.Rendering.Universal can't be found. Anyone else with this issue? I'm using Visual Studio Code as my editor at the moment.

    EDIT: upgrading the Unity version from 2019.3.10f0 to 2020.1 fixed it for me
     
    Last edited: Aug 10, 2020
  9. Desoxi

    Joined:
    Apr 12, 2015
    Posts:
    195
    As this is the only code snippet I could find that uses the ScriptableRenderPass class: do you know how to expose parameters to the editor? Or how to blit to a RenderTexture that I have created via the inspector?
     
  10. weiping-toh

    Joined:
    Sep 8, 2015
    Posts:
    186
    It is supposed to be accessed via the ScriptableRendererFeature.
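    For example, a minimal, untested sketch of that idea (class and field names here are made up): serialized fields on a ScriptableRendererFeature show up in the Renderer asset's inspector under the feature's entry, so an inspector-assigned RenderTexture can be handed down to the pass:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    public class BlitToTextureFeature : ScriptableRendererFeature
    {
        // Serialized fields on the feature are editable in the Renderer asset's inspector.
        public RenderTexture targetTexture;                                // assign the inspector-created RT here
        public RenderPassEvent passEvent = RenderPassEvent.AfterRendering;

        class BlitPass : ScriptableRenderPass
        {
            private RenderTargetIdentifier _target;

            public void Setup(RenderTexture target)
            {
                // A RenderTexture converts implicitly to a RenderTargetIdentifier.
                _target = target;
            }

            public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
            {
                CommandBuffer cmd = CommandBufferPool.Get("BlitToTexture");

                // Scales the camera target down to the target texture's resolution.
                cmd.Blit(BuiltinRenderTextureType.CameraTarget, _target);

                context.ExecuteCommandBuffer(cmd);
                CommandBufferPool.Release(cmd);
            }
        }

        private BlitPass _pass;

        public override void Create()
        {
            _pass = new BlitPass();
            _pass.renderPassEvent = passEvent;
        }

        public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
        {
            if (targetTexture == null)
                return;

            _pass.Setup(targetTexture);
            renderer.EnqueuePass(_pass);
        }
    }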