Rendering into part of a render texture

Discussion in 'General Graphics' started by JibbSmart, Aug 11, 2016.

  1. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Our friend the manual says:
    When rendering into a texture, the camera always renders into the whole texture; effectively rect and pixelRect are ignored.​

    That is a very weird limitation. There are plenty of effects that would take advantage of rendering into only a portion of a render texture, and it's hard to imagine there's much (or any) more to it than rendering into a portion of the screen.

    So, is this correct? Why? Is there some other way to render directly into a portion of a render texture in Unity?

    Thanks.
     
  2. MSplitz-PsychoK

    MSplitz-PsychoK

    Joined:
    May 16, 2015
    Posts:
    1,278
You could alter the UVs in-shader to fit within a certain range (e.g. divide uv.x by 2 to use the left half of the texture, then add 0.5 to shift into the right half).

    Then, to make sure you don't draw outside the small part of the texture, you can either use a second texture as a mask, or use an "if" statement to discard pixels that fall outside the UV range you want. (Or, if you're shader-savvy and want better performance, use step() to set the alpha to 0 instead of discarding pixels with an "if".)
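
    A minimal sketch of the step() variant, assuming a standard unlit-style fragment shader; the _Window property is made up for illustration:

    Code (Shader):
    // Hypothetical fragment illustrating the step() trick: zero the alpha of
    // any pixel whose UV falls outside a target window, with no branching.
    sampler2D _MainTex;
    float4 _Window; // xMin, yMin, xMax, yMax in 0..1 UV space (illustrative)

    float4 frag (v2f i) : SV_Target
    {
        float4 col = tex2D(_MainTex, i.uv);
        // Each step() is 1 when the comparison holds; the product is a 0/1 mask.
        float inside = step(_Window.x, i.uv.x) * step(i.uv.x, _Window.z)
                     * step(_Window.y, i.uv.y) * step(i.uv.y, _Window.w);
        col.a *= inside;
        return col;
    }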
     
  3. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
Hmm. That seems like it might be the best solution short of actually rendering into a viewport smaller than the render target. However, since I'm interested in the effect for dynamic resolution for performance reasons, that still seems too messy and slow for my purposes.

    Interestingly, it apparently does sometimes work? The internet has been hit-and-miss with this for me.

    Someone asked about this more than a year ago here, and found that its success varied by project.
    Can anyone speak to this?

    I know that with plain ol' OpenGL, rendering into part of a render texture is no different than rendering into part of the screen. Literally no extra code, if rendering into a render texture is already supported.
     
  4. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    This isn't true. The default is that the camera will fill the whole texture, but, for sure, you can define the rectangle size for the camera to determine how much of the render texture it covers. I know because I did exactly this to render to a 1920x1080 area of a 2048x2048 texture in my game, and it works fine with pixel-perfect, non-blurry output. Plus, when the render texture displays in the scene view, I see the whole texture, including the portion I output to plus the remaining portion, which shows garbage data (which is correct). Change the camera's viewport rect values (in the 0..1 range) to make it smaller. You'll have to do the math to work out what these need to be for a given pixel size based on your texture size.
     
    PrimalCoder likes this.
  5. MSplitz-PsychoK

    MSplitz-PsychoK

    Joined:
    May 16, 2015
    Posts:
    1,278
    According to the Camera.TargetTexture scripting API page: "When rendering into a texture, the camera always renders into the whole texture; effectively rect and pixelRect are ignored."
     
  6. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    You can quote all you want, I'm telling you from experience, I have a project RIGHT NOW with a 2048x2048 render texture which does NOT render a camera covering the whole texture. It only covers the area I've told it to cover. The documentation must be wrong, or old.
    Since you're looking at the SCRIPTING version of the documentation, maybe there are limits imposed when you're doing this from a script. But when you're just setting up a render texture to send the camera output to in the inspector, adjusting the rect DOES WORK.
     
    Last edited: Aug 12, 2016
  7. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Here, for example, is a screenshot of what my RenderTexture looks like. It's a 2048x2048 ARGB render texture. It's being mapped onto a perfectly square quad. The lower half or so is the result of a 1920x1080 camera output. Only the lower half of the render texture is ever written to; the top portion remains unused and thus collects junk data over time (the scrambled, garbled area). Because of this, I have perfect pixel quality for pixel art from this texture. I position the quad over another camera to render the render texture to the screen and it looks perfect. There is no blurring or bilinear filtering or missing rows or anything.

    Now, ORIGINALLY, before I adjusted the camera to do this, it WAS defaulting to rendering to the whole render texture. This caused my graphics output to STRETCH to fit the texture, vertically and a little bit horizontally, which blurred the images and destroyed my image quality. I couldn't figure out what was going on until I realized the entire render texture was filled with my image, rather than an appropriately sized area. I tweaked the camera rect and instantly it was fixed. My display camera's "Viewport Rect" is now:

    x=0, y=0, W=0.9375, H=0.5273438 - which is the percentage in a 0..1 range that the texture takes up to map a 1920x1080 area inside a 2048x2048 render texture.
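
    That math can be expressed as a small helper; this is just a sketch of the ratio calculation, with the numbers above plugged in (the class and method names are made up):

    Code (CSharp):
    using UnityEngine;

    public static class ViewportRectUtil
    {
        // Compute a camera viewport rect that maps a pixel region into the
        // lower-left corner of a larger render texture.
        public static Rect PixelRegionToViewportRect(int regionWidth, int regionHeight,
                                                     int textureWidth, int textureHeight)
        {
            // Viewport rect components are fractions of the render target, 0..1.
            return new Rect(0f, 0f,
                            (float)regionWidth / textureWidth,    // 1920/2048 = 0.9375
                            (float)regionHeight / textureHeight); // 1080/2048 = 0.52734375
        }
    }

    // Usage (hypothetical):
    // cam.rect = ViewportRectUtil.PixelRegionToViewportRect(1920, 1080, 2048, 2048);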

    Screen Shot 2016-08-12 at 3.09.39 PM.png
     
    Minchuilla likes this.
  8. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    ImaginaryHuman, that's great that it's working for you. Unfortunately, there's evidence that for some people or for some projects, Unity's behaviour is consistent with the manual, which is cause for concern about how much I can count on it as my project changes, or as I build for other platforms.

    Secondly, it is generally bad practice to rely on undocumented features. While the documentation can fall behind the feature set for a time, differences between software and its documentation are sometimes bugs, not features. This means the "feature" (bug) might get "fixed" (removed) in a future update with no warning. It can also mean that seemingly simple code changes, ones that would keep working if this were a properly supported feature, just don't.

    So I'm glad to see it's working for you. Having tried it, it's working for me, too. But since I'm concerned it might stop working unexpectedly, or that it might work on my development platform but not one of the target platforms, I'm looking for a solution that has some support from the manual. Or even better, for Unity to support the feature officially and update the manual accordingly, so I've asked about it in the documentation forum.
     
  9. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Valid concerns. It seems to me that it'd be a bit silly for a camera to always output to the full render texture, effectively requiring the render texture to match the camera's aspect ratio... it wouldn't be very useful for much. I mean, even using a render texture to render the full screen or to do image effects, it would just totally interfere with the output quality if everything was always getting stretched to fill the texture. I would say that's more of a buggy oversight than the documentation being right. I'm hoping the way it is now is that this has been 'fixed' and it's just the docs that are out of date. Otherwise it's a pretty major shortcoming that you can't render into a portion of a render texture with a different aspect ratio.
     
  10. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Agreed. It's a huge handicap!

    I was making quite good progress on a dynamic resolution + temporal anti-aliasing effect using targetTexture and pixelRect, but then I realised my depth effects that use the depth texture weren't working anymore -- _CameraDepthTexture no longer contained the depth if the camera had a targetTexture.

    So I tried creating my own depth texture (RenderTextureFormat.Depth) and setting it as the depth buffer via SetTargetBuffers. But for whatever reason, SetTargetBuffers completely ignores pixelRect, so now I'm stuck :(

    What's the best way to go about making a feature request for Unity? SetTargetBuffers should work with pixelRect. And while we're at it, there should be better ways to access depth buffers as depth textures than _CameraDepthTexture and _LastCameraDepthTexture.
     
  11. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
  12. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    So does "fixed" mean that the correct behaviour, the way it is supposed to work, is that the camera rect SHOULD be taken into account, and thus output to only a portion of a render texture, not the whole thing?
     
  13. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Haha, good question! I'm optimistic, but I really don't know :p
     
  14. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Fixed in the latest patch (5.4.0p2). It's actually working now, although using the built-in motion vectors means everything transparent renders to the full texture (ignores pixelRect), so don't use that for now.
     
  15. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    What do you mean by fixed? What is the new standard functioning? That cameras can render to part of the render texture?
     
  16. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Sorry, should've been more clear :p The camera can render to part of the render texture :)
     
  17. jobigoud

    jobigoud

    Joined:
    Apr 13, 2017
    Posts:
    8
    For me the problem persists. When changing the camera viewport rect, the camera still paints on the entirety of its target RenderTexture instead of the defined rectangle.

    As far as I can see, the documentation (which hasn't changed for 2017.1) is, unfortunately, correctly describing Unity's behavior.

    I've tested this in 5.6 and 2017.1.0b9, with render textures created in the editor and by code. Maybe the behavior was altered to match the doc recently? Maybe it depends on the platform?

    Here is how I create a test scene to expose the issue:
    1. Create a new project.
    2. Create a RenderTexture.
    3. Create a secondary camera.
    4. Create a quad in front of the primary camera.
    5. Assign the RenderTexture to the `TargetTexture` property of the secondary camera.
    6. Assign the RenderTexture to the quad, creating a material.
    7. Add a few objects somewhere else and point the secondary camera at them.

    -> At this point the view of the secondary camera is correctly painted on the quad.

    8. Change ViewportRect on the secondary camera to limit the viewport to a region, for example: x:0, y:0, w:0.5, h:1.

    -> Result: The texture is shown on the entirety of the quad; the projection is distorted.
    -> Expected: The texture is only shown on the left half of the quad. The right half may show garbage.

    I cannot work around by modifying the shaders' tiling or UVs, as this is for a tool to be used in third party scenes with their own shaders.

    Changing the RenderTexture size doesn't change anything, as expected.
    Changing ViewportRect on a camera rendering to the display is correctly rendering the view on a partial region of the display.

    Partial RT.png
     
    Last edited: Jun 21, 2017
  18. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Not sure why you're seeing this, but I have a game which renders an HD-resolution screen to part of a 2048x2048 render texture, and it only renders the area covered by the camera's window, per its rectangle. However, that said, I don't actually have any objects outside of that rectangle to test whether they would render to it.
     
  19. fablam

    fablam

    Joined:
    Sep 6, 2012
    Posts:
    1
    Reproduced on 5.6.1f1, and the docs for 2017.2b indicate it is not supported.

    At the very least, the proper functionality should be for Unity to spew a Debug.LogWarning() on the first frame where a Camera is rendered with a RenderTexture and rect != Rect(0,0,1,1).

    If it still somehow works on some target platforms and not others, then there should be a way to query if we can actually do this. If that's not acceptable for some reason, then the ability to render to parts of a rendertexture should be disabled completely.
     
  20. npatch

    npatch

    Joined:
    Jun 26, 2015
    Posts:
    247
    Take a look at CustomRenderTexture as well. A new feature in 2017.
     
    Arkade likes this.
  21. jobigoud

    jobigoud

    Joined:
    Apr 13, 2017
    Posts:
    8
    To clarify my previous message, rendering to a sub window of the viewport does work if we use camera SetTargetBuffers. In this case the camera.rect field is correctly taken into account during rendering. Verified in 5.6 and 2017.1.

    So a workaround is, instead of assigning the camera's targetTexture, to call Camera.SetTargetBuffers using the texture's colorBuffer and depthBuffer as parameters.

    I would have thought that setting targetTexture would just be a wrapper around the lower-level SetTargetBuffers, but it must have some extra logic that prevents the camera rect from being honoured while rendering to the target texture.
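
    A sketch of that workaround, assuming a render texture created with a depth buffer (the component and field names are illustrative):

    Code (CSharp):
    using UnityEngine;

    public class PartialRTRender : MonoBehaviour
    {
        public Camera cam;            // the camera that should render into a sub-region
        public RenderTexture target;  // e.g. new RenderTexture(2048, 2048, 24)

        void Start()
        {
            // Instead of cam.targetTexture = target, bind the buffers directly.
            // With SetTargetBuffers, cam.rect is taken into account during rendering.
            cam.SetTargetBuffers(target.colorBuffer, target.depthBuffer);
            cam.rect = new Rect(0f, 0f, 0.5f, 1f); // left half of the texture
        }
    }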
     
    yasirkula and Arkade like this.
  22. simpleyuji

    simpleyuji

    Joined:
    Jun 11, 2017
    Posts:
    2
    Hello. I was able to get rendering to a specific viewport rect of the texture by changing the camera's "Rendering Path" setting from "Use Graphics Settings" to "Forward".
     
  23. MaeL0000

    MaeL0000

    Joined:
    Aug 8, 2015
    Posts:
    35
    How about if you want to blit into a portion of a render texture? Say I have two textures, both x wide and y high, and I want to blit them side by side into a render texture with width 2x and height y.
     
  24. npatch

    npatch

    Joined:
    Jun 26, 2015
    Posts:
    247
    Graphics.CopyTexture, if your GPU supports it (check the notes on the docs page).
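
    A sketch of the side-by-side case using the region overload of Graphics.CopyTexture (the sizes and names are illustrative; both source textures must have a compatible format with the destination):

    Code (CSharp):
    using UnityEngine;

    public static class AtlasBlit
    {
        // Copy texA into the left half and texB into the right half of dst.
        // Assumes texA and texB are x wide and y high, and dst is 2x by y.
        public static void CopySideBySide(Texture texA, Texture texB, RenderTexture dst)
        {
            int x = texA.width, y = texA.height;
            // (src, srcElement, srcMip, srcX, srcY, srcW, srcH,
            //  dst, dstElement, dstMip, dstX, dstY)
            Graphics.CopyTexture(texA, 0, 0, 0, 0, x, y, dst, 0, 0, 0, 0);
            Graphics.CopyTexture(texB, 0, 0, 0, 0, x, y, dst, 0, 0, x, 0);
        }
    }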
     
    daverin_n likes this.
  25. Kolyasisan

    Kolyasisan

    Joined:
    Feb 2, 2015
    Posts:
    397
    Necrobumping this thread, this is still a big issue that can't seemingly be solved with the built-in pipeline.

    In our case, Unity tells you that you are indeed rendering into a part of a texture defined by your camera's viewport rect, but in reality Unity creates a new render texture each time the rect is changed and simply blits it back into the target texture once it's done.

    That's just awful.
     
  26. npatch

    npatch

    Joined:
    Jun 26, 2015
    Posts:
    247
  27. Kolyasisan

    Kolyasisan

    Joined:
    Feb 2, 2015
    Posts:
    397
    No, I haven't. We tried to use the viewport clipping for dynamic resolution (as described by Intel in one of their 2011 papers) and CustomRenderTexture is not really suited for such a thing afaik.

    SRPs easily allow you to achieve such a behaviour, but the built-in one just generates a new texture from a pool based on its pixel width (which is based on pixel rect which in itself is based on rect). That's just sad.
     
  28. broarty

    broarty

    Joined:
    Sep 28, 2020
    Posts:
    3
    I had recently struggled with this, found this on an old thread. Saved my Mac build.
    https://answers.unity.com/questions/1266312/dynamic-viewport-for-a-camera-rendering-to-a-targe.html

    The workaround seems to be calling Camera.SetTargetBuffers(renderTexture.colorBuffer, renderTexture.depthBuffer) and then Camera.Render() on the camera with the specific viewport rect you are sending to the render texture.

    This is a workaround, but it might help someone else. For some reason I only encountered this problem on Mac, and was able to draw to part of the texture on Windows without needing the scripting.
     
    frank-ijsfontein likes this.
  29. Sab_Rango

    Sab_Rango

    Joined:
    Aug 30, 2019
    Posts:
    121
    I made it!
    1. Set the render texture to 1920x1080.
    2. Create UI > Raw Image at 1920x1080.
    3. Attach the render texture to the Raw Image.
     
  30. pistoleta

    pistoleta

    Joined:
    Sep 14, 2017
    Posts:
    539
    Any news on this? I'm also trying to render to a portion of the renderTexture and tried all your workarounds, but had no luck.
    Is there any method to this day?
    Thanks!
     
  31. npatch

    npatch

    Joined:
    Jun 26, 2015
    Posts:
    247
    I haven't done anything like it since.
    So long as your purpose is to render the portion out with no modification, CopyTexture should be enough (beware the limitations: same texture type, etc.). Or use your own Material + Shader + Graphics.Blit and pass any custom args through Material.SetXX.

    If you want to modify the pixels, then you need the regular ways of reading texture data (GetPixels/GetRawTextureData and so on) and filter the reading yourself.


    As for Custom Render Texture, here's a quick example of subregions I whipped up. For anything beyond simple region blitting, you'll need to customize the Custom Render Texture shader. upload_2022-5-28_14-24-9.png
    Here the right quad has a material with a CustomRenderTexture as its base texture, and the inspector on the right shows the configuration used to update it. Basically it inits to a black color and then has two update regions (top-left and bottom-right quadrants) which, as seen in the shader below, use globalTexcoord to sample the texture itself, so it's like copying the same quadrants from the left quad to the right one. The default shader code uses localTexcoord, but that would have blitted the full SRP texture into those quadrants.

    Code (Shader):
    Shader "CustomRenderTexture/Simple"
    {
        Properties
        {
            _Color ("Color", Color) = (1,1,1,1)
            _MainTex("InputTex", 2D) = "white" {}
        }

        SubShader
        {
            Blend One Zero

            Pass
            {
                Name "Simple"

                CGPROGRAM
                #include "UnityCustomRenderTexture.cginc"
                #pragma vertex CustomRenderTextureVertexShader
                #pragma fragment frag
                #pragma target 3.0

                float4      _Color;
                sampler2D   _MainTex;

                float4 frag(v2f_customrendertexture IN) : COLOR
                {
                    float2 uv = IN.globalTexcoord.xy;
                    float4 color = tex2D(_MainTex, uv);
                    return color;
                }
                ENDCG
            }
        }
    }
    One interesting aspect of CustomRenderTexture is that you can control initialization from other sources and also control the update frequency: OnLoad, OnDemand (script), or Realtime (always). And you can provide multiple passes, which you can set for different update zones in case you want customized blitting for each region/zone.
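
    Setting that up from script might look like this sketch; the texture size and zone placement are illustrative, not the exact values from the screenshot above:

    Code (CSharp):
    using UnityEngine;

    public static class CrtSetup
    {
        public static CustomRenderTexture CreateWithZones(Material updateMat)
        {
            var crt = new CustomRenderTexture(512, 512, RenderTextureFormat.ARGB32);
            crt.material = updateMat;                                // the update shader
            crt.initializationMode = CustomRenderTextureUpdateMode.OnLoad;
            crt.initializationColor = Color.black;
            crt.updateMode = CustomRenderTextureUpdateMode.OnDemand; // update via script

            // Two update zones covering opposite quadrants.
            // Centers and sizes are in normalized 0..1 texture space.
            crt.SetUpdateZones(new[]
            {
                new CustomRenderTextureUpdateZone
                {
                    updateZoneCenter = new Vector3(0.25f, 0.25f, 0f),
                    updateZoneSize   = new Vector3(0.5f, 0.5f, 0f),
                    passIndex = 0
                },
                new CustomRenderTextureUpdateZone
                {
                    updateZoneCenter = new Vector3(0.75f, 0.75f, 0f),
                    updateZoneSize   = new Vector3(0.5f, 0.5f, 0f),
                    passIndex = 0
                }
            });

            crt.Initialize();
            crt.Update(); // render one on-demand update
            return crt;
        }
    }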
     
  32. pistoleta

    pistoleta

    Joined:
    Sep 14, 2017
    Posts:
    539
    Wow, I didn't understand the first part about Blit, I'll read more about it, but the CustomRenderTexture might prove very useful soon in our project.
    Thanks a lot!!!!
     
  33. pistoleta

    pistoleta

    Joined:
    Sep 14, 2017
    Posts:
    539
    About your first suggestion: even if you destroy the first RenderTexture after copying it to a texture, generating it still creates a significant memory overhead, am I right? Especially if you're capturing an HDR texture. What I'm trying to avoid is that first memory peak, by instead taking 16 render textures across 16 different frames.
     
  34. npatch

    npatch

    Joined:
    Jun 26, 2015
    Posts:
    247
    I don't know what you're trying to do, but you want to grab a region of a texture and copy it to another place right?
    Depending on what you need, you might avoid any extra copy if you can just figure out sampling and provide a custom shader for whatever you need that region for. Otherwise you'll need at least one render texture to copy that region to.

    The memory overhead is mostly there when you need to modify the texture and a CPU copy is created. If you can do the work on the GPU, with fragment or even compute shaders, I think you can avoid that. Granted, it's a harder path. Graphics.Blit does exactly that: you provide a material with a custom shader that does whatever you need, and you provide a source and destination texture to run that shader pass on. Whether you blindly copy texels or transform them somehow is another issue.

    As for the 16 RTs, perhaps you could alternate between temporary RTs? Why Create and Destroy so much? That's the overhead, not the copying around. If they are temporary, use RenderTexture.GetTemporary instead. I think releasing those caches them for a bit (so it can skip allocations and frees of the memory blocks), and it's good for transient results, e.g. https://github.com/sienaiwun/TAA_Unity_URP/blob/master/Runtime/TAAPass.cs
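
    A sketch of that temporary-RT pattern; the method and the capture callback are hypothetical:

    Code (CSharp):
    using UnityEngine;

    public static class TempRTExample
    {
        // Render one tile into a pooled temporary RT, hand it to the caller,
        // then release it back to Unity's internal pool.
        public static void RenderTile(Camera cam, int tileSize,
                                      System.Action<RenderTexture> captureTile)
        {
            RenderTexture rt = RenderTexture.GetTemporary(tileSize, tileSize, 24,
                                                          RenderTextureFormat.ARGB32);
            cam.targetTexture = rt;
            cam.Render();
            cam.targetTexture = null;

            captureTile(rt);

            // ReleaseTemporary caches the RT for reuse instead of freeing it outright.
            RenderTexture.ReleaseTemporary(rt);
        }
    }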
     
    Last edited: May 28, 2022
  35. pistoleta

    pistoleta

    Joined:
    Sep 14, 2017
    Posts:
    539
    What I'm doing currently is enabling a cam that I have on top of the map looking down, then I enable it and call cam.Render() with a target texture.

    Since the map is quite big, I'm creating a 4096x4096 renderTexture, then I convert it into a Texture2D and set it on a plane; with this I have kind of a Google Maps view of my whole map. This is working right now.

    Obviously taking this render view consumes a lot of memory (a peak of 200MB), so I thought instead of doing 1 of 4096 I could do 16 of 256 in separate frames and create 16 textures.

    That's why I'm interested in taking just a part of the render view, because I want a high-res image, even if I have to split it into 16 textures.

    Sorry if I didn't make myself understood earlier, and thanks for your help!!
     
  36. npatch

    npatch

    Joined:
    Jun 26, 2015
    Posts:
    247
    If it were me, I'd ditch the Texture2D and use a RenderTexture instead, which can go directly on the camera and can be used in manipulation APIs like Graphics.CopyTexture. As for the rendering, if it's a UI image, just use a RawImage, which can take a RenderTexture. You will avoid the conversion costs to Texture2D.

    If you can't do so, then make sure to call
    void Apply(bool updateMipmaps = true, bool makeNoLongerReadable = false);
    for all the Texture2Ds involved, with "true" for makeNoLongerReadable. What this does is tell the engine that you won't be editing the texture data from the CPU side (basically through C#), though you can still do so in shaders dealing with GPU memory, so it just dumps the CPU copy of the texture completely. The CPU copy adds extra memory and overhead because it needs to stay synced with the GPU one. If you don't need it, remove it. That should remove some overhead, whether you use 4kx4k or 16 256x256.
    That said, 4kx4k seems a bit of overkill. I haven't done a minimap, but you probably don't ever see that much detail in one, and games usually render out simpler shapes to convey that information since it's updated frequently. Do you need that much precision?
    One more setting you might find useful on the CustomRenderTexture is updatePeriod. You can set it to update every 0.4s, for example, instead of every frame.
    As for the 16 textures, you will still need a single render texture to capture the camera input, so a 4kx4k texture it is; you already pay for the resolution/precision upfront. You're getting issues from converting to Tex2D with a CPU copy. Even if you use a CustomRenderTexture with 16 update zones, or 16 textures/render textures, you are doing twice the work anyhow, and in both cases you'll still update the same number of pixels. Unless of course you play around with the frequency of texture updates, like having greater frequency towards the center and less towards the sides, but the result will probably not look smooth.

    Otherwise you'll need 16 scene cameras, each rendering in one RenderTexture, which is not advised (too many cameras can have a big overhead). Not sure if Cinemachine has some good workaround towards that.

    Another option would be to use the camera depth instead, with different colors for value ranges. You might even get away with 16bits precision if you don't need too much detail. And you can probably mess with camera settings in case you are in HDRP and have it render out just depth (https://forum.unity.com/threads/rendering-a-depth-only-pass-on-a-seperate-camera.1172807/).
     
    Last edited: May 29, 2022
  37. yasirkula

    yasirkula

    Joined:
    Aug 1, 2011
    Posts:
    2,879
    This was the solution for us, thanks! On Windows, targetTexture worked fine but on Mac, we needed this trick (which works just fine on Windows, as well). Here's the code diff for anyone interested:

    upload_2023-3-6_12-32-5.png
     
    MisfitXXX and Sluggy like this.
  38. oxinfinite

    oxinfinite

    Joined:
    Nov 25, 2018
    Posts:
    9
    Does anyone know how to apply the camera rect to the target render texture in URP (or SRP)?

    The solution above seems to only work on the legacy pipeline, but I have to do the same in URP.

    I'm currently creating a render texture atlas: a single target texture shared by many cameras, used as a global texture in a shader.
     
  39. c0d3_m0nk3y

    c0d3_m0nk3y

    Joined:
    Oct 21, 2021
    Posts:
    666
    CommandBuffer.SetViewport and CommandBuffer.EnableScissorRect should do the trick (depending on what you need)
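
    A sketch of those calls inside a command buffer; the target, rect, and method names are illustrative:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    public static class ViewportCmdExample
    {
        // Restrict subsequent draws to a pixel region of the bound render target.
        public static CommandBuffer BuildRegionCommands(RenderTexture atlas, Rect pixelRect)
        {
            var cmd = new CommandBuffer { name = "Render into atlas region" };
            cmd.SetRenderTarget(atlas);
            // SetViewport remaps clip space into the region; EnableScissorRect
            // additionally clips any rasterization outside it.
            cmd.SetViewport(pixelRect);
            cmd.EnableScissorRect(pixelRect);
            // ... issue draws here (e.g. cmd.DrawRenderer / cmd.Blit) ...
            cmd.DisableScissorRect();
            return cmd;
        }
    }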
     
    oxinfinite likes this.
  40. oxinfinite

    oxinfinite

    Joined:
    Nov 25, 2018
    Posts:
    9
    As you advised, I succeeded in clipping the render texture by applying EnableScissorRect() in a RendererFeature. But there's another problem.

    Screenshot 2023-11-06 110303.png

    As in the uploaded image, when the second camera draws a renderer on the same render texture, the first camera erases what was previously drawn. If this happens, there's no point in making an atlas.

    How can I keep the previously drawn render texture contents? Below is the Scriptable Renderer Feature code I wrote; it applies to the cameras (the first and second cameras mentioned above) with the render texture assigned as the Output Texture.

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;
    using UnityEngine.Rendering.Universal;

    public class ApplyCameraRect : ScriptableRendererFeature
    {
        private ApplyCameraRectPass pass;

        public RenderPassEvent _renderPassEvent = RenderPassEvent.BeforeRenderingShadows;

        public RenderTexture _targetTexture;

        public override void Create()
        {
            pass = new ApplyCameraRectPass(_renderPassEvent, _targetTexture);
        }

        public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
        {
            renderer.EnqueuePass(pass);
        }
    }

    public class ApplyCameraRectPass : ScriptableRenderPass
    {
        public RenderTexture targetTexture;

        public ApplyCameraRectPass(RenderPassEvent _evt, RenderTexture _rt)
        {
            renderPassEvent = _evt;
            targetTexture = _rt;
        }

        public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
        {
            // Convert the camera's normalized rect into pixel coordinates
            // on the target texture before enabling the scissor rect.
            Rect rect = renderingData.cameraData.camera.rect;

            rect.x = targetTexture.width * rect.x;
            rect.y = targetTexture.height * rect.y;
            rect.width = targetTexture.width * rect.width;
            rect.height = targetTexture.height * rect.height;

            cmd.EnableScissorRect(rect);
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
        }
    }
     
  41. c0d3_m0nk3y

    c0d3_m0nk3y

    Joined:
    Oct 21, 2021
    Posts:
    666
    Set the Camera.clearFlags to Nothing or Depth only.
     
  42. oxinfinite

    oxinfinite

    Joined:
    Nov 25, 2018
    Posts:
    9