
Bug Using ConfigureCameraTarget results in BlackScreen on FinalBlit pass

Discussion in 'Universal Render Pipeline' started by IvanNevesDev, Jun 27, 2023.

  1. IvanNevesDev

    IvanNevesDev

    Joined:
    Jan 28, 2019
    Posts:
    27
    Hello guys. I'm quite new to custom render pipelines and I'm trying to understand how things work.
    Right now I'm trying to make my camera use custom depth and color buffers via a ScriptableRendererFeature/ScriptableRenderPass, but the setup doesn't work the way I expected, and I can't see where the error is.

    The way I'm doing it is with this CustomScriptableRenderPass, fired at RenderPassEvent.BeforeRendering, to set up the new buffers before anything else renders.
    Code (CSharp):
    // In CustomScriptableRenderPass.cs
    private int _colorBuffer = Shader.PropertyToID("_PixelColorTarget");
    private int _depthBuffer = Shader.PropertyToID("_PixelDepthTarget");

    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
    {
        cmd.GetTemporaryRT(_colorBuffer, 320, 180, 0, FilterMode.Point, RenderTextureFormat.Default);
        cmd.GetTemporaryRT(_depthBuffer, 320, 180, 16, FilterMode.Point, RenderTextureFormat.Depth);
    }
    // ...
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        CommandBuffer cmd = CommandBufferPool.Get("Setup Custom Buffer");
        context.ExecuteCommandBuffer(cmd);

        renderingData.cameraData.renderer.ConfigureCameraTarget(_colorBuffer, _depthBuffer);
        CommandBufferPool.Release(cmd);
    }

    This almost works.
    But the screen ends up black. Using the Frame Debugger, everything looks correct until the "FinalBlit" command, which turns the screen black.

    I thought I had simply forgotten to "reconfigure" the camera, since I changed the default buffers, so I created another pass at RenderPassEvent.AfterRendering and did the following, but it didn't work. I can see a pass that produces a proper image covering the entire screen, but FinalBlit still results in a black screen.

    Code (CSharp):
    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        renderingData.cameraData.renderer.ConfigureCameraTarget(
            BuiltinRenderTextureType.CameraTarget,
            BuiltinRenderTextureType.CameraTarget
        );

        CommandBuffer cmd = CommandBufferPool.Get("Set Pixel Buffer");
        cmd.Blit(_colorBuffer, BuiltinRenderTextureType.CameraTarget);

        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
    Does anyone know why this happens?
    Also, I'm fairly sure the problems are related, but the depth and color buffers aren't being cleared automatically (I need to do it manually in my render pass). I thought that since I set the render textures before rendering anything, the default depth/color clear would apply to my RTs. Is this the intended behaviour?
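    For reference, a minimal sketch of how a pass normally asks URP to bind and clear its own targets during setup, rather than clearing manually in Execute (class and field names here are illustrative, not from the project above):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Sketch: a pass that asks URP to clear its custom targets automatically.
class ClearedTargetPass : ScriptableRenderPass
{
    RTHandle _colorHandle, _depthHandle; // illustrative fields, allocated elsewhere

    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
    {
        // Bind the pass's own targets and request a clear before Execute() runs.
        ConfigureTarget(_colorHandle, _depthHandle);
        ConfigureClear(ClearFlag.All, Color.clear);
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        // The targets arrive here already bound and cleared.
    }
}
```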

    EDIT:

    I figured out the solution:
    This actually looks like a bug, though since I haven't dug deep into the URP codebase I can't be sure. It seems that when you call ConfigureCameraTarget in a render pass, you assign a new RenderingData.cameraData.renderer.cameraColorTargetHandle and cameraDepthTargetHandle for the camera, but the code behind FinalBlit never picks up the new references: it uses the values captured during renderer setup, which apparently happens before any render pass's OnCameraSetup is called. So it inevitably blits from the old cameraColorTargetHandle, an empty texture that was never drawn to, producing a black screen.
    Storing the old cameraColorTargetHandle before calling ConfigureCameraTarget, and blitting to it instead of to BuiltinRenderTextureType.CameraTarget, solves the issue, as the image below suggests.
    Also, though I don't think it's strictly necessary, I'm now using RTHandle instead of GetTemporaryRT, since the overload used earlier is deprecated and cameraColorTargetHandle was coming back null.

    color and depth buffers before:
    Code (CSharp):
    cmd.GetTemporaryRT(_colorBuffer, 320, 180, 0, FilterMode.Point, RenderTextureFormat.Default);
    cmd.GetTemporaryRT(_depthBuffer, 320, 180, 16, FilterMode.Point, RenderTextureFormat.Depth);

    color and depth buffers after:
    Code (CSharp):
    RTHandle _colorBuffer = RTHandles.Alloc(320, 180, colorFormat: GraphicsFormat.R8G8B8A8_UNorm);
    RTHandle _depthBuffer = RTHandles.Alloc(320, 180, depthBufferBits: DepthBits.Depth32, colorFormat: GraphicsFormat.R8G8B8A8_UNorm);
    [attached screenshot]


    Would be nice to know whether this is actually intended behaviour, and to raise an issue about it if necessary.
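    The workaround described in this edit can be sketched roughly like this (a hedged reconstruction, not the exact project code; `PixelTargetPass` and its fields are illustrative, and `Blitter.BlitCameraTexture` assumes the RTHandle-based URP versions):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Sketch: cache the renderer's original color handle before redirecting the
// camera, then blit the low-res result back into that cached handle so
// FinalBlit samples a texture that was actually drawn to.
class PixelTargetPass : ScriptableRenderPass
{
    RTHandle _colorBuffer, _depthBuffer; // allocated via RTHandles.Alloc as above
    RTHandle _originalColor;

    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
    {
        var renderer = renderingData.cameraData.renderer;
        _originalColor = renderer.cameraColorTargetHandle; // cache before reconfiguring
        renderer.ConfigureCameraTarget(_colorBuffer, _depthBuffer);
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        CommandBuffer cmd = CommandBufferPool.Get("Restore Camera Target");
        // Blit to the cached handle, not to BuiltinRenderTextureType.CameraTarget.
        Blitter.BlitCameraTexture(cmd, _colorBuffer, _originalColor);
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}
```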
     
    Last edited: Jun 27, 2023
  2. ElliotB

    ElliotB

    Joined:
    Aug 11, 2013
    Posts:
    215
    It looks like you blit _colorBuffer to the screen, but are you sure this texture isn't itself black? I assume you intend to change the buffers to do some pixelisation. Are you sure nothing is changing the render target after you set it? For instance, try BeforeRenderingOpaques.
     
  3. IvanNevesDev

    IvanNevesDev

    Joined:
    Jan 28, 2019
    Posts:
    27
    Yes, it's for a pixel shader, but at the moment I just want to make sure I can take control of the frame buffer.
    And yes, everything up until the FinalBlit pass works exactly as intended; my custom frame buffer seems to be rendered just as I expect it to, but the end result is still a black screen because of the FinalBlit pass.

    Here are some frame debugger captures:
    [frame debugger screenshot]

    [frame debugger screenshot]
    This is the result of my blit to the screen.
    But then everything is still black on FinalBlit: [frame debugger screenshot]

    Also: I don't know whether something is changing the render target. My bet is that nothing is, but calling ConfigureCameraTarget messes up the process somehow, since things break after this call. (Removing it of course disables the pixel effect; the passes stop working and warnings are logged about problems with the buffers I'm blitting, but at least something shows on screen at the end.) Changing the second pass to AfterRenderingOpaques as you suggested, or to anything between AfterRenderingOpaques and AfterRendering, has no effect; it behaves exactly like in the captures I posted above.
    It renders to the camera buffer I set up, but there is still a black screen at the end.

    Edit: I also tried blitting to the global _BlitTexture, since FinalBlit sets it at the end, but to no avail; same result as the captures above.
    My guess is that the pipeline blits to _BlitTexture automatically from another, unknown texture, and that texture loses its reference to the camera textures after I call ConfigureCameraTarget, so it is always empty (black).
     

    Last edited: Jun 27, 2023
  4. IvanNevesDev

    IvanNevesDev

    Joined:
    Jan 28, 2019
    Posts:
    27


    [attached screenshot]


    Would be nice to know whether this is actually intended behaviour, and to raise an issue about it if necessary.
     
    Last edited: Jun 27, 2023
    ElliotB likes this.
  5. ManueleB

    ManueleB

    Unity Technologies

    Joined:
    Jul 6, 2020
    Posts:
    98
    hey,

    the proper way to render to an offscreen target is to set up the camera's target texture; that camera will then not do any final blit back to screen and will just keep the results in your texture

    https://docs.unity3d.com/Packages/c...7.2/manual/rendering-to-a-render-texture.html

    in general, if you specify a new camera target you should not expect those results to be blitted correctly back to screen. The proper way to do that is to create a camera without setting a specific camera target; that one will render correctly to screen. If you want the texture created by the previous offscreen camera to end up in the FinalBlit of the main camera, you then need to add a custom pass that blits your offscreen texture to the active camera color target before the final blit happens
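    The offscreen setup described above can be sketched as follows (a minimal example; `OffscreenCameraSetup` and `pixelCamera` are illustrative names, not from the thread):

```csharp
using UnityEngine;

// Sketch: assign a target texture to a camera so URP keeps the result in the
// texture instead of final-blitting it to screen.
public class OffscreenCameraSetup : MonoBehaviour
{
    public Camera pixelCamera; // placeholder reference, assigned in the Inspector
    RenderTexture _rt;

    void OnEnable()
    {
        _rt = new RenderTexture(320, 180, 16) { filterMode = FilterMode.Point };
        pixelCamera.targetTexture = _rt; // the camera now renders into _rt only
    }

    void OnDisable()
    {
        pixelCamera.targetTexture = null;
        _rt.Release();
    }
}
```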
     
  6. ManueleB

    ManueleB

    Unity Technologies

    Joined:
    Jul 6, 2020
    Posts:
    98
    regarding the use of ConfigureCameraTarget: URP has two timelines, frame setup and frame execution. When setting up the frame (i.e. in places like OnCameraSetup) you can call APIs that configure targets, set outputs, create resources, etc.

    When Execute() is called, unless you call cmd.SetRenderTarget explicitly, the RT will already be set to whatever was passed to ConfigureCameraTarget. But calling ConfigureCameraTarget in the Execute() function will have no effect, since that is not the correct place to call pass setup functions.

    Unfortunately these are shortcomings of the API, since you don't get clear errors/warnings about incorrect or undefined usage. 2023 LTS will provide a much clearer API that should prevent wrong usage as much as possible.
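    The two timelines can be illustrated with a minimal pass skeleton (a sketch under the assumptions above; the class and field names are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Illustration of URP's two timelines: setup calls belong in OnCameraSetup,
// command recording belongs in Execute.
class TwoTimelinePass : ScriptableRenderPass
{
    RTHandle _target; // illustrative field, allocated elsewhere

    public override void OnCameraSetup(CommandBuffer cmd, ref RenderingData renderingData)
    {
        // Setup timeline: configure targets, clears, and resources here.
        ConfigureTarget(_target);
        ConfigureClear(ClearFlag.Color, Color.black);
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        // Execution timeline: _target is already bound; only record commands.
        // Calling ConfigureCameraTarget here is too late to affect this frame.
        CommandBuffer cmd = CommandBufferPool.Get("Draw");
        // ... draw or blit commands ...
        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}
```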

    hope it clarifies a bit
     
  7. IvanNevesDev

    IvanNevesDev

    Joined:
    Jan 28, 2019
    Posts:
    27
    But that would require a two-camera setup to render the final image, right? Wouldn't that be the same as adding a second camera, setting up a canvas raw image that fills the entire screen, and putting the render texture there? I want to avoid that, specifically because raycasting into the world, and the scene setup in general, would become more complicated.
    I tried using a RenderTexture, but there doesn't seem to be a way to render to the final screen that way without using two cameras.

    The way I'm doing it seems to work fine for getting the correct final image with just one camera: first, in the SetupRenderPasses function, I set up the new render textures and configure the camera; then, before post-processing, I blit the textures from my manually created RT to the original camera RT and reconfigure the camera to use the old frame and depth buffers. This works correctly, but it does feel like a hack. Is there any problem doing it that way? Is there a way to render to the camera's output render texture with just one camera, without "hacking" it?
     
  8. IvanNevesDev

    IvanNevesDev

    Joined:
    Jan 28, 2019
    Posts:
    27
    But that is not what is happening. I was creating those resources in OnCameraSetup, and right now I'm doing it in the renderer feature's SetupRenderPasses function (that way I don't need a second pass that runs before everything else just to create the RTHandles), but the problem still occurs: if I don't manually revert the changes, by blitting my current output to the original, cached camera RTHandles and manually reconfiguring the camera to use those original handles again, I get a black output at the end, because FinalBlit references the old camera RTHandle that was never updated.

    Or, as I figured out late last night (local time): when using post-processing, I don't need this manual blit and reconfigure, and everything sort of works as intended, but the final image comes out blurred because bilinear filtering is used for the final copy. So to get the sharp look we expect from pixel art, the manual blit and reconfigure is still necessary.
     
  9. ManueleB

    ManueleB

    Unity Technologies

    Joined:
    Jul 6, 2020
    Posts:
    98
    if you want a single camera, you should add a custom pass (either via a MonoBehaviour script or a renderer feature) that draws to your texture and then blits the result back to the active color target; the final blit will then take the correct input

    you cannot have a pass that changes the RT output of the existing URP passes; each pass sets up its own outputs unless a camera target was specified. The only thing a custom pass can do is set up its own targets and do custom rendering to them. If you want the URP passes to draw to an offscreen RT, you can only do that by specifying a different target and having a separate camera. Calling ConfigureCameraTarget in a pass's Execute() and assuming that the other passes will use the same target is undefined behaviour
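    The single-camera pattern suggested above can be sketched like this (hedged; `BlitBackPass` and `_lowRes` are illustrative names, and `Blitter.BlitCameraTexture` assumes the RTHandle-based URP versions):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Sketch: the pass draws into its own texture, then copies the result back
// into the active camera color target so the regular FinalBlit picks it up.
class BlitBackPass : ScriptableRenderPass
{
    RTHandle _lowRes; // the pass's own 320x180 point-filtered texture, allocated elsewhere

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        CommandBuffer cmd = CommandBufferPool.Get("Pixelate");
        var cameraColor = renderingData.cameraData.renderer.cameraColorTargetHandle;

        // Downscale the current camera color into the low-res texture...
        Blitter.BlitCameraTexture(cmd, cameraColor, _lowRes);
        // ...then copy it back so the later URP passes (FinalBlit) see the result.
        Blitter.BlitCameraTexture(cmd, _lowRes, cameraColor);

        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}
```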
     
  10. IvanNevesDev

    IvanNevesDev

    Joined:
    Jan 28, 2019
    Posts:
    27
    So I'd render normally, then blit the output to my new low-res, point-filtered RT, and then blit back to the original color texture? I'm fairly sure this can produce artifacts when going from high res to low res, with pixel art specifically, which is why I was trying to do something more direct.