
Rendering into part of a render texture

Discussion in 'General Graphics' started by JibbSmart, Aug 11, 2016.

  1. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Our friend the manual says:
    When rendering into a texture, the camera always renders into the whole texture; effectively rect and pixelRect are ignored.​

    That is a very weird limitation. There are plenty of effects that would take advantage of rendering into only a portion of a render texture, and it's hard to imagine there's much (or any) more to it than rendering into a portion of the screen.

    So, is this correct? Why? Is there some other way to render directly into a portion of a render texture in Unity?

    Thanks.
     
  2. MSplitz-PsychoK

    MSplitz-PsychoK

    Joined:
    May 16, 2015
    Posts:
    1,278
    You could alter the UVs in-shader to fit within a certain range (e.g. divide uv.x by 2 to use the left half of the texture, or divide by 2 and add 0.5 to use the right half).

    Then, to make sure you don't draw outside that part of the texture, you can either use a second texture as a mask, or use an "if" statement to discard pixels that fall outside the UV range you want. (Or, if you're shader-savvy and want better performance, use step() to set the alpha to 0 instead of discarding pixels with an if statement.)
     
  3. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Hmm. That seems like it might be the best solution short of actually rendering into a viewport smaller than the render target. However, since I'm interested in this for dynamic resolution (for performance reasons), it still seems too messy and slow for my purposes.

    Interestingly, it apparently does sometimes work? The internet has been hit-and-miss with this for me.

    Someone asked about this more than a year ago here, and found that its success varied by project.
    Can anyone speak to this?

    I know that with plain ol' OpenGL, rendering into part of a render texture is no different than rendering into part of the screen. Literally no extra code, if rendering into a render texture is already supported.
     
  4. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,703
    This isn't true. The default is that the camera fills the whole texture, but you can certainly define the camera's rectangle to determine how much of the render texture it covers. I know because I did exactly this to render to a 1920x1080 area of a 2048x2048 texture in my game, and it works fine with pixel-perfect, non-blurry output. When the render texture displays in the scene view, I see the whole texture: the portion I output to, plus the remaining portion, which shows garbage data (which is correct).

    Change the camera rect values from the default 0 and 1 to something smaller. You'll have to do the math to work out what they need to be for a given pixel size based on your texture size.
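
    Something along these lines, for example (a minimal sketch; the component and field names are just illustrative):

    Code (CSharp):
    using UnityEngine;

    // Minimal sketch of the setup described above: a camera rendering a
    // 1920x1080 area into the corner of a larger (e.g. 2048x2048) render texture.
    public class PartialRenderTextureSetup : MonoBehaviour
    {
        public Camera sourceCamera;     // camera whose output goes into the texture
        public RenderTexture target;    // e.g. a 2048x2048 ARGB32 render texture
        public int outputWidth = 1920;  // pixel area we actually want rendered
        public int outputHeight = 1080;

        void Start()
        {
            sourceCamera.targetTexture = target;

            // The viewport rect is a 0..1 fraction of the target, so divide
            // the desired pixel size by the texture size.
            float w = (float)outputWidth / target.width;    // 1920 / 2048 = 0.9375
            float h = (float)outputHeight / target.height;  // 1080 / 2048 ≈ 0.5273438
            sourceCamera.rect = new Rect(0f, 0f, w, h);
        }
    }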
     
  5. MSplitz-PsychoK

    MSplitz-PsychoK

    Joined:
    May 16, 2015
    Posts:
    1,278
    According to the Camera.TargetTexture scripting API page: "When rendering into a texture, the camera always renders into the whole texture; effectively rect and pixelRect are ignored."
     
  6. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,703
    You can quote all you want, I'm telling you from experience, I have a project RIGHT NOW with a 2048x2048 render texture which does NOT render a camera covering the whole texture. It only covers the area I've told it to cover. The documentation must be wrong, or old.
    Since you're looking at the SCRIPTING version of the documentation, maybe there are limits imposed when you're doing this from a script. But when you're just setting up a render texture in the inspector to send the camera output to, adjusting the rect DOES WORK.
     
    Last edited: Aug 12, 2016
  7. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,703
    Here, for example, is a screenshot of what my RenderTexture looks like. It's a 2048x2048 ARGB render texture, mapped onto a perfectly square quad. The lower half or so is the result of a 1920x1080 camera output. Only the lower half of the render texture is ever written to; the top portion remains unused and thus collects junk data over time (the scrambled, garbled area). Because of this, I have perfect pixel quality for pixel art from this texture. I position the quad in front of another camera to render the render texture to the screen, and it looks perfect. There is no blurring or bilinear filtering or missing rows or anything.

    Now, ORIGINALLY, before I adjusted the camera to do this, it WAS defaulting to rendering to the whole render texture. This caused my graphics output to STRETCH to fit the texture, vertically and a little bit horizontally, which blurred the images and destroyed my image quality. I couldn't figure out what was going on until I realized the entire render texture was filled with my image, rather than an appropriately sized area. I tweaked the camera rect and instantly it was fixed. My display camera's "Viewport Rect" is now:

    x=0, y=0, W=0.9375, H=0.5273438 -- the fraction (in a 0..1 range) of the 2048x2048 render texture that a 1920x1080 area takes up.

    Screen Shot 2016-08-12 at 3.09.39 PM.png
     
    Minchuilla likes this.
  8. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    ImaginaryHuman, that's great that it's working for you. Unfortunately, there's evidence that for some people or for some projects, Unity's behaviour is consistent with the manual, which is cause for concern about how much I can count on it as my project changes, or as I build for other platforms.

    Secondly, it is generally bad practice to rely on undocumented features. While the documentation can fall behind the feature set for a time, differences between software and its documentation are sometimes bugs, not features. This means the "feature" (bug) might get "fixed" (removed) in a future update with no warning. It can also mean that seemingly simple code changes, which ought to keep working if this were a properly supported feature, just don't.

    So I'm glad to see it's working for you. Having tried it, it's working for me, too. But since I'm concerned it might stop working unexpectedly, or that it might work on my development platform but not one of the target platforms, I'm looking for a solution that has some support from the manual. Or even better, for Unity to support the feature officially and update the manual accordingly, so I've asked about it in the documentation forum.
     
  9. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,703
    Valid concerns. It seems to me it'd be a bit silly for a camera to always output to the full render texture, effectively requiring the texture's shape to match the camera's output... it wouldn't be very useful for much. I mean, even when using a render texture to render the full screen or to do image effects, it would totally interfere with the output quality if everything was always getting stretched to fill the texture. I would say that's more of a buggy oversight than the documentation being right. I'm hoping the way it is now is that this has been 'fixed' and it's just the docs that are out of date. Otherwise it's a pretty major shortcoming that you can't properly use a render texture that doesn't match the camera's aspect.
     
  10. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Agreed. It's a huge handicap!

    I was making quite good progress on a dynamic resolution + temporal anti-aliasing effect using targetTexture and pixelRect, but then I realised my depth effects that use the depth texture weren't working anymore -- _CameraDepthTexture no longer contained the depth if the camera had a targetTexture.

    So I tried creating my own depth texture (RenderTextureFormat.Depth) and setting it as the depth buffer via SetTargetBuffers. But for whatever reason, SetTargetBuffers completely ignores pixelRect, so now I'm stuck :(
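
    In code, that attempt looks roughly like this (a sketch; the sizes and names are illustrative):

    Code (CSharp):
    using UnityEngine;

    // Rough sketch of the setup described above: a separate colour texture plus a
    // RenderTextureFormat.Depth texture bound through SetTargetBuffers. Per the
    // post, pixelRect was ignored in this configuration at the time.
    public class SeparateDepthTarget : MonoBehaviour
    {
        public Camera cam;
        RenderTexture colorRT;
        RenderTexture depthRT;

        void Start()
        {
            colorRT = new RenderTexture(2048, 2048, 0, RenderTextureFormat.ARGB32);
            depthRT = new RenderTexture(2048, 2048, 24, RenderTextureFormat.Depth);

            cam.SetTargetBuffers(colorRT.colorBuffer, depthRT.depthBuffer);
            cam.pixelRect = new Rect(0, 0, 1280, 720); // reduced-resolution region
        }
    }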

    What's the best way to go about making a feature request for Unity? SetTargetBuffers should work with pixelRect. And while we're at it, there should be better ways to access depth buffers as depth textures than _CameraDepthTexture and _LastCameraDepthTexture.
     
  11. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
  12. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,703
    So does "fixed" mean, the correct way it will work and is supposed to work, is that the camera rect SHOULD be taken into account and thus output to only a portion of a render texture not the whole thing?
     
  13. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Haha, good question! I'm optimistic, but I really don't know :p
     
  14. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Fixed in the latest patch (5.4.0p2). It's actually working now, although using the built-in motion vectors means everything transparent renders to the full texture (ignores pixelRect), so don't use that for now.
     
  15. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,703
    What do you mean by fixed? What is the new standard functioning? That cameras can render to part of the render texture?
     
  16. JibbSmart

    JibbSmart

    Joined:
    Feb 18, 2013
    Posts:
    26
    Sorry, should've been more clear :p The camera can render to part of the render texture :)
     
  17. jobigoud

    jobigoud

    Joined:
    Apr 13, 2017
    Posts:
    8
    For me the problem persists. When changing the camera's viewport rect, the camera still paints over the entirety of its target RenderTexture instead of the defined rectangle.

    As far as I can see, the documentation (which hasn't changed for 2017.1) is, unfortunately, correctly describing Unity's behavior.

    I've tested this in 5.6 and 2017.1.0b9, with render textures created in the editor and by code. Maybe the behavior was altered to match the doc recently? Maybe it depends on the platform?

    Here is how I create a test scene to expose the issue:
    1. Create a new project.
    2. Create a RenderTexture.
    3. Create a secondary camera.
    4. Create a quad in front of the primary camera.
    5. Assign the RenderTexture to the `TargetTexture` property of the secondary camera.
    6. Assign the RenderTexture to the quad, creating a material.
    7. Add a few objects somewhere else and point the secondary camera at them.

    -> At this point the view of the secondary camera is correctly painted on the quad.

    8. Change ViewportRect on the secondary camera to limit the viewport to a region, for example: x:0, y:0, w:0.5, h:1.

    -> Result: The texture is shown on the entirety of the quad, and the projection is distorted.
    -> Expected: The texture is only shown on the left half of the quad. The right half may show garbage.

    I cannot work around this by modifying the shaders' tiling or UVs, as this is for a tool to be used in third-party scenes with their own shaders.

    Changing the RenderTexture size doesn't change anything, as expected.
    Changing ViewportRect on a camera rendering to the display is correctly rendering the view on a partial region of the display.
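
    A script version of roughly the same setup (a sketch; the object names are illustrative):

    Code (CSharp):
    using UnityEngine;

    // Script version of the repro above: a secondary camera rendering into a
    // RenderTexture with a half-width viewport rect.
    public class ViewportRectRepro : MonoBehaviour
    {
        public Camera secondaryCamera;
        public Renderer quad;   // quad placed in front of the primary camera

        void Start()
        {
            var rt = new RenderTexture(1024, 1024, 24);
            secondaryCamera.targetTexture = rt;
            quad.material.mainTexture = rt;

            // Expected: render only into the left half of the texture.
            // Observed (per this post): the whole texture is filled and distorted.
            secondaryCamera.rect = new Rect(0f, 0f, 0.5f, 1f);
        }
    }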

    Partial RT.png
     
    Last edited: Jun 21, 2017
  18. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,703
    Not sure why you're seeing this, but I have a game which renders an HD-resolution screen to part of a 2048x2048 render texture, and it only renders the area covered by the camera's window, per its rectangle. However, that said, I don't actually have any objects outside of that rectangle to test whether they would render to it.
     
  19. fablam

    fablam

    Joined:
    Sep 6, 2012
    Posts:
    1
    Reproduced on 5.6.1f1, and the docs for 2017.2b indicate it is not supported.

    At the very least, the proper functionality should be for Unity to spew a Debug.LogWarning() on the first frame where a Camera is rendered with a RenderTexture and rect != Rect(0,0,1,1).

    If it still somehow works on some target platforms and not others, then there should be a way to query if we can actually do this. If that's not acceptable for some reason, then the ability to render to parts of a rendertexture should be disabled completely.
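
    In the meantime, a user-side check along these lines could at least flag the situation (a sketch, not an existing Unity warning):

    Code (CSharp):
    using UnityEngine;

    // Warn once if a camera has a target texture and a viewport rect that isn't
    // the full 0..1 range, since that combination may not behave as expected.
    [RequireComponent(typeof(Camera))]
    public class PartialRectWarning : MonoBehaviour
    {
        Camera cam;
        bool warned;

        void Awake()
        {
            cam = GetComponent<Camera>();
        }

        void OnPreRender()
        {
            if (!warned && cam.targetTexture != null && cam.rect != new Rect(0f, 0f, 1f, 1f))
            {
                Debug.LogWarning("Camera renders to a RenderTexture with a partial rect; " +
                                 "this may not be supported on every version/platform.", this);
                warned = true;
            }
        }
    }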
     
  20. npatch

    npatch

    Joined:
    Jun 26, 2015
    Posts:
    155
    Take a look at CustomRenderTexture as well. It's a new feature in 2017.1.
     
    Arkade likes this.
  21. jobigoud

    jobigoud

    Joined:
    Apr 13, 2017
    Posts:
    8
    To clarify my previous message, rendering to a sub-window of the target does work if we use Camera.SetTargetBuffers. In this case the camera.rect field is correctly taken into account during rendering. Verified in 5.6 and 2017.1.

    So a workaround is, instead of assigning the camera's targetTexture, to call Camera.SetTargetBuffers with the texture's colorBuffer and depthBuffer as parameters.

    I would have thought that assigning targetTexture would just be a wrapper around the lower-level SetTargetBuffers, but it must have some extra logic that prevents camera.rect from being honoured when rendering to the target texture.
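
    A minimal sketch of that workaround (assuming the render texture was created with a depth buffer):

    Code (CSharp):
    using UnityEngine;

    // Bind the render texture's colour and depth buffers with SetTargetBuffers
    // instead of assigning targetTexture, so that camera.rect is honoured when
    // rendering into the texture.
    public class SetTargetBuffersWorkaround : MonoBehaviour
    {
        public Camera cam;
        public RenderTexture target;   // created with a depth buffer, e.g. 24 bits

        void Start()
        {
            cam.SetTargetBuffers(target.colorBuffer, target.depthBuffer);
            cam.rect = new Rect(0f, 0f, 0.5f, 1f);   // render into the left half only
        }
    }

    A disabled camera can also be driven manually with Camera.Render() after setting the buffers and rect.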
     
    Arkade likes this.
  22. simpleyuji

    simpleyuji

    Joined:
    Jun 11, 2017
    Posts:
    1
    Hello. I was able to get rendering to a specific viewport rect of the texture by changing the camera's "Rendering Path" setting from "Use Graphics Settings" to "Forward".
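
    For anyone who wants to make the same change from a script rather than the inspector, a small sketch:

    Code (CSharp):
    using UnityEngine;

    // Equivalent of the inspector change described above, done from a script.
    [RequireComponent(typeof(Camera))]
    public class ForceForwardPath : MonoBehaviour
    {
        void Awake()
        {
            GetComponent<Camera>().renderingPath = RenderingPath.Forward;
        }
    }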
     
  23. MaeL0000

    MaeL0000

    Joined:
    Aug 8, 2015
    Posts:
    35
    How about if you want to blit into a portion of a render texture? Say I have two textures, both x wide and y high, and I want to blit them side by side into a render texture with 2x width and y height.
     
  24. npatch

    npatch

    Joined:
    Jun 26, 2015
    Posts:
    155
    Graphics.CopyTexture, if your GPU supports it (check the notes on that page).
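
    For the side-by-side case asked about above, the region overload would look roughly like this (a sketch; it assumes both sources and the destination have compatible formats and that SystemInfo.copyTextureSupport permits the copy):

    Code (CSharp):
    using UnityEngine;

    // Copy two x-by-y textures side by side into a 2x-by-y render texture using
    // the region overload of Graphics.CopyTexture.
    public static class SideBySideCopy
    {
        public static void Copy(Texture left, Texture right, RenderTexture dst)
        {
            int w = left.width, h = left.height;
            // srcElement/srcMip and dstElement/dstMip are 0 for plain 2D textures.
            Graphics.CopyTexture(left,  0, 0, 0, 0, w, h, dst, 0, 0, 0, 0);
            Graphics.CopyTexture(right, 0, 0, 0, 0, w, h, dst, 0, 0, w, 0);
        }
    }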
     
  25. Kolyasisan

    Kolyasisan

    Joined:
    Feb 2, 2015
    Posts:
    324
    Necrobumping this thread: this is still a big issue that seemingly can't be solved with the built-in pipeline.

    In our case, Unity tells you that you are indeed rendering into a part of a texture defined by your camera's viewport rect, but in reality Unity creates a new render texture each time the rect is changed and simply blits it back into the target texture once it's done.

    That's just awful.
     
  26. npatch

    npatch

    Joined:
    Jun 26, 2015
    Posts:
    155
  27. Kolyasisan

    Kolyasisan

    Joined:
    Feb 2, 2015
    Posts:
    324
    No, I haven't. We tried to use the viewport clipping for dynamic resolution (as described by Intel in one of their 2011 papers) and CustomRenderTexture is not really suited for such a thing afaik.

    SRPs easily allow you to achieve this behaviour, but the built-in pipeline just grabs a new texture from a pool based on the camera's pixel width (which is based on pixelRect, which itself is based on rect). That's just sad.
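
    For what it's worth, the kind of lower-level control being referred to looks roughly like this with a command buffer (a sketch, not the built-in camera path):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // With a command buffer you can bind a render texture and restrict subsequent
    // draws to a viewport region, which is the behaviour the built-in camera path
    // doesn't expose directly.
    public class CommandBufferViewport : MonoBehaviour
    {
        public RenderTexture target;
        public Mesh mesh;           // e.g. a quad
        public Material material;

        void Start()
        {
            var cmd = new CommandBuffer { name = "Partial viewport draw" };
            cmd.SetRenderTarget(target);
            cmd.ClearRenderTarget(true, true, Color.black);

            // Draws after this call land only in the left half of the target.
            cmd.SetViewport(new Rect(0, 0, target.width / 2, target.height));
            cmd.DrawMesh(mesh, Matrix4x4.identity, material);

            Graphics.ExecuteCommandBuffer(cmd);
            cmd.Release();
        }
    }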
     
  28. broarty

    broarty

    Joined:
    Sep 28, 2020
    Posts:
    3
    I recently struggled with this and found this on an old thread. It saved my Mac build.
    https://answers.unity.com/questions/1266312/dynamic-viewport-for-a-camera-rendering-to-a-targe.html

    The workaround seems to be calling Camera.SetTargetBuffers(renderTexture.colorBuffer, renderTexture.depthBuffer) and then Camera.Render() on the camera with the specific viewport rect you are sending to the render texture.

    This is a workaround, but it might help someone else. For some reason I only encountered this problem on Mac and was able to draw to part of the texture on Windows without needing the scripting.
     