Larger renderTexture -> better performance? Weird behaviour

Discussion in 'Shaders' started by stanislavdol, Mar 25, 2020.

  1. stanislavdol

    Joined: Aug 3, 2013
    Posts: 282
    So, I noticed the following:
    I have 9 cameras rendering to 9 renderTextures (directly through the camera's target texture), which are displayed in a GUI as Raw Images.
    Everything is tested on an Android device at the same charge level, and these results are consistent (on this device).
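
    For reference, the setup is roughly this (a minimal sketch; the class and field names are placeholders, not my actual code):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.UI;

    public class CameraView : MonoBehaviour
    {
        public Camera viewCamera; // one of the 9 cameras
        public RawImage display;  // the RawImage it feeds in the GUI

        void Start()
        {
            // Create a render texture and assign it directly as the
            // camera's target texture, then display it via the RawImage.
            var rt = new RenderTexture(256, 256, 16);
            viewCamera.targetTexture = rt;
            display.texture = rt;
        }
    }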


    At first I made the RTs 256x256; the performance was 60 fps with occasional drops to 55 fps (without any RTs at all, the fps is a steady 60).
    So I decided to reduce them to 128x196, and the fps dropped to 45-50.
    I thought it was due to using non-square textures, so I reduced them to 128x128, which lowered the fps even further, to 40-45.

    After that, out of curiosity, I increased the renderTextures' resolution to 512x512, and it resulted in a steady 60 fps.
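
    For anyone reproducing this, one way to swap the RT resolution between tests (a hypothetical helper, not the project's actual code):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.UI;

    public static class RtResizer
    {
        // Swaps a camera's target texture for a new one of the given size.
        public static void SetResolution(Camera cam, RawImage display, int width, int height)
        {
            var old = cam.targetTexture;
            cam.targetTexture = null; // detach before releasing
            if (old != null)
                old.Release();

            var rt = new RenderTexture(width, height, 16);
            cam.targetTexture = rt;
            display.texture = rt;
        }
    }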

    Could anyone please explain the logic behind this? Why does rendering at higher resolutions result in better fps?
    Sorry in advance if I posted in the wrong category; this one seemed the most appropriate.
    Thank you
     
    Last edited: Mar 25, 2020
  2. bgolus

    Joined: Dec 7, 2012
    Posts: 12,329
    The reasons why a lower resolution might render slower than a higher one are complicated, but the short version can be summed up like this.

    GPUs are fast because they can do a lot of work in parallel. However, a big part of how they achieve that parallelism comes from the fact that they render to a grid of pixels. Each individual pixel has to process each triangle that overlaps it, one at a time. If one pixel is covered by a lot of triangles, that pixel takes much longer to render than if those triangles were spread out over a larger number of pixels with less overlap.

    In this very simplified version of a GPU rendering pipeline, imagine you have a 4x4 render target and 4 triangles. Let's say no triangle overlaps another, so each pixel only sees one triangle at a time. Now move all 4 triangles so they all overlap a single pixel. Rendering now takes roughly 4 times longer, because that one pixel has to process all 4 triangles in sequence. And if you shrink it down to a 1x1 render texture where all 4 triangles still overlap, it'll take basically as long as the 4x4 render texture with that 1 overlapped pixel.
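
    As a toy model of that counting argument (not how a real GPU schedules work, just the arithmetic in the example above):

    Code (CSharp):
    using System.Linq;

    static class ToyGpu
    {
        // Each pixel's cost = number of triangles covering it. Pixels render
        // in parallel, so a frame takes about as long as the worst pixel.
        public static int FrameCost(int[] trianglesPerPixel) => trianglesPerPixel.Max();
    }

    // 4x4 target, 4 triangles spread out: every pixel covered at most once, cost 1.
    // 4x4 target, all 4 triangles stacked over one pixel: that pixel sees 4, cost 4.
    // 1x1 target, all 4 triangles over its only pixel: cost 4, same as the stacked case.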

    In the first case where none of the triangles overlapped, each pixel only had to worry about 1 triangle, and all 16 pixels took the same time to render. They could even potentially render all at once in parallel. But the overlapping triangles can't.

    This ignores things like the cost of vertex calculations, rasterization, memory bandwidth usage, and tiled rendering on mobile, as well as the fact that individual pixels never truly run one at a time. But that's the basic problem: GPUs are fast because of parallel computing, and anything that increases the amount of serial work required makes them slower.
     
  3. stanislavdol

    Joined: Aug 3, 2013
    Posts: 282
    That's very interesting. Thanks for the detailed response.