Graphics blit in OnRenderImage into a low-resolution render texture

Discussion in 'Shaders' started by SupriyaRaul, Jul 31, 2020.

  1. SupriyaRaul

    Joined:
    Jun 20, 2018
    Posts:
    28
     I am trying to understand how a Unity camera renders into a low-resolution render texture (set as the camera's target texture) when I just use the Standard shader (the one that comes with a 3D model from the Asset Store). Does the number of fragments being shaded by the Standard shader depend on the native resolution or on the render texture's resolution? In other words, is the Standard shader's work reduced? And is there any performance difference between rendering directly into the low-resolution render texture by setting it in the inspector, versus using Graphics.Blit inside OnRenderImage to create the low-resolution render texture from the source?
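     For reference, here is a minimal sketch of the two setups I'm comparing (the class names, texture sizes, and temporary-texture handling are just illustrative):

     Code (CSharp):
     using UnityEngine;

     // Approach 1: the camera renders straight into a low-resolution
     // render texture assigned in the inspector (or created at runtime).
     public class DirectLowResCamera : MonoBehaviour
     {
         public RenderTexture lowResRT; // e.g. 320x180, assigned in the inspector

         void OnEnable()
         {
             GetComponent<Camera>().targetTexture = lowResRT;
         }
     }

     // Approach 2: the camera renders as usual, then OnRenderImage
     // downsamples the result with Graphics.Blit.
     public class BlitDownsample : MonoBehaviour
     {
         public int width = 320;   // illustrative low-res target size
         public int height = 180;

         void OnRenderImage(RenderTexture source, RenderTexture destination)
         {
             RenderTexture lowRes = RenderTexture.GetTemporary(width, height);
             Graphics.Blit(source, lowRes);      // downsample the camera image
             Graphics.Blit(lowRes, destination); // write it back out to the target
             RenderTexture.ReleaseTemporary(lowRes);
         }
     }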
     
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
    The number of fragments shaded is determined by the resolution of the render target. In the case of rendering to a render texture, it's the render texture's resolution.
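     To put rough numbers on that: a 1920x1080 target is about 2.07 million fragments per full-screen pass, while a 480x270 target is roughly 130 thousand, about a 16x reduction (ignoring overdraw).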

     Fragment shader stage, yes. The vertex shader stage does the same amount of work regardless of resolution, so mesh vertex count still matters, and at a certain point going to a lower resolution won't actually make it noticeably faster if the vertex count is the limiting factor. There's also some complicated stuff with how GPUs do rendering that can make going to a lower resolution slower than a higher resolution if you have a lot of very small (compared to the pixel "size") triangles, or if you've dropped to such a low resolution that the GPU can no longer take advantage of its highly parallel nature.

    No.
     
  3. SupriyaRaul

    Joined:
    Jun 20, 2018
    Posts:
    28
    You are always there to help @bgolus! Thank you so much! :)