
Single-pass MRT with Camera.SetTargetBuffers

Discussion in 'Shaders' started by metaleap, May 8, 2014.

  1. metaleap

    Joined:
    Oct 3, 2012
    Posts:
    589
    I have no clue how or when I need to call Unity 4.3's Camera.SetTargetBuffers to have that camera render the scene to two RenderTextures at once in a single geometry pass.

    All geometry in my test scene has a shader with multiple color outputs; that's not the issue.

    I have created two persistent RenderTexture assets (in the "Assets" folder, rather than at runtime in memory) and assigned them to the camera script below. Still, nothing ever gets rendered to these two RenderTextures with this simple script:

    Code (csharp):

        using UnityEngine;
        using System.Collections;

        [RequireComponent(typeof(Camera))]
        public class azCamMrtTest : MonoBehaviour
        {
            public Shader rtShader;

            public RenderTexture[] texes = new RenderTexture[2];
            private RenderBuffer[] bufs = new RenderBuffer[2];
            private Material mat;

            void OnDisable ()
            {
                if (mat != null) {
                    DestroyImmediate (mat);
                    mat = null;
                }
            }

            void OnEnable ()
            {
                if (mat == null) {
                    mat = new Material (rtShader);
                    mat.hideFlags = HideFlags.HideAndDontSave;
                }
            }

            void Start ()
            {
                // we use persistent RenderTexture assets for now, so the following 2 lines don't apply:
                // texes [0] = new RenderTexture (Screen.width, Screen.height, 24, RenderTextureFormat.ARGB32);
                // texes [1] = new RenderTexture (Screen.width, Screen.height, 24, RenderTextureFormat.ARGB32);
                bufs [0] = texes [0].colorBuffer;
                bufs [1] = texes [1].colorBuffer;
                camera.SetTargetBuffers (bufs, texes [0].depthBuffer);
            }

            void OnPreRender ()
            {
                camera.SetTargetBuffers (bufs, texes [0].depthBuffer);
            }

            void OnPostRender ()
            {
                camera.SetTargetBuffers (bufs, texes [0].depthBuffer);
            }

            void OnRenderImage (RenderTexture source, RenderTexture destination)
            {
                // is source equal to texes[0] or texes[1], or what?
                mat.SetTexture ("_Tex0", texes [0]);
                mat.SetTexture ("_Tex1", texes [1]);
                Graphics.Blit (source, destination, mat, 0);
            }
        }
    Last edited: May 8, 2014
  2. WhiskyJoe

    Joined:
    Aug 21, 2012
    Posts:
    143
    Why don't you just blit them both?

    Source is not the texture you are setting with SetTexture; source is what the camera has rendered, and destination is what gets presented to the screen.

    Code (csharp):

        Graphics.Blit (source, texes [0], mat, 0);
        Graphics.Blit (source, texes [1], mat, 0);
    There might be more elegant ways to do this, since both buffers would contain exactly the same data; perhaps a simple copy would suffice.

    Also, you're passing a depth buffer to SetTargetBuffers. Any reason for that?
     
  3. jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    The goal is not to have the same data in two buffers, I assume, but to render different data to two buffers in a single pass.

    I've never tried working with MRT in Unity, but I did with other tools. Can you post your pixel shader and the output structure returned from the pixel shader?

    What are the formats of the color buffer parts of the two RenderTexture instances? The number of bytes per pixel needs to be the same for each target.

    I don't think you can use OnRenderImage to do MRT. You'll just have to enable the camera and let it render. You can use OnRenderImage to combine the MRT to a single destination after rendering the camera.
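    To illustrate, a rough, untested sketch of that combine step (combineMat is a placeholder here for a material whose shader samples both textures, along the lines of your _Tex0/_Tex1 setup):

    Code (csharp):

        // On a normally rendering camera: combine the two MRT outputs into
        // the final image after the MRT camera has already rendered.
        // Untested sketch; combineMat is hypothetical.
        void OnRenderImage (RenderTexture source, RenderTexture destination)
        {
            combineMat.SetTexture ("_Tex0", texes [0]);
            combineMat.SetTexture ("_Tex1", texes [1]);
            Graphics.Blit (source, destination, combineMat, 0);
        }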
     
  4. metaleap
    Hi guys, thanks for your input.

    Exactly!

    Sure thing, it's a very simple vert+frag shader:

    Code (csharp):

        Shader "Custom/azMrtSurf" {
            Properties {
                _MainTex ("_MainTex", 2D) = "white" {}
            }

            CGINCLUDE

            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct PixelOutput {
                float4 col0 : COLOR0;
                float4 col1 : COLOR1;
            };

            sampler2D _MainTex;

            v2f vert(appdata_img v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord.xy;
                return o;
            }

            PixelOutput frag(v2f pixelData) {
                PixelOutput o;
                o.col0 = float4(tex2D(_MainTex, pixelData.uv).rgb, 1.0);
                o.col1 = float4(0.0f, 1.0f, 1.0f, 1.0f);
                return o;
            }

            ENDCG

            Subshader {
                Pass {
                    Fog { Mode off }

                    CGPROGRAM
                    #pragma target 3.0
                    #pragma glsl
                    #pragma exclude_renderers d3d9 d3d11 d3d11_9x xbox360 ps3 flash

                    #pragma fragmentoption ARB_precision_hint_fastest
                    #pragma vertex vert
                    #pragma fragment frag
                    ENDCG
                }
            }

            Fallback off
        }
    Basically stole this from another MRT thread, but I know my Cg. ;) There are plenty of examples out there for old-school MRT via Graphics.SetRenderTarget, but that only helps for things like Graphics.DrawMeshNow etc. For a Camera's own smarter scene rendering, with its own ordering, culling, etc., we have had Camera.SetTargetBuffers since 4.3, but I can find no documentation on how or when it needs to be called.
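    For reference, the old-school path goes roughly like this (untested sketch; mrtMaterial, someMesh and someTransform are placeholders, not from my project):

    Code (csharp):

        // Old-school MRT: bind both color buffers manually, then draw
        // immediately. This only works for immediate-mode drawing, not
        // for a Camera's own scene rendering.
        RenderBuffer[] colors = { texes [0].colorBuffer, texes [1].colorBuffer };
        Graphics.SetRenderTarget (colors, texes [0].depthBuffer);
        GL.Clear (true, true, Color.black);
        mrtMaterial.SetPass (0);
        Graphics.DrawMeshNow (someMesh, someTransform.localToWorldMatrix);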

    So the two render-textures are both shown in Unity's Inspector panel as RGBA 32bit.

    Sure, that's exactly what I tried to do above. ;) In OnRenderImage I expect both render textures to be filled with my shader outputs, so that both can be fed to another shader that takes in and neatly combines the passed render textures (_Tex0 and _Tex1).
     
  5. jvo3dc
    All seems fine then. You say nothing gets rendered at all. What happens if you only set a single target? If it does render in that case, I wonder whether SetTargetBuffers with multiple color buffers is functioning properly.
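    For that single-target test, something like this should do (untested sketch, using the single-buffer overload of the same call):

    Code (csharp):

        // Bind just one color buffer to check whether SetTargetBuffers
        // works at all before trying multiple render targets.
        camera.SetTargetBuffers (texes [0].colorBuffer, texes [0].depthBuffer);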
     
  6. WhiskyJoe
    Disregard my answer, I need to learn how to read properly >_>

    Perhaps OnRenderObject is what you are looking for. It's called when the scene has been rendered, but then again, so is OnPostRender.
     
  7. metaleap
    Seems like this can all work with two cameras: one solely responsible for off-screen geometry rendering into the target buffers, and another whose sole purpose is to first render nothing at all (except the clear color) into its own (self-created, in-memory, never-used) render texture, quite uselessly, JUST so that I can then hook OnRenderImage on it and there use a shader that reads my target buffers and finally combines them. Oh well ;)
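    Roughly, the setup described above (untested sketch; mrtCam and dummyCam are placeholder names):

    Code (csharp):

        // mrtCam draws the scene geometry off-screen into both target buffers:
        mrtCam.SetTargetBuffers (bufs, texes [0].depthBuffer);

        // dummyCam draws nothing (culling mask 0) into a throwaway render
        // texture, purely so that its OnRenderImage gets called and can
        // combine texes[0] and texes[1] with the combining shader:
        dummyCam.cullingMask = 0;
        dummyCam.targetTexture = new RenderTexture (Screen.width, Screen.height, 24);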
     
  8. jvo3dc
    So, you're saying it doesn't work when applied to the main camera?

    That makes some sense. In DirectX 9, for example, you are always stuck with the main backbuffer for the final result; if I remember correctly, you can't even change the main backbuffer to an HDR format. And I'm pretty sure you at least can't set the main backbuffer to be multiple targets. In DirectX 11, some of these restrictions have been relaxed.

    So, considering that the main backbuffer is attached to the main camera, and so are Unity's HDR and deferred (MRT) paths, I can imagine you're not supposed to overwrite the main camera's target. Best to create a new camera for these purposes and just copy the main camera's settings into it.
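    Something along these lines, for example (untested sketch; Camera.CopyFrom copies the source camera's settings over):

    Code (csharp):

        // Clone the main camera's settings onto a new camera and redirect
        // only the clone to the MRT buffers, leaving the main camera alone.
        GameObject go = new GameObject ("MrtCamera");
        Camera mrtCam = go.AddComponent<Camera> ();
        mrtCam.CopyFrom (Camera.main);
        mrtCam.SetTargetBuffers (bufs, texes [0].depthBuffer);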