
Is Single Pass Stereo rendering supported for RenderTextures?

Discussion in 'General Graphics' started by cmandrews, Jul 26, 2017.

  1. cmandrews

    I'm trying to manually render a VR camera into a double-wide RenderTexture, the same way the main camera would render with VR enabled. I don't see any documentation stating that this isn't supported, but it doesn't appear to work. I upgraded to Unity 2017.1.0f3 because RenderTextureDescriptor.vrUsage seemed promising, but it doesn't seem to do anything; the camera always renders just one eye. Here's a snippet I'm using to test this:

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.VR;

    public class StereoTest : MonoBehaviour {
        private Camera m_Camera;

        void Awake() {
            m_Camera = GetComponent<Camera>();

            RenderTextureDescriptor desc = new RenderTextureDescriptor(
                VRSettings.eyeTextureWidth * 2,
                VRSettings.eyeTextureHeight,
                RenderTextureFormat.Default,
                24);
            desc.vrUsage = VRTextureUsage.TwoEyes;
            RenderTexture texture = new RenderTexture(desc);
            texture.name = "Stereo RenderTexture";
            texture.Create();
            m_Camera.targetTexture = texture;
        }

        void LateUpdate() {
            // Tested enabling UNITY_SINGLE_PASS_STEREO because it wasn't being set by the camera.
            // Shader.EnableKeyword("UNITY_SINGLE_PASS_STEREO");
            m_Camera.Render();
            // Shader.DisableKeyword("UNITY_SINGLE_PASS_STEREO");
        }
    }
     
  2. cmandrews

    I've still not been able to find an answer to this. Not sure what else I can try here.
     
  3. Nekativ

    I was also unable to find an answer to this problem.

    My solution was to create left and right render textures separately and combine them into one large render texture using Graphics.Blit. The texture you blit onto must be twice the width of your left and right textures.

    C# Code
    Code (CSharp):
    // Merge left and right textures into one stereo texture
    RenderTexture lastActive = RenderTexture.active;
    Graphics.Blit(m_TextureLeft, m_Texture, material, 0);
    Graphics.Blit(m_TextureRight, m_Texture, material, 1);
    RenderTexture.active = lastActive;
    Blit Shader
    Code (CSharp):
    Shader "Nekativ/StereoAppend" {
        Properties {
            _MainTex ("", 2D) = "white" {}
        }
        CGINCLUDE
        #include "UnityCG.cginc"

        sampler2D _MainTex;

        struct v2f
        {
            float4 vertex : SV_POSITION;
            float2 uv : TEXCOORD0;
        };

        v2f vert (appdata_base v)
        {
            v2f o;
            o.vertex = UnityObjectToClipPos(v.vertex);
            o.uv = v.texcoord;
            return o;
        }
        ENDCG

        SubShader {
            ZTest Always Cull Off ZWrite Off

            // Blit left half
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                fixed4 frag (v2f i) : SV_Target
                {
                    clip(0.5 - i.uv.x);
                    fixed4 c = tex2D(_MainTex, float2(i.uv.x * 2, i.uv.y));
                    return c;
                }
                ENDCG
            }

            // Blit right half
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                fixed4 frag (v2f i) : SV_Target
                {
                    clip(i.uv.x - 0.5);
                    fixed4 c = tex2D(_MainTex, float2(i.uv.x * 2 - 1, i.uv.y));
                    return c;
                }
                ENDCG
            }
        }
        FallBack "Diffuse"
    }
     
    Last edited: Aug 2, 2017
  4. cmandrews

    Thank you for the response, Nekativ!

    I will likely go a similar route to what you have done, but I was also hoping to harness the performance benefits of single pass stereo. If one of the Unity devs could chime in on whether or not this is supposed to be supported, that would be lovely. And if it's not meant to be supported, what is the purpose of RenderTextureDescriptor.vrUsage?
     
  5. jjxtra

    This is currently a bug / limitation in Unity: only cameras that Unity renders automatically (not via a manual Camera.Render call) support single pass stereo. Seems odd, but it is what it is.
     
  6. EthnoTekhBrad

    I'm experiencing this same issue: I'm trying to render a mask with a separate camera, but no matter how I try to wrangle it, it doesn't work. Hopefully it's fixed soon.
     
  7. Fewes

    It seems that this is still not fixed. What is the point of the RenderTextureDescriptor.vrUsage flag if it doesn't do anything? Whenever a camera's targetTexture is set to a RenderTexture, Camera.stereoEnabled returns false, regardless of the vrUsage flag on the RenderTexture. This basically makes it impossible to implement single-pass stereo reflections and other effects.
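    As a minimal sketch of the behavior described above (the component layout and the use of XRSettings.eyeTextureDesc are our assumptions, not code from any poster):

    ```
    using UnityEngine;
    using UnityEngine.XR;

    // Attach to a camera in a VR project to observe stereoEnabled flipping
    // to false once a targetTexture is assigned.
    public class StereoEnabledRepro : MonoBehaviour {
        void Start() {
            Camera cam = GetComponent<Camera>();
            Debug.Log(cam.stereoEnabled); // true while targetTexture is null (in VR)

            RenderTextureDescriptor desc = XRSettings.eyeTextureDesc;
            desc.vrUsage = VRTextureUsage.TwoEyes; // has no apparent effect here
            cam.targetTexture = new RenderTexture(desc);
            Debug.Log(cam.stereoEnabled); // false once a target is assigned
        }
    }
    ```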
     
  8. RohitNutalapati

    Still no fix.

    For anyone trying to reproduce this bug:

    Code (CSharp):
    using System.IO;
    using UnityEngine;
    using UnityEngine.XR;

    public class cam_test : MonoBehaviour {

        RenderTexture clientRenderTexture;
        private Camera CAM;

        void Start () {
            CAM = this.GetComponent<Camera>();
            clientRenderTexture = new RenderTexture(XRSettings.eyeTextureDesc);
            Debug.Log("XR loaded sdk - " + XRSettings.loadedDeviceName + " -- " + XRSettings.isDeviceActive);
            clientRenderTexture.vrUsage = VRTextureUsage.TwoEyes;
            //CAM.targetTexture = clientRenderTexture;
        }

        void Update () {
            // Press X to capture the render texture to disk
            if (Input.GetKeyDown("x")) {
                Texture2D s = toTexture2D(clientRenderTexture);
                File.WriteAllBytes(Application.dataPath + "/SavedScreen.png", s.EncodeToPNG());
            }
        }

        Texture2D toTexture2D(RenderTexture rTex)
        {
            Texture2D tex = new Texture2D(rTex.width, rTex.height, TextureFormat.RGB24, false);
            RenderTexture.active = rTex;
            tex.ReadPixels(new Rect(0, 0, rTex.width, rTex.height), 0, 0);
            tex.Apply();
            return tex;
        }

        private void OnPreRender()
        {
            Debug.Log("eye - " + CAM.stereoActiveEye);
        }
    }
    In single pass stereo, the above code should keep logging "eye - left"; in multi pass stereo, "eye - left" followed by "eye - right".

    Uncommenting the following line:
    Code (CSharp):
    //CAM.targetTexture = clientRenderTexture;
    will start logging "eye - mono".

    Clearly, assigning the render texture breaks the stereo render mode of the camera.

    Interestingly, if you enable HDR on the camera, it renders two copies of the same mono output side by side... I don't know what that's about.

    Tested on Unity 2018.1 and 2018.3

    Hope this helps!
     
  9. themdubs

    Figured I'd bump this as I'm having the same problem. Would love to have this implemented!
     
  10. Mutimir

  11. EwieElektro

    [push]
    The problem still exists. That sucks, @Unity :(
     
  12. Gaulois94

    The problem still exists...
     
  13. bricefr

    Still the case... almost 3 years later...
     
  14. ceitel

    I was attempting to migrate from multi-pass to single-pass, since the WMR plugin now only supports single-pass, and hit the same problem. Camera.Render() should use the same rendering properties (single-pass stereo), but it will only render the left eye, no matter the "targetEye".
     
  15. DaffyTheBunny

    I've been trying to get my reflections in VR to render in a single pass too. Then I found this thread.
    It's so disappointing that this doesn't work.
    Is there anything we can do to encourage making this possible?
     
  16. Gerard_Slee

    Also having this issue. Has anyone figured out how to set a target texture and have the camera render both eyes to that texture?
     
  17. Tom-Goethals

    Same issue here! Where do we vote to get this finally fixed? I need multi-view render textures so we can finally implement portals on Quest without reverting to slow multi-pass rendering.
     
  18. TomGoethals

  19. Mutimir

    This is not an issue anymore. I don't have time right now to write a complete guide, but you can definitely do post-processing effects in single pass stereo on Quest 1 and 2. The details differ by Unity version if you are using the Universal Render Pipeline, as Unity is deprecating the command buffer blit. The way you define render textures is fine; you have probably just not used
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(output);
    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
    in your shader code. Keep reading the documentation, and/or pull apart some of Unity's own post-processing effects for guidance.
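    For reference, a minimal sketch of where those macros go in a stereo-aware blit shader, assuming the built-in pipeline's UnityCG.cginc macros (the shader name is a placeholder):

    ```
    Shader "Example/StereoAwareBlit" {
        Properties {
            _MainTex ("", 2D) = "white" {}
        }
        SubShader {
            ZTest Always Cull Off ZWrite Off
            Pass {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                // Declares a Texture2DArray when single-pass instanced
                // rendering is active, a plain sampler2D otherwise.
                UNITY_DECLARE_SCREENSPACE_TEXTURE(_MainTex);

                struct appdata {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                    UNITY_VERTEX_INPUT_INSTANCE_ID
                };

                struct v2f {
                    float4 vertex : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    UNITY_VERTEX_OUTPUT_STEREO
                };

                v2f vert (appdata v) {
                    v2f o;
                    UNITY_SETUP_INSTANCE_ID(v);
                    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o); // pass eye index along
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.uv = v.uv;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target {
                    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i); // select the eye's slice
                    return UNITY_SAMPLE_SCREENSPACE_TEXTURE(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }
    ```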
     
  20. Mutimir

    And don't follow old tutorials on this stuff, as a lot has changed in the last 3 years.
     
  21. TomGoethals

    I will look into it further, but if I just look at the raw render texture (from the frame debugger), before it goes into any shader as a texture, that render texture (or array) is not stereo when rendered from a second camera. I'm not doing post-processing effects, where I think you use the main camera, which IS stereo-enabled, but using an offscreen camera set to render to a texture. That's specifically where it goes wrong, I think. I would like to be proven wrong, though. Or maybe the whole thing needs to be implemented as a post-processing effect instead of using a second camera, but damn, that would be so much work, if it's even possible.
     
  22. Mutimir

    The frame debugger isn't really up to date with VR: while it will show you the draw calls correctly, the textures you see in the frame debugger aren't always the same as what is in the headset. If you have a multiple-camera setup and you are working with URP, I would advise you to research command buffers. Those are really all over the place depending on the Unity version; the proper way of defining a VR-ready render target there is:
    new RenderTargetIdentifier(textureIdentifierString, 0, CubemapFace.Unknown, -1)
    But even that might differ between versions.

    I know it seems like Unity ruined everything by removing surface shaders and constantly changing rendering stuff, but I really think they are doing a great job moving the engine forward; we are just stuck in interesting times. For example, I know that you now don't need to load render textures into the Quest's tile-based GPU memory when blitting from one to another, which is a huge improvement, because reading from tile-based memory is slow as F***.
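    To sketch how that RenderTargetIdentifier is typically used inside a command buffer (the temporary texture name and the use of XRSettings.eyeTextureDesc are our assumptions, and exact APIs vary by Unity/URP version):

    ```
    // Inside a render pass that has a CommandBuffer 'cmd':
    // depthSlice = -1 binds every slice of the texture array (i.e. both
    // eyes), so single-pass rendering writes to both in one go.
    int tempId = Shader.PropertyToID("_TempStereoRT"); // name is hypothetical
    cmd.GetTemporaryRT(tempId, XRSettings.eyeTextureDesc);
    var target = new RenderTargetIdentifier(tempId, 0, CubemapFace.Unknown, -1);
    cmd.SetRenderTarget(target);
    // ... issue draws here, then release the temporary texture:
    cmd.ReleaseTemporaryRT(tempId);
    ```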