Rendering with screenspace texture: failure to match positions in camera target and RenderTexture

Discussion in 'VR' started by dimatomp, Feb 20, 2017.

  1. dimatomp
    Joined: Oct 20, 2016
    Posts: 16
    I am trying to perform a somewhat non-trivial rendering job in the following way:
    1. create an auxiliary RenderTexture of size (Screen.width) x (Screen.height),
    2. draw a mesh into it,
    3. and, finally, draw part of the texture on screen (with every position in the RenderTexture matching the same pixel position in the camera's render target).
    Here is a minimal demonstration of how I tried to put this into practice.
    First, the code that creates the texture to render into and adds the command buffer to the Camera:
    Code (CSharp):
    // Requires: using UnityEngine; using UnityEngine.Rendering;
    private RenderTexture _texture;
    [SerializeField] private Material _quadMaterial; // Plain red color

    void Start()
    {
        // Auxiliary texture matching the screen size
        _texture = new RenderTexture(Screen.width, Screen.height, 0);
        var quad = GameObject.Find("Quad");
        quad.GetComponent<MeshRenderer>().material.mainTexture = _texture;

        // Command buffer that clears the texture to green and draws a red quad into it
        var cmdBuffer = new CommandBuffer();
        cmdBuffer.SetRenderTarget(_texture);
        cmdBuffer.ClearRenderTarget(true, true, Color.green);
        var mesh = new Mesh
        {
            vertices = new[] { new Vector3(-0.25f, -0.5f), new Vector3(-0.25f, 0.5f), new Vector3(0.25f, 0.5f), new Vector3(0.25f, -0.5f) },
            triangles = new[] { 0, 1, 2, 0, 2, 3 }
        };
        cmdBuffer.DrawMesh(mesh, quad.transform.localToWorldMatrix, _quadMaterial);
        Camera.main.AddCommandBuffer(CameraEvent.BeforeForwardAlpha, cmdBuffer);
    }
    For drawing the texture on screen, I use the following shader in the Quad's material:
    Code (CSharp):
    Shader "Unlit/ScreenspaceTexture"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "RenderType"="Transparent" "Queue"="Transparent" }
            LOD 100

            Pass
            {
                ZWrite Off
                Blend SrcAlpha OneMinusSrcAlpha

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma target 3.0

                #include "UnityCG.cginc"

                sampler2D _MainTex;

                float4 vert (float4 v: POSITION): SV_POSITION
                {
                    return mul(UNITY_MATRIX_MVP, v);
                }

                fixed4 frag (float4 screenPos: SV_Position): SV_Target
                {
                    // SV_Position arrives in pixel coordinates; _ScreenParams.zw is (1 + 1/width, 1 + 1/height),
                    // so multiplying and subtracting screenPos.xy divides by the screen size to get 0..1 UVs.
                    screenPos += float4(0.5, 0.5, 0, 0);
                    float2 texturePos = screenPos.xy * _ScreenParams.zw - screenPos.xy;
                    return tex2D(_MainTex, float2(0, 1) + float2(1, -1) * texturePos);
                }
                ENDCG
            }
        }
    }
    This example should draw a red rectangle on a green background and place it exactly in the center of the quad.
    On a real HoloLens, however, the red rectangle (with its height matching the quad's) appears slightly higher on the quad, leaving a green margin at the bottom.
    In the real project, I don't see any alternative to using an auxiliary texture, because I don't want to touch the camera target's depth buffer while I use depth in the RenderTexture. By the way, I chose the multi-pass stereo rendering method to make this technique possible.
    Does anyone see what I am doing wrong in the code above? It seems like there is some VR-specific render target property I am not aware of.
     
    Last edited: Mar 9, 2017
  2. Unity_Wesley
    Unity Technologies
    Joined: Sep 17, 2015
    Posts: 558
    Can you provide more information about the project? It would be helpful to know how you have your scene set up and which version of Unity you are using.
     
  3. dimatomp
    Joined: Oct 20, 2016
    Posts: 16
    Thank you for the response!
    I have attached the minimal project described in my post above.
    The Unity version is 5.5.1p3.
    It would also be interesting to know whether the same screen-space effect can be done with the single-pass setup.
     


  4. Unity_Wesley
    Unity Technologies
    Joined: Sep 17, 2015
    Posts: 558
    I see the same thing you are seeing in 5.5; in 5.6 the behavior is very different, which is interesting. I will ask around to see if anyone else can spot any issues with the code.
     
  5. Unity_Wesley
    Unity Technologies
    Joined: Sep 17, 2015
    Posts: 558
    Still investigating to see what we find.
     
  6. dimatomp
    Joined: Oct 20, 2016
    Posts: 16
    Is the difference in behaviour introduced by an API change? If not, I am going to file a bug report on this for convenience.
     
  7. dimatomp
    Joined: Oct 20, 2016
    Posts: 16
    Case 885448
     
  8. Unity_Wesley
    Unity Technologies
    Joined: Sep 17, 2015
    Posts: 558
    Thank you for submitting a bug report; I have passed it to the team to investigate.
     
  9. dimatomp
    Joined: Oct 20, 2016
    Posts: 16
    I have also noticed that in the fragment program, screenPos.zw is not always (0, 1). When I insert "screenPos /= screenPos.w;", I get some weird behaviour on desktop, too. I wonder whether I have to take those two coordinates into account for the shader to work properly on all devices.
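    For reference, here is a minimal sketch of the more conventional way to get screen-space UVs - passing the result of ComputeScreenPos() from the vertex shader and dividing by w in the fragment shader - assuming the same Quad/_MainTex setup as in my first post (I have not verified this on HoloLens):
    Code (CSharp):
    Shader "Unlit/ScreenspaceTextureComputeScreenPos"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "RenderType"="Transparent" "Queue"="Transparent" }

            Pass
            {
                ZWrite Off
                Blend SrcAlpha OneMinusSrcAlpha

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _MainTex;

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float4 screenUV : TEXCOORD0; // clip-space position converted by ComputeScreenPos
                };

                v2f vert (float4 v : POSITION)
                {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v);
                    // ComputeScreenPos keeps w, so the perspective divide is done per pixel in frag
                    o.screenUV = ComputeScreenPos(o.pos);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // Divide by w after interpolation to get 0..1 screen UVs
                    float2 uv = i.screenUV.xy / i.screenUV.w;
                    return tex2D(_MainTex, uv);
                }
                ENDCG
            }
        }
    }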
     
  10. Unity_Wesley
    Unity Technologies
    Joined: Sep 17, 2015
    Posts: 558
    Let us know what you find out.
     
  11. dimatomp
    Joined: Oct 20, 2016
    Posts: 16
    Finally, I have done some graphics debugging on HoloLens, and here is what I found.
    Below are two screenshots from the VS graphics debugger. The first one is for the command buffer stage; the next one is for the camera's alpha rendering pass.

    I have highlighted the "y" coordinates of the quad vertices in both of those stages. As you can see, the input vertex buffers are identical, because I changed the code so that the quads coincide. In spite of this, I get different outputs from the same vertex shader when it runs at different stages of rendering.
    I have looked up the constant buffers the GPU used, and here, perhaps, is the cause of the app's resulting behaviour:
    Code (CSharp):
    #,float4,float4 (command buffer)
    "0","{+3.7514043, +0.15385284, -3.5147241e-05, +0.041316763}","{+0.0052874619, -6.2493477, -0.00029377319, +0.34534025}"
    "1","{-0.21135402, +2.2566321, -0.00079756766, +0.93756759}","{+0.16183814, +0.56201142, +0.85076952, -0.054499686}"

    #,float4,float4 (alpha rendering pass)
    "0","{+3.7514043, -0.15683173, -3.5147241e-05, +0.041316763}","{+0.0052874619, +6.2244492, -0.00029377319, +0.34534025}"
    "1","{-0.21135402, -2.3242295, -0.00079756766, +0.93756759}","{+0.16183814, -0.55808204, +0.85076952, -0.054499686}"
    This variable is probably glstate_matrix_mvp, because it is the only one used in the vertex shader code. It looks like the MVP matrix that gets passed to the GPU depends on the rendering stage at which the draw call is performed.
    I am wondering what the practical reason for this difference is. Is there a way to derive the second matrix from the first one in the vertex shader code?
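    As a side note, here is a minimal sketch of how the two matrices could be compared from a script, assuming GL.GetGPUProjectionMatrix reflects what gets uploaded here (its renderIntoTexture flag is what toggles the Y flip on D3D-like platforms; the class name is just for illustration):
    Code (CSharp):
    using UnityEngine;

    public class ProjectionMatrixDump : MonoBehaviour
    {
        void Start()
        {
            Camera cam = Camera.main;
            // The projection matrix as Unity stores it (OpenGL-style convention)
            Matrix4x4 proj = cam.projectionMatrix;
            // What is actually uploaded when rendering to the backbuffer...
            Matrix4x4 gpuBackbuffer = GL.GetGPUProjectionMatrix(proj, false);
            // ...versus when rendering into a RenderTexture (the Y row is negated on D3D-like platforms)
            Matrix4x4 gpuRenderTexture = GL.GetGPUProjectionMatrix(proj, true);
            Debug.Log("Backbuffer:\n" + gpuBackbuffer);
            Debug.Log("RenderTexture:\n" + gpuRenderTexture);
        }
    }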
     
    Last edited: Mar 9, 2017
  12. dimatomp
    Joined: Oct 20, 2016
    Posts: 16
    And here is a workaround for 5.5.x versions.
    Texture shader:
    Code (CSharp):
    Shader "Unlit/PlainRed"
    {
        Properties {}
        SubShader
        {
            Pass
            {
                // Flipping Y also flips the triangle winding, so cull front faces instead of back faces
                Cull Front

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                float4 vert (float4 v: POSITION): SV_POSITION
                {
                    // Negate the Y row of the projection matrix to undo the render-texture flip
                    float4x4 projMatrix = UNITY_MATRIX_P;
                    projMatrix._m11 = -projMatrix._m11;
                    return mul(projMatrix, mul(UNITY_MATRIX_MV, v));
                }

                fixed4 frag (): SV_Target
                {
                    return fixed4(1, 0, 0, 1);
                }
                ENDCG
            }
        }
    }
    Screenspace shader fragment function:
    Code (CSharp):
    fixed4 frag (float4 screenPos: SV_Position): SV_Target
    {
        float2 texturePos = screenPos.xy * _ScreenParams.zw - screenPos.xy;
        return tex2D(_MainTex, texturePos);
    }
    Thank you for providing me with a joyful hacking challenge! :D Now I've got to find a workaround for 5.6.x - maybe the solution will be more straightforward on a newer version of Unity...
    I am also still interested in whether this screen-space thing can be done with the single-pass rendering method - I will try to figure it out via graphics debugging soon.
     
  13. Unity_Wesley
    Unity Technologies
    Joined: Sep 17, 2015
    Posts: 558
    Sweet, thank you for the update.

    I will make a note in the bug.
     
  14. BrandonFogerty
    Joined: Jan 29, 2016
    Posts: 83
    Hi @dimatomp,

    The eye texture is flipped on HoloLens, so your screen-space UV's y component needs to be flipped as well. We have a built-in shader variable called "_ProjectionParams" that will help. The x component of this variable tells you whether the active render texture is y-flipped. The following modification to your shader will make it work regardless of platform/rendering API.

    Code (CSharp):
    Shader "Unlit/Widget/ScreenspaceTexture"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "RenderType"="Transparent" "Queue"="Transparent" }
            LOD 100

            Pass
            {
                ZWrite Off

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma target 3.0

                #include "UnityCG.cginc"

                sampler2D _MainTex;

                float4 vert (float4 v: POSITION) : SV_POSITION
                {
                    return UnityObjectToClipPos(v);
                }

                fixed4 frag (float4 screenPos : SV_Position): SV_Target
                {
                    screenPos += float4(0.5, 0.5, 0, 0);
                    float2 texturePos = screenPos.xy * _ScreenParams.zw - screenPos.xy;
                    // _ProjectionParams.x is +1 or -1 depending on whether the projection is flipped,
                    // so this picks either texturePos.y or 1 - texturePos.y accordingly.
                    texturePos.y = lerp(texturePos.y, 1.0 - texturePos.y, _ProjectionParams.x * 0.5 + 0.5);
                    return tex2D(_MainTex, texturePos);
                }
                ENDCG
            }
        }
    }
     
  15. mordechai30
    Joined: Jun 18, 2017
    Posts: 3
    Graphics.Blit doesn't work when "Single Pass Instanced" is chosen.
    It fails regardless of which kind of shader is used during the call: built-in Unity shaders don't work, and neither do custom ones. The custom shaders were written in accordance with all the instructions from this Unity doc:
    https://docs.unity3d.com/Manual/SinglePassStereoRenderingHoloLens.html

    Thanks :)

    P.S. The bug was tested on Unity 2017.1. By built-in shaders I mean even the most trivial case, such as the default Blit call without a material supplied as the third method parameter.
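    For what it's worth, here is a minimal sketch of a blit-style shader following the stereo-instancing macro pattern described on the page linked above; whether these exact macros (UNITY_DECLARE_SCREENSPACE_TEXTURE and friends) are available in 2017.1 is an assumption worth double-checking:
    Code (CSharp):
    Shader "Unlit/StereoInstancedBlit"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Pass
            {
                ZTest Always Cull Off ZWrite Off

                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                // Declared through the macro so it becomes a texture array under single-pass instanced
                UNITY_DECLARE_SCREENSPACE_TEXTURE(_MainTex);

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                    UNITY_VERTEX_INPUT_INSTANCE_ID
                };

                struct v2f
                {
                    float4 vertex : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    UNITY_VERTEX_OUTPUT_STEREO
                };

                v2f vert (appdata v)
                {
                    v2f o;
                    UNITY_SETUP_INSTANCE_ID(v);
                    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.uv = v.uv;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // Selects the correct eye slice before sampling
                    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
                    return UNITY_SAMPLE_SCREENSPACE_TEXTURE(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }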
     
  16. pdmendki
    Joined: Aug 18, 2017
    Posts: 1
    @mordechai30 Were you able to find any workaround to get Graphics.Blit() working with single-pass stereo rendering?