The result of ComputeScreenPos differs between VR's left-eye and right-eye images.

Discussion in 'AR/VR (XR) Discussion' started by tigerwoodsisao, Jul 15, 2018.

  1. tigerwoodsisao

    Joined: Mar 8, 2015
    Posts: 17

    Hi,

    In Unity 2018.2, the Stereo Rendering Method is set to Multi Pass.

    I displayed the attached image (modo_uv_checker.jpg) on a Plane with the following shader code, but the results seen by the left eye and the right eye in VR differ, as shown in the attached image (uvbug.PNG).

    This seems to indicate that ComputeScreenPos is handled differently for the left eye and the right eye. That was not the case with Unity 2017.

    Is this a bug?


    Code (CSharp):
    // Struct and sampler declarations are assumed here; they were not shown in the original post.
    struct appdata
    {
        float4 vertex : POSITION;
    };

    struct v2f
    {
        float4 vertex : SV_POSITION;
        float4 screenPos : TEXCOORD0;
    };

    sampler2D _Test;

    v2f vert(appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.screenPos = ComputeScreenPos(o.vertex);
        return o;
    }

    fixed4 frag(v2f i) : SV_Target
    {
        // Perspective divide to get 0..1 screen UVs.
        float2 screenUV = i.screenPos.xy / i.screenPos.w;
        return tex2D(_Test, screenUV);
    }

    Attached Files: modo_uv_checker.jpg, uvbug.PNG

  2. BrandonFogerty

    Joined: Jan 29, 2016
    Posts: 83

    Hi @tigerwoodsisao,

    There was no change in the implementation of ComputeScreenPos between versions 2017 and 2018.2. You can download the built-in shader code for both versions from the archive and compare:
    https://unity3d.com/get-unity/download/archive

    ComputeScreenPos is designed to help you take a clip-space position and convert it to a screen-space-friendly 0-to-1 range, which is useful for various fragment shader operations.
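
    For reference, the non-stereo path looks like this in UnityCG.cginc (taken from the built-in shader source linked above; the comment is mine):

    Code (CSharp):
    inline float4 ComputeNonStereoScreenPos(float4 pos) {
        // Remap clip-space xy from [-w, w] to [0, w]; the caller divides by w to get [0, 1].
        float4 o = pos * 0.5f;
        o.xy = float2(o.x, o.y * _ProjectionParams.x) + o.w;
        o.zw = pos.zw;
        return o;
    }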

    In a stereo context, ComputeScreenPos behaves differently if and only if you are using the single-pass double-wide stereo rendering method. This is because, in single-pass double-wide mode, you render into a single texture that is twice the width of a single eye texture. Therefore, to calculate the screen position, we need to account for the width and offset of each eye.

    Code (CSharp):
    inline float4 ComputeScreenPos(float4 pos) {
        float4 o = ComputeNonStereoScreenPos(pos);
    #if defined(UNITY_SINGLE_PASS_STEREO)
        o.xy = TransformStereoScreenSpaceTex(o.xy, pos.w);
    #endif
        return o;
    }
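
    TransformStereoScreenSpaceTex is what applies the per-eye remapping. It looks roughly like this in the built-in shader source (unity_StereoScaleOffset holds each eye's scale and offset into the double-wide texture; check your downloaded version for the exact definition):

    Code (CSharp):
    inline float2 TransformStereoScreenSpaceTex(float2 uv, float w)
    {
        // Squeeze uv into the current eye's half of the double-wide texture.
        float4 scaleOffset = unity_StereoScaleOffset[unity_StereoEyeIndex];
        return uv.xy * scaleOffset.xy + scaleOffset.zw * w;
    }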
    However, you said you are using multi-pass. When using multi-pass, ComputeScreenPos behaves the same as it would if you were not rendering in stereo. The input to ComputeScreenPos, however, depends on the output of UnityObjectToClipPos, so I believe the offset is coming from UnityObjectToClipPos.

    UnityObjectToClipPos transforms the object-space position into a view-space position and then into a clip-space position. The view matrix is slightly different per eye to account for the interocular distance between the two eye positions; that is why game objects appear in slightly different locations from the perspective of each eye. The same per-eye view matrix that applies a horizontal offset to the game objects in your scene is applied when you use the output of UnityObjectToClipPos as the input to ComputeScreenPos. Even if you don't use ComputeScreenPos and instead calculate the screen position like this,

    Code (CSharp):
    Shader "Unlit/UVTest"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                };

                struct v2f
                {
                    float4 vertex : SV_POSITION;
                };

                sampler2D _MainTex;
                float4 _MainTex_ST;

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // SV_POSITION arrives in pixel coordinates in the fragment
                    // stage; divide by the screen size to get 0..1 UVs.
                    float2 screenUV = (i.vertex.xy / _ScreenParams.xy);
                    #if UNITY_UV_STARTS_AT_TOP
                    screenUV.y *= -_ProjectionParams.x;
                    #endif
                    return tex2D(_MainTex, screenUV);
                }
                ENDCG
            }
        }
    }

    You will still notice an offset in your texture coordinates. However, if you move your quad close to the camera so that it fills the screen, the offset will seem to disappear. This is because the game object itself is offset per eye based on the view matrix, as mentioned earlier. You can see that the distance between the left edge of the eye texture and the left edge of the quad differs per eye.
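
    You can confirm that the per-eye view matrices differ from script. Below is a minimal diagnostic sketch (LogEyeMatrices is a hypothetical name; Camera.GetStereoViewMatrix is the actual Unity API):

    Code (CSharp):
    using UnityEngine;

    // Attach to the VR camera and watch the console: the translation
    // components of the two matrices differ horizontally by roughly
    // the interocular distance.
    public class LogEyeMatrices : MonoBehaviour
    {
        void Update()
        {
            var cam = GetComponent<Camera>();
            Matrix4x4 left = cam.GetStereoViewMatrix(Camera.StereoscopicEye.Left);
            Matrix4x4 right = cam.GetStereoViewMatrix(Camera.StereoscopicEye.Right);
            Debug.Log("Left eye view:\n" + left + "\nRight eye view:\n" + right);
        }
    }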

    [Screenshots: left-eye vs right-eye capture, showing the quad at a different distance from the left edge of the eye texture in each eye]
     


  3. crydrk

    Joined: Feb 10, 2012
    Posts: 74

    I'm having a similar issue, based on a tutorial I found for doing Portal-style effects using render textures. I have little shader knowledge, but it looks like the shader uses UnityObjectToClipPos just before ComputeScreenPos, similar to your examples.

    My question is: is there any way to make this work in multi-pass without duplicating my cameras and render textures? The OP mentions it used to work, but your answer sounds to me like it simply won't work this way.

    Code (CSharp):
    Shader "Unlit/ScreenCutoutShader"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "Queue" = "Transparent" "IgnoreProjector" = "True" "RenderType" = "Transparent" }
            Lighting Off
            Cull Back
            ZWrite On
            ZTest Less

            Fog { Mode Off }

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag

                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    //float2 uv : TEXCOORD0;
                    float4 vertex : SV_POSITION;
                    float4 screenPos : TEXCOORD1;
                };

                v2f vert (appdata v)
                {
                    v2f o;
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    o.screenPos = ComputeScreenPos(o.vertex);
                    return o;
                }

                sampler2D _MainTex;

                fixed4 frag (v2f i) : SV_Target
                {
                    // Perspective divide, then sample the render texture in screen space.
                    i.screenPos /= i.screenPos.w;
                    fixed4 col = tex2D(_MainTex, float2(i.screenPos.x, i.screenPos.y));
                    return col;
                }
                ENDCG
            }
        }

        Fallback Off
    }
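
    (A commonly used workaround for portal-style effects in stereo, sketched below, is to render the portal camera once per eye into two render textures and pick between them in the fragment shader. This does mean one texture per eye, which matches the explanation above that a single shared texture cannot line up for both eyes. _LeftEyeTex, _RightEyeTex, and _EyeIndex are hypothetical names; in multi-pass you would set _EyeIndex from script before each eye renders, since unity_StereoEyeIndex is only guaranteed to be set in single-pass modes.)

    Code (CSharp):
    sampler2D _LeftEyeTex;   // portal view rendered from the left-eye pose
    sampler2D _RightEyeTex;  // portal view rendered from the right-eye pose
    float _EyeIndex;         // 0 = left, 1 = right; set from script per eye

    fixed4 frag (v2f i) : SV_Target
    {
        float2 uv = i.screenPos.xy / i.screenPos.w;
        // Sample the texture that matches the eye currently being rendered.
        fixed4 left = tex2D(_LeftEyeTex, uv);
        fixed4 right = tex2D(_RightEyeTex, uv);
        return lerp(left, right, _EyeIndex);
    }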