How to use screenUV surface shader in single-pass VR mode

Discussion in 'Shaders' started by wdw8903, Aug 9, 2016.

  1. wdw8903

    wdw8903

    Joined:
    Apr 2, 2015
    Posts:
    42
    Hello, I want to use screenUV in my surface shader in Single Pass Stereo Rendering mode, but the texture doesn't appear at the same position in both eyes.

    For example, how to modify the example surface shader?
    Thanks.

    Shader "Example/ScreenPos" {
        Properties {
            _MainTex ("Texture", 2D) = "white" {}
            _Detail ("Detail", 2D) = "gray" {}
        }
        SubShader {
            Tags { "RenderType" = "Opaque" }
            CGPROGRAM
            #pragma surface surf Lambert
            struct Input {
                float2 uv_MainTex;
                float4 screenPos;
            };
            sampler2D _MainTex;
            sampler2D _Detail;
            void surf (Input IN, inout SurfaceOutput o) {
                o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
                float2 screenUV = IN.screenPos.xy / IN.screenPos.w;
                screenUV *= float2(8,6);
                o.Albedo *= tex2D (_Detail, screenUV).rgb * 2;
            }
            ENDCG
        }
        Fallback "Diffuse"
    }
     
  2. silentslack

    silentslack

    Joined:
    Apr 5, 2013
    Posts:
    268
    Hi, I have the same issue. Did you solve this!? Thanks!
     
  3. z_space

    z_space

    Joined:
    Jul 17, 2016
    Posts:
    10
    Did you ever solve this?
     
  4. Tudor

    Tudor

    Joined:
    Sep 27, 2012
    Posts:
    114
    Did anyone ever solve this?
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    7,677
  6. Tudor

    Tudor

    Joined:
    Sep 27, 2012
    Posts:
    114
    Yes, thank you.
    It would be nice if the docs touched on what happens in the background (e.g. why you need to write x, and why you don't need to write y because it's already handled in z way). I have a regular (non-surface) shader, I followed the docs, and it's all badly projected. Now I have to do a whole lot of guesswork about what might or might not be happening.

    BTW the docs say you should use the UnityWorldToClipPos method. Why WorldToClipPos? Does that imply UnityObjectToClipPos doesn't do some VR calculations?

    [EDIT]
    Based on this I guess it works fine:

    Code (CSharp):
    // Transforms position from object to homogeneous clip space
    inline float4 UnityObjectToClipPos( in float3 pos )
    {
    #if defined(UNITY_SINGLE_PASS_STEREO) || defined(UNITY_USE_CONCATENATED_MATRICES)
        // More efficient than computing M*VP matrix product
        return mul(UNITY_MATRIX_VP, mul(unity_ObjectToWorld, float4(pos, 1.0)));
    #else
        return mul(UNITY_MATRIX_MVP, float4(pos, 1.0));
    #endif
    }
    Well, better keep digging through the shader source for VR related stuff I guess.
     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    7,677
    Btw, the answer from that page that's relevant to this thread is you want to use this:

    float2 screenUV = UnityStereoTransformScreenSpaceTex(IN.screenPos.xy / IN.screenPos.w);

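    Applied to the surface shader from the first post, the change would look something like this (an untested sketch; only the surf function changes, the rest of the shader stays as-is):

    Code (CSharp):
    void surf (Input IN, inout SurfaceOutput o) {
        o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
        // Remap the raw screen UV through the per-eye scale/offset so
        // both eyes sample the detail texture at matching positions
        float2 screenUV = UnityStereoTransformScreenSpaceTex(IN.screenPos.xy / IN.screenPos.w);
        screenUV *= float2(8,6);
        o.Albedo *= tex2D (_Detail, screenUV).rgb * 2;
    }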
    However, this won't necessarily work exactly like you want it to. In fact it will feel really weird: the texture won't line up in both eyes, because most VR headsets use a slightly skewed projection matrix. That means looking at the center of the "screenUV" requires you to go a bit wall-eyed.

    Really if you want to do "screen space UVs" for VR you want to use view space xy position, or otherwise project onto some kind of virtual infinite plane or sphere around the user's head.
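    As a rough sketch of the view-space approach (the _Tiling property and viewPos interpolator names here are made up, not from the docs):

    Code (CSharp):
    // Vertex shader: pass view-space position instead of screen position.
    // viewPos would be a float3 added to the v2f struct.
    o.viewPos = UnityObjectToViewPos(v.vertex);

    // Fragment shader: project onto a virtual plane in front of the head.
    // View space looks down -Z in Unity, hence the negated divide.
    float2 screenUV = i.viewPos.xy / -i.viewPos.z;
    screenUV = screenUV * _Tiling + 0.5;
    col = tex2D(_Detail, screenUV);

    Because this never touches the projection matrix, the result is identical for both eyes (apart from parallax), so nothing forces your eyes to diverge.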
     
  8. Tudor

    Tudor

    Joined:
    Sep 27, 2012
    Posts:
    114
    Now that was very helpful, thanks! Otherwise I would have thought I did something wrong.

    So on top of that I managed to fix this walleyed problem by doing the following:

    Code (CSharp):
    //in vertex program:
    //o.screenPos = ComputeScreenPos(o.pos);
    //then in fragment:

    float2 screenUV = i.screenPos.xy / i.screenPos.w;
    #ifdef UNITY_SINGLE_PASS_STEREO
    float4 scaleOffset = unity_StereoScaleOffset[unity_StereoEyeIndex];
    screenUV = (screenUV - scaleOffset.zw) / scaleOffset.xy;
    #endif

    if (unity_StereoEyeIndex == 0) // 0 means left eye
        col = tex2D(_LeftTex, screenUV * _LeftTex_ST.xy + _LeftTex_ST.zw);
    else
        col = tex2D(_RightTex, screenUV * _RightTex_ST.xy + _RightTex_ST.zw);
    Now everything is projected correctly at any distance. The game object is just a quad placed anywhere in the world.
     
  9. plmx

    plmx

    Joined:
    Sep 10, 2015
    Posts:
    249
    @Tudor: I am facing a similar issue, but seem to fail in adapting your code to my shader. Could you elaborate on where the left and right texture you are using are coming from and/or post the entire shader code for reference? Thanks!
     
  10. Tudor

    Tudor

    Joined:
    Sep 27, 2012
    Posts:
    114
    Sorry I wasn't around for a while. I don't remember what weirdness I was trying to do with a _LeftTex eye texture and a _RightTex eye texture. In the end I had a single texture, a render texture from a VR camera, with the standard layout: one half was the left eye and the other half the right.

    The code I posted is in the fragment program of a shader. The examples are in
    https://docs.unity3d.com/Manual/SinglePassStereoRendering.html

    So it's something like this:

    Code (CSharp):
    v2f vertex(input i)
    {
        //...
        o.screenPos = ComputeScreenPos(o.pos);
        //...
    }

    output fragment(v2f i)
    {
        float2 screenUV = i.screenPos.xy / i.screenPos.w;
    #ifdef UNITY_SINGLE_PASS_STEREO
        float4 scaleOffset = unity_StereoScaleOffset[unity_StereoEyeIndex];
        screenUV = (screenUV - scaleOffset.zw) / scaleOffset.xy;
    #endif
        if (unity_StereoEyeIndex == 0) // 0 means left eye
            col = tex2D(_RenderTex, screenUV * _LeftSide_ST_Offset.xy + _LeftSide_ST_Offset.zw);
        else
            col = tex2D(_RenderTex, screenUV * _RightSide_ST_Offset.xy + _RightSide_ST_Offset.zw);
        //...
    }

    I think you will need two float4s, `_LeftSide_ST_Offset` and `_RightSide_ST_Offset`, because different VR headsets use different aspect ratios for the VR camera's render texture that you feed into this shader. Some give you a texture that's squashed together, or split top/bottom instead of left/right. It's usually easy to figure out: the values are things like 0.25, 0.33 or 0.5, and you tweak them until it looks correct.
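    As a sketch of how those might be declared for a side-by-side render texture (the default values here are guesses, not measured for any particular headset):

    Code (CSharp):
    Properties {
        _RenderTex ("VR Render Texture", 2D) = "white" {}
        // xy = scale, zw = offset into the render texture.
        // Left eye samples the left half, right eye the right half.
        _LeftSide_ST_Offset ("Left ST Offset", Vector) = (0.5, 1, 0, 0)
        _RightSide_ST_Offset ("Right ST Offset", Vector) = (0.5, 1, 0.5, 0)
    }

    For a top/bottom layout you would scale y by 0.5 instead and offset the second eye in zw accordingly.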