what does the function ComputeScreenPos ( in unitycg.cginc) do ?

Discussion in 'Shaders' started by JohnSonLi, Jan 30, 2015.

  1. JohnSonLi

    JohnSonLi

    Joined:
    Apr 15, 2012
    Posts:
    549
    Code (csharp):
    #define V2F_SCREEN_TYPE float4
    inline float4 ComputeScreenPos (float4 pos) {
      float4 o = pos * 0.5f; // why multiply by .5f?
      #if defined(UNITY_HALF_TEXEL_OFFSET)
      o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w * _ScreenParams.zw;
      #else
      o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w;
      #endif

      #if defined(SHADER_API_FLASH)
      o.xy *= unity_NPOTScale.xy;
      #endif

      o.zw = pos.zw;
      return o;
    }
    This is from a sample shader in AngryBots: RealtimeReflectionInWaterFlow.shader.
     
  2. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    Given a position in projection/camera space (essentially o.pos in the vertex shader, I think), it returns the position of that point on the screen.

    With the bottom left being (0,0) and the top right being (1,1).
     
  3. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,140
    I was actually using ComputeScreenPos just today and in my case the result was 10 times as high. So bottom left being (0,0) and top right being (10,10). I'm pretty sure that's not the intended result, but that's what happened.

    So my code now actually looks something like this:
    Code (csharp):
    struct v2f {
       float4 pos_clip : SV_POSITION;
       float2 uv0 : TEXCOORD0;
    };

    v2f vert(appdata_base v) {
       v2f o;
       o.pos_clip = mul(UNITY_MATRIX_MVP, v.vertex);
       o.uv0 = ComputeScreenPos(o.pos_clip) / 10.0;
       return o;
    }

    float4 frag(v2f i) : COLOR {
       float4 input1 = tex2D(_MainTex, i.uv0);
    And that maps _MainTex to the screen perfectly.
     
  4. Glurth

    Glurth

    Joined:
    Dec 29, 2014
    Posts:
    95
    C:\Program Files\Unity\Editor\Data\CGIncludes

    That folder contains the unitycg.cginc file.

    You can open up the file in any text editor to see the actual code that gets compiled into your shader.
    For ComputeScreenPos I see:
    Code (CSharp):
    inline float4 ComputeScreenPos (float4 pos) {
        float4 o = pos * 0.5f;
        #if defined(UNITY_HALF_TEXEL_OFFSET)
        o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w * _ScreenParams.zw;
        #else
        o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w;
        #endif

        o.zw = pos.zw;
        return o;
    }
    By the way... note there is no division by w (to normalize the XYZ coordinates). I do this myself before passing in the parameter, e.g. pos /= pos.w; (Perhaps your w happens to be 10, jvo3dc?) Full disclosure: I'm not sure if this is correct! I found this post trying to confirm.
    Also note there is no use of _ScreenParams.x or .y, so it doesn't seem to be outputting pixel coordinates.
     
    Last edited: Aug 12, 2016
    AndreiMarian likes this.
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    4,363
    You should be dividing .xy / .w in the pixel shader for anything that's not perfectly parallel to the camera plane (pretty much anything that isn't an image effect). Doing the divide in the vertex shader will cause warping.
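    A quick way to see the warping is to emulate what the rasterizer does with vertex outputs. This plain Python sketch (hypothetical two-vertex edge; not Unity code) applies the perspective-correct interpolation the hardware uses for any TEXCOORD: dividing per fragment recovers the true screen-space position, while pre-dividing in the vertex shader does not:

```python
# The rasterizer interpolates every vertex output perspective-correctly:
# it lerps attr/w and 1/w linearly in screen space, then divides them.
def persp_lerp(a0, a1, w0, w1, t):
    num = (1 - t) * a0 / w0 + t * a1 / w1
    den = (1 - t) / w0 + t / w1
    return num / den

# Hypothetical edge receding from the camera. The screen-pos attribute is
# stored pre-divide (uv * w), as ComputeScreenPos returns it.
uv0, w0 = 0.0, 1.0   # near vertex: screen UV 0.0
uv1, w1 = 4.0, 4.0   # far vertex:  screen UV 1.0 (stored as 1.0 * w1)

t = 0.5  # halfway across the edge in screen space

# Correct: interpolate uv and w separately, divide per fragment.
uv = persp_lerp(uv0, uv1, w0, w1, t) / persp_lerp(w0, w1, w0, w1, t)
print(uv)         # 0.5 -- matches the true screen position at t = 0.5

# Wrong: divide in the vertex shader, then let the hardware interpolate.
uv_warped = persp_lerp(uv0 / w0, uv1 / w1, w0, w1, t)
print(uv_warped)  # 0.2 -- warped toward the near vertex
```

    The same mismatch grows with the depth difference across the polygon, which is why flat, camera-facing geometry (image effects) gets away with the vertex-shader divide.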
     
    Glurth likes this.
  6. Glurth

    Glurth

    Joined:
    Dec 29, 2014
    Posts:
    95
    Oh! so I should NOT divide .zw by .w? (I thought I was supposed to because it yields a w=1, and thus "normalized coordinates")
     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    4,363
    To be clear, the proper use is:
    Code (CSharp):
    // vertex shader
    o.pos = UnityObjectToClipPos(v.vertex.xyz);
    o.screenPos = ComputeScreenPos(o.pos); // using the UnityCG.cginc version unmodified

    // fragment shader
    float2 screenUV = i.screenPos.xy / i.screenPos.w;
    You can do the screenPos.xy / screenPos.w in the vertex shader in the case of image effects, or anything perfectly flat and not angled away from the camera at all. Keeping the divide in the fragment shader would likely solve @jvo3dc 's issue in a generic way.
     
    Glurth likes this.
  8. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,140
    Wow, I made a beginner mistake there. I probably figured it would do the w divide for me, but that is obviously not possible in the vertex shader. I don't think my w "happens" to be 10, I'm willing to bet on it ;-)

    It was probably the name ComputeScreenPos that misled me there. It would be friendlier to call it ComputeScreenPosVert and then add a ComputeScreenPosFragment that does the w divide.
     
  9. Glurth

    Glurth

    Joined:
    Dec 29, 2014
    Posts:
    95
    @jvo3dc agreed, a better name would certainly help. Specifying what the output ACTUALLY IS, in the documentation, would also be useful. Alas, no such luck.
     
  10. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    122
    ComputeScreenPos() will not divide the input's xy by w, because ComputeScreenPos() expects you to sample the texture in the fragment shader using tex2Dproj(float4).
    tex2Dproj() is similar to tex2D(); it just divides the input's xy by w in hardware before sampling, which is much faster than doing the division in user code in the fragment shader (result always correct but slow) or in the vertex shader (result will not be correct if the polygon is not facing the camera directly).

    ComputeScreenPos() just transforms the input from a clip-coordinate vertex position in [-w,w] into [0,w];
    then calling tex2Dproj() transforms [0,w] into [0,1], which is a valid texture sampling value.
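    The remap is easy to verify with plain arithmetic. A minimal Python sketch (a tuple stands in for float4; assumes _ProjectionParams.x == 1 and no half-texel offset):

```python
def compute_screen_pos(pos):
    # pos is a clip-space position (x, y, z, w) with x and y in [-w, w].
    # o = pos * 0.5, then o.xy += 0.5 * w  =>  remap [-w, w] to [0, w].
    x, y, z, w = pos
    return (0.5 * x + 0.5 * w, 0.5 * y + 0.5 * w, z, w)

clip = (-2.0, 2.0, 0.5, 2.0)   # x = -w (left edge), y = +w (top edge)
sx, sy, _, w = compute_screen_pos(clip)
print(sx, sy)          # 0.0 2.0 -- still in [0, w], not [0, 1]
print(sx / w, sy / w)  # 0.0 1.0 -- the divide (tex2Dproj, or .xy/.w) maps to [0, 1]
```
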

    -----------------------------
    @bgolus points out that tex2Dproj() will not help performance, as it is a wrapper in most cases. I do not have enough knowledge to tell whether that is right or wrong, so I will leave this note here.
    From my experience, HLSL compiled to GLSL for GLES2 is not a wrapper, while GLES3 is.
    The compiled GLSL code is in the replies below.
     
    Last edited: Aug 16, 2016
  11. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    122
    If anyone is confused by coordinates, here is a list showing what is inside a vertex position at the different stages: [attached image]
     
    Last edited: Aug 16, 2016
  12. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    122
    Code (CSharp):
    // example code of ComputeScreenPos()'s usage
    // remember to sample the texture using tex2Dproj(), not regular tex2D(), in the fragment shader

    // in vertex shader
    o.vertex = mul(UNITY_MATRIX_MVP, v.vertex); // o.vertex.xy is [-w,w]
    // or o.vertex = UnityObjectToClipPos(v.vertex.xyz); // which is the same

    o.uv = ComputeScreenPos(o.vertex); // o.uv.xy is [0,w]

    /////////////////////////////////////////////////////////////////
    // in fragment shader
    // tex2Dproj will remap from [0,w] to [0/w, w/w] = [0,1] before sampling
    // which, [0,1], is a valid uv value
    fixed4 col = tex2Dproj(_MainScreenRT, i.uv);
     
    Last edited: Aug 16, 2016
  13. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    122
    ComputeScreenPos() just remaps [-w,w] to [0,w]; it does not do any magic.
    Code (CSharp):
    // float4 pos, the input of ComputeScreenPos(), is [-w,w]
    // usually we input the result of the MVP transform directly, just like the reply above
    inline float4 ComputeScreenPos (float4 pos) {
        float4 o = pos * 0.5f; // now o.xy is [-0.5w, 0.5w], and o.w is half of pos.w too

        // UNITY_HALF_TEXEL_OFFSET is only for DirectX 9, which is quite old in 2016, but Unity
        // still supports it
        #if defined(UNITY_HALF_TEXEL_OFFSET)
        o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w * _ScreenParams.zw;
        #else

        o.xy = float2(o.x, o.y*_ProjectionParams.x) + o.w;
        // now the result o.xy is [-0.5w + 0.5w, 0.5w + 0.5w] = [0,w]
        // OpenGL & DirectX have different conventions for clip-space y (starts from top/starts from bottom)
        // o.y*_ProjectionParams.x makes it behave the same on different platforms
        // otherwise you will see the sampled texture flipped upside down
        #endif
        o.zw = pos.zw; // must keep the w, for tex2Dproj() to use
        return o;
    }
     
    Last edited: Aug 16, 2016
  14. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    4,363
    One minor nitpick.
    They're actually identical. The tex2Dproj function isn't implemented in hardware, at least not anymore if it ever was. DX11 doesn't even have an analog to tex2Dproj(), and the tex2Dproj and textureProj in DX9 / OpenGL are just wrappers for tex2D(_tex, uv.xy / uv.w).

    Unity even has code for converting tex2Dproj into tex2D calls directly for consoles.

    Here's the compiled DX11 pixel shader for using tex2D(_Tex, uv.xy / uv.w);
    0: div r0.xy, v0.xyxx, v0.wwww
    1: sample o0.xyzw, r0.xyxx, t0.xyzw, s0
    2: ret


    And here's the compiled DX11 pixel shader using tex2Dproj(_Tex, uv.xyzw);
    0: div r0.xy, v0.xyxx, v0.wwww
    1: sample o0.xyzw, r0.xyxx, t0.xyzw, s0
    2: ret


    That said it's not bad to use them as it reduces user error.
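    The wrapper relationship can be sketched in Python with a stand-in sampler (all names here are hypothetical, for illustration only):

```python
def tex2d(sampler, uv):
    # Stand-in for a hardware sample: nearest-texel lookup into a
    # tiny 2x2 "texture" stored as rows of values.
    x = min(int(uv[0] * 2), 1)
    y = min(int(uv[1] * 2), 1)
    return sampler[y][x]

def tex2dproj(sampler, uv4):
    # tex2Dproj is just tex2D with the w divide folded in.
    x, y, _, w = uv4
    return tex2d(sampler, (x / w, y / w))

texture = [[10, 20],
           [30, 40]]

# Same texel either way:
print(tex2d(texture, (0.25, 0.75)))              # 30
print(tex2dproj(texture, (0.5, 1.5, 0.0, 2.0)))  # 30
```
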
     
  15. colin299

    colin299

    Joined:
    Sep 2, 2013
    Posts:
    122
    You are right!
    My target platforms are OpenGL ES 2.0 & 3.0. I still use tex2Dproj() because I see it can help me avoid a dependent texture read on GLES2. I assume tex2Dproj() causes no harm on other platforms but benefits GLES2, which is why my reply above said tex2Dproj() is better. (I do not have any proof; please correct me if this is wrong.)

    Code (CSharp):
    // hlsl compiled to glsl for gles 2.0
    // (not sure whether it is a wrapper, but this function uses the texcoord directly,
    // so it should not trigger any dependent texture read, which is slow)
    tmpvar_1 = texture2DProj (_MainScreenRT, xlv_TEXCOORD0);

    // hlsl compiled to glsl for gles 3.0 (already acts as a wrapper)
    t0.xy = vs_TEXCOORD0.xy / vs_TEXCOORD0.ww;
    t10_0 = texture(_MainScreenRT, t0.xy);
     
    Last edited: Aug 16, 2016
    bgolus likes this.
  16. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,140
    I didn't expect it to do much magic, but considering it's called ComputeScreenPos I expected something in the 0 to 1 range. I know how it works, so if I would have paid some more attention I could have known it would be in the 0 to w range. Still, call it ComputeProjectiveUV() then.

    I moved to doing the perspective divide myself and using tex2D years ago. But that is for desktop development, where I assume SM 3.0 support. For mobile SM 2.0 use, it can't hurt to use tex2Dproj to potentially prevent a dependent texture read. I think it was indeed implemented in hardware for desktop at one time too, so it doesn't come as a surprise that it is implemented in hardware on mobile now.
     
    colin299 likes this.
  17. bugsbun

    bugsbun

    Joined:
    Jun 26, 2017
    Posts:
    24
    Now, in the newest version of Unity, are the window coordinates normalized? Or am I referring to them the wrong way to get the depth of a pixel at pixel position (550,550):

    Code (CSharp):
    // ...inside fragment shader
    float d1;  // depth
    float3 n1; // normal
    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, float2(550.0/1920.0, 550.0/1080.0)), d1, n1);
     
  18. JayMounes

    JayMounes

    Joined:
    Oct 17, 2016
    Posts:
    144
    I didn't read more so it's been said probably, but you'll need to convert the vertex position from local space to world space first. I think.

    Edit: I was wrong. But I was less wrong than I used to be, so I almost knew something.

    Toot toot
     
    TorbenDK likes this.