
How to get position of current pixel in screen space in fragment shader function?

Discussion in 'Shaders' started by fancybit, Dec 29, 2013.

  1. fancybit

    fancybit

    Joined:
    Sep 6, 2012
    Posts:
    3
    I want to blend sprites using the formula: srcAlpha*srcColor + dstAlpha*dstColor.
    But there isn't any blend mode that fits it, so I want to use GrabPass in ShaderLab to get a texture of the background, then use the formula to blend them. The problem is that the grabbed texture is a snapshot of the full screen, so I must calculate the UV, but I don't know how to get the correct pixel position in screen space.
    Here's my shader code:
    Code (csharp):
    1. Shader "UnityMugen/SpriteShader" {
    2.     Properties {
    3.         _Color ("Main Color", Color) = (1,1,1,1)
    4.         _MainTex ("Texture", 2D) = "white" {}
    5.         _SrcAlpha ("Source Alpha", Float) = 0.0
    6.         _DstAlpha ("Destionation Alpha", Float) = 1.0
    7.         _ScreenWidth ("Screen Width", Float) = 0
    8.         _ScreenHeight ("Screen Height",Float) = 0  
    9.     }
    10.     SubShader {
    11.         Tags {"Queue" = "Transparent" "IgnoreProjector" = "True"}
    12.         LOD 200
    13.         Lighting Off
    14.         Cull Back
    15.         GrabPass {"_BGTex" }
    16.         Pass{
    17.             Blend SrcAlpha OneMinusSrcAlpha
    18.             CGPROGRAM
    19. // Upgrade NOTE: excluded shader from DX11 and Xbox360; has structs without semantics (struct v2f members pos2)
    20. #pragma exclude_renderers d3d11 xbox360
    21.             #pragma vertex vert
    22.             #pragma fragment frag
    23.             #include "UnityCG.cginc"
    24.            
    25.             float _ScreenWidth;
    26.             float _ScreenHeight;
    27.             float _SrcAlpha;
    28.             float _DstAlpha;
    29.             float4 _Color;
    30.             sampler2D _BGTex;
    31.             sampler2D _MainTex;
    32.            
    33.             struct v2f{
    34.                 float4 pos:SV_POSITION;
    35.                 float2 uv:TEXCOORD0;
    36.                 float2 pos2;
    37.             };
    38.            
    39.             float4 _MainTex_ST;
    40.            
    41.             v2f vert(appdata_base v)
    42.             {
    43.                 v2f o;
    44.                 o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    45.                 o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
    46.                 o.pos2 = o.pos.xy;
    47.                 return o;
    48.             }
    49.            
    50.             half4 frag (v2f i) : COLOR
    51.             {
    52.                 half4 texcol = tex2D(_MainTex,i.uv);
    53.                 half u = (i.uv.x+i.pos2.x)/_ScreenWidth;
    54.                 half v = (i.uv.y+i.pos2.y)/_ScreenHeight;
    55.                 half4 bgcol = tex2D(_BGTex,half2(u,v));
    56.                 return (_SrcAlpha*bgcol + _DstAlpha*texcol)*_Color;
    57.             }
    58.             ENDCG
    59.         }
    60.     }
    61.     FallBack "Diffuse"
    62. }
     
    SodaSupermanNo1 likes this.
  2. shaderbytes

    shaderbytes

    Joined:
    Nov 11, 2010
    Posts:
    900
    You use "ComputeScreenPos" eg..

    Code (csharp):
    v2f vert (appdata_t v)
    {
        v2f o;
        o.vertex = mul (UNITY_MATRIX_MVP, v.vertex);
        o.screenPos = ComputeScreenPos(o.vertex);
        return o;
    }
     
    AAAAAAAAAE likes this.
  3. fancybit

    fancybit

    Joined:
    Sep 6, 2012
    Posts:
    3
    Thanks a lot! And...
    Code (csharp):
    struct v2f {
        float4 pos : SV_POSITION;
        float2 uv : TEXCOORD0;
        float2 screenPos : ???; // <- what should I write here?
    };
    Is there any good book for ShaderLab or Cg / GLSL?
     
  4. shaderbytes

    shaderbytes

    Joined:
    Nov 11, 2010
    Posts:
    900
    I used :

    Code (csharp):
    float4 screenPos : TEXCOORD1;
    Then in your frag program:

    Code (csharp):
    half4 frag (v2f i) : COLOR
    {
        // i.screenPos.x and i.screenPos.y hold the interpolated screen position
    }
    I'm still learning myself and haven't found a definitive good source for everything shader related; rather, I piece together my understanding through looking at code examples, the Unity ShaderLab docs, and then Google.

    Here is a great read though:

    http://en.wikibooks.org/wiki/GLSL_Programming/Unity
     
  5. FuzzyQuills

    FuzzyQuills

    Joined:
    Jun 8, 2013
    Posts:
    2,871
    This might be old, but... this is to help those after similar answers:

    For declaring screenPos in a struct, simply put this where you declare it:
    Code (csharp):
    float2 screenPos : TEXCOORD2;
    Hope it helps. You may also have to swap TEXCOORD2 for a different texcoord index. In most cases, this is all you need.
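    Putting these pieces together, here is a minimal, untested sketch of the whole thing (it assumes the _BGTex GrabPass and the property/uniform declarations from the first post, and follows the ComputeScreenPos approach suggested above):
    Code (csharp):
    struct v2f {
        float4 pos : SV_POSITION;
        float2 uv : TEXCOORD0;
        float4 screenPos : TEXCOORD1; // keep all four components for the divide by w
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
        o.screenPos = ComputeScreenPos(o.pos); // screen position from the clip-space position
        return o;
    }

    half4 frag (v2f i) : COLOR
    {
        float2 screenUV = i.screenPos.xy / i.screenPos.w; // perspective divide -> 0..1 UVs
        half4 bgcol = tex2D(_BGTex, screenUV);            // grabbed background
        half4 texcol = tex2D(_MainTex, i.uv);
        return (_SrcAlpha * bgcol + _DstAlpha * texcol) * _Color;
    }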
     
    JoeStrout likes this.
  6. blueknee

    blueknee

    Joined:
    Apr 5, 2014
    Posts:
    8
    For anyone still troubled by this problem...
    This code worked for me.
    I used ComputeScreenPos() and _ScreenParams.xy.
    Code (csharp):
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata_t
            {
                float4 vertex : POSITION;
                float4 color : COLOR;
                float2 texcoord : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : POSITION;
                float4 color : COLOR;
                float2 texcoord : TEXCOORD0;
                float4 screenpos : TEXCOORD1;
            };

            sampler2D _MainTex;

            // vertex shader
            v2f vert(appdata_t IN)
            {
                v2f OUT;
                OUT.vertex = mul(UNITY_MATRIX_MVP, IN.vertex);
                OUT.texcoord = IN.texcoord;
                OUT.color = IN.color;
                OUT.screenpos = ComputeScreenPos(OUT.vertex);

                return OUT;
            }

            // fragment shader
            fixed4 frag(v2f IN) : COLOR
            {
                half4 tex = tex2D(_MainTex, IN.texcoord) * IN.color;
                float2 worldpos = IN.screenpos.xy * _ScreenParams.xy;
                // here I got worldpos.xy = (320, 568) at the very center pixel of the screen, where my resolution is (640, 1136)
                return tex;
            }
            ENDCG
        }
    }
     
    SodaSupermanNo1 likes this.
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    howong likes this.
  8. alphaxiao

    alphaxiao

    Joined:
    Aug 4, 2017
    Posts:
    8
    Hi, bgolus. See you again~

    I tried VPOS and ComputeScreenPos + _ScreenParams.xy. But the second way seems to produce a precision problem?

    The result looks like this: upload_2018-5-19_21-2-31.png

    The code is like this:

    Code (CSharp):
    vert() {
        i.pos = UnityObjectToClipPos(v.position.xyz);

        i.screenPos = ComputeScreenPos(i.pos);
        return i;
    }

    frag (InterpolatorsVertex i) : SV_TARGET {
        float4 screenPos = i.screenPos;
        screenPos.xy *= _ScreenParams.xy;
        screenPos.xy = floor(screenPos.xy * 0.1) * 0.5;
        float checker = -frac(screenPos.r + screenPos.g);

        clip(checker);
        return float4(1, 1, 1, 1);
    }
    But when I am using VPOS, the result is fine. So, asking for your help once again, am I missing something?
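    For reference, the VPOS variant that gives the correct result looks roughly like this (a sketch; UNITY_VPOS_TYPE comes from HLSLSupport.cginc and VPOS needs #pragma target 3.0):
    Code (CSharp):
    // VPOS hands you window-space pixel coordinates directly, no divide by w needed
    fixed4 frag (UNITY_VPOS_TYPE vpos : VPOS) : SV_Target
    {
        float2 cell = floor(vpos.xy * 0.1) * 0.5; // 10x10 pixel cells
        float checker = -frac(cell.x + cell.y);   // 0 or -0.5
        clip(checker);                            // discard alternating cells
        return float4(1, 1, 1, 1);
    }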
     


  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Try this:

    float4 screenPos = i.screenPos;
    screenPos.xy = floor(screenPos.xy * _ScreenParams.xy);
    screenPos.xy = floor(screenPos.xy * 0.1) * 0.5;
     
  10. alphaxiao

    alphaxiao

    Joined:
    Aug 4, 2017
    Posts:
    8
    It still looks like a precision issue.

    The code:
    upload_2018-5-20_22-40-55.png

    The screenPos we compute ourselves is in normalized device coordinates, and the one we get from VPOS is in window space, am I right about that?

    I found a very interesting point. The grid size produced by VPOS seems independent of camera distance, as in the result below. No matter how far or close the camera is to the plane, the grid size remains the same.
    upload_2018-5-20_22-46-16.png

    But it is different with the screenPos computed by hand. The camera distance influences the grid size: the closer the camera is, the larger (more precise) the grid is. Besides, the camera preview in the scene view is different from the camera view in play mode (game view).

    Quite strange....
     


    Seyed_Morteza_Kamaly likes this.
  11. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    I forgot the perspective divide:

    float4 screenPos = i.screenPos / i.screenPos.w;
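    Folding that into the checker from the earlier post, the corrected fragment shader would look roughly like this (a sketch, reusing the InterpolatorsVertex struct from above):
    Code (CSharp):
    fixed4 frag (InterpolatorsVertex i) : SV_TARGET
    {
        float4 screenPos = i.screenPos / i.screenPos.w;   // perspective divide first
        screenPos.xy *= _ScreenParams.xy;                 // 0..1 UVs -> pixel coordinates
        screenPos.xy = floor(screenPos.xy * 0.1) * 0.5;   // 10x10 pixel cells
        float checker = -frac(screenPos.x + screenPos.y); // 0 or -0.5
        clip(checker);                                    // discard alternating cells
        return float4(1, 1, 1, 1);
    }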
     
  12. callebo_FK

    callebo_FK

    Joined:
    Nov 23, 2017
    Posts:
    30
    Does anyone know how to make the screen position "local"? Not sure how to explain it, but imagine a texture being projected onto an object in screen space, while also being offset along the object instead of just being one-to-one with the screen.

    I am making smoke with a particle system that uses an alpha texture in screen space. The reason for this is that I have more control over the shape of the smoke in its entirety instead of having many individual billboards sum up as smoke. But when I move the camera, the smoke flickers because the object moves around on the screen while the alpha texture does not...

    I want the alpha to move with the rest of the particle system. I have tried adding an offset with the camera's y position, which didn't quite work, and many other different things, but ultimately I don't know the "right way", if there even is one?

    Here are some of the interesting bits in the shader that I'm using:
    Code (CSharp):
    float camDistance = distance(i.worldPos, _WorldSpaceCameraPos); // Distance to object from camera
    float2 worldPos = (i.screenPos.xy / i.screenPos.w) * _ScreenParams.xy // Screen position based on screen size from camera.
        * camDistance + (float2((-0.5) * _ScreenParams.x, (-0.5) * _ScreenParams.y) * camDistance); // Offset so UV scales from center and not any corner.
    fixed cA = tex2D(_MainTex, float2(i.uv.x, worldPos.y * _Scale)).r * i.color.a; // Alpha
    _Scale is 0.001 in the gif

    EDIT: I seem to have kind of fixed it! Not sure how, though.

    I realized there needs to be an offset to the texture's worldPos.y, so I added an offset based on worldPos.y, basically... Tried some multiplication values and now it works to about 90%, but the 10% margin that's off isn't really noticeable. Here is what's new:
    Code (CSharp):
    float camDistance = distance(i.worldPos, _WorldSpaceCameraPos); // Distance to object from camera
    float2 worldPos = (i.screenPos.xy / i.screenPos.w) * _ScreenParams.xy // Screen position based on screen size from camera.
        * camDistance + (float2((-0.5) * _ScreenParams.x, (-0.5) * _ScreenParams.y) * camDistance); // Offset so UV scales from center and not any corner.
    float YAdjust = _ScaleY * i.worldPos.y; // Offset for UVs Y.
    float2 cAUV = float2(i.uv.x, ((worldPos.y * _Scale - uvSpeed) + YAdjust) / _ScaleYCounter); // UV for alpha

    fixed cA = tex2D(_MainTex, cAUV).r * i.color.a; // Alpha
    _ScaleY is 3 and _ScaleYCounter is 6

    Here's a GIF of how it looks. Also good for people who wonder what I meant in my original post :D
     
    Last edited: Sep 28, 2018
  13. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
  14. callebo_FK

    callebo_FK

    Joined:
    Nov 23, 2017
    Posts:
    30
    That seems to work fine, except when rotating around it the UVs move around like crazy. Also, it doesn't adjust to screen size (at least for me).

    I'm still working on this. I thought I had it right before, but the UVs scaled with the object's X axis, which caused stretching. This was because I was using the texture coordinates for the X. Now, after some modifications, I can have a persistent UV size, but the X-axis UV is "stuck" to the screen (like my original problem).
     
  15. FuzzyQuills

    FuzzyQuills

    Joined:
    Jun 8, 2013
    Posts:
    2,871
    That black particle effect looks amazing, nice job. Reminds me of Reaper's wraith form effects from Overwatch.
     
  16. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Ultimately you're trying to construct a camera facing coordinate system. You could use similar math to what you would use to do a local axis aligned billboard shader to create UVs that are matched to the pivot, distance, and orientation of an object, but by adding rotation you're going to potentially have problems at the poles.
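    A rough sketch of that idea in the vertex shader (the names _UVScale and localScreenUV are made up for illustration; UnityObjectToViewPos is from UnityCG.cginc): build the UVs from the view-space offset between the vertex and the object's pivot, so the texture follows the object instead of the screen.
    Code (CSharp):
    // vertex-shader snippet: UVs in the camera-facing plane, anchored to the object's pivot
    float3 pivotVS = UnityObjectToViewPos(float3(0, 0, 0)); // object pivot in view space
    float3 vertVS  = UnityObjectToViewPos(v.vertex.xyz);    // this vertex in view space
    o.localScreenUV = (vertVS.xy - pivotVS.xy) * _UVScale + 0.5; // offset in the camera plane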
     
  17. callebo_FK

    callebo_FK

    Joined:
    Nov 23, 2017
    Posts:
    30
    This is the UV function I ended up with. I used the texture coordinate for the UV's X after all. If I don't scale the particles during their lifetime, it won't stretch, which is fine. This also makes it look fine when rotating around the particles.

    Code (CSharp):
    float2 SetScreenUV(float speed, float4 worldPos, float4 screenPos, float2 uv)
    {
        float uvSpeed = _Time.y * speed; // Scroll UV up for animation.

        float YAdjust = worldPos.y * _Scale; // Offset for UVs Y.

        float4 SSobjectPositionFull = UnityObjectToClipPos(float4(0, 0, 0, 1)); // Object's position
        float SSobjectPosition = SSobjectPositionFull.x / (2 * SSobjectPositionFull.w); // Object's position with cam distance.
        SSobjectPosition *= _ScaleX; // Set _ScaleX manually to match with generated Y.

        float2 screenP = (screenPos.xy / screenPos.w) * _ScreenParams.xy; // Screen position.

        float2 cAUV = float2(uv.x * _ScaleX, screenP.y * _ScaleY + YAdjust - uvSpeed) * _Scale; // UV for alpha
        return cAUV;
    }
     
  18. wechat_os_Qy0yAa6GUpYhzZxsCg7GUwSXE

    wechat_os_Qy0yAa6GUpYhzZxsCg7GUwSXE

    Joined:
    Sep 19, 2018
    Posts:
    1
    Thanks a lot for your help, guys. I still have a question: why not use the "SV_POSITION" variable and _ScreenParams to get the actual screen position? Can we use "SV_POSITION" in the fragment shader?
     
  19. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Yes, you can use it in the fragment shader. And for getting the real render target pixel position, there's not really a good reason not to use it. SV_POSITION in the fragment shader is transformed into the pixel position, and dividing it by _ScreenParams.xy would get you a normalized UV similar to what you'd get with the option above.
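    In code, that looks roughly like this (a sketch):

    fixed4 frag (float4 pixelPos : SV_POSITION) : SV_Target
    {
        // pixelPos.xy is already the render target pixel position
        float2 screenUV = pixelPos.xy / _ScreenParams.xy; // normalize to 0..1
        return fixed4(screenUV, 0, 1); // visualize the screen UV
    }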

    However, that's not all ComputeScreenPos() does. It also flips the UV upside down in the cases where that's necessary ... most of the time. Unity does a lot of weird things to make all APIs act like OpenGL, which sometimes requires flipping the UV upside down. It also handles some weird cases with VR render texture sampling. The flipping can be replicated in the fragment shader under the same conditions, as can the VR case, though the latter requires passing some additional information from the vertex shader to the fragment shader that isn't always done.
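    Roughly, the flip it handles can be replicated like this (a sketch; _ProjectionParams.x is -1 when rendering with a flipped projection):

    #if UNITY_UV_STARTS_AT_TOP
        if (_ProjectionParams.x < 0)       // rendering into a flipped render target
            screenUV.y = 1.0 - screenUV.y;
    #endif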

    Lastly, on some older mobile / console GPUs there's also a decent performance benefit to using UVs generated in the vertex shader instead of in the fragment shader, including projective UVs like screenPos. Those make up a relatively small number of the GPUs still supported by Unity, and the shaders Unity generates may not even make full use of those benefits anymore. But it's a practice so ingrained into Unity's shaders that I don't think most people question it anymore.