
Get Vector from World to Camera

Discussion in 'Shaders' started by HWDKoblenz, Jul 2, 2018.

  1. HWDKoblenz

    HWDKoblenz

    Joined:
    Apr 3, 2017
    Posts:
    19
    Hello everyone,

    I have a fairly basic question. I need a 3D direction as a 2D vector in camera space, so that I can compute the angle between this 2D vector and Vector2(1,0).

    The green and red points are given as Vector3s in world space. Both vectors are passed to my vertex program: the red one as the vertex position, and the green one stored inside the color channel (I use COLOR just to carry the data).



    If I change the camera position, the 2D vector should change as well...



    I tried many combinations along these lines:

    PseudoCode:

    Code (CSharp):

    // vertex data
    float4 red   : POSITION;
    float4 green : COLOR;

    // vertex program
    r    = UnityObjectToClipPos(red);
    r_sc = ComputeScreenPos(r);

    g    = UnityObjectToClipPos(green);
    g_sc = ComputeScreenPos(g);

    blue = g_sc - r_sc;
    But this doesn't work.

    So, does anyone know how to get the desired result?
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    It sounds like you're trying to calculate the mesh's tangents. Also, I'm guessing you don't actually want this in screen space if you're looking to get an angle, as the vector will be scaled by the aspect ratio, which will result in angles that don't match your expectations.

    Can you say what your end goal is rather than this one part?


    However, to do what you want with the code you have, you should just need to do this:
    g_sc.xy = g_sc.xy / g_sc.w;
    r_sc.xy = r_sc.xy / r_sc.w;
    blue = normalize(g_sc.xy - r_sc.xy);



    However, this might also get you the same thing:

    blue = normalize(mul((float2x2)UNITY_MATRIX_P, mul((float3x3)UNITY_MATRIX_V, mul((float3x3)unity_ObjectToWorld, v.tangent.xyz)).xy));
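    Putting the first snippet together with the aspect-ratio caveat, a full vertex-side sketch might look like this (the names `dir` and `angle`, and the use of `_ScreenParams` to undo the aspect scaling, are my additions, not part of the thread's code):

```hlsl
float4 r = UnityObjectToClipPos(red);
float4 g = UnityObjectToClipPos(green);

// Perspective divide into a 0..1 screen UV range.
// ComputeScreenPos() leaves w unchanged, so dividing by r.w / g.w works.
float2 r_sc = ComputeScreenPos(r).xy / r.w;
float2 g_sc = ComputeScreenPos(g).xy / g.w;

// Scale by the resolution so x and y are in the same (pixel) units;
// otherwise the aspect ratio skews the resulting angle.
float2 dir = normalize((g_sc - r_sc) * _ScreenParams.xy);

// Angle against the (1,0) axis. atan2 keeps the sign of dir.y;
// acos(dot(dir, float2(1,0))) would lose it.
float angle = atan2(dir.y, dir.x);
```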
     
  3. HWDKoblenz

    HWDKoblenz

    Joined:
    Apr 3, 2017
    Posts:
    19
    Hi,

    Well, okay, this might not be the best example, but it is the "easiest" case for my studies. What you can see here are curvature vectors (strongest direction). Because it is a flat plane, the curvature has the same direction at every point.

    Here is a more complex example:


    So my final goal is to rotate my hatching textures along these curvature vectors.
    (I know you could also unwrap this object correctly and then use your textures, but that is not the goal of my studies.)
    That's why my hatching textures are all in screen space. So I have to rotate them in screen space. That means I need an angle "more or less" from a screen-space calculation.

    So my idea was: I know the start and end point of every curvature vector (the start point is the local vertex position and the end is the local vertex position plus the normalized curvature vector) -> calculate the 2D position on my screen -> dot product against the (1,0) axis, and the acos should give me the desired angle. But I was not able to program this. A fellow student was able to do it in just a few minutes with his little OpenGL framework. This is quite frustrating :-D.

    I will give your code a try; maybe it works.

    But there are also some things I really, really don't understand. Or rather: what is Unity doing here?

    If I use this code (pseudo) inside the deferred shader attached to my object (not the light shader):


    vertexdata vert(float4 vertex : POSITION, ...) {
        vs.vertex_sv = UnityObjectToClipPos(vertex); // This should give me a 2D position in the range 0 to 1, right???
    }

    fragmentdata frag(vertexdata vs) {
        float4 color = float4(vs.vertex_sv.xy, 0, 1);
        return color;
    }


    In my opinion this should give a result that looks like the left image, and not what it actually does (right):



    Well, but where did I get the left picture from? That is the next Unity mystery :-D. I get this result inside the deferred light shader if I just pass the incoming vertex through my vertex shader, WITHOUT doing any calculations. So am I right that the "vertex" that comes into a deferred light shader isn't in local coordinates anymore and is already between 0..1? Where does the magic happen? It's really, really confusing at the moment. And it makes it hard to decide where to do my desired calculation, because the attached deferred shader and the second deferred light shader seem to have different requirements and inputs.
     
    Last edited: Jul 3, 2018
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Nope. That gives you a 4D(ish) position with a range of -w to +w in the vertex shader, and 0,0 to the screen resolution in the fragment shader.



    The vertex shader outputs a position in homogeneous clip space (the purple box) with the semantic SV_POSITION. The value the fragment shader gets from SV_POSITION is produced after the GPU has done the perspective divide (dividing the clip space xyz by w, giving the normalized device coordinate, shown in the pink box) and the transformation into viewport / screen pixel space ((NDC xy * 0.5 + 0.5) * screen resolution, shown in the yellow box).
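    That pipeline can be written out as a sketch (this is what the hardware does with SV_POSITION between the two stages, not code you add yourself; `clip`, `ndc`, and `screen` are illustrative names):

```hlsl
// Vertex shader output: homogeneous clip space, each axis in -w..+w.
float4 clip = UnityObjectToClipPos(vertex);

// GPU perspective divide: normalized device coordinates, -1..+1.
float3 ndc = clip.xyz / clip.w;

// GPU viewport transform: 0,0 .. screen resolution, in pixels.
float2 screen = (ndc.xy * 0.5 + 0.5) * _ScreenParams.xy;
```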

    Values passed via a different semantic, like TEXCOORD#, are not transformed this way; they are simply interpolated and passed to the fragment shader. You need to do the perspective divide manually in this case, or whenever you need the position in the vertex shader. Read up on perspective-correct interpolation if you're more curious about this.

    The ComputeScreenPos() function takes the -w to w range and transforms it to a 0 to w range so that once you do the perspective divide the resulting value is a 0 to 1 range. That's why my snippet of code above divides the returned values from that function by w.
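    As a sketch, the relationship between ComputeScreenPos() and the manual divide looks roughly like this (platform-specific y-flips are ignored; `clipPos` and `uv` are illustrative names):

```hlsl
// ComputeScreenPos() remaps clip space from -w..+w to 0..w, roughly:
//   o.xy = clipPos.xy * 0.5 + clipPos.w * 0.5;  o.zw = clipPos.zw;
float4 screenPos = ComputeScreenPos(clipPos);

// The perspective divide then lands in a 0..1 range. Do this in the
// fragment shader, or manually in the vertex shader if you need it there.
float2 uv = screenPos.xy / screenPos.w;
```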

    Yes, because everything I said above only applies to 3D objects being rendered with a perspective projection matrix. The deferred directional light is a full screen quad and the projection matrix is very nearly an identity matrix (ie: as little transformation as possible while still being a valid clip space position), and I'm guessing you're passing the local vertex position using a TEXCOORD# semantic. It only renders where your mesh rendered previously because the deferred passes use stencils to mark which pixels have been drawn to so the GPU can quickly skip those that have not.



    Now that I understand what your end goal is, I think you're going about this the wrong way. You have a curvature value, which is a float2. What space is it in for you to only need two values? I would assume it's in tangent space, in which case you'll want to transform from tangent space into view space before writing the value out into the gbuffer. After that ... you're done, you have the value you need. Heck, you could take that view space direction, normalize just the x and y, and then use acos(x) to get the angle and store only that single value.
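    A hedged sketch of that tangent-space-to-view-space suggestion, assuming the curvature arrives as a tangent-space float2 (`curvature2D`, the TBN construction, and the variable names are my assumptions, not code from this thread):

```hlsl
// Lift the tangent-space float2 curvature into a 3D direction.
float3 curvatureTS = float3(curvature2D, 0);

// Build the tangent-to-world basis from the mesh data.
float3 normalWS    = UnityObjectToWorldNormal(v.normal);
float3 tangentWS   = UnityObjectToWorldDir(v.tangent.xyz);
float3 bitangentWS = cross(normalWS, tangentWS) * v.tangent.w;

// Tangent space -> world space -> view space.
float3 curvatureWS = curvatureTS.x * tangentWS + curvatureTS.y * bitangentWS;
float3 curvatureVS = mul((float3x3)UNITY_MATRIX_V, curvatureWS);

// Normalize just xy and keep a single angle, as suggested above.
// acos(dir.x) loses the sign of dir.y; use atan2(dir.y, dir.x) if it matters.
float2 dir   = normalize(curvatureVS.xy);
float  angle = acos(dir.x);
```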