
Camera normals to Worldspace normals

Discussion in 'Shaders' started by X3doll, Jul 6, 2022.

  1. X3doll
    Hi!

    I want to implement a post-processing effect that recolors the viewed surfaces.

    [attached images: upload_2022-7-6_12-41-40.png, upload_2022-7-6_12-47-59.png]
    The fragment code:
    Code (CSharp):
    fixed4 frag (v2f i) : SV_Target
    {
        // sample and decode the view-space normal from the camera's depth+normals texture
        float3 normals = decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv));
        float3 world_up = float3(0.0, 1.0, 0.0);
        float t = dot(normals, world_up);

        return lerp(tex2D(_MainTex, i.uv), _SurfaceColor, smoothstep(_SurfaceSensibility, 1.0, t));
    }
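    (decode_normal isn't shown in the thread; a minimal sketch of what it's assumed to do, wrapping Unity's built-in DecodeViewNormalStereo from UnityCG.cginc:)
    Code (CSharp):
    // Assumption: _CameraDepthNormalsTexture stores normals encoded with
    // EncodeViewNormalStereo, so the helper just unwraps that encoding.
    float3 decode_normal(float4 enc)
    {
        return DecodeViewNormalStereo(enc); // from UnityCG.cginc; returns a view-space normal
    }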
    The fragment code works, but the effect is applied in view space rather than world space.

    Example:

    [attached images: upload_2022-7-6_12-47-3.png, upload_2022-7-6_12-47-30.png]

    How can I convert the view-space normals to world-space normals, ignoring the camera's rotation and position?
     
  2. bgolus
    There are two options. If you know the normals are in view space, you can either transform the "world up" vector from world space into view space, or transform the normals from view space into world space.

    Code (csharp):
    fixed4 frag (v2f i) : SV_Target
    {
        float3 normals = decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv));
        float3 world_up = mul((float3x3)UNITY_MATRIX_V, float3(0.0, 1.0, 0.0)); // transform from world to view space (3x3 cast: rotation only)
        float t = dot(normals, world_up);

        return lerp(tex2D(_MainTex, i.uv), _SurfaceColor, smoothstep(_SurfaceSensibility, 1.0, t));
    }
    Code (csharp):
    fixed4 frag (v2f i) : SV_Target
    {
        float3 normals = mul(decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv)), (float3x3)UNITY_MATRIX_V); // transform from view to world space
        float3 world_up = float3(0.0, 1.0, 0.0);
        float t = dot(normals, world_up);

        return lerp(tex2D(_MainTex, i.uv), _SurfaceColor, smoothstep(_SurfaceSensibility, 1.0, t));
    }
    Note: there's no built-in view-to-world-space matrix, but the world-to-view-space matrix is guaranteed to be an orthogonal matrix (a fancy way of saying the matrix isn't oddly scaled or warped in any way), and a handy property of an orthogonal matrix is that its transpose and inverse are identical. If you call mul() with the vector before the matrix, it applies the transpose of that matrix, so that acts as the view-to-world matrix.
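    In other words (a minimal sketch, assuming a rotation-only matrix, hence the float3x3 casts; v_world is a hypothetical input direction):
    Code (csharp):
    // For an orthogonal matrix, inverse == transpose, so swapping the
    // argument order of mul() swaps the direction of the transform:
    float3 v_view    = mul((float3x3)UNITY_MATRIX_V, v_world); // world -> view
    float3 v_world_2 = mul(v_view, (float3x3)UNITY_MATRIX_V);  // view -> world (applies the transpose)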
     
  3. X3doll
    Hi Ben, nice to meet you.

    I've tried, as you said, transforming the view normals to world normals,
    but I didn't notice any change at all, until I tried checking the difference:

    Code (CSharp):
    fixed4 frag (v2f i) : SV_Target
    {
        float3 normals_world_space = mul(decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv)), (float3x3)UNITY_MATRIX_V);
        float3 normals_view_space = decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv));
        return float4(normals_world_space - normals_view_space, 1.0);
    }
    And as I noticed, there are no differences (black == 0):
    [attached image: upload_2022-7-8_12-32-31.png]

    It doesn't change even with the other mul() order (transforming world up into view space):

    Code (CSharp):
    fixed4 frag (v2f i) : SV_Target
    {
        float3 normals_view_space = decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv));
        float3 world_up = mul((float3x3)UNITY_MATRIX_V, float3(0.0, 1.0, 0.0));
        float t = dot(normals_view_space, world_up);
        return lerp(tex2D(_MainTex, i.uv), _SurfaceColor, smoothstep(_SurfaceSensibility, 1.0, t));
    }
    Side view:
    [attached image: upload_2022-7-8_12-40-10.png]

    Top view:
    [attached image: upload_2022-7-8_12-42-41.png]
     
  4. bgolus
    Whoops, right. This is a post process, so UNITY_MATRIX_V will be an identity matrix (no rotation or translation, and a uniform scale of 1). You want to use unity_WorldToCamera instead, like this:
    Code (csharp):
    float3 normals_view_space = decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv));
    float3 world_up = mul((float3x3)unity_WorldToCamera, float3(0.0, 1.0, 0.0)) * float3(1, 1, -1); // because reasons ...
    float t = dot(normals_view_space, world_up);
    To explain that * float3(1, 1, -1): Unity's view space is -Z forward, but the unity_WorldToCamera matrix is not; it is +Z forward. The reason is that this matrix is the Camera game object's transform (without scale) rather than the camera's view matrix. So you have to flip the Z direction to make it match view space.
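    The same trick works in the other direction. A sketch (same assumptions as above) that transforms the decoded normal to world space instead, using unity_CameraToWorld with the matching Z flip:
    Code (csharp):
    float3 normals_view_space = decode_normal(tex2D(_CameraDepthNormalsTexture, i.uv));
    // flip Z so the normal matches the +Z forward camera matrix, then rotate into world space
    float3 normals_world_space = mul((float3x3)unity_CameraToWorld, normals_view_space * float3(1, 1, -1));
    float t = dot(normals_world_space, float3(0.0, 1.0, 0.0));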
     
  5. X3doll
    It works, thanks!

    [attached image: upload_2022-7-9_17-54-18.png]

    Just asking: can you suggest a good article or reference on how Unity (and 3D game engines in general) handle these space conversions?
     
  6. bgolus
    I don't have any specific recommendations, no, though there are several articles that go over it in various ways.

    I've been a fan of this image, though (not my creation, and I'm not sure if the original source still exists):

    [attached image: diagram of the OpenGL coordinate-space transform pipeline]
    And I've posted longer descriptions of the matrices, like I did here:
    https://forum.unity.com/threads/retro-shader-bugs.775475/#post-5164691

    Though that image isn't Unity specific; it describes the common setup for OpenGL (which Unity mostly adheres to). It doesn't cover the difference between the "camera" and "view" matrices I mentioned above, nor other things like Unity's use of a reversed Z depth. Nor the fact that if you're not rendering with OpenGL, the NDC z range is 0.0 to 1.0, not -1.0 to 1.0 (which is unique to OpenGL, as it's one of the first graphics APIs, and literally every graphics API after it didn't do that, because it's dumb).
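    As a small illustration of the reversed Z point (a sketch using Unity's built-in macros; the far-plane test is the part that flips per platform):
    Code (csharp):
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
    #if defined(UNITY_REVERSED_Z)
        bool hitFarPlane = rawDepth <= 0.0; // reversed Z: near = 1.0, far = 0.0
    #else
        bool hitFarPlane = rawDepth >= 1.0; // traditional Z: near = 0.0, far = 1.0
    #endif
    float linearDepth = Linear01Depth(rawDepth); // _ZBufferParams already account for the platform difference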
     