
Question: Y axis direction in NDC/clip space

Discussion in 'General Graphics' started by maxestws, Mar 19, 2023.

  1. maxestws (Joined: May 17, 2014; Posts: 24)

    I am confused about which direction the Y axis goes in NDC/clip space in Unity.

    I am running the editor on DX11, where this axis points up: the lower left corner has coords (-1, -1) and the upper right has (1, 1).

    I defined a quad and rendered it with a custom shader that simply passes the vertex positions through (it does not use any matrices). I made one vertex stand out; it has coords (-0.9, -0.9), so it should sit in the lower left corner. But it ends up in the upper left corner:

    [Screenshot: upload_2023-3-19_23-27-22.png]

    This goes against the DX11 convention.
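
    To make the setup concrete, here is roughly what the quad construction could look like from C#. This is just a sketch: the class name and exact vertex layout are made up, and the pass-through vertex shader itself is HLSL and not shown.
    Code (CSharp):
    using UnityEngine;

    // Sketch: vertex positions here are meant to be consumed directly as
    // clip-space coordinates by a pass-through vertex shader (not shown).
    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class ClipSpaceQuad : MonoBehaviour
    {
        void Start()
        {
            var mesh = new Mesh();
            mesh.vertices = new[]
            {
                new Vector3(-0.9f, -0.9f, 0f), // the standout vertex
                new Vector3(-0.5f,  0.5f, 0f),
                new Vector3( 0.5f,  0.5f, 0f),
                new Vector3( 0.5f, -0.5f, 0f),
            };
            mesh.triangles = new[] { 0, 1, 2, 0, 2, 3 };
            // Generous bounds so frustum culling doesn't reject the mesh
            // (its positions are not real world-space coordinates).
            mesh.bounds = new Bounds(Vector3.zero, Vector3.one * 1000f);
            GetComponent<MeshFilter>().mesh = mesh;
            // A material using the pass-through shader must be assigned
            // to the MeshRenderer.
        }
    }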

    I used the Frame Debugger and noticed that, because MSAA and HDR were turned on, Unity renders to an intermediate texture. So I turned these two options off. With that, the extra render-to-texture pass is gone, so Unity should be rendering to the back buffer directly. And it did improve things, but only by half:

    [Screenshot: upload_2023-3-19_23-30-44.png]

    In the Game view it's correct now, but in the Scene view the "lower left" corner is still at the top, as if the Scene view still performed some render-to-texture pass that inverts the image along the Y axis.

    Does anyone know what's going on?
     


  2. c0d3_m0nk3y (Joined: Oct 21, 2021; Posts: 651)

    Unity uses the OpenGL convention for u/v coordinates and projection matrices. In OpenGL, the origin (uv coordinate (0, 0)) is at the bottom left.

    When you run the game on DirectX, Unity has to do some tweaks to make things work, because DirectX uses the top left as its origin; otherwise it would have to change the u/v coordinates of all meshes. To fix this, it has to:
    - Flip all textures vertically when uploading them to the GPU
    - Render upside down when rendering to a render target (but not the back buffer), since render targets are sampled just like textures

    To render upside down, it scales NDC space by (1, -1, 1). This is done by GL.GetGPUProjectionMatrix, which is usually called automatically. Note that GetGPUProjectionMatrix takes a parameter specifying whether you are rendering to the back buffer or to a render target. Unity also has to invert backface culling (see GL.invertCulling) because the winding order changes when you flip the y-axis.
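
    You can see this directly by comparing the matrices. A minimal sketch, assuming a main camera and the editor running on D3D11 (the z-range change is the one described in the PS below):
    Code (CSharp):
    using UnityEngine;

    public class ProjectionConventions : MonoBehaviour
    {
        void Start()
        {
            // Unity's C#-side projection matrix always uses the OpenGL convention.
            Matrix4x4 proj = Camera.main.projectionMatrix;

            // What the GPU actually receives. The bool says whether you are
            // rendering into a texture; on D3D11, passing true negates the
            // second row, which is the (1, -1, 1) NDC scale described above.
            Matrix4x4 toBackbuffer   = GL.GetGPUProjectionMatrix(proj, false);
            Matrix4x4 toRenderTarget = GL.GetGPUProjectionMatrix(proj, true);

            Debug.Log(SystemInfo.graphicsUVStartsAtTop); // true on D3D-like APIs
            Debug.Log(proj);
            Debug.Log(toBackbuffer);
            Debug.Log(toRenderTarget);
        }
    }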

    There is probably a difference between the Scene view and the Game view because the Scene view is rendered to a render target for some reason (gizmos, maybe).

    Usually, you don't have to know about this except in a few cases:
    - You'll notice that all textures and render targets are upside down in RenderDoc
    - In RenderDoc, you'll notice that the projection matrix is different from what you see in C# code
    - You are passing the projection matrix manually as a shader parameter by calling SetMatrix instead of SetProjectionMatrix. Usually this is only necessary for compute shaders (see the sketch after this list).
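
    Here is a minimal sketch of that compute-shader case. The shader reference, the "_Projection" property name, and kernel index 0 are all made up for illustration:
    Code (CSharp):
    using UnityEngine;

    public class ComputeProjection : MonoBehaviour
    {
        public ComputeShader cs; // hypothetical compute shader

        void Dispatch()
        {
            // ComputeShader.SetMatrix uploads the matrix verbatim, so the
            // GPU-convention conversion has to be done by hand here; passing
            // true assumes the result is treated like a render target.
            Matrix4x4 gpuProj =
                GL.GetGPUProjectionMatrix(Camera.main.projectionMatrix, true);
            cs.SetMatrix("_Projection", gpuProj);
            cs.Dispatch(0, 8, 8, 1); // kernel 0, illustrative thread groups
        }
    }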

    PS: GetGPUProjectionMatrix also changes the NDC-z range because OpenGL uses -1 to +1 for NDC-z whereas DirectX uses 0 to 1.
    PPS: In newer OpenGL versions you can change the origin and depth range via glClipControl to make it match DirectX conventions.
     
    Last edited: Mar 20, 2023
  3. maxestws (Joined: May 17, 2014; Posts: 24)

    Thank you @c0d3_m0nk3y for this detailed response.

    So, as I understand it, the starting point is that Unity follows the OpenGL convention. Imho that was the worst one to follow, but to be fair, the Unity authors didn't know where things would go back in 2005 :).

    I calculated the NDC position of a point via C# code, and it confirms what you say about the OpenGL convention:
    Code (CSharp):
    // View-projection matrix, in Unity's C#-side (OpenGL-convention) form.
    Matrix4x4 m = Camera.main.projectionMatrix * Camera.main.worldToCameraMatrix;
    Vector4 position_ndc = new Vector4(go.transform.position.x, go.transform.position.y, go.transform.position.z, 1.0f);
    position_ndc = m * position_ndc;  // now in clip space
    position_ndc /= position_ndc.w;   // perspective divide -> NDC
    Y goes up here and Z is in [-1, 1], just like in OpenGL.
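    Running the same point through the GPU-side matrix shows the flip (a sketch; passing true assumes rendering into a texture):
    Code (CSharp):
    Matrix4x4 gpuM = GL.GetGPUProjectionMatrix(Camera.main.projectionMatrix, true)
                   * Camera.main.worldToCameraMatrix;
    Vector4 p = new Vector4(go.transform.position.x, go.transform.position.y, go.transform.position.z, 1.0f);
    p = gpuM * p;
    p /= p.w;
    // p.y has the opposite sign of position_ndc.y, and p.z is no longer in [-1, 1].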

    Essentially you are saying that, because with MSAA and HDR Unity's camera renders to a texture, this operation is performed "upside down", and that is why my (-0.9, -0.9) point ends up at the top and not at the bottom? Because during this one render-to-texture pass the image got flipped?
    If there were another render-to-texture pass, so that we had two such passes (assume the second one is a simple texture copy), would the (-0.9, -0.9) point end up at the bottom?