Want to understand the projection matrix and lighting

Discussion in 'Shaders' started by sungsoos-mess, Nov 12, 2021.

  1. sungsoos-mess

    sungsoos-mess

    Joined:
    Feb 22, 2021
    Posts:
    2
    Hi,

    I want to understand where and how lighting comes into play in the pipeline and interacts with shaders. It hasn't been easy to figure out.

    Here are my questions:

    (1) After MVP processing in a vertex shader, the space that the camera sees has been transformed by a 'linear' transformation. Then, if I understood the documentation correctly, the lighting in the pipeline kicks in. So, for example, if I displace some object behind another object (with respect to a light source) in the vertex shader, the lighting will make that object rather dark because it would not be illuminated. Is this correct? (Or is there a way to perform lighting based on the original position of the vertex, before the vertex shader? Though I guess there is no point in doing that in most cases.)

    (2) If the above understanding is correct, then there is a problem with lighting. Even though the MVP transformation is linear, the light direction may change. For example, when light bounces off a surface the incoming and outgoing angles should be equal, but after MVP they generally won't be. The only thing that is preserved by the linear MVP transformation is whether the light reaches a given surface in the first place. Anything after that would not be physically correct. Is this right?

    (3) If so, how does ray tracing even work? To avoid this issue, ray tracing should happen 'before' the projection transformation. But does it?

    (4) This may sound a bit strange, but it seems like the MV transform, culling, z-test, ray tracing, lighting, etc. should all be performed 'before' projection to avoid non-physical rendering.

    I've been searching for answers to these questions, but haven't found a definitive one yet. Virtually all guidelines, tutorials, and shader articles only talk about the MVP matrix, vertex displacement, surface properties, post-processing, etc. So I hope I can get an answer here. Maybe I'm just too new to the field.

    Thanks,
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    Okay, so there are a couple of gaps in your understanding that I can see.

    Some basic ‘high level’ (ha!) things.

    MVP refers to the Model View Projection matrix. It's used during rasterization to calculate the screen position of each vertex. But just because the screen space (actually homogeneous clip space) vertex positions are calculated using the MVP matrix, that doesn't mean the shader can't also calculate other data in other spaces and pass that data to the fragment shader.
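
    As a rough illustration, here's a minimal sketch of a built-in pipeline style vertex shader (just a sketch, not a complete shader) that outputs the clip space position from the MVP matrix and, alongside it, world space data in separate interpolators:

        struct appdata
        {
            float4 vertex : POSITION;
            float3 normal : NORMAL;
        };

        struct v2f
        {
            float4 pos : SV_POSITION;     // homogeneous clip space, via the MVP matrix
            float3 worldPos : TEXCOORD0;  // world space position, for lighting
            float3 worldNormal : TEXCOORD1;
        };

        v2f vert (appdata v)
        {
            v2f o;
            // MVP transform, only used to place the vertex on screen
            o.pos = UnityObjectToClipPos(v.vertex);
            // separate transforms into world space, passed along untouched by the projection
            o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
            o.worldNormal = UnityObjectToWorldNormal(v.normal);
            return o;
        }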

    The two main things that get used for lighting are the normal (aka the direction the surface is facing) and the world position. Both of those get calculated in the vertex shader of most lit shaders and are passed to the fragment shader to be used in the lighting calculations. The clip space position calculated with the MVP matrix is mainly just there to figure out where the vertices are on screen, and thus what the triangle coverage is. That, and it has one other trick.
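
    A matching fragment shader sketch (assuming a single directional light in a basic forward base pass; again just an illustration, not how Unity's actual lit shaders are structured):

        fixed4 frag (v2f i) : SV_Target
        {
            // lighting happens here, using the interpolated world space data,
            // not the clip space position
            float3 N = normalize(i.worldNormal);
            float3 L = normalize(_WorldSpaceLightPos0.xyz); // direction of a directional light
            float NdotL = saturate(dot(N, L));
            return fixed4(_LightColor0.rgb * NdotL, 1.0);
        }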

    Homogeneous clip space is a projective coordinate space that gets linearly interpolated in screen space. The magic of this kind of coordinate system is that even though it's linearly interpolated in screen space, when you divide the interpolated xyz values by the interpolated w component you get a value that has been perspective corrected. The same trick is used to interpolate the normal and world position so that those values are effectively “linearly interpolated in world space”. This is known as perspective correction. Very old real time rendering, like that used by the PS1, did not do this, which is what caused the weird distortion of its UVs: PS1 texture coordinates are linearly interpolated in screen space and are not perspective corrected.
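
    As a rough sketch of the math (this is the standard textbook perspective correct interpolation, nothing Unity specific): for an interpolation factor $t$ between two vertices with attribute values $a_0, a_1$ and clip space $w$ values $w_0, w_1$, the hardware effectively computes

    $$ a = \frac{(1 - t)\,\dfrac{a_0}{w_0} + t\,\dfrac{a_1}{w_1}}{(1 - t)\,\dfrac{1}{w_0} + t\,\dfrac{1}{w_1}} $$

    i.e. it linearly interpolates $a/w$ and $1/w$ in screen space, then divides, which recovers values that behave as if they were linearly interpolated in world space.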

    Raytracing doesn't use an MVP matrix to transform vertices. All of the stuff you're reading about how real time rendering works, I guarantee none of it is talking about raytracing. Raytracing works by shooting arbitrary rays into a BVH (a data structure for organizing a sparse collection of elements in 3D space, used to quickly find which triangles a ray might intersect), and that BVH itself may be in an arbitrary transform space. There's no concept of a “vertex shader” in raytracing, because the BVH is built from a mesh whose vertex positions have already been calculated before the BVH is constructed.
     
    Last edited: Nov 13, 2021