VR shadows: rendered once or twice?

Discussion in 'Graphics Experimental Previews' started by McDev02, Mar 12, 2018.

  1. McDev02

    McDev02

    Joined:
    Nov 22, 2010
    Posts:
    664
    I am not a graphics engineer, but I want to understand how Unity performs rendering and what we can expect in the future. Especially in light of the upcoming render pipelines, I want to sum up my questions here. If there are any resources explaining this in detail, I would be thankful; so far I couldn't find answers.


    Isn't it possible to render certain things only once instead of twice, once per eye? Shadow maps, for instance, are not camera dependent, so a spotlight's shadow map should be calculated only once in VR, not twice. What is the current state in Unity?

    This is especially critical for a directional light with shadow cascades.

    Culling could also be performed just once using a combined frustum. Is this done already?
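    To make the combined-frustum idea concrete, here is a hedged toy sketch (illustrative only, not Unity's API; every name and number is an assumption). Each eye's frustum is flattened to an x-interval, and the combined frustum is their hull, so visibility is tested once and the result is reused for both eyes:

    ```python
    # Toy sketch: cull once against the union of both eye frustums instead
    # of once per eye. Frustums are simplified to x-intervals; objects are
    # (center_x, radius) pairs. All values here are made up for illustration.

    def combined_frustum(left, right):
        """left/right: (min_x, max_x) view intervals -> their convex hull."""
        return (min(left[0], right[0]), max(left[1], right[1]))

    def cull(objects, frustum):
        """Single culling pass whose result both eyes can share."""
        lo, hi = frustum
        return [(x, r) for x, r in objects if x + r >= lo and x - r <= hi]

    objects = [(-5.0, 1.0), (0.0, 1.0), (9.0, 0.5)]
    both_eyes = combined_frustum(left=(-3.0, 3.5), right=(-2.5, 4.0))
    visible = cull(objects, both_eyes)  # one visibility list, reused per eye
    ```

    The combined frustum is slightly conservative (it may keep an object only one eye can see), but that is the usual trade for running culling once.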


    If any of that is not the case, I wonder whether there will be a pipeline that optimizes things for VR, or whether we could "simply" make our own, e.g. the community comes up with one.
    The thing is that we always run into issues with VR: we have to render realistic environments and often sacrifice things like AA for other graphical features like crisper shadows, even though I would like to render at 2x or 3x resolution to get a crisper image :)

    I want to know whether we are already at the limit here or whether there is still room for optimization.
    Is there anything else that you think could be improved for VR rendering?
     
  2. Tim-C

    Tim-C

    Unity Technologies

    Joined:
    Feb 6, 2010
    Posts:
    2,225
    When doing VR we render the shadowmap one time (from the perspective of the light). What happens next is one of two things:
    1) Screen space shadows: Do a depth pass (needed for both eyes), then project the shadowmap onto that depth to build a screen space shadow texture. When rendering normal objects, sample this texture (in screen space) and apply the shadow. Objects are rendered as normal for VR (i.e. multipass, single pass, or single pass instancing).
    2) Object space shadows: Projection is done as the object is rendered. No screen space texture is used; the projection happens for each eye.
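    The screen space path above can be sketched in a toy form (illustrative only, not Unity's real implementation; every name and number is an assumption). A directional light shines straight down, so the shadowmap stores, per world x-column, the top height of the highest occluder. That map is rendered once, while the depth pass and the resulting shadow mask are built once per eye:

    ```python
    # Toy sketch of screen space shadows: one shared shadowmap, one depth
    # pass and shadow mask per eye. All values are illustrative assumptions.

    def render_shadowmap(occluders):
        """occluders: list of (x, top_height). One map shared by both eyes."""
        shadowmap = {}
        for x, top in occluders:
            shadowmap[x] = max(top, shadowmap.get(x, float("-inf")))
        return shadowmap

    def depth_pass(eye_x, surface_points):
        """Per-eye 'depth buffer': signed offset from the eye to each point."""
        return [px - eye_x for px, _ in surface_points]

    def screen_space_shadow_mask(eye_x, surface_points, shadowmap):
        """Reconstruct each pixel's world position from the per-eye depth
        and test it against the shared shadowmap. 1.0 = lit, 0.0 = shadowed."""
        depths = depth_pass(eye_x, surface_points)
        mask = []
        for (px, py), d in zip(surface_points, depths):
            world_x = eye_x + d  # position reconstructed from depth
            blocked = shadowmap.get(world_x, float("-inf")) > py
            mask.append(0.0 if blocked else 1.0)
        return mask

    # The light-space render happens once...
    shadowmap = render_shadowmap(occluders=[(2, 5.0), (4, 1.0)])
    surfaces = [(2, 3.0), (3, 0.0), (4, 2.0)]
    # ...then each eye builds its own mask, sampled later in screen space.
    left_mask = screen_space_shadow_mask(-1, surfaces, shadowmap)
    right_mask = screen_space_shadow_mask(1, surfaces, shadowmap)
    ```

    In a real renderer the mask is a full screen texture sampled during normal object rendering; the key point from the post above is that the expensive light-space shadowmap render is not duplicated per eye, only the cheap screen space resolve is.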
     
  3. McDev02

    McDev02

    Joined:
    Nov 22, 2010
    Posts:
    664
    Thanks for the answer. Can I assume from this that culling and LOD calculations also happen just once in VR?