
Depth/world position reprojection

Discussion in 'AR/VR (XR) Discussion' started by HamtaroDeluxe, Aug 8, 2018.

  1. HamtaroDeluxe

     Joined:
     Jun 16, 2018
     Posts:
     11
    Hello everyone.
    I'm building a raymarching shader. It works well, but I'm still trying to optimise it for VR. I've read about computing the depth once for a middle eye, reprojecting it to both eyes, and doing the rest of the shading afterwards. I figured I could also use the world position, since it's computed by the raymarcher anyway.
    My setup is already in place: a first pass renders worldPos to a square render texture. A second shader pass should then reproject the world position and compute the shading from it, into a render texture twice as wide.

    The problem is that I can't quite picture the exact transforms to apply to "reproject" a world position.
    I tried to do something similar to what is explained here: https://developer.oculus.com/blog/introducing-stereo-shading-reprojection-for-unity/
    If I understand correctly, applying the view and projection matrices to worldPos gives us the UV that pixel should land at in the eye we are reprojecting to. BUT, we are computing this in the fragment shader, for the pixel at the UV of the first render texture (the middle-eye camera). So we can't write the worldPos value at the correct UV.
    (I'm using single-pass stereo, remapping uv.x, providing matrices for both eyes and switching depending on uv.x.)
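    To make the mapping concrete, here is a minimal sketch in plain Python (with a hypothetical identity view matrix and a toy projection matrix, not Unity's actual per-eye matrices) of the forward projection the Oculus article describes: world position → clip space → NDC → UV. It also makes the difficulty visible: evaluating this in a fragment shader tells you where the current pixel's worldPos should land in the other eye, which is a scatter, while a fragment shader can only gather.

    ```python
    # Sketch: project a world-space point through view/projection matrices to
    # the [0,1] UV it lands on for one eye. Matrices here are hypothetical
    # placeholders, not values pulled from Unity.

    def mat_vec(m, v):
        """Multiply a 4x4 row-major matrix by a 4-vector."""
        return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

    def world_to_uv(world_pos, view, proj):
        """Return the UV where world_pos appears for the eye (view, proj)."""
        clip = mat_vec(proj, mat_vec(view, world_pos + [1.0]))
        ndc = [clip[0] / clip[3], clip[1] / clip[3]]         # perspective divide
        return [(ndc[0] + 1.0) * 0.5, (ndc[1] + 1.0) * 0.5]  # NDC [-1,1] -> UV [0,1]

    # Identity view; simple symmetric perspective projection (toy values).
    view = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    proj = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, -0.2], [0, 0, -1, 0]]

    uv = world_to_uv([0.0, 0.0, -2.0], view, proj)  # point on the view axis -> UV (0.5, 0.5)
    ```

    In a real second pass, per output pixel of the larger render texture, you would need the inverse direction (a gather): find which middle-eye texel stores a worldPos that projects to this eye-pixel, which is why a search along the reprojection direction, as in the paper linked below, ends up being necessary.
    
    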

    I think I'm missing something...
    Thanks in advance :)

    EDIT: OK, I found this article, which answers most of my questions: http://www.marries.nl/wordpress/wp-...of-Stereoscopic-Images-Using-Reprojection.pdf
    They use raymarching to estimate the depth... It seems more complex than I expected, but I'll try it.
     
    Last edited: Aug 8, 2018