
Multiple Camera / RenderTexture Issue!

Discussion in 'VR' started by jeo77, Sep 25, 2019.

  1. jeo77

    Joined:
    Jan 5, 2012
    Posts:
    5
    I’ve been struggling with some VR issues that have been plaguing me, and I was hoping I could find some help here. I’ve edited this post to make my description a bit clearer; here goes:

    I have a setup that is as follows:
    • Scene with 2 cameras:
    • One camera (RTCamera) that draws either black or red to a Render Texture, using a replacement shader and SetTargetBuffers(). This camera looks at a different area of the game, but mimics the movement of the Main Camera
    • The Render Texture is then used by a shader to clip any pixels whose screen-space position falls within the black portion of the texture
    • A second camera (Main Camera) that views a number of objects using this clipping shader (a rough sketch of this setup follows the list)
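
    To make the setup concrete, here's roughly what the RTCamera hookup looks like in C# (simplified - names like _ClipMaskTex, the resolution, and the replacement tag are just placeholders, not the real project code):

    Code (CSharp):
    using UnityEngine;

    // Simplified sketch of the setup described above (not the exact project code).
    public class RTCameraSetup : MonoBehaviour
    {
        public Camera mainCamera;            // the VR Main Camera
        public Camera rtCamera;              // the mask camera ("RTCamera"), Target Eye set to None
        public Shader maskReplacementShader; // replacement shader that draws flat black or red
        public Vector3 worldOffset;          // offset to the other area of the game

        RenderTexture maskRT;

        void Start()
        {
            maskRT = new RenderTexture(1024, 1024, 24);
            rtCamera.SetTargetBuffers(maskRT.colorBuffer, maskRT.depthBuffer);
            rtCamera.SetReplacementShader(maskReplacementShader, "RenderType");
            // "_ClipMaskTex" is a placeholder name for the property the clipping shader samples.
            Shader.SetGlobalTexture("_ClipMaskTex", maskRT);
        }

        void LateUpdate()
        {
            // Mimic the Main Camera's movement, but look at the offset area.
            rtCamera.transform.SetPositionAndRotation(
                mainCamera.transform.position + worldOffset,
                mainCamera.transform.rotation);
        }
    }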

    The issues I’m having:
    • A) The Main Camera renders as a VR camera, so there’s a view offset for each eye. The RTCamera currently mimics the Main Camera's movement, but when it renders to the Render Texture it uses only the center position (as in between the eyes, not the eye offsets). I can’t figure out how to render each eye offset to the Render Texture - either both eyes to one double-wide texture, or each offset separately (if rendering one eye at a time). I’ve tried setting the RTCamera’s ‘Target Eye’ setting to ‘Both’, but it won’t render to the Render Texture unless that is set to ‘None’. (See the sketch after this list for roughly what I’m imagining.)
    • B) Even if I get the right offset for each eye when writing to the Render Texture from the RTCamera, I don’t think it will line up properly. I’m pretty sure clipping based on the screen-space position in the shader will yield the wrong results, because (I believe) the headset view for each eye warps the image - meaning the ‘screen-space’ center of the eye isn’t the same as the actual center of the image for that eye.
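
    To make A) more concrete, this is roughly the kind of thing I imagine I need - manually rendering the RTCamera once per eye with the Main Camera's per-eye matrices (GetStereoViewMatrix / GetStereoProjectionMatrix) into each half of a double-wide texture. This is an untested sketch, not something I've gotten working:

    Code (CSharp):
    using UnityEngine;

    // Untested sketch: render the mask once per eye into a double-wide RenderTexture,
    // using the Main Camera's per-eye view/projection so the mask matches each eye's screen space.
    public class PerEyeMaskRenderer : MonoBehaviour
    {
        public Camera mainCamera;              // the VR camera
        public Camera rtCamera;                // mask camera, disabled so it only renders via Render()
        public Vector3 worldOffset;            // offset to the mirrored area
        public RenderTexture doubleWideMaskRT; // e.g. 2 * eye width  x  eye height

        void LateUpdate()
        {
            RenderEye(Camera.StereoscopicEye.Left,  new Rect(0f,   0f, 0.5f, 1f));
            RenderEye(Camera.StereoscopicEye.Right, new Rect(0.5f, 0f, 0.5f, 1f));
        }

        void RenderEye(Camera.StereoscopicEye eye, Rect viewport)
        {
            rtCamera.targetTexture = doubleWideMaskRT;
            rtCamera.rect = viewport;

            // Take the exact per-eye matrices from the headset, shifted into the offset area.
            rtCamera.worldToCameraMatrix =
                mainCamera.GetStereoViewMatrix(eye) * Matrix4x4.Translate(-worldOffset);
            rtCamera.projectionMatrix = mainCamera.GetStereoProjectionMatrix(eye);

            rtCamera.Render();

            rtCamera.ResetWorldToCameraMatrix();
            rtCamera.ResetProjectionMatrix();
        }
    }

    My (possibly wrong) assumption is that if the mask is rendered with the same per-eye matrices the headset uses, the screen-space positions should also line up for B), since the lens warp is applied after both cameras have rendered - but that's exactly the part I'm unsure about.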

    This is the view of both cameras, and how it looks vs how I want it to look:

    [Attached images: RTCamera.png, MainCamera.png, View.png]



    I’ve read up online and found some info posted by Bgolus about using the headset’s projection matrix, but I’m afraid what I found either wasn’t closely related enough or was a little over my head. I’ve also put together a small example project of the issue in case the description isn’t fully clear. Any help on this would be hugely appreciated. Thank you!

    Link to example project (On Google Drive)

    For context, this is the project / concept it's being used for
     
    Last edited: Sep 26, 2019
  2. MaxIzrinCubeUX

    Joined:
    Jan 13, 2020
    Posts:
    5
    You could use a relatively simple clip shader to create the effect in the GIF you linked to.
    I did just that to create a sphere of "visibility" for a floating map in an AR project.
    See the attached shader: apply it to an object and it will only be visible within a given distance, in a column around the "Origin" point.
    You can go into the code and see where I make the coordinate calculations, and switch that out for a cube or whatever other shape you need.
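
    The attachment has the full version, but the gist is just a world-space distance check plus clip() - something like this (simplified sketch; property names here are only illustrative):

    Code (shader):
    Shader "Custom/ColumnClip"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
            _Origin ("Origin (world)", Vector) = (0, 0, 0, 0)
            _Radius ("Visible Radius", Float) = 1.0
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _MainTex;
                float4 _MainTex_ST;
                float4 _Origin;
                float _Radius;

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float2 uv : TEXCOORD0;
                    float3 worldPos : TEXCOORD1;
                };

                v2f vert (appdata v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
                    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // Distance from the origin in the XZ plane -> a vertical "column" of visibility.
                    float dist = distance(i.worldPos.xz, _Origin.xz);
                    // Discard fragments outside the visible radius.
                    clip(_Radius - dist);
                    return tex2D(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }

    Swap the XZ distance for whatever shape test you need (a box check for a cube, etc.).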

    If you want "window"-style masking, you can actually use the CanvasRenderer for this: you can manually assign it a mesh, and it will work with the UI Mask component, creating a "window" of sorts.
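
    For the CanvasRenderer route, the setup is roughly this (untested sketch; the mesh, material, and hierarchy are just examples) - parent the object under a UI element that has a Mask component:

    Code (CSharp):
    using UnityEngine;

    // Untested sketch of the "window" masking idea above: a CanvasRenderer that is given a mesh
    // manually and placed under a UI Mask, so only the part inside the mask's rect shows.
    [RequireComponent(typeof(CanvasRenderer))]
    public class MaskedMeshWindow : MonoBehaviour
    {
        public Mesh mesh;          // the mesh to show through the "window"
        public Material material;  // any material compatible with canvas rendering

        void Start()
        {
            CanvasRenderer cr = GetComponent<CanvasRenderer>();
            cr.materialCount = 1;
            cr.SetMaterial(material, 0);
            cr.SetMesh(mesh);
            // Place this object under a UI element with an Image + Mask component;
            // per the suggestion above, the Mask then clips the mesh to a rectangular "window".
        }
    }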
     

    Attached Files: