
Resolved Planar Reflection Mirror (Legacy Renderer, VR Compatible)

Discussion in 'General Graphics' started by neginfinity, Sep 16, 2021.

  1. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    Update:
    Solution I ended up with:
    https://github.com/NegInfinity/NiVrMirrorDemo


    ----
    Original text:
    ----

    What is the standard way of implementing a planar reflection these days?

    I'm looking for a solution for the legacy rendering pipeline that is VR compatible.

    As far as I know, SSR and reflection probes aren't it. Given enough time I could roll my own solution using camera stacking, the stencil buffer, or render targets.

    However, I've seen VRChat implement a flat mirror that works on Quest and can be cut into fragmented pieces, and then at a later date another VR app used the same technique.

    That makes me think there's a commonly used way I'm not aware of.

    How can this be done?
     
    Last edited: Jan 3, 2022
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    AFAIK the standard method is rendering an oblique projection, calculated from the current camera's projection and the intended reflection plane, into a render texture, as it has kind of been for the last decade at this point. Until raytracing becomes super common, it'll remain the only real viable option if you need reflections of things not visible to the original camera, like the face of a character looking at the mirror from a first-person or over-the-shoulder third-person view.

    What do you mean cut into fragmented pieces though? If all of the parts are still facing the same direction and on the same plane, they can use a single reflection camera & render texture. If they move or rotate off of that plane, then you'd have to either render unique projections for each one, or use faux refraction style distortions to fake it, probably falling back to reflection probes. Similar to how SSR usually falls back to reflection probes when sampling outside of the view frustum.
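    The core of that approach is a plane reflection matrix composed with the current camera's view matrix, plus an oblique near plane. The matrix itself is plain math; here is a minimal sketch in plain C# (no Unity dependency, and all names here are illustrative rather than from any actual mirror implementation):

```csharp
using System;

static class PlanarReflectionMath {
    // Householder reflection matrix (row-major) for the plane
    // nx*x + ny*y + nz*z + d = 0, with (nx, ny, nz) normalized.
    // Composing the camera's world-to-view matrix with this matrix
    // yields the mirrored view used for planar reflections.
    public static double[,] ReflectionMatrix(double nx, double ny, double nz, double d) =>
        new double[,] {
            { 1 - 2*nx*nx,    -2*nx*ny,    -2*nx*nz, -2*nx*d },
            {    -2*ny*nx, 1 - 2*ny*ny,    -2*ny*nz, -2*ny*d },
            {    -2*nz*nx,    -2*nz*ny, 1 - 2*nz*nz, -2*nz*d },
            {           0,           0,           0,       1 },
        };

    // Applies a 4x4 matrix to a homogeneous point (x, y, z, 1).
    public static double[] Apply(double[,] m, double[] v) {
        var r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += m[i, j] * v[j];
        return r;
    }
}
```

    For example, reflecting the point (0, 3, 0) across the plane y = 1 (normal (0, 1, 0), d = -1) lands at (0, -1, 0).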
     
  3. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    There are two eyes in VR, though. So, two render textures? Isn't camera stacking possible? I haven't tinkered with separate rendering for each eye in VR (yet?).

    I also recall this jam game, which is not too different from a mirror:

    Unreal Engine also has a planar reflection feature. It is supposedly super expensive, but it works in VR and retains depth (so each eye sees a different image), and at the same time you can downsample it. So I guess in Unreal's case it is a render texture.

    One more option is clip planes and stencil-buffer trickery. Implementing that in Unity could require a lot of custom shader code.

    The mirror is static and unmoving, and the pieces are on the same plane, so it could be the same reflection. The mirror, however, is floating in the air and has holes in it, and you can see the scenery behind it through the holes. It can also reflect different things than the scene shows; for example, you can turn off the world and show only the character.

    The important thing is that it works on Quest, meaning it is very high performance and is rendered by what is essentially a mobile GPU.

    I also do not see any hint of pixelation or filtering on it; it is as sharp as the rest of the environment. That makes me think that rather than a render texture, it is something akin to camera stacking.

    I can make a video if you really want.
     
  4. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Could've led with that :p
     
  5. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    I blame severe lack of sleep.

    Give me half an hour.
     
    hippocoder likes this.
  6. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    Here's what it looks like.

    Very crisp, huge, and on a low-power device on top of that.



     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Here's the thing: you're rendering the scene twice. Four times for VR. That's expensive. It doesn't matter much whether you're rendering into a separate render texture or using complex clipping schemes.

    If what you're rendering is cheap to render, then rendering it 4 times is fine. If what you're rendering is expensive, then rendering it 4 times is bad. On a low-power device like the Quest, mirrors are always expensive. They might seem cheap if the stuff in the scene / reflection is cheap to render.

    My money is still on them using render textures. Stencil-based approaches require that every object in the scene use special shaders that know how to work with stencils. You can't make that assumption in VRChat, as you don't have control over every shader that might be used. Camera stacking still ends up requiring render textures most of the time, since you need some way to mask an arbitrary area of the screen, and the "easiest" way to do that is to render the shape of the portal / mirror to a render texture... or use stencils... but then you have the same problem again.

    Render textures are the only option that works no matter what other materials are used in the scene.

    Using one render texture, "two" render textures (unlikely), or even one render texture that's a 2D array is more a question of how the stereo rendering is being done than of the reflection technique. Unity does stereo rendering differently for different platforms. The short version is: you create a camera, calculate the oblique matrix for both eyes, assign it a render texture set up for stereo rendering, and then sample the resulting render texture using the stereo render texture sampler helpers.

    Why would there be? Understand that the render texture isn't applied "on the surface". It's a screen-space texture, likely at the same resolution as the "main" camera view. The mirror object has a shader on it that just reads that screen-space texture using the stereo screen UVs. It should be exactly as sharp as the main view (and maybe even sharper, due to not getting fixed foveated rendering applied to it).

    Also, it's still another camera. So you can have it render or not render whatever you want using layers.
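    As a sketch of what such a mirror-surface shader might look like (the texture name _ReflectionTex and the v2f layout are assumptions, not from the thread; UnityStereoTransformScreenSpaceTex is the stereo screen-UV helper from UnityCG.cginc):

```csharp
// Hypothetical fragment stage for the mirror surface: read the
// screen-space reflection texture with stereo-aware UVs.
sampler2D _ReflectionTex; // reflection camera's output (illustrative name)

struct v2f {
    float4 vertex : SV_POSITION;
    float4 screenPos : TEXCOORD0; // ComputeScreenPos(o.vertex) in the vertex shader
    UNITY_VERTEX_OUTPUT_STEREO
};

fixed4 frag (v2f i) : SV_Target {
    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
    // Perspective divide yields 0..1 screen UVs; the helper remaps them
    // to the active eye's half (or slice) of a stereo render texture.
    float2 uv = i.screenPos.xy / i.screenPos.w;
    return tex2D(_ReflectionTex, UnityStereoTransformScreenSpaceTex(uv));
}
```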
     
    neginfinity likes this.
  8. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    What you say makes sense, and it looks like I'll need to read up on oblique projection matrices.

    The thing I'm wondering about is why the avatar's finger is not drawn sticking out of the mirror (assuming there's mirror area for it), because that would be the usual error in a situation where the camera is rendering onto a texture from another point of view.

    In order for the finger not to stick out, it has to be clipped, and because there are no clip planes, it can only be clipped against the near plane.

    And if it is clipped against the near plane, then a render target is not strictly necessary and it can be done with camera stacking.
     
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    With the oblique matrix, the near plane is on the plane of reflection.

    The problem with camera stacking is limiting the rendering of the "mirror" to the area where the mirror appears. Also be mindful that because the "mirror" uses a different projection than the main view, the depth buffers are incompatible... which is fine, since you'll want to clear the depth anyway.

    Though I can imagine a way of doing it like this:
    1. Render the reflection camera without a depth clear, using the oblique projection matrix.
    2. Clear depth to the near plane. This can be done with a command buffer or a custom shader.
    3. Have the mirror object's geometry render in this camera with a queue of 0 and a shader that only writes depth at the far plane. This effectively creates a hole for the stuff inside the mirror to be visible in; everything outside it will be depth-rejected.
    4. Render the main camera view, using a depth clear.
    5. The mirror object's geometry needs to render again with a queue of 0 and a shader that only writes depth, but using its real depth this time.
    Technically steps 2 and 3 can be skipped, but keeping them should be a relatively decent performance improvement when the mirror is far away.
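    The step-3 depth-only shader could be sketched like this (a sketch under stated assumptions: the shader name is made up, stereo instancing macros are omitted for brevity, and UNITY_REVERSED_Z covers platforms where the far plane sits at depth 0):

```csharp
// Hypothetical depth-only pass for step 3: renders the mirror's shape
// with its depth forced to the far plane, punching a "hole" the
// reflection can show through while everything outside stays depth-rejected.
Shader "Hidden/MirrorDepthHole" {
    SubShader {
        Tags { "Queue"="Background" } // i.e. render before everything else
        Pass {
            ColorMask 0   // write depth only, no color
            ZWrite On
            ZTest Always
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 vert (float4 vertex : POSITION) : SV_POSITION {
                float4 o = UnityObjectToClipPos(vertex);
                // Clip-space far plane: z = w conventionally,
                // z = 0 with a reversed depth buffer.
                #if defined(UNITY_REVERSED_Z)
                o.z = 0.0;
                #else
                o.z = o.w;
                #endif
                return o;
            }

            fixed4 frag () : SV_Target { return 0; }
            ENDCG
        }
    }
}
```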
     
  10. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    I was thinking of:
    1. Render the reflection geometry with the reflection camera.
    2. Switch to the main camera and clear depth.
    3. Draw a transparent polygon in the shape of the mirror with ZWrite enabled.
    4. Render the scene with the main camera.
    This will work as long as the skybox honors the z-buffer.

    The problem with this approach is that the geometry must be clipped by the mirror plane in step one, and the easiest way to do that is world clip planes, which are not available. But I suppose this can be fixed with an oblique matrix.

    On a related note, I was about to ask about oblique matrix documentation, and then I found this:
    https://docs.unity3d.com/ScriptReference/Camera.CalculateObliqueMatrix.html .
     
  11. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    Well. I tried my idea (without render targets) and did get it working on flatscreen.

    upload_2021-9-17_13-20-38.png

    However, I can see why it is common to use render targets.

    Basically, render targets allow you to distort the image (for example, with waves), while this approach doesn't.

    Another thing is that this sort of mirror requires a horizontal flip of the camera, which means a negative scale on one of the axes of the projection matrix, AND it is necessary to invert backface culling during the render. There are reports from people that this doesn't work in VR.

    I'm pretty sure I got one of the components of the oblique clip plane wrong, as the reflection breaks when I start rotating the mirror.

    @bgolus, perhaps you'd want to add some comment?
     
    Last edited: Sep 17, 2021
  12. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Is there a drawback to rendering the mesh again, mirrored, but discarding pixels outside a volume? This should work by default in VR and can be handled entirely in a single shader by passing the volume matrix in.

    I did something similar before, but outside of VR. I don't see why it wouldn't work in VR. Obviously not for VRChat, though, since anyone can have any shader.
     
  13. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    Yes, it requires a custom shader for the mesh and will not work on arbitrary geometry with arbitrary shaders. That alone makes it too annoying to deal with. It will also only work on geometry/objects you've marked as "mirror-able".

    This technique was used in the past, for example, to create the mirror rooms in Silent Hill.


    But it is too limited.
     
  14. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Because discarding using clip() or discard means you're still paying the full cost of rendering those fragments, and the discard makes them even more expensive, because the TBDR hardware has to run the shader twice or switch to immediate-mode rendering.

    And, as @neginfinity mentioned, it requires custom shaders.

    The Vector4 for the plane is in camera space - specifically the cam.worldToCameraMatrix space, which isn't the same as the camera's transform space. The xyz is the normalized normal, and the w is the distance from the origin (the camera's position). That can be calculated manually with a dot product, or you can use Unity's Plane class, to which you pass a normalized normal and a Vector3 position, and then use plane.distance for the w value.

    See: https://wiki.unity3d.com/index.php/MirrorReflection4

    I think the hack is that you flip the world-to-camera matrix, not the projection matrix, like the above script does.
     
    neginfinity likes this.
  15. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    I believe it is also possible to implement clipping via geometry shaders. That would still require a custom shader, and the GPU would still run the vertex shader.

    I'll take a look. What I did is this:

    Code (csharp):

    void OnPreCull(){
        cam.ResetProjectionMatrix();

        var worldClipPos = mirrorPlane.transform.position;
        var worldClipNormal = mirrorPlane.transform.up;

        var worldClipPlane = new Vector4(
            worldClipNormal.x, worldClipNormal.y, worldClipNormal.z,
            -Vector3.Dot(worldClipPos, worldClipNormal)
        );

        var camClipPos = cam.transform.InverseTransformPoint(worldClipPos);
        var camClipNormal = cam.transform.InverseTransformVector(worldClipNormal);
        camClipNormal.x = -camClipNormal.x;
        camClipPos.x = -camClipPos.x;

        var camClipPlane = new Vector4(
            camClipNormal.x, camClipNormal.y, camClipNormal.z,
            Vector3.Dot(camClipPos, camClipNormal)
        );

        var proj = cam.CalculateObliqueMatrix(camClipPlane);
        proj = proj * Matrix4x4.Scale(new Vector3(-1.0f, 1.0f, 1.0f));

        cam.projectionMatrix = proj;
    }
     
  16. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    You want:
    Code (csharp):

    var camClipPos = cam.worldToCameraMatrix.MultiplyPoint(worldClipPos);
    var camClipNormal = cam.worldToCameraMatrix.MultiplyVector(worldClipNormal);
    The game object transform space and camera space are not the same, at all.

    You also want to move the camera, or at least set the cam.worldToCameraMatrix, to the reflected position of the current camera, rather than leaving it where it is, before doing this.
     
    neginfinity likes this.
  17. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    It is in the reflected position; it's just that the movement script is in LateUpdate.
     
  18. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    I've revisited this, solved the issue from the previous post, and decided to share the script.
    upload_2021-12-16_22-22-21.png

    Currently this does not work in VR (I'm getting a mess in the HMD). It might be possible to fix it for VR, however.

    Hole shader:
    Code (csharp):

    Shader "Unlit/HoleShader"{
        Properties{
            _MainTex ("Texture", 2D) = "white" {}
        }
        SubShader{
            Tags { "RenderType"="Opaque" "Queue"="Geometry-1" }
            LOD 100
            //ColorMask 0
            Blend SrcAlpha OneMinusSrcAlpha

            Pass{
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma multi_compile_fog

                #include "UnityCG.cginc"

                struct appdata{
                    float4 vertex : POSITION;

                    UNITY_VERTEX_INPUT_INSTANCE_ID
                };

                struct v2f{
                    float4 vertex : SV_POSITION;

                    UNITY_VERTEX_OUTPUT_STEREO
                };

                v2f vert (appdata v){
                    v2f o;
                    UNITY_SETUP_INSTANCE_ID(v);
                    UNITY_INITIALIZE_OUTPUT(v2f, o);
                    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                    o.vertex = UnityObjectToClipPos(v.vertex);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target{
                    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
                    // semi-transparent blue marker color
                    fixed4 col = fixed4(0.0, 0.0, 1.0, 0.5);
                    return col;
                }
                ENDCG
            }//pass
        }//subshader
    }//shader
    Mirror Script:
    Code (csharp):

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    [ExecuteInEditMode]
    public class MirrorCamController : MonoBehaviour{
        [SerializeField] Transform mirrorPlane;
        [SerializeField] Camera sourceCam;
        Camera cam;

        void Awake(){
            cam = GetComponent<Camera>();
        }

        void updateCameraPos(){
            if(!cam || !mirrorPlane || !sourceCam){
                Debug.LogWarning("Mirror camera misconfigured");
                return;
            }

            cam.CopyFrom(sourceCam);
            cam.depth -= 1;

            var n = mirrorPlane.up;
            var p = mirrorPlane.position;

            var diff = sourceCam.transform.position - p;
            var rDiff = Vector3.Reflect(diff, n);
            transform.position = p + rDiff;

            var rUp = Vector3.Reflect(sourceCam.transform.up, n);
            var rForward = Vector3.Reflect(sourceCam.transform.forward, n);
            var rot = Quaternion.LookRotation(rForward, rUp);
            transform.rotation = rot;
        }

        void drawGizmos(Color c){
            var oldC = Gizmos.color;
            Gizmos.color = c;
            var p = transform.position;
            var x = transform.right;
            var y = transform.up;
            var z = transform.forward;

            Gizmos.DrawLine(p - x, p + x);
            Gizmos.DrawLine(p - y, p + y);
            Gizmos.DrawLine(p - z, p + z);

            Gizmos.color = oldC;
        }

        void OnDrawGizmos(){
            drawGizmos(Color.yellow);
        }

        void LateUpdate(){
            updateCameraPos();
        }

        void OnPreRender(){
            GL.invertCulling = true;
        }

        void OnPostRender(){
            GL.invertCulling = false;
        }

        void OnPreCull(){
            cam.ResetProjectionMatrix();

            var worldClipPos = mirrorPlane.transform.position;
            var worldClipNormal = mirrorPlane.transform.up;

            var worldClipPlane = new Vector4(
                worldClipNormal.x, worldClipNormal.y, worldClipNormal.z,
                -Vector3.Dot(worldClipPos, worldClipNormal)
            );

            var camClipPos = cam.worldToCameraMatrix.MultiplyPoint(worldClipPos);
            var camClipNormal = cam.worldToCameraMatrix.MultiplyVector(worldClipNormal);

            var camClipPlane = new Vector4(
                -camClipNormal.x, camClipNormal.y, camClipNormal.z,
                -Vector3.Dot(camClipPos, camClipNormal)
            );

            var proj = cam.CalculateObliqueMatrix(camClipPlane);

            proj *= Matrix4x4.Scale(new Vector3(-1.0f, 1.0f, 1.0f));
            cam.projectionMatrix = proj;
        }
    }
    Requires a mirror plane and an object to act as a mirror.
     
    Last edited: Dec 16, 2021
  19. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    @bgolus I think last time you asked what the point of doing it this way is. Looking through the documentation, it looks like Unity currently renders the scene in a single pass, meaning both eyes are processed at the same time.

    By that logic, rendering the reflection using this method might allow rendering the mirror at a lower cost - two renders instead of three or four.

    I still haven't converted this to VR (it seems doable, but I'm unsure), as my last attempt to mess with the matrices resulted in a large number of fun things happening.
     
  20. mabulous

    mabulous

    Joined:
    Jan 4, 2013
    Posts:
    198
    I didn't read through the other comments, but since I have implemented high-performance planar reflections on the Quest 2, I can tell you the following:

    1. Since you are targeting a tile-based rendering architecture, you really don't want to render into a render texture, since doing so will completely obliterate your GMEM bandwidth (unless you use the render-pass system of the SRP and manage to get it into a subpass that does not need to be resolved to system memory and loaded back in for compositing - but this didn't work for me).
    What you want to do instead is render everything directly into the output buffer. Use a stencil to mask out the reflective surface, then render the reflected geometry (either mirroring it in the vertex shader, or using preprocessed reflection meshes for static geometry, which allows optimizations like pre-culling triangles that aren't visible in the reflection).

    2. If you want to render your reflection on transparent objects such as windows, do the following:
    - Render the reflective surface, setting the stencil buffer value in the process, and set the destination alpha to the inverse of the reflectivity you want (so 1 means no reflection and 0 means full additive reflection; if you want to use an IOR for Fresnel reflectivity, this is the place to compute it and write its inverse into the destination alpha).
    - If you want to use the depth buffer to solve self-overlap of the reflected geometry, render the reflective surface a second time, this time with ZTest Always but testing against the previously written stencil value, and in the pixel shader set the fragment depth to the far plane (essentially clearing the depth buffer where your reflection will go).
    - Then render your reflection geometry using front-to-back alpha blending (GL_ONE_MINUS_DST_ALPHA, GL_ONE) and a stencil test to mask the reflection.
    - Discard fragments on the wrong side of the reflection plane, either in the fragment shader (discard) or, if you want to be extra fancy, by patching the near plane of your projection matrix (which lets the hardware fully use early depth tests, since you don't need a discard statement in the fragment shader).

    Voilà - perfect and highly performant planar reflections on mobile VR.
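    The stencil masking in the steps above could be sketched with ShaderLab fragments like these (the Ref value 1 is arbitrary and the surrounding passes are omitted; this is an illustration of the described technique, not mabulous's actual code):

```csharp
// On the reflective surface: mark its pixels in the stencil buffer.
// The same pass would also write inverse reflectivity to destination alpha.
Stencil {
    Ref 1
    Comp Always
    Pass Replace
}

// On the reflected geometry: draw only where the surface was marked,
// blending front to back against the destination alpha written above.
Stencil {
    Ref 1
    Comp Equal
}
Blend OneMinusDstAlpha One
```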
     
    Last edited: Dec 21, 2021
    neginfinity and hippocoder like this.
  21. mabulous

    mabulous

    Joined:
    Jan 4, 2013
    Posts:
    198
    The problem is that on tile-based rendering architectures (which most mobile platforms are), unless you can employ Vulkan subpasses or GL_EXT_shader_pixel_local_storage, the renderer is forced to send the render texture from GMEM to system memory and then load it back in. On these platforms the bandwidth between GMEM and system memory is usually the bottleneck, so you really don't want to do that. Stencil techniques and rendering geometry twice are comparatively very cheap on these architectures.
     
    hippocoder likes this.
  22. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    So it looks like my idea to attempt this via camera stacking was on the right track.

    I'm mainly interested in PC first, as I don't have a Quest 2 and the Quest 1 is less performant.

    I'll keep the suggestions you gave in mind.
     
  23. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    I've succeeded.



    I'll look into publishing this on GitHub later, though it is not really amazing or super optimal.

    Long story short - my idea with camera stacking does not work, because SetStereoViewMatrix is non-functional in instanced mode. (see: https://forum.unity.com/threads/unable-to-use-camera-setstereoviewmatrix-in-instanced-mode.1217127/ )

    Rendering can be done using temporary render targets - two extra renders in addition to the first (instanced) one. A 1024x1024 target is imperceptible with Quest 1 + Virtual Desktop, at least for me. It is possible that this will glitch at higher FOVs, like on a Pimax.
     
    MagiJedi and bgolus like this.
  24. MagiJedi

    MagiJedi

    Joined:
    Apr 17, 2020
    Posts:
    32
    Please do share. I'd love to see how you did it (and test with my pimax).
     
  25. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    Give me a day or two. I wanted to make it a package, but currently I'm not enjoying the way they implemented the Samples folder.
    ------
    Thinking about it, just making a project is going to be faster... hmm.
     
  26. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
  27. MagiJedi

    MagiJedi

    Joined:
    Apr 17, 2020
    Posts:
    32
    Thanks man! You're the best! I'll drop in here and update once I've had a chance to mess around with it.
     
  28. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    Have fun.

    On a Pimax, if you manage to make the mirror take up your full field of view, I'd expect it to reflect a square area within a 90-degree FOV, and the rest should stretch like some sort of band/ribbon.
     
  29. EnergeticEnergy

    EnergeticEnergy

    Joined:
    Sep 26, 2020
    Posts:
    3
    I'm a bit late, but is there any chance you could convert this to URP? I'm looking for a solid VR mirror solution for URP, and there isn't really one that I can find. I would do it myself, but my shader-code knowledge still needs a bit of work.
     
  30. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,573
    The mirror is an unlit shader. As far as I know, unlit shaders often work as-is in the Shader Graph pipelines.

    Have you tried to just plug it in?
     
  31. DardilaC18

    DardilaC18

    Joined:
    Dec 2, 2021
    Posts:
    1
    This is awesome. I was trying to use it, but I found a bit of an issue on the Meta Quest 2: the clipping plane seems very close, despite the fact that I'm using the default values from the OculusPlayer. I tried adjusting the values, but no matter what I do, it only works when I look at the mirror straight on, without moving my head at all.