First Person Rendering in Unity 5 (writeup)

Discussion in 'General Graphics' started by PhobicGunner, May 2, 2016.

  1. PhobicGunner


    Jun 28, 2011
    (This is copied from this post on my blog)

    If you've worked with Unity long enough, you'll know quite well that, almost since the dawn of Unity itself, a problem has plagued Unity users to no end - how to properly render meshes in first person view.

Up till now the solution was to render with a separate camera. This worked great in old versions of Unity, where light and shadow were barely a concern. Then dynamic shadowing was introduced, and this method broke down - weapons in first person didn't receive shadows from scene geometry. It got a bit better with Unity 3.5, since you could use the new Light Probe feature to at least make your weapon vaguely look like it was receiving shadow from the scene. Then along came Unity 5 with Enlighten, and the light probe method is now broken, since light probes don't bake shadow information for realtime lights, and since dual lightmaps are gone, realtime lights are necessary for dynamic shadows.

    So, back to the drawing board. How do we render a first person mesh in Unity 5? First, let's evaluate the options before I get to the solution I've come up with, so bear with me ;)

    Keep in mind I'm assuming this is in the context of a game targeted at decent gaming PC rigs (not mobile). I'm therefore assuming deferred rendering is used.

    Overlay Camera

    I've already described why this one doesn't work, but we'll cover it for completeness. So, you put your first person renderers on their own layer, make sure the main camera doesn't render that layer, and have a separate weapon camera set to clear Depth Only which renders just that layer.
    And it doesn't work. Even though the second camera is set to Depth Only, it still appears to be clearing a solid color (grey in my case) and overwriting the first camera.
    No big deal, we'll just set it to Forward and - well, hold on, it still doesn't work? Right, Unity appears to have broken something here in Unity 5. This would have worked in Unity 4, though. So right there, this time-honored method is totally off the table now. What else do we have?

    Really Tiny Gun

    Ain't that a cute little gun?

    Turns out this is actually sort of the method Unreal Tournament 4 is using. You just scale the gun down *really, really small* so that it doesn't clip the environment. Oh, OK, so if that's what UT4 is doing then our problem is solved, right?
    Sadly, no. I would have liked the solution to really be that simple, but this introduces problems of its own.

    For one, you don't get a custom FOV. Sounds like no big deal but actually you usually want to render your gun with a lower FOV than your main camera. Especially for a shooter, where players might be cranking their field of view all the way up to 90, which can introduce major distortion for your weapon models.

    But then, there's another even bigger problem - that of floating point precision. Just 500 meters away from origin the gun had a vicious wobble that reminded me of a PS1 game. That alone makes this technique completely unusable for me, as 500m should be *well* within the limits of floating point precision. I'm not sure how Unreal is handling it, potentially via smarter concatenation of matrices. In any case, looks like this isn't going to work.
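To put some numbers on that (my illustration, not from the original post): the spacing between representable float32 values grows with distance from origin, so fine vertex detail on a drastically scaled-down gun starts collapsing once you move far from origin.

```csharp
// Illustration (not from the original post): float32 spacing at 500 units
// from origin is about 3e-5 units, so detail below roughly half that
// simply rounds away.
float basePos = 500f;
float tinyOffset = 0.00001f; // 0.01mm of detail on a heavily scaled-down gun

// The offset is below half the float spacing at 500, so it is lost entirely:
bool lost = (basePos + tinyOffset) == basePos; // true
```

This is why the wobble appears even though 500m "feels" like it should be safe: it's the sub-millimeter vertex detail of a tiny gun, not the gun's overall position, that falls below the representable precision.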

    Is there any other way of rendering the gun? Turns out the answer is yes.

    Custom Shader & Custom Projection Matrix

    I can already feel your interest waning. "Wait, hold on, custom shader? You mean those third party shaders I'm using aren't going to work anymore? And what about surface shaders?"

    OK, wait, bear with me! I wanted to make this as easy as possible on myself, so actually it's practically a one-line fix for a shader!

    So, first, projection. The idea behind this is that I use a custom projection matrix which is borrowed from a separate disabled camera via camera.projectionMatrix, set up as a global shader variable. Additionally, I use that matrix in a custom version of UnityCG.cginc, which I copied from the unity editor CGInclude folder. Basically, there's a UnityObjectToClipPos function in there. It's responsible for applying the MVP matrix (model view projection) to the vertices, which projects them onto the screen. Surface shaders internally use this function, which is a good thing for us.
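For reference, setting up that disabled source camera might look something like this (my sketch, not code from the post; the field names are illustrative):

```csharp
using UnityEngine;

// Sketch of the disabled "projection donor" camera described above.
// It never renders anything; it only exists to supply projectionMatrix
// with a narrower weapon FOV.
public class WeaponCameraSetup : MonoBehaviour
{
    public Camera MainCam;
    public float WeaponFov = 50f; // typically lower than the main camera's FOV

    void Awake()
    {
        Camera weaponCam = gameObject.AddComponent<Camera>();
        weaponCam.CopyFrom(MainCam);       // match clip planes, aspect, etc.
        weaponCam.fieldOfView = WeaponFov;
        weaponCam.enabled = false;         // disabled: only its matrix is used
    }
}
```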

OK, first things first - I copied UnityCG.cginc and dumped it into my project directory. *Not in the Assets folder*, mind you. Just outside it, in the root of your project (this is important, because shader includes are relative to the project directory, not the Assets directory). Now, whenever a shader is compiled, it will compile in this new custom UnityCG.cginc file, rather than Unity's built in one. So far so good.

Next, I modified the UnityObjectToClipPos function. Remember, this code will be copied into a shader when it's compiled, so we can actually check for defined symbols. So I made two versions of the function - if a shader defines the FIRST_PERSON symbol, it compiles a version of that function which constructs an MVP matrix from the custom projection matrix (and also multiplies the z position by 0.5, which will offset our weapon's depth value and keep it from intersecting scene geometry):

Code (csharp):

    float4x4 _CustomProjMatrix;

    #if defined(FIRST_PERSON)
        // Transforms position from object to homogenous space
        inline float4 UnityObjectToClipPos( in float3 pos )
        {
            float4x4 mvp = mul( mul( _CustomProjMatrix, UNITY_MATRIX_V ), unity_ObjectToWorld );
            float4 ret = mul( mvp, float4(pos, 1.0) );

            // hack to fix vertically flipped vertices post-projection
            ret.y *= -1;

            // hack to offset depth value to avoid scene geometry intersection
            // todo: check if it works in OpenGL?
            ret.z *= 0.5;

            return ret;
        }
    #else
        // Transforms position from object to homogenous space
        inline float4 UnityObjectToClipPos( in float3 pos )
        {
        #if !defined(UNITY_USE_PREMULTIPLIED_MATRICES)
            // More efficient than computing M*VP matrix product
            return mul(UNITY_MATRIX_VP, mul(unity_ObjectToWorld, float4(pos, 1.0)));
        #else
            return mul(UNITY_MATRIX_MVP, float4(pos, 1.0));
        #endif
        }
    #endif
    Then, I'll modify an example surface shader to add support for the new function. Turns out this is super easy! All you need is this:

Code (csharp):

    #pragma multi_compile __ FIRST_PERSON
    Super easy.
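For context, here's where that line sits in a minimal surface shader (my example, not the author's actual shader - the shader name and property are placeholders):

```shaderlab
Shader "Custom/FirstPersonDiffuse"
{
    Properties
    {
        _MainTex ("Albedo", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }

        CGPROGRAM
        // The only first-person-specific addition:
        #pragma multi_compile __ FIRST_PERSON
        #pragma surface surf Standard fullforwardshadows

        sampler2D _MainTex;
        struct Input { float2 uv_MainTex; };

        void surf(Input IN, inout SurfaceOutputStandard o)
        {
            // Standard albedo sampling; nothing else changes in the shader.
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```

Everything else comes from the surface shader compiler, which internally calls the overridden UnityObjectToClipPos.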

    Next, we need a script which actually prepares this custom matrix. And here it is:

Code (csharp):

    using UnityEngine;
    using System.Collections;

    public class SetWeaponProjectionMatrix : MonoBehaviour
    {
        public Camera SourceCam;

        void LateUpdate()
        {
            Shader.SetGlobalMatrix("_CustomProjMatrix", SourceCam.projectionMatrix);
        }
    }
    And, finally, a script which enables the FIRST_PERSON feature on a renderer's material:

Code (csharp):

    using UnityEngine;
    using System.Collections;

    public class EnableFirstPersonShaderVariant : MonoBehaviour
    {
        void Awake()
        {
            GetComponent<Renderer>().material.EnableKeyword("FIRST_PERSON");
        }
    }
    And that's it. Surprisingly, it was not that hard at all. If you want to see the results, here's a video of this technique in action:

    Further Notes: 5.4 Beta 16 & Motion Vectors
    Now, one final note - this doesn't quite work for B16's new Motion Vector Buffer feature without a few extra steps. Unfortunately, here's where things start to get a bit more manual.
    For one, you need to change how you set up the projection matrices:

Code (csharp):

    using UnityEngine;
    using System.Collections;

    public class SetWeaponProjectionMatrix : MonoBehaviour
    {
        public Camera SourceCam;

        private Matrix4x4 prevProjMatrix;
        private Matrix4x4 lastView;

        private Camera cam;

        void Awake()
        {
            prevProjMatrix = SourceCam.projectionMatrix;
            lastView = SourceCam.transform.worldToLocalMatrix;
        }

        void OnRenderImage(RenderTexture src, RenderTexture dest)
        {
            prevProjMatrix = SourceCam.projectionMatrix;
            lastView = SourceCam.worldToCameraMatrix;

            Shader.SetGlobalMatrix("_PrevCustomVP", prevProjMatrix * lastView);
            Shader.SetGlobalMatrix("_CustomProjMatrix", SourceCam.projectionMatrix);

            Graphics.Blit(src, dest);
        }
    }
    Why OnRenderImage and not, say, Camera.onPreRender? I tried the latter, and it didn't work. I still don't really know why. Meanwhile, OnRenderImage does work even though it introduces that extra draw call from the blit (which kinda bugs me, but until I have a better solution it'll have to do).

    Next, you need to modify your surface shader to add a custom motion vector pass. Here's what mine looks like:

Code (csharp):

    Pass
    {
        CGINCLUDE

        #include "UnityCG.cginc"

        struct MotionVertexInput
        {
            float4 vertex : POSITION;
            float3 oldPos : NORMAL;
        };

        struct v2f_motion_vectors
        {
            float4 transferPos : TEXCOORD0;
            float4 transferPosOld : TEXCOORD1;
            float4 pos : SV_POSITION;
        };

        float4x4 _PreviousVP;
        float4x4 _PreviousM;
        bool _HasLastPositionData;
        float _MotionVectorDepthBias;

        float4x4 _PrevCustomVP;

        v2f_motion_vectors vert_motion_vectors(MotionVertexInput v)
        {
            v2f_motion_vectors o;

            o.pos = UnityObjectToClipPos(v.vertex);

            // this works around an issue with dynamic batching
            // potentially remove in 5.4 when we use instancing
    #if defined(UNITY_REVERSED_Z)
            o.pos.z -= _MotionVectorDepthBias * o.pos.w;
    #else
            o.pos.z += _MotionVectorDepthBias * o.pos.w;
    #endif

            o.transferPos = o.pos;

    // Here, we're manually applying the custom projection matrix.
    // Note that we're not actually using the last frame's camera view*proj matrix;
    // for first person weapons it's unnecessary (you don't want blur from camera motion on them anyway)
    #if defined(FIRST_PERSON)
            o.transferPosOld = mul(_PrevCustomVP, mul(_PreviousM, _HasLastPositionData ? float4(v.oldPos, 1) : v.vertex));
            o.transferPosOld.y *= -1;
    #else
            o.transferPosOld = mul(_PreviousVP, mul(_PreviousM, _HasLastPositionData ? float4(v.oldPos, 1) : v.vertex));
    #endif

            return o;
        }

        float4 frag_motion_vectors(v2f_motion_vectors i) : SV_Target
        {
            float3 hPos = ( / i.transferPos.w);
            float3 hPosOld = ( / i.transferPosOld.w);

            // V is the viewport position at this pixel in the range 0 to 1.
            float2 vPos = (hPos.xy + 1.0f) / 2.0f;
            float2 vPosOld = (hPosOld.xy + 1.0f) / 2.0f;

    #if UNITY_UV_STARTS_AT_TOP
            vPos.y = 1.0 - vPos.y;
            vPosOld.y = 1.0 - vPosOld.y;
    #endif

            half2 uvDiff = vPos - vPosOld;
            return half4(uvDiff, 0, 1);
        }
        ENDCG

        Tags {
            "LightMode" = "MotionVectors"
        }

        Name "MOTIONVECTORS"

        ZTest LEqual
        Cull Off
        ZWrite Off

        CGPROGRAM
        #pragma multi_compile __ FIRST_PERSON
        #pragma vertex vert_motion_vectors
        #pragma fragment frag_motion_vectors
        ENDCG
    }
    This just gets pasted in right after the ENDCG of the surface shader.
    So, unfortunately it does make supporting custom shaders more of a pain, but it's still not that bad. And the results seem pretty good!

    Thanks to Fuzzy_Slippers for helping me catch this one - replaced "shader_feature" with "multi_compile" so that Unity doesn't "helpfully" strip out the first person variant in a build.
    Last edited: Jun 8, 2016
  2. Tiny-Man


    Mar 22, 2014
    Great share! Might show my team this as we're running into this problem too.

    Also, I noticed you're using my ak12 asset, haha! Pretty cool to see it around in videos and games.
  3. PhobicGunner


    Jun 28, 2011
    Actually, I'm not using that one. This is the one I'm using:!/content/46033
    I could go ahead and buy yours to give a try as well, if you'd like :)
  4. Tiny-Man


    Mar 22, 2014
    That is my one lol
  5. PhobicGunner


    Jun 28, 2011
    Oh, really? The different publisher name threw me off XD
  6. Tiny-Man


    Mar 22, 2014
    Yeah, rebranding lol, but if you check my weapon attachments link in my signature you can see it links to the same publisher.
  7. Silly_Rollo


    Dec 21, 2012
    I downloaded the 5.4 beta and couldn't get this method to work at first, but I believe it was because all of my existing shaders called mul(UNITY_MATRIX_MVP, v.vertex) instead of UnityObjectToClipPos. Updating them seems to be working fine. What are motion vectors used for? Motion blur? It wouldn't seem worth the extra drawcalls/fiddling just to get motion blur on your weapons.
    Last edited: May 26, 2016
  8. PhobicGunner


    Jun 28, 2011
    Well, motion vectors are used for a number of things. Temporal anti-aliasing relies on them, for instance. But yes, they're useful for motion blur. And if you're using motion vectors at all, you DO need to implement the shader on the weapon!
    In the case of motion blur: if you don't implement the custom pass, the weapon will just be completely blurred out, because the built-in motion vector shader also needs to be updated to use the correct vertex shader. And if you don't render the weapon into the motion buffer at all, it will be blurred by the background (imagine facing a wall and then moving right - the weapon will actually be blurred horizontally, because the motion blur effect is using the wall's motion vectors).
    In the case of everything else, you're going to get wrong results for temporal effects.

    EDIT: Oh, and it is important to note that there's no harm in including that pass. If you don't have the camera set to render motion vectors, that pass will just not be used (so it won't add draw calls).
  9. Silly_Rollo


    Dec 21, 2012
    Ah well I guess it is worth adding the extra elements to my shaders then. Most everything in my project uses custom shaders but I do use Standard in a couple of places but never anything rendered on the view model layer. Would I need to implement the extra pass on the Standard shader for those or do they not need it since they are always rendered normally?

    Was this possible before 5.4 or did this require some of the new 5.4 features? This is really cool stuff! Way more sensible than the multi camera approach. I was getting image effects by rendering the scene twice and then combining into a third camera and I'm sure that was pretty heavy performance wise. If you end up making that first person controller I'll definitely purchase it to support your efforts.
  10. PhobicGunner


    Jun 28, 2011
    AFAIK you only need to implement the pass on anything with a custom vertex shader. Surface shaders already have their own motion vector pass.

    I don't know when UnityObjectToClipPos was introduced. I just sort of found out that everything was using that function, and realized that I could override it with a custom UnityCG replacement. Maybe it was introduced in 5.4?

    EDIT: Oh, and I'll give that first person controller thing more thought. I may need to migrate it to a new project since the one it's in has become a cluttered sandbox of sorts for testing graphics effects and such XD
  11. Silly_Rollo


    Dec 21, 2012
    Did you ever try this in a standalone build? So far I can't get it to work. I thought it might be because of the custom UnityCG.cginc so I decided to create my own cginc with a custom ClipPos function and #include that in my shaders instead which works fine in the editor but isn't working in builds. No errors just doesn't render properly.
  12. PhobicGunner


    Jun 28, 2011
    ... no.... >.>

    Not until you mentioned it. And it doesn't work.

    Well that's embarrassing. However, it looks like it might be due to Unity stripping out "unused" shader_feature variants. Give me a bit to test and see if this is truly the case and whether there's a solution...

    EDIT: YES! That was the case. So it turns out I had a massive misunderstanding about how shader_feature vs multi_compile worked. Basically what's happening is it's stripping out the first person variant because no material in the project has that keyword defined (because the keyword is set from script at runtime instead of from the material in the editor).

    So if you just replace all of the shader_feature lines with this instead:

    Code (csharp):

        #pragma multi_compile __ FIRST_PERSON
    It will not strip the variants in a build and will work.
    Last edited: Jun 8, 2016
  13. Silly_Rollo


    Dec 21, 2012
    Thanks that worked perfectly. I never would have guessed that. Always something new to learn with shaders I guess.
  14. PhobicGunner


    Jun 28, 2011
    I'm glad I could find a solution XD
    Also may try and find time to put together a demo project of this in case somebody just wants to download a project and see how it works instead of going through my crappy tutorial ;)
  15. SomeGuy22


    Jun 3, 2011
    That's a great solution for rendering the gun! But how would it work practically?

    Typically in my shooters, I'll use an empty GameObject for instance creation--it's just put at the end of the gun's barrel. When I spawn the shooting effects, it'll just emit from that point. Sure, the effects can just use the same rendering technique and you can see them through the walls. But what if you had to shoot a bullet from that point? (Or raycast for that matter) The projectile will end up already inside the wall because its physical position will be beyond the player's collider.

    You might say, "just move the bullet spawner back into the gun," and sure, this would work. But (and this is just me thinking off the top of my head) what if your bullet gained strength over time? The time the bullet has been active will be slightly more than what it appears to be for the player. If you ever needed that information, the "time since spawn" value would be off.

    The above isn't much of a problem on its own; you could compensate with some simple offsets. However, this is where it starts to fall apart: what if your Doom-esque fireball is larger than your gun? You can't spawn it inside the collider, because it will show through the weapon. And you can't spawn it outside, because it could be through a wall. How would you work around this?
  16. PhobicGunner


    Jun 28, 2011
    Actually, it sounds weird but most games basically spawn projectiles at the player's eyeballs, directly from the camera's center. It sounds like it shouldn't work, but seriously play Left 4 Dead for example and throw a pipe bomb or bile jar. It spawns from the screen's center. Doing it that way IMHO is a significantly better option than spawning from the gun's actual barrel - otherwise, the point the crosshair is showing and the point the gun will actually hit could be a bit off, especially when facing a wall. Most players at this point expect their shots to hit at the exact center of screen.
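    In code, the center-screen approach looks something like this (a rough sketch on my part; MuzzlePoint and Range are illustrative names, not from anyone's actual project):

```csharp
using UnityEngine;

// Rough sketch of center-screen hit detection (illustrative names).
public class CenterScreenShooter : MonoBehaviour
{
    public Camera MainCam;        // the scene camera, not the weapon camera
    public Transform MuzzlePoint; // barrel tip, used for visual effects only
    public float Range = 1000f;

    void Fire()
    {
        // Hits always originate at the camera, through the screen center,
        // so the crosshair and the impact point always agree.
        Ray ray = MainCam.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, Range))
        {
            // Tracers/muzzle flash still come from the barrel;
            // only the hit test uses the camera ray.
            Debug.DrawLine(MuzzlePoint.position, hit.point, Color.yellow, 0.1f);
        }
    }
}
```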
  17. Silly_Rollo


    Dec 21, 2012
    Yeah I split my weapons between visual fx and physics. Physics/raycasts/etc all act on crosshair center and ignore whatever the weapon is doing. If I'm going to do muzzle flashes or whatever I spawn them from the barrel and plot a motion that integrates it with the physics action.

    In motion, in real time, it's really difficult to notice this kind of thing if you aren't the dev staring at it until your eyes bleed. You'd be surprised how many proper commercial games just half-ass this and you've never noticed. This gets a little trickier in multiplayer, I presume, but my game is single player only, which you can play here if you're curious how it works out in practice with PhobicGunner's render technique implemented. Though, being fantasy, it has few firearms.

    I think what is most important is getting player intention. If you can anticipate what the player is intending to do and make that happen that always trumps visual or physics fidelity.
  18. xAvatarchikx


    Aug 17, 2012
    Hello! Is there an example? Does it work in version 5.5?
  19. Reanimate_L


    Oct 10, 2009
    @PhobicGunner does this technique still work?
    Edit: it does, surprisingly. Needs some tweaks in 2017.1, especially for the motion vector part.
    Last edited: Sep 8, 2017
  20. unity_mad_coder


    Oct 22, 2016
    @rea did you fix the motion vector part?
    Last edited: Sep 16, 2017
  21. Reanimate_L


    Oct 10, 2009
    Uhh yeah, I just copied the built-in shader's motion pass and added the FIRST_PERSON variant to the motion pass.
  22. Reanimate_L


    Oct 10, 2009
    @jacks and for everyone who's wondering about the motion vector pass in the latest Unity, this is my motion vector pass:
    Code (csharp):

        CGINCLUDE

        #include "UnityCG.cginc"

        // Object rendering things

        #if defined(USING_STEREO_MATRICES)
        float4x4 _StereoNonJitteredVP[2];
        float4x4 _StereoPreviousVP[2];
        #else
        float4x4 _NonJitteredVP;
        float4x4 _PreviousVP;
        #endif

        float4x4 _PreviousM;
        bool _HasLastPositionData;
        bool _ForceNoMotion;
        float _MotionVectorDepthBias;

        struct MotionVectorData
        {
            float4 transferPos : TEXCOORD0;
            float4 transferPosOld : TEXCOORD1;
            float4 pos : SV_POSITION;
            UNITY_VERTEX_OUTPUT_STEREO
        };

        struct MotionVertexInput
        {
            float4 vertex : POSITION;
            float3 oldPos : NORMAL;
            UNITY_VERTEX_INPUT_INSTANCE_ID
        };

        MotionVectorData VertMotionVectors(MotionVertexInput v)
        {
            MotionVectorData o;
            UNITY_SETUP_INSTANCE_ID(v);
            UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
            o.pos = UnityObjectToClipPos(v.vertex);

            // this works around an issue with dynamic batching
            // potentially remove in 5.4 when we use instancing
        #if defined(UNITY_REVERSED_Z)
            o.pos.z -= _MotionVectorDepthBias * o.pos.w;
        #else
            o.pos.z += _MotionVectorDepthBias * o.pos.w;
        #endif

        #if defined(USING_STEREO_MATRICES)
            o.transferPos = mul(_StereoNonJitteredVP[unity_StereoEyeIndex], mul(unity_ObjectToWorld, v.vertex));
            o.transferPosOld = mul(_StereoPreviousVP[unity_StereoEyeIndex], mul(_PreviousM, _HasLastPositionData ? float4(v.oldPos, 1) : v.vertex));
        #else
            o.transferPos = mul(_NonJitteredVP, mul(unity_ObjectToWorld, v.vertex));
            o.transferPosOld = mul(_PreviousVP, mul(_PreviousM, _HasLastPositionData ? float4(v.oldPos, 1) : v.vertex));
        #endif
            return o;
        }

        half4 FragMotionVectors(MotionVectorData i) : SV_Target
        {
            float3 hPos = ( / i.transferPos.w);
            float3 hPosOld = ( / i.transferPosOld.w);

            // V is the viewport position at this pixel in the range 0 to 1.
            float2 vPos = (hPos.xy + 1.0f) / 2.0f;
            float2 vPosOld = (hPosOld.xy + 1.0f) / 2.0f;

        #if UNITY_UV_STARTS_AT_TOP
            vPos.y = 1.0 - vPos.y;
            vPosOld.y = 1.0 - vPosOld.y;
        #endif

            half2 uvDiff = vPos - vPosOld;
            return lerp(half4(uvDiff, 0, 1), 0, (half)_ForceNoMotion);
        }

        // Camera rendering things
        UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

        struct CamMotionVectors
        {
            float4 pos : SV_POSITION;
            float2 uv : TEXCOORD0;
            float3 ray : TEXCOORD1;
            UNITY_VERTEX_OUTPUT_STEREO
        };

        struct CamMotionVectorsInput
        {
            float4 vertex : POSITION;
            float3 normal : NORMAL;
            UNITY_VERTEX_INPUT_INSTANCE_ID
        };

        CamMotionVectors VertMotionVectorsCamera(CamMotionVectorsInput v)
        {
            CamMotionVectors o;
            UNITY_SETUP_INSTANCE_ID(v);
            UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
            o.pos = UnityObjectToClipPos(v.vertex);

        #ifdef UNITY_HALF_TEXEL_OFFSET
            o.pos.xy += (_ScreenParams.zw - 1.0) * float2(-1, 1) * o.pos.w;
        #endif
            o.uv = ComputeScreenPos(o.pos);
            // we know we are rendering a quad,
            // and the normal passed from C++ is the raw ray.
            o.ray = v.normal;
            return o;
        }

        inline half2 CalculateMotion(float rawDepth, float2 inUV, float3 inRay)
        {
            float depth = Linear01Depth(rawDepth);
            float3 ray = inRay * (_ProjectionParams.z / inRay.z);
            float3 vPos = ray * depth;
            float4 worldPos = mul(unity_CameraToWorld, float4(vPos, 1.0));

        #if defined(USING_STEREO_MATRICES)
            float4 prevClipPos = mul(_StereoPreviousVP[unity_StereoEyeIndex], worldPos);
            float4 curClipPos = mul(_StereoNonJitteredVP[unity_StereoEyeIndex], worldPos);
        #else
            float4 prevClipPos = mul(_PreviousVP, worldPos);
            float4 curClipPos = mul(_NonJitteredVP, worldPos);
        #endif
            float2 prevHPos = prevClipPos.xy / prevClipPos.w;
            float2 curHPos = curClipPos.xy / curClipPos.w;

            // V is the viewport position at this pixel in the range 0 to 1.
            float2 vPosPrev = (prevHPos.xy + 1.0f) / 2.0f;
            float2 vPosCur = (curHPos.xy + 1.0f) / 2.0f;
        #if UNITY_UV_STARTS_AT_TOP
            vPosPrev.y = 1.0 - vPosPrev.y;
            vPosCur.y = 1.0 - vPosCur.y;
        #endif
            return vPosCur - vPosPrev;
        }

        half4 FragMotionVectorsCamera(CamMotionVectors i) : SV_Target
        {
            float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
            return half4(CalculateMotion(depth, i.uv, i.ray), 0, 1);
        }

        half4 FragMotionVectorsCameraWithDepth(CamMotionVectors i, out float outDepth : SV_Depth) : SV_Target
        {
            float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
            outDepth = depth;
            return half4(CalculateMotion(depth, i.uv, i.ray), 0, 1);
        }
        ENDCG

        // 0 - Motion vectors
        Pass
        {
            Tags { "LightMode" = "MotionVectors" }

            ZTest LEqual
            Cull Back
            ZWrite Off

            CGPROGRAM
            #pragma vertex VertMotionVectors
            #pragma fragment FragMotionVectors
            #pragma multi_compile __ FIRST_PERSON
            ENDCG
        }

        // 1 - Camera motion vectors
        Pass
        {
            ZTest Always
            Cull Off
            ZWrite Off

            CGPROGRAM
            #pragma vertex VertMotionVectorsCamera
            #pragma fragment FragMotionVectorsCamera
            #pragma multi_compile __ FIRST_PERSON
            ENDCG
        }

        // 2 - Camera motion vectors (with depth (msaa / no render texture))
        Pass
        {
            ZTest Always
            Cull Off
            ZWrite On

            CGPROGRAM
            #pragma vertex VertMotionVectorsCamera
            #pragma fragment FragMotionVectorsCameraWithDepth
            #pragma multi_compile __ FIRST_PERSON
            ENDCG
        }
  23. torvald-mgt


    Jun 14, 2015
    Has anyone tried a transparent version? In that case, I get a bug where the model is painted over with black.
    What the author shared in this topic is very cool and useful.
  24. emilylena


    Jan 18, 2018
    This doesn't work in the 2018 version of Unity because there's no ObjectToClipPos definition in UnityCG.cginc
  25. richardkettlewell


    Unity Technologies

    Sep 9, 2015