For a post-processing effect I'm writing, I need to construct an MVP matrix myself. Here's the test code; it doesn't work well. Am I misunderstanding something?

Shader: really straightforward, just compute the screen position using a custom MVP. I included a toggle to switch between the builtin UnityObjectToClipPos and the custom MVP computation.

Code (CSharp):

Shader "Custom/TestScreenPos"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        [Toggle]_UseCustomMVP("Use custom mvp", Float) = 0
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // make fog work
            #pragma multi_compile_fog

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
                float4 p : TEXCOORD1;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            float4x4 _RenderMVP;
            float _UseCustomMVP;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                o.p = v.vertex;
                return o;
            }

            float4 frag (v2f i) : SV_Target
            {
                float4 clipPos;
                if (_UseCustomMVP)
                {
                    // float4, not half4: half precision can garble positions
                    clipPos = mul(_RenderMVP, float4(i.p.xyz, 1));
                }
                else
                {
                    clipPos = UnityObjectToClipPos(i.p.xyz);
                }
                float2 kuv = clipPos.xy / clipPos.w * 0.5 + 0.5;
                #if UNITY_UV_STARTS_AT_TOP
                kuv.y = 1 - kuv.y;
                #endif
                float4 col = tex2D(_MainTex, kuv);
                return col;
            }
            ENDCG
        }
    }
}

C# part: just setting up the MVP matrix:

Code (CSharp):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class TestScreenPosRunner : MonoBehaviour
{
    void Start ()
    {
        //material = GetComponent<Renderer>().sharedMaterial;
    }

    void Update ()
    {
        Camera cam = Camera.main;
        //Matrix4x4 VP = cam.projectionMatrix * cam.worldToCameraMatrix;
        Matrix4x4 MVP = cam.projectionMatrix * cam.worldToCameraMatrix * transform.localToWorldMatrix;
        Shader.SetGlobalMatrix("_RenderMVP", MVP);
        //Shader.SetGlobalMatrix("_RenderVP", VP);
    }
}

When I use UnityObjectToClipPos, everything works [Pic below]. However, when I use the custom MVP, it gives complete garbage. What's the magic here?

Update: Solution: see #10 and #13.
Let's go through these:

- cam.projectionMatrix - Read the documentation on that one: https://docs.unity3d.com/ScriptReference/Camera-projectionMatrix.html
- transform.localToWorldMatrix - And the documentation for this one too: https://docs.unity3d.com/ScriptReference/Transform-localToWorldMatrix.html
- cam.worldToCameraMatrix - That one is fine; it's actually the only matrix you're using correctly.

So, in the end, it should be:

Code (CSharp):

Matrix4x4 MVP = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false)
              * cam.worldToCameraMatrix
              * transform.GetComponent<Renderer>().localToWorldMatrix;
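As a sanity check of the composition order, here is a minimal sketch in plain Python of the column-vector math being discussed: MVP = P * V * M, clip = MVP * p, then the same `clipPos.xy / clipPos.w * 0.5 + 0.5` remap the fragment shader performs. The helper names (`matmul`, `transform`) and the projection numbers are made up purely for illustration; this is not Unity code.

```python
def matmul(a, b):
    """Multiply two 4x4 matrices (row-major lists of rows)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, p):
    """Apply a 4x4 matrix to a 4-component point (column-vector convention)."""
    return [sum(m[i][k] * p[k] for k in range(4)) for i in range(4)]

I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

# Toy symmetric OpenGL-style perspective projection: near=1, far=100,
# 90-degree FOV, square aspect. Illustrative numbers only.
n, f = 1.0, 100.0
P = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, -(f + n) / (f - n), -2 * f * n / (f - n)],
     [0, 0, -1, 0]]

# With identity model and view matrices, MVP reduces to P.
MVP = matmul(P, matmul(I, I))

# A point 2 units in front of the camera (camera looks down -z).
clip = transform(MVP, [0.5, 0.5, -2.0, 1.0])

# Perspective divide, then remap NDC [-1, 1] to UV [0, 1] -- the same
# step as "clipPos.xy / clipPos.w * 0.5 + 0.5" in the fragment shader.
uv = (clip[0] / clip[3] * 0.5 + 0.5, clip[1] / clip[3] * 0.5 + 0.5)
print(uv)  # (0.625, 0.625)
```

The point of the sketch is the ordering: with column vectors, the model matrix must sit rightmost so it is applied to the point first.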
I'm guessing you're rendering to a render texture... https://docs.unity3d.com/ScriptReference/GL.GetGPUProjectionMatrix.html
No, it's not, and I tried both options: left, RT=false; right, RT=true. The calculated VP matrix is nowhere near UNITY_MATRIX_VP.
Hmm... no idea then. That code should produce an identical MVP matrix to what Unity uses. Well, technically the function Unity's shaders use does the local to world (unity_ObjectToWorld) and world to clip (UNITY_MATRIX_VP) separately, but for this it shouldn't matter.
Is more than one object in the scene using this shader? If so, the object might be batched, in which case the per-object local-to-world matrix isn't valid. There's no way to detect this from script. You can turn off batching on the shader by adding "DisableBatching"="True" to the SubShader tags. But I'm not sure that's it. There are several threads here and on Answers from people asking how to recreate the MVP or VP matrices, and I can't see anything wrong.
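For reference, the tag goes in the SubShader's Tags block. A sketch based on the shader posted above (the "..." stands for the rest of that shader, unchanged):

```
SubShader
{
    // "DisableBatching"="True" keeps Unity from combining this object's
    // mesh with others, so the per-object local-to-world stays valid.
    Tags { "RenderType"="Opaque" "DisableBatching"="True" }
    ...
}
```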
Perhaps that's not the reason. I uploaded this tiny test scene in the attachment "MVPTest.rar". Take a look?
Solution found. I found this is affected by a tricky Unity flaw or bug. I added a line "public Camera cam;" in TestScreenPosRunner.cs and assigned the main camera to it. Then I deleted that line, so the net effect is no code or inspector changes. After this it mostly works, except that the V coordinate is upside down compared to Unity's builtin variable. Then, flip the renderIntoTexture switch to true:

GL.GetGPUProjectionMatrix(cam.projectionMatrix, true);

Everything works. It appears that without a proper Camera property at least once, Unity somehow strips the camera matrix computation out of user scripts and caches that decision, without ANY WARNING. Current (2018.2) workaround: either add a Camera property, or add-then-delete the Camera property with at least one compilation in between. This behavior is not documented. If my guess is correct, the Unity staff should take the engine's compilation stability seriously.
Remaining issue: neither Camera.main nor Camera.current can provide previews in the Scene view and Game view simultaneously, whereas Unity's builtin function handles that with no problems.
Camera.main is literally just looking for a camera on a game object tagged MainCamera. If you don't have one, or have more than one camera with that tag, it won't work well. And it'll never work for the Scene view.

Camera.current isn't valid during Update, only in very specific functions that happen per camera. Update happens once per frame, and you need something that happens once per camera, like OnPreRender. There are two versions of OnPreRender. One requires your script to be on the same game object as the camera, which works fine, but won't work for the Scene view. The other is a static delegate which Unity will call for all cameras:
https://docs.unity3d.com/ScriptReference/Camera-onPreRender.html

It passes the camera being rendered to the function directly, so there's no need to use Camera.main or Camera.current.
Code (CSharp):

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[ExecuteInEditMode]
public class TestScreenPosRunner : MonoBehaviour
{
    void OnEnable ()
    {
        Camera.onPreRender += UpdateMVP;
    }

    void OnDisable ()
    {
        Camera.onPreRender -= UpdateMVP;
    }

    void UpdateMVP (Camera cam)
    {
        Matrix4x4 M = transform.GetComponent<Renderer>().localToWorldMatrix;
        Matrix4x4 V = cam.worldToCameraMatrix;
        Matrix4x4 P = GL.GetGPUProjectionMatrix(cam.projectionMatrix, true);
        Matrix4x4 MVP = P * V * M;
        Shader.SetGlobalMatrix("_RenderMVP", MVP);
        Shader.SetGlobalMatrix("_RenderM", M);
        Shader.SetGlobalMatrix("_RenderV", V);
        Shader.SetGlobalMatrix("_RenderP", P);
    }
}

This code works in the editor for both the Scene and Game views. However, if you want to support multiple objects with something like this, I'd recommend passing in only the VP matrix and using the builtin unity_ObjectToWorld matrix to transform the mesh into world space, or applying the matrix to only the specific mesh the script is attached to (using a MaterialPropertyBlock). Otherwise, if multiple objects use this script, the MVP matrix they'll all end up with is simply the last one that got set before rendering actually begins.
I just had to hit play once. Your script as you had it won't run unless you're in play mode; then it worked fine. For the above example script, I added [ExecuteInEditMode] so that it runs when not in play mode. Yeah, this part is curious. It seems like, at least in the editor, Unity is always rendering to a render texture? I'm too lazy to build a standalone to see if it's still flipped or not.
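A hedged illustration of why the renderIntoTexture flag matters here: on Direct3D-like APIs, render textures use a top-left UV origin, and GL.GetGPUProjectionMatrix with the flag set to true compensates by (among other adjustments) flipping the sign of the projection's y row. The toy sketch below models only that one effect in plain Python; the z-range remapping D3D also needs is deliberately left out, and `flip_y` is a made-up helper name.

```python
def flip_y(proj):
    """Return a copy of a 4x4 projection with its y row (row 1) negated."""
    return [[-v for v in row] if i == 1 else list(row)
            for i, row in enumerate(proj)]

# Toy projection (illustrative numbers only).
P = [[1, 0, 0,  0],
     [0, 1, 0,  0],
     [0, 0, -1, -2],
     [0, 0, -1,  0]]

P_rt = flip_y(P)

# A point that lands at NDC y = +0.5 with P lands at y = -0.5 with P_rt,
# so after the *0.5+0.5 remap, uv.y becomes 0.25 instead of 0.75 --
# i.e. the image is flipped vertically, matching the upside-down V seen
# in the thread when the wrong flag value is used.
```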
This is brilliant; it solves the preview issue. If everyone's mileage can differ, that's a big problem. For me, I had to recompile with the Camera property; otherwise, it looks like the pics I uploaded with the test project. Actually, since these are screen coordinates to be used for texture-space sampling, the flag has to always be true. I have run the build on all APIs, and that confirms it.