Hello, I'm working on an unwrap surface shader but I'm confused about how the vertex shader modifier works. I know how to create a basic unwrap shader which displays the full-screen UV map of an object:

Spoiler: Basic unwrap shader
Code (CSharp):
Shader "Custom/Unwrap" {
    SubShader {
        Pass {
            Lighting Off
            Cull Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata {
                float2 uv : TEXCOORD0;
            };

            struct v2f {
                float4 pos : SV_POSITION;
            };

            v2f vert (appdata v) {
                v2f o;
                // place each vertex at its UV coordinate in clip space
                o.pos = float4(v.uv.x * 2.0 - 1.0, v.uv.y * 2.0 - 1.0, 1.0, 1.0);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                return float4(0.4, 0.5, 1, 1);
            }
            ENDCG
        }
    }
}

And I'm trying to use this technique with surface shaders:

Spoiler: Surface unwrap shader
Code (CSharp):
Shader "Custom/UnwrapSurf" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _BumpMap ("Normals", 2D) = "bump" {}
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        CGPROGRAM
        #pragma surface surf Standard fullforwardshadows vertex:vert
        #pragma target 3.0

        sampler2D _MainTex;
        sampler2D _BumpMap;

        struct Input {
            float2 uv_MainTex;
            float2 uv_BumpMap;
        };

        half _Glossiness;
        half _Metallic;
        fixed4 _Color;

        void vert (inout appdata_full v) {
            v.vertex.xyz = float3(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 1.0);
        }

        void surf (Input IN, inout SurfaceOutputStandard o) {
            fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;
            o.Metallic = _Metallic;
            o.Smoothness = _Glossiness;
            o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv_BumpMap));
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}

As you can see, the vertex function doesn't work as expected. In fact, it seems the modified positions are still interpreted in model space, not as UV-space screen positions.
There must be something wrong in the vert function:

Spoiler: Vert function
Code (CSharp):
void vert (inout appdata_full v) {
    v.vertex.xyz = float3(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 1.0);
}

Does anyone know how to solve this?
In a vertex/fragment shader, the vertex shader's output is the clip space position, which is what the GPU uses to determine the screen space position. Normally you'd take the object space vertex positions and convert them into clip space using the UnityObjectToClipPos() function, whereas the trick here is you're converting the UVs directly into a clip space position. In a Surface Shader, the vertex function just lets you modify the object space vertex data. The Surface Shader's next step after the custom vertex function is to immediately apply UnityObjectToClipPos() to the v.vertex value. To do what you're looking to do with a Surface Shader, you'd have to calculate, in object space, a position that transforms into your desired clip space position. Unfortunately that's easier said than done, as Unity does not provide the necessary matrices to transform from clip space back to object space. There's no easy solution to this apart from calculating the inverse projection matrix in the shader manually (which there are no built-in functions for), or doing it in C# and passing the matrix to the shader.
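To spell that out as a sketch (the _InverseViewProjMatrix name is a placeholder for a matrix you'd have to supply from script):

Code (CSharp):
// What the generated Surface Shader code does after vert():
//   clipPos = UnityObjectToClipPos(v.vertex)
//           = mul(UNITY_MATRIX_VP, mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0)))
//
// So for a chosen clip space position to come out the other end,
// vert() would need to pre-apply the inverse of those transforms:
//   float4 targetClipPos = float4(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 1.0, 1.0);
//   v.vertex = mul(unity_WorldToObject, mul(_InverseViewProjMatrix, targetClipPos));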
Hello, thanks for your answer. I'm interested in the "inverse projection matrix" solution; it's fine for me to compute it in a C# script and pass it to the shader. If I understand what you're saying, in my custom vertex function I have to "prepare" the v.vertex value so that when the shader applies UnityObjectToClipPos() to it, the resulting value is the clip space position I want. Am I correct? I'm still not sure about a couple of things: 1) Is it Camera.projectionMatrix that should be inverted, or is it something else? 2) How do I use the inverse projection matrix to calculate the clip space position in object space? Can I simply do something like this?

Code (CSharp):
v.vertex = mul(INV_PROJ_MATRIX, v.vertex);

Thanks, Francesco
Yes. You have to apply the inverse matrix operations to basically "undo" what UnityObjectToClipPos() is going to do to it, so the resulting position is the one you already calculated. Really you want to calculate the inverse of the UNITY_MATRIX_VP (view projection) matrix so you can apply exactly the inverse operations, since that function does this:

Code (csharp):
inline float4 UnityObjectToClipPos(in float3 pos)
{
    // More efficient than computing M*VP matrix product
    return mul(UNITY_MATRIX_VP, mul(unity_ObjectToWorld, float4(pos, 1.0)));
}

Note that the projection matrix actually used on the GPU is not the same as Camera.projectionMatrix, which is why GL.GetGPUProjectionMatrix() is needed. You also need to use the GPU world to camera matrix, which you can get from Camera.worldToCameraMatrix. So in C# you need to do this:

Code (csharp):
// get GPU projection matrix
Matrix4x4 projMatrix = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);
// get GPU view projection matrix
Matrix4x4 viewProjMatrix = projMatrix * cam.worldToCameraMatrix;
// get inverse VP matrix
Matrix4x4 inverseViewProjMatrix = viewProjMatrix.inverse;

You can then set that matrix on the material directly, or as a shader global. In the Surface Shader's vertex function you'll then want to do this:

Code (csharp):
float4 clipPos = float4(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 1.0, 1.0);
v.vertex = mul(unity_WorldToObject, mul(_inverseViewProjMatrix, clipPos));
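For completeness, here is one way the C# side could look as a component (a sketch: the class name, the use of Camera.main, and the _inverseViewProjMatrix property name are all assumptions; the property name just has to match what the shader declares):

Code (CSharp):
using UnityEngine;

[ExecuteAlways]
public class UnwrapMatrixProvider : MonoBehaviour
{
    void LateUpdate()
    {
        Camera cam = Camera.main; // or whichever camera renders the unwrap
        if (cam == null) return;

        // GPU projection matrix (differs from Camera.projectionMatrix)
        Matrix4x4 projMatrix = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);
        // GPU view projection matrix
        Matrix4x4 viewProjMatrix = projMatrix * cam.worldToCameraMatrix;
        // expose the inverse to all shaders as a global
        Shader.SetGlobalMatrix("_inverseViewProjMatrix", viewProjMatrix.inverse);
    }
}

Remember to declare float4x4 _inverseViewProjMatrix; inside the shader's CGPROGRAM block so the global is picked up.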
@bgolus As you can see in the following image, the shader outputs vertices in the correct screen position only when the camera projection is set to orthographic. It isn't a problem for me to use an orthographic projection, but I'm wondering why it doesn't work with a perspective one. However, the real problem is that no lights are rendered on the model's surface when it's unwrapped (and I actually need them). I suppose lighting is computed after the vertices have already been modified. Is there a Surface Shader-based solution or a workaround for this? Even thinking about a vertex/fragment solution, the vertex shader will always change the vertex positions before the fragment shader executes, and this would mess up the per-pixel lighting calculation... The only thing I can imagine to solve this problem is to compute both the world space and clip space positions in the vertex shader (in the classic way), then unwrap the vertices, and finally compute lighting using the world space positions. Would this be a good/correct approach? Thanks, Francesco
Yeah, I don't think you can do this in a Surface Shader while still retaining the proper world position data. Since you can only modify the object space vertex positions, and that same data is used to determine the world space position, you're dead in the water. You'll have to modify the generated shader code directly rather than trying to stay within a Surface Shader. As for the perspective issue, I'm not totally sure either. It might be some floating point math problems causing the object to get clipped, or it might be that UnityObjectToClipPos() takes a float3 and rebuilds the position with w = 1.0, discarding the w component the inverse projection produced (which only matters with a perspective projection). You could try:

Code (csharp):
float4 clipPos = float4(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 0.5, 1.0);
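The vertex/fragment approach Francesco described (unwrap the position in the vertex shader while carrying the real world space data through to the fragment shader for lighting) could be sketched like this. This is only an outline, not a complete shader: the struct and field names are placeholders, and the fragment shader only shows a minimal directional Lambert term as a stand-in for real lighting code (it assumes #include "UnityCG.cginc" and "Lighting.cginc" in the pass):

Code (CSharp):
struct v2f {
    float4 pos         : SV_POSITION;
    float2 uv          : TEXCOORD0;
    float3 worldPos    : TEXCOORD1; // real surface position, for lighting
    float3 worldNormal : TEXCOORD2;
};

v2f vert (appdata_full v) {
    v2f o;
    // unwrap: place the vertex at its UV position in clip space
    o.pos = float4(v.texcoord.x * 2.0 - 1.0, v.texcoord.y * 2.0 - 1.0, 0.5, 1.0);
    o.uv = v.texcoord.xy;
    // keep the original world space data for per-pixel lighting
    o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    o.worldNormal = UnityObjectToWorldNormal(v.normal);
    return o;
}

fixed4 frag (v2f i) : SV_Target {
    // minimal example: Lambert term from the main directional light,
    // using the carried-over world space normal rather than anything
    // derived from the unwrapped SV_POSITION
    float3 n = normalize(i.worldNormal);
    float ndotl = saturate(dot(n, _WorldSpaceLightPos0.xyz));
    return fixed4(_LightColor0.rgb * ndotl, 1.0);
}

The key point is that SV_POSITION is only used for rasterization; all lighting inputs come from the interpolated worldPos/worldNormal, so the lighting result is the same as if the mesh had not been unwrapped.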