So I don't normally ask questions on here (usually there's already a question somewhere out there with an appropriate answer to mine), but I couldn't find the answer to my specific question so far. So I'll just keep it short and simple. I've got a vertex and fragment shader: I displace my model with a noise texture in my vertex shader and add two textures in my fragment shader (a main texture and an overlay texture), all pretty basic stuff. Now I'm sure what I want is very easy and simple to achieve as well, but I couldn't find an answer so far. I want to take this output (displacement and the textures) and add some basic lighting to it. To my knowledge this is done using a surface shader, and I've got it all mostly set up already, but how do I now take the output of the vertex and fragment shader and plug that into my surface shader for further computation?

So tl;dr, how do I take the output of a vertex and fragment shader and further compute this in a surface shader? I'll attach the shader I've got so far below:

Code (CSharp):
Shader "Unlit/PlayerShader"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _SideTex ("Texture", 2D) = "white" {}
        _OverlayTex ("Texture", 2D) = "black" {}
        _TintColor ("Tint Color", Color) = (1,1,1,1)
        _NoiseSpeed ("Noise Speed", Float) = 0.25
        _NoiseStrength ("Noise Strength", Float) = 0.25
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // make fog work
            #pragma multi_compile_fog

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                UNITY_FOG_COORDS(1)
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            sampler2D _SideTex;
            sampler2D _OverlayTex;
            float4 _MainTex_ST;
            float4 _TintColor;
            float _NoiseSpeed;
            float _NoiseStrength;

            v2f vert (appdata v)
            {
                v2f o;
                float4 colVal = tex2Dlod(_SideTex, float4(v.uv + (_Time.y * _NoiseSpeed), 0, 0));
                v.vertex.xyz *= 1 + (colVal.x * _NoiseStrength - (_NoiseStrength / 2));
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                UNITY_TRANSFER_FOG(o, o.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // sample the texture
                fixed4 col = tex2D(_MainTex, i.uv) * _TintColor;
                fixed4 overlayCol = tex2D(_OverlayTex, i.uv);
                if (overlayCol.w > 0.1)
                {
                    col = tex2D(_OverlayTex, i.uv);
                }
                // apply fog
                UNITY_APPLY_FOG(i.fogCoord, col);
                return col;
            }
            ENDCG
        }

        CGPROGRAM
        #pragma surface surf Lambert
        struct Input
        {
            float4 color : COLOR;
        };
        void surf (Input IN, inout SurfaceOutput o)
        {
        }
        ENDCG
    }
}
So I figured it out: the fragment shader can be merged into the surface shader (the surf function), and the vertex shader of the surface shader can be overridden with a custom vert function. For completeness' sake I'd still be interested in how/whether it would be possible to transfer the output of a fragment shader either way though^^
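For anyone finding this later, a minimal sketch of that merge might look like the following (assuming the same property names as the original shader; the shader name is made up). The `vertex:vert` directive tells the surface shader generator to call the custom vert function, the old frag logic moves into surf and writes to Albedo, and the generated code handles the Lambert lighting and fog:

```shaderlab
Shader "Custom/PlayerSurfaceShader" // hypothetical name
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _SideTex ("Texture", 2D) = "white" {}
        _OverlayTex ("Texture", 2D) = "black" {}
        _TintColor ("Tint Color", Color) = (1,1,1,1)
        _NoiseSpeed ("Noise Speed", Float) = 0.25
        _NoiseStrength ("Noise Strength", Float) = 0.25
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        CGPROGRAM
        // vertex:vert makes the generated vertex shader call our vert()
        #pragma surface surf Lambert vertex:vert

        sampler2D _MainTex;
        sampler2D _SideTex;
        sampler2D _OverlayTex;
        float4 _TintColor;
        float _NoiseSpeed;
        float _NoiseStrength;

        struct Input
        {
            float2 uv_MainTex;
        };

        // the displacement previously done in the unlit shader's vertex stage
        void vert (inout appdata_full v)
        {
            float4 colVal = tex2Dlod(_SideTex,
                float4(v.texcoord.xy + (_Time.y * _NoiseSpeed), 0, 0));
            v.vertex.xyz *= 1 + (colVal.x * _NoiseStrength - (_NoiseStrength / 2));
        }

        // the old frag logic, writing into Albedo instead of returning a color
        void surf (Input IN, inout SurfaceOutput o)
        {
            fixed4 col = tex2D(_MainTex, IN.uv_MainTex) * _TintColor;
            fixed4 overlayCol = tex2D(_OverlayTex, IN.uv_MainTex);
            if (overlayCol.w > 0.1)
                col = overlayCol;
            o.Albedo = col.rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```

This is a sketch, not a drop-in replacement; exact behavior (especially the fog handling) depends on the Unity version and render settings.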
The surface shader *is* a vert and frag shader. A surface shader just gives you a way to generate a fragment shader that has all the shading functions done for you; all you need to do is feed texture/color values into the outputs you want to use, like albedo, normal, emission, height, etc. So no, there's no way to transfer data from the frag to the surf, because the surf *is* the frag once it compiles. (You could render to a render texture and use that RT as the input for a surface shader... but that's a pretty costly thing to do and only niche scenarios require that kind of thing.)
I see, so merging the functions as I did is the only way of actually going about things then? Seems a bit weird to me that there's no real way of communicating between two different passes in a shader, well at least if those passes are a vert+frag and a surface shader. But I did fix my problem, so it's fine I guess^^
Basically, yes. Technically there are other ways of passing information between passes, using shared read/write buffers for example, but this isn't generally something you want to rely on: it can be very slow and isn't available on all platforms.

Generally speaking, a shader pass has two stages, the vertex shader and the fragment shader. Both stages have access to any properties set on the material, or otherwise passed to it from the application (user-set global shader properties, the current time, the camera's view and projection matrices, the object's transform matrices, optionally some subset of the lighting data, etc.).

The vertex stage additionally has access to mesh data one vertex at a time for each invocation, and outputs data that gets passed to the fragment shader. The fragment stage gets access to the interpolated vertex stage output depending on where in the triangle is currently being rendered by that invocation, and outputs a single color value that is immediately written into the current render target using the current blend mode. Any data calculated during an invocation of the vertex or fragment shader that isn't output to either the vertex-to-fragment interpolators or the render target is immediately forgotten as soon as that invocation finishes. Each stage and invocation has no access to any other stage or invocation* outside of those narrowly controlled paths.

* Fragment shaders have some limited communication between multiple fragment shader invocations, as GPUs actually calculate 4 pixels at a time in 2x2 groups, and you can get the difference between the current invocation and the invocation to the side or above/below via the partial derivative functions ddx, ddy, and fwidth. GPUs use this functionality to derive texture mip levels, but they can be used for other purposes as well, like anti-aliased lines or per-pixel surface curvature.

Like @Invertex mentioned, Surface Shaders are vertex fragment shaders, or rather vertex fragment shader generators.
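To make the derivative-function footnote concrete, here's a hypothetical fragment-shader helper (not from the thread's shader) that draws anti-aliased procedural grid lines using fwidth; the `10.0` cell count is an arbitrary example value:

```shaderlab
// Hypothetical helper: anti-aliased grid lines via fwidth.
// fwidth(x) = abs(ddx(x)) + abs(ddy(x)), i.e. how much 'x' changes
// between neighboring pixels in the 2x2 quad.
fixed4 gridColor (float2 uv)
{
    float2 cell = uv * 10.0;                 // repeating cells, arbitrary scale
    float2 grid = frac(cell);                // 0..1 within each cell
    float2 dist = min(grid, 1.0 - grid);     // distance to the nearest cell edge
    float2 aa   = fwidth(cell);              // per-pixel change of the cell coord
    float2 edge = smoothstep(aa, aa * 2.0, dist);
    float mask  = min(edge.x, edge.y);       // 0 on a line, 1 inside a cell
    return lerp(fixed4(0, 0, 0, 1),          // line color
                fixed4(1, 1, 1, 1),          // cell color
                mask);
}
```

Because the edge width comes from fwidth rather than a fixed threshold, the lines stay roughly one pixel wide at any viewing distance instead of aliasing.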
You can see the actual shader code used by the game by clicking the "Show generated code" button in the inspector. Each shader pass can only have one #pragma vertex and one #pragma fragment, or the shader compiler will tell you to eff off. The actual function names are totally arbitrary, which confuses some people, since the vertex functions for vertex fragment shaders and surface shaders are usually named the same in examples. But for a vertex fragment shader that function is the main entry point for the stage, while for surface shaders the main vertex function in the generated code is named vert_surf, and the vertex function you define is just another function that vert_surf calls.
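If you peek at the generated code, the top of a pass has roughly this shape (heavily abbreviated; the exact struct names and contents vary by Unity version and lighting mode):

```shaderlab
// Rough shape of what "Show generated code" produces for one pass.
// vert_surf / frag_surf are the real entry points; the user-defined
// vert() and surf() are just ordinary functions they call.
#pragma vertex vert_surf
#pragma fragment frag_surf

v2f_surf vert_surf (appdata_full v)
{
    vert(v);            // your custom vertex function, if one was declared
    // ... transform to clip space, build interpolators, pack lighting data ...
}

fixed4 frag_surf (v2f_surf IN) : SV_Target
{
    SurfaceOutput o;
    surf(surfIN, o);    // your surf() fills in Albedo, Normal, Emission, etc.
    // ... apply the lighting function (e.g. Lambert), fog, blending ...
}
```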