Polygon jitter shader? Possible to port parts of my unlit shader to a standard surface shader?

Discussion in 'Shaders' started by ToasterGameStudio, Sep 27, 2018.

  1. ToasterGameStudio

    Joined: Feb 22, 2018
    Posts: 4
    I am new to shaders and I'm here asking for help as to whether this is possible or not (I'm assuming it is). I'm currently working with an unlit shader and attempting to add polygon jitter to simulate PSX-style graphics, which to my knowledge I've done very simply with the code below (the default unlit shader with a small edit; skip below the code and I explain what I changed from the default shader).
    Code (Unity Shader):
    1. Shader "Unlit/PolygonJitter_Test"
    2. {
    3.     Properties
    4.     {
    5.         _MainTex ("Texture", 2D) = "white" {}
    6.     }
    7.     SubShader
    8.     {
    9.         Tags { "RenderType"="Opaque" }
    10.         LOD 100
    11.  
    12.         Pass
    13.         {
    14.             CGPROGRAM
    15.             #pragma vertex vert
    16.             #pragma fragment frag
    17.            
    18.             #include "UnityCG.cginc"
    19.  
    20.             struct appdata
    21.             {
    22.                 float4 vertex : POSITION;
    23.                 float2 uv : TEXCOORD0;
    24.             };
    25.  
    26.             struct v2f
    27.             {
    28.                 float2 uv : TEXCOORD0;
    29.                 float4 vertex : SV_POSITION;
    30.             };
    31.  
    32.             sampler2D _MainTex;
    33.             float4 _MainTex_ST;
    34.            
    35.             v2f vert (appdata v)
    36.             {
    37.                 v2f o;
    38.                 o.vertex = floor((UnityObjectToClipPos(v.vertex)*15)+.5)/15;
    39.                 o.uv = TRANSFORM_TEX(v.uv, _MainTex);
    40.                 return o;
    41.             }
    42.            
    43.             fixed4 frag (v2f i) : SV_Target
    44.             {
    45.                 // sample the texture
    46.                 fixed4 col = tex2D(_MainTex, i.uv);
    47.                 return col;
    48.             }
    49.             ENDCG
    50.         }
    51.     }
    52. }

    More specifically, I edited the original vertex code so that it rounds the vertex position (to the nearest whole number, or whatever step I determine, since rounding to the nearest whole number was too much), as seen below. The floor(x * 15 + 0.5) / 15 form snaps each component to the nearest 1/15.
    Original shader (Line 38)
     o.vertex = UnityObjectToClipPos(v.vertex);

    Polygon Jitter (Line 38)
     o.vertex = floor((UnityObjectToClipPos(v.vertex)*15)+.5)/15;

    Now, with that being said, this is an unlit shader, which is definitely not what I want. So here's the question: how (if possible) would I implement the edited code in a standard surface shader, like the one below (a copy-paste of the default surface shader)?
    Code (Unity Shader):
    Shader "Custom/NewSurfaceShader" {
        Properties {
            _Color ("Color", Color) = (1,1,1,1)
            _MainTex ("Albedo (RGB)", 2D) = "white" {}
            _Glossiness ("Smoothness", Range(0,1)) = 0.5
            _Metallic ("Metallic", Range(0,1)) = 0.0
        }
        SubShader {
            Tags { "RenderType"="Opaque" }
            LOD 200

            CGPROGRAM
            #pragma surface surf Standard fullforwardshadows
            #pragma target 3.0

            sampler2D _MainTex;

            struct Input {
                float2 uv_MainTex;
            };

            half _Glossiness;
            half _Metallic;
            fixed4 _Color;

            UNITY_INSTANCING_BUFFER_START(Props)
            UNITY_INSTANCING_BUFFER_END(Props)

            void surf (Input IN, inout SurfaceOutputStandard o) {
                fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
                o.Albedo = c.rgb;
                o.Metallic = _Metallic;
                o.Smoothness = _Glossiness;
                o.Alpha = c.a;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }

    What code would I need to transfer from the unlit shader to the surface shader? Is it a simple copy-paste that I haven't thought of, or is it something more complicated to get it working in a standard shader? Anyone's thoughts and help would be much appreciated. If anyone has a resource, be it YouTube or one of the many Unity documentation pages, that would get me a step closer to figuring this out, that would be great. Thanks to anyone who helps. (Also, this is my first time posting in the Unity forums, so I'm assuming this is the right place, but please notify me if I'm wrong and I will move the thread.)
     
  2. Pakillottk

    Joined: Dec 30, 2012
    Posts: 11
    Hi, well I guess the solution is actually pretty simple. You just need to add a custom vertex program to the surface shader. Luckily for us, Unity provides a way to do so.

    Just replace this line:

    #pragma surface surf Standard fullforwardshadows


    with this:

    #pragma surface surf Standard fullforwardshadows vertex:vert


    Now you can add a function called vert to your surface shader that translates the vertices however you like.

    Check out this: https://docs.unity3d.com/Manual/SL-SurfaceShaderExamples.html

    There are some examples there of adding a vertex program to a surface shader.

    You can also add a custom lighting function. With that you could make the surface color unlit or do whatever calculations you want; just check the same link and look for Custom Lighting Models in Surface Shaders. A minimal sketch of the vertex-program setup is below.
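
    Something like this, for example (untested; just to show where the vert function goes and the signature Unity expects):
    Code (Unity Shader):
    #pragma surface surf Standard fullforwardshadows vertex:vert

    // Runs once per vertex before Unity's generated vertex code.
    // v.vertex is still in object space at this point.
    void vert (inout appdata_full v) {
        // modify v.vertex here, e.g. quantize it to get the jitter effect
    }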
     
    Last edited: Sep 27, 2018
    ToasterGameStudio likes this.
  3. ToasterGameStudio

    Joined: Feb 22, 2018
    Posts: 4
    Thanks for the help! That got me a fair amount closer, though I'm still struggling to get the results I'm after. Everything I try causes the shader to display the textures/models extremely warped and distorted on a flat 2D plane, and it also renders everything in the 3D environment (visible around and behind the 2D renders) in one colour, with odd pop-in/pop-out depending on camera location. Thanks again for the help! I'm going to keep dabbling with what you showed me and see if I can figure out what I need to do for the desired results.
     
  4. bgolus

    Joined: Dec 7, 2012
    Posts: 12,329
    Note that the way surface shaders work, the v.vertex value has to still be in object space after running the vertex function as it will always apply the same UnityObjectToClipPos() function to it afterwards. That means you have to transform the clip space position back to object space. This is a problem because Unity does not supply an inverse projection matrix. At least not one that exactly matches the projection matrix used by the UnityObjectToClipPos() function.

    So instead you'll have to do all of the matrix transforms manually, and use the similar, but subtly different unity_CameraProjection and unity_CameraInvProjection matrices. For the purposes of the effect you're going for, it's identical.

    Code (csharp):
    // object -> world -> view -> main camera clip space
    float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
    float4 viewPos = mul(unity_MatrixV, worldPos);
    float4 clipPos = mul(unity_CameraProjection, viewPos);
    // perspective divide, snap xy to a grid, then undo the divide
    clipPos.xy /= clipPos.w;
    clipPos.xy = floor(clipPos.xy * _JitterHalfResolution + 0.5) / _JitterHalfResolution;
    clipPos.xy *= clipPos.w;
    // transform back down the chain so v.vertex ends up in object space again
    viewPos = mul(unity_CameraInvProjection, clipPos);
    worldPos = mul(unity_MatrixInvV, clipPos);
    v.vertex = mul(unity_WorldToObject, worldPos);
     
    hippocoder and AcidArrow like this.
  5. ToasterGameStudio

    Joined: Feb 22, 2018
    Posts: 4
    Hey bgolus, thanks for the response! It helped out a lot, as from what I can tell it is doing what I'm after. Except there's a slight issue; okay, maybe a little more than slight if you think about it. With the code you showed me it has added the jitter I was looking for, but it has also made objects move with the viewport/camera, and they end up far from their original positions.

    [Screenshot: Rounded to a whole]
    [Screenshot: Rounded to a tenth]
    [Screenshot: Rounded to a hundredth]

    The above shows that the jitter is working as intended (at least with shadows, I think), but as I've already said, objects are nowhere near where they're supposed to be. From what I can tell they revolve around the camera's rotation and move with the camera's world-space position (for the most part, objects seem to be where they're supposed to be if you're inside the object's origin, sort of).

    [Screenshot: Visible object while in object origin]
    [Screenshot: Objects not shown near the origin]


    As I stated in the original post, I'm new to shaders, so maybe I'm just not understanding how to use the code that you sent. Regardless, I'll post the shader code I have so far below. If you know what I'm doing wrong right away, that's great; if not, no big deal, I'm sure I'll find a workaround for this issue in time. Thanks again for the help.

    Code (CSharp):
    Shader "Custom/PolygonJitter" {
        Properties {
            _MainTex ("Texture", 2D) = "white" {}
            _JitterHalfResolution ("Jitter", Float) = 10.0
        }
        SubShader {
            Tags { "RenderType" = "Opaque" }

            CGPROGRAM
            #pragma surface surf Lambert addshadow fullforwardshadows vertex:vert

            struct Input {
                float2 uv_MainTex;
            };

            float _JitterHalfResolution;
            sampler2D _MainTex;

            void vert (inout appdata_full v) {
                float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
                float4 viewPos = mul(unity_MatrixV, worldPos);
                float4 clipPos = mul(unity_CameraProjection, viewPos);
                clipPos.xy /= clipPos.w;
                clipPos.xy = floor(clipPos.xy * _JitterHalfResolution + 0.5) / _JitterHalfResolution;
                clipPos.xy *= clipPos.w;
                viewPos = mul(unity_CameraInvProjection, clipPos);
                worldPos = mul(unity_MatrixInvV, clipPos);
                v.vertex = mul(unity_WorldToObject, worldPos);
            }

            void surf (Input IN, inout SurfaceOutput o) {
                o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }
     
  6. bgolus

    Joined: Dec 7, 2012
    Posts: 12,329
    The problem is this technique fundamentally doesn't work with shadow casting, not without some additional script side hackery at least.

    Let's go back to the original method using UnityObjectToClipPos(). Like I noted above, that's transforming from object, to world, to view, to clip space. You're then quantizing the clip space position to introduce some screen space vertex jitter. The problem for shadows comes from the last two transforms, view and clip space, and how shadow maps work.

    Shadow maps work by rendering the nearest depth of the scene from a light's point of view and storing it in a texture; then, when rendering the main camera, you get a pixel's location in that light's "view" space and check if it's closer or further than the distance stored in the shadow map at that position. I'm using quotes on "view" space there because it's really the equivalent of a screen space position, though obviously the light isn't rendering to a screen. Screen space is clip space that's been normalized to a 0.0 to 1.0 range; clip space has xyz in the -w to w range (at least for what is visible).

    So, in the UnityObjectToClipPos() example, if we were to do that for the shadow map pass as well, we'd be snapping the vertices in the light's clip space, not the camera's clip space. Now they don't match, and suddenly you get weird self-shadowing of triangles casting shadows onto themselves, and generally mismatched shapes. There's a reason why, once shadow mapping started getting used, we stopped using integer-based rasterization (the reason the PS1 has jittery vertices to begin with).

    For what you're doing, you're trying to mimic the visual effect but still keep modern conveniences like shadow mapping working. For that you need to quantize the vertex positions in a consistent "screen space" regardless of what view you're actually rendering from. In my example, the projection matrix (unity_CameraProjection) is already the main camera's, but the view matrix (unity_MatrixV) is dependent on the current view being rendered, so it's different between the camera and the shadow maps. Unity doesn't provide a view matrix for the main camera, only for the currently rendering camera. So the solution is to calculate your own combined view & projection matrix from a script attached to the main camera and set it as a global property. You then also need to set a global inverse matrix to support surface shaders. A rough sketch of such a script is included after the links below.

    https://docs.unity3d.com/ScriptReference/Shader.SetGlobalMatrix.html
    https://docs.unity3d.com/ScriptReference/Camera-worldToCameraMatrix.html
    https://docs.unity3d.com/ScriptReference/Camera-nonJitteredProjectionMatrix.html
    https://docs.unity3d.com/ScriptReference/GL.GetGPUProjectionMatrix.html
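
    Something along these lines might be a starting point (a rough, untested sketch; _MainCameraMatrixVP and _MainCameraMatrixInvVP are placeholder names, and you'd declare matching float4x4 properties in the shader and use them in place of the unity_MatrixV / unity_CameraProjection pairs above):
    Code (CSharp):
    using UnityEngine;

    // Attach to the main camera. Pushes the camera's combined view & projection
    // matrix (and its inverse) to global shader properties every frame, so passes
    // rendered from other views (like the shadow maps) can still snap vertices in
    // the main camera's "screen space".
    [RequireComponent(typeof(Camera))]
    [ExecuteInEditMode]
    public class MainCameraMatrices : MonoBehaviour
    {
        void OnPreCull()
        {
            Camera cam = GetComponent<Camera>();

            // Convert to the projection matrix actually used by the GPU
            Matrix4x4 proj = GL.GetGPUProjectionMatrix(cam.nonJitteredProjectionMatrix, false);
            Matrix4x4 viewProj = proj * cam.worldToCameraMatrix;

            Shader.SetGlobalMatrix("_MainCameraMatrixVP", viewProj);
            Shader.SetGlobalMatrix("_MainCameraMatrixInvVP", viewProj.inverse);
        }
    }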