Need some help with gradient

Discussion in 'Shaders' started by Jaynesh, Oct 17, 2017.

  1. Jaynesh

    Jaynesh

    Joined:
    Feb 28, 2015
    Posts:
    36
    Hi guys,

    I've got this really simple fragment shader that projects a gradient across the side of my voxel mesh (which is generated at runtime).
    The gradient continues all the way from the bottom to the top. How can I get the gradient to reset if the voxel is at a different z position?

    e.g. here is what I have:

    [image]

    but this is what I want:

    [image]

    Here is my shader code


    Code (CSharp):
    Pass {
        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha

        CGPROGRAM

        // Define the vertex and fragment shader functions
        #pragma vertex vert
        #pragma fragment frag
        #include "UnityCG.cginc"

        // Access ShaderLab properties
        uniform float4 _Color;

        // Input into the vertex shader
        struct vertexInput {
            float4 vertex : POSITION;
            float3 normal : NORMAL;
        };

        // Output from vertex shader into fragment shader
        struct vertexOutput {
            float4 pos : SV_POSITION;
            float4 worldPos : TEXCOORD0;
            float3 normal : TEXCOORD1; // you don't need these semantics except for Xbox 360
            float3 viewT : TEXCOORD2;  // you don't need these semantics except for Xbox 360
        };

        // VERTEX SHADER
        vertexOutput vert(vertexInput input) {
            vertexOutput output;
            output.pos = UnityObjectToClipPos(input.vertex);
            output.worldPos = mul(unity_ObjectToWorld, input.vertex);
            output.normal = normalize(input.normal);
            return output;
        }

        // FRAGMENT SHADER
        float4 frag(vertexOutput input) : COLOR {
            return _Color * input.worldPos.y * input.normal.x;
        }
        ENDCG
    }
    }
     
  2. DominoM

    DominoM

    Joined:
    Nov 24, 2016
    Posts:
    460
    There's not enough information available in the shader to know whether it's a separate block or not. You'll probably have to generate UV co-ordinates for the voxel mesh faces and use V as the gradient height instead of the worldPos.y co-ordinate in the shader.
     
  3. Jaynesh

    Jaynesh

    Joined:
    Feb 28, 2015
    Posts:
    36
    Thanks for your reply.

    How would I go about getting the uv co-ordinates in the shader and using V as the gradient height?

    I'm just getting started with learning shaders.
     
  4. DominoM

    DominoM

    Joined:
    Nov 24, 2016
    Posts:
    460
    The second example here shows visualising the UV co-ordinates from the mesh in a shader. You could then use i.uv.y as the height. I can't help with generating suitable UVs though, that'll be very specific to your voxel mesh generator. It'll need to scale the UVs of each face (of continuous voxel faces) to the correct range for the gradient texture.
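    To make the "scale the UVs of each face to the correct range" idea concrete, here is a minimal sketch (Python rather than Unity C#, and all names are my own, not from this thread) of how a mesh generator could assign the V coordinate for one vertical column of voxels: split the column into contiguous runs of solid voxels and remap each run to 0..1, so the gradient restarts at every gap.

```python
# Sketch, not the poster's generator: one column of a voxel grid,
# bottom to top, as a list of booleans (True = solid voxel).

def column_runs(solid):
    """Return (start, length) for each contiguous run of solid voxels."""
    runs, start = [], None
    for y, filled in enumerate(solid):
        if filled and start is None:
            start = y                      # run begins
        elif not filled and start is not None:
            runs.append((start, y - start))  # run ended by a gap
            start = None
    if start is not None:
        runs.append((start, len(solid) - start))
    return runs

def face_v(solid, y):
    """V coordinate for the TOP edge of voxel y: 0 at the bottom of its
    run, 1 at the top, so each separated block gets a full gradient."""
    for start, length in column_runs(solid):
        if start <= y < start + length:
            return (y - start + 1) / length
    raise ValueError("voxel %d is empty" % y)
```

    For example, a column `[True, True, False, True]` yields runs `[(0, 2), (3, 1)]`, so the lone top voxel gets the full 0..1 range rather than continuing the gradient of the pair below it. In the shader, `i.uv.y` then replaces `worldPos.y`.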
     
  5. Jaynesh

    Jaynesh

    Joined:
    Feb 28, 2015
    Posts:
    36
    Thanks for your response.

    In the shader, can this be done by comparing the position of the vertex with the vertex below it?

    e.g

    Code (CSharp):
    if (worldPos.x != worldBelowPos.x) {
        // colour output here
    }
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    No, because at no point does the shader have knowledge of that other vertex.

    A vertex shader only knows about itself and nothing else.
    A fragment shader only knows about the interpolated values it receives and nothing about the individual vertices that produced it.*
    Going one step further, a geometry shader only knows of the 3 vertices that make up a single triangle.*

    The only way to do this is to encode the necessary information into the mesh when it's generated via C#. In other words, have the gradient be based on manually calculated UVs and not purely on world position.

    * Technically on AMD GPUs the fragment shader does the interpolation and has knowledge of all 3 vertices for each triangle, but this isn’t readily exposed to Unity’s shaders.
    * Adjacency data could give more information beyond that, but it wouldn’t be enough to act on for this situation, and Unity doesn’t support adjacency data in geometry shaders.
     
    DominoM likes this.
  7. DominoM

    DominoM

    Joined:
    Nov 24, 2016
    Posts:
    460
    As @bgolus said, no. There's another problem with that approach, even if it were possible.

    In your example, if you flipped the middle top section over so that the small step rested on the bottom block with the big one next to it, that approach would shade the smaller step wrong: the small step wouldn't continue the face below it, but would instead be part of the face one block right and one down.

    The UV generation would need to account for all three quads that make up that face, so that the narrow bottom bit would be red and the wider top bit white. Otherwise you end up with two different gradients on what should be one continuous face.

    With faces that could span many blocks to the sides, any of which could continue up or down, there are a lot of variations to allow for in deciding what counts as a single face for the gradient.
     
    Last edited: Oct 18, 2017
  8. Jaynesh

    Jaynesh

    Joined:
    Feb 28, 2015
    Posts:
    36
    Thanks for your replies. Okay I understand now.

    If you look at this shader https://www.shadertoy.com/view/ldl3DS how is it able to calculate the gradients like that? Is the data baked into the mesh? It seems to all be calculated in the shader.
     
  9. DominoM

    DominoM

    Joined:
    Nov 24, 2016
    Posts:
    460
    The mesh is generated in the shader as well, so it can do the math for the extra faces to see whether AO is needed or not. This is why there are 8 getVoxel calls in voxelAO. It only works because there isn't a mesh at all, just some math built into the shader to make one.

    Following the origins of the Shadertoy, there is an explanation of the fake occlusion used by that shader. It still needs UV maps (per voxel face) and still needs to know whether neighbouring voxels are occupied, so you'd have to modify your mesh generation to provide that information to a shader based on this approach. It avoids the multiple-voxel-face issue because each gradient only covers one voxel block.
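    For reference, the per-vertex occlusion term that kind of shader derives from its 8 neighbour samples can be sketched like this (a Python sketch of the widely used voxel-AO rule; the function name and boolean interface are my own, not taken from the Shadertoy):

```python
# Sketch of the common per-vertex voxel AO rule: each face vertex looks at
# its two edge neighbours and the diagonal corner neighbour, all sampled
# from the voxel occupancy data (the role getVoxel plays in the shader).

def vertex_ao(side1, side2, corner):
    """Occlusion level for one face vertex: 0 = fully occluded, 3 = open.
    Arguments are booleans: is that neighbouring voxel solid?"""
    if side1 and side2:
        return 0  # both edges solid: the corner is fully darkened
    return 3 - (int(side1) + int(side2) + int(corner))
```

    With 4 vertices per face and 2 unique neighbours each (the corner samples overlap the edge samples of adjacent vertices), that is where the 8 occupancy lookups per face come from.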
     
    Last edited: Oct 18, 2017
  10. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    To build on @DominoM 's response, it really is important to understand that there is no mesh being rendered on that page apart from a single quad used to run the fragment shader over the screen. That shader, and all Shadertoys that appear to use meshes, are raytracing volumetric data stored and calculated within the shader. Because of this they effectively have access to the entire scene's "geometry" at every pixel, and really can do things like test against the face below.
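    As an illustration of "testing against the scene at every pixel", here is a minimal CPU-side sketch (Python, my own naming; the Shadertoy does the equivalent in GLSL) of marching a ray through a voxel grid cell by cell until it hits a solid voxel:

```python
import math

def raycast(grid, origin, direction, max_steps=64):
    """Walk a ray through a voxel grid (grid is a set of occupied integer
    cells); return the first solid cell hit, or None. Uses the standard
    grid-traversal idea: always step across the nearest cell boundary."""
    x, y, z = (int(math.floor(c)) for c in origin)
    step = [1 if d > 0 else -1 for d in direction]
    # ray distance needed to cross one full cell along each axis
    t_delta = [abs(1.0 / d) if d != 0 else math.inf for d in direction]
    # ray distance to the first boundary crossing on each axis
    t_max = []
    for c, cell, d, s in zip(origin, (x, y, z), direction, step):
        if d == 0:
            t_max.append(math.inf)
        else:
            boundary = cell + (1 if s > 0 else 0)
            t_max.append((boundary - c) / d)
    for _ in range(max_steps):
        if (x, y, z) in grid:
            return (x, y, z)          # hit a solid voxel
        axis = t_max.index(min(t_max))  # nearest boundary decides the step
        if axis == 0:
            x += step[0]
        elif axis == 1:
            y += step[1]
        else:
            z += step[2]
        t_max[axis] += t_delta[axis]
    return None
```

    Because the shader owns the whole occupancy function, it can fire extra rays or sample neighbours freely, which is exactly what a mesh-based fragment shader cannot do.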
     
    DominoM likes this.