
Is there a way to accurately render a texture on a mesh via a vertex shader?

Discussion in 'Shaders' started by draco_nite, Oct 7, 2017.

  1. draco_nite

    draco_nite

    Joined:
    Jul 17, 2015
    Posts:
    9
    I'm trying to make a mesh that represents a world map that smoothly blends between terrain types.

    I know WHY the vertex shader isn't rendering the texture properly: it's sampling one point on the UV map, applying just that color to just that vertex, and interpolating the colors in between the vertices. I think that you're SUPPOSED to render this kind of thing with a fragment shader, but considering that I have no idea how to pass the shader any kind of information about where on the mesh it is, I'm not sure how I'd use a fragment shader. The shader would need to know this information since it's supposed to render a different texture based on the position on the mesh. (e.g. This part is a desert, so render the desert texture here; this part is a grassland, so render the grassland here; then blend between them where they meet.)

    Pic A is what this shader is doing, pic B is what I want it to look like.




    Here's my shader code:
    Code (CSharp):

    // Upgrade NOTE: replaced 'mul(UNITY_MATRIX_MVP,*)' with 'UnityObjectToClipPos(*)'

    Shader "Custom/ExampleVertexColorShader" {
        Properties {
            _Tex1 ("Albedo (RGB)", 2D) = "white" {}
            _Tex2 ("Albedo (RGB)", 2D) = "white" {}
        }
        SubShader
        {
            Tags { "RenderType"="Opaque" }
            Pass
            {
                CGPROGRAM
                #pragma vertex wfiVertCol
                #pragma fragment passThrough
                #include "UnityCG.cginc"

                sampler2D _Tex1;
                sampler2D _Tex2;

                struct VertOut
                {
                    float4 position : POSITION;
                    float4 color : COLOR;
                    float2 uv : TEXCOORD0;
                };
                struct VertIn
                {
                    float4 vertex : POSITION;
                    float4 color : COLOR;
                    float2 uv : TEXCOORD0;
                };

                VertOut wfiVertCol(VertIn input, float3 normal : NORMAL)
                {
                    VertOut output;
                    output.position = UnityObjectToClipPos(input.vertex);
                    // Recover the terrain ID encoded as a tiny offset in the vertex y
                    float y = input.vertex.y;
                    y = y % 0.0001;
                    y *= 1000000;
                    y = round(y);
                    if (y == 12.0) {
                        output.color = tex2Dlod(_Tex1, float4(input.uv.xy, 0, 0));
                    } else if (y == 11.0) {
                        output.color = tex2Dlod(_Tex2, float4(input.uv.xy, 0, 0));
                    } else {
                        output.color = float4(0, 0, 0, 1);
                    }

                    output.uv = input.uv;
                    //output.color = float4(y,y,y,1);
                    return output;
                }
                struct FragOut
                {
                    float4 color : COLOR;
                };
                FragOut passThrough(float4 color : COLOR)
                {
                    FragOut output;
                    output.color = color;
                    return output;
                }
                ENDCG
            }
        }
        FallBack "Diffuse"
    }
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    By using the UVs you're already passing to the fragment shader? The default unlit shader that Unity will make for you if you make a new shader from the editor does this.
    https://docs.unity3d.com/Manual/SL-VertexFragmentShaderExamples.html

    You could pass the vertex position onto the fragment shader in another TEXCOORD, or just the y coordinate you care about by itself as a .z component of the TEXCOORD0 you're using for the UVs (just make it a float3). Or you could do the height test in the vertex shader like you are now and pass a 0.0 or 1.0 value that will be interpolated to the fragment shader.
    The TEXCOORD semantics are just arbitrary data and don't have to explicitly be texture coordinates. The same is true of the COLOR semantics. You could pass the vertex.y as the o.color.a.
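    For example, a minimal sketch of that last idea (the struct and function names here are illustrative, not from your shader):

    Code (CSharp):

    struct v2f
    {
        float4 pos : SV_POSITION;
        float3 uv : TEXCOORD0; // xy = texture UV, z = object space vertex y
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.uv = float3(v.texcoord.xy, v.vertex.y);
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        // i.uv.z is the interpolated vertex height, now available per pixel,
        // so the texture choice and blend can happen here instead of per vertex
        return i.uv.z > 0.5 ? fixed4(1,1,1,1) : fixed4(0,0,0,1); // placeholder test
    }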

    This might help you.
    http://www.alanzucconi.com/2015/06/10/a-gentle-introduction-to-shaders-in-unity3d/
     
  3. draco_nite

    draco_nite

    Joined:
    Jul 17, 2015
    Posts:
    9
    I haven't had time to read the introduction, but isn't it impossible to get the vertex position in a fragment shader? Doesn't the fragment function only really know about the exact pixel it's iterating over at that time?
     
  4. draco_nite

    draco_nite

    Joined:
    Jul 17, 2015
    Posts:
    9
    Now that I'm not busy, I'll write a full response. I might sound condescending in this post, but I'm not trying to be. I'm simply trying to explain in as much detail as possible what I'm trying to do so that nothing whatsoever gets lost in communication.

    I think I didn't explain my problem properly, and I apologize for that. What I've got is a procedurally-generated hex grid mesh used to represent the world map, basically the way that Civilization, Warlock, Age of Wonders, and other 4X games generate and represent the world. In these games, hexes will often have different types of terrain, usually affecting gameplay in different ways. These are represented on the mesh with different textures. Since these world maps are procedurally generated, there has to be some algorithm that determines what hex has what terrain. If the shader is to render the correct texture in the correct place, the shader needs to know not only where on the mesh it's rendering, but also what's supposed to be at the spot that it's rendering. The only way I could think of getting that information to the shader was to offset the Y value of a vertex by a microscopic amount and use that to determine what to render, but that doesn't work because, well, my screenshots should show why.

    So, I'll be honest, I know that shader semantics grab some kind of data about the mesh, but I don't know exactly what TEXCOORD grabs. I'm not sure whether it's grabbing the coordinate of the mesh or of the texture. I'd assume it's the texture, considering that it's called TEXCOORD. Is there something like MESHCOORD?
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    I'll break it down like this:

    You have your mesh data. Your mesh has vertices and triangles. Each vertex has at the bare minimum a position, but additionally an optional color, normal, tangent, and multiple texture UVs.

    When you render an object that object space vertex position (vertex shader input semantic POSITION) gets passed to the vertex shader to be transformed into clip space (vertex shader output semantic SV_POSITION, though POSITION can work as well). Usually you additionally pass any other information you want the fragment shader to have access to like the normal (NORMAL), color (COLOR), or texture UV (TEXCOORD#). This data and their vertex input semantics are set by the data you have on the mesh's vertices, but how they get transferred to the fragment shader is up to you, and can be completely arbitrary. For the most part the same semantics can be used again to pass the normal (NORMAL), color (COLOR), and texture UV (TEXCOORD#) on, but the reality is they're all just arbitrary float values that can be what ever data you want. For example it is quite common to pass data like the vertex world position, or distance from the camera, or full rotation matrices for tangent space normal maps, all via the vertex shader output & fragment shader input TEXCOORD# semantics.
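    For example, passing the world position along to the fragment shader on a TEXCOORD (a sketch; the struct layout is illustrative):

    Code (CSharp):

    struct v2f
    {
        float4 pos : SV_POSITION;
        float2 uv : TEXCOORD0;
        float3 worldPos : TEXCOORD1; // arbitrary data riding a TEXCOORD semantic
    };

    v2f vert (appdata_full v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.uv = v.texcoord.xy;
        o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
        return o;
    }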

    Going back to the original mesh, technically all of the data you're supplying for the vertex position, color, and texture UV are arbitrary data as well. If you wanted to you could put the vertex position in a texture UV and vertex color in the position as long as your shaders knew to handle the data that way.

    So how does this help you? Like I said, it's all arbitrary data. You can choose to put any kind of data you want or need into any of the slots. For your case, assuming you don't need the actual vertex color, you could encode the texture slot you want to use for that vertex into that data. The simplest method, if you only have 4 textures, is to use each color channel, R, G, B, and A, to denote a different texture, and optionally a fifth if none are set. In the vertex shader you would just pass that color straight on to the fragment shader and let it be interpolated. In the fragment shader you sample all 4 (or 5) textures and lerp between them based on the interpolated vertex color. This is known as a splat map and is very common for terrain rendering.
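    A sketch of that blend in the fragment shader (texture names are placeholders; assumes a v2f struct carrying uv in TEXCOORD0 and the splat weights in COLOR):

    Code (CSharp):

    sampler2D _TexR, _TexG, _TexB, _TexA;

    fixed4 frag (v2f i) : SV_Target
    {
        // i.color holds the interpolated splat weights from the vertex colors
        fixed4 texR = tex2D(_TexR, i.uv);
        fixed4 texG = tex2D(_TexG, i.uv);
        fixed4 texB = tex2D(_TexB, i.uv);
        fixed4 texA = tex2D(_TexA, i.uv);
        return texR * i.color.r + texG * i.color.g
             + texB * i.color.b + texA * i.color.a;
    }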

    If you are looking to use the vertex color, you could use another texture UV. If you use SetUVs() instead of .uv you can pass a Vector4 to the vertex shader on a single texture UV, which could again be used as another splat map, or any other data you want. You have up to 4 of these Vector4/float4 UVs you can use.
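    On the C# side, filling a UV channel with Vector4 data looks something like this (a sketch; the class name and the weight values are placeholders):

    Code (CSharp):

    using System.Collections.Generic;
    using UnityEngine;

    public class SplatDataUploader : MonoBehaviour
    {
        void Start ()
        {
            Mesh mesh = GetComponent<MeshFilter>().mesh;
            var data = new List<Vector4>(mesh.vertexCount);
            for (int i = 0; i < mesh.vertexCount; i++)
                data.Add(new Vector4(1, 0, 0, 0)); // per-vertex terrain weights
            mesh.SetUVs(1, data); // arrives in the shader as a float4 TEXCOORD1
        }
    }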