
Question Density map approximation in vertex/geometry shader

Discussion in 'Shaders' started by iamvideep, Jan 6, 2021.

  1. iamvideep

    iamvideep

    Joined:
    Oct 27, 2017
    Posts:
    118
    I have been working on creating foliage at runtime and have stumbled across a problem with sampling a texture/density map in the vertex shader. Since ddx and ddy aren't available to tex2Dlod in the vertex stage, and the texture is only sampled at the vertex positions, the sampling results are poor. That said, to get appropriately high-resolution sampling, more vertices need to be added. This can be done:
    1) by using a tessellation shader, or
    2) by modeling a high-poly mesh.

    In both cases, we end up bumping up the vertex count at runtime. Tessellation seems the more appropriate of the two, since it can be controlled by distance, which makes it the more optimized approach.

    Is there another point of view that I am missing out on or a more optimized way to do the same?

    Left is sampled with vertex shader, Right is sampled with fragment shader.
    upload_2021-1-6_9-23-19.png
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    You can sample the texture at any location you wish to within the vertex shader. But yes, the interpolated value will only be as detailed as the vertices are.

    But that shouldn't be a problem. Presumably you're looking to do something like generating grass in a geometry shader. You can sample the texture in the geometry shader at the positions you intend to put grass by getting the barycentric position within the triangle and manually interpolating the texture UV for that position.
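
    The manual interpolation described above is just a weighted sum using the barycentric coordinates. A minimal sketch in C (rather than HLSL; the HLSL version is the same arithmetic on float2s):

```c
typedef struct { float u, v; } UV;

/* Interpolate a per-vertex attribute at an interior point of a triangle.
 * (b0, b1, b2) are the point's barycentric coordinates, with
 * b0 + b1 + b2 == 1; each weight scales the corresponding vertex UV. */
UV interp_uv(UV uv0, UV uv1, UV uv2, float b0, float b1, float b2) {
    UV r;
    r.u = b0 * uv0.u + b1 * uv1.u + b2 * uv2.u;
    r.v = b0 * uv0.v + b1 * uv1.v + b2 * uv2.v;
    return r;
}
```

    In the geometry shader you'd feed that interpolated UV to tex2Dlod (with an explicit mip level, typically 0) to read the density at the exact spot where geometry will be emitted.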
     
  3. iamvideep

    iamvideep

    Joined:
    Oct 27, 2017
    Posts:
    118
    Thanks for your reply. This is what I am working on:


    In this video, I'm forced to use a high-density mesh in order to sample the density map at a good resolution.
    I can paint the density map at runtime, but how do I sample that density map if the mesh has few vertices?

    If I generate random barycentric coordinates, how many should I generate?

    I also need to understand how this is handled in games. Ideally more vertices = more triangles = more computation = slower games...

    This is where I am puzzled heh!
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    You're going to have to describe how the above system is currently working.

    I'm assuming you're using a geometry shader, and it's working something like this:
    The vertex shader samples the density map and passes the sampled value along to the geometry shader. The geometry shader chooses random spots on the triangle to draw quads at, and uses the interpolated density (or maybe just the density of the first vertex) to exclude some percentage of those random spots.

    If the above is correct, the solution is to pass the density map UV along to the geometry shader instead of the sampled value. For the random positions you choose inside the triangle to draw your geometry at, you're likely already using barycentric coordinates to ensure they're on the surface of the triangle. Use those same barycentric coordinates to interpolate the UV from the 3 vertices, and sample the density map with that interpolated UV.
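
    In C, that per-spot decision might look like the following sketch, where sample_density is a stand-in for the tex2Dlod fetch on the painted density map and rand01 is a per-spot pseudo-random value in [0, 1):

```c
/* Stand-in for the density-map fetch; in the real shader this would be a
 * tex2Dlod on the painted texture. Here density simply falls off along u. */
static float sample_density(float u, float v) {
    (void)v;
    return 1.0f - u;
}

/* Decide whether to emit geometry at one candidate spot: interpolate the
 * UV from the three vertex UVs with the spot's barycentric weights, sample
 * the density there, and keep the spot only if the density beats the
 * spot's pseudo-random threshold. */
int keep_spot(const float uv[3][2],
              float b0, float b1, float b2, /* barycentric weights, sum to 1 */
              float rand01)                 /* per-spot pseudo-random value  */
{
    float u = b0 * uv[0][0] + b1 * uv[1][0] + b2 * uv[2][0];
    float v = b0 * uv[0][1] + b1 * uv[1][1] + b2 * uv[2][1];
    return rand01 < sample_density(u, v);
}
```

    With this scheme, fully black regions of the density map reject every candidate spot and fully white regions keep them all, so the painted map directly controls coverage.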
     
  5. iamvideep

    iamvideep

    Joined:
    Oct 27, 2017
    Posts:
    118
    This is how the system works:
    1) A shader is applied to the mesh
    2) The vertex function isn't doing much here
    3) I will skip the realtime painting part for now
    4) Assume we already have a displacement map as follows:
    upload_2021-1-6_13-40-38.png

    5) In the geometry shader, I will sample the texture based on the vertices
    6) Create additional geometry (leaves)

    Now, if the geometry is high resolution with good topology, we can read the density map precisely and hence place geometry precisely, since we get a wider range of values.

    If the geometry has less vertices, we will have to randomly sample.

    Will the difference between uniform and random sampling create any issues while generating the geometry?
    If the points are uniform, the geometry will be generated along those same points each time you play the game, but random points will have a different value every run.
    Uniform: upload_2021-1-6_13-44-53.png
    Random : upload_2021-1-6_13-45-40.png
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    "Random sampling" on a GPU isn't ever actually random. You're going to be using a pseudo random value, a hash, to get your "random" position. So as long as the seed you use isn't changing (like you're not using
    _Time
    or the camera position, etc.) it will be the same every time.*

    * On that device, depending on the pseudo random hash function you're using. If you're using something that has
    frac(sin(dot(
    ... that's not guaranteed to be the same on different GPUs.
    http://jcgt.org/published/0009/03/02/
    https://www.shadertoy.com/view/4djSRW
    https://www.shadertoy.com/view/llGSzw
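
    To make the determinism point concrete, here is a C sketch of one of the integer hashes from the material linked above (the PCG-based construction); seed it with something stable per spot, such as a triangle index combined with a sample index, rather than _Time:

```c
/* PCG-style integer hash. It uses only integer math, so it returns
 * bit-identical results on every run and on every device -- unlike
 * frac(sin(dot(...))), whose output depends on each GPU's sin()
 * precision. */
unsigned int pcg_hash(unsigned int input) {
    unsigned int state = input * 747796405u + 2891336453u;
    unsigned int word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
    return (word >> 22u) ^ word;
}

/* Map the hash to [0, 1). Masking to 24 bits keeps the result exactly
 * representable as a float and strictly below 1.0. */
float pcg_hash01(unsigned int input) {
    return (pcg_hash(input) & 0xFFFFFFu) / 16777216.0f;
}
```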

    You also can't guarantee a high-polygon mesh will have good topology. In some ways it's easier if you have a lower-poly mesh, as you can calculate the area of each triangle and adjust the number of random points you use based on that. With smaller triangles you're essentially limited to generating at least one mesh per triangle, as there's no good way to enforce a consistent density across the entire mesh.
     
  7. iamvideep

    iamvideep

    Joined:
    Oct 27, 2017
    Posts:
    118
    So now I understand that to get a higher-resolution mapping, we can either pass a set of pre-defined sample points to the GPU and map them onto the triangle currently being processed, or use the GPU's "random" sampling, which won't actually vary between runs because the seed doesn't change.

    Alternatively, we can use a tessellation shader, but at the cost of increasing the triangle count of the base mesh.

    I guess I'll post some updates here if I get stuck anywhere.

    Thanks Ben, you have been awesome!