
Feedback Pixel Artifacts when Supplying Discontinuous UV coordinates

Discussion in 'Shader Graph' started by Salmonman, Aug 24, 2019.

  1. Salmonman

    Salmonman

    Joined:
    Jul 14, 2016
    Posts:
    23
I have noticed that when a Sample Texture 2D node is supplied with UV coordinates that aren't situated directly next to their neighbors, the border between them results in a very strange-looking pixely line separating them. One would think that each pixel would simply decide what color to be based on the UVs provided, but this doesn't seem to be the case.

For example, here I have a simple graph which takes the X coordinate of the fragment modulo 1. This should actually result in a continuous-looking texture; after all, that's pretty much what's happening on any repeating texture.



However, the result still has this weird line artifact, even though the texture actually lines up as the math says it should.




So is this actually strange, undesirable behavior from the node, or am I just going about this wrong?
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
It’s due to how mipmapping on GPUs works. The GPU processes on-screen pixels in 2x2 quads, and compares the UV values between them to determine the appropriate mip level to use when displaying a texture. When a modulo discontinuity happens inside one of those 2x2 quads, the UV changes by a large amount between adjacent pixels, so the GPU thinks it should be using a much smaller (blurrier) mip level.
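
    To make that concrete, here's roughly what happens at the seam (made-up example values):

        // Two horizontally adjacent pixels in the same 2x2 quad, straddling the seam.
        // Continuous uv.x:   0.998   1.002   ->  ddx(uv.x) =  0.004  (fine mip selected)
        // After the modulo:  0.998   0.002   ->  ddx(uv.x) = -0.996  (tiny, blurry mip selected)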

The easiest solution is to disable mipmaps on your texture, or use a Sample Texture 2D LOD node with a fixed LOD of zero. But I don't suggest either, as it'll result in heavy texture aliasing.
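
    In raw shader code that fixed-LOD sample is just this (a sketch; wrappedUV being the post-modulo UVs and _MainTex the sampler):

        // Equivalent of the Sample Texture 2D LOD node with the LOD pinned at 0.
        // Forcing mip 0 kills the line artifact but aliases badly at a distance.
        float2 wrappedUV = frac(uv);
        float4 col = tex2Dlod(_MainTex, float4(wrappedUV, 0.0, 0.0));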

My usual suggestion for this problem is to use tex2Dgrad and compute the derivatives yourself from the pre-modulo UVs, but Unity doesn't yet provide an equivalent node for tex2Dgrad. It does provide a tex2Dlod in the form of the aforementioned Sample Texture 2D LOD node, so you could calculate the mip level manually (again using the unmodified UVs).
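
    If you drop down to a written shader or a Custom Function node, the tex2Dgrad version looks something like this (a rough, untested sketch):

        sampler2D _MainTex;

        float4 SampleWrapped(float2 uv)
        {
            // Take the derivatives of the *continuous* UVs, before the modulo,
            // so the seam doesn't blow up the mip selection.
            float2 dx = ddx(uv);
            float2 dy = ddy(uv);
            float2 wrappedUV = frac(uv); // the Modulo / Fraction step
            return tex2Dgrad(_MainTex, wrappedUV, dx, dy);
        }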

However, there's another way to do it that's a little easier: use two UVs, one offset by 0.5 before the modulo and then offset back by -0.5 afterwards, and sample with whichever UV has the smaller derivatives.

    I showed something similar here:
    https://forum.unity.com/threads/what-is-this-mipmap-artifact.657052/#post-4512022
Pay particular attention to the nodes from the Divide node through to the Branch node. That specific example isn't doing a modulo on both UVs (the values going into the DDXY nodes), but is doing an offset to both. You want UV > Modulo (or Fraction node) > DDXY and UV > +0.5 > Modulo (or Fraction) > -0.5 > DDXY. In code form the whole trick comes out like the sketch below.
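
    As plain HLSL (a minimal sketch, assuming frac() as the modulo and a repeat-wrapped _MainTex):

        float2 uvA = frac(uv);             // seam at integer UV boundaries
        float2 uvB = frac(uv + 0.5) - 0.5; // same texels, seam shifted by half a tile

        // fwidth = abs(ddx) + abs(ddy). At any pixel at least one of the two
        // UV sets is continuous, so pick the one with the smaller derivatives.
        float2 dA = fwidth(uvA);
        float2 dB = fwidth(uvB);
        float2 finalUV = (dA.x + dA.y < dB.x + dB.y) ? uvA : uvB;

        float4 col = tex2D(_MainTex, finalUV);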
     
    Last edited: Aug 25, 2019
  3. Salmonman

    Salmonman

    Joined:
    Jul 14, 2016
    Posts:
    23
Wow, thanks for the super detailed reply. I'll try looking into these solutions.

My actual goal was to make a modified triplanar shader which doesn't need to use 3 different texture samples. The visual style I'm going for is rather forgiving of sharp edges, so I figured I could save some processing power without losing anything major. Though in the end, I wonder if it would be more efficient to simply do the 3 texture samples and blend them the normal way.

    EDIT:
Ok so yeah, I went ahead and implemented the LOD texture sampler, which fixes the problem. Might end up using that. Although once again, is this really any more efficient than a regular triplanar node?

     
    Last edited: Aug 26, 2019
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Is selecting from a set of UVs to do a single texture sample faster than sampling from 3 and picking from those? Yes. ...ish.

GPUs are super good at hiding texture sample latency; that is, the time it takes to sample one or more textures can be hidden by doing other parts of the shader. Lighting, for example, doesn't really need to know the albedo color until after those calculations are done. So unless you're hitting bandwidth limits (sampling several textures, or on mobile) or have a lot less work to do after sampling the texture, it might be hard to notice a difference.


That all said, you will notice a fixed LOD of 0. It has a significant performance hit, especially if used across multiple objects and with lots of textures. You can halve your frame rate even on desktop using that “fix”, and it's way, way worse on mobile. If your actual content doesn't have a lot of smooth normal transitions where the triplanar edges are, you shouldn't even need to fix anything, since the artifact won't appear anyway. Otherwise, for this style of triplanar, I would suggest at least manually calculating the LOD level for each face and using that value, switching between faces with the same kind of Branch node setup.
     
  5. Salmonman

    Salmonman

    Joined:
    Jul 14, 2016
    Posts:
    23
What then would be the best way to calculate a proper LOD level in Shader Graph? I figure it must have something to do with the distance between the fragment and the camera, right?
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
You have to calculate it manually, the same way you would in a hand-written vertex/fragment shader. Here's an example in GLSL:
    https://community.khronos.org/t/mipmap-level-calculation-using-dfdx-dfdy/67480/2

dFdx and dFdy correspond to the DDX and DDY nodes, and the input “texture coordinate” is the UV multiplied by the texture's resolution. The resolution of a texture input can be gotten from the Texel Size node. A sketch of the whole calculation is below.
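
    Translated to HLSL, the whole thing is roughly this (an untested sketch, assuming Unity's _MainTex_TexelSize convention, where .zw holds the texture's width and height):

        // Mip level from screen-space derivatives, per the GLSL in the link above.
        // texCoord = continuous (pre-modulo) UV * texture resolution.
        float ComputeMipLevel(float2 texCoord)
        {
            float2 dx = ddx(texCoord);
            float2 dy = ddy(texCoord);
            float deltaMaxSqr = max(dot(dx, dx), dot(dy, dy));
            return max(0.0, 0.5 * log2(deltaMaxSqr)); // 0.5 * log2(x) == log2(sqrt(x))
        }

        // Usage with the wrapped UVs from earlier in the thread:
        float mip = ComputeMipLevel(uv * _MainTex_TexelSize.zw);
        float4 col = tex2Dlod(_MainTex, float4(frac(uv), 0.0, mip));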