
Question Normal map for procedurally generated texture

Discussion in 'Shader Graph' started by IamTirion, Jun 6, 2019.

  1. IamTirion

    Joined:
    Mar 3, 2019
    Posts:
    20
    Hello, I have been trying to make procedural textures with Shader Graph, like what Substance does. I have been told that Shader Graph was not designed for this. I am not sure how true that is, but I still wish to try. Below is a screenshot of my attempt: I made a tile by combining two perpendicular rectangles, then made the normal map using the Normal From Height node.
    [Attached image: upload_2019-6-6_21-16-11.png]
    The result is this:
    [Attached image: upload_2019-6-6_21-23-8.png]
    The indented effect of the seams between the tiles is visible, but how do I make it more pronounced? Increasing the normal map strength stops having any effect beyond a certain point. Should I be making textures with Shader Graph at all? I need help most with the normal map, but is there anything else I can improve?
     
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,363
    The Rectangle node uses screen-space derivatives to produce an anti-aliased rectangle whose blurred edge is always ~1 pixel wide.

    The Normal From Height node also uses screen-space derivatives to find out how much a value changes. The result is what you see above: your normal-mapped edge is only ever 1 pixel wide, and rather blocky as well (because screen-space derivatives work on 2x2 pixel quads).

    You'll get better results if you don't use the Rectangle node or the Normal From Height node, and instead calculate a soft-edged rectangle yourself at 4 different UV offsets and construct a normal from that manually.


    And yes, like you've already heard, Shader Graph is not really intended for procedural texture generation. It can certainly be used that way, but doing so carries potentially significant performance penalties.
     
  3. IamTirion

    Joined:
    Mar 3, 2019
    Posts:
    20
    Thank you for the detailed explanation. I do not understand your solution, though. How can I calculate a soft-edged rectangle? Why are 4 UV offsets needed? Are they for the red, green, blue, and alpha channels respectively? About constructing a normal manually, do you mean that if I follow your steps, I will get an image that I can make a normal map from, just like with real photos?

    Why does procedural texture generation have performance penalties? I heard that procedural textures require less storage space than image files. Though I mainly want to use them because a fixed image cannot be tweaked, such as changing the width of the seams. Is there no better way?

    I have also heard that an official asset called Measured Materials used only Shader Graph in HDRP to produce a wide range of realistic materials. Is Shader Graph in HDRP intended for procedural texture generation, then?
     
  4. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,363
    The code used by the Rectangle node is shown in the documentation. You can recreate that code with a node graph. The only difference would be to use a material property instead of fwidth() to modify the "d" value, letting you set the sharpness.
    https://docs.unity3d.com/Packages/com.unity.shadergraph@5.3/manual/Rectangle-Node.html
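
    For reference, the generated code on that page is the first function below. The second is a sketch of the modification: fwidth(d) is swapped for an edge width you'd expose as a material property (SoftRectangle_float and EdgeWidth are just illustrative names, e.g. for use in a Custom Function node):

        // Rectangle node code as shown in the Shader Graph documentation.
        void Unity_Rectangle_float(float2 UV, float Width, float Height, out float Out)
        {
            float2 d = abs(UV * 2 - 1) - float2(Width, Height);
            d = 1 - d / fwidth(d); // fwidth keeps the edge ~1 pixel wide on screen
            Out = saturate(min(d.x, d.y));
        }

        // Same rectangle, but with a controllable edge width instead of fwidth().
        // EdgeWidth is an illustrative name for a material property, in UV units.
        void SoftRectangle_float(float2 UV, float Width, float Height, float EdgeWidth, out float Out)
        {
            float2 d = abs(UV * 2 - 1) - float2(Width, Height);
            d = 1 - d / EdgeWidth; // larger EdgeWidth = softer, wider edge
            Out = saturate(min(d.x, d.y));
        }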

    Because that's how you calculate a normal from a height map. You sample 4 positions in a cross shape, find the slopes along the horizontal and vertical pairs, and take the cross product of the two resulting tangent vectors.
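
    In HLSL that might look something like the sketch below. It assumes the SoftRectangle_float function above; Offset (the sampling distance, in UV units) and Strength are illustrative parameters you would expose as material properties:

        // Sample the soft rectangle at 4 UVs arranged in a cross, treat the
        // values as heights, and build a normal from the two slopes.
        void RectangleNormal_float(float2 UV, float Width, float Height,
                                   float EdgeWidth, float Offset, float Strength,
                                   out float3 Normal)
        {
            float hL, hR, hD, hU;
            SoftRectangle_float(UV - float2(Offset, 0), Width, Height, EdgeWidth, hL);
            SoftRectangle_float(UV + float2(Offset, 0), Width, Height, EdgeWidth, hR);
            SoftRectangle_float(UV - float2(0, Offset), Width, Height, EdgeWidth, hD);
            SoftRectangle_float(UV + float2(0, Offset), Width, Height, EdgeWidth, hU);

            // Tangent vectors along U and V; the z components are the height slopes.
            float3 tangentU = float3(2.0 * Offset, 0.0, (hR - hL) * Strength);
            float3 tangentV = float3(0.0, 2.0 * Offset, (hU - hD) * Strength);

            // Cross product of the two tangents gives the tangent-space normal.
            Normal = normalize(cross(tangentU, tangentV));
        }

    Unlike the Normal From Height version, nothing here depends on screen-space derivatives, so the bevel width and strength stay fully under your control regardless of how close the camera gets.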

    You'll get a normal map of the image, not of the object in the image. It's just based on the brightness of the pixels, so a bright highlight, or a shadow, will produce peaks and valleys in the normal map. Tools that try to generate normal maps from images are a bit more complicated, ranging from cheap (blur the texture with multiple different ranges and techniques, generate normals from those blurred images and blend those together to get something sorta believable) to expensive (using machine learning to try to make sense of the lighting and make guesses about the actual structure of the objects in the image). This isn't either of those.

    If you've used Substance Designer, you'll have seen the numbers under each node with "ms" after them. That is the amount of time that node's calculations took, in milliseconds. To render at 30 fps, you have ~33 ms per frame.

    Substance Designer calculates all of the nodes and writes them to textures. Those baked out textures are what are actually used for rendering.

    Shader Graph calculates all of the nodes in the graph for every single pixel for every single frame.

    The nodes and timings aren't going to be exactly equivalent, but if you were to make a graph in Substance Designer that takes 100 ms total to calculate, something similar in Shader Graph is going to take something close to that, maybe more, maybe less, as again they're not exactly the same.

    For something like what you're doing, it's fine. But once you start getting to the "generating rocks embedded in sand" level of Substance Designer texture generation, it's not going to be viable.

    If by "only the Shader Graph" you mean "and 1.5 GB of scanned real world texture data", then yes!

    It's not doing any texture synthesis in Shader Graph, just using Shader Graph to tweak some values and do some blending.
     
  5. IamTirion

    Joined:
    Mar 3, 2019
    Posts:
    20
    You are a lifesaver. I could never have hoped to get so many of my questions answered at once.