Remapping Values Automatically detect Bounds - Shader Graph

Discussion in 'Shaders' started by Nightstriker, Aug 7, 2018.

  1. Nightstriker

    Nightstriker

    Joined:
    Feb 11, 2018
    Posts:
    11
    Hi All,

     I am a newbie with shaders, so this is probably something I am missing about their logic. I basically want to normalize a set of values (say, the Red channel of an image) based on its range. If I use the Remap node I would need to provide the Min and Max values of the original range, which I don't know. I could have them as inputs on the shader, but then I would need to calculate them somewhere, and if I could do it inside the shader that would be more accurate and efficient.

     Using C# logic I would just loop through all the values and keep the largest/smallest, but obviously I cannot do that here. So I guess my question is: how do you usually do this with shaders? Is there a way to have "global values" that the program can access? Is there some other trickery that solves this problem? Or do I just need to do it in C#?

    Some form of guidance would be much appreciated!
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
     Is the texture one you're importing into the editor? If so, precalculate the range in C# and set the min and max as material properties, or maybe just remap the texture asset itself.
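
     A minimal sketch of that import-time approach, assuming the texture has Read/Write enabled in its import settings and the shader exposes two hypothetical float properties, _RangeMin and _RangeMax:

     Code (CSharp):
     using UnityEngine;

     public class SetRedRange : MonoBehaviour
     {
         public Texture2D sourceTexture; // needs Read/Write enabled in import settings
         public Material targetMaterial; // the material doing the remap

         void Start()
         {
             // Walk the pixels once on the CPU and track the red channel's range.
             Color[] pixels = sourceTexture.GetPixels();
             float min = float.MaxValue, max = float.MinValue;
             foreach (Color c in pixels)
             {
                 min = Mathf.Min(min, c.r);
                 max = Mathf.Max(max, c.r);
             }
             // "_RangeMin" / "_RangeMax" are placeholder names; use the Reference
             // names of the properties exposed by your own shader.
             targetMaterial.SetFloat("_RangeMin", min);
             targetMaterial.SetFloat("_RangeMax", max);
         }
     }

     The shader can then normalize each sample x with (x - _RangeMin) / (_RangeMax - _RangeMin).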

     However, if this is a texture being generated at runtime, like for a post process effect where you need to calculate the range of the current scene, then that's harder. Pulling the texture back onto the CPU to process in C# isn't what you want to do. Shaders are perfectly capable of iterating through textures on their own, and the GPU will likely churn through a texture in less time than it takes to transfer the image back to the CPU. There are multiple ways to go about this.

    Old school multi-sample downsample:
     Downsample the image over multiple Blit() calls. In each one, use a shader that samples the texture at multiple points and writes out the min and max values. The key here is that while your first image might be a single-channel texture, the first and all subsequent render targets have to be at least two channels so you can write out both the min and the max. Then you downsample that one with a shader pass that knows to look at both channels. Keep going until you only have one pixel left holding the min and max values. Now you can pass that texture into your other material (or set it as a global texture) and sample it to get the range.

     This is the technique games have been using for ages for things like auto exposure, but usually part of this technique is to only sample some of the pixels in the first downsample rather than all of them, so the result is only an approximation. Part of this is because most screen resolutions aren't nice powers of two, so scaling down to 1x1 in multiple steps would lead to lots of non-integer sizes, and old GPUs didn't have the power to sample every pixel in a reasonable time frame. It also meant it didn't require variable-size loops in the shader, which older hardware didn't support.
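
     The script side of that Blit() loop might look something like this (a sketch, assuming a square power-of-two source and two hypothetical materials: one whose shader converts the single-channel source into (min, max) pairs, and one that combines 2x2 blocks of those pairs):

     Code (CSharp):
     using UnityEngine;

     public class MinMaxDownsample : MonoBehaviour
     {
         public Texture sourceTexture;      // assumed square and power-of-two here
         public Material firstPassMaterial; // hypothetical: samples 2x2 texels of the
                                            // source R channel, writes (min, max) to RG
         public Material minMaxMaterial;    // hypothetical: samples 2x2 (min, max)
                                            // pairs and combines them into one

         public RenderTexture Reduce()
         {
             int size = sourceTexture.width / 2;
             // Two-channel float format so both min and max survive each pass.
             RenderTexture current = RenderTexture.GetTemporary(size, size, 0, RenderTextureFormat.RGFloat);
             Graphics.Blit(sourceTexture, current, firstPassMaterial);

             // Halve the resolution each pass until a single pixel remains.
             while (size > 1)
             {
                 size /= 2;
                 RenderTexture next = RenderTexture.GetTemporary(size, size, 0, RenderTextureFormat.RGFloat);
                 Graphics.Blit(current, next, minMaxMaterial);
                 RenderTexture.ReleaseTemporary(current);
                 current = next;
             }
             return current; // 1x1 texture: R = min, G = max
         }
     }

     The returned 1x1 texture can then be set as a global texture, or bound to whatever material does the remap.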

    Brute Force Compute Shader:
     Basically, do exactly what you would do in C#, but in a compute shader. Iterate over all of the pixels in the compute shader and track the min and max values, then output the final values to a texture or a compute buffer. Technically this brute force approach can be done with a fragment shader and a blit to a single-pixel target as well. The brute force approach isn't a great use of a GPU: all of the work is being done in one "thread", and GPUs by their nature are best when work can be spread out across many compute units. It is, however, the most straightforward approach, and for small enough textures it can be quite fast.
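
     The C# side of that might look like the sketch below, assuming a hypothetical MinMax.compute asset whose single kernel loops over every pixel in one thread and writes the two floats into a buffer:

     Code (CSharp):
     using UnityEngine;

     public class BruteForceMinMax : MonoBehaviour
     {
         public ComputeShader minMaxShader; // hypothetical MinMax.compute: one thread
                                            // loops over every pixel, tracks min/max
         public Texture sourceTexture;

         public Vector2 ComputeRange()
         {
             // Buffer to receive the two floats (min, max) from the kernel.
             ComputeBuffer result = new ComputeBuffer(2, sizeof(float));
             int kernel = minMaxShader.FindKernel("MinMax"); // hypothetical kernel name
             minMaxShader.SetTexture(kernel, "_Source", sourceTexture);
             minMaxShader.SetInts("_Size", sourceTexture.width, sourceTexture.height);
             minMaxShader.SetBuffer(kernel, "_Result", result);
             minMaxShader.Dispatch(kernel, 1, 1, 1); // a single group: the brute force part

             float[] values = new float[2];
             result.GetData(values); // note: GetData stalls until the GPU finishes
             result.Release();
             return new Vector2(values[0], values[1]);
         }
     }

     If the consuming shader can read the result directly, you can skip the GetData() readback entirely and just bind the buffer (or a result texture) to the material instead, keeping everything on the GPU.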

     There are also approaches that sit somewhere between the two, like using compute shaders to do the downsampling in multiple passes, or other techniques for spreading as much of the work across multiple threads as possible.
     
  3. Nightstriker

    Nightstriker

    Joined:
    Feb 11, 2018
    Posts:
    11
     Hi bgolus, and thank you for your reply. I am glad there is a way. Do you know of any resources that cover these two approaches that I could use and adapt?

     Also, will I be able to do either of them using the Unity Shader Graph editor by creating my own custom nodes? I couldn't find a way to implement complicated behaviours, like having global variables.
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
     The auto exposure in the Post Processing stack does something similar, and I think it uses something like the brute force compute shader approach, but I wouldn't necessarily recommend starting with that. For the old school method, the best I can think of is to look into tutorials on doing blurs, which is where that technique gets used most often.

     No. Most of this has to be done with custom vertex/fragment shaders and command buffer scripts. The Shader Graph can take the final range values as input material properties, which you would have to set from script as well. Node based shader editors, by their nature, are not well suited to writing these kinds of shaders.
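
     For example, if the downsample above produced a 1x1 min/max texture, the hand-off to a Shader Graph material might look like this (property names are hypothetical; they have to match the Reference fields of the properties exposed in the graph):

     Code (CSharp):
     using UnityEngine;

     public class FeedShaderGraph : MonoBehaviour
     {
         public Material shaderGraphMaterial; // material built from the Shader Graph
         public RenderTexture minMaxTexture;  // e.g. the 1x1 result of the downsample

         void Update()
         {
             // "_MinMaxTex" is a placeholder Reference name for a Texture2D
             // property exposed in the graph.
             shaderGraphMaterial.SetTexture("_MinMaxTex", minMaxTexture);

             // Or make it visible to every shader at once as a global:
             // Shader.SetGlobalTexture("_MinMaxTex", minMaxTexture);
         }
     }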