
Reading relative pixel position on face

Discussion in 'Shaders' started by mgto, Sep 19, 2018.

  1. mgto

    Joined:
    May 30, 2013
    Posts:
    22
    Hi everyone!

    I'm wondering if there is any direct way to access the interpolation values in the fragment or surface shader.
    What I'm looking for is a value that describes the position of the pixel (green) within the triangle. My end result would be three floats from 0-1, depicted here as blue, red and yellow arrows.
    For example: if the green pixel were positioned directly at the upper-right vertex, the yellow and red values would be 1 and the blue one would be 0. If the pixel were exactly at the interpolation centre, all three values would be 0.5.

    [image: triangle showing the green pixel and the blue, red and yellow arrows]
    Currently I'm mapping each triangle in a normalized way to a UV channel (v0 = (0,0), v1 = (1,0), v2 = (0,1)). Then I read the UV position and, after a bit of trigonometry, retrieve the three values.

    But I'm wondering if I could skip the UV coords and the calculations and somehow read the interpolated position directly out of... somewhere?! At the least, the hardware should know it..
     
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,248
    You're talking about the barycentric coordinates and interpolators. Nope, there's no way to get access to them directly.

    Encoding them into a UV channel or vertex color is the easiest way to do it.

    Technically AMD's GCN based GPUs have access to the barycentric coordinates in the fragment shader, since their GPUs do the interpolation directly in the fragment shader rather than in fixed function hardware. However, they're only accessible in the shader assembly code, or via HLSL / GLSL extensions which Unity doesn't support. Nvidia GPUs do not have access to the barycentric coordinates in the fragment shader and still use fixed function hardware to calculate and pass the interpolated values to the fragment shader. Their Turing architecture (RTX 2080) will be the first to give access to the barycentric coordinates directly, but again via extensions.
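    For reference, a minimal sketch of what that encoding looks like in a Unity shader, assuming the mesh's second UV channel is authored per triangle as v0 = (0,0), v1 = (1,0), v2 = (0,1) as described above (the shader name and channel choice here are just illustrative):

    Code (CSharp):
    Shader "Sketch/BarycentricFromUV"
    {
        SubShader
        {
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                struct appdata
                {
                    float4 vertex : POSITION;
                    float2 uv2    : TEXCOORD1; // per triangle: v0=(0,0), v1=(1,0), v2=(0,1)
                };

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float2 uv2 : TEXCOORD0;
                };

                v2f vert (appdata v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.uv2 = v.uv2; // the GPU interpolates this with the barycentric weights
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // The three weights always sum to 1, so two channels are enough.
                    float3 bary = float3(1.0 - i.uv2.x - i.uv2.y, i.uv2.x, i.uv2.y);
                    return fixed4(bary, 1.0); // visualize: each corner lights up one channel
                }
                ENDCG
            }
        }
    }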
     
  3. mgto

    Joined:
    May 30, 2013
    Posts:
    22
    Ah well..
    but anyway - thx bgolus for this great info!
     
  4. ChrisDirkis

    Joined:
    Jun 1, 2017
    Posts:
    38

    [link to a geometry shader approach for generating barycentric coordinates]
  5. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,248
    That's getting the barycentric coordinates in a geometry shader, then passing them on to the fragment shader in the same form the OP is already getting them.

    Technically you can pass all three vertices' data to the fragment shader that way, along with the barycentric coordinates, but you can just as easily encode that data into the mesh itself. Geometry shaders are kind of a last resort.
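    For completeness, a rough sketch of that geometry shader route (shader model 4.0+; the struct and function names are made up, not taken from the linked post):

    Code (CSharp):
    // Sketch only: the CGPROGRAM body of a pass that injects barycentrics
    // via a geometry shader. Needs shader model 4.0+.
    #pragma target 4.0
    #pragma vertex vert
    #pragma geometry geom
    #pragma fragment frag
    #include "UnityCG.cginc"

    struct v2g { float4 pos : SV_POSITION; };

    struct g2f
    {
        float4 pos  : SV_POSITION;
        float3 bary : TEXCOORD0;
    };

    v2g vert (float4 vertex : POSITION)
    {
        v2g o;
        o.pos = UnityObjectToClipPos(vertex);
        return o;
    }

    [maxvertexcount(3)]
    void geom (triangle v2g input[3], inout TriangleStream<g2f> stream)
    {
        // Give each corner one row of the identity matrix; the regular
        // interpolation then hands the fragment shader its barycentric weights.
        static const float3 corners[3] =
            { float3(1,0,0), float3(0,1,0), float3(0,0,1) };
        for (int i = 0; i < 3; i++)
        {
            g2f o;
            o.pos  = input[i].pos;
            o.bary = corners[i];
            stream.Append(o);
        }
    }

    fixed4 frag (g2f i) : SV_Target
    {
        return fixed4(i.bary, 1.0); // visualize the weights
    }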
     
  6. ChrisDirkis

    Joined:
    Jun 1, 2017
    Posts:
    38
    Ah, sorry, my bad. I misread the post pretty badly.
     
  7. mgto

    Joined:
    May 30, 2013
    Posts:
    22
    Hey I don't mind! Great link nonetheless.

    Btw. I guess I was asleep yesterday.. You don't need trigonometry if the UVs are set up correctly.. Don't know what I was thinking..
     
  8. neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    Can you share how you retrieved the per-vertex data using the barycentric coordinates? I have been trying to do just that with no success so far.
     
  9. mgto

    Joined:
    May 30, 2013
    Posts:
    22
    You can use two vertex color channels. Let's call them c1 and c2.. they could be the red and green channels. Then assign the value 1 in channel 1 to vertex 1 and the value 1 in channel 2 to vertex 2. Using the red and green channels, this would mean vertex 1 is red and vertex 2 is green.
    Then you can say for any pixel on this triangle:
    distance 1 = c1 (e.g. red channel)
    distance 2 = c2 (e.g. green channel)
    distance 3 = 1 - c1 - c2

    Or you use the two components of a UV channel and set the vertex UVs like this: v1 = (0,0); v2 = (1,0); v3 = (0,1)
    Then you can just say
    distance 1 = 1 - u - v
    distance 2 = u
    distance 3 = v
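
    In shader code both variants are one-liners. A minimal sketch (the helper names are made up):

    Code (CSharp):
    // Vertex color variant: c1 = red channel, c2 = green channel.
    float3 BaryFromColor (fixed4 col)
    {
        return float3(col.r, col.g, 1.0 - col.r - col.g);
    }

    // UV variant: v1 = (0,0); v2 = (1,0); v3 = (0,1).
    float3 BaryFromUV (float2 uv)
    {
        return float3(1.0 - uv.x - uv.y, uv.x, uv.y);
    }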
     
  10. neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    Yeah, I get the distance part, but how do you reconstruct the data at the vertex?

    Let's say I do it like you say, but also have a texture atlas index in the alpha channel. It gets interpolated at the pixel position, so it's jumbled into a mix of the three vertices' indices. I want to find the index of the closest vertex (via the min distance from what you shared) to select the texture (and hence have 255 blending cases). How do I reconstruct the index at the vertices (i.e. distance = 0)?

    Or maybe I misunderstood and that's not what you are doing?
     
  11. mgto

    Joined:
    May 30, 2013
    Posts:
    22
    Well, this costs you even more.. 3 more channels.

    You most likely want to do it to store some discrete information per vertex.

    In my case I use 3 channels of the vertex colours (color.rgb, since I use the UVs for the position value).
    You need to spread the data over those three channels - one per vertex of the triangle - as the values would otherwise mix during interpolation and kind of 'erase' the original discrete information.

    So for example, we store the values [0.6 | 0.99 | 0.1] for vertices 0, 1 and 2 in the vertex colour channels r, g and b:
    vertex 0 gets the value red = 0.6 and the other two get red = 0
    vertex 1 gets green = 0.99 and the other two green = 0
    vertex 2 gets blue = 0.1 and the other two blue = 0

    Now you can reconstruct the original vertex value from the interpolated pixel value by just re-scaling it by the position value.

    Let's say your current pixel has a position value of 0.25 for vertex 0 (let's call it 'strength'),
    and an interpolated value of 0.15 (vertex 0 -> red channel).
    Then you get the original vertex value by: interpolated_value * 1 / normalized_position_value
    In our example: 0.15 * 1 / 0.25 = 0.6 <- which is our initial value for vertex 0

    Now, after you know the original three vertex values (most likely a material Id or something similar), you can run the relevant shader code (up to) three times (once for each unique material Id) and then blend the resulting outputs together:
    finalOut = output0 * strength0 + output1 * strength1 + output2 * strength2

    Also you might want to add some checks for cases where two or all vertices share the same value to avoid unnecessary calculations.

    Hope this helps. I came up with this myself as I couldn't find any simple solution on the internet, so it might not be the best solution out there.

    cheers
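
    Putting the reconstruction into shader terms, roughly (the function names are made up; ShadeForId stands in for whatever per-Id shading is actually run):

    Code (CSharp):
    // Hypothetical stand-in for whatever per-Id shading you actually run.
    float4 ShadeForId (float id)
    {
        return float4(id, id, id, 1.0); // placeholder
    }

    // bary = the three 'strength' values described above
    // col  = interpolated vertex color; each channel carries one vertex's value
    float4 BlendByVertex (float3 bary, fixed4 col)
    {
        // Undo the interpolation: each channel was scaled by its own weight.
        // Guard against division by zero at the opposite edge (weight == 0).
        float3 v;
        v.x = bary.x > 1e-5 ? col.r / bary.x : 0.0; // original value at vertex 0
        v.y = bary.y > 1e-5 ? col.g / bary.y : 0.0; // original value at vertex 1
        v.z = bary.z > 1e-5 ? col.b / bary.z : 0.0; // original value at vertex 2

        // Shade (up to) once per vertex value, then blend by the weights:
        return ShadeForId(v.x) * bary.x
             + ShadeForId(v.y) * bary.y
             + ShadeForId(v.z) * bary.z;
    }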
     
    Last edited: Sep 22, 2018
  12. neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    THANKS! :D I'll test it, it looks like a dot product too :eek:
     
  13. Feral_Pug

    Joined:
    Sep 29, 2019
    Posts:
    49