Bug: Position transformation

Discussion in 'Shader Graph' started by rsodre, Apr 7, 2019.

  1. rsodre

    rsodre

    Joined:
    May 9, 2012
    Posts:
    229
    I'm making a custom vertex displacement node that needs to do its calculations in world space.
    Since the Master node's Position input is in object space, I'm transforming my output back using the same functions the Transform node uses, without success. Even using the Transform node itself to convert my output to object space does not work.

    That seemed strange, so I ran a small test. In a very simple scene and shader with just a transformed mesh, passing the Position in Object space straight to the Master node works fine, as expected:

    Screen Shot 2019-04-07 at 16.36.39.png

    If I send the position in World space to a Transform node (World > Object), then to the Master node, I don't get the object-space coordinates at all. I can't even find where my mesh ended up...

    Screen Shot 2019-04-06 at 19.36.56.png

    This is a very simple workflow, but something is wrong here.
    Am I missing something?
    Or is that a bug?
     
    Last edited: Apr 7, 2019
  2. StaggartCreations

    StaggartCreations

    Joined:
    Feb 18, 2015
    Posts:
    2,266
    Try multiplying the world-space position by a Transformation Matrix node set to "Model". I've found this works for transforming world-space directions/positions to local space.
     
  3. rsodre

    rsodre

    Joined:
    May 9, 2012
    Posts:
    229
    I think you mean the inverse model, right? I also tried that, and the result is the same as with the Transform node. Multiplying by the Model matrix also gives me strange results.

    For the sake of completeness, here's the result multiplying by both...

    Screen Shot 2019-04-07 at 16.33.23.png
    Screen Shot 2019-04-07 at 16.33.40.png
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Try inverting the order in which you're piping your position and matrix into the Multiply node. You also need to convert the Vector3 value coming out of the Position node into a Vector4 with a w of 1.0 before multiplying it.
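
    The w component matters because the translation lives in the matrix's fourth column: with w = 1 the value is treated as a point and gets translated; with w = 0 it's treated as a direction and translation is ignored. A rough numpy sketch of the idea (the matrix and positions are made-up values, not from the thread):

    ```python
    import numpy as np

    # Hypothetical model matrix: translate by (2, 0, 0). Values are
    # assumptions for illustration only.
    model = np.eye(4)
    model[:3, 3] = [2.0, 0.0, 0.0]
    inverse_model = np.linalg.inv(model)

    object_pos = np.array([1.0, 0.0, 0.0])

    # Append w = 1.0 so the matrix's translation column is applied (a point).
    p = np.append(object_pos, 1.0)
    world_pos = model @ p                    # object -> world
    round_trip = inverse_model @ world_pos   # world -> object, recovers the input

    # With w = 0.0 the value is treated as a direction: translation is ignored.
    d = np.append(object_pos, 0.0)
    world_dir = model @ d
    ```

    Note the order, too: with the usual column-vector convention it's matrix times vector, so swapping the Multiply inputs computes the transpose instead.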
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    By the way, I tried the Position (World) > Transform (World > Object) > Master node setup from your second example and it worked correctly in LWRP 5.10, and it likely also works properly in HDRP 5.10 if you're not already on that.
     
  6. Kink3d

    Kink3d

    Joined:
    Nov 16, 2015
    Posts:
    45
    Hi,

    This is a known issue in Shader Graph with HDRP: HDRP uses camera-relative rendering, and Shader Graph does not currently handle that correctly. There is an open PR to solve this case:

    https://github.com/Unity-Technologies/ScriptableRenderPipeline/pull/3130

    But the main issue remains that we need to solve both camera-relative and absolute world position in a pipeline-agnostic way, as LWRP does not (at least currently) support camera-relative rendering. It is likely that this PR will evolve into something larger that handles this issue in a complete way...
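
    For readers hitting this: the camera-relative part can be sketched in a few lines of numpy (all values made up). Under camera-relative rendering, the "world position" the shader sees has the camera translation removed, so a world > object transform built for absolute positions receives the wrong input:

    ```python
    import numpy as np

    cam_pos = np.array([10.0, 2.0, -5.0])    # camera world position (made up)
    abs_world = np.array([12.0, 3.0, -5.0])  # a vertex's absolute world position (made up)

    # Under camera-relative rendering the shader receives positions with the
    # camera translation already removed:
    rel_world = abs_world - cam_pos

    # A world -> object transform that expects absolute positions only works
    # if the camera position is added back first:
    restored = rel_world + cam_pos
    ```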
     
  7. rsodre

    rsodre

    Joined:
    May 9, 2012
    Posts:
    229
    No success with either Model or Inverse Model.

    I think the Transform node already does that, but I tried all of these anyway, without success:
    • Position > Split > Combine (RGB,1) > Multiply (Model, RGBA) > Split > Combine (RGB) > Master Position
    • Position > Split > Combine (RGB,1) > Multiply (Inverse Model, RGBA) > Split > Combine (RGB) > Master Position
    • Position > Split > Combine (RGB,0) > Multiply (Model, RGBA) > Split > Combine (RGB) > Master Position
    • Position > Split > Combine (RGB,0) > Multiply (Inverse Model, RGBA) > Split > Combine (RGB) > Master Position
    • Position > Split > Combine (RGB,1) > Multiply (RGBA, Model) > Split > Combine (RGB) > Master Position
    • Position > Split > Combine (RGB,1) > Multiply (RGBA, Inverse Model) > Split > Combine (RGB) > Master Position
    • Position > Split > Combine (RGB,0) > Multiply (RGBA, Model) > Split > Combine (RGB) > Master Position
    • Position > Split > Combine (RGB,0) > Multiply (RGBA, Inverse Model) > Split > Combine (RGB) > Master Position
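
    For reference, in a pipeline without the camera-relative issue, only one of those eight combinations should mathematically recover the object-space position: Inverse Model, w = 1, with the matrix in the first Multiply slot (matrix × vector, assuming the usual column-vector convention). A numpy sketch that enumerates them (matrix and position values are made up):

    ```python
    import numpy as np
    from itertools import product

    # Made-up model matrix: a pure translation, enough to tell the cases apart.
    model = np.eye(4)
    model[:3, 3] = [2.0, 1.0, 0.0]
    matrices = {"Model": model, "Inverse Model": np.linalg.inv(model)}

    object_pos = np.array([1.0, 2.0, 3.0])
    world_pos = (model @ np.append(object_pos, 1.0))[:3]

    results = {}
    for w, name, order in product((1.0, 0.0), matrices,
                                  ("matrix*vector", "vector*matrix")):
        v = np.append(world_pos, w)
        M = matrices[name]
        out = (M @ v if order == "matrix*vector" else v @ M)[:3]
        results[(w, name, order)] = np.allclose(out, object_pos)

    # Only (1.0, "Inverse Model", "matrix*vector") should come back True.
    ```

    So if none of the eight variants works, the input itself is wrong (as Kink3d explains above for HDRP's camera-relative positions), not the math.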
     
  8. rsodre

    rsodre

    Joined:
    May 9, 2012
    Posts:
    229
    Thanks for the feedback!
    Means I did not unlearn anything. ;)
     
  9. Tricktale

    Tricktale

    Joined:
    Jan 23, 2015
    Posts:
    42
    I think I'm encountering this issue as well, while trying to create a shader that collapses all vertices to a world position based on distance. If I subtract the camera position, the mesh appears in the correct position, but if the mesh is scaled or rotated the result is not correct. Maybe there's something I'm not doing correctly.

    Here's the graph:

    collapse.jpg
     
  10. Kink3d

    Kink3d

    Joined:
    Nov 16, 2015
    Posts:
    45
    Forgive me if my advice is incorrect, as I'm going only by your graph, but it looks as if you're subtracting the camera position from the object-space position. You should be doing the subtraction before the multiplication with the inverse model matrix.
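
    The reason the order matters: subtraction and a matrix multiply only commute when the matrix is a pure translation. Once the model matrix contains rotation or scale, subtracting the camera position after the inverse-model multiply gives a different (wrong) result than subtracting it before. A numpy sketch with made-up values:

    ```python
    import numpy as np

    # Made-up model matrix with rotation, non-uniform scale, and translation.
    angle = np.radians(30.0)
    rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                    [np.sin(angle),  np.cos(angle), 0.0],
                    [0.0,            0.0,           1.0]])
    scale = np.diag([2.0, 1.0, 1.0])
    model = np.eye(4)
    model[:3, :3] = rot @ scale
    model[:3, 3] = [5.0, 0.0, 0.0]
    inv_model = np.linalg.inv(model)

    cam_pos = np.array([1.0, 2.0, 3.0])
    world_pos = np.array([4.0, 1.0, 0.0])

    def to_object(p):
        """World -> object for a point (w = 1)."""
        return (inv_model @ np.append(p, 1.0))[:3]

    # Subtract the camera position in world space, then transform (the fix):
    subtract_first = to_object(world_pos - cam_pos)

    # Transform first, then subtract in object space (the broken graph);
    # the two differ because the rotation/scale acts on the camera offset too.
    subtract_after = to_object(world_pos) - cam_pos
    ```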
     
  11. Tricktale

    Tricktale

    Joined:
    Jan 23, 2015
    Posts:
    42
    Yes! Thank you so much! That fixed it. It didn't even occur to me that that might be the issue.