I'm making a custom vertex displacement node that needs to do its calculations in world space. Since the Master node Position input is in object space, I'm transforming my output using the same Transform node functions, without success. Even using the Transform node to convert my output back to Object space does not work. That seemed strange, so I did a small test. In a very simple scene and shader with just a transformed mesh, passing the Position in Object space to the Master node works fine, as expected: If I send the position in World space to a Transform node (World > Object), then to the Master node, I do not get the object space coordinates at all. I can't even find where my mesh ended up... This is a very simple workflow, but something is wrong here. Am I missing something? Or is this a bug?
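For reference, the shader-code equivalent of what I'm trying to do looks roughly like this (a minimal HLSL sketch on my side, assuming the standard Unity built-in matrices; the displacement itself is just a placeholder):

```hlsl
// Minimal sketch of the intended workflow, assuming the standard Unity built-in matrices.
// positionOS is the object-space vertex position (the Position node set to Object).
float3 DisplaceInWorldSpace(float3 positionOS)
{
    // Object > World (what the Transform node should be doing)
    float3 positionWS = mul(unity_ObjectToWorld, float4(positionOS, 1.0)).xyz;

    // ...world-space displacement goes here (placeholder)...
    positionWS += float3(0.0, 1.0, 0.0);

    // World > Object, since the Master node Position input expects object space
    return mul(unity_WorldToObject, float4(positionWS, 1.0)).xyz;
}
```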
Try multiplying a world-space position by a Transformation Matrix node set to "Model". I found this works to transform world-space directions/positions to local-space.
I think you mean the Inverse Model, right? I also tried that, and the result is the same as with the Transform node. Multiplying by the Model matrix also gives me strange results. For the sake of completeness, here's the result of multiplying by both...
Try inverting the order you're piping your position and matrix into the Multiply node. You also need to convert the Vector3 value coming out of the Position node into a Vector4 with a w of 1.0 before multiplying it.
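In shader terms, the w component is what distinguishes a point from a direction; roughly this (a hedged sketch, assuming Unity's mul(matrix, vector) column-vector convention, i.e. matrix first, vector second):

```hlsl
// Rough sketch of why w matters, assuming Unity's mul(matrix, vector) convention.
float3 WorldPointToObject(float3 positionWS)
{
    // w = 1: a point, so the matrix translation is applied
    return mul(unity_WorldToObject, float4(positionWS, 1.0)).xyz;
}

float3 WorldDirectionToObject(float3 directionWS)
{
    // w = 0: a direction, so the matrix translation is ignored
    return mul(unity_WorldToObject, float4(directionWS, 0.0)).xyz;
}
```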
btw, I tried the World Position > Transform (World > Object) > Master node setup you have in your second example and it worked correctly in LWRP 5.10; it likely also works properly in HDRP 5.10, if you're not already on that.
Hi, this is a known issue in SG with HDRP: HDRP uses camera-relative rendering, and SG is currently not handling that correctly. There is a PR open to solve this case: https://github.com/Unity-Technologies/ScriptableRenderPipeline/pull/3130 But the main issue remains that we need to handle both camera-relative and absolute world position in a pipeline-agnostic way, as LWRP does not (at least currently) support camera-relative rendering. It is likely that this PR will evolve into something larger that handles this issue in a complete way...
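To illustrate what camera-relative rendering means for these transforms, here is a conceptual sketch only (not HDRP's actual internals): the "world" position the graph works with has the camera position subtracted out, so a World > Object transform that assumes absolute positions ends up offset.

```hlsl
// Conceptual sketch, not HDRP's actual code: with camera-relative rendering the
// shader works with positions relative to the camera, so converting between the
// two conventions is just an offset by the camera position.
float3 AbsoluteToCameraRelative(float3 absolutePositionWS)
{
    return absolutePositionWS - _WorldSpaceCameraPos;
}

float3 CameraRelativeToAbsolute(float3 cameraRelativePositionWS)
{
    return cameraRelativePositionWS + _WorldSpaceCameraPos;
}
```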
No success with either Model or Inverse Model. I think the Transform node already does that, but anyway I tried all of this, without success:

Position > Split > Combine (RGB,1) > Multiply (Model, RGBA) > Split > Combine (RGB) > Master Position
Position > Split > Combine (RGB,1) > Multiply (Inverse Model, RGBA) > Split > Combine (RGB) > Master Position
Position > Split > Combine (RGB,0) > Multiply (Model, RGBA) > Split > Combine (RGB) > Master Position
Position > Split > Combine (RGB,0) > Multiply (Inverse Model, RGBA) > Split > Combine (RGB) > Master Position
Position > Split > Combine (RGB,1) > Multiply (RGBA, Model) > Split > Combine (RGB) > Master Position
Position > Split > Combine (RGB,1) > Multiply (RGBA, Inverse Model) > Split > Combine (RGB) > Master Position
Position > Split > Combine (RGB,0) > Multiply (RGBA, Model) > Split > Combine (RGB) > Master Position
Position > Split > Combine (RGB,0) > Multiply (RGBA, Inverse Model) > Split > Combine (RGB) > Master Position
I think I'm running into this issue as well, trying to create a shader that collapses all vertices to a world position based on distance. If I subtract the camera position, the mesh appears in the correct position, but if the mesh is scaled or rotated, the result is not correct. Maybe there's something I'm not doing correctly. Here's the graph:
Forgive me if my advice is incorrect, as I'm going only by your graph, but it looks as if you're subtracting the camera position from the object-space position; you should be doing that before the multiplication with the inverse model matrix, i.e. while the position is still in world space.
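In code, the order I mean is roughly this (just a sketch guessed from your graph; collapseTargetWS and collapseAmount are placeholder names, not nodes you actually have):

```hlsl
// Sketch of the suggested order, guessed from the graph: do the camera-position
// subtraction and any other world-space math first, then apply the inverse model
// matrix last, otherwise the object's rotation/scale gets applied to the offset too.
float3 CollapseTowardWorldTarget(float3 positionOS, float3 collapseTargetWS, float collapseAmount)
{
    float3 positionWS = mul(unity_ObjectToWorld, float4(positionOS, 1.0)).xyz;

    // world-space work: offset the target by the camera position, then collapse toward it
    positionWS = lerp(positionWS, collapseTargetWS - _WorldSpaceCameraPos, collapseAmount);

    // only now go back to object space with the inverse model matrix
    return mul(unity_WorldToObject, float4(positionWS, 1.0)).xyz;
}
```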