
Question: Converting Amplify reconstruct world pos from depth to SG

Discussion in 'Shader Graph' started by Passeridae, Jul 16, 2021.

  1. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Here's the native Amplify graph that reconstructs world pos from depth. I've stripped out all the parts that are unnecessary for my goal:
    upload_2021-7-16_23-32-20.png

    Here's my best attempt at recreating it in Shader Graph:
    upload_2021-7-16_23-34-32.png

    The output is not the same.
    So here's a list of things that may have gone wrong. If you're good with both systems (or shaders in general), please take a look:

    1) Amplify's "Screen Position: Normalized" may not be the same as SG's "Screen Position: Raw". If that's the case, it would be good to know what the right equivalent in SG is.

    2) Here's the Amplify "Screen Depth" node:
    upload_2021-7-16_23-46-27.png
    My best guess is that its equivalent in SG is "Scene Depth" set to "Raw", but I'm not so sure.

    3) Amplify "Projection Matrices: Inverse Camera Projection" may not be the same as SG "Transformation Matrix: Inverse View Projection".

    4) I have no idea how to replace Amplify's "Camera to World Matrix".

    Thanks!
     


  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,339
    1) Default is equivalent.

    2) Yes.

    3) There is no direct equivalent in Shader Graph. That's the unity_CameraInvProjection matrix, which Shader Graph doesn't expose. The closest is the Transform Matrix node set to Inverse Projection, but they're not quite the same.

    4) Again, there's no direct equivalent in Shader Graph. That's the unity_CameraToWorld matrix, which Shader Graph doesn't expose. The closest is the Transform Matrix node set to Inverse View, and again, they're not quite the same.
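
    If it helps, here's roughly what those two matrices are doing, written as HLSL you could drop into a Custom Function node. Treat it as a sketch: the function name and the depth remapping are my assumptions about one way to do it, not code pulled out of the ASE node.

        float3 WorldPosFromDepth_CameraMatrices(float2 screenUV, float rawDepth)
        {
            // unity_CameraInvProjection is always the OpenGL-style projection
            // matrix, so get depth into the OpenGL -1..1 NDC range first.
            #if defined(UNITY_REVERSED_Z)
            rawDepth = 1.0 - rawDepth; // assumption: undo reversed-Z depth
            #endif
            float4 ndc = float4(screenUV * 2.0 - 1.0, rawDepth * 2.0 - 1.0, 1.0);

            // NDC -> view space (the Inverse Camera Projection step), then
            // the perspective divide.
            float4 viewPos = mul(unity_CameraInvProjection, ndc);
            viewPos /= viewPos.w;

            // unity_CameraToWorld treats +z as forward, while this view space
            // looks down -z, so flip z before the Camera to World step.
            viewPos.z = -viewPos.z;
            return mul(unity_CameraToWorld, float4(viewPos.xyz, 1.0)).xyz;
        }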

    However, you can get what you want with this graph:
    upload_2021-7-16_16-37-29.png

    Using the Screen Position node set to Center gets the xy values already scaled to a -1 to 1 range, which is what the Scale & Offset node in ASE is doing.

    The Scene Depth node is set to Raw, and doesn't have any input node because it defaults to using the screen UV you want.

    I'm multiplying the Vector4 by the Inverse View Projection matrix, which is equivalent to doing the inverse projection and then the inverse view one after the other, but in one step.

    At the end I have a Transform node that transforms from World to Absolute World space. This does nothing in the URP, but corrects for the HDRP, which renders in camera-relative space.


    One note of caution: this won't work in OpenGL. The ASE example has the advantage of working in OpenGL as well as all other APIs, but the above SG example does not. The difference between the "camera" matrices ASE is using and those exposed to SG is that the former are the same regardless of whether you're using OpenGL or another graphics API. Basically, they're always the OpenGL projection matrix. The SG matrices are the ones actually used for rendering, which can change based on the API, specifically whether you're using OpenGL or not. If you're wondering why that's the case, it's because OpenGL is weird: everyone else decided it did things wrong and that there's a better way to handle projection matrices, so all other graphics APIs work exactly the same as each other, which is different from OpenGL. Desktop OpenGL can be fixed to work the "correct" way, but Unity's stance at this point is "you should use Metal or Vulkan instead", as those both work "correctly". That's a problem for old OpenGL ES mobile devices that don't support Vulkan or Metal, though.

    There's also that seemingly random multiply by -1 at the start. That's to deal with Unity rendering everything upside down on non-OpenGL APIs, so the input screen position is also upside down relative to what it should be. Unity does that because OpenGL renders upside down compared to all other APIs; long ago Unity decided to make all other APIs act like OpenGL, and it chose not to fix that for the SRPs.
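
    Written out as HLSL, the graph above is doing roughly this. The matrix and platform macros are real built-ins; the function itself is just a sketch of the same steps.

        float3 WorldPosFromDepth_IVP(float2 screenUV, float rawDepth)
        {
            // Screen Position set to Center: xy already in the -1..1 range.
            float2 ndcXY = screenUV * 2.0 - 1.0;
            #if UNITY_UV_STARTS_AT_TOP
            ndcXY.y = -ndcXY.y; // the "seemingly random multiply by -1"
            #endif

            // Raw scene depth is used directly as clip space z here, which
            // is exactly why this version breaks on OpenGL.
            float4 positionCS = float4(ndcXY, rawDepth, 1.0);
            float4 positionWS = mul(UNITY_MATRIX_I_VP, positionCS);
            positionWS.xyz /= positionWS.w; // perspective divide

            // HDRP renders camera relative; the World -> Absolute World
            // transform in the graph adds _WorldSpaceCameraPos back. URP
            // needs no extra step.
            return positionWS.xyz;
        }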
     
    Passeridae likes this.
  3. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Thank you very much for your extensive reply and this graph! Works like a charm :)
    I have one more question if you don't mind.

    So, I use this graph in conjunction with a render texture from a planar reflection camera (provided by a third-party asset). Since I can't get the depth info from the planar camera directly, I placed a quad with this shader above it. Later this depth data will be used to drive contact rough reflections.
    upload_2021-7-17_14-37-43.png

    So, essentially, I only need the depth. But I used Amplify's "world pos from depth" node because it's the only approach in which the output doesn't depend on the camera angle/distance. Therefore I added these nodes to your graph to get the result from the screenshot above:
    upload_2021-7-17_14-42-20.png

    It's intended to keep the depth steady no matter how the position or angle of the surface changes. But it only works for 90° step rotations: 0, 180, -90, etc. on any axis is okay; 1, 45, 89, or any other angle gives broken depth.
    Do you know how I can fix this in this graph?
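
    In HLSL terms, what I'm trying to get is basically the distance from the reconstructed world position to the quad's plane, something like this sketch (the quad origin/normal inputs are hypothetical, not nodes in the graph above):

        // Signed distance from the reconstructed world position to the
        // quad's plane. Unlike reading a single world axis of the position
        // (which only lines up at 90-degree steps), this stays steady for
        // any rotation of the quad.
        float PlaneDistanceDepth(float3 positionWS, float3 quadOriginWS, float3 quadNormalWS)
        {
            return dot(positionWS - quadOriginWS, normalize(quadNormalWS));
        }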

    Thanks!
    Btw, is this depth-to-world-pos transformation performance-heavy if we're talking about high-end hardware?
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,339
    Why not? You can make the URP render a depth texture for any camera.
     
  5. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    I'm using HDRP. The planar reflection itself isn't made by me; I'm using the "PIDI 3" asset for this. The camera that renders the planar reflection is created inside the asset's scripts and isn't exposed to the user. I've tried to use the "depth capture" custom pass, but it doesn't see the camera.
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,339
    If you're using the HDRP, why are you using a third-party planar reflection script vs. the built-in planar reflection probe, which already supports rough reflections?
     
  7. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    Third-party planar reflections can be assigned per material, have refresh rate control, behave fine with transparency, and are overall faster. And an update with even better performance, LOD support, etc. is coming.

    And frankly speaking, HDRP's native rough reflections are kinda horrible. They are very pixelated, almost like they use just a mip-map blur. Even when I plug the graph you've created into the LOD of the HD Scene Color node, the result is better. And I have a fast single-pass Gaussian blur that will improve the situation even further, if I find a way to couple it with depth.

    This is a 1K HDRP rough reflection (and setting it to 2K won't make a big difference):
    upload_2021-7-19_0-43-36.png
     


    Last edited: Jul 18, 2021
    bgolus likes this.
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,339
    Probably because they do. :D

    The problem is you really do need to get depth from the actual reflection camera. There's no way to get it from the main camera with any kind of accuracy. And even if you do manage to get it working, it'll be affected by the same problem screen space reflections have: if the reflection can see something you can't from the main camera, you won't have depth information for it. So the only answer is ... to get the depth from the reflection camera.

    This is especially odd to me because I use PIDI 3 with the built-in renderer, and it already has a feature to render depth for reflection cameras as a checkbox on the reflection renderer: the "Contact Depth Support" option. I would contact the author of the asset and ask about it if that option isn't exposed for the HDRP version.
     
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,339
  10. Passeridae

    Passeridae

    Joined:
    Jun 16, 2019
    Posts:
    395
    It is, unfortunately, not available for HDRP/URP.
    upload_2021-7-19_2-59-11.png

    I've contacted the authors, and I was told that depth support will hopefully be added in the upcoming (4.0) version, which is delayed. Though, according to the authors, "it will have some performance impact as several optimization tricks that are available in Standard / URP are not possible in HDRP". Therefore, in the meantime, I'm looking for other ways to implement it.

    But if you happen to know how to expose the PIDI camera, so I could at least plug it into alelievr's custom pass to output depth alongside the normal reflection, please enlighten me :) Because I have no idea how to do it, or anything else that could help me.

    This is my attempt at blur + contact roughness so far. Using your graph + some tweaks.
    upload_2021-7-20_3-28-47.png

    Here's the difference (left is HDRP native rough reflection): https://cdn.knightlab.com/libs/juxt...html?uid=4b110c50-e8e9-11eb-abb7-b9a7ff2ee17c
     


    Last edited: Jul 20, 2021