
Official Compute Thickness pass Feedbacks / Support

Discussion in 'High Definition Render Pipeline' started by chap-unity, Jan 25, 2023.

  1. chap-unity

    chap-unity

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    766
    Hi !
    In 2023.1, part of the focus on HDRP was the improvement of transparents in general.

    Among other tasks, HDRP now provides a fullscreen pass to compute the accumulated thickness for objects on a given LayerMask.

    HDRP computes the optical path and the overlap count (i.e. the number of triangles traversed), which can be useful, for instance, for Subsurface Scattering or Refraction.

    The overlap count can be used for flat or non-closed objects like vegetation.

    There are a few limitations when mixing Transparent and Opaque objects, but it should work with any type of material.

    This post is to centralize feedback and support for this specific feature and transparency questions in general.

    Documentation is coming, but in the meantime, here are a few tips if someone is interested in using this:
    • Compute Thickness needs to be enabled on the Default Frame Settings under Rendering foldout
    • Compute Thickness needs to be enabled on the current HDRP Asset
    • One or more layers need to be checked under the Compute Thickness properties in the HDRP asset
    • One or more objects have to be in one of those selected layers for the pass to be filled with any meaningful data
    • Lastly, the result of the thickness pass is accessible via the HD Sample Buffer node in Shader Graph. The node requires the layer index as input
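
    As a rough mental model of the checklist above (plain Python, all names hypothetical, not Unity API): only objects whose layer is ticked in the HDRP asset contribute, and each selected layer gets its own thickness buffer.

```python
# Hypothetical sketch of the layer filtering described above (not Unity API).
# Each selected layer gets its own thickness buffer; an object contributes
# only if its layer index is among the layers ticked in the HDRP asset.

def objects_per_thickness_buffer(objects, selected_layers):
    """objects: list of (name, layer_index); selected_layers: iterable of ints."""
    buffers = {layer: [] for layer in selected_layers}
    for name, layer in objects:
        if layer in buffers:
            buffers[layer].append(name)
    return buffers

scene = [("Dragon", 3), ("Leaves", 4), ("Wall", 0)]
print(objects_per_thickness_buffer(scene, [3, 4]))
# Dragon and Leaves end up in thickness buffers; Wall (layer 0) is skipped.
```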

    Default Frame Settings



    HDRP Asset



    Layer index



    HD Sample Buffer node



    The use cases:
    • More accurate Refraction and Absorption without having to author a thickness map.




    • Subsurface Scattering / Translucency with no thickness map needed


    • For Vegetation and non-closed objects, the overlap count can be used to multiply the thickness of leaves, giving a fake accumulation for dense alpha-clipped vegetation, for example.


    Varying the thickness of the leaves (see attached gif)
    • This can also be used in more creative ways, like sampling the thickness of objects in layers other than the one the object is on, to create X-ray-like effects.


    This will be demonstrated further in an upcoming sample on transparency as well.

    Limitations :
    • Does not support Tessellation
    • Mixing open and closed mesh on the same layer can cause negative values. For better results, use separate layers.
    • Mixing transparent and opaque objects can create unexpected results, due to when transparent objects are rendered in HDRP
    Cheers!
     

    Attached Files:

    Last edited: Jan 25, 2023
  2. JohnAustinPontoco

    JohnAustinPontoco

    Joined:
    Dec 23, 2013
    Posts:
    283
    Can you talk any more about the underlying technique? What's the performance cost?
     
  3. SoufianeKHIAT

    SoufianeKHIAT

    Unity Technologies

    Joined:
    Jan 16, 2020
    Posts:
    7
    It adds a simple fullscreen pass that computes the thickness of multiple objects at the same time for each given layer. The pass samples the existing depth buffer so the thickness interacts with the environment; cf. the image of the green glass dragon intersecting the plane.

    You have control over the performance by setting the resolution of this layer: it can be full, half, or quarter resolution.
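
    To put rough numbers on that resolution trade-off (a back-of-the-envelope sketch, not profiled data): halving each dimension quarters the number of pixels the pass has to fill.

```python
# Rough pixel-count math for the three thickness-buffer resolutions.
# Halving each dimension quarters the number of pixels shaded.

def thickness_buffer_pixels(width, height, divisor):
    return (width // divisor) * (height // divisor)

full = thickness_buffer_pixels(1920, 1080, 1)      # full resolution
half = thickness_buffer_pixels(1920, 1080, 2)      # half resolution
quarter = thickness_buffer_pixels(1920, 1080, 4)   # quarter resolution

print(full // half)     # 4  -> half res shades ~1/4 of the pixels
print(full // quarter)  # 16 -> quarter res shades ~1/16
```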
    upload_2023-1-27_11-31-45.png
     
  4. SomethingElse12

    SomethingElse12

    Joined:
    Apr 4, 2022
    Posts:
    45
    Do standard Lit shaders support this feature, or only custom Shader Graphs? It doesn't seem to work with standard materials no matter what I try. Also, do materials in those layers that don't have the transparency option enabled still render to the thickness buffer? That would be excessive, because I want to use layers for other game features too.
     
  5. chap-unity

    chap-unity

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    766
    For the buffer to be filled with the thickness of an object, the object needs to be in a layer, and that layer needs to be selected in the HDRP asset list.
    Then the only way to sample this buffer is via Shader Graph, using the HD Sample Buffer node.
    So you don't have to use Shader Graph for objects to write their thickness into the buffer, but you do have to use Shader Graph to read that texture and do something with it.

    And yes, it's compatible with transparent and opaque objects.

    The best way to set this up would be to have a specific layer for objects that need to write their thickness, and use that layer for that and nothing else.
     
  6. SomethingElse12

    SomethingElse12

    Joined:
    Apr 4, 2022
    Posts:
    45
    I mostly use layers for collision matrices, camera culling and other things (creating multiple layers for the same purpose but with thickness is not an option). Wouldn't it be better to have a shader pass do the rendering into the buffer, rather than a layer? It would automatically select materials that have a thickness boolean enabled, similarly to how Receive SSR and decals work. Or would this hurt performance?
     
  7. SoufianeKHIAT

    SoufianeKHIAT

    Unity Technologies

    Joined:
    Jan 16, 2020
    Posts:
    7
    That would be possible, but it would prevent us from splitting objects across different layers. For instance, we can split transparent and opaque objects onto different thickness layers to avoid interaction issues.
    We'll think about it as an improvement, perhaps reserving an index for material-based selection if possible.
     
  8. Qleenie

    Qleenie

    Joined:
    Jan 27, 2019
    Posts:
    868
    Had a brief test run of the feature; it works very well out of the box! It was easy to set up in Shader Graph, and I did not encounter any issues. It even seems to work properly in XR, unlike nearly all the other features I tested that were introduced in the 2022 / 2023 cycle.
    It allows much more dynamic thickness calculation which would not be possible with thickness maps.
    Two thumbs up:)
    Edit: And the performance seems really good, minimal impact on GPU time.
     
  9. KuanMi

    KuanMi

    Joined:
    Feb 25, 2020
    Posts:
    41
    Can I calculate the thickness of multiple layers at the same time, such as skin, bone and metal? This would make it possible to blend the final shape with different material weights.
     
  10. KuanMi

    KuanMi

    Joined:
    Feb 25, 2020
    Posts:
    41
    Will URP have similar functionality? Or can you give me some reference documentation or a URL to help me achieve a similar function in URP? I'm guessing that the depth of the back faces is rendered first, and then the thickness is obtained by computing the difference from the depth of the front faces?
     
  11. chap-unity

    chap-unity

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    766
    Yes, if each material / object is in a different layer, each layer will fill a separate buffer and you will be able to get the thickness of each separately :)

    Can't really comment on URP; AFAIK, nothing's planned. And you are right: when you boil the algorithm down to its simplest component, it's basically subtracting frontface depth from backface depth (which is why we have issues with open meshes, etc.)
     
  12. Qleenie

    Qleenie

    Joined:
    Jan 27, 2019
    Posts:
    868
    After playing more with it, I still find it very usable, and pretty fast! Two things I found:
    - Thickness seems to produce invalid values sometimes. I need to saturate the values, but I guess this should already be done in the thickness pass
    - On a bit more complicated geometry I sometimes get bad results, e.g. on a character with the teeth in the same mesh, the teeth start glowing through the skin if the mouth is closed. This can be corrected with a transmission map or thickness remapping, but this is not always optimal.
     
  13. chap-unity

    chap-unity

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    766
    It would be great if you could talk about your case. IIRC, it's not saturated on our side because we thought users might want to know if the value is negative, for example; that information would be lost if we saturated it ourselves.

    For this one, if you can provide a mesh that's problematic, I'll happily have a look :)

    Thanks for the feedback!
     
  14. Qleenie

    Qleenie

    Joined:
    Jan 27, 2019
    Posts:
    868
    Our use case is to calculate thickness for SSS (for transmission); what's the semantic meaning of a negative value? I thought this was some "error", because how can a mesh have negative thickness?

    For the problematic mesh: it's a standard DAZ / Genesis 8 shape, which basically has everything in one mesh (eyes / teeth / head / body). I could file a bug report with a project about this, if it's helpful (and considered a bug). It could also be that the issue is the SSS, which is used on both teeth and skin.
     
  15. chap-unity

    chap-unity

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    766
    Basically, to get the per-pixel thickness, Compute Thickness subtracts the depth of front faces from the depth of back faces. As long as the mesh has an even number of front and back faces for each pixel, it's fine.
    When your mesh is "open", you can have an odd number, and you can end up with a negative thickness. That can also happen if your normals are not properly facing "outside" your mesh, for example. See here for more details
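
    The face-parity point above can be sketched numerically (plain Python, hypothetical depths): sum backface depths and subtract frontface depths along the ray; a closed mesh pairs every front hit with a back hit, while an open mesh leaves an unpaired face and the value goes negative.

```python
# Sketch of the per-pixel accumulation described above: thickness is the
# sum of backface depths minus the sum of frontface depths along the view ray.

def accumulated_thickness(hits):
    """hits: list of (depth, is_backface) along one pixel's ray."""
    return sum(d if back else -d for d, back in hits)

# Closed mesh: front face at depth 2, back face at depth 5 -> thickness 3.
closed = [(2.0, False), (5.0, True)]
print(accumulated_thickness(closed))  # 3.0

# Open mesh: the backface is missing, leaving an unpaired frontface,
# so the accumulated value comes out negative.
open_mesh = [(2.0, False)]
print(accumulated_thickness(open_mesh))  # -2.0
```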

    You can send me the model directly and I'll have a look; don't bother with a report if you're not sure.
     
  16. Qleenie

    Qleenie

    Joined:
    Jan 27, 2019
    Posts:
    868
    @chap-unity helped me to debug the mesh, thanks for this! The cause of the issue is an open mesh for the teeth. My workaround is to a) saturate the output of the thickness pass and b) remap the thickness in the Diffusion Profile, so that the Min Thickness prevents the teeth from shining through.
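
    The two-part workaround can be sketched like this (plain Python, values hypothetical): saturate the sampled thickness to clamp out the negatives from the open mesh, then enforce a minimum thickness so thin regions no longer shine through.

```python
# Sketch of the workaround above: clamp negatives from open meshes,
# then apply a minimum thickness before feeding transmission.

def corrected_thickness(raw, min_thickness):
    saturated = max(0.0, min(1.0, raw))   # saturate: clamp to [0, 1]
    return max(saturated, min_thickness)  # floor: never below min_thickness

print(corrected_thickness(-0.3, 0.1))  # 0.1 -> negative value from open mesh is floored
print(corrected_thickness(0.6, 0.1))   # 0.6 -> valid thickness passes through
```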
     
  17. icauroboros

    icauroboros

    Joined:
    Apr 30, 2021
    Posts:
    168
    On 23.2.0, the Rendering Debugger shows thickness only if it is assigned to the Default layer; other layers always show gray even if I enable Compute Thickness on them. Also, I would like to know why GameObject layers are used instead of Rendering Layers.
     
  18. chap-unity

    chap-unity

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    766
    I just tested this and it seems to work.
    Are you sure you actually selected the proper layer in the debug view as well?

    upload_2023-11-20_10-17-11.png

    We could have used Rendering Layers as well. It's just that they are limited to 16 and are also used for lights and decals in HDRP, so we could run out very quickly; that is mainly why it was decided to use GameObject layers for this.
     
  19. mgeorgedeveloper

    mgeorgedeveloper

    Joined:
    Jul 10, 2012
    Posts:
    327
    Sorry to barge in here; I just want to mention that the problem with GameObject layers is that they don't behave like flags: you can only assign one layer to a GameObject. That can make it tricky when your GameObject is already assigned to a layer for some purpose, and now you need to use another layer for another purpose. I often run into situations where I'm torn between needing to assign an object to different layers.

    I know this is a historic issue in Unity, and it was probably a very old bad decision that can't be corrected now. It feels like some additional 32-bit flags value might be needed for general-purpose stuff in the future.
     
  20. chap-unity

    chap-unity

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    766
    Don't be, it's actually very valuable and it makes some sense. IIRC, there was another reason why we chose GameObject layers (hence the "mainly" in my answer), but I can't remember it currently; I'll raise this and see if anyone else remembers.
     
  21. Why? It could be that there is simply no effort from Unity to do so. They could easily replace the original storage with a bigger one to store more layers and then simply leave the old API alone; it would work as if one of the original layers was set, and a new API could be introduced with multi-layer handling and whatnot. It's not complicated at all.
     
  22. icauroboros

    icauroboros

    Joined:
    Apr 30, 2021
    Posts:
    168
    Oh, I did not notice the slider; now it works, thanks a lot! So that means we can only debug one thickness layer at a time, right?
    Another question I want to ask is how to correctly utilize this thickness data:
    is getting the sample node and connecting its output to the Thickness block of a subsurface shader enough? If not, is there any guide for using this feature?
    Does the default Lit shader with subsurface or transmission automatically use that info when we activate it? If not, is it a planned feature?
    For me, I think Rendering Layers would be a better fit; I could not imagine using all 16 rendering layers in my projects, but my projects are fairly small indoor scenes without decals anyway. Maybe you could allow 32-bit layers in the future if 16 is not enough for users.
     
  23. chap-unity

    chap-unity

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    766
    Yes, the debug view can only see ONE layer.

    It's enough in some situations, depending on the thickness and scale of your object; usually you'll probably want a way to remap the thickness a little, especially since SSS can be subtle sometimes depending on your parameters and setup. However, some samples are coming (in a few weeks / months) to provide a starting point for this feature. I'll update the thread here when it's done :)

    Nope, that's why it's a bit of an "advanced" feature: by default it's just a new pass in the pipeline, and you have to sample it in SG for it to do something. By default it doesn't do anything, and the default Lit shader won't sample it. For now it's not planned to have that by default, but it would be possible theoretically.
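
    A remap like the one suggested above could look like this (plain Python, range values hypothetical): an inverse lerp from the object's expected thickness range to [0, 1] before wiring it into the SSS thickness input.

```python
# Hypothetical remap of a sampled thickness into [0, 1] for an SSS
# thickness input, as suggested above; the range depends on object scale.

def remap_thickness(sampled, range_min, range_max):
    t = (sampled - range_min) / (range_max - range_min)  # inverse lerp
    return max(0.0, min(1.0, t))                         # clamp to [0, 1]

print(remap_thickness(0.25, 0.0, 0.5))  # 0.5 -> midpoint of the expected range
print(remap_thickness(0.9, 0.0, 0.5))   # 1.0 -> out-of-range values are clamped
```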
     
  24. chap-unity

    chap-unity

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    766
    It's not about the difficulty, it's about budget.
    We could indeed add more than 16 rendering layers, but that would mean having another GBuffer, which clearly has more drawbacks than benefits as a whole.
     
  25. mgeorgedeveloper

    mgeorgedeveloper

    Joined:
    Jul 10, 2012
    Posts:
    327
    I think he was referring to the classic GameObject layer.

    Currently we have 32 possibilities (I guess stored in a 32-bit int) with no flags behaviour. We're wondering if it is possible to enable flags behaviour in the UI (so multiple layers can be assigned per GameObject) and also expand this to 64 bits.

    When you then load a current project, it behaves as if the GameObject has the layer it already had, but you can then add additional layers to the same GameObject (so it's backward compatible without changing much).
     
  26. chap-unity

    chap-unity

    Unity Technologies

    Joined:
    Nov 4, 2019
    Posts:
    766
    I understand, but sadly we (in SRPs) have very little control over GameObject layers, since they live in the engine directly, so I can't really say if it's possible on our side :|