
How to fix Z-fighting in URP

Discussion in 'Universal Render Pipeline' started by kevinetourneau, Apr 28, 2021.

  1. kevinetourneau

    Joined:
    Oct 3, 2017
    Posts:
    9
  2. DrewFitz

    Joined:
    Dec 9, 2016
    Posts:
    30
    Hard to say without more details. For instance, a logarithmic depth buffer won't fix two polygons that perfectly overlap.
     
    Ruslank100 and kevinetourneau like this.
  3. kevinetourneau

    Joined:
    Oct 3, 2017
    Posts:
    9
    Thank you for the information.

    We have z-fighting problems in three cases:
    • Case 1: duplicate objects at the same position: default.png
    Surface 1 hidden: surface1.png
    Surface 2 hidden: surface2.png
    • Case 2: clashes between different objects: clash.png
    • Case 3: duplicated vertices in a mesh: duplicated_vertex.png
    Solutions considered:
    • Physically move the objects further apart
      • We do not want to move the 3D objects of our customer's CAD model (to keep the model accurate).
    • Increase the near clipping plane and decrease the far clipping plane of the camera
      • We tried this, but it does not fix the z-fighting in any of our cases.
    • Nudge the shader depth offset to force an object to be rendered on top
      • Sometimes we have more than two objects at the same position.
      • How can "Offset -1 -1" be used with more than two objects?
    • REALLY force an object to be ALWAYS rendered on top
      • We tried "ZTest Off", but it causes other rendering problems.
    • Logarithmic depth buffer
      • It does not fix our z-fighting problem.
    We use Unity 2020.3.6f1 LTS with URP 10.4.0, and we build the app for iOS (iPad Pro 2020).
    We load .glb files at runtime, so we need a runtime solution.

    Does anyone have a solution or an idea?
     
    Last edited: Apr 29, 2021
  4. kactus223

    Joined:
    May 20, 2014
    Posts:
    35
    We've been trying to fix the same bug. Does anyone have an idea?
     
  5. april_4_short

    Joined:
    Jul 19, 2021
    Posts:
    489
    You could get crazy and run a matching-polygons test, deleting the polygons that match.

    Brute-forcing this will cause problems if two identical models fill the same space, so test for that case too and pick one model to delete.

    One of your suggestions was moving models apart, but you're worried about the client's model no longer being accurate. If it's a visual representation of anything, there are going to be fudges. Nothing is perfect, despite the lure of digital perfection; resist that urge, explain the problem and the solution to the client, and weigh how often you separate models against the effort they spend preventing these problems upstream.

    There is always a bit of yin and yang in representation-model formatting and conversion. It's normal and nothing to worry about; your client should at least be able to have a conversation about the difficulties of optimising and cleaning up whatever they're sending you. They may not want (or be able) to do it, so then you move to finding an acceptable level of fudges that you can both agree on, in terms of workload on your end and quality on theirs.
     
  6. DrewFitz

    Joined:
    Dec 9, 2016
    Posts:
    30
    This most likely won't be easy to fix automatically at runtime. You'll need to find some way to assign materials (shaders) with depth overrides to the overlapping geometry. If you already know what priority you want to render the overlapping objects with, then biasing the depth buffer is probably what you want. Give your "high-priority" objects a shader that writes a very slightly higher value to the depth buffer so it will override the overlapping "low-priority" objects. The "Offset -1 -1" code does something akin to this, but I can't remember offhand whether those are exactly the parameters or the depth modifier you want here; I think that code will always render on top of everything. You want a small constant offset in depth, just enough to make the more important surface win the z-fight against the other surface(s).
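    As a rough sketch of what such a depth-biased pass could look like in ShaderLab (illustrative only; the pass body and the exact offset values are assumptions, not tested code):

    ```shaderlab
    // Hypothetical URP-style pass with a fixed depth offset. The Offset command
    // takes a slope-scale factor and a constant units bias; negative values pull
    // the surface toward the camera so it wins the z-fight.
    Pass
    {
        Offset -1, -1   // high-priority material; low-priority ones keep the default (0, 0)
        // ... the usual HLSLPROGRAM vertex/fragment block goes here ...
    }
    ```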

    Caveat: if two high-priority surfaces overlap, then you're back to square-one because they'll get the same boost to their depth and start z-fighting again. Like I said: this most likely won't be easy to automatically fix at runtime.
     
  7. AlexisDelforges

    Joined:
    Nov 30, 2021
    Posts:
    22
    Same problem here. Has anyone ever had a logarithmic depth buffer working in URP?
    Upon loading a 3D model, some objects are at exactly the same position and cannot be moved for business reasons, causing z-fighting. Hope someone comes to the rescue!
     
  8. burningmime

    Joined:
    Jan 25, 2014
    Posts:
    845
    A logarithmic depth buffer isn't going to fix that. A logarithmic depth buffer can give better precision in scenes with large depth ranges (e.g. a space or flight simulator)*. Precision isn't your problem: you have two triangles occupying the exact same points in space, and no amount of added precision will fix that. Z-fighting is arguably the "correct" result when the GPU doesn't know which one is in front.

    The easiest way is just to move one of them very, very slightly: position += position * FLT_EPSILON. If you care which one is in front (taken from this article): position += sign(direction) * abs(position * FLT_EPSILON). That article also says that FLT_EPSILON is a bit too small for some GPUs, so the compatible version is position += position * 0.0000002, or position += sign(direction) * abs(position * 0.0000002). Your business requirements can surely accept a 0.2 micron difference at meter scale, and you can choose which object gets priority.

    [EDIT: This needs to be done in clip space in the vertex shader, after the perspective transformation, not on the Transform in C#. The alternative is to use ShaderLab's Offset command as mentioned above, which AFAIK just instructs the GPU to do basically this.]
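    In shader terms, the clip-space version of this nudge might look something like the following (a hedged sketch; the variable names and the reversed-Z handling are assumptions to adapt to your own vertex shader):

    ```hlsl
    // Inside a URP vertex shader, after transforming to clip space.
    float4 positionCS = TransformObjectToHClip(input.positionOS.xyz);

    // Tiny relative depth bias so this surface wins the z-fight.
    // 0.0000002 is the GPU-compatible epsilon mentioned above.
    // With reversed-Z (D3D, Metal, Vulkan in Unity), a larger z is closer to
    // the camera, so the sign of the bias flips per platform.
    #if UNITY_REVERSED_Z
        positionCS.z += abs(positionCS.z) * 0.0000002;
    #else
        positionCS.z -= abs(positionCS.z) * 0.0000002;
    #endif

    output.positionCS = positionCS;
    ```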

    If they have different colors/materials and you actually *want* to show both at the same time in a way that doesn't cause seizures, it gets harder. Just blending them is easy enough (though a little slower than moving one): you draw one of them opaque with z-write, and the second transparent without z-write. But you need to detect (e.g. using a spatial hash) that two tris actually do share a position. If you want a clear visual indication of two surfaces sharing a position (e.g. some kind of striped pattern or dithering), you can switch to alpha testing and use that as a jumping-off point, or try something tricky with the stencil buffer.
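    For the blended variant, the render state of the second copy might look roughly like this (a sketch; the first copy keeps the default opaque state with ZWrite On, and the pass below would sit in a SubShader tagged with "Queue" = "Transparent"):

    ```shaderlab
    // Hypothetical render state for the second, blended copy of the surface.
    Pass
    {
        Blend SrcAlpha OneMinusSrcAlpha  // standard alpha blending
        ZWrite Off                       // don't fight the opaque copy for depth
        ZTest LEqual                     // equal depth still passes, so it draws over the opaque copy
        // ... the usual HLSLPROGRAM vertex/fragment block goes here ...
    }
    ```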

    * (I get the impression that industry moved away from logarithmic Z for this and switched to Camera-Relative Rendering)
     
    Last edited: Dec 16, 2021
  9. DevDunk

    Joined:
    Feb 13, 2020
    Posts:
    5,063
    If there is z-fighting, the model is not game-ready, imo.
    There should never be two planes at the same height; only use the one you need to see.
    If you need to switch, just swap the model.
     
  10. AlexisDelforges

    Joined:
    Nov 30, 2021
    Posts:
    22
    Thanks for replying.
    It's clearer now why a logarithmic depth buffer helps, but not enough.
    We tried moving some objects after loading, the way you described, but the results were only good enough with a 5 cm offset, which is too much for my use cases (I can handle a < 1 cm offset but not more).

    I think the third paragraph only applies if I wanted to show both objects that share a position; fortunately that's not the case :)

    I'm curious whether FLT_EPSILON differs by GPU category (PC, mobile, APU...)?
     
  11. rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    This only applies if you can actually fix the models to be game-ready. People who need something like this are typically dealing with existing assets that need to be visualized in some way. Manually fixing the data is not a feasible option when you have tons of assets that are prepared in a way that is unfriendly to game-engine rendering. Here the game-engine rendering isn't always the end product either, but rather a side product where people just want to import existing data.

    That being said, there is this thing https://unity.com/products/pixyz but I don't know which of these problems it does or doesn't solve.
     
    DevDunk likes this.
  12. burningmime

    Joined:
    Jan 25, 2014
    Posts:
    845
    Are you moving them in the shader or on the CPU? positionCS * 0.0000002 in the vertex shader, after you've transformed the position into clip space, should be enough. The reason it works is that the effective epsilon scales with how far away the point is -- in this case, how far the point is from the camera (technically, the clip plane). You'd only need to move a point 5 cm if it were 80 km from the camera (if my math is right; coffee has not kicked in yet). But if you're moving the position before transformation (e.g. moving the object's Transform in a C# script), that could explain needing to move it so much.

    There is another way to do the depth biasing thing instead of doing it manually, but I've not tried it myself: https://docs.unity3d.com/Manual/SL-Offset.html . That might be faster or clearer than doing the multiplication yourself in the shader.
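    One way to drive that Offset per object at runtime (an untested sketch; the property names are made up) is to expose the factor and units as material properties, so a script can assign a different priority value to each loaded object:

    ```shaderlab
    // Hypothetical properties block plus pass. Offset accepts bracketed
    // material properties, so each object's material can carry its own bias
    // (e.g. -1, -2, -3 for three stacked priority tiers).
    Properties
    {
        _OffsetFactor ("Depth Offset Factor", Float) = -1
        _OffsetUnits  ("Depth Offset Units",  Float) = -1
    }
    SubShader
    {
        Pass
        {
            Offset [_OffsetFactor], [_OffsetUnits]
            // ... the usual HLSLPROGRAM vertex/fragment block goes here ...
        }
    }
    ```

    A C# script could then set something like material.SetFloat("_OffsetUnits", -priority) on each object after loading, with priority derived from whatever business rules apply.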

    I don't know enough about mobile/APU precision to answer the last question. It's possible that on some platforms, you don't actually get the full 32 bits of precision (I know for normals, Unity will use a 16-bit half on mobile GPUs -- which is really imprecise for position data, and might account for needing a 5cm move). But I think Unity uses 32 bits for vertex position and at least 24 bits for depth on all platforms.

    Can you share a screenshot of the problem and/or code of the vertex shader that's doing the movement?
     
  13. AlexisDelforges

    Joined:
    Nov 30, 2021
    Posts:
    22
    I was indeed doing the translation in a C# script, applying a random offset to spawned objects (up to a max of 5 cm).
    I checked out SL-Offset and it seems to be a good solution, thanks for pointing it out. I need to check whether the behaviour is the same on desktop GPUs, APUs and mobile though.

    I'm currently working on a shader that adds an offset to different objects; I still need some rules to say which objects go on top of others (the data is coming from BIM - IFC files).

    Here is an example:
    The road is composed of several layers that share the same position in space and need to be sorted by rules I have to write based on the business case.

    upload_2021-12-15_7-49-25.png
     
  14. burningmime

    Joined:
    Jan 25, 2014
    Posts:
    845
    Once you get it working with SL-Offset so there's no longer any z-fighting, it should be a one-line change to move it into the vertex shader body for choosing layers.