Separate Physics and Rendering layers

Discussion in 'Physics Previews' started by varfare, Nov 21, 2017.

  1. stonstad

    stonstad

    Joined:
    Jan 19, 2018
    Posts:
    654
    Yes, and fixing this for DOTS does absolutely nothing for the overwhelming majority of developers who use the built-in pipeline to ship commercial games. This is why people are jumping ship to Unreal. @yant
     
  2. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    704
    Wow! I can't believe this still hasn't been addressed. Rendering and physics are totally different animals, yet they still share the same layers. Does it take rocket scientists to separate them?
    Physics needs a collision matrix and rendering does not. Why fold rendering into the same layers and make the matrix configuration so complicated? It's beyond me.

    I understand that the engine was designed in the early 2000s, but ~15 years have passed, so let's move along.

    While I'm at it, I have a similar issue with Tag. Please change Tag to a tag list. Tags in the real world involve more than just a single tag.
     
  3. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    If you really have to have this today... you can do it today. Just make a separate hierarchy/scene for your physics, and sync the rigidbodies to the MeshRenderer GameObjects via code.
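A minimal sketch of what that sync could look like, assuming one render-only GameObject per physics-only GameObject (the class and field names here are illustrative, not Unity API):

```csharp
using UnityEngine;

// Illustrative sketch: mirrors a physics-only GameObject's pose onto a
// render-only GameObject once per frame. Attach to the visual object and
// assign the physics counterpart in the Inspector.
public class PhysicsToRenderSync : MonoBehaviour
{
    public Transform physicsSource; // the object that owns the Rigidbody/colliders

    void LateUpdate()
    {
        // LateUpdate runs after the physics step has been applied to the
        // transforms, so the visual copy never lags within the same frame.
        transform.SetPositionAndRotation(physicsSource.position, physicsSource.rotation);
    }
}
```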
     
    rboerdijk likes this.
  4. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    704
    I use layers for camera stacking, so I don't have much control over the rendering side.

    Besides, the hard limit of 32 layers is not enough for physics alone. Expanding to 64 bits would not break anything, would it?
    Make 32-bit the default and 64-bit an option if devs choose it. That would solve our problem.

    Yes, there is Physics.IgnoreCollision(), but it works per collider pair and is a cumbersome workaround. Workarounds, workarounds... as if there weren't enough issues here already.
    Don't we already have enough workarounds to deal with?

    We can code everything ourselves, but then my question is: why do we need APIs at all?
    An API, in my opinion, is a convenient set of functions that keeps us from reinventing the wheel over and over again.

    Unity needs to sit down and think hard about what needs to be done. They won't even fix simple problems that have been lying around for years. They are either lazy, stubborn, or incompetent. Or all of the above.

    If they are not going to do anything, they should give us buildable (not necessarily full) source code so that we can keep working.
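For reference, the per-pair workaround mentioned above looks roughly like this; the nested loop over every collider pair is what makes it so cumbersome at scale (the `projectiles` field is an illustrative assumption, not part of any Unity API):

```csharp
using UnityEngine;

// Sketch of the per-pair workaround: every pair of colliders that should
// never interact has to be registered explicitly.
public class IgnorePairs : MonoBehaviour
{
    public Collider[] projectiles;

    void Start()
    {
        // O(n^2) calls for n colliders - this is the cumbersome part.
        for (int i = 0; i < projectiles.Length; i++)
            for (int j = i + 1; j < projectiles.Length; j++)
                Physics.IgnoreCollision(projectiles[i], projectiles[j]);
    }
}
```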
     
  5. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    We do get the full C# source of the Unity Physics package (DOTS), and we can already simulate it in separate worlds if we want. DOTS isn't production ready yet, but it does show that Unity IS actually doing something about this.

    Whether users like it or not, major workflow changes to old systems like the built-in physics seem unlikely, because major changes could break backwards compatibility and thousands of projects. Right now it really looks like they are targeting bigger changes like these at DOTS.

    AFAIK only a few people maintain the built-in 2D + 3D physics, and there are still updates, like the PhysX 3.4 and now 4.1 upgrades, articulation support, and the ability to simulate physics per scene. Calling people lazy is just harsh and ignorant IMO when there are constraints on what changes can be made to these "legacy" systems.

    Also, if you look at the other game engines you can get access to, it's not like they separate physics and rendering any better. That of course doesn't mean it shouldn't happen (I really am all for it), but things like this really boil down to priorities and what is feasible to change.

    Just think about the DOTS situation for a second: if they keep adding lots of features to both the old systems and the new (DOTS) at the same time, it will just add more confusion among users about what to use and why there are parallel systems that don't complement each other. It really doesn't make much sense in the bigger picture.
     
  6. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    704
    DOTS has nothing to do with the issues here, and we can talk about DOTS when it's ready. I mean really ready.
    Don't get me wrong, I love DOTS, but honestly it will be irrelevant for most of us for at least a couple of years, and somehow we need to keep working. Pushing everything until after DOTS is just another excuse for their laziness.

    If they are not going to do anything until DOTS, what are they afraid of in providing buildable source code so we can solve our own problems? Afraid of someone making a better engine than theirs? Insecurity born of incompetence?
    Unreal Engine has been source-available from the get-go, and I don't think they are worried at all, as no one can keep up with the pace of the Unreal development team.
     
  7. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    We've hoped for Unity to release the engine core source code for years, especially after UE4 and CryEngine opened theirs to their users. But the bottom line is that the Unity core contains a lot of licensed third-party solutions, which prevents this from happening easily.

    If you followed what happened with both UE4 and CryEngine, both had to get rid of lots of handy third-party libraries before they could do what they did. This made UE4 a pretty half-baked solution, especially in its first year, as it simply lacked many basic systems Epic hadn't had time to redo yet.

    There's really no motivation for Unity to do this, as they are essentially building its replacement already, which, like it or not, is DOTS. DOTS gives us C# package source code and DOTS runtime C# sources, which will eventually be ~90% of the engine. They also license the current Unity engine source code to Unity Pro subscribers for $$$. If it comes down to not wanting to spend that money while needing source access to everything, it's pretty clear Unity is not the alternative you should be looking at (unless you are willing to wait for DOTS to mature).
     
    DragonCoder likes this.
  8. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    I'd like to add that IMHO the lack of current Unity engine source code makes using Unity a big risk for any dev team at the moment. I've faced countless bugs/regressions where I could have fixed the issue or reverted the breaking change in a matter of days, instead of having to wait for Unity to fix it after I report it, which usually takes at the very least several weeks before I get my hands on the fix (and potentially new issues along with the new version).

    There have also been a bunch of features I really wanted, and with full source access I could have added them myself in a few days (like I've done in Unreal), while Unity simply doesn't implement them despite users requesting them for years.

    I'm only writing this to make it very clear that I'm not against source code access. I'd very much LOVE to have it, but there are realities to face here (Unity simply publishing its existing source code for all users isn't possible, nor does it make sense for them considering the path they are on at the moment).
     
    romanpapush likes this.
  9. stonstad

    stonstad

    Joined:
    Jan 19, 2018
    Posts:
    654
    Could you provide some additional detail on this approach? It sounds like you are suggesting one scene for physics objects and another for mesh renderers. I'm not sure that is workable for a sizable game.
     
  10. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    Basically, yes. It's not going to scale well if you are already bound by dynamic object/transform performance, but it would still be suitable for most online shooters and the like. Note that you only need to duplicate GameObjects for the actually dynamic physics objects; everything else can stay as it is, since there's no gain in separating rendering and physics for things that never move.

    Most online games try to minimize the dynamic parts anyway to reduce networking traffic. I can't think of many online games that would deliberately simulate so many dynamic objects that you'd run near the limits of Unity's physics engine/transform system.

    Edit: AFAIK you could also use TransformAccessArray and Unity's job system to sync the transforms in multithreaded jobs, so the performance impact of this operation wouldn't be that big (or at the very least wouldn't be bound by main-thread performance).
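A rough sketch of that jobified variant, assuming index-aligned physics and render transform arrays and the Jobs/Burst/Collections packages; treat it as an outline rather than tuned production code:

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;
using UnityEngine.Jobs;

// Sketch: copy poses from physics transforms to render transforms in a
// multithreaded job. Assumes the two arrays are index-aligned.
public class JobifiedSync : MonoBehaviour
{
    public Transform[] physicsObjects;
    public Transform[] renderObjects;

    TransformAccessArray renderAccess;
    NativeArray<Vector3> positions;
    NativeArray<Quaternion> rotations;

    [BurstCompile]
    struct SyncJob : IJobParallelForTransform
    {
        [ReadOnly] public NativeArray<Vector3> positions;
        [ReadOnly] public NativeArray<Quaternion> rotations;

        public void Execute(int index, TransformAccess t)
        {
            t.position = positions[index];
            t.rotation = rotations[index];
        }
    }

    void Start()
    {
        renderAccess = new TransformAccessArray(renderObjects);
        positions = new NativeArray<Vector3>(physicsObjects.Length, Allocator.Persistent);
        rotations = new NativeArray<Quaternion>(physicsObjects.Length, Allocator.Persistent);
    }

    void LateUpdate()
    {
        // Reading the physics transforms still happens on the main thread...
        for (int i = 0; i < physicsObjects.Length; i++)
        {
            positions[i] = physicsObjects[i].position;
            rotations[i] = physicsObjects[i].rotation;
        }
        // ...but writing to the render transforms runs on worker threads.
        new SyncJob { positions = positions, rotations = rotations }
            .Schedule(renderAccess).Complete();
    }

    void OnDestroy()
    {
        renderAccess.Dispose();
        positions.Dispose();
        rotations.Dispose();
    }
}
```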
     
    Last edited: Feb 17, 2020
  11. stonstad

    stonstad

    Joined:
    Jan 19, 2018
    Posts:
    654
    Thank you for your thoughtful reply. This sounds like a solution worth exploring.
     
  12. andyz

    andyz

    Joined:
    Jan 5, 2010
    Posts:
    2,243
    I'd just like to add that this is still mad! Physics and rendering are separate systems with different limits!
     
    romanpapush, rboerdijk and reinfeldx like this.
  13. rboerdijk

    rboerdijk

    Joined:
    Aug 4, 2018
    Posts:
    96
    I wanted to keep multiple bullets/projectiles (sprites) from hitting each other by putting them in a separate user layer. The moment I did, they stopped being rendered. Googling turned up this thread and, quickly skimming through it, it does indeed seem strange to have rendering and physics connected via layers (and also a one-tag limit per GameObject ^^).

    Why not create a parent object that has the Rigidbody and 2D/3D collider in a "Noncollidable dynamics" user layer, and add a child object with the sprite/mesh in the "Default" layer so it gets rendered?
    I'm sure I'm not the first to come up with this idea, so is there a reason it isn't viable?
     
    Last edited: May 10, 2020
  14. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    704
    Unity's stubbornness, laziness, and incompetence will continue throughout the 21st century.
    The only argument I hear is that they don't want to break existing code.
    That's proof of their laziness and stubbornness, and perhaps they don't know how to do it properly.
    My honest opinion is that they shouldn't be in the engine business if they are afraid of breaking things.
    Oh, by the way, they are already doing that with URP, DOTS, and many others, aren't they?
     
    PROE_ and ihgyug like this.
  15. joshcamas

    joshcamas

    Joined:
    Jun 16, 2017
    Posts:
    1,276
    Yep, I struggle with this A LOT, since my game is rather large, with a large number of systems and a lot of logic.
     
    reinfeldx and rboerdijk like this.
  16. Trevir

    Trevir

    Joined:
    Dec 1, 2015
    Posts:
    15
    This physics/layer limitation is incredibly frustrating and is a huge roadblock for a physics-based game I am helping to develop.

    I'm building a 2.5D game in which objects take up intervals of space along the Z-axis (which we call 'z layers') while exhibiting 2D physics. Objects on different Z intervals shouldn't collide with each other.
    [attached screenshot: Screenshot_80.png]

    We would be able to use unity layers if not for the fact that a single object can exist on multiple 'Z layers' (as mentioned previously in this thread, multiple unity layers cannot be assigned to a single gameobject).
    [attached screenshot: Screenshot_81.png]

    (If you've ever played LittleBigPlanet, this ought to look very familiar to you)

    We have tried to get around this problem by:
    • Giving each object children with colliders assigned to a different Unity layer for each occupied Z layer (major physics performance issues)
    • Using a customized C# port of Box2D (poor performance even without modifications)
    • Using another C# 2D physics system for Unity (poor performance again)
    • Calling Physics2D.IgnoreCollision between all objects in the scene (slows down as more objects are added)
    • Using 3D physics and slicing concave shapes into convex subshapes (3D physics is not robust enough for our needs, and several features present in 2D physics are missing)
    We are incredibly frustrated that such simple behavior is seemingly impossible to achieve in a decent, non-janky way. Unity uses Box2D for its 2D physics, which, outside of Unity, supports bodies existing on multiple mask layers. I'm certain there is a good reason for the limit of one assigned layer per GameObject, but I can't imagine a good reason for physics layers to use the same layer system that everything else uses.
     
    reinfeldx likes this.
  17. rboerdijk

    rboerdijk

    Joined:
    Aug 4, 2018
    Posts:
    96
    What I do is define my object and add a script that clones the physics part and puts it in a separate layer. That means there are two objects now (instead of just one), and it's just working around the problem, but at least it's mostly automatic. I don't have that many objects, so it's fine in my case. Something like this (partially hardcoded for my case; feel free to generalize or improve, e.g. I'd prefer moving the component instead of copying it, but that didn't seem possible):

    Code (CSharp):
    using System.Reflection;
    using UnityEngine;

    // A GameObject can only be in one layer, which is problematic if we want to
    // render it in one layer but need physics to be in another. This class works
    // around the limitation by copying the polygon collider to a child object on
    // a different layer and then removing the collider from the parent.

    // TODO: other collider types; make the target physics layer a public variable.

    public class GameObjectPhysics : MonoBehaviour
    {
        void Start()
        {
            PolygonCollider2D pc2d = GetComponent<PolygonCollider2D>();
            if (pc2d != null)
            {
                GameObject pgo = new GameObject();
                pgo.transform.parent = this.transform;
                pgo.layer = 11;    // hardcoded: layer 11 = static obstacles
                pgo.name = gameObject.name + "-physics";
                pgo.transform.localPosition = Vector3.zero;
                pgo.transform.localScale = Vector3.one;
                Rigidbody2D rb2d = pgo.AddComponent<Rigidbody2D>();
                rb2d.bodyType = RigidbodyType2D.Static;
                rb2d.useAutoMass = true;
                CopyComponent<PolygonCollider2D>(pc2d, pgo);
                Destroy(pc2d);
            }
        }

        T CopyComponent<T>(T original, GameObject destination) where T : Component
        {
            System.Type type = original.GetType();
            var dst = destination.GetComponent(type) as T;
            if (!dst) dst = destination.AddComponent(type) as T;
            foreach (var field in type.GetFields())
            {
                if (field.IsStatic) continue;
                field.SetValue(dst, field.GetValue(original));
            }
            foreach (var prop in type.GetProperties())
            {
                if (!prop.CanRead || !prop.CanWrite || prop.Name == "name") continue;
                prop.SetValue(dst, prop.GetValue(original, null), null);
            }
            return dst as T;
        }
    }
    I guess for your case you'd customize it so you have multiple physics components tagged in some way (material?), so you know which needs to end up in which physics layer, and then auto-create a child object per layer and move the appropriate physics component to the correct child object... ?

    I'm completely aware this is not great (far from it), but at least it's "something". If anyone has a better solution, I'd love to hear it (and I bet more people would).
     
    Last edited: Aug 2, 2020
  18. Trevir

    Trevir

    Joined:
    Dec 1, 2015
    Posts:
    15
    We already tried this, and it caused severe frame drops when objects with complex shapes collided with each other.

    Today I came up with what seems to be the best option so far...
    [attached screenshots: Screenshot_83.png, Screenshot_84.png]

    ...ugh
     
    Last edited: Aug 3, 2020
    reinfeldx and rboerdijk like this.
  19. joshcamas

    joshcamas

    Joined:
    Jun 16, 2017
    Posts:
    1,276
    Just Unity Things :^)
     
  20. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    ...
     
    Neto_Kokku, reinfeldx and joshcamas like this.
  21. Tarrag

    Tarrag

    Joined:
    Nov 7, 2016
    Posts:
    215
    It'll be interesting to see what the priorities are now that Unity has listed on the exchange.
     
  22. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,203
    The URP renderer has rendering layers; if the camera culling mask picked those up, the problem would suddenly be solved.
    (It might already be in URP 10.)
     
  23. joshcamas

    joshcamas

    Joined:
    Jun 16, 2017
    Posts:
    1,276
    Is it in URP, or do you mean they *should* add it to URP? Confused
     
  24. HellGate94

    HellGate94

    Joined:
    Sep 21, 2017
    Posts:
    132
    It is in URP


    but you can't use it :)

     
  25. Gondophares

    Gondophares

    Joined:
    Mar 9, 2013
    Posts:
    28
    There are plans to make this usable in the way you'd expect, as discussed in this topic. The pull request is currently marked as "draft". I'm not exactly clear on what that means for further development, but it does make me a little worried it's been shelved.

    However, there seems to be some discussion suggesting that many of the Rendering Layer Mask values would actually be used internally by the SRP. My initial thought was that these Rendering Layer Masks were a sound architectural decision to break with the omnipresent evil of layers-do-everything. (And that that memo simply didn't make it to whoever implemented the default Renderer Features.) Seeing the discussion on the PR page, however, I've begun to wonder whether the Rendering Layer Mask was only accidentally exposed, which would mean the architectural improvement requested by many people in this thread was never even considered.

    The documentation even mentions how "When using a scriptable render pipeline [...] [this] filters the renderers based on the mask the renderer has". That statement is correct, strictly speaking, if you actually build that functionality yourself. This page gives an example of how to do it, which as you can see involves quite a bit of work.

    Either way, the current situation is absurd. The Rendering Layer Mask is exposed on every Renderer, and its name (and documentation) suggest it's the much sought-after solution to a long-standing problem. In practice, it does nothing out of the box and can only be used by complex custom-made rendering tools. From a UX perspective alone, that's generally frowned upon. To boot, half of its values may or may not clash with the SRP's internal implementation right out of the gate.

    I wouldn't want to understate the sheer complexity of building an entire game engine, but in this case I feel Unity simply failed to evangelize a clear architectural framework internally, leading to inconsistent paradigms and awkward half-implementations. Legacy code has its own kind of inertia.
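For the curious, the "build it yourself" route boils down to a custom pass that filters by the rendering layer mask. A hedged sketch for URP follows; exact base classes and overloads vary between URP versions, and the mask bit chosen here is an arbitrary example:

```csharp
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Sketch: a URP ScriptableRenderPass that only draws opaque renderers whose
// Rendering Layer Mask has bit 1 set. This is the hook the Renderer's
// "Rendering Layer Mask" field feeds into.
class RenderingLayerPass : ScriptableRenderPass
{
    static readonly ShaderTagId shaderTag = new ShaderTagId("UniversalForward");

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        var drawing = CreateDrawingSettings(shaderTag, ref renderingData, SortingCriteria.CommonOpaque);
        // The third constructor argument is the rendering layer mask filter;
        // -1 for layerMask means "all GameObject layers".
        var filtering = new FilteringSettings(RenderQueueRange.opaque, -1, renderingLayerMask: 1u << 1);
        context.DrawRenderers(renderingData.cullResults, ref drawing, ref filtering);
    }
}
```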
     
  26. Hobodi

    Hobodi

    Joined:
    Dec 30, 2017
    Posts:
    101
    BUMP
    Separating rendering and physics layers is absolutely essential. These are completely different things that interfere with each other.
    And it shouldn't be gated behind technology only some people use, like DOTS and URP.
     
  27. reinfeldx

    reinfeldx

    Joined:
    Nov 23, 2013
    Posts:
    164
    Bumping this because it's currently causing issues for me. Any update on this, @yant ?
     
  28. ZenTeapot

    ZenTeapot

    Joined:
    Oct 19, 2014
    Posts:
    65
    Bump. What can I say? A must-have for serious game development with many multipass effects.
     
    chillyroominc and reinfeldx like this.
  29. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    I'm not sure why this would be so hard to implement; it doesn't seem like it would be. Even in the upgrade process, the editor could just make a copy of the current layers, marking the first set for physics and the second for graphics.

    I recommend everyone go to the roadmap, scroll to the bottom, and post "Separate Physics and Rendering layers" as a
    new request: https://unity.com/roadmap/unity-platform/rendering-visual-effects
     
  30. reinfeldx

    reinfeldx

    Joined:
    Nov 23, 2013
    Posts:
    164
  31. Rocksuit

    Rocksuit

    Joined:
    Sep 11, 2019
    Posts:
    5
    I think there may be a way to separate the physics and render layers in the current Unity version: don't rely on the built-in editor tooling to set up your layer names. Instead, define your physics layer names and render layer names in code, and build each GameObject hierarchy so the render and physics objects are separate, for example with the outermost GameObject for physics and an inner child GameObject for rendering. Set their layers using your own layer definitions, and you effectively get 32 physics layers and 32 render layers. But of course, this separation should really be handled by Unity themselves!
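A sketch of that convention, with the layer indices (10, 20) and helper names purely illustrative assumptions:

```csharp
using UnityEngine;

// Sketch of the convention described above: reserve some Unity layers for
// physics and others for rendering by your own naming discipline, then build
// each object as a physics parent with a render child.
public static class Layers
{
    public const int PhysicsProjectile = 10; // used only in the collision matrix
    public const int RenderFx = 20;          // used only in camera culling masks
}

public static class SplitObjectFactory
{
    public static GameObject Create(string name)
    {
        // Outer object: physics only (Rigidbody + colliders go here).
        var physicsGo = new GameObject(name) { layer = Layers.PhysicsProjectile };
        physicsGo.AddComponent<Rigidbody>();

        // Inner child: rendering only (renderers go here).
        var renderGo = new GameObject(name + "-render") { layer = Layers.RenderFx };
        renderGo.transform.SetParent(physicsGo.transform, false);
        renderGo.AddComponent<MeshRenderer>();
        return physicsGo;
    }
}
```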
     
    dannyalgorithmic likes this.
  32. reinfeldx

    reinfeldx

    Joined:
    Nov 23, 2013
    Posts:
    164
    Last edited: Dec 31, 2021
    Kamyker likes this.
  33. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    Hobodi and reinfeldx like this.
  34. CrystalClod

    CrystalClod

    Joined:
    Jan 21, 2019
    Posts:
    3
    Just wanted to join in and say it's baffling that this feature has been ignored for so long, especially since the fix seems relatively simple.
    Hope we can get a new response from Unity.
     
  35. ZenTeapot

    ZenTeapot

    Joined:
    Oct 19, 2014
    Posts:
    65
    Any news on this? My eyes are bleeding for this to be implemented.
     
    DragonCoder and Hobodi like this.
  36. reinfeldx

    reinfeldx

    Joined:
    Nov 23, 2013
    Posts:
    164
    @LeonhardP @yant If I'm reading the note in this comment titled URP Rendering Layers correctly, it looks like what we've been discussing in this thread is now available in the 2022.2 Beta. Can one of you confirm the current status? I'm asking because in the roadmap it's still showing as "In Progress."
     
  37. LeonhardP

    LeonhardP

    Unity Technologies

    Joined:
    Jul 4, 2016
    Posts:
    3,130
    Hi @reinfeldx, URP Rendering Layers are indeed included in the 2022.2 beta. We have updated the roadmap to reflect that. Thank you for flagging it.
     
    MarkHelsinki, reinfeldx and joshcamas like this.
  38. Epsilon_Delta

    Epsilon_Delta

    Joined:
    Mar 14, 2018
    Posts:
    256
    I just tried it, and it's a good start, but the two (IMHO) most important pieces are not implemented yet: camera culling and Render Objects can still only use the old layers, as far as I understand it.

    I have seen somewhere that Render Objects is indeed going to use the new rendering layers; any info on the camera culling mask?
     
  39. reinfeldx

    reinfeldx

    Joined:
    Nov 23, 2013
    Posts:
    164
    Good callout. Any word on this @LeonhardP ?
     
  40. ZenTeapot

    ZenTeapot

    Joined:
    Oct 19, 2014
    Posts:
    65
    I have not tried it, but it would sound really comical if this were true. That's basically THE feature that defines this whole layer thing.
     
  41. LukasCh

    LukasCh

    Unity Technologies

    Joined:
    Mar 9, 2015
    Posts:
    102
    To confirm: you're referring to the camera culling mask, the property that filters which GameObjects render based on their layer (aka the physics layer), and you also want the possibility to filter by rendering layers?
     
  42. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    Not "also", but ONLY using rendering layers. Physics and rendering are completely unrelated to each other. I don't get how this issue has gone unfixed for so long. It would literally take one day to implement:

    When a project is updated to Unity version X:
    1. Show a warning that the project may not be able to downgrade to an older Unity version once the new layers are changed.
    2. Clone the normal layers (these below) to new rendering layers:

    3. Change the rendering code to use the new rendering layers.
    4. If someone decides to downgrade, keep the new rendering layers saved in the project in case of another upgrade (if that's not possible, just delete them) and use the old layers.

    Mind-blowing that no one at Unity uses both rendering and physics enough to see how bad the current layers are.
     
    jason_yak, Qriva, Cranom and 2 others like this.
  43. Epsilon_Delta

    Epsilon_Delta

    Joined:
    Mar 14, 2018
    Posts:
    256
    Yes, that's what I meant.
    And no, not "also", not necessarily. I would like it to use only rendering layers and keep the physics layers completely unrelated. Another solution could be to not use the GameObject's layer at all; instead, the collider component would be on some layer and the renderer on some (other) layer, picked from the same layer list or, IMHO better, from a completely separate list of rendering layers.
    I don't know what the best solution is, but the current one is suboptimal. If I want some subsets of GameObjects to use the RenderObjects feature or be culled, and at the same time I want other subsets to collide or not collide, and these subsets have various intersections, it gets really complicated really fast.
     
    Last edited: Dec 9, 2022
  44. ZenTeapot

    ZenTeapot

    Joined:
    Oct 19, 2014
    Posts:
    65
    Exactly this, please, if it's different from the current state.
    The camera should use the rendering layer to cull and do whatever else, because the camera does the rendering, and it's a layer for rendering.

    For a concrete example:
    1. I have a prefab called wood_box that renders a box sprite and has a Box2D collider on it.
    2. I set the collider to the "SoftWall" physics layer, and the sprite renderer to a "WriteDepth" rendering layer. At the moment the only way to do this is to move the renderer to a child object, which doubles the object count.
    3. The reason I want this is that from a rendering perspective I only care whether an object writes its depth. Objects with "SoftWall" collision behavior may or may not write depth depending on the specific sprite they use, not on their collision behavior. These two things should really be decoupled.
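One partial alternative to the child-object trick on the rendering side is Renderer.renderingLayerMask, a uint bitmask separate from GameObject.layer that scriptable render pipelines can filter on. As discussed earlier in the thread, stock URP makes limited out-of-the-box use of it, so treat this as a sketch of the idea rather than a drop-in fix; the "SoftWall"/"WriteDepth" indices are invented for the example:

```csharp
using UnityEngine;

// Sketch: GameObject.layer carries the physics meaning, while the renderer's
// renderingLayerMask (SRP-only, a separate uint bitmask) carries the rendering
// meaning - no child object needed.
public class WoodBoxSetup : MonoBehaviour
{
    const int SoftWallLayer = 8;         // physics layer index (collision matrix)
    const uint WriteDepthMask = 1u << 2; // rendering layer bit (SRP features)

    void Awake()
    {
        gameObject.layer = SoftWallLayer;
        GetComponent<SpriteRenderer>().renderingLayerMask = WriteDepthMask;
    }
}
```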
     
    Last edited: Nov 5, 2022
  45. michalpiatek

    michalpiatek

    Joined:
    Feb 26, 2021
    Posts:
    81
    Physics and rendering layers should be totally separated.
    I posted a request/proposal for this back on Unity Feedback, and it was one of the highest-upvoted propositions in the physics category before Feedback was closed.

    In 2017 I posted this forum thread, where I also provide an actual real-life use case and explain why this separation makes sense.
    https://forum.unity.com/threads/separate-physics-and-rendering-layers.505580/
     
    goncalo-vasconcelos likes this.
  46. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    Can you update the roadmap as it's not completed?
     
  47. reinfeldx

    reinfeldx

    Joined:
    Nov 23, 2013
    Posts:
    164
    @LukasCh @LeonhardP Has there been any discussion at Unity and any feedback you can provide on the issues raised over the last couple of months in this thread? This continues to affect my work and implementing the recent suggestions here would be a huge win IMO. It feels like we're pretty close to the finish line on this.
     
  48. JuozasK

    JuozasK

    Unity Technologies

    Joined:
    Mar 23, 2017
    Posts:
    84
    I am not familiar with the whole thread, but on the question of saving some layers for physics calculations: you can now use collision layer overrides to define which layers each collider or rigidbody should ignore or collide with.
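For anyone skimming, the collision layer overrides mentioned here (available from Unity 2022.2) look roughly like this:

```csharp
using UnityEngine;

// Sketch of per-collider layer overrides: excludeLayers/includeLayers let a
// collider deviate from the global Layer Collision Matrix without consuming
// extra layers.
public class ProjectileCollisionSetup : MonoBehaviour
{
    void Awake()
    {
        var col = GetComponent<Collider>();
        // Never collide with anything on this object's own layer, regardless
        // of what the Layer Collision Matrix says.
        col.excludeLayers = 1 << gameObject.layer;
    }
}
```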
     
  49. Kamyker

    Kamyker

    Joined:
    May 14, 2013
    Posts:
    1,084
    That doesn't matter; these are still the same layers as rendering.

    If someone uses all 31 layers for rendering, they are forced to share layer names between rendering and physics. It's a mess we have been asking to have fixed for years...
     
  50. michalpiatek

    michalpiatek

    Joined:
    Feb 26, 2021
    Posts:
    81
    tl;dr
    We need total separation of the layer systems, one for each subsystem. Currently there is one set of layers that has to be shared between rendering, physics, post-processing volumes, gameplay, and navmesh. That's extremely limiting, for reasons explained in the first post of this thread.