
Separate Physics and Rendering layers

Discussion in 'Physics Previews' started by varfare, Nov 21, 2017.

  1. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
    https://feedback.unity3d.com/suggestions/separate-physics-and-render-laye

It is a longstanding Unity issue: layers are used for both physics and rendering. This is probably fine for small games, but when you start creating custom rendering buffers you can really get backed into a corner. For a few games I made in the past (Hard West, Ancient Space) I simply couldn't do certain graphics features due to this limitation. I just had to cut them out.

The only workaround for separating graphics and physics layers is to duplicate your gameobject and assign a different layer to each duplicate. That is not acceptable in a complex game for performance reasons (you are adding a considerably large number of new gameobjects). Imagine doing that for 20 skinned characters (not to mention the need to sync animations between duplicates). I just don't see a reason why layers are shared between physics and rendering. And why can one object be assigned to only one layer? That is a mystery to me.
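A minimal sketch of that duplication workaround, assuming hypothetical layer names `Normal` (render) and `Normal_VFX` (physics) already exist in the project's layer settings:

```csharp
using UnityEngine;

// Sketch of the workaround described above: split one logical object into a
// render-only GameObject and a collider-only child so each can sit on its
// own layer. Layer names here are assumptions, not Unity defaults.
public class SplitLayersExample : MonoBehaviour
{
    void Start()
    {
        // Render-only object on a layer the camera culls against.
        var visual = new GameObject("Crate_Visual");
        visual.layer = LayerMask.NameToLayer("Normal");       // assumed layer
        visual.AddComponent<MeshFilter>();
        visual.AddComponent<MeshRenderer>();

        // Collider-only child on a layer physics queries filter against.
        var colliders = new GameObject("Crate_Colliders");
        colliders.transform.SetParent(visual.transform, false);
        colliders.layer = LayerMask.NameToLayer("Normal_VFX"); // assumed layer
        colliders.AddComponent<BoxCollider>();
    }
}
```

Every logical object becomes (at least) two gameobjects, which is exactly the overhead the post is complaining about.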
     
    palex-nx, Maeslezo, Sylmerria and 9 others like this.
  2. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
OK, here's an actual production example explaining why this is important to me.


    That video is time-stamped.

During production of this game we had two "dimensions", only one visible at a time: one called Normal and another called Nightmare. We switched rendering between dimensions by assigning objects to either the Normal or Nightmare layer. So if the Nightmare dimension was being shown, the Nightmare layer was rendered and Normal was excluded. This was also used for physics for obvious reasons - we didn't want to do raycasts against, or interact with, the currently invisible layer. That worked well.

What was not working well were custom render buffers. I wanted to implement screen-space subsurface scattering for characters and foliage. Normally I would assign all foliage to a custom layer called Foliage, render SSS textures into a buffer, and do some light calculations in a shader. Easy. What I couldn't actually do is exactly that - assign all foliage to a new layer - because we already had to assign all gameobjects to either the Normal or Nightmare layer.
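The layer-based buffer approach being described might look roughly like this: a secondary camera whose culling mask only sees the hypothetical Foliage layer, rendering into an off-screen texture (the `_FoliageSSS` shader name is made up for illustration):

```csharp
using UnityEngine;

// Sketch of the standard layer-based approach the post says was blocked:
// render everything on a "Foliage" layer into an off-screen buffer with a
// secondary camera. "Foliage" and "_FoliageSSS" are assumed names.
public class FoliageBufferExample : MonoBehaviour
{
    public Camera mainCamera;
    RenderTexture sssBuffer;

    void Start()
    {
        sssBuffer = new RenderTexture(Screen.width, Screen.height, 16);

        var go = new GameObject("FoliageBufferCamera");
        var bufferCamera = go.AddComponent<Camera>();
        bufferCamera.CopyFrom(mainCamera);
        bufferCamera.cullingMask = LayerMask.GetMask("Foliage"); // assumed layer
        bufferCamera.targetTexture = sssBuffer;

        // Expose the buffer to shaders for the lighting pass.
        Shader.SetGlobalTexture("_FoliageSSS", sssBuffer);
    }
}
```

The whole technique hinges on `cullingMask`, which is why the foliage needs its own layer - the exact thing the Normal/Nightmare split already used up.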

But I could have used tags, right? Not really, as they were already being used for gameplay purposes. These days I think I could achieve similar results with command buffers, but they take significantly more time to set up due to the lack of proper examples, and sometimes you just need the simplicity that layers offer.
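For reference, the command-buffer alternative mentioned above could be sketched like this - drawing a hand-picked set of renderers without touching their layers (the field names and material are hypothetical):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hedged sketch of the command-buffer alternative: draw explicitly listed
// renderers with a replacement material, no layer assignment required.
// Attach to the camera; foliageRenderers/sssMaterial are assumptions.
public class FoliagePassExample : MonoBehaviour
{
    public Renderer[] foliageRenderers;   // collected by gameplay code
    public Material sssMaterial;          // hypothetical SSS material

    void OnEnable()
    {
        var cb = new CommandBuffer { name = "Foliage SSS" };
        foreach (var r in foliageRenderers)
            cb.DrawRenderer(r, sssMaterial);

        GetComponent<Camera>().AddCommandBuffer(
            CameraEvent.AfterForwardOpaque, cb);
    }
}
```

The trade-off matches the post: full control, but you now maintain the renderer list yourself instead of letting the layer system do the filtering.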

Another issue we had, which I don't see being solved by the latest Unity, was assigning objects to different physics layers. Basically, we wanted to trigger destruction on each object touched by a passing bullet, and I wanted to use physics for the ragdolls of the debris. Super easy, but for various gameplay reasons I could not have any colliders on the Normal or Nightmare layers. This means I needed my own physics layer to assign my colliders to so they could interact.

So the issue was that we had these Normal and Nightmare dimensions which we were switching between. Let's say I have a destroyable crate which is seen only in the Normal dimension and is constructed out of 6 individual pieces:
• I had to create 6 gameobjects with rendering components only and assign them to the Normal layer
• To each piece I had to add a child object with colliders only and assign it to the Normal_VFX layer so that I would be interacting with the proper physics objects
• That's a total of 12 gameobjects in the best case
    Let's say that now I need to use the same destroyable crate in both Normal and Nightmare dimensions. I would have to:
    • Create 6 gameobjects with rendering components only, assign them to Normal Layer
    • Create 6 child objects with colliders and assign them to Normal_VFX layer
    • Create 6 child objects with rendering components only, assign them to Nightmare layer
    • Create 6 child objects with colliders and assign them to Nightmare_VFX layer
    • That's a total of 24 gameobjects
All these problems would be solved if we could just assign the same gameobject to a few different layers, or have separate rendering and physics layers.
     
    Last edited: May 2, 2020
    recursive, WilkerLucio and Peter77 like this.
  3. PhilSA

    PhilSA

    Joined:
    Jul 11, 2013
    Posts:
    1,374
    +1 for this
     
    WilkerLucio likes this.
  4. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
Can we get an official Unity clarification on that? Normally I would not push so hard for an official response, but Unity staff always suggest filing a feedback ticket. One was created in 2011 and has 278 votes so far, so what else is there for me to do?
     
    Last edited: Nov 30, 2017
    DragonSix, ZiadJ and Peter77 like this.
  5. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
    Bump.
     
  6. SpAM_CAN

    SpAM_CAN

    Joined:
    Jun 5, 2016
    Posts:
    1
    This has been a requested feature since 2011, and they've ignored it repeatedly. I doubt it's gonna happen, though I'd certainly love for this to be added. It makes zero sense that physics and rendering are linked like they are.
     
  7. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,729
But physics and rendering aren't coupled - only in your mind. Just abstract it with your own naming scheme.
     
  8. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
Not sure if this is sarcasm or not, so forgive me if it is ;)

Physics and graphics are coupled simply because the same layer system is used for both camera culling and physics channels. If you need a gameobject to be on Layer A for camera culling but on Layer B for physics, both at the same time, you need to clone the gameobject, place one copy on Layer A and the other on Layer B. This way one can be culled by the camera and the other can be used for physics. Like I said - not a problem for a small game, but for something larger with a complex physics and culling setup and a large number of gameobjects, this spins out of control quite fast. Have I mentioned that the maximum number of layers allowed is 32?
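The coupling being described is visible directly in the API: the single `gameObject.layer` index feeds both the camera's culling mask and the layer mask passed to physics queries. A small sketch:

```csharp
using UnityEngine;

// One layer index is consumed by both systems: the camera's cullingMask
// decides whether this object is drawn, and the same bit passed to
// Physics.Raycast decides whether it can be hit. There is no per-system
// layer on the object itself.
public class SharedLayerExample : MonoBehaviour
{
    void Update()
    {
        int mask = 1 << gameObject.layer;   // one value for everything

        bool rendered = (Camera.main.cullingMask & mask) != 0;
        bool hittable = Physics.Raycast(transform.position, Vector3.down,
                                        10f, mask);

        Debug.Log($"rendered: {rendered}, raycast can hit this layer: {hittable}");
    }
}
```

Changing the layer for one purpose unavoidably changes it for the other, which is why the clone-per-layer workaround exists.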
     
    hippocoder likes this.
  9. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,729
    Don't forget light culling. Although you only get (4?) of those to play with.

    Not sure why Transparent FX is sitting next to Ignore Raycast. They must really get along like best buds. I'm sure Unity was extremely pro about this part!

Why can't we even change 8 of them? That alone would help a lot. But that is what I meant by being able to repurpose their meanings. The entire thing is illogical and, I think, for Unity's convenience more than ours.

HAVING SAID THAT... I don't actually use more than a handful of layers in any of the projects I've worked on, big or small, because I generally design things that don't rely on Unity systems all that much (Unity is usually just used for physics).
     
  10. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
Is anyone watching the physics forums? What's the purpose of them if no one from Unity staff is watching? It looks quite dead. Sorry if I am being too rude, but it seems that some Unity areas have better community support than others. For example, communicating with the particle system developers is really easy and straightforward, whilst getting any information on the terrain system or physics is basically impossible.
    @Adam-Mechtley @MortenSkaaning @yant

    @hippocoder
Would you mind sharing your secret on how you abstract away from Unity's layers?
     
    Prodigga likes this.
  11. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,729
Well, I just reuse the blank ones and the 'reserved' ones, and have a class of constants instead so the actual names don't matter. I admit it's of limited use to some, and it's stupid that Unity mixes physics and rendering. Although I'm struggling to find a situation where I can fill it up - even in complex titles over the last 8 years or so of doing stuff in Unity.
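A sketch of that constants-class approach - semantic names live in code, so the actual editor layer slots (the numbers below are hypothetical) can be repurposed freely:

```csharp
// One way to "abstract with your own naming scheme": keep the semantic
// names in a constants class so gameplay code never hard-codes Unity's
// layer slot names. The slot assignments here are assumptions.
public static class Layers
{
    public const int NormalRender     = 8;   // repurposed blank slots
    public const int NormalPhysics    = 9;
    public const int NightmareRender  = 10;
    public const int NightmarePhysics = 11;

    // Build a bitmask from any number of layer indices.
    public static int Mask(params int[] layers)
    {
        int mask = 0;
        foreach (var layer in layers)
            mask |= 1 << layer;
        return mask;
    }
}
```

Usage would be e.g. `Physics.Raycast(origin, dir, dist, Layers.Mask(Layers.NormalPhysics))`, so renaming a slot in the editor never breaks code.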

Even if I don't have a problem with it, I recognise it's a problem for a lot of people and the design is absolute nonsense. A mandatory Water layer? Really, Unity? Since when do people even desire Unity's water, or use it? These layers should never, ever be shared with Unity internals anyway.

    I guess it basically is one of those neglected areas of Unity. It works fine, it's just lower priority I suppose.
     
    ZiadJ and Alverik like this.
  12. thibouf

    thibouf

    Joined:
    Mar 17, 2017
    Posts:
    73
    Just found this thread and wanted to add one voice here... What a strange design decision ...
     
  13. Peter77

    Peter77

    QA Jesus

    Joined:
    Jun 12, 2013
    Posts:
    4,771
varfare gave a really good real-world example of why separating layers makes sense. Too bad Unity staff aren't responding, or perhaps not even looking at this thread.
     
    laurentlavigne likes this.
  14. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,729
I'm 100% sure they know, but there is not much user pain from this - just vague examples and plenty of workarounds. You can see why it's not high priority.

I've just had to moderate laurent posting links to this thread in the beta forum for 2017.3 (which I deleted because there's a time and a place for nagging, and it's totally unrelated to the beta). There need to be more actual ship-stopping issues for it to get bumped up in priority.
     
    Alverik likes this.
  15. yant

    yant

    Unity Technologies

    Joined:
    Jul 24, 2013
    Posts:
    402
    Hi,

Yes, we've been using this shared layering system for a while now. When introduced, it was a natural and pretty convenient tool for helping developers learn parts of Unity: features that looked similar actually shared a concept, so one had to learn it only once. It's clear, though, that the approach is fairly limiting these days. As was noticed above in the thread, we don't allow assigning GOs to multiple layers. We also use bitmasks for filtering, so there is a natural limit to the number of layers we can afford: the number of bits in a processor word. It's 32 at the moment, and could probably be bumped up to 64 should we decide to deprecate a few platforms. I reckon that's a stretch anyway.

That said, a number of options were considered. One particularly interesting one was to make it so that each physics object could belong to multiple layers at the very same time. A simple, mask-based physics query could then be used to discover, for example, all objects visible to the camera but not NPC-produced projectiles. On the physics team, we were discussing that with the editor folks to align our roadmaps and make sure we don't do the same work twice in slightly different ways. As a result, it's clear we need to start working on this, but the dates are not quite finalised yet. That's why there might be some perception of the problem being ignored, while that's definitely not the case at all. Going to get that done for sure.

    Hope that helps.

    Anthony,
    R&D Physics
     
    Maeslezo, DragonSix, Sluggy and 7 others like this.
  16. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,729
    I would just like to set up what is colliding with what layer in code as well, seems a bit painful just using the editor.
     
  17. yant

    yant

    Unity Technologies

    Joined:
    Jul 24, 2013
    Posts:
    402
Yes, sure. I guess two objects could be passed to the broadphase for collision computation once &-ing their masks gives a non-zero value (i.e. if they belong to at least one common layer). The old pair-wise collision ignoring should remain as is, I think. That would give full scripting coverage as well as full editor coverage. Makes sense?
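The filtering rule being proposed reduces to a single bitwise test - two objects are broadphase candidates iff their multi-layer masks share at least one bit. A sketch of that check (the class and method names are illustrative, not Unity API):

```csharp
// Sketch of the proposed multi-layer broadphase filter: each object
// carries a 32-bit mask of the layers it belongs to, and a pair is
// considered for collision iff the masks overlap.
public static class BroadphaseFilter
{
    public static bool ShouldCollide(uint maskA, uint maskB)
    {
        // Non-zero AND means at least one shared layer bit.
        return (maskA & maskB) != 0;
    }
}

// ShouldCollide(0b0011, 0b0010) -> true  (shared bit 1)
// ShouldCollide(0b0001, 0b0010) -> false (no shared bits)
```

This is why the limit is tied to the word size: one layer per bit, 32 layers per 32-bit mask.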
     
    varfare likes this.
  18. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,729
    Sure, if the performance is the same, I'm happy.
     
  19. yant

    yant

    Unity Technologies

    Joined:
    Jul 24, 2013
    Posts:
    402
Right now it's an indirection: a linear table lookup alongside a few bitwise operations. In the future it should be just a bitwise operation, so I'd expect it to perform even better.
     
    Alverik and varfare like this.
  20. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
    Sure it does! Thank you for the reply.

Would it be safe to assume that this might appear sometime next year? Or is it at the back of the backlog?
     
  21. yant

    yant

    Unity Technologies

    Joined:
    Jul 24, 2013
    Posts:
    402
Yes, I'd expect some updates on this next year. As for priorities, things are so dynamic that it's hard to say what gets rolled out after what.
     
    Alverik and varfare like this.
  22. CDF

    CDF

    Joined:
    Sep 14, 2013
    Posts:
    850
    I am very happy about this.
     
  23. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
OK, so I started testing the new Post Processing v2 beta and I am really concerned now. The new post-processing system requires the user to define which layers will be post-processed. This will be massively problematic if layers are strictly defined by gameplay and artists are not able to fully use this selective post-processing feature. I know @yant already said that physics and rendering layers will be split this year, but I am voicing my concern that this should happen sooner rather than later, especially with new graphics features coming which seem to be heavily reliant on layers.
     
    IgnisIncendio and Deleted User like this.
  24. dadude123

    dadude123

    Joined:
    Feb 26, 2014
    Posts:
    790
    Sorry, I don't understand the problem.
    You simply set the camera to some layer, and then your post processing volumes to the same layer; and that's it.

It sounds like you're saying that some 3D objects get excluded from post-processing because they're on a different layer (which doesn't really make much sense, because post-processing operates on the final image - that's why it's called post-processing), so what am I missing here?
     
  25. deab

    deab

    Joined:
    Aug 11, 2013
    Posts:
    83
Just started using physics recently, and tripped up a few times with the layers, having to move colliders onto their own gameobjects in some cases. It does seem crazy that the same system is used for rendering and physics.
     
  26. sebastiansgames

    sebastiansgames

    Joined:
    Mar 25, 2014
    Posts:
    114
I agree that this should be changed. Separate physics and render layers! As is, my projects are a tangled mess of weird layer names - renderInForegroundButHittable, etc. The current linked system quickly leads to errors as I move stuff to different layers for rendering but forget they're there for physics.

    EDIT: found the feedback feature request and added my votes:
    https://feedback.unity3d.com/suggestions/separate-physics-and-render-laye
     
    Last edited: Apr 2, 2018
    ArachnidAnimal and deab like this.
  27. deab

    deab

    Joined:
    Aug 11, 2013
    Posts:
    83
    Added my votes.
     
  28. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,289
    try { vote += inf } catch (e) { ++vote }
     
    reinfeldx and joshcamas like this.
  29. joshcamas

    joshcamas

    Joined:
    Jun 16, 2017
    Posts:
    1,018
I agree completely. It's very painful to have to create many more gameobjects just so a mesh can be culled on a certain layer while also being an object that handles overlaps.
     
  30. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    1,541
I'd like to add that if Unity ever reworks this, I hope they change the overlap setup at the same time. I didn't realize this for a while, since it's been so long since I've actually used Unity physics, but it's actually impossible to use the same collider for colliding with some objects and overlapping with others (so that it fires overlap events, which Unity calls triggers for some reason). To do this now, you need to add the same collider twice to your gameobject and mark one as a trigger, but then you can't change the collision matrix for the trigger alone if that is necessary, as both are bound to the same layer (unless you put the other collider in a child so you can swap its layer, but that makes an ugly structure).

Basically this could be solved by giving colliders a collision type selection - "collision, trigger, all" - instead of the current "is trigger" checkbox. For "all" and "trigger", one could set up a second collision matrix in the physics settings, doing the same thing but this time for triggers. This would still be a bit more limited than what UE4, for example, does (they basically let you make the collision and overlap selections per layer, per rigidbody), but it would still solve most of the issues with the current setup, as you could then use the same collision setup to get both hits and overlaps (on different objects).
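The current double-collider workaround being criticized can be sketched as follows - the same shape added twice on one gameobject, once solid and once as a trigger, both necessarily sharing the object's single layer:

```csharp
using UnityEngine;

// Sketch of today's workaround: the same shape added twice, once as a
// solid collider and once as a trigger. Both share gameObject.layer, so
// they cannot have independent collision matrices without moving one
// into a child object on a different layer.
public class SolidPlusTriggerExample : MonoBehaviour
{
    void Awake()
    {
        gameObject.AddComponent<SphereCollider>();          // solid: contacts

        var trigger = gameObject.AddComponent<SphereCollider>();
        trigger.isTrigger = true;                           // overlap events
    }

    void OnCollisionEnter(Collision c) => Debug.Log($"contact: {c.collider.name}");
    void OnTriggerEnter(Collider other) => Debug.Log($"overlap: {other.name}");
}
```

A per-collider "collision / trigger / all" mode with its own matrix would make the second collider (and the child-object trick) unnecessary.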
     
    hippocoder likes this.
  31. iamarugin

    iamarugin

    Joined:
    Dec 17, 2014
    Posts:
    429
One year has passed. Any news?
     
  32. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
    It seems that the answer is: no.
     
  33. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,289
ECS seems the best option by far now.
     
  34. joshcamas

    joshcamas

    Joined:
    Jun 16, 2017
    Posts:
    1,018
    What does ECS have to do with this?
     
  35. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,289
With ECS you can render most objects without worrying about layers at all, as you can create your own, hence staying away from the current layer system. I am just pointing out that this is an option.
     
  36. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
    And there we are, almost one and a half years later. Not sure what to say really.
     
  37. Ben-BearFish

    Ben-BearFish

    Joined:
    Sep 6, 2011
    Posts:
    1,197
    Any updates or roadmap announcements?
     
  38. joshcamas

    joshcamas

    Joined:
    Jun 16, 2017
    Posts:
    1,018
    Still think this is a good idea. Godot achieves this by having a mask and layer bitmask, which are good names imo
     
  39. cecilcruxis

    cecilcruxis

    Joined:
    Sep 17, 2013
    Posts:
    108
We should probably wait for GDC on Monday. They have a keynote live stream where they are going to announce a lot of stuff - at least, that is what I heard; I think they made a blog post about it on the site. Hopefully we hear more on Monday during the live stream about what is coming.
     
  40. Ben-BearFish

    Ben-BearFish

    Joined:
    Sep 6, 2011
    Posts:
    1,197
    Was this part of any announcements?
     
  41. cecilcruxis

    cecilcruxis

    Joined:
    Sep 17, 2013
    Posts:
    108
None that I have seen. The main physics thing coming is Unity's revamped physics engine based on DOTS, which will be a new way to do physics in Unity. We also have the Havok physics engine being put into Unity. I don't know much about how either works, so I'm not sure how layers work with them, or whether they still act like the old layer system. So I don't have any concrete information, but there is a video showing off the new physics system.
     
  42. PeterAshford

    PeterAshford

    Joined:
    Mar 4, 2019
    Posts:
    2
Goddammit! I can't believe they *STILL* haven't fixed this. It's causing me all sorts of pain.
     
  43. Tarrag

    Tarrag

    Joined:
    Nov 7, 2016
    Posts:
    200
  44. ccvannorman

    ccvannorman

    Joined:
    Jan 20, 2013
    Posts:
    9
Bump. +1 for separate layers for physics and rendering - it's especially difficult for multiplayer games where you want to cull rendering based on team color.
     
  45. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,289
Not really. You just control from script which objects to display (enable/disable). This way you can just hide the meshes of other teams, and you can do this once, for example, rather than every frame.
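That suggestion can be sketched in a few lines - toggle renderers directly instead of spending a culling layer per team (the class and method names are illustrative):

```csharp
using UnityEngine;

// Sketch of the renderer-toggling alternative: hide a hierarchy's meshes
// without touching layers. Colliders stay active, so physics is unaffected.
public static class TeamVisibility
{
    public static void SetVisible(GameObject root, bool visible)
    {
        foreach (var r in root.GetComponentsInChildren<Renderer>())
            r.enabled = visible;
    }
}
```

Called once when team visibility changes, e.g. `TeamVisibility.SetVisible(enemySquad, false);`. As the follow-up posts note, this is per-scene state, so it doesn't help when different cameras must see different sets of objects simultaneously.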
     
    joshcamas likes this.
  46. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
    You can't enable/disable specific objects per camera.
     
  47. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    7,289
You can for each client individually in the case of multiplayer. Most games use a single camera at a time anyway, so I don't see much of an issue for such cases. Maybe a second one as a radar in some specific uses.
     
  48. varfare

    varfare

    Joined:
    Feb 12, 2013
    Posts:
    227
Sure, but you can't do that in a single-player game, which is the issue I raised in the original post. Today I'd probably use custom rendering to fetch the objects which need to be rendered for each given camera, but this is much more work than separate rendering and physics layers would take to set up, if they were implemented.
     
    Tarrag likes this.
  49. Hannibal_Leo

    Hannibal_Leo

    Joined:
    Nov 5, 2012
    Posts:
    134
The Scriptable Render Pipeline also requires certain objects to be on specific layers to be rendered behind walls, as shown in this video:


Well, I can't just mess up all the layers in my physics-based game just for the sake of rendering objects like in the video...
And using the experimental DOTS Physics is no fun at all. Making ragdolls? I wasn't able to create any collider at runtime - and that's before combining it with runtime destruction physics.

We've got the "new" nested prefabs and a few new buttons in the UI to make them work. Can't we have two separate layer systems for physics and rendering in a similar style? Or the same system used in ECS/DOTS Physics for PhysX?
     
  50. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    646
    In DOTS Physics there's no correlation between Render Layer and Physics Mask Bits.
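For anyone curious how that separation looks in practice: DOTS Physics (the `Unity.Physics` package) filters collisions with its own `CollisionFilter` struct, entirely independent of the render layer. A hedged sketch, with the bit assignment as an assumption:

```csharp
using Unity.Physics;   // DOTS Physics package (com.unity.physics)

// In DOTS Physics the physics mask is its own data: BelongsTo and
// CollidesWith are plain 32-bit masks on the collider, with no tie to
// the GameObject/render layer. The "debris" bit below is hypothetical.
public static class Filters
{
    public static readonly CollisionFilter Debris = new CollisionFilter
    {
        BelongsTo    = 1u << 4,   // assumed "debris" category bit
        CollidesWith = ~0u,       // collides with every category
        GroupIndex   = 0
    };
}
```

Two colliders interact when `(a.BelongsTo & b.CollidesWith) != 0` in both directions, which is essentially the multi-layer mask scheme yant described earlier in the thread.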
     
    joshcamas likes this.