Need Some Advice On Collision Detection Design

Discussion in 'Scripting' started by p1zzaman, Apr 20, 2019.

  1. p1zzaman

    Joined:
    Jan 1, 2017
    Posts:
    64
    So I'm making something for VR, and the design pattern I'm following for my MonoBehaviour scripts is to have different state interfaces and implementations and switch between states.

    My question is about keeping a consistent pattern for collision logic when my VR controller touches an object, or touches it and presses a button (such as grabbing and tossing the object). I want the controller to detect the collision and invoke the necessary action on the target.

    At the moment, I have all the detection logic in the controller script itself.

    Basically along the lines of:

    Code (CSharp):
    private void OnTriggerEnter(Collider other)
    {
        // Note: Unity only calls this message if it's spelled OnTriggerEnter.
        if (TagUtil.isTerritory(other.tag))
        {
            this.interactableEntity = other.gameObject.GetComponent<IInteractable>();
            if (this.interactableEntity != null)
            {
                // Delegate the reaction to the target object.
                this.interactableEntity.setEnterColor();
                this.interactableEntity.toInteractionMode();
            }
        }
    }
    Here IInteractable is implemented by a MonoBehaviour on the object, and toInteractionMode() is a method in that script that switches the script's state. Depending on the current state, setEnterColor() can mean different things.
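
    For illustration, a rough sketch of how these pieces fit together on the target object's side (the state names and the Renderer-based colour change are made up for the example; only IInteractable, setEnterColor() and toInteractionMode() are from my actual code):

    Code (CSharp):
    using UnityEngine;

    public interface IInteractable
    {
        void setEnterColor();
        void toInteractionMode();
    }

    public class GrabbableObject : MonoBehaviour, IInteractable
    {
        private enum State { Idle, Hovered, Grabbed }
        private State state = State.Idle;

        public void toInteractionMode()
        {
            // Switch this script's state when the controller enters.
            state = State.Hovered;
        }

        public void setEnterColor()
        {
            // What the "enter color" means depends on the current state.
            GetComponent<Renderer>().material.color =
                state == State.Grabbed ? Color.green : Color.yellow;
        }
    }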

    That all works. However, I realized that a lot of the logic is weighted toward the controller side, and the objects in the game themselves don't have any detection logic for when a controller touches them.

    There are some downsides to keeping it all in one place. It makes coding a bit more cumbersome: some classes of objects implement IHasActions, some implement IInteractable, etc., and I have to check which interfaces are present before invoking the necessary calls (see the sketch below). Whereas if I put that logic in the target object's script itself, it's encapsulated there.
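
    For example, the controller's checks end up looking something like this (performAction() and the IHasActions member shown are just illustrative names, not my real code):

    Code (CSharp):
    using UnityEngine;

    // performAction() is an assumed member for this sketch.
    public interface IHasActions
    {
        void performAction();
    }

    public class ControllerCollision : MonoBehaviour
    {
        private void OnTriggerEnter(Collider other)
        {
            // One branch per interface the controller knows about.
            var interactable = other.GetComponent<IInteractable>();
            if (interactable != null)
            {
                interactable.toInteractionMode();
            }

            var actionHolder = other.GetComponent<IHasActions>();
            if (actionHolder != null)
            {
                actionHolder.performAction();
            }
        }
    }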

    Is it common to place the detection logic in one place like this, either in the source or in the target? Or do people tend to mix it up a bit? My only concern with mixing it up is that the logic gets scattered and hard to track.

    Hope this makes sense. Thanks for the advice.
     
  2. GroZZleR

    Joined:
    Feb 1, 2015
    Posts:
    3,201
    I would definitely make the objects implementing your interface perform all of the logic related to being invoked, rather than one master controller class. Imagine a similar situation in a different context, like a user interface with an IClickable interface. It would be madness to have one master class defining all of the logic for hundreds of UI widgets when OnClick is called.
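
    Something like this, where each widget owns its own reaction and the framework only dispatches the event (all names here are just illustrative):

    Code (CSharp):
    using UnityEngine;

    public interface IClickable
    {
        void OnClick();
    }

    public class CloseButton : MonoBehaviour, IClickable
    {
        public void OnClick()
        {
            // Only this widget knows what clicking it means.
            gameObject.SetActive(false);
        }
    }

    public class VolumeSlider : MonoBehaviour, IClickable
    {
        public void OnClick()
        {
            Debug.Log("Begin dragging the volume handle.");
        }
    }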
     
  3. Suddoha

    Joined:
    Nov 9, 2013
    Posts:
    2,824
    In this specific case I'd move the entire logic to a component that you can attach to the entity. The controlling component shouldn't need to tell the interactables that something has entered - that part is taken care of by the engine.

    So the engine messages serve as the notification mechanism that tells the entity's components to start (enter) and stop (exit) their "reactive behaviour".

    But let's first discuss the issues with the current approach:
    What if your next "interactable" does not need to change a color upon entering, but needs to play an animation, or a sound?

    Dirty solutions:
    - Implement all the additional stuff in the existing method, whose name currently suggests it only sets a color (that's terrible; renaming would be required)
    - Add additional methods to the IInteractable interface - its responsibilities grow, and much of that isn't related to being an interactable (based on its name, the interface should limit itself to exactly what the "ability to interact" suggests). Additionally, all implementors would need methods they might not even use at all
    - Add additional interfaces / component types that you query via GetComponent - this is somewhat generic, but requires querying every interface / component that might be attached.

    Better solutions with that sort of centralized control:
    - similar to the first dirty solution above, but far more abstract: rename the interactable's method to something that doesn't indicate a specific action (like a color change), but instead tells the component what state it needs to transition to - something that expresses "you entered its trigger zone" (see the sketch below).
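
    For example (the method names here are just suggestions; the point is that the contract describes the event, not one particular reaction):

    Code (CSharp):
    public interface IInteractable
    {
        void OnControllerEnter();   // was setEnterColor() / toInteractionMode()
        void OnControllerExit();
    }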

    In fact you can look at it from the perspective of UI development; it's essentially the same sort of problem.

    Now, since the engine already sits in the background, you don't have to explicitly tell the component that you entered or exited - the engine does that itself by sending the physics messages.
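
    A minimal sketch of such a self-contained reactive component, assuming the controller's collider is tagged "Controller" (the colour change is just an example reaction):

    Code (CSharp):
    using UnityEngine;

    public class InteractableHighlight : MonoBehaviour
    {
        private void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("Controller"))
            {
                // Start the reactive behaviour; entering is signalled by the engine.
                GetComponent<Renderer>().material.color = Color.yellow;
            }
        }

        private void OnTriggerExit(Collider other)
        {
            if (other.CompareTag("Controller"))
            {
                // Stop the reactive behaviour.
                GetComponent<Renderer>().material.color = Color.white;
            }
        }
    }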

    With that you can attach multiple small components that each do their own thing (component-based design, and easy to refactor for ECS). Or use one coordinating component that assembles the reactive behaviour from other attached components which don't react to the engine messages themselves (which might save some engine message calls).
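
    A possible shape for the coordinating variant (all names illustrative): a single component receives the engine messages and fans them out to attached reaction components that don't listen themselves.

    Code (CSharp):
    using UnityEngine;

    public abstract class InteractionReaction : MonoBehaviour
    {
        public abstract void OnInteractionEnter();
        public abstract void OnInteractionExit();
    }

    public class InteractionCoordinator : MonoBehaviour
    {
        private InteractionReaction[] reactions;

        private void Awake()
        {
            // Gather every reaction attached to this entity once.
            reactions = GetComponents<InteractionReaction>();
        }

        private void OnTriggerEnter(Collider other)
        {
            foreach (var reaction in reactions) reaction.OnInteractionEnter();
        }

        private void OnTriggerExit(Collider other)
        {
            foreach (var reaction in reactions) reaction.OnInteractionExit();
        }
    }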