
Question What is sent over the wire for an avatar's animation state?

Discussion in 'Netcode for GameObjects' started by MrBigly, Nov 3, 2022.

  1. MrBigly

    MrBigly

    Joined:
    Oct 30, 2017
    Posts:
    221
In an MP configuration, the owner of an avatar sends some information to the authority, which is then replicated to the other clients. Obviously the position and aiming/orientation are sent and replicated. But what about the animation state?

    Given that a game could have many animation states blending together, I can't imagine that animation state information is sent, or is it?

    And if not, how do the other clients know what animation clips to blend and at what points in their clips?

I looked over Netcode for GameObjects a few months ago and don't remember seeing anything that addressed animations directly or indirectly.

edit: I came across a video that discussed passing animation state over the network, but it covered only a simplistic use case: a single value of +1 (forward), 0 (idle), or -1 (backward). I am interested to know how others handle animation replication to clients.
     
    Last edited: Nov 5, 2022
  2. CodeSmile

    CodeSmile

    Joined:
    Apr 10, 2014
    Posts:
    6,596
You have the NetworkAnimator for basic sync.
Other than that, it's really more about replicating the animation states whenever they change using a NetworkVariable. Or even simpler: if the object's position changes, it goes into the "walk" anim, and speed is determined by the difference from the last position - no anim state needs to be transferred at all.
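To make the NetworkVariable idea concrete, here is a minimal sketch. It assumes NGO 1.x with owner-authoritative writes; the class name, the `AnimState` enum, the thresholds, and the `"State"` Animator parameter are all my own inventions, not part of the package:

```csharp
using Unity.Netcode;
using UnityEngine;

// Hypothetical sketch: replicate one coarse animation state via a
// NetworkVariable instead of syncing individual Animator parameters.
[RequireComponent(typeof(Animator))]
public class AvatarAnimSync : NetworkBehaviour
{
    public enum AnimState : byte { Idle, Walk, Run }

    // Owner writes, everyone reads; one byte goes out only on change,
    // not every frame.
    private readonly NetworkVariable<AnimState> _state =
        new NetworkVariable<AnimState>(AnimState.Idle,
            NetworkVariableReadPermission.Everyone,
            NetworkVariableWritePermission.Owner);

    private Animator _animator;
    private Vector3 _lastPosition;

    private void Awake() => _animator = GetComponent<Animator>();

    public override void OnNetworkSpawn()
    {
        _state.OnValueChanged += (_, next) => ApplyState(next);
        ApplyState(_state.Value);
        _lastPosition = transform.position;
    }

    private void Update()
    {
        if (!IsOwner) return;

        // Owner derives the coarse state locally; only changes replicate.
        float speed = (transform.position - _lastPosition).magnitude / Time.deltaTime;
        _lastPosition = transform.position;

        _state.Value = speed < 0.1f ? AnimState.Idle
                     : speed < 2.5f ? AnimState.Walk
                     : AnimState.Run;
    }

    private void ApplyState(AnimState s)
    {
        // Map the replicated state onto a local Animator parameter.
        _animator.SetInteger("State", (int)s);
    }
}
```

Because the variable only serializes on change, a character standing still or running for seconds at a time costs nothing extra on the wire.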
     
    MrBigly likes this.
  3. MrBigly

    MrBigly

    Joined:
    Oct 30, 2017
    Posts:
    221
I need to study up on NetworkAnimator to see how it works exactly. But I figured: why send any animation state at all, since I already need to send position and aim information, and I can deduce the animation from that like you described? That is what I am thinking of doing, and I should save a lot of bandwidth in the process, given the more than a dozen clips that could potentially be blended together. I was wondering if anyone else thought this was a good approach, so thank you for your reply.
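A rough sketch of that "deduce it from the transform" idea, for anyone reading later. On remote clients it infers blend-tree parameters from the already-replicated position; the class name and the "Speed"/"Strafe" parameter names are assumptions for illustration:

```csharp
using Unity.Netcode;
using UnityEngine;

// Hypothetical sketch: no animation state on the wire at all.
// Remote clients infer locomotion blend parameters from the
// replicated transform, which is being synced anyway.
[RequireComponent(typeof(Animator))]
public class InferredLocomotion : NetworkBehaviour
{
    private Animator _animator;
    private Vector3 _lastPosition;

    private void Awake()
    {
        _animator = GetComponent<Animator>();
        _lastPosition = transform.position;
    }

    private void Update()
    {
        if (IsOwner) return; // the owner animates from real input

        Vector3 delta = transform.position - _lastPosition;
        _lastPosition = transform.position;

        // Project movement into local space so the blend tree can tell
        // forward/backward from strafing without any extra traffic.
        Vector3 local = transform.InverseTransformDirection(delta) / Time.deltaTime;

        // Damped SetFloat smooths over network jitter in the delta.
        _animator.SetFloat("Speed", local.z, 0.1f, Time.deltaTime);
        _animator.SetFloat("Strafe", local.x, 0.1f, Time.deltaTime);
    }
}
```

The trade-off is that purely inferred parameters can stutter when the replicated position is interpolated or jittery, which is why the damping overload of SetFloat is used here.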