[NetCode] Larger server-client delay?

Discussion in 'NetCode for ECS' started by florianhanke, Feb 12, 2020.

  1. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    I'm currently working on a relatively slow-moving WW2 tactics game, and have recently implemented NetCode. It works well, but is not a perfect fit.

    For example, the commands sent to the server can tolerate a delay of up to two seconds, since they don't need near-immediate feedback on other clients the way the DOTS sample shooter does. Clients can also be up to two seconds behind the server. Basically, the clients are mostly a view onto the simulation running on the server, with non-realtime, limited commands coming from the clients.

    Since realtime isn't that important, the clients could keep larger buffers of frames coming from the server, and thus there would be fewer hiccups and slowdowns/speedups.

    Is it currently possible in NetCode to allow for a larger delay, so that clients lag further behind the server? Could I – as hinted at in https://docs.unity3d.com/Packages/com.unity.netcode@0.0/manual/time-synchronization.html – use my own NetworkTimeSystem to calculate an earlier server time to present on the client, giving a larger delay?

    Thanks in advance! :)
     
  2. timjohansson

    timjohansson

    Unity Technologies

    Joined:
    Jul 13, 2016
    Posts:
    473
    I don't think increasing the delay will change how much the game slows down / speeds up, but it will make it deal better with packet loss and highly variable ping, so you get fewer hiccups. The speedups / slowdowns will get better once the bug where we overcompensate for bad command age is fixed.

    In the game you are describing it sounds like it might be a good idea to reduce the NetworkTickRate. The netcode package has two tick rates: SimulationTickRate, which is how often you simulate the game, and NetworkTickRate, which is how frequently the server sends snapshots to the client. The current interpolation delay on the client is based on the NetworkTickRate, so reducing it will implicitly increase the delay and also reduce bandwidth.
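    As a minimal sketch, assuming the ClientServerTickRate singleton this version of the package reads in the server world (the 60/10 values are just an example, not a recommendation):

    Code (CSharp):
    using Unity.Entities;
    using Unity.NetCode;

    // Runs in the server world and creates the ClientServerTickRate singleton.
    [UpdateInGroup(typeof(ServerSimulationSystemGroup))]
    public class TickRateConfigSystem : ComponentSystem
    {
        protected override void OnCreate()
        {
            var entity = EntityManager.CreateEntity(typeof(ClientServerTickRate));
            EntityManager.SetComponentData(entity, new ClientServerTickRate
            {
                SimulationTickRate = 60, // game simulation still runs at 60 Hz
                NetworkTickRate = 10     // snapshots sent 10x per second -> larger implicit delay, less bandwidth
            });
        }

        protected override void OnUpdate()
        {
        }
    }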

    If you do want to increase the interpolation delay, the NetworkTimeSystem has two constants, KInterpolationTimeNetTicks and KInterpolationTimeMS, which control how long a snapshot is stored before it is applied. If the MS version is set it is used; if it is zero, the tick-based version is used instead. This delay is applied on top of the ping and ping-variance compensation. The interpolation time is not currently exposed as something you can configure without changing package code, but the plan is to expose it at some point in the future.
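    For reference, a rough sketch of how the selection between the two constants could look (the constant names come from the package, but the values and the conversion below are only illustrative assumptions, not the actual NetworkTimeSystem code):

    Code (CSharp):
    public static class InterpolationDelaySketch
    {
        // Names from NetworkTimeSystem; the values here are only examples.
        const uint KInterpolationTimeNetTicks = 2; // extra delay measured in network ticks
        const uint KInterpolationTimeMS = 0;       // extra delay in ms; 0 means "use the tick-based constant"

        // Extra interpolation delay in network ticks, applied on top of the
        // ping / ping-variance compensation the time system already does.
        public static uint ExtraInterpolationTicks(uint networkTickRate)
        {
            return KInterpolationTimeMS != 0
                ? (KInterpolationTimeMS * networkTickRate + 999) / 1000 // round ms up to whole ticks
                : KInterpolationTimeNetTicks;
        }
    }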

    The snapshot buffer size also comes into play: if you increase the interpolation delay too much, the snapshots you are trying to apply will no longer be in the buffer. High ping should be fine as far as the snapshot history buffer is concerned, though. High ping can cause mispredictions once you have more than about 1 second of ping, but it doesn't sound like you are using prediction anyway.
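    As a back-of-the-envelope check (the 32-snapshot history size below is an assumption about the package defaults, not something guaranteed), you can sanity-check that the delay you pick still fits in the client's snapshot buffer:

    Code (CSharp):
    public static class SnapshotBufferCheck
    {
        // Returns true if the chosen interpolation delay still fits in the
        // client's snapshot history buffer at the given NetworkTickRate.
        public static bool DelayFitsInBuffer(float delaySeconds, int networkTickRate, int snapshotHistorySize = 32)
        {
            // Snapshots that arrive while the client is holding data back.
            int snapshotsDuringDelay = (int)System.Math.Ceiling(delaySeconds * networkTickRate);
            return snapshotsDuringDelay < snapshotHistorySize;
        }
    }

    For example, a 2 second delay at a NetworkTickRate of 10 only needs ~20 snapshots, while the same delay at 60 snapshots per second would need ~120 and overflow the buffer.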
     
  3. florianhanke

    florianhanke

    Joined:
    Jun 8, 2018
    Posts:
    426
    Ah, good to know, thanks!

    Yes, I am not using prediction – I may if it makes sense to do so, but for the moment it's out. I'll try your suggestions, many thanks!
     