Feedback multiplayer data flow & multiple player management

Discussion in 'General Discussion' started by StarBornMoonBeam, May 8, 2023.

  1. StarBornMoonBeam

    StarBornMoonBeam

    Joined:
    Mar 26, 2023
    Posts:
    209
    Hi guys,

    Is this normal practice?

    We have thousands of ports available, but a single port can also be overloaded with too much data flow. Should unique ports be allocated to different players, to help with managing multiple players?
    Would such a thing enable something like mass multiplayer, in terms of my server's ability to manage incoming data flow?
    I don't have a design for a huge multiplayer game, and that isn't necessarily the goal, but I'm sitting at a crossroads, trying to get my head around building something more social for the user than the things I have made before. The last time I made a multiplayer chat room, I discovered that a single port can only receive and process so much data at one time; likewise, a port may only send so much data outward reliably, at least within a single frame. So I wonder whether using multiple ports would allow a larger flux of simultaneous data intake and outflow on the host CPU, for processing and redelivery to clients by the server.

    I also wonder about an experience where players could host a server and have clients join, and where the host could quit the game and the server would migrate to another player so the game doesn't end. Could using multiple delivery ports, and maybe some dedicated listener flow, help keep the other players in contact with each other in the event the game host crashed? Then some kind of decision could be made among the clients about which of them inherits the host role.

    I have other thoughts such as

    Is it possible to set up a multiplayer game "communistically", where everybody is both server and client? So in a quad of four players, they would all connect to each other instead of being relayed through a prime or dedicated deliverer. What are the negative implications of such a thing, and how serious do you think those implications might be?

    Sorry for the complexity.
     
  2. Bunny83

    Bunny83

    Joined:
    Oct 18, 2010
    Posts:
    3,854
    TL;DR
    Splitting traffic over multiple connections does not allow for a larger throughput. The throughput is limited by the hardware and the connection speed; all virtual connections share the same bandwidth. It's also not practical to criss-cross connect all peers in most cases, due to NAT traversal and because you would not have a single source of truth. I would highly recommend these two videos by Jason Weimann: 12 Tips (MMO-specific stuff starts at 17:50) and his live stream about High Performance Game Networking.

    ----

    It seems you have a bit of a misconception about what a "port" is. A port is literally just a 16-bit number in a data packet. Ports do not have any bandwidth limit applied to them; they are a completely virtual concept. Do you know the OSI reference model? The concept of ports only shows up in some protocols in the transport layer. The protocols usually relevant for us nowadays are either TCP or UDP, as those are the only two protocols that are NAT-able by common routers.
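    To make that concrete, here is a minimal sketch in Python (the port numbers and payload length are made up for illustration) that builds and parses an 8-byte UDP header, showing that the "ports" are nothing more than two 16-bit fields at the start of the packet:

```python
import struct

# A UDP header is 8 bytes: source port, destination port, length,
# checksum -- each an unsigned 16-bit big-endian integer.
def build_udp_header(src_port, dst_port, payload_len):
    length = 8 + payload_len  # header + payload, in bytes
    checksum = 0              # 0 means "no checksum" (legal over IPv4)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

def parse_ports(header):
    # The two port numbers are simply the first four bytes of the header.
    src, dst = struct.unpack("!HH", header[:4])
    return src, dst

header = build_udp_header(54321, 7777, 32)
print(parse_ports(header))  # (54321, 7777)
```

    Nothing about the port number itself reserves or limits any bandwidth; it's just routing metadata inside each packet.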

    This is already a huge problem. Since most users are behind a many-to-one NAT, the source port actually gets "abused" to identify a certain host behind the NAT. Likewise, you can not connect from the internet to a host behind a NAT router unless you either configure a port-forwarding rule in the router to expose a certain port of a certain machine to the internet, or the router has UPnP enabled so that hosts inside the local network can essentially configure such port-forwarding rules themselves. Though because UPnP is a security risk, it's usually disabled on most routers. There are other approaches, like NAT-punchthrough techniques, but they are unreliable, as they rely on a weakness in the router, and more and more routers can't be exploited this way anymore. Self-hosting is a difficult topic, especially when you want "everybody" to be able to host a game. That's why most bigger games have dedicated game servers. Unity does provide a relay server infrastructure, but this is a paid service. I'm not even up-to-date on whether they still offer it :)

    One of the most important things in a network game is the "single source of truth". That's why you usually have a single server that has the final word on any decision. Even though a lot of games put the player's position authority in the client's hands, the server usually applies at least some sanity checking, and the majority of the game logic runs on the server. Building a meshed network with no single authority just comes with countless issues regarding synchronisation. What if 3 users interact with an object in the world at more or less the same time? Such a system would be painfully vulnerable to all sorts of hacks and cheats.
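    As a rough, hypothetical sketch of such server-side sanity checking (the `MAX_SPEED` constant, the positions, and the function name are all invented for illustration; this is not Unity's API): the server accepts a client-reported position only if the implied movement is physically plausible.

```python
MAX_SPEED = 10.0  # units per second (made-up game constant)

def validate_move(old_pos, new_pos, dt):
    """Accept the client's claimed position only if the distance
    covered in dt seconds is within the allowed maximum speed."""
    dx = new_pos[0] - old_pos[0]
    dy = new_pos[1] - old_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return dist <= MAX_SPEED * dt  # reject teleport-like jumps

print(validate_move((0, 0), (3, 4), 1.0))    # True  (5 units in 1 s)
print(validate_move((0, 0), (30, 40), 1.0))  # False (50 units in 1 s)
```

    In a mesh with no authority, there is no single place where a check like this can run and be trusted by everyone.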

    Host migration is a highly complex topic, as the server (usually) holds hidden information that it does not want to share with everyone. Yet when a host migration takes place, the server has to actually transfer the whole game state to another peer. And because most people sit behind NAT routers, it's not guaranteed that ANY of the remaining players could even act as a host. There are different ways to support host migration. The most robust way is to have the old server handle the handover. Of course this doesn't help when the server drops suddenly (lost connection, sudden shutdown, ...). There are solutions for that case too, but they require that the whole game state is shared with all clients all the time. Likewise, all client IPs would need to be shared with every other peer, which is also a security risk. An unaided host migration is very unreliable and complex, so most don't even try.

    To come back to your original idea of having multiple separate listening sockets for each client: it just doesn't make much sense. When we talk about TCP, the listening socket isn't used for data transfer at all; it's only used to establish the connection. Each client, once connected, has a separate socket anyway. Each TCP socket in general has a source and a destination port. The source port is usually chosen randomly by the client. That's the port that gets manipulated by a NAT router: the router holds a temporary mapping table to tell which source port belongs to which local machine, so when a packet arrives for that port, the router will "route" it to that specific machine. The Berkeley sockets API already abstracts all of those concepts away. A TCP socket is a bidirectional, endless stream of data; how you chop that data up into meaningful chunks is up to you / the application-layer protocol (your own protocol). UDP, by contrast, is a connection-less, packet-based protocol. However, UDP is unreliable, and each packet has a limited size, which makes sending larger amounts of data more difficult. For efficient network transfer you usually compress the data anyway, which is why it's usually more efficient to pack all the necessary data together and apply delta compression to get the most out of the available space.
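    A small, self-contained Python sketch (using the standard `socket` module on loopback; nothing Unity-specific) illustrates the point about TCP: `accept()` hands back a separate data socket per client, while the listening socket keeps its single port, and the client's source port is the randomly assigned one:

```python
import socket

# The listening socket only accepts connections; accept() returns a
# brand-new socket for each client, all sharing the same server port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
server.listen()
server_port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))  # OS picks a random source port
conn, addr = server.accept()                # per-client data socket

same_object = conn is server                            # distinct sockets?
same_local_port = conn.getsockname()[1] == server_port  # same local port?
src_port_matches = addr[1] == client.getsockname()[1]   # client's source port?
print(same_object, same_local_port, src_port_matches)   # False True True

conn.close(); client.close(); server.close()
```

    So extra listening ports per client buy you nothing: the per-client separation already exists at the socket level, and all of those sockets still share the same physical link and bandwidth.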

    I would highly recommend reading up on some network basics. Maybe have a look at Wireshark, a network packet analyzer that can capture all the network traffic on your machine so you can inspect it. Though keep in mind that HTTPS connections are encrypted, so you can't really do much with them besides seeing the protocol headers.
     