Multiple [BeginSend -> Write -> EndSend] in a single frame

Discussion in 'Unity Transport' started by DigitalSalmon, Dec 1, 2021.

  1. DigitalSalmon

    Joined:
    Jul 29, 2013
    Posts:
    100
    Code (CSharp):
    driver.BeginSend(connection, out DataStreamWriter writer);
    // writer.WriteInt, etc.
    driver.EndSend(writer);

    while (condition) { // In my real use case, this may loop 10 times, with different data written each iteration.
        driver.BeginSend(connection, out DataStreamWriter writer2);
        // writer2.WriteInt, etc.
        driver.EndSend(writer2);
    }
    In the simplified example above, the following behaviour occurs:

    When connected via 127.0.0.1 (In both builds and editor) everything behaves as expected.

    When connected via my public IP, with correctly forwarded ports, the following happens:

    • If everything inside the while loop is commented, the first ReadInt on the client will read the correct Int.

    • If the while loop is present, the first ReadInt on the client will read incorrect (usually large) int values, which vary depending on the data being sent in the while loop.

    • If this method is run async, with Task.Delay(10) before the loop and between iterations, things return to normal.

    Is there some shared memory between multiple "synchronous" begin/end calls? I would not expect the later calls to affect the first.

    Or perhaps I am not correctly using a reliable pipeline, and the later calls are consistently arriving before the former?
     
  2. DigitalSalmon

    Joined:
    Jul 29, 2013
    Posts:
    100
    Whilst it's odd that the behaviour above only shows itself with non-local connections, I have made the following progress:

    Code (CSharp):
    NetworkSettings networkSettings = new NetworkSettings();
    var reliabilityParam = new ReliableUtility.Parameters() { WindowSize = 16 };
    networkSettings.AddRawParameterStruct(ref reliabilityParam);
    driver = NetworkDriver.Create(networkSettings);
    Pipeline = driver.CreatePipeline(typeof(ReliableSequencedPipelineStage));
    Storing the pipeline struct and passing it to the various BeginSend calls appears to be the appropriate way to use the reliable pipeline.
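
    For reference, a minimal sketch of what passing the pipeline looks like (assuming Pipeline is the field created above and connection is an established NetworkConnection):

    Code (CSharp):
    // Sketch only: pass the stored pipeline as the first argument to BeginSend
    // so the data goes through the reliable stage.
    if (driver.BeginSend(Pipeline, connection, out DataStreamWriter writer) == 0)
    {
        writer.WriteInt(42); // example payload
        driver.EndSend(writer);
    }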

    Code (CSharp):
    int sent = driver.EndSend(writer);
    Debug.Log($"{(sent == expectedLength ? sent.ToString() : ((StatusCode) sent).ToString())} sent");
    Don't forget to cast the result of the EndSend method to StatusCode if it doesn't match your expected size.

    In my case this told me the reliable queue was full. I've adjusted my packet sizes to accommodate (fewer packets, larger size).

    This is all part of a wider system that allows me to serialize and send large structs across a network. It appears to be working well now.

    This is not the place to ask, but on the off chance - if anyone has a recommended maximum packet size, I'd be interested. I understand the maximum UDP datagram size is 65535 bytes.

    • 22 packets <= 512 bytes - Blocks the pipeline
    • 3 packets <= 1024 bytes - Working well
    • 2 packets <= 2048 bytes - "There are pending send packets after the baselib process send" warning in console
     
  3. simon-lemay-unity

    Unity Technologies

    Joined:
    Jul 19, 2021
    Posts:
    441
    Unity Transport (UTP) by default will limit packets to fit within 1400 bytes (and that's including internal headers and such). You can set up a fragmented pipeline to send larger payload sizes (they will be chopped up in smaller pieces and reassembled on the other end):

    Code (CSharp):
    var settings = new NetworkSettings();
    settings.WithFragmentationStageParameters(payloadCapacity: 4096); // Or whatever value you want...
    var driver = NetworkDriver.Create(settings);
    pipeline = driver.CreatePipeline(typeof(FragmentedPipelineStage));
    A pipeline can also be fragmented and reliable (just add typeof(ReliableSequencedPipelineStage) as an extra parameter to CreatePipeline). But note that reliable pipelines have a maximum number of packets that can be in-flight at any moment (this is the window size setting). The current maximum is 32. So combining fragmentation and reliability can quickly reach that limit (since each fragment counts as an in-flight packet).
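
    For example, a minimal sketch combining both stages (reusing the fragmentation settings above and the ReliableUtility.Parameters approach from the earlier post; the payload capacity and window size values are just placeholders):

    Code (CSharp):
    var settings = new NetworkSettings();
    settings.WithFragmentationStageParameters(payloadCapacity: 4096);
    var reliabilityParam = new ReliableUtility.Parameters { WindowSize = 32 };
    settings.AddRawParameterStruct(ref reliabilityParam);
    var driver = NetworkDriver.Create(settings);
    var pipeline = driver.CreatePipeline(
        typeof(FragmentedPipelineStage),
        typeof(ReliableSequencedPipelineStage));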

    Now regarding the original issue you reported, that is indeed very odd. We've had issues recently where we had different behavior when sending to a remote host compared to a local one (a lot of these "pending sends" warnings are caused by that). What OS are you testing on?

    Thanks for the report, by the way!
     
  4. DigitalSalmon

    Joined:
    Jul 29, 2013
    Posts:
    100
    Thank you for your comments, very useful!

    I am working with Windows 10 Pro (10.0.19042 Build 19042).

    It sounds like the fragmented pipeline is essentially what I've rolled my own solution for (since I wasn't aware it existed). Could I just clarify my understanding:

    • The fragmented pipeline means that attempts to send more than 1400 bytes will be decomposed/recomposed by the transport package. For example, sending 2000 bytes would automatically be split into 2 packets and recomposed, without affecting the connection.PopEvent/DataStreamReader side of things (it would read the full 2000 as a single event).

    • Given a window size of 32 and a maximum packet size of 1400, the maximum single BeginSend/EndSend that could occur (or set of Begin/End calls in quick succession before the queue clears) is 44800 bytes, less the internal headers.

    • In cases where > 44800 bytes are being sent, it would be sensible to roll your own solution that only enqueues the next Begin/End when the pipeline queue is clear.

    • If I stick with my own fragmentation solution, using 1024 bytes of data as my limit is a safe size (in the sense that the internal headers I don't handle should not exceed 1400 - 1024 = 376 bytes).
     
  5. simon-lemay-unity

    Unity Technologies

    Joined:
    Jul 19, 2021
    Posts:
    441
    Yes, exactly. From a user's point of view it's exactly as if the entire 2000 bytes were sent in a single packet.

    Yes, that's roughly the upper limit. But note that if you reach that limit of 32 packets, it won't clear until some of them have been acknowledged by the remote end. So at that point the time it takes for the queue to clear will be driven by the ping to the other peer.

    Another thing to note is that in version 1.0.0-pre.9 (and earlier) of UTP, sending a payload that gets fragmented into more than 32 pieces results in the payload being silently lost. That's a bug, the correct behavior would be to return an error in this case. I'm working on a fix for that.

    This is what I'd recommend, yes. If you want to figure out when it's safe to send (when the pipeline queue is clear), unfortunately the only way to do so currently is trying to send. EndSend will return -5 (NetworkSendQueueFull) when the queue is full.
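
    As a rough illustration of that approach (the class and method names here are just illustrative, not part of the transport API), you could wrap the send and treat a full queue as "retry on a later frame":

    Code (CSharp):
    using Unity.Networking.Transport;
    using Unity.Networking.Transport.Error;

    public static class SendHelpers
    {
        // Returns false if the send should be retried later,
        // e.g. because the reliable window is full (NetworkSendQueueFull, -5).
        public static bool TrySendInt(NetworkDriver driver, NetworkPipeline pipeline, NetworkConnection connection, int value)
        {
            if (driver.BeginSend(pipeline, connection, out DataStreamWriter writer) != 0)
                return false;

            writer.WriteInt(value);
            int result = driver.EndSend(writer);

            if (result == (int)StatusCode.NetworkSendQueueFull)
                return false; // Queue full; keep the data and try again after the next driver update.

            return result >= 0;
        }
    }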

    Right, no way our internal headers amount to 300 bytes of overhead. If you want to find out exactly how much is safe to send, you can call BeginSend without specifying the required payload size (the optional last argument). The Length of the returned DataStreamWriter will be exactly 1400 minus our internal overhead (we can't just provide it as a constant because it depends on the pipeline and protocol in use).
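
    A quick sketch of that probe (aborting the send afterwards so nothing actually goes out; the writer's Capacity is read here as the maximum number of bytes it can hold):

    Code (CSharp):
    // Sketch only: start a send without a required payload size, inspect the writer,
    // then abort so nothing is actually transmitted.
    if (driver.BeginSend(pipeline, connection, out DataStreamWriter probe) == 0)
    {
        Debug.Log($"Max payload for this pipeline: {probe.Capacity} bytes");
        driver.AbortSend(probe);
    }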