
Sending large amounts of data through RPC

Discussion in 'Multiplayer' started by RonHiler, Mar 23, 2012.

  1. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    Hey guys, I've been searching around for some kind of answer on this, but I'm not seeing anything definitive.

    The background is that I'm writing a game patching system (this is so my testers can pick up any daily builds they haven't gotten yet). Mostly this is done, the last bit is to actually start transferring any files that are out of date.

    Key to this process is the ability to send files from the server to the testers (the clients) via RPCs. Of course, some of those files can get to be quite large (the biggest one I have so far is a bit over 8 MB, I presume as my game matures, that will get much larger).

    Now, I've discovered (and used) byte[] arrays to move data between the server and clients (although why the use of byte[] in RPC is undocumented is beyond my ability to understand!). And I've read that byte[] has no limit wrt RPC, but I suspect I'd be pushing that theory trying to send GB sized files across :) Does anyone have any information on the practical limitations of the size of byte[] arrays across RPC? Should I break apart large files into smaller chunks before sending them?

    Secondly, is there a way to get a % completion value on large transfers like this? I'm thinking about progress bars on the client screen (they need some sort of visual update that something is happening while these files are being transferred).
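    The chunked transfer being asked about can be sketched roughly as follows (Python for illustration only; `CHUNK_SIZE` and the helper name are hypothetical, not Unity API — the thread later settles on smaller chunks):

```python
import io

# 100 KB is illustrative; later posts in this thread use ~10 KB chunks
CHUNK_SIZE = 100 * 1024

def read_chunks(stream, chunk_size=CHUNK_SIZE):
    """Yield successive byte chunks from a file-like object, so each
    chunk can be sent as its own byte[] RPC payload."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# An 8 MB file (plus a ragged tail) splits into 82 chunks of <= 100 KB
data = io.BytesIO(b"\x00" * (8 * 1024 * 1024 + 123))
chunks = list(read_chunks(data))
```

    Sending fixed-size chunks also gives a natural hook for progress reporting, since the receiver knows how many chunks to expect.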
     
  2. fholm

    fholm

    Joined:
    Aug 20, 2011
    Posts:
    2,052
    Build the patcher as an external tool (every game does that) and then transfer the files over a normal TCP connection.
     
  3. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    No thanks. I spent $1,500 on an engine with networking capabilities precisely so I don't have to write an external networking engine. One of the decisions made during the design of this system was to make it as easy as possible for the testers to update their local copy (otherwise, it won't get done). Forcing them to use an external program is contrary to that philosophy.

    So, anyone have any insights to the original questions? Devs?
     
  4. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    After some thinking about it, I think I have a plan, if anyone is interested. This actually kills three birds with one stone.

    I couldn't find anything that would let me get any sort of status update on long RPC calls, so that led me to thinking about breaking up the files into chunks.

    The client will know what files it needs and how big they are as they exist on the server (this is part of the info it will get from the server beforehand). So I will make the client request files in 100 KB chunks (100 KB is just a number I pulled out of a hat; if that is still too big for an RPC, I can drop it as far as it needs to go). So that takes care of that issue.

    At the same time, I can use the completion routine in between chunk requests to update a progress bar. And even a fairly slow connection should be able to pick up 100 KB fairly fast, so there will be feedback for the user.

    The third benefit is that I was going to do a CRC check on the downloaded file, and if it didn't match the server-sent CRC, the file would be rejected and redownloaded. Annoying if a 10 GB file fails. But this way, I can actually do a CRC on each chunk, so if anything is corrupted during transmission, the whole file doesn't have to be redone, just that one chunk. Much better.
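    A per-chunk integrity check along these lines might look like the sketch below (`zlib.crc32` stands in for whatever checksum the server actually sends; the function names are invented):

```python
import zlib

def chunk_crc(chunk: bytes) -> int:
    """CRC32 of one chunk, masked to an unsigned 32-bit value."""
    return zlib.crc32(chunk) & 0xFFFFFFFF

def accept_chunk(chunk: bytes, expected_crc: int) -> bool:
    """Reject only the corrupted chunk, so a failed check re-requests
    one chunk rather than the whole file."""
    return chunk_crc(chunk) == expected_crc

good = b"patch data chunk"
corrupt = good[:-1] + b"X"   # single-byte error; CRC32 always catches these
crc = chunk_crc(good)
```

    CRC32 is guaranteed to detect any single-byte error within a chunk, which is exactly the transmission-corruption case being discussed.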

    So that's my plan. If anyone has any comments or suggestions, I'm all ears.
     
  5. fholm

    fholm

    Joined:
    Aug 20, 2011
    Posts:
    2,052
    No, it's not. How do you plan to overwrite local files that are loaded in the running application? Every major game uses an external patching tool because of the inherent problems in updating an application that is already running.

    This is not a networking engine problem; you just need to transfer a file, which can be done over HTTP or a myriad of other already-existing solutions. And you're going to have huge problems (and a very slow solution) transferring lots of data over a UDP connection.

    Also, the solution you proposed seems like a jury-rigged version of torrents. But whatever, man; just saying that the solution you proposed seems like a very hard and annoying way to do something which is really simple.
     
  6. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    I solved that issue years ago. This is not the first time I've done something like this, only the first time I've done it in Unity.

    How so? UDP is no slower than TCP in transferring files (that I know of). In fact, it's faster. At least, that's my understanding.

    I'd be interested in understanding what your objections are, specifically. Because I don't see the issue. Other than the fact that you cannot write to a file that is loaded (which I assure you I have dealt with), what's the drawback?

    Understand, I've had experience with this sort of thing before, and in my experience, you HAVE to make the updating process as painless and easy on your testers as you possibly can. It is hard enough to keep them motivated to submit bug reports on a regular basis (I find tester turnover is pretty high). So I'm trying to keep it utterly easy for them. Which means running ONE program, getting the update news on the front page, with a simple one-button push to update the files (well, mine's not quite one button push; I'm making them log into an account to update, but I think that's reasonable). And yeah, that means it's a little more work for me up front, but I'm okay with that if it means I get more frequent bug reports.
     
  7. fholm

    fholm

    Joined:
    Aug 20, 2011
    Posts:
    2,052
    From the questions you're asking, it sure doesn't seem so (how to get a progress meter on the transferred data, etc.). There are several ways to do this if you build your own application, sure - but Unity will load the assemblies of your compiled code into memory and IIRC will lock the files.


    UDP is only faster if you don't need reliable transfer, but if you need reliable ACKing of packets (and especially with the RPC overhead in Unity), it will for sure be more fragile than anything you could build on top of TCP.

    Also, with the confidence you speak with and your "years of experience", I'd assume that you knew that the max size of a UDP packet is 2^16 bytes. So yes, you will hit a limit where you need to split it up.

    The drawback is simple: you're building a fragile, jury-rigged transfer system for sending files on top of a networking library and engine that was not designed for this purpose, instead of using a simple TCP connection, which does almost everything for you and has been battle-tested for many years.

    And yes, the whole "writing to files that are loaded" issue is possible to work around if you have control over the complete stack, but the less control you have over your runtime environment, the harder it gets.

    Also, there's the simple fact that every modern game I have played in the last 5 years has had a separate patcher/launcher solution, just because it's easier to build and more reliable, and you don't need to use dirty tricks to work around the "writing-to-a-loaded-file" problem.
     
    Last edited: Mar 24, 2012
  8. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    Let me try that again (you should never post when you are irritated :) ).

    Look man, I didn't mean to get into a pissing contest with you. You would code this differently. I get that, and it's fine. I think my reasons for doing it this way are valid. I prefer less work for the testers, and that means more work for the coder (me). I'm okay with that. I've never been afraid of work. You would prefer to use existing and battle tested solutions. Fair enough.

    If you want to know, my solution to writing to files that are open involves invoking a small external program, shutting down the client, letting that program do a bit of file shuffling, and restarting. Not that hard (MMOGs do this routinely when they need to update their patcher programs). You probably think that's a hack, but I prefer to think of it as an elegant solution :)

    If it turns out, as you say, that Unity can't handle this sort of thing (because you are right, I am using it in a way it probably wasn't designed for), then okay, I'll code an external app (and just so you know, I'd take from code I already have, which is UDP based). But I'd like to try this first. If Unity's UDP networking system is robust, I don't see why it couldn't handle this, even if I have to drop the size of the chunks (which, based on what you say, I will have to).

    I do have to say, though, I don't appreciate you calling me a liar. I've been a C++ coder for over 20 years, most of that professionally (it's how I make my living). I've been coding in other languages since the early 80s (remember 6502 assembly language? I used to be fluent, though not so much any more, heh). I am not a network programmer, it's not my specialty (which if you want to know is actually interfacing scientific equipment with computers for various research labs, a lot of data capture type stuff). But I do know what I am doing with regards to design, I've been doing it for a long time.

    I HAVE programmed this type of system in the past. I did so the first time in C++ using IO/CP UDP (in my own 'engine', so to speak), and it worked just fine (that was a few years ago). I did it again with another engine a couple of years ago. And now I'm doing it in Unity. That being said, I'm not an expert in all things networking. It just isn't my area. What I don't know (or have forgotten) I will learn as needed.

    Perhaps you are right, maybe this won't work. If so, I'll go to your solution.
     
    Last edited: Mar 24, 2012
  9. Poindexter

    Poindexter

    Joined:
    Aug 16, 2011
    Posts:
    21
    Just for the record, the limit on byte[] over RPCs in Unity is 4096 bytes, maybe a little less. Make it 4000 and you're safe.
    This may not be the most performant number, though; you may achieve better throughput creating packets smaller than 1500 bytes, due to MTU issues and such. Also, I don't know how Unity will handle millions of packets in its output buffer, so try to pace it a little.
    As for the completion percentage problem, I suggest you send the filename and size in an RPC first, before sending the payload RPCs, and calculate the percentage locally.
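    That suggestion, announcing the name and size up front and counting received bytes locally, could be sketched like this (class and method names are hypothetical):

```python
class TransferProgress:
    """Track percent complete for one file, given the filename and total
    size announced in an initial RPC before the payload chunks arrive."""

    def __init__(self, filename: str, total_bytes: int):
        self.filename = filename
        self.total = total_bytes
        self.received = 0

    def on_chunk(self, chunk: bytes) -> float:
        """Record one payload chunk; return percent complete."""
        self.received += len(chunk)
        return 100.0 * self.received / self.total

# Hypothetical 10,000-byte file arriving in two chunks
progress = TransferProgress("sharedassets0.assets", 10_000)
first = progress.on_chunk(b"\x00" * 2_500)
second = progress.on_chunk(b"\x00" * 7_500)
```

    The client can feed each returned percentage straight into a progress bar without any extra round trips to the server.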
     
  10. ar0nax

    ar0nax

    Joined:
    May 26, 2011
    Posts:
    485
    I implemented zlib (a compression library) on my server. When packets are bigger than 100 bytes, the server compresses the packet, then sends it with a header giving the exact size before compression; on the client, I check the extracted packet against the header I got before I 'inflate' it.

    Works like a charm (packets are now ~3 KB compressed, ~15 KB inflated).
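    A rough Python equivalent of the scheme described above (size header plus a compressed flag, compressing only payloads over a threshold; the exact wire format here is invented for illustration, not ar0nax's actual protocol):

```python
import struct
import zlib

THRESHOLD = 100  # only compress payloads bigger than this, per the post

def pack(payload: bytes) -> bytes:
    """Prefix the packet with the pre-compression size and a compressed flag."""
    if len(payload) > THRESHOLD:
        return struct.pack(">IB", len(payload), 1) + zlib.compress(payload)
    return struct.pack(">IB", len(payload), 0) + payload

def unpack(packet: bytes) -> bytes:
    """Inflate if flagged, then verify the result against the size header."""
    size, compressed = struct.unpack(">IB", packet[:5])
    data = zlib.decompress(packet[5:]) if compressed else packet[5:]
    if len(data) != size:
        raise ValueError("inflated size does not match header")
    return data

payload = b"repetitive game data " * 700   # ~15 KB, compresses very well
wire = pack(payload)
```

    The size header doubles as a cheap integrity check: if the inflated length disagrees with it, the packet was corrupted or truncated.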
     
    Last edited: Mar 28, 2012
  11. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    Are you sure? I sent a 10,000 byte chunk and it came through just fine (did a CRC check on it on the other end and it was intact). Perhaps Unity is auto-splitting them internally, sending as separate packets, and then recombining them on the other end? I was toying with the idea of bumping up that number and see how high I could get it before Unity chokes, just to see what the limit is. But I'll probably stick with 10K bytes, unless there is a reason to do otherwise.

    Yeah :) Actually, I have it set up such that the client won't request the next chunk until the prior one arrives intact, so I shouldn't end up with a case where there are a billion packets sitting in the buffer. I'm not terribly worried about performance (it's a patching system; it's not supposed to be fast). Although obviously I don't want it to take hours :) So I'll keep an eye on performance, and if it gets too iffy, I can try dropping the size of the packets being sent and see what happens.

    Yeah, that's exactly what I did. The client initially grabs a file that contains a set of data for every file in the game (name, hash, size, and a couple other odds and ends). This is how it determines which files need to be updated.

    Ooh, good idea. I'll check into that
     
  12. Poindexter

    Poindexter

    Joined:
    Aug 16, 2011
    Posts:
    21
    Maybe I recalled the wrong number, or maybe this limit changed recently; anyway, this is great.
    ar0nax's suggestion of using zlib is also great, but in your case it's probably better to compress the whole file before sending, instead of compressing each packet. And please, keep us posted on your progress; this is very interesting.

    In a related question, ar0nax, which kind of payload did you compress with zlib? I tried compressing some 300 bytes of binary data and was only able to achieve a 15% reduction in size. Any thoughts?
     
  13. tigeba

    tigeba

    Joined:
    Sep 11, 2008
    Posts:
    70
    The ability of any compression scheme to reduce the size of the payload is highly content dependent. If you have a lot of text in your message payload, this tends to compress quite well. The closer your payload becomes to a bunch of random bytes ( high entropy ) the less success you have with compression. We offer this type of scheme in Electroserver and EUP, but we also attempt to custom fit encoding schemes on the protocol level so they fit best with the data type being compressed.
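    The content-dependence point above is easy to demonstrate with a toy measurement (this is a generic zlib illustration, not Electroserver's actual encoding scheme):

```python
import os
import zlib

# Highly repetitive "text-like" payload vs. incompressible random bytes
text = b"the quick brown fox jumps over the lazy dog " * 200
noise = os.urandom(len(text))

text_ratio = len(zlib.compress(text)) / len(text)     # far below 1.0
noise_ratio = len(zlib.compress(noise)) / len(noise)  # roughly 1.0 or above
```

    Repetitive data shrinks dramatically, while high-entropy data barely shrinks at all and can even grow slightly from the compression framing overhead.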
     
  14. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    Yeah, you are right, most likely it got changed in some recent update. Since it is an undocumented feature, who knows what happens to it.

    Yeah, good point, I'll do that. The only real work I've done with compression is when I wrote a png decoder not too long ago. I could probably leverage some of that code, come to think of it :)

    But yeah, I'll let you know what happens. Really, now that I'm sending data across the pipe, it should be pretty straightforward from here on out. If my clients would leave me alone for a day, I could probably get this finished up :)
     
  15. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    Okay, so I finally managed to find a bit of free time, and I've pretty much finished up this system (the last thing I have to do is fix up and invoke the restart utility, I haven't got that going yet).

    It works fine sending 10,000 bytes at a time. With this method, it takes two or three minutes to update the entire file set (which is about 20 MB in total at this point). I still have to test it on a system that is outside my network, but I don't THINK that should make any difference (it should all act the same, but I won't be able to fully test that until Monday, and you never know with network stuff, heh). As for performance, I may play around with the numbers a bit just to see if I can improve on that, but for the most part I'm happy with it.

    If anyone ever does a search and comes across this thread and is interested in setting up a similar system, you should know that there are a few files that you cannot update without special effort (this is because you cannot write to a file that is open by Unity, which is what fholm was getting at upstream in the thread). These files consist of:

    The executable file (obviously)
    *_Data/sharedassets0.assets
    *_Data/Mono/mono.dll
    *_Data/Resources/unity default resources

    That's it, just those four files (this assumes your patching utility runs from the first scene and that you are on Windows, otherwise YMMV). Everything else will update without any issue while the client is running. Now, in my case, my patching system will detect that these files cannot be written to, and it will simply append a .new extension to them and take note that a restart of the client is required. (In most cases, those four files won't have to be updated, unless Unity updates their engine and I incorporate that update into my game build, or I make any changes to Scene 0 [which I won't, since it is JUST the patching screen; it won't get updated much once I finalize it], so in reality, the restart utility will not have to be called most of the time.)

    If, after patching, a restart is required, I invoke my small utility program and shut down. The utility program simply removes the .new extensions from any files that have them (and deletes the original versions), then reinvokes the client again. That's the part I haven't finished yet, but it ought to be trivial at this point (I already have the program, just have to make some minor changes to it and invoke it).

    So yeah, I'm happy. I haven't messed with ar0nax's/Poindexter's thought of file compression yet. I may wait to do that until my next milestone or two. It's a good (great) idea, but I have a few more fish to fry at the moment :) So it'll have to wait for now.
     
  16. Poindexter

    Poindexter

    Joined:
    Aug 16, 2011
    Posts:
    21
    RonHiler,

    You did this test on a LAN? Because the protocol you described (the client requesting each packet) is latency-dependent, and performance will be much worse over the internet. You can try to keep more packets "in flight" (maybe the client could request a handful of packets at a time...). A few years ago I wrote a protocol to transfer files over the internet and had this same problem.
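    The latency effect described above is easy to see with a toy model (the chunk count, round-trip time, and window sizes here are invented for illustration):

```python
import math

def transfer_time(chunks: int, rtt_seconds: float, window: int) -> float:
    """Toy latency-bound model: each round trip delivers up to `window`
    chunks, so total time is ceil(chunks / window) round trips.
    Ignores bandwidth, which dominates once the window is large enough."""
    rounds = math.ceil(chunks / window)
    return rounds * rtt_seconds

# 20 MB in 10 KB chunks = 2000 chunks, over a 50 ms round trip:
serial = transfer_time(2000, 0.05, window=1)     # one request at a time
windowed = transfer_time(2000, 0.05, window=8)   # eight requests in flight
```

    With one chunk per request the transfer is pure round-trip time; keeping even a small handful of requests in flight divides that latency cost by the window size.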
    Gee... sorry to keep posting ideas and bugging you about your work :)

    I know. And that's why I asked about his payload. Also, since he was compressing blocks as small as 100 bytes (which I think are too small to compress), maybe he used other tweaks too.
     
  17. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    Yeah, it was through LAN. You are probably right, it won't perform as well over WAN (although the way I have it set up, I THINK it did go out over the internet anyway, even on the local system. That's because I have the server on a dynamic IP, and so I have to have the client go through an external lookup service to grab the current IP address to connect to. I don't really have a great grasp of how all that works, TBH, but it all works. I've done the same thing for my SVN server for years, heh). I will give it a test over the internet tomorrow and see what happens. I'm hopeful, but I suspect you are right in that it won't perform as well. Your idea of sending multiple chunks at a time is a good one, and it's not a difficult change. If needed, I can implement that, thanks!
    Oh, no worries, I appreciate the comments and ideas. Anything that makes it better and more useful is a good thing :)
     
  18. George Foot

    George Foot

    Joined:
    Feb 22, 2012
    Posts:
    399
    Why don't you just use WWW to download the data? TCP is designed for reliable transport of large streams of data, web servers are an incredibly convenient way to host the data, and WWW is an easy way to access it.
     
  19. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    Yeah, I could have done that, I suppose. But it would have been a bit more work for me whenever I put up a new build (I would have to upload it to my website). I preferred to keep the master build local for a couple of reasons. One, it's less work for me at build time (I just hit a button on my server and the client build happens). Two, I can keep local build notes very easily (that is also done in my server program, with the notes stored in an SQL database). Three, by keeping it local, I can manage who gets to download the updates (I make my testers log in to accounts to update their builds). I don't necessarily want just anyone downloading alpha builds, ya know? :)

    And I guess I could have set up a local web host (rather than using my external web provider) to do all this, but again, this is not my area of expertise! I would have to research how to do all that. I already know how to do it this way (having done it before). Besides, I'm not a huge fan of TCP. It works and all, but given the choice, I prefer UDP, even though it means you have to add an ordering/reliability layer on top of it (which Unity already did for me, so no worries there!)

    I guess what it comes down to is the design. Once I design and finalize a system's features on paper, I don't make compromises on it when implementing that design. I'm very picky like that (it's why I'm very successful as a contract programmer, even though I'm more expensive than my competition!). If I had used the WWW class, it would have been similar to what I designed, but not exact; I would have had to compromise some of the features. Therefore, not good enough for me; that's just how I am :)
     
  20. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    Just a final update on the results of this.

    I finally had a chance to test this over a WAN connection. It was a bit slower, but not so bad (about 30% longer than when I was on a LAN machine). But then again, I was only about 30 miles away from the server machine, so it wasn't so far away. But that's about the best I can do (I'm not going to drive 300 miles just to test this, that's what I have testers for, hehe). On the other hand, I was also patching to a thumb drive, so some of that 30% extra would have been write times (I should have written the test to the hard drive, but in all honesty, I was rushed and didn't think about it).

    I finished up the restart utility, and everything works (which means I can patch every file that makes up the game, even ones that are locked by Unity while it is running). That also finishes up my first milestone for the game I will be adding on to my patching system :)

    So yeah, it all works. It seems reasonably fast (I don't know if it is faster or slower than TCP but even a full game download doesn't take all that long). I can play around with sending multiple chunks from the server instead of just one per client request if it gets to a point where things start to slow down too much. But for now, I'm happy.
     
  21. virror

    virror

    Joined:
    Feb 3, 2012
    Posts:
    2,963
    So, what internet connection did you have? I think this is good to know to be able to tell if performance is good or not : )
     
  22. kayoone

    kayoone

    Joined:
    May 16, 2009
    Posts:
    67
    This sounds like a ridiculous way of solving this problem, and your arguments (like "so I don't need to upload the builds onto my webserver") make me lol. Sorry, but to me it seems you have absolutely no clue what you are talking about or the implications of your "solution". Some people proposed the correct solution to you here, which is using a stable standard webserver like Apache and HTTP to download your updates. You don't want that, so be it; fail and learn from it ;)

    Sorry, I don't want to sound harsh, but this level of ignorance is mind-boggling. Calling your solution simpler is the most ridiculous part of it.
     
  23. RonHiler

    RonHiler

    Joined:
    Nov 3, 2011
    Posts:
    207
    Well, thanks for your opinion. Allow me to rebut.

    1) It didn't fail. I finished this system some time ago. It works perfectly and has been used by my testers for several weeks now with no reported problems (well, they've reported issues, but not with this system). It's just as fast as, if not faster than, using TCP. At first it was noticeably slower, but I took a second optimization pass at it (which mostly just involved increasing the packet size), and it sped up significantly. It works just fine, thanks.
    2) I have the advantage of having full control over who connects and is able to download the update. Something I would not have with a webplayer. I probably could get that with Apache/etc. over TCP with some scripting, but I don't see any advantage over what I've done; it would just reproduce what I already have.
    3) I don't remember saying it would be simpler to program (I didn't read through this thread again, but I don't see why I would have said that; I knew perfectly well this would be much harder than just uploading a web build). My goal was not to make it easy to program, it was to make it easy for the developer to create patches and for the testers to obtain them. I have achieved that goal. I don't have to have an external program to patch; it's all built right into the game. Just push a button, log in, and the patcher starts. Which is what I wanted.

    So I'm not quite sure which part you are objecting to. Because it was hard to program? So what? I've programmed stuff WAY harder than this before, and this was well within my abilities to do. What else? It's not a rhetorical question, I'd truly like to know what you think is ridiculous about it. Because I don't see it.

    I object to your proposal that there is a "correct" solution to any given programming problem. I've been programming a long time, and in my experience, there are always multiple ways to go about doing something. Just because I did it differently than you would have done it, doesn't make mine "WRONG". Mine works, and fulfills all the design goals that the system was asked to fulfill (which is more than I can say for your proposed solution). That's really all the proof I need, isn't it?
     
    Last edited: May 12, 2012