
Jobs *Any* way to access static data safely?

Discussion in 'Data Oriented Technology Stack' started by joedurb, Jul 18, 2018.

  1. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    36
    Wanting to use the job system as a background worker thread for terrain "chunk" generation.

    The data I need access to is all constant, never-changing lookup tables, etc. But I get them at runtime.

    I've been looking into ways to make runtime data const and failed...

    I've been trying to convert the data into NativeArrays, etc., and keep running into MANY variable-length struct arrays,
    so it's been a nightmare to convert, with a lot of wasted copying.

    If there were just some safe way of accessing shared, never-changing data that doesn't require it to be blittable, that would make this job system way more accessible.


    My use case is big custom terrain chunk generation, and I have lots of server-supplied rules and definitions delivered at runtime. These never change, and converting all of it to blittable types is a large process that really makes the code unreadable and ugly.
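    For concreteness, here is a minimal sketch of the pattern under discussion: a hypothetical lookup table (the TerrainRule struct and its field names are made up for illustration) flattened into a NativeArray and exposed to a job as [ReadOnly]:

```csharp
using Unity.Collections;
using Unity.Jobs;

// Hypothetical flattened lookup table; the names are illustrative only.
struct TerrainRule
{
    public int blockId;
    public float height;
}

struct ChunkGenJob : IJob
{
    // [ReadOnly] tells the safety system this job only reads the table,
    // so other jobs may read it concurrently without conflicts.
    [ReadOnly] public NativeArray<TerrainRule> rules;
    public NativeArray<float> heights; // output, one entry per column

    public void Execute()
    {
        for (int i = 0; i < heights.Length; i++)
            heights[i] = rules[i % rules.Length].height;
    }
}
```

    The table would typically be allocated once with Allocator.Persistent when the server data arrives and disposed on shutdown.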
     
    Jes28 and Igualop like this.
  2. LazyGameDevZA

    LazyGameDevZA

    Joined:
    Nov 10, 2016
    Posts:
    129
    I think someone from Unity should comment on this, but unfortunately this isn't as simple as it sounds. In order to guarantee determinism, I believe the decision was made to not allow access to static data from within a job. Using the containers provided by Unity allows them to build the job system in such a way that it can track which other jobs are accessing and changing that specific part of memory. This is the big driving force behind ensuring that we as developers don't run into race conditions.

    This does, however, come at the cost of losing some accessibility. The purpose of the job system isn't so much to provide background processing alone, but rather to provide a clean and performant solution that makes it easier to write code that won't introduce race conditions.

    It does sound like you're finding it somewhat difficult to think of this process from a data-driven approach, which might make the conversion seem very daunting. One alternative approach to this problem is to use the built-in .NET threading facilities to do the work and, once done, yield back to the main thread to get everything up to date. That would likely be a better solution to your problem if you're not using ECS to drive game logic.

    If you do want to use ECS to drive game logic, then I'd be inclined to ask where in the process you are trying to convert this static data to blittable types. It might be easier to convert as the data arrives from the server rather than at a later stage. In all fairness that still might lead to unreadable, ugly code, but I feel it might be worth it, as it should constrain the ugly code to a single place.

    In all honesty, I think your example is a little vague, so I can't fully comment on why you feel the code becomes unreadable and ugly.
     
  3. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    36
    Caveat: I've never done multithreading.

    So .NET threading may have been a better route. Jobs made it sound easy to do multithreading, so I jumped at it, likely abusing it. An ugly but wonderful hack would be a set class just for "global" jobs access ;)
    class UnityJobsData -- anything in there is read-only from jobs, or some such. Then I could have reaped the
    benefit of getting extra cycles of work via jobs without tons of data conversion.

    When I quick-tested jobs, it took me 30 minutes to implement with access to my existing data, and it 99% ran correctly. But converting everything into NativeArrays and passing them to jobs took ~10 hours of coding, and now my code base is way more confusing :)

    I did manage to convert the data, but I went from a nicely sorted and logical multi-level struct
    to several single-dimension flat structs, with bloat, speed, and logic losses, but it's working.

    Building a mesh from a complex data set is a great job for "Jobs" but only if there is easy access to needed data :)

    But it's all working really well, other than the uglier code and the annoying 4-frame warnings (which supposedly will be fixed).
     
  4. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    650
    You can build a Native*<T> structure for read-only access, or pass single static variables as fields in a job. It's the cleanest workaround I've seen and used so far.

    I think once more helper functions and common use cases are demonstrated, it'll be easier to build things the right way.

    It's also a matter of reorganizing how you think about the relationship between code and data, which is a lot harder to change and slows down a lot of people when they start using Jobs/ECS.
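    A minimal sketch of the second workaround mentioned above: a snapshot of a static value copied into the job struct as a plain field on the main thread (the WorldSettings class and seaLevel field are hypothetical):

```csharp
using Unity.Collections;
using Unity.Jobs;

static class WorldSettings
{
    // Static data can't be read from inside a job directly, but a snapshot
    // can be copied into the job struct when the job is created.
    public static float seaLevel = 12.5f;
}

struct HeightJob : IJob
{
    public float seaLevel;              // copied-in static value
    public NativeArray<float> heights;  // data being adjusted

    public void Execute()
    {
        for (int i = 0; i < heights.Length; i++)
            heights[i] -= seaLevel;
    }
}

// Scheduling side (main thread):
// var job = new HeightJob { seaLevel = WorldSettings.seaLevel, heights = heights };
// job.Schedule().Complete();
```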
     
  5. Bytestorm83

    Bytestorm83

    Joined:
    Mar 17, 2018
    Posts:
    5
    How exactly IS the data being generated and stored? Anything you can share?

    I myself have been playing with Minecraft-style terrain generation and have got it almost complete with the Job System.
     
  6. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    650
    So it depends. I'm using jobified ECS.

    * If it's data I can tie to one system and it's stable, I create it in OnCreateManager() and dispose of it in OnDestroyManager(); you can just create the Native containers as Persistent. You can then inject that system into other systems and access the data in their OnUpdate() calls (in both ComponentSystem and JobComponentSystem).
    * As above, but you feed data into a particular system from the outside. Useful if you're going the hybrid route and just need to redirect.

    The data can be anything, really. If you need to use a normal C# class object (including Unity's existing class-based types like MonoBehaviour / GameObject), you can do so on the main thread in both system types.
    If you need to feed a lookup table or other read-only struct to a job, you can mark a field for that data structure with
    [ReadOnly]
    and assign it to the job.

    Combine this with the system injection described above, and you can have systems that don't really do much except monitor state and build/update lookup structures, and that can be injected into other systems that need to use that data. My current working example is an InputDeviceSystem:
    • Manages input device events from the new Input system and converts them into entities.
      • Does this by currently tracking the event kind, getting the unique device ID, and the device's class type code. I'm using the device's Type.GetHashCode(), as the .NET TypeCode is a specific enum for built-in types for what I assume is managed interop. The basic concept is the same.
    • Also creates a type code Hierarchy structure with a Native container for easy lookup if we need to determine if a specific device is a child of a certain device (say an XInputController is a Gamepad).
      • Since the InputDeviceTypeCode is really just an integer, we can perform this lookup and traverse the hierarchy in a job and not have to deal with the actual managed references unless absolutely necessary on the main thread.
    • Tracks devices that have been added and removed in a managed lookup structure to be accessed on the main thread.
    This makes it easy to do everything in a job, and has honestly simplified a lot of my boilerplate logic. When I started writing the InputSystem -> ECS bridge, I had a lot more generic types going on. By focusing on the data of an input device rather than the InputDevice object, I've been able to jobify much more code, reduce the amount of redundancy, and potentially eliminate some of it entirely.

    It's best to think of all jobs as running in a black box outside of the normal managed space. This is true in a sense, as they're run by a non-managed scheduler, and the job struct itself is treated more as a function object with an attached argument buffer than a normal C# struct, especially after it's been copied into the job scheduler and runner environment. The data sources should lie purely outside the jobs themselves, as any external access would have to be volatile and therefore non-deterministic due to the nature of multithreading.
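    As a rough illustration of the system-owned-container idea described above (method names match the 2018-era ECS preview, e.g. OnCreateManager/OnDestroyManager, whose exact signatures varied between preview versions; the hierarchy map contents are illustrative):

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

// A system that owns a persistent lookup structure other systems can use.
public class DeviceHierarchySystem : JobComponentSystem
{
    // Child type code -> parent type code, traversable from inside jobs.
    public NativeHashMap<int, int> parentByTypeCode;

    protected override void OnCreateManager()
    {
        parentByTypeCode = new NativeHashMap<int, int>(64, Allocator.Persistent);
    }

    protected override void OnDestroyManager()
    {
        parentByTypeCode.Dispose();
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        // Other systems injected with this one can read parentByTypeCode
        // (e.g. as a [ReadOnly] job field) during their own OnUpdate.
        return inputDeps;
    }
}
```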
     
  7. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    36
    Yeah, I had trouble getting anything other than NativeArray to work, so I had to flatten my existing dataset of structs within structs within structs :)

    i.e., example pseudocode struct architecture:

    struct worlds[] {
        size;
        location;
        struct continents[] {
            name;
            location;
            struct countries[] {
                location;
                population;
            }
        }
    }


    etc. I would have to convert that to:

    NativeArray<countries> {
        worldID;
        continentID;
        location;
        population;
    }

    + continents native array

    + worlds native array


    And now rather than accessing the correct data by ID, I have to search the data, or rebuild the efficient data struct back within the job.

    It's just *A LOT* of work, for data that I *KNOW* is not going to change. It would all be const if it could be :)

    But yeah, starting a project from the ground up, one could prepare for this, I just hacked the jobs into an existing project with "Proper" data organization.

    Wait... my brain just exploded... could a NativeArray hold NativeArrays?!? I never tried that...
    I was trying normal arrays of structs... but I'm guessing that wouldn't work either.

    Anyway, I put in the effort on my project and it's working. It just would have stayed way cleaner if I could have passed the example structure above rather than several flattened native arrays.
    -Joe
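    For anyone hitting the same wall, the flattening step being described can be sketched like this, with the child struct carrying the index of its parent so a job can follow the link without searching (all type and field names here are illustrative):

```csharp
using System.Collections.Generic;
using Unity.Collections;

// Hypothetical managed source data (nested, variable length).
class ContinentData { public List<CountryData> countries = new List<CountryData>(); }
class CountryData { public int population; }

// Flat, blittable form: continentId acts as a "foreign key" into the
// continents array, so jobs can index straight to the parent.
struct Country { public int continentId; public int population; }

static class Flattener
{
    // One-time conversion, done on the main thread when the data arrives.
    public static NativeArray<Country> Flatten(List<ContinentData> continents)
    {
        var flat = new List<Country>();
        for (int c = 0; c < continents.Count; c++)
            foreach (var src in continents[c].countries)
                flat.Add(new Country { continentId = c, population = src.population });
        return new NativeArray<Country>(flat.ToArray(), Allocator.Persistent);
    }
}
```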
     
  8. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    2,835
    Hierarchies like that are generally problematic in any context. Basically what you want is to start with a normalized data model. Think relational databases here.

    If you normalized, then you would have ended up with something like: a continent has a world id, and a country has a continent id. Basically three tables. That's your base. Transforming it is then an optimization for specific use cases like ECS. Or you might transform it the other way, going from some type of editor into your normalized format and then into an ECS format.

    If the data only exists in Unity, then it might be OK to not actually store anything in a normalized format. But it's still useful to normalize it as part of your design before optimizing it for something like ECS. You will just end up with better results.
     
    mkracik likes this.
  9. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    36
    Optimized and organized as shown is *way* faster for getting the data needed. Jumping right to the proper pre-indexed data, rather than searching through all elements, was essential pre-job-system.

    I have a database that holds my normalized data; my game engine takes that and organizes it efficiently.
    Makes sense to me, and runs waaay faster. I'm already skirting the edges of acceptable fps :)

    Like I say, I've broken it apart and ship the data in, just not cleanly, as now I have multiple data sets: one organized for access from non-jobs code, and one flattened/normalized for shoving into jobs. If only there were a way to "lock" an existing structure somehow, or snapshot it, or a special static class, or... some fancier Native* type of creature.

    The solution to keep my data as-is was likely to run my own threads, I guess, but I was just sharing the one jobs hurdle which took my implementation from 30 minutes to 10 hours of pain :)
     
  10. M_R

    M_R

    Joined:
    Apr 15, 2015
    Posts:
    532
    why do you need to search through all elements? you can just declare the ID to be the index in the respective NativeArray:
    Code (CSharp):
    struct YourJob : IJobParallelFor {
        [ReadOnly] public NativeArray<World> worlds;
        [ReadOnly] public NativeArray<Continent> continents;
        [ReadOnly] public NativeArray<Country> countries;

        [WriteOnly] public NativeArray<Result> results;

        public void Execute(int index) {
            var country = countries[index];
            var continent = continents[country.continentId];
            var world = worlds[continent.worldId];
            // do your work
            results[index] = GetResult(country, continent, world);
        }
    }
     
  11. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    36
    Sure, that works top-down, but bottom-up is more common in my use cases.
    I.e., what's the population of all countries in world X?
    Ex:
    foreach (var country in countries)
        if (country.worldID == X) population += country.population;

    Then you have to loop, or rebuild an indexed structure.
     
  12. M_R

    M_R

    Joined:
    Apr 15, 2015
    Posts:
    532
    if you need to know that often, just cache the population per-world and use a ReactiveSystem to keep them synced.
    or keep a separate NativeArray<Country> for each world and pass only world X's array to the job.

    same for all the "hot queries" you have. you need to organize your data based on your needs.

    you can also do this:
    - keep all countries ordered by world (then continent)
    - keep a (start index, count) of countries for each world (and continent)
    - get a NativeSlice of the countries relevant to your world in the job (or only loop through your world's range)
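    The last suggestion, countries sorted by world plus a (start, count) entry per world, might look roughly like this (type and field names are illustrative):

```csharp
using Unity.Collections;
using Unity.Jobs;

struct Country { public int population; }
struct Range { public int start, count; } // span of one world's countries

struct WorldPopulationJob : IJob
{
    [ReadOnly] public NativeArray<Country> countries; // sorted by world id
    [ReadOnly] public NativeArray<Range> worldRanges; // one entry per world
    public int worldId;
    public NativeArray<long> result; // length 1, holds the sum

    public void Execute()
    {
        // Only the target world's contiguous range is touched; no searching.
        var r = worldRanges[worldId];
        long sum = 0;
        for (int i = r.start; i < r.start + r.count; i++)
            sum += countries[i].population;
        result[0] = sum;
    }
}
```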
     
  13. joedurb

    joedurb

    Joined:
    Nov 13, 2013
    Posts:
    36
    My lessons/FYI for anyone else who didn't realize how insanely simple multithreading is in C#:
    if you have a big project with data sets, or need long-running threads, just roll your own.

    Ended up unrolling all of the ugly code I used to convert my datasets to native arrays for the job system, and just implementing native C# threads. Wow, easy and more performant. My bad!

    I was certainly misusing the Unity job system, which is built to handle tons of multithreading with small data chunks.
    I saw the job system and thought, oh, it's made multithreading easier! Great... wrong!

    So, moral of the story: there is a definite place for normal threading, and I would say it's when your thread usage is simple and not timing-critical (so as to avoid threading bugs), and/or your dataset is huge and varied and doesn't/can't utilize the ECS stuff.
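    A bare-bones version of the roll-your-own approach described above: one background thread consumes chunk requests and hands results back through a thread-safe queue that the main thread drains each frame (ChunkWorker and Generate are hypothetical names, and Generate is a placeholder for real terrain work):

```csharp
using System.Collections.Concurrent;
using System.Threading;

// Generated chunks come back via a thread-safe queue; the main thread
// drains `results` once per frame (e.g. in Update()).
class ChunkWorker
{
    readonly BlockingCollection<int> requests = new BlockingCollection<int>();
    public readonly ConcurrentQueue<int[]> results = new ConcurrentQueue<int[]>();

    public ChunkWorker()
    {
        var t = new Thread(() =>
        {
            // Blocks until a request arrives; exits when requests completes.
            foreach (var chunkId in requests.GetConsumingEnumerable())
                results.Enqueue(Generate(chunkId)); // free to read shared read-only tables
        });
        t.IsBackground = true; // don't keep the process alive on quit
        t.Start();
    }

    public void Request(int chunkId) => requests.Add(chunkId);

    static int[] Generate(int chunkId) => new[] { chunkId }; // placeholder work
}
```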
     
  14. Lurking-Ninja

    Lurking-Ninja

    Joined:
    Jan 20, 2015
    Posts:
    5,345
    And you will need to be careful, because you'll be competing for the available cores with Unity. When you use C# threading instead of the job system, Unity can't schedule the work in sync with its own jobs. So don't be surprised if either your tasks or Unity's jobs (transform updates and such) get postponed sometimes... But other than that, yes, you can use it; it's just not recommended for these reasons. There is nothing you cannot do in the job system.
     
  15. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,015
    That definitely shouldn't be the conclusion, generally speaking. The approach we are taking to multithreading with C# jobs / Burst is a data-oriented design approach. It requires structuring & controlling your data in a different way than what you might be used to. This can take some time to learn, and to unlearn old habits.

    However, it is not correct that C# threads are a good solution to this problem:
    1) Overuse of cores / context switching will hurt performance
    2) No safety system means no guarantees about your code being thread safe
    3) No Burst optimizations means your code runs slower
    4) Not forcing yourself into a good data layout means you are very likely leaving lots of perf on the table

    No one prevents you from keeping on in the same old traditional .NET direction. It just has a lot of downsides, and it is absolutely not recommended for the health of any project in the long run.
     
    _met44, GilCat, hippocoder and 4 others like this.
  16. meanmonkey

    meanmonkey

    Joined:
    Nov 13, 2014
    Posts:
    130
    I have a background/async loading setup too. I adapted my code from C# threads & managed arrays to Unity jobs & native arrays, and it works out pretty well for me. What I did:

    1.) Converted my nested managed arrays into single native arrays, and pass offsets rather than array/map IDs.
    2.) Created a static version of all methods which I need to use in jobs (you'll have to pass in all needed NativeArrays and non-static values).
    3.) Created an object version of all the static versions for convenience (basically just a wrapper).
    4.) I'm running parallel Unity jobs + Burst on those native arrays for frame-crucial stuff.
    5.) I'm running single persistent background worker jobs for non-frame-crucial stuff (invoked with a wait handle from the main thread).

    Of course you can ditch Unity jobs and NativeArrays, but once you realize how extremely fast parallel Unity jobs + Burst are, you will have a hard time doing that :)
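    Steps 2 and 3 above might look something like this (all names are illustrative): a static method that receives every NativeArray and value it needs explicitly, so it can be called from inside a job, plus a thin instance wrapper for main-thread convenience:

```csharp
using Unity.Collections;

static class HeightMath
{
    // Step 2: static, state-free version usable from inside a job;
    // every NativeArray and non-static value is passed in explicitly.
    public static float SampleHeight(NativeArray<float> heights, int width, int x, int z)
    {
        return heights[z * width + x]; // offset into the flattened 2D grid
    }
}

// Step 3: object wrapper for convenience on the main thread.
class HeightField
{
    NativeArray<float> heights;
    int width;

    public HeightField(NativeArray<float> heights, int width)
    {
        this.heights = heights;
        this.width = width;
    }

    public float Sample(int x, int z) => HeightMath.SampleHeight(heights, width, x, z);
}
```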
     
    Last edited: Dec 10, 2018
    Antypodish likes this.
  17. wobes

    wobes

    Joined:
    Mar 9, 2013
    Posts:
    758
    Actually, we can't run a network thread on a job worker thread, because of the nature of jobs. So it is better to switch to a separate C# thread and have some kind of RingBuffer to allow inter-thread communication.
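    A minimal single-producer/single-consumer ring buffer of the kind being suggested might look like this; it is a sketch under the stated SPSC assumption, not a hardened implementation:

```csharp
using System.Threading;

// One dedicated writer (the network thread) and one reader (the game loop).
// One slot is left empty to distinguish full from empty.
class RingBuffer<T>
{
    readonly T[] items;
    int head, tail; // head: next write slot, tail: next read slot

    public RingBuffer(int capacity) { items = new T[capacity]; }

    public bool TryWrite(T item) // called only by the network thread
    {
        int next = (head + 1) % items.Length;
        if (next == Volatile.Read(ref tail)) return false; // full
        items[head] = item;
        Volatile.Write(ref head, next); // publish after the slot is filled
        return true;
    }

    public bool TryRead(out T item) // called only by the main thread
    {
        if (Volatile.Read(ref head) == tail) { item = default(T); return false; } // empty
        item = items[tail];
        Volatile.Write(ref tail, (tail + 1) % items.Length); // free the slot
        return true;
    }
}
```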
     
  18. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,015
    Piefayth likes this.
  19. wobes

    wobes

    Joined:
    Mar 9, 2013
    Posts:
    758
    https://github.com/Unity-Technologi...ntation/samples/jobifiedserverbehaviour.cs.md

    Doesn't that mean the network thread depends on the game loop, so the game loop is basically a bottleneck for updating the network logic? In the example above, they schedule a job from the main thread. That means if your game runs at 10 fps, your network will work with an additional 100 ms of latency. The perfect solution is to have a separate thread that works with its own data and just pushes events out to a non-blocking queue.
     
    Last edited: Dec 16, 2018
    Vincenzo likes this.
  20. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    564
    In its current state, it's far from efficient. Networking is all about I/O and high-level abstractions for data exchange between internal systems and synchronization between machines/devices. If you know a thing or two about sockets and take a look at the C code in that repository, you will notice that there's just a primitive, non-scalable implementation on top of UDP, suited only for simple games. I don't see any efficient multiplexing/descriptor monitoring that leads to high-performance I/O. I don't see any of the good, well-known patterns there that are widely used by experienced networking programmers.

    About jobification of that layer: I've never used Unity's job system, but I read everywhere that it's not designed for I/O tasks, and Tim said that in his blog post as well. I don't know how C# jobs are implemented under the hood, but if the native side is something similar to what Naughty Dog's engineers made, then ironically fibers/coroutines mixed with threads are exactly what programmers widely use in networking systems. But from what I see, this is not the case.

    A bunch of missing must-have functionality is another story, which we'll talk about another day, I think.
     
    Last edited: Jan 24, 2019
  21. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    564
    So, I just dug into the C# jobs manuals and learned some stuff. Now I clearly see that this mess on top of sockets is not even close to true parallelism; it's an imitation. Besides the fact that the programmer who made the C underlayer doesn't know how the kernel works, he (or another guy?) doesn't even know how to make Unity's tech truly efficient. It's just tons of pointless overhead that will not benefit the end user in any way. The whole implementation is overcomplicated and slows itself down, adding latency to the whole flow without any sane reason.

    On top of this, I see at least two security vulnerabilities that open the doors even to a junior hacker who knows how to manipulate packets.
     
    Vincenzo, AngelRc and e199 like this.
  22. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    564
    Rough roadmap:
    • Define protocol standards; any noticeable change means a new major version.
    • Investigate security; prevent tampering and replay attacks at least.
    • Reliability should sit at a low level for low latency; consider KCP.
    • Remove hardcoded scatter/gather from the transmission functions; pass it as a parameter, add context, and make the functions more agnostic.
    • Implement fibers; use fcontext_t for green concurrency. If Unity's jobs are already implemented in a similar way, then half of the work is already done.
    • Remove any dependency between the game loop, I/O, and worker threads. No shared data, no shared state, no scheduling from the hot path. Everything should work independently of everything else; it should be a non-stop flow/conveyor. Make it truly parallel.
    • Avoid heap allocations for everything; use scalable concurrent pools with TLS.
    • Try to avoid/minimize interoperability/cross-language implementation overhead across layers and levels.
    • Organize a clear code structure; eliminate spaghetti.
    • Add a pipeline-like API for injecting custom functionality into the data flow; extend modularity and flexibility.
    • Implement high-level abstractions as modules for stuff related to synchronization, various game mechanics, and so on.
    • Use Span<T> for contiguous memory access and buffer manipulation at a high level. I don't know about JIT optimizations in Mono, but the latest version should support fast Span at least.
    • Stop working in the shadows; nobody wants to contribute to a two-month-outdated repository with a single branch. Organize a collaborative workflow; commit in real time.
    • Provide a code of conduct and contribution guides for non-in-house open-source developers.
    • Introduce two concepts, like the scriptable render pipelines but for networking: one lightweight for mobile platforms/small games and one advanced for desktop/servers.
    • Investigate socket-related kernel implementations across platforms.
    • Integrate scalable event interfaces: epoll and kqueue for readiness-oriented I/O on Linux/UNIX/POSIX, and IOCP for completion-oriented I/O on Windows.
    • Implement a semi-real-world testbed using multiple machines and heavy-load/high-parallelism simulations. Synthetic benchmarking for basic measurements during development.
    • Stabilization, fixes, and gap-closing with continuous updates.
    • Maximize performance and optimize resource usage.
     
    bb8_1, Ziflin, bhseph and 11 others like this.
  23. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    564
    If your business is not ready to make networking a first-class citizen in Unity, don't try to create a parody.
     
    Karasu416, Vincenzo and wobes like this.
  24. MichalBUnity

    MichalBUnity

    Unity Technologies

    Joined:
    May 9, 2016
    Posts:
    17
    Let me start off by saying that the overall plan for the Network Transport is to become a true citizen of the DOTS (Unity Data-Oriented Tech Stack) family. In order for us to do so, we also need to experiment with code and flows. That means that not everything that is released will be in its final form the first time you see it. We chose a way forward that might not always be the most efficient way to do networking at the moment, but it gives us leverage to see and think about how the API should grow and thrive in the future. I agree that our transparency has suffered over the last months, and I really hope we can do a better job at that: sharing what specific features we are working on and how we can move forward together.

    That being said, I can say that a lot of the things mentioned as missing are coming. And I will make it my personal mission to make sure you are kept up to date with our progress beginning next year.

    Finally, to give you an idea of how we work: we first try to define the problem we want to solve. In the case of the Transport, our challenge is to create a networking library that supports the finite set of platforms we have, in an efficient and effective manner, while maintaining compatibility with both GameObjects and ECS. To do so, we identified that in order to get the most out of each platform we will most likely need a specific solution in place for that platform. So we made a decision early on not to tackle the specific implementation for each platform before we fully understood the characteristics of that platform. This way we can reach more platforms early, gather data for each platform, see in what ways we can improve the flow for that specific platform, and get a feel for how the API should be laid out in order to support the different platform-specific implementations and quirks.

    Our current focus this iteration has been to make sure the send/recv flows make sense and that the API itself feels fluent; this includes pipelines and moving work off the main thread (currently a work in progress). I hope we will be able to share this progress with you soon.
     
    thelebaron, Kender, GliderGuy and 4 others like this.
  25. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    564
    @MichalBUnity Thank you, and sorry for the aggressive form of my text... Joe's post triggered me a bit (@Joachim_Ante, I love you and your hair, don't get me wrong). I understand that work is still in progress, and I would like to help/contribute, but a lot of stuff is still not clear. The repository seems abandoned, issues are just ignored, and there's no activity. Open up the development process to us; we are programmers too.
     
    Kender, GliderGuy, zhuchun and 2 others like this.
  26. Quatum1000

    Quatum1000

    Joined:
    Oct 5, 2014
    Posts:
    809
    There are not enough senior programmers on the market who can solve network transport data systems well. And the ones who are able don't want to start on Unity's team, unfortunately, because of the short-lived hype seasons; long-term secure employment matters more to them. The solution you mentioned takes about 2.5 - 3 man-years to solve in an appropriate way. So most dev teams start over with their own solution or enhance the existing Unity system.

    You can be assured that if you're able to enhance the current Unity NTDS in a useful and correct way, the Unity net dev team will collaborate on this. Be friendly and courageous.
     
  27. wobes

    wobes

    Joined:
    Mar 9, 2013
    Posts:
    758
    Where does the data come from? We aren't talking about solving everything in the early stage. However, do not claim that the repository is super efficient because it utilizes "Jobs". Efficient networking is not just about jobifying. In fact, it's not efficient at all, and the whole network layer is simply marketing. A two-month-outdated repository, and an unclear goal of what Unity is trying to solve when there are a ton of efficient transport layers already.
     
  28. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,727
    My only requirement is not revisiting the HLAPI fiasco. I would rather have LLAPI only, plus code modules I can work with that are relevant to most games. So implement a proper, clean approach using your one API (i.e., it's all one API, not LLAPI and HLAPI), and tell us clearly how to use it.

    If it's not clean, direct, and one message from Unity (nobody needs 100 ways to skin a fish), then I'll probably just be safer going with Photon no matter what. It's simple with Photon: one way of doing it, done well. Works for all games.

    UNet is basically just something best forgotten. Looking forward to the future, but try to err on the side of fewer options and more solid performance. I don't care that every programmer wants a different API. I care that, with networking, it works and my customers don't give my game one star because it screwed up, or because Unity mixed the API up so much that I had no choice but to screw it up myself.

    It matters more with networking than with any other feature in Unity, because with networking at least 50% of the traffic is not on your computer.

    That's why simple with fewer optional parts is better, ESPECIALLY for an API that is still in the design phase. Be authoritative, listen less to customers, and lead better. If you want an example of what customers already know and understand, use Photon for a few months. Otherwise, know that we need to be led firmly and clearly with networking.

    Approaching networking development is not like any other area in Unity. It is the area with the most chance to ruin a project... or win it.

    Unity needs to be assertive in this area and not be pulled in multiple directions. For example, the HDRP render pipeline is led well: it knows best, knows what specifically has worked in the field, and what continues to work in the field.

    I want that leadership from Unity, because Unity lost my trust with UNet.
     
    Kender, bhseph, GliderGuy and 4 others like this.
  29. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    690
    I fully agree with you on this one. But I think it's more about ownership than leadership. I don't think there's anyone who cares enough about Unity itself. It's like everyone is saying whatever they want, and they don't seem to communicate with each other.

    Marketing is always at full steam selling stuff, but the developers always miss the target (not sure if they have any real roadmaps), and users are always full of hope.
    Seriously, I don't want to hear another marketing person start another "you are covered" campaign. Unity really needs to sit down and figure out how to make Unity a solid product at the core. I'm talking about as a game engine, instead of lurking into other markets. I think you know what I mean.

    In the case of the network stack, I'd rather have Unity just copy the UE network stack concept and figure out how to implement it better using ECS and jobs.

    I really wish Unity could do better than UE, but I have very low confidence that it can; therefore, just borrow the UE concept at a high level. I'm pretty certain that it would probably be a safer strategy.

    UE has the best network stack by far, and it has still improved a lot lately. If it can support ~100 players with low latency and high throughput, I'm sure it can handle many other types of games, except of course MMORPGs.

    Network programming is probably the most difficult task, and one side benefit of borrowing the UE concept is that it will make it easier to learn, as there are plenty of examples and documentation on how UE networking works. (I can share my experiences and point you to some materials.) It will also entice new UE developers over to Unity more easily.

    Anyway, Unity needs to be more transparent about what's going on right now, instead of "soon". Soon in Unity can mean years, and early next year can mean summer. Please don't let us down this time.
    Thanks.
     
    GliderGuy likes this.
  30. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,727
    I agree with you regarding networking; Unity has never done networking well so far. But in other areas I put it to you that Unity is leading. HDRP, VFX, ECS, Burst, Jobs: these are so good that a pretty big AA studio (I can't name names) is moving over to Unity because of ECS. They could have rolled their own C++ solution, but it's actually a bit too expensive for them to do so, and their staff is already familiar with Unity from personal projects. And since they can ask me about obscure details, it wasn't a big risk.

    HDRP+VFX was the sweetener they needed to make the final decision, but ECS performance really is an industry talking point, especially as it's so easy to get into (relative to other solutions).

    So I understand why Unity wants to make sure networking is ECS-based. Deterministic mathematics is on the way as well...

    If they don't screw this up, Unreal's networking isn't going to beat it. It's still a long way off, I would imagine, and I'm sure nobody at Unity blames me for adopting a wait-and-see attitude toward the new networking.
     
    GliderGuy, Cynicat and FROS7 like this.
  31. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    2,835
    The game industry has generally never done well when it comes to client/server architectures. Years back, when I was still working on Game Machine, I ran into the Improbable team, and we had some interesting conversations about how the game industry was literally a decade behind in this stuff.

    So it's not as easy for Unity to just bring in top people. It's a niche area, and on top of that, some of the best people are outside the game industry. Even larger studios often don't have real domain experts in this area. It really is a scarce resource in this industry.

    With their purchase of that hosting company (I can't remember the details), I would think that, long term, they definitely do plan on putting together an A team for this. At least it makes sense to me that they would.
     
  32. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    690
    I was a long-time Unreal user, so I'm biased; however, if there is *one* thing I want to steal from them, it's their network stack. It's light years ahead of everyone else's, and it's battle-proven. I wish Unity would really sit down and carefully study it before starting another unproven experimental project. I'm sure theirs is not without fault and there is room for improvement, and I also think ECS can seize such opportunities.

    When I looked at Unity ~10 years ago, I was so disappointed at how rudimentary its Editor was compared to UE back then. It still is, and what surprises me is that it's basically the same editor as 10 years ago. UE4 was built from scratch in the past few years, and it's already light years ahead of Unity in many areas. I'm disheartened that simple yet fundamental workflow issues haven't been fixed at all. That's why I think there is a lack of ownership (leadership).

    The reason I came back is that I'm a big fan of C#, and I was so excited when I watched Unite Berlin and learned about ECS and the plan for the "Best Networking System". Rendering-wise, it's almost there, and I'm not worried about it. I thought the important missing piece (the network stack) would finally arrive, so I convinced my partner to try Unity for their next project. I feel deeply responsible, and right now I'm having a lot of trouble (*surprise*) dealing with large assets. I never expected that to be a problem. Nothing else matters right now unless I know I can work with a large project. I hated UE for its long compile times, but right now I hate how long everything else in Unity takes. It loses all the advantage of having a faster compiler.

    The Unity Editor is getting slower and slower every day. This is something I never experienced with Unreal. In UE, only the compile time gets longer as the project grows, which is normal; clicking "Play" in UE is almost instant. I just found out that clicking "Play" in Unity reloads all assemblies every time, even when they haven't been used or changed at all. I'm dumbfounded that it has worked that way all this time. It takes about 15 seconds to load a simple scene, and about 9 seconds to load an empty scene, in my project. Yes, there are problems with asset initialization during the reload, because some 3rd-party assets do work there, but why is it reloading all the assemblies each time? It shouldn't do that to begin with. Does Unity know about this problem? Probably, but it's probably not treated as an issue because it adds only a fraction of a second on small projects; they haven't done any large projects themselves, otherwise it's impossible that they would have left it that way.

    Before Unity talks about "Performance by Default", it really needs to make the Editor "Performant by Default" first, so that it saves countless hours of our lives. Will they ever listen and do something about it? I don't know. It's so hard for them to admit problems, and even when they do, the fixes sit there for years and years. I remember Unity saying, "Why do you need another GUI? There is no problem with IMGUI; you can do everything and anything with it." It took years for them to admit there was a need for a new GUI, and about 5 more years to deliver it. (As a side note, there is still a big problem with the Inspector, which tries to redraw every time you scroll, thanks to IMGUI again. There are custom assets causing big slowdowns that affect the general usability of the Editor. It took me a while to figure out why it was so slow, and there is very little I can do about it unless I rewrite someone else's asset. I hear it will support UIElements, but I think that's 10 years too late, and it will take years for everyone else to adopt it.)

    Instead, they seem to promote stuff that's not so critical, such as dockable windows, consolidated preset menus, font types, and button backgrounds in the keynotes. To me that's laughable and worthy of a one-liner in the patch notes, not a keynote. Sorry, but I had to be that guy and point it out.

    What they should start promoting is a "Remove the Pain Points" or "Making Unity Easy to Use" campaign. There are literally many simple and easy yet fundamental fixes lying around. Unity should gather them all in one basket (Trello?) and show users that they are knocking them out one by one, in real time. It would show that Unity cares about its users, and it would put me at ease. If they had that mentality, Unity would be so much better by now.

    Anyway, I'll stop my rant, but I can't help saying it because I care.

    Wish the very best.

    Cheers!
     
    Last edited: Dec 20, 2018
    hippocoder, FROS7, Piefayth and 2 others like this.
  33. Quatum1000

    Quatum1000

    Joined:
    Oct 5, 2014
    Posts:
    809
    That's not true. I started with Quake 3 a long time ago, and its net code was very efficient from the start and was optimized step by step. That net code is state of the art, and all game companies learned from Quake 3. The ability to have a negative _timenudge through deterministic prediction is outstanding.

    There is only one guy I know personally who is able to solve problems in this area, but he's working at Dimension Data Austria. https://www.linkedin.com/in/david-hamann-a33437a8/
     
  34. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    2,835
    The game industry has solved some things well, things that were very specific to games, and most of it not really at the system-design level but in implementations of very specific, narrow problems. But if you look at the bigger picture, like their abstractions around concurrency and messaging, or even more basic principles like separation of concerns, they were at times literally a decade behind modern approaches.

    It was the financial industry that was leading the way when it came to low latency cpu cache friendly designs. LMAX Disruptor was open sourced. Aeron is a modern low latency messaging framework although it's fairly recent. Frameworks like Mina and later Netty were iterating on designs that worked well for networking pipelines. The Scala team had a huge impact on how concurrency was handled, borrowing from the actor model and pushing reactive design via Akka.

    There is so much good stuff the industry could have borrowed. They've been doing a better job recently, but for many years they failed miserably, and client/server design is an area where it really stands out.
     
    Spy-Shifty and nxrighthere like this.
  35. Spy-Shifty

    Spy-Shifty

    Joined:
    May 5, 2011
    Posts:
    542
  36. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    564
    One of the biggest problems I encountered while working with high-throughput UDP systems is that the kernel, across platforms, is not really suited for high-performance multiplayer games. When you just start multiple workers to read/write datagrams in the traditional way, lock contention, memory allocations, inefficient queue management, route lookup, and tons of system calls in the kernel will make any well-designed system built on top inefficient by default, relative to the number of concurrent connections. You can't scale in userspace while the layer underneath, including the kernel, is not scalable.
    
    You don't really need a heavy multi-threaded environment to build a small, efficient networking system using the traditional ways of wrapping UDP sockets. For small multiplayer games and small server instances that use traditional UDP approaches, a single networking thread for everything, plus a few non-blocking queues (such as a ring buffer) to deliver stuff to the main thread, is more than enough. Everything else is just workers for high-level abstractions and sub-systems.

    The only way to bypass the kernel overhead and unleash the potential of a system that bets on high parallelism and low latency over UDP at extremely high throughput is to:
    1. Utilize the latest socket-related I/O technologies built by the networking engineers of a particular platform.
    2. Build a custom I/O framework against the network API directly (as a kernel module if you want).
    Only then will you be able to scale in userspace:
    (attached plot: plot-tx-clock-201109.jpg)
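    The "single networking thread plus non-blocking queues" setup described above can be sketched in plain .NET. This is a hypothetical sketch, not from the thread: `ConcurrentQueue` stands in for the ring buffer, and the port and timeout values are arbitrary.

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    // One dedicated socket thread drains datagrams into a lock-free queue;
    // the main thread consumes them without ever touching the socket.
    public static class UdpPump
    {
        static readonly ConcurrentQueue<byte[]> Inbox = new ConcurrentQueue<byte[]>();

        public static Thread StartReceiver(int port, CancellationToken token)
        {
            var thread = new Thread(() =>
            {
                using (var socket = new UdpClient(port))
                {
                    // Short timeout so the loop can observe cancellation.
                    socket.Client.ReceiveTimeout = 100;
                    var remote = new IPEndPoint(IPAddress.Any, 0);
                    while (!token.IsCancellationRequested)
                    {
                        try { Inbox.Enqueue(socket.Receive(ref remote)); }
                        catch (SocketException) { /* timeout: loop again */ }
                    }
                }
            });
            thread.IsBackground = true;
            thread.Start();
            return thread;
        }

        // Called from the main thread each frame.
        public static bool TryDequeue(out byte[] datagram) => Inbox.TryDequeue(out datagram);
    }
    ```

    The socket thread never touches game state; the queue is the only shared structure, which is what keeps this design contention-free at small scale.
    
    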
     
    Last edited: Dec 22, 2018
    wobes likes this.
  37. nxrighthere

    nxrighthere

    Joined:
    Mar 2, 2014
    Posts:
    564
    I just looked into the source code of Aeron (the C implementation), and it's the same traditional UDP with vectored scatter/gather that you can find in Unity's repository, but Aeron's implementation is more correct, more agnostic, and far more complex. It should be great for IPC.
     
    Last edited: Apr 9, 2019
  38. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    2,835
    Ya, Aeron has a number of good ideas to borrow, but it was obviously designed for server-to-server communication. And in games we almost always have some type of locality, or an easy way to partition things, so we don't need to handle the kind of volume Aeron was designed for on a single server.

    I don't really care whether Unity's networking scales. It's always going to be a small number of studios that work at scale, though there will be far more now with realtime moving to mobile in a big way. But the job system and ECS are solving the really hard problems and give us a foundation we can do something with.

    Right now even large AAA studios don't know how to scale realtime at mobile scale. The hard problems are really more around the instance/match-per-process model that has been common, and scaling database queries around it. That model has huge inefficiencies, as studios like Epic are discovering. And I think it's just natural that the best solutions will grow out of studios that have the actual problem in front of them. Unity has that on the client side, but they are kind of a fish out of water when it comes to what the server side should look like.
     
  39. orionburcham

    orionburcham

    Joined:
    Jan 31, 2010
    Posts:
    491
    On the OP topic:

    Now that [Inject] is going away, system injection is no longer available as a workaround for accessing 'static' data.

    What other options do we have to access the same Native Collection in multiple systems (and their jobs)?

    Specifically, I’m looking to use a NativeMultiHashMap as a lookup table. It’s needed in at least two jobs.

    Thanks!
     
  40. Spy-Shifty

    Spy-Shifty

    Joined:
    May 5, 2011
    Posts:
    542
    I don't understand your question.
     
  41. orionburcham

    orionburcham

    Joined:
    Jan 31, 2010
    Posts:
    491
    Way back when this thread was about accessing static data in Jobs, recursive wrote this:

    That would prevent you from needing to access static data, and would let you dispose of your Native containers in the System's OnDestroyManager() function.

    But that way also uses [Inject], which is being deprecated. Since it won't be an option soon, what other solutions do we have to access a single Native container across multiple System updates?

    ...Something tells me I'm missing an obvious, face-palm-worthy solution, but I'm not seeing it yet. *Anyway*! That was my question. Any clearer? Thank you for any help.
     
    hippocoder likes this.
  42. RecursiveEclipse

    RecursiveEclipse

    Joined:
    Sep 6, 2018
    Posts:
    149
    I think he means injecting systems like this: ECS - The "Correct" way to handle complex shared data between systems. I've also been wondering whether there is a currently accepted way to do system injection, or to use EntityCommandBuffer, without [Inject]. I'd like to eliminate [Inject], but it doesn't seem possible to do so completely yet. I have a NativeHashMap that needs to be created from a list on a MonoBehaviour, and I'm not sure whether I should just build it on the MonoBehaviour itself or create a system just for that and inject it. Neither option feels very clean to me.
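    One way to keep the ownership clean in the meantime is to let the MonoBehaviour merely supply the data while a dedicated system owns the native copy and its lifetime. A hypothetical sketch against the 2018-era Entities API (`RuleEntry`, `RuleTableSystem`, and the field names are all made up):

    ```csharp
    using System.Collections.Generic;
    using Unity.Collections;
    using Unity.Entities;

    // Placeholder data shape for whatever the MonoBehaviour's list holds.
    public struct RuleEntry { public int Id; public float Weight; }

    // The system owns the NativeHashMap for its whole lifetime.
    public class RuleTableSystem : ComponentSystem
    {
        public NativeHashMap<int, float> Table;

        // Called once by the MonoBehaviour (e.g. from Awake) after it has its list.
        public void Build(List<RuleEntry> entries)
        {
            if (Table.IsCreated) Table.Dispose();
            Table = new NativeHashMap<int, float>(entries.Count, Allocator.Persistent);
            foreach (var e in entries)
                Table.TryAdd(e.Id, e.Weight);
        }

        protected override void OnDestroyManager()
        {
            // The system, not the MonoBehaviour, is responsible for disposal.
            if (Table.IsCreated) Table.Dispose();
        }

        protected override void OnUpdate() { }
    }
    ```

    The MonoBehaviour stays a thin data source, and disposal lives in one predictable place.
    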
     
  43. Spy-Shifty

    Spy-Shifty

    Joined:
    May 5, 2011
    Posts:
    542
    To "inject" a system... you can also use:
    Code (CSharp):
    MySystem mySystem = World.GetOrCreateManager<MySystem>();
    The only thing we can't access without injection is ComponentDataFromEntity<T> / BufferFromEntity<T>, as far as I know,

    because those methods are internal methods of the EntityManager class...
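    Sketched end-to-end, that approach might look like this (hypothetical names throughout, against the 2018-era API: the owning system exposes the container as a public field, and the consumer fetches it with `GetOrCreateManager` instead of `[Inject]`):

    ```csharp
    using Unity.Collections;
    using Unity.Entities;
    using Unity.Jobs;

    // Stand-in for whichever system owns the shared container for its lifetime.
    public class RuleLookupSystem : ComponentSystem
    {
        public NativeHashMap<int, float> Rules;

        protected override void OnCreateManager()
        {
            Rules = new NativeHashMap<int, float>(1024, Allocator.Persistent);
        }

        protected override void OnDestroyManager()
        {
            if (Rules.IsCreated) Rules.Dispose();
        }

        protected override void OnUpdate() { }
    }

    // Consumer: fetches the owning system instead of [Inject]ing it.
    public class ChunkGenerationSystem : JobComponentSystem
    {
        RuleLookupSystem _lookup;

        struct ReadRulesJob : IJob
        {
            [ReadOnly] public NativeHashMap<int, float> Rules;

            public void Execute()
            {
                // Read-only lookups only; the container must not be resized
                // while jobs referencing it are in flight.
            }
        }

        protected override void OnCreateManager()
        {
            _lookup = World.GetOrCreateManager<RuleLookupSystem>();
        }

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            return new ReadRulesJob { Rules = _lookup.Rules }.Schedule(inputDeps);
        }
    }
    ```

    Because the container is copied into the job struct as a field, the safety system can still track readers and writers, unlike a raw static reference.
    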
     
    Last edited: Jan 1, 2019
  44. eizenhorn

    eizenhorn

    Joined:
    Oct 17, 2016
    Posts:
    1,995
    Not true :) There are GetComponentDataFromEntity/GetBufferFromEntity methods on the system base class. The versions in the EntityManager are not for public use, and they lack the correct reader/writer registration. You should use GetComponentDataFromEntity/GetBufferFromEntity from ComponentSystem.
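    For illustration, the system-side call might be used like this (a hypothetical sketch; `Position` is the era's `Unity.Transforms` component, and the `Entity.Null` target is a placeholder):

    ```csharp
    using Unity.Collections;
    using Unity.Entities;
    using Unity.Jobs;
    using Unity.Transforms;

    public class TargetReadSystem : JobComponentSystem
    {
        struct ReadJob : IJob
        {
            [ReadOnly] public ComponentDataFromEntity<Position> Positions;
            public Entity Target;

            public void Execute()
            {
                // Random access by entity, read-only, inside the job.
                if (Positions.Exists(Target))
                {
                    var p = Positions[Target];
                }
            }
        }

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            var job = new ReadJob
            {
                // 'true' marks the accessor read-only, so the safety system
                // can let other readers run in parallel.
                Positions = GetComponentDataFromEntity<Position>(true),
                Target = Entity.Null // placeholder target
            };
            return job.Schedule(inputDeps);
        }
    }
    ```
    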
     
    Spy-Shifty likes this.
  45. MatthieuPr

    MatthieuPr

    Joined:
    May 4, 2017
    Posts:
    56
    I would guess the new singleton approach could give a nice workaround for that :)

    https://forum.unity.com/threads/singleton-components.535331/

     
    orionburcham likes this.
  46. orionburcham

    orionburcham

    Joined:
    Jan 31, 2010
    Posts:
    491
    That would be great! However, Unity's ECS components must be blittable, so they can't contain collections or native collections. There are a few restrictions there. Luckily, Systems don’t have those.
     
    Last edited: Jan 2, 2019
  47. orionburcham

    orionburcham

    Joined:
    Jan 31, 2010
    Posts:
    491
    In Unity's Twin Stick Shooter sample project, some EntityArchetypes and Component types are stored as static members of the bootstrap class:

    https://github.com/Unity-Technologi...tickShooter/Pure/Scripts/TwoStickBootstrap.cs


    Code (CSharp):
    public sealed class TwoStickBootstrap
    {
        public static EntityArchetype PlayerArchetype;
        public static EntityArchetype BasicEnemyArchetype;
        public static EntityArchetype ShotSpawnArchetype;

        public static MeshInstanceRenderer PlayerLook;
        public static MeshInstanceRenderer PlayerShotLook;
        public static MeshInstanceRenderer EnemyShotLook;
        public static MeshInstanceRenderer EnemyLook;

        // continues...
    }
    Later, Systems access these archetypes to create new entities, add components, etc.

    What should be made of this? I've been taking the warning "don't access static data from jobs" quite literally, and looking for ways to avoid it in all cases, even the reading of static readonly data.

    Have I been following this too strictly? What should be taken away from the Twin Stick Shooter example?

    Thanks for any advice.
     
    Last edited: Jan 3, 2019
  48. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    26,727
    Feb now, and GDC on the way. The beginning is already over.
     
  49. wobes

    wobes

    Joined:
    Mar 9, 2013
    Posts:
    758
    You should pass an archetype to your job as a field.
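    For example, copying the archetype into the job struct at schedule time might look like this (a hypothetical sketch based on the sample's `TwoStickBootstrap`; the barrier wiring follows the 2018-era `EndFrameBarrier`/`EntityCommandBuffer` pattern and may differ in detail from the actual sample):

    ```csharp
    using Unity.Entities;
    using Unity.Jobs;

    public class ShotSpawnSystem : JobComponentSystem
    {
        EndFrameBarrier _barrier;

        struct SpawnJob : IJob
        {
            public EntityCommandBuffer CommandBuffer;
            public EntityArchetype ShotArchetype; // captured at schedule time

            public void Execute()
            {
                // The static field is never touched here; the job only sees
                // the blittable handle copied in on the main thread.
                CommandBuffer.CreateEntity(ShotArchetype);
            }
        }

        protected override void OnCreateManager()
        {
            _barrier = World.GetOrCreateManager<EndFrameBarrier>();
        }

        protected override JobHandle OnUpdate(JobHandle inputDeps)
        {
            var job = new SpawnJob
            {
                CommandBuffer = _barrier.CreateCommandBuffer(),
                // The only static access happens on the main thread, right here.
                ShotArchetype = TwoStickBootstrap.ShotSpawnArchetype
            };
            var handle = job.Schedule(inputDeps);
            _barrier.AddJobHandleForProducer(handle);
            return handle;
        }
    }
    ```

    Since EntityArchetype is a small blittable handle, copying it into the job is cheap, and the "no static data from jobs" rule stays intact.
    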
     
  50. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    690
    If it's not December, it's still the beginning of the year in Unity terms. ^^
    They need to get their act together: not just eat their own dog food, but also eat their own words.

    Seriously, GDC means they'll be preoccupied with the event, preparing demos and presentations, for the month before, and it will take a month after to recover.
    We have 5+ Unite events this year, and I'm really worried whether they will get anything done this year.
     
    e199 likes this.