
Discussion Is the scripting part of engine underdeveloped?

Discussion in 'Scripting' started by nullptr128, May 8, 2024.

  1. nullptr128

    nullptr128

    Joined:
    Jul 15, 2020
    Posts:
    3
    Hello!

    Not sure if it's just me, but I feel like the scripting part of the engine is quite underdeveloped. It has been several years without any serious architectural changes in the scripting API, and no improvements to scripting or game architecture appear on the roadmaps. I posted several ideas about a year ago and still haven't gotten any response.

    I just feel like the scripting part of the engine has been left without any clear direction or improvements. And while the scripting is decent, many issues appear with Unity when a project scales (i.e. when creating a real game, not just a dude who can shoot and jump). The community develops its own ways to deal with Unity's various shortcomings, but this fragments the community a lot, and I feel like Unity lacks an "official way of doing things".

    Let me explain several examples.

    Event system's long tail of backward compatibility
    Events are used in various ways, both in the engine and in user code. What the engine lacks, in my opinion, is a consistent event system used both for internal engine features and in user code.

    We have UI components that mostly use UnityEvent, which can be used both programmatically and via the inspector. But Rigidbodies, for example, do not use them and instead rely on magic methods like OnCollisionEnter. Canvas items may implement IPointerXXXHandler to react to mouse input. Localization uses plain C# events (LocalizationSettings.SelectedLocaleChanged += xxx), and so on.

    On the user side, we can use UnityEvent, one of a hundred event assets from the store, or native C# events.

    Why can't we have one single way of dealing with events in Unity instead? It feels like a bunch of random patterns thrown into a single bucket.

    I'd like to use a single pattern like:
    button.onClick.AddListener(...)
    rigidBody.onCollisionEnter.AddListener(...)
    camera.onPreCull.AddListener(...)
    localization.onSelectedLanguageChanged.AddListener(...)
    qualitySettings.onQualityLevelChanged.AddListener(...)

    And so on.
    Ideally, this event system would be auto-managed (like UnityEvent), generic, and able to be broadcast and listened to both locally (someInstance.someEvent.AddListener) and globally ("listen to all XXX events that happen").
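    For illustration, something like this could be sketched in plain C# today; "GameEvent" and its members are hypothetical names, not an existing Unity API:

    Code (CSharp):
    using System;

    // Hypothetical unified event type: per-instance listeners plus a
    // global "listen to all events of this payload type" channel.
    public class GameEvent<T>
    {
        private event Action<T> listeners;               // local observers
        private static event Action<T> globalListeners;  // global observers

        public void AddListener(Action<T> listener) => listeners += listener;
        public void RemoveListener(Action<T> listener) => listeners -= listener;
        public static void AddGlobalListener(Action<T> listener) => globalListeners += listener;

        public void Invoke(T payload)
        {
            listeners?.Invoke(payload);
            globalListeners?.Invoke(payload);  // broadcast
        }
    }

    The auto-management part (unsubscribing listeners when their owning object is destroyed) is the piece only the engine could provide cleanly, which is the point of asking for it as a first-class feature.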

    Lots of deprecated stuff all over the place
    When will we finally remove deprecated/antipattern stuff like MonoBehaviour.camera or Camera.main? A lot of things have a long history of sticking around purely to maintain backward compatibility.

    Lack of a proper way of referencing "global stuff"
    Because Unity needs inspector references, it is always cumbersome to find a way to reference that stuff in your scripts. This is especially true if you use a combination of MonoBehaviours and plain C# classes. When developing a game, you will accumulate a lot of global prefabs, scriptable objects, and other assets that need to be referenced in some way. Some people just scrap all that and use Resources.Load(), others put a magic global game object that holds the references, others use Addressables. Some people drag and drop a single scriptable object into 100 different prefabs to share data, etc.

    When creating a new project, I always have to write the same boilerplate: global prefabs that use RuntimeInitializeOnLoadMethod to quickly inject themselves as DontDestroyOnLoad GameObjects holding all my global references. And while it works and you can get used to it, it still feels like I need to hack my way through the engine. Some assets try to address this (like Weaver Pro from the Animancer creator), but hey... we should have a first-class solution here.
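    That bootstrap boilerplate usually looks something like this ("GlobalServices" and the Resources path are hypothetical names):

    Code (CSharp):
    using UnityEngine;

    public class GlobalServices : MonoBehaviour
    {
        public static GlobalServices Instance { get; private set; }

        // Runs once before the first scene loads and injects the prefab
        // as a persistent, scene-independent object.
        [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.BeforeSceneLoad)]
        private static void Bootstrap()
        {
            var prefab = Resources.Load<GlobalServices>("GlobalServices");
            Instance = Instantiate(prefab);
            DontDestroyOnLoad(Instance.gameObject);
        }
    }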

    Perhaps some way to quickly reference these assets would be nice. I do not know which idea would satisfy most of the community's needs, but maybe we could have another building block, like "Data Providers": static classes with static fields that can be set up in the inspector and accessed as simply as MyGameDatabase.enemies, MyGameDatabase.prefabs, or whatever you set up in the class.

    The singleton nightmare
    It looks like the engine was designed the way old-school FPS games worked: you have a "level"/"world" (a scene in Unity) that contains entities which manage themselves. But Unity is a general-purpose engine, and I believe very few games can be designed as a bunch of scripted entities that dance together. You WILL need global controllers, managers, command buses, UI handlers, and so on.

    Because there is no native Unity way to deal with this, most people use the singleton pattern, which again can work but feels like hacking around engine limitations. As far as I can tell from working, reading blogs and Reddit, watching videos, etc., scenes in Unity are used not only as "levels" but as building blocks of a game. A scene might be a level, a screen, some "sub-screen", UI, etc. For example, a lot of people use a single "Game" scene that handles the main gameplay loop and some "Menu" scenes for the pre-game interface. Many developers design their UI as separate scenes that are loaded at runtime, to keep the game's building blocks separated.

    When I select a scene in my project file list, nothing really appears in the inspector. Why couldn't we have another building block, let's say "scene components"? It could be a script inheriting from a SceneBehaviour class that is attached to the scene itself (in a one-scene-many-components relation, like game objects).

    This way, when developing your UI scene, you could attach your GameUIManager component directly to the scene instead of putting it on some random game object. When you design your main gameplay scene, you could also attach one or more components to the scene.

    These components could then be queried easily, for example via SceneManager.GetComponent<GameSceneManager>().Something(), instead of making a game object with a singleton. This would also scale, because you could have several scenes with a GameSceneManager component attached and set them up in the inspector (for example, different settings for iteration-testing scenes versus Unity Test Framework scenes).

    Unreal has this in the form of "Level Blueprints", I think. Having an engine-supported, single way of defining and accessing global components that act as high-level orchestrators (all those "...Manager" classes) would be better than relying on random patterns like Zenject, singletons, global variables, RuntimeInitializeOnLoad, referencing a gameplay-controlling scriptable object in the inspector, etc.

    Creating game objects
    Unity controls the moment MonoBehaviour scripts are created; we cannot initialize them in a constructor. It is understandable that the engine needs to control this. But as of now, there is no "official" way of creating (instantiating from a prefab) a game object WITHOUT adding it to the scene.

    Even a simple pattern like:
    Monster monster = GameObject.Instantiate(monsterPrefab);
    monster.SetType(someMonsterType);

    Let's say the monster instantiates its children (behaviour, sprites/models, etc.) in its Awake/Start methods, based on the monster type passed in.

    This comes with pitfalls, because Start()/Awake() will be called before SetType(), and there is no easy way to set up your GO before it is added to the scene and initialized. So you either null-check the "monster type" in Start/Awake to handle this very brief moment, or scrap the engine's Start/Awake and provide a custom Initialize() method whose call timing you control. This is especially true if you mix static scene monsters whose "type" is set up in the inspector with dynamic ones that are spawned in-game (say your wizard has a summoning spell, whatever).

    What I do instead is create a disabled game object in the scene and instantiate the prefab as a child of it, so Awake/Start are not called immediately. Then I set up my GO, and finally reset transform.parent to null. Again, this works, but it feels like fighting the engine.
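    That workaround looks roughly like this ("Monster", "SetType", and the prefab field are the hypothetical names from the earlier example):

    Code (CSharp):
    // Children of an inactive object do not receive Awake/Start,
    // so the instance can be configured before the engine initializes it.
    var holder = new GameObject("SpawnHolder");
    holder.SetActive(false);

    Monster monster = Instantiate(monsterPrefab, holder.transform);
    monster.SetType(someMonsterType);   // safe: Awake/Start deferred

    monster.transform.SetParent(null);  // now Awake/Start run, with the type already set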

    Why can't we have a new way of instantiating that creates objects in a detached/dangling state and lets us control manually when they are added to the scene?

    ScriptableObjects
    They are a great tool, but what they lack is the ability to create "inlined" scriptable objects inside GameObjects or other ScriptableObjects. The interface could work like Scene Lighting Settings, where next to the scriptable object field there is a "New" button that creates a scriptable object inlined inside the current object (serialized "inside" it instead of as a separate asset).

    This way we could avoid creating unnecessary scriptable objects that exist on the filesystem just because they must. There are assets that deal with this in some way; FlowCanvas, for example, allows creating graphs (scriptable objects) either as separate assets or as "bound" ones, but as far as I know they don't use any first-class Unity solution and instead serialize the data to JSON and store it inside the game object.

    This idea actually comes from Godot's Resources, which are very flexible in this regard. You can create them inlined, or create them on disk and only reference them. You can even change your mind mid-project and drag and drop your inline Resource into the filesystem, turning it into a filesystem asset.

    There are many use cases for this. Such an SO could be some sort of gameplay element, like an ability. You would want abilities as separate assets for player skills, but monsters could reuse the same ability SO, and you would prefer to save those inside the monster ScriptableObject to avoid having 10+ files per enemy and to reduce clutter.

    This can be done manually with a custom inspector that provides a [New] button and saves the asset as a child of the current asset (AddObjectToAsset + hideFlags), but this won't work for GameObjects in a scene, which always need the SO as a static file reference. Fighting the engine again.
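    For reference, that manual editor-side approach is roughly the following; "AbilityData" and "parentAsset" are hypothetical names:

    Code (CSharp):
    using UnityEditor;
    using UnityEngine;

    // Editor-only: create a ScriptableObject and store it inside an
    // existing asset instead of as a separate file on disk.
    var ability = ScriptableObject.CreateInstance<AbilityData>();
    ability.name = "NestedAbility";
    ability.hideFlags = HideFlags.HideInHierarchy;  // hide the sub-asset in the Project view
    AssetDatabase.AddObjectToAsset(ability, parentAsset);
    AssetDatabase.SaveAssets();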

    Serialization
    It's 2024 and we still don't have a way of natively serializing collections other than List/Array. Dictionaries, HashSets, etc. should all be serializable by default; instead we need to rely on third-party assets.

    Runtime serialization could use more love as well, for example by using the same serialization the editor does (with the ability to deserialize references to scriptable objects automatically, etc.). Doing this manually always results in a huge mess and a lot of boilerplate code to transform serialized references in both directions.
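    Until that happens, the usual community workaround is to mirror a dictionary into two serialized lists via ISerializationCallbackReceiver; a minimal sketch:

    Code (CSharp):
    using System;
    using System.Collections.Generic;
    using UnityEngine;

    [Serializable]
    public class SerializableDict<TKey, TValue> : ISerializationCallbackReceiver
    {
        [SerializeField] private List<TKey> keys = new List<TKey>();
        [SerializeField] private List<TValue> values = new List<TValue>();

        public Dictionary<TKey, TValue> Dict { get; } = new Dictionary<TKey, TValue>();

        // Copy the runtime dictionary into lists Unity can serialize.
        public void OnBeforeSerialize()
        {
            keys.Clear();
            values.Clear();
            foreach (var pair in Dict) { keys.Add(pair.Key); values.Add(pair.Value); }
        }

        // Rebuild the dictionary after Unity has deserialized the lists.
        public void OnAfterDeserialize()
        {
            Dict.Clear();
            for (int i = 0; i < keys.Count; i++) Dict[keys[i]] = values[i];
        }
    }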

    Spawning Flags & Overrides
    What Unity lacks is the ability to override some GameObject/ScriptableObject properties based on flags when the game is launched. Some flags could be static and engine-provided (like "Desktop", "Mobile", "Low Quality", "High Quality"), and some could be user-defined (like Layers: there are several hardcoded ones, but there is room for custom ones).

    These flags could override some properties of a GO/SO based on the build. A simple example: you develop a cross-platform game for mobile and desktop, using realtime shadows on desktop and blob shadows on mobile (most of the time, realtime shadows are too heavy for general mobile games).

    What I would like is the ability to "override" a game object so that it is automatically disabled on certain quality levels or other flags. Instead, I must write a five-line script that checks whether it is Android and enables the blob shadow object based on that.
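    That script typically ends up looking something like this (the class and field names are just examples):

    Code (CSharp):
    using UnityEngine;

    // Runtime platform check I'd rather express as a declarative override.
    public class PlatformShadowToggle : MonoBehaviour
    {
        [SerializeField] private GameObject realtimeShadowCaster;
        [SerializeField] private GameObject blobShadow;

        private void Awake()
        {
            bool mobile = Application.isMobilePlatform;
            blobShadow.SetActive(mobile);
            realtimeShadowCaster.SetActive(!mobile);
        }
    }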

    Another example: on the mobile build I use a reduced set of post-processing effects (tone mapping, bloom, etc.) for performance and thermals. But my lights are oversaturated without those effects, so I need to reduce their intensity on mobile platforms. Again, I must write a script that checks at runtime whether it is Android and halves the intensity. It works OK, but I lose the ability to preview the result in the editor scene view and must run the game to check it.

    Localization has a way of dealing with this problem: you can select a "locale mode" and then override some properties just for that locale. I think this could be reimplemented/reused for this case.

    Asynchronous programming
    While certain games can rely on a simple "Update -> do something multiplied by Time.deltaTime" approach, some types of games are more easily implemented using asynchronous programming. As of now, we have three ways of doing this: Coroutines, Unity's Awaitables, or UniTask.

    Coroutines are great and integrated with the engine (e.g. they stop when the game stops or when the containing game object is destroyed), but they do not support try/finally, their syntax is weird, and they allocate garbage. On the other hand, a nice thing about them is that you can grab a reference to a coroutine in order to stop it manually later.

    Awaitables and UniTask are more native, meaning they can easily be used with async/await. But they are not tied to the engine: your async tasks continue to run when you stop the game in the editor, they don't stop automatically when your game object is destroyed, and you are required to pass cancellation tokens all over the place in order to use them reliably, which is REALLY cumbersome. On the other hand, you gain nice features like try/catch and the ability to await several tasks at once (await UniTask.WhenAll(projectileTask, damageTask); etc.).
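    To be fair, recent versions soften this a little: since Unity 2022.2, MonoBehaviour exposes destroyCancellationToken, which is cancelled when the component's game object is destroyed. A sketch (the Projectile class and FlyAsync are just example names):

    Code (CSharp):
    using System;
    using System.Threading;
    using UnityEngine;

    public class Projectile : MonoBehaviour
    {
        private async void Start()
        {
            try
            {
                // Cancelled automatically when this object is destroyed.
                await FlyAsync(destroyCancellationToken);
            }
            catch (OperationCanceledException) { /* expected on destroy */ }
        }

        private async Awaitable FlyAsync(CancellationToken ct)
        {
            while (!ct.IsCancellationRequested)
            {
                transform.position += transform.forward * Time.deltaTime;
                await Awaitable.NextFrameAsync(ct);
            }
        }
    }

    But that still only covers one token for one object's lifetime; composing lifetimes across objects remains manual.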

    Async programming is especially useful in turn-based games, where you need to precisely control what happens and when it happens, and be able to await it. Complete spaghetti code can be turned into something as simple as:

    foreach (Enemy enemy in enemies) {
        await enemy.TakeTurn();
    }

    with async/await. But even real-time games can sometimes make good use of coroutines.

    What I would love to see is Unity having its own way of creating async tasks, integrated into the engine: combining the best of both worlds by allowing these tasks to be spawned, controlled (StopCoroutine etc.), and lifecycle-managed by Unity, while also using async/await syntax with all its benefits like try/catch, Forget(), or WhenAll().

    I understand that this is not an easy task, but it does not need to be implemented natively; it could rely on code generation/weaving under the hood (for example, to manage lifecycle, Unity could add its own CancellationToken to all calls at compile time). We already do this in DOTS/Jobs and multiplayer, where the source code !== the thing that is compiled under the hood.

    Final words
    This is not a final list; these are just some examples that could steer Unity toward being a little more opinionated in use, with a healthier learning curve. Some openness in an engine is nice, but in Unity everyone does things their own way in order to circumvent engine limitations. This is not a healthy practice, and while I understand that improvements take time, I haven't seen anything serious in this area for YEARS. The roadmap is empty; nothing is being developed.

    And of course, you can argue that these pitfalls can be avoided if you know how. Okay, but first of all, that increases the learning curve, and it fragments the community because everyone does things differently. And on top of that, there is no single pitfall you can simply avoid and move on from.

    Every time you create a project, you need to create:
    - a stack of managers (UI manager, whatever managers)
    - a pipeline that creates that stuff with RuntimeInitializeOnLoad
    - a way of querying global stuff in your game
    - a controller that handles your async/awaits
    - your own game object spawner
    - etc., etc.

    Also, I think this hurts the community, because most tutorials on YouTube and other media are just "here, I'll show you how to make your dude shoot when you press the spacebar, now please subscribe and give a thumbs up", and they do not scale. There is a serious lack of resources on how to create something that can actually scale to a full game, and because there are so many ways of dealing with BASIC things, it is a really serious maze. Everyone I know learned how to make a scalable Unity project by creating 50 projects that bricked at some point at a totally unmaintainable scale, each time thinking "next time I will do better".

    Unity, please consider giving the scripting side more love and solving the architectural problems this engine has. We need polish, refinement, and addressing of typical use cases; I do not want to fight the engine over simple, basic things everyone needs.
     
    SisusCo and CodeRonnie like this.
  2. lordofduct

    lordofduct

    Joined:
    Oct 3, 2011
    Posts:
    8,585
    This is a long post so I haven't gotten through it all. But starting here... I have one thing to say:


    Unity has existed for many years, and some of these things (UnityEvent and the UI system, for example) came out years after others. The "messages" that Rigidbody uses (the "magic methods") are among the oldest, which is why Rigidbody uses them and not UnityEvent.

    They also perform differently. UnityEvent is designed for editor registration of events, but has a bit of overhead as a result. C# events are built into C# and aren't going anywhere. They all behave in different ways as well (see: editor-time adding of events).

    Which one do we pick? Do we pick one that has ALL the features of every option out there? What happens to all the existing code that relies on the now-removed methods?

    This topic is called "technical debt" and is a non-trivial discussion. It does not demonstrate a lack of development on the scripting engine... quite the contrary, it highlights that development has occurred! New features were added, but old legacy features still exist.

    Case in point... picking one causes deprecated stuff to linger around.

    We can't just GUT features willy-nilly. Some of us have millions of lines of code tied to features in Unity. If you gut them rather than slowly deprecating them... it will cause me to feature-lock and NEVER update Unity.

    Worse, I probably won't even try to update, because updating would require me to read a bunch of release notes every time "just in case" they gutted a feature I rely on.

    That's why there are multiple LTS versions of Unity going at the same time. It allows us to get important updates without taking on new features we don't rely on. If you want the latest and greatest... Unity 6 is pretty spanking new!
     
  3. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,563
    Unity has been working towards improvements, but everything has been waiting on the upgrade to the new scripting runtime (modern .NET rather than .NET Standard/Framework), which is still in development. Until we're on that, there's no point in doing anything else.

    https://forum.unity.com/forums/experimental-scripting-previews.107/

    Unity is also working towards unifying GameObjects and Entities.

    https://forum.unity.com/threads/dots-development-status-and-next-milestones-may-2024.1591548/
     
    Last edited: May 8, 2024
  4. spiney199

    spiney199

    Joined:
    Feb 11, 2021
    Posts:
    8,268
    I think inline scriptable objects are unnecessary, as we can serialise arbitrary C# data structures.

    The fact that we can't in Godot is my #1 reason for not using it.
     
    Sluggy and CodeRonnie like this.
  5. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,563
    I haven't looked too deeply into it, but word is the code behind the [Serializable] attribute has slowly been marked obsolete with the goal of removing it, and with .NET 9 the attributes themselves will be obsolete too.
    https://github.com/godotengine/godot-proposals/issues/8335
    https://github.com/dotnet/docs/issues/34893
    https://github.com/dotnet/designs/b...tter-obsoletion/binaryformatter-obsoletion.md
     
    SisusCo likes this.
  6. spiney199

    spiney199

    Joined:
    Feb 11, 2021
    Posts:
    8,268
    Well, that's going to be a bugger when Unity manages to update to .NET in however many years' time. It would be a huge breaking change for a huge number of projects.

    Maybe Unity can just add their own SerializableAttribute under the UnityEngine namespace to keep the functionality alive.
     
    Ryiah and Kurt-Dekker like this.
  7. zulo3d

    zulo3d

    Joined:
    Feb 18, 2023
    Posts:
    1,056
    It's probably overdeveloped. I pity all the new users who have to learn all this stuff that has been added over the years.

    Remember the Roman Empire?..
     
    Last edited: May 9, 2024
    orionsyndrome and Kurt-Dekker like this.
  8. VRARDAJ

    VRARDAJ

    Joined:
    Jul 25, 2017
    Posts:
    64
    I agree with a lot of this thread, although I also agree with lordofduct that it's not for lack of development... quite the opposite. So many topics here... I'll focus on event systems because they've been my obsession lately.

    It can be confusing to have so many ways of implementing the observer pattern. All three of the ways we have to create events in Unity come with big trade-offs:

    1) I was grateful for the creation of the UnityEvent class when UGUI launched. Theoretically, UnityEvent can be used much more broadly than with just canvas UI elements, but it comes with a slew of performance concerns, payload limits, dependency caveats, and debugging nightmares. Some have argued it's hard to consider it a legitimate event type, since events are supposed to faithfully implement the observer pattern. Serialization from observer to subject across two prefabs is by definition a hard coupling, which sort of defeats the purpose of events. Is an event whose default behaviour is to hard-couple the sender to the receiver actually an event? Or is it something else? It's debatable. However, one-to-many serialization is extremely convenient in some cases, despite being easily breakable.

    2) C# Action events are nearly as simple to implement and much more performant. However, they don't serialize, so programmers tend to love them and artists tend to hate them. They also have payload limits (albeit much higher than UnityEvent's).

    3) C# multicast delegates are geriatric, but still usable. They have unlimited payload capability and built-in sender signatures, but their payloads require boxing/unboxing, which is its own performance concern. They're probably still a lot more performant than UnityEvents in most cases, but their biggest downfall is that they're old. Old C# techniques tend to be syntactically verbose compared to younger options, so multicast delegates can be confusing for developers who are new to event systems.

    "Why can't we have one, single way of dealing with events in Unity?"

    I was long annoyed at the lack of a more robust SendMessage API built into MonoBehaviour. The similarities between events and messaging are too many to count, but a best-of-all-worlds among the existing options is nearly impossible. It's like wanting a product to be high quality but also cheap. The desired traits detract from each other, so we can't have our cake and eat it too.

    Also, if we got a new native event system, I wouldn't want it to just consolidate what's already possible. I'd want it to unlock new capabilities. In the case of events, I want things like channel visualization, validation, editable payloads, callback-ability, filtering, two-way signatures, etc.

    IMO, the most important of these missing features by far is validation. Event systems in Unity are dangerous. We shoot ourselves in the foot by creating a majority of the bugs on a typical QA list precisely because we believe it's safe to sever event channels any time we want. By design, severed channels don't throw errors, yet they are broken dependencies that cause an unknown number of downstream bugs. The larger the project, the exponentially greater the number of disabled downstream invocations, creating more fog to hide more bugs. Broken event dependencies fly completely under the radar in unit tests. Their symptoms are found in QA playtesting, but our improper use of event systems is the root cause, and it's rarely identified as such, so we keep on thinking we can sever senders from receivers at will with no consequences.

    This is an incredibly hard problem to solve, first and foremost because most of us don't know it exists. I don't think anyone working at Unity is equipped to solve it. I started using Unity in 2010, and the true depth of the problem wasn't clear to me until around 2020, after I'd built event systems for dozens of Unity jobs. Even then, it wasn't clear whether the problem was solvable. Event validation can easily seem like an oxymoron. Validation requires checking for the persistence of components that are supposed to be able to snap on and off of each other like Lego blocks. If an observer is here one moment, gone the next... that's its lego-block nature. Isn't it impossible to validate a thing that's supposed to be severable at any moment?

    A best-of-all-worlds solution dawned on me in the past year, but I'm still not done writing it, and I don't think I'd ever have considered it without first building dozens of event systems that caused thousands of bugs. Things like this can never be obvious to the people building the engine. Some solutions will only be obvious to those using the engine a lot.

    I bet we'd find examples like this in every part of the engine. Sometimes, it's on us to identify our own needs in situations where Unity team members wouldn't be able to see the problems without our help. It's a natural progression in such cases for us to build 3rd party tools. The ones that are popular and useful enough may rise up in popularity to be integrated directly into the engine eventually. The process is slow, but reliable. Mecanim, TMPro, and ProBuilder are good examples.
     
    SisusCo and CodeRonnie like this.
  9. CodeSmile

    CodeSmile

    Joined:
    Apr 10, 2014
    Posts:
    6,667
    And there will be decades more without such a "serious architecture change". ;)
    You can't just pull the rug out from under (roughly) two thirds of all game developers worldwide who are using Unity.

    What will occur is one system eventually phasing out another, older legacy system, but not necessarily deprecating the old system entirely, since it may still be needed in maintenance mode so that developers can migrate to newer Unity versions without completely rewriting entire systems (which are often tightly coupled with other systems).

    This phasing out is currently happening for the following at various stages of progress:
    • UI Toolkit vs UGUI/IMGUI
    • Input System vs legacy input
    • Scriptable Render Pipelines vs Built-In Render Pipeline
    • And more ...
    There are also systems that enhance built-in systems or simply offer an alternative but will not deprecate the other system. Typically to build games faster or more efficiently, such as these:
    • Entities vs GameObject/MonoBehaviour
    • Cinemachine vs custom Camera programming
    • Addressables vs barebones asset bundles
    • Jobs, Mathematics, Collections + Burst vs single-threaded tasks
    • Shader Graph vs shader programming
    • VFX Graph vs Particle System
    • Visual Scripting vs C#
    • CodeSmile AssetDatabase vs wtf am i doing here AssetDatabase (shameless plug :cool:)
    Not if you do the right thing and use UI Toolkit. ;)

    Other than that, the event systems are a choice. C# events can be faster and create less garbage, but you can't assign them in the Inspector, for instance. Also, they are part of C#, so they won't go away.

    You forgot the other "event systems" like SendMessage. Actually, glad you did. If anything, those ought to be removed from the API. SendMessage is not debuggable.

    Nope. You will see that in ANY engine, everywhere. The singleton is the first thing that gets taught because it's so convenient. You don't have to "find" or otherwise locate a reference first; it's just there. It only removes one extra step, which goes to show how much people will lean into "convenience". Of course, this quite often turns into the opposite sooner rather than later.

    The problem is that most newcomers can't understand the difference between a good singleton (ie CloudSave.Singleton) and a bad singleton (ie PlayerManager.Singleton).

    Or Unity's Serialization package. I still don't understand why this isn't listed in the registry.

    Starting with Unity 2023 we have official awaitable support.

    Overall I have to say you have made good observations and your points are mostly valid. I would love to have a global asset referencing system for instance, and simpler ways to create data-dumps (aka ScriptableObject). There are systems that improve these but given their community nature they are not nearly as widely used.

    The issue with changes like the ones you suggest is that you can't make them in a behemoth of a software product like Unity. It's too big, in both codebase and user base. Unity did the right thing and gave us choices, but of course this comes with confusion and having to relearn a new system every now and then. And this mainly happens as you gain more experience, so it feels like constantly having to re-learn Unity. I understand this can sometimes be frustrating, particularly if you aim to be a master in a given area like you could be with, say, Photoshop or Blender. But I bet even they have progression and upleveling stories to tell. ;)

    We could see a REALLY great "fixing core features" system become a de facto standard, but only if it is readily available for everyone, well integrated, works flawlessly on all editor and runtime platforms, and supports all workflows (mobile, PC, singleplayer, live-service multiplayer, amateur, AAA studio, games, industry, movies) ...

    Unfortunately, while the Package Manager would be a great place to share these, you can't. The terms of service don't allow even indirect financial benefits, and there is no seamless integration without running your own registry server and advertising it within Unity (promoting registries within the editor is against the TOS). That's why OpenUPM is so hidden and requires all assets to be open source and non-monetized. It's therefore pretty much crippled.

    What I'm trying to say is that there are sometimes business conflicts (Package Manager vs Asset Store vs moderating the ecosystem of assets) and sometimes simple software development issues (deprecating legacy systems in a developer software) that rule these "major overhauls" out.

    Plus, keep in mind that you and most of us who see these issues are not Unity's main audience. That's the game studios, and they already have their improved systems and tooling in place, which also ties them to the Unity platform simply because of the effort already spent making it work their way. So we're back to business conflicts. ;)

    Lastly, I would still love to discuss a universal system that would solve some of these core issues, particularly around Asset management. Perhaps there's a tool to be made and shared here? Perhaps one that, one day, Unity might buy out and integrate as they did with TextMesh Pro or ProBuilder or BOLT (Visual Scripting) or ....
     
    Last edited: May 9, 2024
    Bunny83 likes this.
  10. SisusCo

    SisusCo

    Joined:
    Jan 29, 2019
    Posts:
    1,358
    Being a big fan of dependency injection for the flexibility it provides, the "Data Providers" approach wouldn't be a very satisfactory solution for me personally.

    What I'd love is if on the client side I could just do this:
    Code (CSharp):
    class Player(IInputManager inputManager) : MonoBehaviour
    {
        void OnEnable() => inputManager.MoveInputGiven += OnMoveInputGiven;

        ...
    }
    And then be able to somehow configure a service like IInputManager so that only a single instance is created automatically at runtime and injected into all clients that require it. Internally, an IoC container could be used to resolve component constructor dependencies whenever a scene was loaded or Instantiate or AddComponent was used.

    How exactly shared services would be registered is a huge question unto itself. A checkbox shown in the Inspector, similar to Addressables, would be one option. But realistically, there would probably need to be several ways to register services for it to satisfy all possible use cases.
     
    Last edited: May 26, 2024 at 11:41 AM
    CodeRonnie, Bunny83 and Ryiah like this.