
Data-Oriented Visual Scripting -- The Structure of a Language

Discussion in 'Data Oriented Technology Stack' started by awesomedata, Jan 31, 2020.

  1. awesomedata (Joined: Oct 8, 2014; Posts: 1,293)
    A Dive into Data-Oriented Visual Scripts w/ Code

    "Data-Oriented" Visual Scripting is a more "legible" way to understand code -- It is so successful for "non-coders" that it could even be thought of as an entirely different (data-oriented) language!

    However, some key components of a proper language (the ones that make it understandable to the recipient) are still missing from coding in general (visual or not). We simply don't yet have the foundation of legibility that code (or even visual code) needs to communicate a clear message to its recipient. After all, coders are only "communicating" with a computer, right? -- Wrong. Our code communicates with other humans (and with our future selves) as well. We need our code to be clear at-a-glance to other humans too. This is critical.
    Sadly, our current understanding of code structure revolves around fuzzy practices and "programming patterns" that apply to some languages (but not others) to help us understand our code. But communication is more than a series of self-referential statements; even non-verbal communication has clear rules. The amalgamation of self-referential statements we call "programming" lacks any such rules, so most programmers end up writing code with a "language" structure more akin to run-on sentences than to sentences with the pacing and punctuation that would highlight a clear and understandable main idea.

    Even non-verbally this is true. For example, if I angrily run up to you and punch you in the face, my pacing (the angrily running up to you) and my punctuation (the sudden punch in the face) should have clearly and effectively communicated the main idea to you. (That's right. I liked your face so much I just had to punch it!)


    Code: a Language? -- The Programmer's lie

    Most "elite" coders like to call the amalgamation of ideas made of endless self-referential code (because that's truly what it is: something to be deciphered) and tit-for-tat "programming patterns" their programming "style". However, at the end of the day, as hard as one tries, the fundamental lack of a simple-to-understand communication interface turns it into a nonsensical "run-on" sentence.

    In English, "run-on" sentences are a treasure-trove of communication errors. They tend to be strung together with commas, an "and" or a "but", or simply lack periods, punctuation, and capitalization. In general, _any_ sentence can become a "run-on" sentence if it has no clear starting/stopping points and no clear overall "main idea".

    Here is an example "run-on" sentence:

    I started programming a game but then I got tired; the experience sucked and I had to get some sleep, but I couldn't rest because I was frustrated with the fact that programming a game is more convoluted than I ever thought it needed to be, so I started studying a lot of game development tools and ideas and learned that they were inefficient; there are better ways to make a game than programming from scratch every time, but I still needed to sleep, so I finally went to bed, but I couldn't sleep, and then I got a glass of water to help me rest, but did you understand whether I got the water before or after I got in the bed?​

    This is exactly what reading (and understanding) code is like to anyone who didn't write the code.

    Can you understand (at a glance) what the main idea is? -- A little? -- If there IS a main idea, it is so vague or entrenched in other unrelated garbage that it doesn't make any sense at-a-glance to anyone who just wants to know (in a timely manner) what the hell is actually going on. Sure, comments help, but can you imagine going back to that sentence and trying to interpret where I've randomly been inserting little notes and text snippets across this "sentence" just to understand where you need to look for the parts you need or what parts actually matter to the main idea, and to what extent -- or in what contexts they actually matter? The legibility of the entire idea flat-out fails due to the lack of structure, pacing, and clarity of the main ideas.
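    To make the analogy concrete, here is a minimal sketch in plain Python (all names invented for illustration) of the same logic written twice: once as a breathless "run-on" block, and once re-paced into named steps, each with one clear main idea:

```python
# A "run-on" block: one breathless pile of statements, no clear main idea.
def run_on(h):
    m = "rested"; s = 8
    if h > 16: m = "frustrated"; s = 0
    return (m, s)

# The same logic re-paced: each named step is a "sentence" with one main idea.
def is_exhausted(hours_awake):
    return hours_awake > 16

def hours_of_sleep(exhausted):
    # An exhausted, frustrated coder tosses and turns instead of sleeping.
    return 0 if exhausted else 8

def evening(hours_awake):
    exhausted = is_exhausted(hours_awake)
    mood = "frustrated" if exhausted else "rested"
    return (mood, hours_of_sleep(exhausted))
```

    Both versions produce the same result; only the second communicates its main ideas at a glance.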

    There's a better way to write (and understand) code!!

    Firstly...


    VERBS can change programming for the better!

    To solve the run-on sentence issue with Data-Oriented Visual Scripting, we must first see how this kind of scripting is like a proper sentence -- and in what ways it is NOT.

    Basically, a data-driven visual script can be broken down into three key parts:

    [Image: visual-scripting-language.png]

    A script should be a series of subjects, verbs, and main ideas. Together, these define the _overall_ main idea (the one thing the whole script is trying to communicate). In the context of DOTS Visual Scripting, that overall main idea is the stack order, flowing from top to bottom (shown on the right in the diagram above), with each block reaching out to the left to grab the context passed in by the "sentences."



    VERBS or SUBJECTS! -- Oh my!

    If you are observant, you will notice that, oddly enough, the left side has certain nodes that are not considered "verbs", yet they still seem to "do" stuff. This naive definition of "doing something" is the programming equivalent of an "and" or "but" in a run-on sentence -- or, to use another analogy, it is what the infamous "idea guy" on a game-development team "does" when he "does" stuff -- amiright?

    These "subjects" basically "do" one thing: they create data (from the void or from wherever) -- or, in our "idea guy" analogy, they conjure "ideas". This data (or "idea") is meant to be worked with later, via "verbs". The "verbs" (in the middle, with the pretty colored icons) I call "operators" and "functions", because they actually DO something: they work with the newly-created data rather than simply conjuring it and passing it on. Because operators and functions change the data directly, they are the verbs.

    The verbs are the most important part of understanding the overall "main idea", since they show you what is happening to each individual idea in the stack over time, leading you to a deeper understanding of the overall main idea. As a programmer, it is THIS part (the VERBS) I care most about noticing while programming -- which is why only the VERBS get those fancy icons in the image!

    But don't forget the original subject matter (the "idea" generated by the "idea guy") -- that is, the actual data slot we modify or use later, conjured from the void or pulled in from other sources. It is still very important (critical, even), since the conjured data gives the verbs a place to start their work in the first place!

    All in all, the subjects (the idea guys' ideas) and the verbs (the worker bees' work) determine the pacing (subjects) and punctuation (verbs) that help to clearly communicate the overall main idea.
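    As a rough sketch (plain Python, invented node names -- not any actual DOTS Visual Scripting API), the subject/verb/main-idea breakdown might look like this, with the stack read top to bottom:

```python
# Subjects: nodes that only conjure data; they do no work on it.
def subject_position():
    return {"x": 0.0, "y": 0.0}   # an "idea" conjured from the void

def subject_velocity():
    return {"x": 1.0, "y": 2.0}   # another conjured "idea"

# Verb: an operator/function that actually CHANGES the data.
def verb_integrate(position, velocity, dt):
    return {"x": position["x"] + velocity["x"] * dt,
            "y": position["y"] + velocity["y"] * dt}

# Main-idea stack: read top to bottom, each block pulling in the context
# produced by the "sentences" on the left.
def main_idea_stack(dt):
    pos = subject_position()            # subject
    vel = subject_velocity()            # subject
    pos = verb_integrate(pos, vel, dt)  # verb acting on the subjects
    return pos                          # the overall main idea
```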



    SENTENCES are both pacing and punctuation

    [Image: visual-scripting-language-sentences.png]


    All in all, the verbs (and subjects) define the main idea, and in Visual Scripting, proper visual-pacing cues (such as icons, colors, verb and subject arrangements in the graph) should always be present to make the main ideas clear.

    Additionally, in a graph like the one above, notice how much that big ugly green line helps to clarify the separation between subjects/verbs and main ideas. A polygon line like this is vital for punctuation. Without clear punctuation, you risk a visual "run-on" sentence -- even with nice, clear icons and colors. Take a look at UE4 Blueprints to see what unhinged use of color and functionality does to visual clarity.

    Pacing -- the lack of which causes this:

    [Image: visual-scripting-unreal.jpeg]

    DOTS Visual Scripting solves much of this with its "stack" approach (see the "main idea" section on the right in my thumbnail). Additionally, when paired with proper visual punctuation, even this graph can be salvaged enough to be given understandable pacing:

    [Image: visual-scripting-unreal (description).png]

    I bet you didn't think spaghetti-code could ever be semi-legible, did you?

    Sure, the spaghetti-strings still convolute things, but it's amazing how powerful a simple green line is, isn't it?

    The approach above helps keep the pacing of whole ideas intact by conveying, at a glance, the overall intent of the verbs on the subjects that make up the main idea. Punctuation makes these both clear and legible, which leads to easily-digestible chunks of computing for humans and computers alike.

    Together, the subjects and verbs create and modify the data over time. When combined with good pacing and punctuation, we ultimately arrive at the nice, clear, easy-to-understand, overall main idea we had always intended. :)



    TL;DR

    I suggest DOTS Visual Scripting make the ability to create legible, easy-to-read "sentences" its focus while designing the overall UX, as this will ensure we don't keep repeating the mistakes of the past with future generations of coders. I will clarify these "mistakes" in a future post.
     
    Last edited: Feb 7, 2020
  2. awesomedata
    The Hidden Problems in Functions/Methods

    If you look at Unreal's Blueprints, Unreal does a lot of hidden operations for you via built-in functions/methods, for things like vector conversion, etc. These utility functions can quickly ruin semantics when you start trying to lay them out in an understandable way without proper visual separation (see the above article). Functions/methods become unwieldy at best, and a spaghetti nightmare at worst.

    The unwieldy-spaghetti-nightmare happens because of one thing above all else: the language suddenly becomes filled with symbolic words or phrases (akin to "jargon" or "slang") -- in other words, something only you can understand. As such, it does very little to communicate, in clear terminology, what is actually being said in the code to anyone but you and the computer -- and sometimes not even then. After all, the human element is error-prone, and can misunderstand (or forget) details (even in one's own code) in the midst of a complicated spaghetti-node network. Ultimately, this ruins the whole idea of a code/script "language" being used as communication in any form, because it's like constantly having to look up each and every word of a sentence in the dictionary before you can finally (kinda) understand the overall idea.

    This is why functions/methods (although essential to brevity) are problematic: these utility functions are symbolic, unfamiliar words (and phrases!) to which the reader/listener must be privy in order to understand what is being "communicated". This is not at all unlike the slang/jargon used by "gangstas" to sound like legit gangstas, yo, and hide dey true intent from the police. Except at least "gangsta jargon" is consistent from one "gangsta" to the next -- a programmer or artist/designer has no such luxury. Our 'jargon' tends to be project-, team-, code-, and/or programmer-specific.

    BUT -- There is hope!! -- Your functions/methods are not all bad!

    In typical code structure, we use "symbolic" functions/methods to hide unnecessary complexity that typically performs operations on (or formats) our data in special ways so that we can easily work with it later. This is not unlike what a sentence does with words. Words are essentially "functions" -- yet they must be in a certain order (depending on the language of course) to actually make sense. However, this also has a bad side -- Imagine if we put the descriptions in our sentences the same way we already do in code.

    For example, let's say we say "Jim", and we say he "jumps" -- What we are actually saying (i.e. in code) is:

    "An organized lump of growing organic material in the specific form of a "human" called "Jim" uses strong muscle structures attached between two groups of two bones, each group of bone and musculature grown in the form of a hinge joint attached to a central structure, whose muscle structures are commanded to move via the power of a brain's electrical signals across the nervous system into muscle cells whose electrical impulses tell the muscle structures to contract and ultimately exert force upon two platform structures attached to the bottom part of the hinge structures pressed against a large sturdy mass which recoils the human structure via the platforms' attempt to exert force upon the large mass, which propels the whole organic structure into the sky by a reasonable amount, at which point gravity overcomes the upward force, causing the structure's two hinge-based platforms to, again, come into contact with the soil beneath them, which exists at enough mass and molecular density to repel him back up into a standing posture from the recoil of the returned force of the soil from the pressure of the compression of the two platforms, the force of the hinge muscles, and the weight of the human structure counteracting the force of gravity upon its organic mass, forcing it all back up (with adjustment) to an upright and balanced pose."​

    As overwhelming as that is, it should all still be easily understandable (after you decipher it of course) -- unless you have NO FREAKING CLUE what most of those words or phrases mean (which is common in deciphering code!) -- Yay! Now it's dictionary time!!
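    In code terms, here is a hedged sketch of the "Jim jumps" example (plain Python, all names invented for illustration): the word "jump" is a function whose body hides the whole anatomy lecture, so the call site reads like the short sentence -- provided the reader already knows, or can trust, what "jump" means:

```python
def jump(body, impulse=5.0):
    """The 'word': its body hides the muscles-bones-gravity description."""
    body = dict(body)  # work on a copy; the details stay in here
    body["vertical_velocity"] = body.get("vertical_velocity", 0.0) + impulse
    body["airborne"] = True
    return body

jim = {"name": "Jim", "vertical_velocity": 0.0, "airborne": False}

# The call site reads like the sentence "Jim jumps" -- no dictionary
# required, as long as the reader trusts what the word "jump" means.
jim = jump(jim)
```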

    ...


    Design is Communication

    The above scary scenario is exactly why artists/designers are (what programmers consider) "afraid" of code.
    We're not "afraid" -- That insurmountable, self-referential, wall of text that goes back-and-forth over and over and over just causes us anxiety when we consider trying to understand it.
    We artists/designers are also not stupid -- It's just -- Who the hell has time for all that back-and-forth??
    A programmer "tinkers" with the code until he understands how it works. A designer just wonders why the hell anyone would even want to try to understand all that abstract nonsense. Why? Because Art and Design are all about purposely (sometimes subtly) communicating an idea (or set of ideas) in a clear and effective way.

    You, my programmer friends, aren't doing that.




    The Interface of an Artist/Designer

    Visual Scripting is an important interface that bridges the world of programming with the world of artists/designers.

    With interfaces for artists, we must keep some things in mind.
    Firstly -- Programmers rarely care about interfaces -- Artists/Designers rely on them entirely.
    Secondly -- There is a misconception about what kind of interfaces we prefer.

    Programmers tend to think we need (or prefer) "simple" interfaces. Not true. Artists/designers don't need (or prefer) "simple" interfaces -- we need and prefer "simple to understand" interfaces. In other words -- We need an interface that communicates itself well to us. Our intuitive nature mandates this -- We need all necessary information available (or at least hinted at very obviously) at a glance.


    The Golden Rule of a designer-friendly UI or tool:

    1) Don't throw too much irrelevant information at us at once.
    2) Don't force us to go back and forth too often.
    3) Keep anything we may need to reference either in-context or a single click (or a keystroke or two) away!

    An easy-to-understand tool or UI relies on those three bits above all else.


    • Mouseover pop-ups, panel strips that expand into a full panel on mouseover (that can be pinned open), or quick-click dropdown/button previews make us very happy. Tables are handy too. When you want to not present too much information and/or want to keep the clutter down in the UI, these sorts of automatic quick-reference tricks will win our hearts.
    • (Just remember -- we hate irrelevant or self-referential information! -- Minimize it! -- More than that, we also hate waiting for a clear representation of the end-result. -- So communicate with us -- but do it quickly!)



    A Return to the Visual Scripting Interface


    Hidden operations and functions are a critical part of communication and brevity (hopefully made clear by the post above), and as such, they need to be present in some form. That form should make portions of a visual script both easily understandable at-a-glance and modular-but-compatible-with-semantic-flow, so it clearly and effectively communicates your main idea (quickly) to the uninitiated -- i.e. "Jim" (subject) + "jumps" (verb) = "Jim's ascent into the air" (main idea) + "until he eventually contacts the ground again" (another main idea on the processing stack) = "Jim's ascent into the air until he eventually contacts the ground again" (overall main idea of the "processing stack").

    The form Unity uses (the "layer/processing stack" approach to what I call the "overall main idea" in the above article) is great (amazing, in fact!) since it is a perfect start to the most awesome visual scripting solution out there.

    However, there is one very important (critical!) part of the equation still missing in DOTS Visual Scripts:


    The Magic Line

    I call it the "magic line" -- otherwise known as "punctuation and pacing" for Visual Scripts. This line separates the setup (data creation and import) from the work (data changes), and the work from the main-idea stack (the final formatting of data for its actual use, as well as that use itself).

    This "magic line" allows the brain to visually separate and actually see the setup, execution, and overall main idea portions of a Visual Script, separately -- in a single glance. It is what lets our brain process and quickly decode concepts like "Jim" and "jumps" -- all without needing to understand or decode each individual "word" or "slang/jargon" in-line with the rest of the code. We can instantly see that "Jim" (the function of a human) does something -- i.e. "jumps" (the function of propelling something into the air) -- and the main idea is still the same as it would be whether those functions were expanded or not. We are still talking about "Jim's ascent into the air". The "magic line", as I called it, would be drawn in the same places whether those functions were "expanded" to include the full definitions/decoded functions or not.
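    As an illustrative sketch (plain Python, not any actual node API; the numbers and names are invented), the "magic line" corresponds to the divider comments below: setup, execution, and main idea stay visually separate whether or not each individual "word" is expanded:

```python
def tick(dt):
    # ------------- setup (subjects: data creation and import) -------------
    jim = {"height": 0.0, "velocity": 0.0}
    gravity = -9.8

    # ------------- execution (verbs: data changes) ------------------------
    jim["velocity"] += 5.0             # "jumps"
    jim["velocity"] += gravity * dt    # gravity fights back
    jim["height"] += jim["velocity"] * dt

    # ------------- main idea (final use of the data) ----------------------
    return max(jim["height"], 0.0)     # Jim's ascent, clamped to the ground
```

    Each phase could be expanded or collapsed, yet the dividers would be drawn in the same places either way.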


    For more information on this "magic line" -- see the above post.

    But to be clear -- the "magic line" is simply a metaphor for the pacing of communication and the delineation of scope. It doesn't have to be a big ugly green line (or whatever) as described above -- it just needs to provide the same kind of functionality.

    Without the powerful "magic line" acting as "punctuation", irrelevant information steadily erodes the _pacing_ of the graph (especially without proper delineation of scope) by obfuscating the intent of the main ideas. The 'communication' quickly becomes muddled and less relevant, especially as you attempt to provide more and more information or context to describe specific graph behavior. By "communication", I simply mean the scope of the visible graph: the smooth and effortless flow of data operations through the graph into the main-idea "processing" stack. The processing stack is where the data for the individual, sequential "functions" is actually considered "processed" and ready to be applied to the original data for the next frame or CPU cycle. It is the stack of sequential data sentences (the data paragraph) that defines the global context and visual scope of the overall graph, and thereby describes the gestalt 'function' (the _overall_ main idea) of the entire visual graph.

    Obfuscation (which naturally exists when "binding" functions into one-to-many relationships) destroys any real hope for quickly or easily understanding the data -- except for very simple situations.


    However... inherent natural obfuscation *never* scales well.

    No matter how "well-designed" your "function" nodes (i.e. "language/words") appear, obfuscation always (inevitably) arises in Visual Script graphs at some point due to a lack of these "magic lines" of pacing and scope delineation. DOTS Visual Scripting (drop 6) may be less likely to quickly devolve into Unreal-Engine-esque Blueprint spaghetti-code, but count on the fact that it will devolve -- and that it will devolve more quickly with artists/designers. Since artists and designers aim to break conventions, our designs (and our code) will do that every time. In fact, advanced 3D artists/designers eventually go with scary tools like Houdini because it allows us the flexibility to do this kind of "devolving" artistry more freely than standard 3D artistry tools allow (i.e. small, discrete steps at the mesh-processing level), so that our art can have some semblance of (reusable!) LOGIC, enabling us to create our own tools that both work for us and scale.

    Don't underestimate us artists/designers as "simple" users -- we value step-by-step logic just like you programmers. But we want it to make sense at both high _and_ low levels, without having to retrace our logic all the time. We need to see things globally too -- and at a glance -- in order to work. We go to great lengths to do great feats with our art, even braving a world we don't always want to understand. We expect the programmers who make our tools to do the same.

    As scary as it is to most of us, Houdini is proof that we really REALLY appreciate it when our tools are general-use and flexible (and we brave these tools even if the UX is from the 90's). We are not pigeon-holing ourselves into a single mechanical "simple" workflow whose UI becomes impossible to manage the moment we try to use the tool for some unexpected purpose or at some unintended scale. Some may think a scalable tool increases complexity, but if done creatively (and with an eye toward identifying the most intuitive workflows), the opposite is true -- while also gaining valuable style and flexibility! We don't need our tools to be "simple" -- we just need them to be "simple to understand", no matter how complex the interface has to be. The faster this interface is to pick up, the better for all of us. And that is my goal here: to describe the UX necessary for the ideal visual tool that can handle logic as complex as scripting a AAA game at scale.

    Scripting can be "simple to understand" too -- it just needs that visual "magic" to bridge the gap between step-by-step logic/reasoning and simple, easy-to-read, visual clarity. For Visual Scripting, that "magic" is the "magic line" of visual and logical clarity, whose job is to separate script flow into bite-sized chunks not reliant on definition, but on overall functionality. :)



    -- Fin
     
    Last edited: Dec 11, 2020
  3. Neonlyte (Joined: Oct 17, 2013; Posts: 338)
    Some code example would be nice?
     
  4. Neonlyte
    I have not done any programming using visualized tools, so I find it very hard to understand.

    - I think first of all you need to explain what this graph does.

    - You partitioned your graph into sentences. Is it OK for me to assume that this graph can now be written into a short and precise English sentence? If so, could you write these sentences down to prove your point?
     
  5. Neonlyte
    I really think there must have been misunderstandings. Could you give one example to explain what you mean by "back-and-forth" and "tinker"?

    Also, I find it puzzling to argue that one does not want to understand computer programs just because it takes time. It's like saying not wanting to understand your argument because your forum posts are "too long" and thus takes time. Using your visual programming graph as an example, could you make the claim that anyone else glancing this graph without external information can very quickly understand what it does and how it does things, as this is your prime example of what an "artist-friendly" programming convention can do?
     
    Last edited: May 3, 2020
  6. Neonlyte
    Overall, I think the whole point that concerns DOTS Visual Programming is that you would like purely decorative elements to annotate your program graph.

    Whatever programming pattern is promoted here, it needs to solve a problem. So far, I fail to see what existing problems your given solution aims to solve. You may have set up some hypothetical situations, but I can't see whether it maps to any real-world example.
     
  7. awesomedata
    First, some clarification

    Anything programmed using current OOP practices in C# would suffice as code examples, especially if it has more than 5 classes/methods and, for good measure, makes use of C#'s "backward inheritance" and "abstraction" principles.

    On top of this, examples are kind of pointless once you realize that OOP is generally based on heavy "object" referencing, which causes lots of overhead already. Then, when you "abstract" or "encapsulate" that, you multiply these references (and their performance penalties) every time you actually use said references to generate behavior. This is the default in languages like C#. I will elaborate in the post below.
    For now, just looking over the internet should tell you how clearly people understand (or don't) the "4 pillars" of OOP. They are simply not well understood, even by the most "advanced" programmers out there (who supposedly "use" them).

    My question is: how can you "use" a practice you don't fully grasp? There are countless articles and forum posts, with no clear consensus on when, where, and to what exactly Abstraction and Encapsulation apply. As I am mainly referring to C# (since Unity uses it), you can find your own code examples demonstrating the wide (mis-)use of these two pillars. The major "misuse" arises from how self-referential and abstract pretty much all OOP languages are: they force you to encapsulate (for readability purposes), but this causes practical real-world problems (for example, when sharing your code or returning to it after months). This is so common that nobody even considers it a problem anymore. "I can just comment my code," most people say. However, even then, if your comments aren't crystal-clear, you are still up a creek without a paddle, inevitably having to relearn your code all over again -- assuming it's even commented to begin with!

    That is a trivial matter compared to the number of dependencies artificially created by OOP "design" for the sake of abstraction (or is it encapsulation? -- I wonder if anyone truly knows?) in one's code. These two pillars (at least in C#) are just another excuse to sound like you're coding "properly" when really nobody knows what the hell "proper" C# code looks like. For example, what level of encapsulation (or is it abstraction?) should you use on your methods? How many levels deep do you go? There is no answer, because there are no true guidelines. Most experienced programmers' code "design" is typically a convoluted mess of spaghetti-code methods endlessly referencing other objects and causing performance overhead. This is true with or without Visual Scripting, but the nightmare web of spaghetti references and dependencies is made MUCH WORSE when bringing in visual representations of non-data-oriented code. The "magic line" I mention above becomes more than just a decoration -- it becomes a necessity.


    Fair enough.

    But it is not "just because it takes time" -- but "because it takes a lot more time than necessary."

    There's a better way to program (one that takes a lot LESS time) and is more understandable too -- and it's all thanks to Tool-Assisted, Data-Oriented Visual Scripting: ECS components as a basis for logic, and systems that act globally on the data they're interested in as a basis for behavior.
    -- Voodoo you say?
    -- You thought ECS was harder and more convoluted too?
    Sorry to disappoint, but I can back this up. Give me another "too long" post (see next post), if you have the time.


    That's actually really fair.


    The problem this solves (which I didn't get into much in this thread -- see my next post) is two-fold:

    1) Code becomes easier to digest (for both programmers and designers alike "because of brains").
    2) Code becomes easier to write (and more performant, without all the overhead of tons of references, because ECS is badass already, but becomes even better when it is tool-assisted).
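    To sketch point (2), here is a toy ECS-style example in plain Python (deliberately not Unity's DOTS API -- just the bare idea): components are plain data with no behavior of their own, and a system acts globally on every entity that carries the data it cares about:

```python
# Components are plain data; entities are just bags of components.
entities = [
    {"position": 0.0, "velocity": 2.0},  # has both components: moves
    {"position": 5.0},                   # no velocity component: static
]

# A system acts globally on every entity that has the data it needs.
def movement_system(entities, dt):
    for e in entities:
        if "position" in e and "velocity" in e:
            e["position"] += e["velocity"] * dt

movement_system(entities, 1.0)
```

    The system never references objects by name; it queries for data, which is what keeps the dependency web flat.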

    I wrote a big (long) post about how ECS is better at OOP than C# (or really any OOP-based "language" out there), and I've backed that up with examples. This post (below) might be interesting to you. I had actually planned to post it in this thread (as the third post here), but you beat me to my third post (and my fourth post, also my fifth and sixth posts too), lol. No worries though -- Your many posts here just tell me that my previous two posts in this thread weren't enough. Thanks for that btw -- Now I can form an even more solid argument. :)



    Not entirely -- but to be honest, at the time when that first post was written, that was partly my goal. At first, DOTS Visual Scripting was heading in a great direction. It was solid. Besides stuff I could program myself, I thought it just needed some visual clarity (which that particular kind of chunk "encapsulation" worked well to do.) I didn't try to drive the point home, but I had hoped the Unity Team could see (visually) how much that chunking method helped UE4's problem with spaghetti-nightmare code. But apparently I wasn't clear enough.



    Speaking of clarity...



    Honestly, that UE4 Blueprint graph's overall functionality doesn't matter.
    The idea I was trying to present was that the "data flow" in that graph is batshit-crazy (both visually and otherwise), and if the "magic line" approach could fix a graph like that, it could make a badass addition to any Visual Scripting solution.

    What's so bad about UE4 Blueprints?
    Unreal's Blueprints have lots of interdependent graphs and hidden operations on the passthrough data, and making sense of all that interdependent, hidden craziness "at-a-glance" could be drastically improved with a set of basic rules mimicking a visual "language" structure.

    More on this further down.


    Not quite -- I simply used sentence structure.

    Which means that this bit here:

    Cannot help me "prove" my point. Instead, I need to rephrase the wording since I clearly miscommunicated with the "sentence" concept.

    What I meant was more akin to a "Data Sentence" -- which fits better, since data alone can't make a "sentence"; it needs an operation to transform it first. In the example of "he jumps", the "jump" operation is performed on the "he" data, transforming "he" into a "he" that is also "jumping". That transformation is both the overall main idea and, roughly, what ECS systems do with components, beyond just providing the components to query or the data to transform.

    Back to "Data Sentences" though:



    Data Sentences ~= Sentence Structure

    Sentence structure (which is more of what I'm referring to when I say "sentence" in reference to data) doesn't have anything to do with data grammar (which is subjective and localized -- the difference between somebody from England and somebody from downtown Brooklyn) -- It has to do with data structure (which is objective and universal, and means the same thing whether one says "he jumped high" or "dat man dun leaped into da sky!").
    "Data Sentences" are structured in a way that follows data flow -- no matter what form the data takes as it flows. As a language, whether you're from downtown Brooklyn or London, English follows a common flow in its structure -- i.e. "dog, blue, upside-down" makes no structural sense, whereas "dog turned blue upside-down" does, because there is both context and transformation for the data, leading to a final state (which is provided by the overall sentence structure). This final state (main idea) would be impossible to achieve with just unrelated (isolated) data or functionality.
    Instead, "sentence structure" means you have a thing, an action, and a main idea that is derived from the interaction of both thing and action.

    The "system" -- which both contains and transforms data and components in ECS -- provides the overall result (or main idea) that comes from both the data and its transformation. This is often referred to (in OOP) as the "behavior" of code.

    Using "sentence structure" allows partitioning for both the Subject (which is data import, data creation/definition, data transform preparation) and the Verb (which is the actual data transformation, data processing, data passthrough) from the overall Main Idea (being the final data conversion / preparation for system data consumption / assimilation, actual data consumption / assimilation in the system, or the passthrough of data results to another system or system level in the current system).

    Regarding the visuals of a proper "Data Sentence"

    Since the brain can only process a small number of "chunks" of data at a time, a small amount of visual separation plus a small set of rules is enough to differentiate the data without needing further visual cues. This is important because visual "chunk" processing quickly breaks down when you try to process chunks containing other chunks, sitting beside similar-looking chunks, without rules to differentiate them.
    Having clear separation between certain levels of data (i.e. Data Sentences with visuals to break them apart) leads to a MUCH faster understanding "at-a-glance" of what is going on inside your code because the data flow is crystal-clear. You can now instantly understand your data because you can pinpoint its flow at a glance since it is visually-categorized based on Subject, Verb, and Main Idea.

    This is critical to instant understanding due to something called "working memory" -- Working memory (especially for artists) is generally visual. This is why artists and designers (who are visual creatures by nature) hate seemingly-infinite lines of code that don't have a clear (visual) separation or relationship (and therefore meaning) at-a-glance. The number of chunks they can process visually is not infinite -- therefore, being able to see the chunks quickly, and how they're related, saves an immense amount of time, energy, and sanity for most people.
     
    Last edited: May 5, 2020
    eggeuk and GliderGuy like this.
  8. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    Rather than pointing out further problems with the general OOP approach, maybe I will take a moment to point out why ECS is a better way of programming in general -- including how it is better at "Object Oriented Programming" (OOP) than any other language in existence today!







    ECS: The better OOP.

    ECS is very simple -- it has two concepts, components and systems. They act on (or are attached to) 'entities'.


    Easy, right?
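    To make the two concepts concrete, here is a rough sketch in Python -- conceptual only, NOT Unity's actual C# ECS API. All names (World, query, movement_system) are hypothetical:

```python
# Conceptual ECS sketch -- NOT Unity's API. Entities are just ids,
# components are plain data keyed by name, and systems are functions
# that query for entities holding a specific set of components.

class World:
    def __init__(self):
        self.next_id = 0
        self.components = {}  # entity id -> {component name: data}

    def create_entity(self, **components):
        eid = self.next_id
        self.next_id += 1
        self.components[eid] = dict(components)
        return eid

    def query(self, *required):
        # Yield every entity that has ALL of the required components.
        for eid, comps in self.components.items():
            if all(name in comps for name in required):
                yield eid, comps

def movement_system(world, dt):
    # A system acts only on entities matching its query.
    for eid, comps in world.query("Position", "Velocity"):
        comps["Position"] += comps["Velocity"] * dt

world = World()
e = world.create_entity(Position=0.0, Velocity=2.0)
movement_system(world, 0.5)
print(world.components[e]["Position"])  # -> 1.0
```

    Note the system never references another system -- it only declares which components it cares about.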

    Under this veil of simplicity hides a real beast of a programming language.
    A beast that, if unleashed, can topple the giants of modern-day Object-Oriented-Programming (OOP).
    Am I exaggerating? -- You be the judge. -- But hear me out.

    To new and old programmers alike, ECS does appear to have some "limitations" at first glance.
    One could say the same thing about the seemingly-fragile human body.
    But as (the tiny-but-badass) Bruce Lee once said -- "Make no limit [your] limit. Make no way [your] way."
    This is the rule of nature.
    Like the human body, by using it smartly, ECS can be both powerful and flexible -- There really is "no limit".
    There is a lot of subtle badassery awaiting you underneath its seeming "limitations", just by way of the sheer number of techniques you can employ in your component and system organization. Not only can you make these seeming "limitations" completely disappear, but these organization techniques can even improve so-called "OOP practices" in a way that no one ever saw coming. We used to think the biggest guys were the strongest -- until Bruce Lee showed up. Now meet my pal, ECS. He's the spiritual equivalent of Bruce Lee for OOP programming.



    ECS's OOP -- "The four pillars"

    Let's see how ECS is actually better at "The four pillars of OOP" than any other language (claiming to be 'OOP'):


    • Encapsulation

      ...doesn't need complex rules (nor endless levels of 'abstraction') to exist in ECS.

      In fact, with the help of a Visual Graph showing the relationship of component queries to a particular system, any encapsulation can (and should!) be kept contained within the script graph itself -- Each script graph can (and should!) be standalone and complete, with all required component data (and changes to the components' data) happening within the scope of a single system.
      Drop 8 went the opposite route (and went wrong!) by not doing this. Drop 8 is full of simultaneously executing bits of system logic -- systems referencing other systems referencing other systems while never referencing components or data explicitly (i.e. the wild west) -- while Drop 6 scripts were nice, self-contained systems that only referenced the component data they explicitly needed.
      By removing the verticality (in other words, the idea of the self-contained "system") in Drop 8 in favor of parallel, heavily-referential (and heavily-dependent!) execution of other systems within systems (in a single graph!), true OOP encapsulation was actually removed in favor of all the things that make traditional 'OOP' slow.

      OOP systems (and/or the data they rely on) should never have to be referenced!

      An encapsulated "object" should be a single, self-contained, unit (of functionality):
      See the first result in google -- https://www.google.com/search?q=OOP+encapsulation

      "Encapsulation is one of the fundamental concepts in object-oriented programming (OOP). It describes the idea of bundling data (and methods that work on that data) within one unit, e.g., a class in Java."​


      This is kinda exactly what ECS is for.

      Before drop 6 was "dropped" entirely, I actually described a state machine system like this in great detail, hoping Unity would take a hint that we needed state machines. To their credit, they totally did -- but to give us these state machines, they removed the ECS component query mechanism entirely, leading us all back to the (non-encapsulated!) spaghetti-code stone-age! Now we get state machines in drop 8, but to get them, we can no longer use ECS data-oriented (non-spaghetti-code) approaches or methodology in (Data-Oriented!?) Visual Scripting at all.

      Now, true OOP encapsulation in ECS is doomed to wait for the ability to "maybe someday" query ECS components.

      This is a problem to me.

      "True OOP encapsulation" does not mean "the way OOP encapsulation is sometimes used" -- because that usage can be (and very well is!) entirely wrong (for all the reasons and justifications explained above and below).

      For example, a "state machine management" system should always be a singular, self-contained SYSTEM (i.e. a separate, encapsulated object of data and functionality) -- which, by default, means it should NOT be a NODE -- or even a group of nodes -- because without that self-contained functionality, the state machine management system is not a separate, independent (and therefore encapsulated!) object, consisting of its own data and functionality, that can be used to further define a program. In other words, it is not an object, and therefore it is not true OOP.
      Furthermore, individual "states" should be separate systems too, each managed by their own internal states of component data queries (and component data transformation methods) -- which essentially define this particular thing as a system that has a state (rather than just as a group of systems/components that kinda sorta make a new system/component). A true state should manage its own state internally anyway, including whether it is an "active" state or which state it needs to eventually become next. The "state machine manager" system assigns a new component that marks the new state as "active" -- and away the "new" state goes to manage its state too.

      "But what if the current state doesn't yet know what "new" state it should become?" you ask -- and I say: that's when a new system is built to handle that particular case. In the case of controller input, you add a component called "InputTransitionActions" and voilà -- your state adds this component (maybe alongside a group of other components to trigger other systems), and your "state machine management" system prioritizes this component as a way to get a new state (to set to active) for the entity. That would add a particular "ActionState" component and let the "state machine management" system set it to "Active" to trigger the action state. But if you're in a cutscene (with a "Cutscene" component), the state machine manager waits until the Cutscene component no longer exists before it sets InputTransitionActions to Active (by adding an active tag), letting you finally use input again.
      Believe it or not, as you can see above, this type of separate-but-"loosely-dependent"-system can not only easily be authored without referencing other systems or component data from other systems (meaning it is true OOP encapsulation), but it is easy (and natural) to follow as well -- even for someone who is new to your code! The simple and easy-to-understand rules of encapsulated components and systems tell you exactly where you need to look in the code for that next bit of component data or system logic! As long as you know what sorts of components are referenced in a system, it is easy to understand what they are for -- Meaning documentation will be easier (and more straightforward) too!
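      The cutscene-gating logic above can be sketched in a few lines of Python -- conceptual only, not Unity's API; the component names (InputTransitionActions, Cutscene, Active) are just the hypothetical ones from the example:

```python
# Hypothetical sketch of the "state machine management" gating described
# above: InputTransitionActions is only promoted to Active once no
# Cutscene component remains on the entity. Component presence is
# modeled as a plain set of tag names.

def state_machine_manager(entity_components):
    # entity_components: set of component tags currently on the entity
    if "InputTransitionActions" in entity_components:
        if "Cutscene" in entity_components:
            # Cutscene still running -> defer the input transition.
            return entity_components
        # No cutscene -> mark the input transition state as active.
        return entity_components | {"Active"}
    return entity_components

print(state_machine_manager({"InputTransitionActions", "Cutscene"}))
print(state_machine_manager({"InputTransitionActions"}))
```

      No system references another system here -- the manager only inspects component tags, which is the whole point.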




    • Inheritance

      Shouldn't be backwards or hard-to-manage like it is in 'OOP' languages like C#. Backwards "inheritance" further muddies your ability to "encapsulate" your data/functionality by disallowing its right to flow naturally and effortlessly from a single, easy-to-manage, point of origin into one or more destinations.

      In ECS, inheritance is done by stacking components in a sequential way that flows naturally through separate (but logically consistent) systems, which allows for both the lineage and the specificity of data inheritance in any system that requires it. A certain series (or mask sequence) of components can progress a system (or many systems simultaneously!) in a particular direction at any given moment. With the correct question (posed naturally with ECS queries, rather than if/then/else statements), you can easily determine which animal is an elephant and which is a giraffe -- and even better! -- that data is only as specific as you need. You never need to ask for more (or less!) than your system needs.

      So no more "if (animal == elephant) {probablyNotGiraffe();} else if (animal == giraffe && animal != robot) {notRobot();} else if (animal == elephant && animal != robot && animal != tall) {notGiraffe();}" etc. etc. -- You can speak in plain language with ECS systems -- i.e. "Do "GiraffeBehavior" animation in your system with "Animal -> Giraffe" entities. Do "ElephantBehavior" animation in that same system with entities having "Animal -> Elephant" components. No ifs/ands/buts or thens/elses either. You can always ensure the following:

      "An elephant is not a giraffe sometimes."


      OOP is currently the wild-west, and ECS is understandable (civilized) code. Yes, an Elephant is an Animal, but is the Animal you're asking for an Elephant? -- ECS keeps that answer easy with queries consisting of a few sequential logical components (which is the ECS version of inheritance.) Judging by code like the above if/then/else/ifelses, other OOP languages still (constantly!) struggle with this. ECS has no problem with it.
      The component data tagging lets systems further specify and/or define explicit behaviors for the "type" of animal or behavior association you're after through simple sequential component tagging (or "inheritance").
      For example: "Animal -> Elephant -> Tallness" components == "Tall Elephant" and the "Animal" system that handles what both tall and short animals can do must be able to easily-reference whether this is an Elephant or a Giraffe, as well as what other properties (read as: components) they have attached to them.
      ECS makes it very clear you can't assume that if "Animal -> Tall" components exist that a system is referring to a Giraffe or a Dinosaur (or in our case, a very tall Elephant).
      Thanks to ECS's sequential logical components, inheritance is easy. In fact, the very sequence of logic you use dictates what animal you apply your changes to. Yes, ECS requires explicit components, but this can protect you from sweeping (and unintentional) changes. On top of this, this sequence of logic can be added/removed by systems (on the fly!) to define exactly what an entity is capable of at any given moment! Yes, it is as powerful as it sounds -- and if sequences of logic are added/removed creatively, you will rarely need more than three or four components for any system. See the "state machine management" example way above.
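      A tiny Python sketch of this sequential-component idea (conceptual, not Unity's API -- the tags and the query_prefix helper are hypothetical) shows how a prefix query replaces the if/else chains entirely:

```python
# Sketch of "sequential component inheritance": each entity carries an
# ordered tuple of component tags, and each behavior declares the
# prefix it applies to. No if/else chains needed -- the query itself
# decides which entities a behavior touches.

entities = {
    "e1": ("Animal", "Elephant", "Tall"),
    "e2": ("Animal", "Giraffe"),
    "e3": ("Robot", "Animal", "Giraffe"),
}

def query_prefix(entities, *prefix):
    # Match entities whose component sequence starts with the prefix.
    return [eid for eid, tags in entities.items()
            if tags[:len(prefix)] == prefix]

# "Do GiraffeBehavior with Animal -> Giraffe entities":
print(query_prefix(entities, "Animal", "Giraffe"))   # -> ['e2']
# "Do ElephantBehavior with Animal -> Elephant entities":
print(query_prefix(entities, "Animal", "Elephant"))  # -> ['e1']
```

      Note that the robot giraffe ("e3") is untouched by both queries, because its sequence starts with "Robot" -- the sequence itself carries the lineage.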

      The 'sequential component inheritance' concept is extremely powerful, but it doesn't properly exist in Unity's currently-implemented gameobject-based workflow.
      Right now, you get "gameobjects ~= entities", which means entities' components are treated as "static" (and mostly immutable!) in the workflow, causing "on-the-fly" component queries like the above to be obtuse and difficult to create, especially when these component masks/queries need to be simple and fast to create -- on-the-fly -- while authoring our systems in our Visual Graphs...

      The DOTS Visual Scripting design is destined to be premature without this concept. Without proper inheritance, you're stuck making thousands upon thousands of explicit (and avoidable!) references to outside systems and data (like this):



      Which leads me to my next point:





    • Abstraction

      WILL NOT SAVE YOU (from 'complexity')

      Because of encapsulation-heavy approaches in most "OOP" languages, references to external systems / objects will (eventually) crush you with the weight of their "complexity". See above graph.
      That graph is not very "complex" -- yet it's still hard to read.
      Its apparent "complexity" comes from endless references to external systems, external data, and external functionality -- usually (falsely) justified as having been written this way "for abstraction purposes".
      Oh yes -- it really does look very abstract -- But, somehow, the graph still doesn't look all that "encapsulated" to me. "This is impossible," you say. "Look at all that encapsulation!"
      And this is where we run into a fundamental problem with our current OOP approaches.
      Abstraction was meant to make things easier to understand, not harder.
      Using abstraction as a "feel good" way to sidestep carefully planned encapsulation is kind of what we do these days -- and languages like C# and Blueprints are designed to help us do that sidestepping more easily.

      This is where it all goes wrong.

      "Abstraction" should never be used as (or equated to) encapsulation in OOP.

      EVER.

      This applies to any "OOP" language -- which includes ECS.



      From now on -- I will kindly refer to the above practice as an abstraction-encapsulation "circle-jerk".

      Why?

      Because what we currently do in OOP languages isn't actually "abstraction" anyway.

      Abstraction was originally intended to reduce the amount of irrelevant data a human had to interpret, so that they could get a better overview of their code's core behavior, better understand what the code was doing from moment to moment, and modify the relevant parts more easily. What Unity is doing right now in Visual Scripting (in terms of abstraction) is the exact opposite.

      By simply hiding the spaghetti-monster deeper and deeper, under layers and layers of code, we are shooting ourselves in the foot with every step forward.
      We are no longer "abstracting" or "encapsulating" our code -- we are obfuscating it.
      It is no less complex or difficult to understand without the obfuscation (in fact, it's arguably harder to understand.)
      As a result of said obfuscation (i.e. via nodes), instead of making it easier to get a better view of what our code is doing, it actually gets harder and harder to modify it intelligently as we move forward (in our abstraction-encapsulation "circle-jerk") because the crux of our functionality is buried beneath layers of friendly-looking (immutable) 'object' references. Not only does this mean our code is harder to understand; but it also means our code is inflexible too.

      The very moment you try to change how that object behaves (or where particular properties are derived from), you end up going down a rabbit-hole of nightmares you can never escape from. The object is immutable, remember? -- So your only choice is to dig into all the other various properties/classes littered throughout your big (potentially enormous!) project to change one thing about your object (or where it gets its properties and/or behavior from).
      Abstraction (for the sake of encapsulation) just makes this object-decoding nightmare exponentially worse.

      Visual Scripting -- to add insult to injury -- makes encapsulation a NECESSITY.
      So, rather than carefully-designed encapsulation focused on the pillars of OOP, we mindlessly "abstract" our code away by what some of us call "encapsulating" it.
      This is of course circular logic.
      This constant abstraction-encapsulation "circle jerk" causes us to forget that the abstraction was meant to make our code easy-to-understand (and easier to modify), but not all "abstraction" is good "abstraction". The more methods and objects we "abstract" behind other objects and methods -- the less "easy-to-modify" our code naturally becomes. Which means that, even on a moderately sized project, at some point, you will eventually want to tear your eyeballs out rather than willingly traverse a nightmare-web of dependencies you've (inadvertently) created (aka: the spaghetti-nightmare monster).

      So what do we do? -- This is just how OOP code works, right?

      After all that "circle-jerking" to try to "simplify" our code (to no avail), we never stop to wonder whether our problem is actually with code at all. We never stop to realize that acting on instinct or learned behaviors (rather than with our brain) is causing us to miss something important. Something is causing these small problems and headaches. We just don't see that we've ultimately forgotten that code is still a language. And in the end, the whole point of a language is communication with others.

      But, unlike body language, some languages just aren't designed to communicate simply.

      ECS however, is.

      I wrote an article (on Data-Oriented Visual Scripting) entitled "The Structure of a Language", in whose second part I explain how "methods/functions" (in the sense of programming for artists) actually work against the intended purpose of abstraction in code. Abstraction is for providing a quick understanding of our code, leading to an easier understanding of what's happening moment-to-moment -- without requiring a dictionary to look up each variable, or relearning every function/method you wrote when you come back to your code after a month! What if the director of every movie you watched made you pull out a slang dictionary to understand every other word the actors spoke in each and every scene? -- I'd bet you'd quickly give up watching movies. Artists are all about communication in their designs. Why can't programmers be the same?
      The article is worth a read, but if you don't like reading (you really should!), just know that what we currently work with in OOP is a system poorly designed for the natural flow of what I call "communicative abstraction".

      Communicative Abstraction is just a fancy way of referring to "abstraction (without encapsulation) that communicates clearly." When using the common OOP "abstraction" pattern of simply plopping a complex piece of reusable code into a function/method and forgetting about it, you are effectively using symbolic "slang" when you call a function/method from someplace else than where it originated. If you don't know what a particular slang word means (or maybe you've forgotten?), you have to look it up. But when it's legitimately code (as in the undecipherable kind) -- and you've got to figure out the meaning of long-forgotten words/phrases yourself each and every time you come back to it -- you might as well get out the reading glasses, a blanket, then a warm cup of tea (or cocoa), and settle-in for the night... It's gonna be a long one...

      Or... you could just use ECS. Then you could do "abstraction" in a different way.

      See, unlike in C# (or most other OOP languages), the beautiful thing about how ECS does "abstraction" is that it is a top-down approach using mutable components (rather than bottom-up, with immutable objects and interfaces). Because you can start at (and even change!) the top level of your concept at any moment, you can also easily abstract your data down the chain in a linear, easily-understandable way and change not only your behavior, but your entire concept.
      For example: "Animal -> Elephant -> Tall == TallElephant" or "Robot -> Animal -> Giraffe -> Short == ShortRobotGiraffe" can automate entirely new abilities, behaviors, or automatic component additions/removals (when combined with systems) simply through the combination of components on an entity at any given moment, inherited from the top (high-level) logic down toward specific, low-level logic. Your systems just interpret that top-down data based on specific component queries, sometimes with a wildcard * masking approach, letting you quickly change what the hell is happening in your code/systems almost anytime you want (while within a system). You only care about Animals and whether they're VeryTall, but not whether they're Robots, Elephants, or Giraffes? Create a mask of that data query and call for it in the system that needs it.
      "Communicative Abstraction" is done for you (automatically!) by simply arranging your data components (and designing your systems) with certain components while also giving them a purpose! You don't have to worry about syntax, functions, or methods -- You just create explicit component data designed to be used as a mask and your systems interpret that data (specifying its own masks), changing anything that matches its data criterion (and mask) -- all at once. You simply filter it, then create behavior for it -- as is standard in ECS. This is your entire VisualScript. You're good to go.

      As an example:
      Slap a fancy name on the "object" if you want -- i.e. while the logic sees it as "Animal -> Giraffe -> Short", you can still nickname it "ShortGiraffe" if you like (and even use the GameObject workflow). You can also still reference the odd blackbox data (like input state from the hardware), i.e. to make the ShortGiraffe move. But if you can apply changes across the board to all the components you need (i.e. with a simple mask/query like "InputSystem -> Player -> MoveAction" + " * -> Animal -> * -> * ") plus a short program describing how that behaves on your data (" * -> Animal -> * -> * -> LocalToWorld": grab the Z component from the LocalToWorld on the entity, *= the entity Z position based on MoveSpeed / DeltaTime) -- then why the hell would anyone EVER want to reference another gameobject directly again?? If each "reference" were a masked series of data components (or a nickname for an explicit series of masked components), that would be enough to apply communicative abstraction in a way that keeps your systems small, compact, and easy to understand from moment to moment. You want to know how your animal moves? -- Look at the folder for AnimalSystems -> AnimalBehaviors -> AnimalLocomotion. There, all your animals' unique locomotion for tall/short elephants/giraffes (including the robot versions) would be stored.
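      The wildcard mask idea can be sketched like so (conceptual Python, not Unity's API -- the "*" mask syntax is the hypothetical notation from this post, not a real query language):

```python
# Sketch of a wildcard component mask: '*' matches any single component
# tag, so ("*", "Animal", "*") matches any three-component entity whose
# middle tag is Animal, regardless of what surrounds it.

def matches(mask, tags):
    return (len(mask) == len(tags) and
            all(m == "*" or m == t for m, t in zip(mask, tags)))

entities = {
    "robot_giraffe": ("Robot", "Animal", "Giraffe"),
    "elephant":      ("Zoo", "Animal", "Elephant"),
    "crate":         ("Prop", "Static", "Wood"),
}

mask = ("*", "Animal", "*")
hits = [eid for eid, tags in entities.items() if matches(mask, tags)]
print(hits)  # -> ['robot_giraffe', 'elephant']
```

      One mask, applied across the board -- no direct references to any individual "object".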

      And if you need a properly "abstracted" Finite State Machine (FSM) state system?
      Easy.
      Just pull in a nice " StateMachine -> *ActiveState -> *MovementState " series of components.
      The "StateMachine" component indicates to the state machine system that the entity uses state machine data. From there:
      1. The state machine system checks whether the entity has an ActiveState component. If it does, it does nothing, letting the ActiveState's system (the MovementState system in this example) take over. That system's logic keeps being called each cycle on the entity until the MovementState system itself removes the ActiveState component.
      2. On any entity without an ActiveState component, the StateMachine system adds a NextState component.
      3. The NextState component is consumed by the next system to execute, which is determined by the component mask "StateMachine -> NextState -> * " -- i.e. any entity with a NextState component uses the very next component as the name of the state system (e.g. StateMachine -> NextState -> *MovementSystem).
      4. In the MovementSystem, the NextState component is consumed/removed immediately and replaced with an ActiveState component.
      5. For behavior, assume the MovementState system checks for an "InputSystem -> MoveAction" component mask on the entity (to see if the movement action is still being performed). If that component is missing, the MovementState system removes its ActiveState component and adds an IdleState component -- which the StateSystem picks up, adding a NextState component that lets the IdleState system know whether this is the first moment it is executing (so it can remove the NextState component). If "InputSystem -> MoveAction" does exist, the MovementState re-adds an "ActiveState" component to itself, since it no longer exists.
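      For concreteness, here's a tiny sketch of that ActiveState/NextState hand-off in Python -- conceptual only, NOT Unity's ECS API; the component names are just the hypothetical ones from this example:

```python
# Sketch of the ActiveState/NextState hand-off: components are tag
# strings in a set; each "system" is a function run once per tick.

def state_machine_system(comps):
    # If no state is active or queued, queue up the next state to run.
    if "ActiveState" not in comps and "NextState" not in comps:
        comps.add("NextState")

def movement_state_system(comps, move_action_held):
    if "NextState" in comps:
        # First tick of this state: consume NextState, become active.
        comps.discard("NextState")
        comps.add("ActiveState")
    if "ActiveState" in comps and not move_action_held:
        # Input released: hand control back to the state machine.
        comps.discard("ActiveState")
        comps.add("IdleState")

comps = set()
state_machine_system(comps)                  # queues NextState
movement_state_system(comps, True)           # consumes it, becomes active
print("ActiveState" in comps)                # -> True
movement_state_system(comps, False)          # input released -> IdleState
print("IdleState" in comps, "ActiveState" in comps)  # -> True False
```

      The key property: neither system ever calls the other -- all hand-off happens through component tags on the entity.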

      See?

      Doesn't this kind of abstraction (without the need for encapsulation) feel a lot better and much more suited to your specific tastes (and personal needs) than that gross and impersonal "circle-jerk" you had before?




    • Polymorphism

      "What?? ECS can't do polymorphism!" You say. "It doesn't even have a 'poly' to 'morph' to!!"
      At first glance, it might appear to be the biggest "con" to ECS due to the fact that ECS clearly has no concept of "class" and makes you "spell out" your data and systems, with lots of components and systems (which gets tedious, time-consuming, and eventually impossible to manage, right?)

      Actually... Polymorphism is easier (and more flexible!) in ECS than any other language.

      An object is simply a specific "set" of data that can be referred to someplace else (or inherited from), but above all, this data is typically immutable. In ECS, a "set" of data doesn't have to be specific -- in fact, data "types" are actually just a specific series of components that can be added/removed at any moment by any system. A data "type" in ECS is a lot more transient. Rather than having to lock the data type into a static reference derived from many other static references to ints/floats/other imported data types that you throw around and convert all over the place, you can instead simply refer to an exact data type at any point in your code, in any system, attaching or removing any desired or undesired components. This typically forms a specific series of (generally-named) components (i.e. Animal -> Elephant -> Tall = TallElephant), allowing any specific entity in the system to be referenced -- or to even become another kind of entity. As systems add or remove components, they are able to fundamentally change properties (components) and behaviors at will. This is a kind of flexibility with data types you cannot have in traditional OOP.

      The beauty of this is that if I wanted to change my Animal -> Elephant into an Animal -> Giraffe, I could (in theory) have a system drop the Elephant component tag and add a Giraffe component. The system that processes the behaviors of an "Animal -> Elephant" would then no longer touch that entity (since there is no more Animal -> Elephant) -- the entity has become an "Animal -> Giraffe", and only the "Animal -> Giraffe" system processes its behaviors. That means "An elephant is not a giraffe sometimes." -- It is always either entirely an elephant, or entirely a giraffe. You can't guarantee that with standard OOP class structure, especially with backwards-facing inheritance.
      More importantly, to touch on the flexibility: if you want an angry giraffe and you don't care how tall it is, you can take something like "Animal -> Giraffe -> Tall" and add the "Angry" component to it (so now you have "Animal -> Giraffe -> Tall -> Angry"). The system that controls its behavior will look for "Animal -> * -> * -> Angry -> * ", so anything it finds with an "Animal -> Angry" component (regardless of whatever other components it has) will behave like an angry animal when your animal-behavior system finds it.
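      This "morphing" can be shown in a few lines (conceptual Python sketch, not Unity's API -- the behavior predicates are hypothetical stand-ins for system queries):

```python
# Sketch of ECS-style "polymorphism": an entity's type IS its current
# component set, so swapping one tag changes which system's query it
# matches -- entirely and immediately.

entity = {"Animal", "Elephant"}

def elephant_behavior_applies(comps):
    return {"Animal", "Elephant"} <= comps

def giraffe_behavior_applies(comps):
    return {"Animal", "Giraffe"} <= comps

print(elephant_behavior_applies(entity))  # -> True

# A system "morphs" the entity by swapping tags:
entity.discard("Elephant")
entity.add("Giraffe")

print(elephant_behavior_applies(entity))  # -> False
print(giraffe_behavior_applies(entity))   # -> True
```

      The entity is never "partly" an elephant -- the instant the tag swap happens, only the giraffe query matches.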

      This opens up a lot of creativity (and flexibility) when designing your games, giving the possibility to more easily create "emergent games" (and gameplay!), naturally making games -- and the creative process! -- much more fun!


    "There is no way ECS can do all this!" -- Yet it can.



    With the right interface...


    ...and technique.


    For such a little guy, ECS is a beast as it powers through the OOP machine!

    But for new users (and old users alike!) to realize the kind of ease and power a tool like this would bring, you have to "teach" them the strengths and characteristics of our underdog, ECS, through the tool's interface -- which should show (intuitively) how new users ought to handle themselves in this new ECS/DOTS world.
     
    eggeuk, lclemens and GliderGuy like this.
  9. lclemens

    lclemens

    Joined:
    Feb 15, 2020
    Posts:
    260
    Thanks I learned a lot from this!

    However, I was surprised when I read "See? Doesn't this kind of abstraction (without the need for encapsulation) feel a lot better and much more suited to your specific tastes...". I think that whole state system example was complex and confusing even after reading it slowly twice. Or maybe it's just me?

    Also, what are your thoughts on the UpdateAfter attribute? For example: [UpdateAfter(typeof(MyPhysicsSystem))]. When I saw that for the first time, I was taken aback that such a concept would exist in Unity ECS. With order dependencies strewn throughout the code, it seems to defeat the encapsulation concepts in ECS. So now if Alice sends a few systems to Bob and Bob plugs them into his code, they won't run unless he gets MyPhysicsSystem and all the other dependencies from Alice. Bob could go through all of the systems, strip out Alice's UpdateAfter statements, and put his own in if needed, but that seems messy.

    I get that order specification is sometimes necessary, but wouldn't it be better for the order/dependency specifications to be placed in a separate script/file? It could be a state system like you described, or perhaps a simple list like a manifest file.
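    A manifest like that could be sketched in a few lines (purely hypothetical -- Unity has no such feature, and the system names here are invented; the point is just that ordering constraints collected in one place reduce to a topological sort):

    ```python
    # Hypothetical: system-ordering constraints live in ONE manifest instead
    # of being scattered through every system's source as [UpdateAfter]
    # attributes. A scheduler could then derive the run order itself.

    from graphlib import TopologicalSorter  # Python 3.9+

    # manifest: system -> set of systems that must run BEFORE it
    order_manifest = {
        "MyPhysicsSystem": set(),
        "DamageSystem":    {"MyPhysicsSystem"},
        "RenderSystem":    {"DamageSystem"},
    }

    # TopologicalSorter takes "node -> predecessors" and emits a valid order.
    schedule = list(TopologicalSorter(order_manifest).static_order())
    print(schedule)
    ```

    Swapping in Bob's own ordering would then mean editing one file, not hunting attributes through Alice's code.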

    I don't really have all the answers... I'm just getting started with ECS, but I suspect UpdateAfter could bite us all in the ass if we're not careful.
     
    awesomedata and quabug like this.
  10. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    I'm glad you read through it! lol -- Thanks for your efforts!


    Sorry for my absence! -- I didn't get notified of this reply!

    To answer your point -- no, you weren't wrong for thinking it was complex. The _concept_ wasn't complex, but the _example_ very much was confusing, since I was throwing a system together off the top of my head based on concepts (rather than code that had been thought through against Unity's actual ECS implementation) and trying to convey it all via _text_ rather than through visual diagrams -- which was a terrible idea, lol. So yeah -- don't worry, you're definitely not in the minority. I totally fudged that example. Plus, DOTS is diverging from ECS and isn't yet capable of executing what I described in concept -- so I apologize for the confusion! D:


    You are ABSOLUTELY RIGHT on this.

    That's what drove me nuts when I was discussing this stuff with the DOTS Visual Scripting team. The _actual_ ECS usage and implementation details (which, at the time, I didn't seem to understand) are based on a half-hearted approach to implementing data-driven workflows by way of an uninspired API. Yes, I get that these are boring details, but they matter vastly for reasons like the one you specified above -- i.e. data dependency.


    You totally see my point on data dependencies.

    This is NOT a data-driven workflow.

    This is why Blueprints in Unreal are spaghetti-nightmare creatures. The "order" of any arbitrary (throwaway) system is required to _matter_ -- by default.

    But the gameplay only cares whether the player or monster (for example) is "dead" after an exchange.

    Who cares in what order a player movement or attack script happens? Does the gameplay care whether a monster moves before the player or after him, as long as the correct damage is applied before the end of the frame? -- In most cases, this doesn't matter. But when it does matter, it _always_ has a specific reason for mattering.
    In other words, as long as the movements (for both monster and player) are done before the collisions, simply check the global physics-processing entity to see whether it has completed its cycle. Then process the damage (or other state-dependent things) during that same frame, once the global physics entity says it's done moving stuff around.
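    That "check whether physics is done" idea can be sketched like this (plain Python with invented names -- a conceptual toy, not Unity code):

    ```python
    # Hypothetical sketch: instead of hard-wiring "damage runs after movement"
    # with ordering attributes, the damage step checks a global physics
    # entity's "cycle complete" flag for the current frame and only then acts.

    class GlobalPhysics:
        def __init__(self):
            self.cycle_complete = False

        def run(self, frame):
            # ... move monsters and players, resolve collisions ...
            self.cycle_complete = True

    def damage_step(physics, pending_hits):
        # Runs whenever it happens to be scheduled; only *acts* once the
        # global physics entity reports it has finished its cycle.
        if not physics.cycle_complete:
            return []  # nothing to do yet; try again later this frame
        return [f"{victim} takes {amount}" for victim, amount in pending_hits]

    physics = GlobalPhysics()
    hits = [("monster", 5), ("player", 2)]

    print(damage_step(physics, hits))  # [] -- physics hasn't finished yet
    physics.run(frame=0)
    print(damage_step(physics, hits))  # damage applied before end of frame
    ```

    The scheduling order of the two steps stops mattering; only the data (the flag) does.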

    The dependency nightmare, particularly, is an unintended consequence of the "UpdateAfter" sort of 'feature'.

    Ideally, with ANY data-driven system like ECS, you should NEVER _require_ one system to be implemented in order to use existing data -- you simply default the values to "undefined" or some equivalent. The exception is universal data-"wrangling" systems, such as a "bridge" system used for tying StateMachine references to, say, collision checks for a particular entity. These should only be used as-needed, and implemented in a separate "script" just as you mentioned -- such a "script" is what I consider a "bridge system", with "bridge components" that can be referenced, processed, and possibly removed, all from a single place. Ideally, a system should _never_ do more than one major thing, which should _always_ be initiated by attaching a component -- i.e. a transform component instructs the transform system to "move" a thing during any frame in which that component exists. When it is removed, nothing happens to the entity anymore (or to the transform system, when there are no transform entities left to process).

    This should also scale to complex systems such as StateMachines (probably the most "complex" systems in ECS, since they are essentially trying to ascertain "state" in a "stateless" and "non-individualistic" world). But this isn't too difficult -- IF you have three different component types.

    I've nearly written about these before, but I have held back -- I just don't have the time.

    But basically, these are the following:

    1) LabelComponents (literally just string data for ALL the different kinds of components available to entities, which could be separated into groups for different "kinds" of entities defined by a special "Base" label component -- these give rise to "hierarchy"-style "objects" that can be used in ECS as an OOP-like approach).

    2) DataComponents (these are the current style of components we have in ECS; the only major difference is that each should also carry a dataless "LabelComponent" index. That index can reference a name string in a global lookup table if desired, though this is rarely needed -- the numerical index is faster and easier to process, since it never changes. The LabelComponent index is associated with a global "data component lookup table" packed back-to-back in memory: a new datablock for each new component sequence (generating a new componentID) that exists, iterated over by entityID (but only after an initial query matching a componentID to its sequence of components), so that entity's particular data references can be accessed via a query to a subsequent datablock.)

    3) ArrayComponents (basically a "LabelComponent" that can also be associated with a "DataComponent", nested (in-order) in memory for special cases where you need "arrays of arrays". In other words, an entity can be processed "all at once" by giving that entity a "parsing order" by way of LabelComponents for all necessary DataComponents, so that all of its data can be accessed sequentially via a single query. This one probably needs a bit more explanation -- but if @Unity wants to know, they can contact me directly.)
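    To make (1) and (2) a bit more concrete, here's a rough Python toy (NOT real Unity ECS code -- just the string-interning idea, with made-up names): labels get interned into a global table once, and from then on systems compare cheap integer indices instead of strings.

    ```python
    # Conceptual sketch of LabelComponents: a label is just an index into a
    # global string table, assigned once and never changed afterward.

    label_table = []   # global lookup: index -> label string
    label_index = {}   # label string -> index

    def label(name):
        # Intern a label; the numeric index is stable for the program's life.
        if name not in label_index:
            label_index[name] = len(label_table)
            label_table.append(name)
        return label_index[name]

    # A "DataComponent" here is (label index, payload), packed per entity.
    entity = [(label("Animal"), None),
              (label("Health"), {"hp": 100})]

    # Querying by label is an integer comparison -- no string work per entity:
    health_id = label("Health")
    hp = next(data for lid, data in entity if lid == health_id)
    print(hp)                       # {'hp': 100}
    print(label_table[health_id])   # the name is still recoverable: 'Health'
    ```

    The global lookup-table and datablock packing described in (2) go well beyond this sketch; this only shows the label/index half.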


    All kinds of independent state can be handled in a single query if you use these in the correct way, with little to NO memory referencing. The literal ORDER of these could be parsed ONCE, and from that moment on, all execution would be instantaneous, knowing the exact place in memory to access, with the added flexibility of being independent of processing order on the CPU -- and also stateless by default.


    Too good to be true?


    -- Maybe Unity should try me on it.
     
    Last edited: Jul 20, 2020
    eggeuk and GliderGuy like this.
  11. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    5,591
    Boy that's long, subscribing to come back reading it.
     
    MegamaDev, GliderGuy and awesomedata like this.
  12. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    Data-Oriented Visual Script -- DESIGN (as of drop 10)

    So, after going from drop 6's stackable, purely "data-oriented" awesomeness to the somewhat more recent insanity of drop 8, I nearly gave up on Visual Scripting and wrote my own solution. However, I have decided to put a little more faith in Unity and see what I can do with the latest drop (drop 10) to salvage the UX in terms of using it (at least _somewhat_) for true "Data-Oriented design" -- which was quite hard (but not as hard as with drop 8).

    So in my neverending attempt to evolve Unity's UX, I came up with the following image to mockup what I _think_ could be a workable design:







    So the structure of a "language" -- as I mentioned above -- should be legible in general.





    As you can probably tell, the concepts of a "Subject", "Verb", and "Main Idea" are still present from the UE4 example.

    The difference is that, unlike UE4, Unity is much friendlier with a "stackable" / "vertical" approach -- thanks to its "data portals".
    That being said, there are two issues with this:
    First, I am hoping there is no performance hit due to heavy use of portals.
    Second, I must still (unfortunately) make heavy use of a "portals" concept (like in UE4). This is a problem since portals can link to and from _anywhere_ -- yet are still clearly dependent on other scripts/data imports.


    Are 'portals' bad for workflow?

    In short -- yes.


    But hear me out, please.


    Portals, as they are used above, are useful for readability. However, they also mean I cannot have (or keep track of) centralized "systems" that my scripts remain independent of. Instead, I would need to make a separate "foreach" "system" for every single script whose data changes I want applied to more than one entity of mostly the same type (which is almost _always_ the case) -- with the caveat that I must introduce additional logic to sort out the entities I _don't_ want the script to apply to (including when and where that application doesn't need to happen). In ECS's data-oriented design, all I would have to do is query for the component tags I _do_ want (and _remove_ tags from entities I _don't_ want considered in the data transformation).
    This context-dependent "additional logic" reduces the overall value of the ECS approach while dismissing ECS's strengths -- i.e. applying specific changes to all interested parties at once by letting interested objects "subscribe/unsubscribe" to a system by "tagging" (or "untagging") themselves with a component (or a certain _series_ of components). For example, a central "damage" system should ideally be able to query "all bad guys with the [aerial] and [spiky-hat] component tags" to damage Mario (the entity tagged as [player1]) when he's in the air and currently colliding with them from above (after trying to jump on this style of enemy).
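    That Mario example can be sketched conceptually (plain Python with made-up tags -- not Unity's actual query API):

    ```python
    # Conceptual sketch: a central "damage" system queries for tag
    # combinations; entities "subscribe" or "unsubscribe" simply by gaining
    # or losing a tag -- no extra branching logic in the system itself.

    enemies = [
        {"name": "spiny",  "tags": {"aerial", "spiky-hat"}},
        {"name": "goomba", "tags": set()},
    ]
    player = {"name": "mario", "tags": {"player1", "airborne"}}

    def damage_system(player, enemies):
        # Damage the player for every enemy matching BOTH required tags,
        # but only while the player is airborne (jumping onto them).
        wanted = {"aerial", "spiky-hat"}
        if "airborne" not in player["tags"]:
            return []
        return [e["name"] for e in enemies if wanted <= e["tags"]]

    print(damage_system(player, enemies))   # ['spiny'] -- spiky foes hurt Mario

    # "Unsubscribe" from the damage system by removing a tag -- no new logic:
    enemies[0]["tags"].discard("spiky-hat")
    print(damage_system(player, enemies))   # []
    ```

    Notice that the system never grew an if/else per special case; membership in the query is the whole mechanism.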

    The problem with portals is that you never know what data you're getting -- or where it's coming from (in the case of a _global_ "portal" system).
    In the case of a _local_ portal system (i.e. local to the _current_ visual script), you lose the ability to delegate tasks to more centralized systems (such as the "damage" system in the Mario example) that handle extremely specialized cases (without additional logic) by simply adding/removing "tag" components. This "loss" of delegation happens because you are always dealing with data _locally_, in an overly explicit manner.

    Portals enable micromanagement, but in practice they often just create spaghetti-networks that quickly become unwieldy. Instead of saying "portals are bad", I'd rather say "portals should be left to _local_ (per-Visual-Script) data transformation, while 'tag components' should be used to delegate to _global_ systems (which have their own _local_ 'portal' components)." Portals are bad for global dataflow -- especially when there is no other way to efficiently _delegate_ the data (i.e. to a more 'global' system -- which is easy to do with the 'series of tagged components' approach described in the Mario example above).


    Readability and UX in Unity's Visual Scripts

    As you might have guessed, readability in Unity's vanilla VS solution is still an issue -- just as with UE4's blueprints -- even with "portals" and "tag" components.

    For decent UX, we need some equivalent of a subject/verb/main-idea in the scripting workspace. To see the contrast in readability between a simple layout and a layout _plus_ additional visual aids, note the two orange lines above, which visually separate the subject/verb/main-idea, allowing one to quickly ascertain the main idea of each "block" of vertical scripting "stacked" on top of another.

    The brain processes "chunks" instantly -- and being able to glance down the "subject" column to find the one you are looking for helps _immensely_ in finding your place in your script. Then you can hop into the "verb" column to see what's _explicitly_ going on in that part of your script (and then, in the final column, what exact _data_ that outputs).

    Handy, right?

    If Unity made explicit (resizable) regions/columns of a certain pixel size and simply automated the "lines" (in my case, the orange ones) to "fit" or trace downward around the nodes dropped within the correct column/region for their respective part of the sentence, I could suddenly have a visual priority for processing things. This visual priority (which isn't present in code editors) works amazingly well for "grouping" things, letting the visual brain _instantly_ process them (without wasting brain-cells "computing" the node group you're looking for). This is a HUGE win for artists/designers.


    Artists Process Things Differently

    I might be particularly annoyed by this because, as I mastered pixel animation, I had to deal with clusters of colors moving around, and my job was to make sure these super-low-res groups of colors were instantly readable at super-low frame-counts. I think that is part of the appeal of classic pixel art, though -- your brain doesn't have to work so hard to process it (at least the stuff made for legit hardware limitations).
    This is why "simpler is better" for anyone not trying to "savor" all the individual details of a UI at any given moment. Most of the time, you only care about whether the _result_ is instantly gratifying (or not) -- and that's true whether you are a user, OR someone who consumes (or creates) pixel art.


    I came across a nice illustration of this principle -- but in a different form:



    This guy (and the kid) didn't say it -- but I will (since it's my own personal UX slogan):
    When you're dealing with UX -- "Remember the human."



    Visual Hierarchy -- UX and SCIENCE!!!

    The most pressing improvement to the Visual Script UX is that it really needs to return to its "Data-Oriented" design roots (i.e. systems for processing all entities at once -- via queries and tags, as described in the Mario "damage" system example). This is because data itself needs a hierarchical presentation as well as hierarchical processing.
    However, even when doing so, the below example for how to process values/methods/entities in a data-oriented design should still (visually and logically) apply.


    The example below shows two "paragraphs" back-to-back, each "stacked" node group being its own "paragraph" composed of node "sentences" (i.e. rows of subject/verb/main-ideas), with each sentence block "stacked" on top of the others.

    upload_2020-7-22_16-55-11.png

    This is a _VERY_ rough mockup showing the concepts of a decent 'visual hierarchy'.

    Although the orange is garish, when it's used alongside the yellow (slightly brighter) and darker (gray and black) tones, it forms a clear "Visual Hierarchy". You can think of a "Visual Hierarchy" as "layers" of a grayscale heightmap (values from 0-1), which helps the brain to instantly "group" and process information (based on relative contrast). The higher the black/white contrast, the more "visual separation" of the 'layers'. Remember, most people cannot hold (and then process) more than 3-4 (separate) pieces of information in the brain at a time. This is why a "Visual Hierarchy" needs to exist.
    But because of this very same reason, there also cannot be tons and tons of visual separation (i.e. groups of groups) because the visual brain operates in that 0-1 space I was talking about (again, much like a heightmap, with limited room for extraneous data).
    So with contrast (be it from color, luminosity, shape, or negative-space separation), the "visual grouping" produced should not exceed the number of pieces of data (i.e. 3-4) that the average brain can hold and process at any one time. Otherwise the dreaded "boredom" factor sets in, ruining the elusive "Flow" (and not just "visual flow" either!) and making your brain "give up" because it is now "working too hard" to process the provided information. This leads to "visual fatigue", which can probably be considered a form of "boredom" (in the same way that someone spitting out tons and tons of extremely dry facts very quickly becomes 'boring' to most people).
    That being said, over time (and with familiarity) this visual load will seem to "lessen" in a way (since your eyes will _eventually_ get used to placement/location of things), but this "lessened" load will be because you're relying more on "visual memory" now than "visual processing" to "see" the things on that terrible screen / UI in front of you.
    Think of it as processing every single letter into a word, one at a time, versus recalling the finished result of an already processed word - in one go. This is what I mean when I say the brain is "grouping" stuff. -- It is trying to simplify it.
    Into something meaningful.

    This "visual memory" is the ONLY WAY anyone can ever truly use something like Maya (without going nuts) -- It is also why so many people are so terrified of Houdini -- i.e. "ALL THE STUFF!!! -- What does it MEAN????"


    This is where a nice "Visual Hierarchy" can save the day.

    "Visual Hierarchies" exist in nature (objects, ground, sky, horizon) -- they need to exist in UX too.

    Since the _data_ is the most important element you are working with, DATA needs to be central to the UX as well as the workflow.
    This means that whenever DATA changes or is modified, it needs to be visually-clear and well-represented in the "Visual Hierarchy" that you are working with (and changing) DATA.
    Visual-flow of data changes and representation needs to be heavily considered in the hierarchy -- It cannot be too distracting either. (Again -- those oranges/yellows would make your eyeballs bleed if they were being used seriously -- I only use them as a point of illustration to show visual hierarchy clearly. The gray I used for "highlighting" the DATA input/output nodes would probably be a bit nicer to work with.)


    Script Flow -- in a Visual Hierarchy

    The flow of the above script emphasizes data by featuring _only_ the stored input/output prominently (and therefore "highlighting" it with that thick gray outline -- which could possibly be a custom-color tag, letting the author right-click it and select a "tag" to feature that particular local data node prominently in the script; very useful in complex scripts doing fancy stuff with only one main set of localized data).
    The processes (and processing) of the dataflow are considered secondary.
    These should also be considered globalized dataflow.

    "Dataflow" should be localized to Visual Scripts (as it is now), with a globalized "link" through a "tag component"-styled system. This would ensure that external methods/functions are not generally depended upon.
    Methods such as OnUpdate and OnStart, or "methods/functions" like the GetInputManagerAxis or LogMessage nodes that "transform" data, should do that transformation independently of the Visual Script that is currently executing. Since these "transform" the data, they should be featured (visually) by location (i.e. placed in the center of the script that is doing the transformation), where they actually _transform_ the "imported" data from the portals (located in the "subject" column). Then, when the data requires further processing, it is "exported" to a portal via placement in the "main-idea" column.

    It might even be neat to drag a function node to this column to automatically generate its "output" node (and then reposition the function/method node back in the "verb" column).

    Then, when data must be "handed off" to another system, the "verb" column would have a node to "attach" a new "tag" component (or series of tag components -- an ECS component type that does not currently exist), letting the globalized SYSTEM handle it based on the tags you've chosen to apply. The globalized SYSTEM, however, is just another Visual Script that deals in "tagged" _components_ to alter data in a specific way, rather than resorting to explicit ifs, ands, ors, or elses -- which quickly get unwieldy!!
    These can, technically, exist in _any_ Visual Script, as all Visual Scripts should really be considered the "Systems" of ECS, while "GameObjects" should be considered the DATA "Components". However, "tagged" components are just a kind of low-level query data that should not have DATA associated with them. They are, after all, just query strings that let you sort and instantly change the behavior of gameobjects. Sadly, "tagged" components (with this use-case) do not currently exist -- only "data" components do.


    Subtlety is possible in Visual Hierarchies

    The "Pages" concept (on the far bottom-right) keeps your individual scripts from becoming spaghetti-nightmares when you're only dealing with a small number of data transforms. It gives you a nice "visual flow" (without losing your place) while letting you quickly label your "pages" of data so you can find where you are. Just click the + to add a new script page and toggle between pages; right-click to delete a page. You can also name the title of the page there (i.e. "importing"). You could have page 1 be the "import" page and page 2 be the actual "script" page that references the imported data, if you like -- but I honestly think this defeats the purpose of writing "sentences" that have an intuitive 'flow' to them (and it puts you back in that 'programmer' mindset of anything-goes-as-long-as-it-'works' -- even if it promotes bad programming practices).
    At times though, if you simply want a visual separation on the same page (akin to paragraphs on a standard "page" in a book, that don't diverge too heavily on subject-matter), you could insert visual "spacing" of the "main-idea" of that particular "paragraph" with a "separator" line.

    Enter the Yellow line (and its physical spacing).

    This line is meant to be more prominent than the orange ones, and lends itself to letting you know, visually, where your overall main idea (the current 'paragraph' or subject of focus) ends.
    This line is combined with physical spacing of the nodes (preferably automatically) on the vertical axis, allowing an extra-nice visual separation (leading to additional contrast) in a script's visual hierarchy (and adding a rhythm to the flow -- which is absent in most actual "code").


    When you feel like your script has gone on too far vertically, simply make a new "page" and continue "writing" your script where you left off on the previous "page" -- as if you were writing some actual thoughts in a book.

    Imagine that -- programming and thinking. Working hand in hand.

    Eerie, isn't it?


    Hopefully Unity sees this and implements some of these ideas!

    -- I cannot fathom scripting without them!!
     
    Last edited: Jul 23, 2020
    eggeuk and GliderGuy like this.
  13. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    5,591
    Have you tried programming with the old rpg mk 200x?
     
  14. Nyanpas

    Nyanpas

    Joined:
    Dec 29, 2016
    Posts:
    379
    I'm a linguist by degree. I always see code in terms of syntax and semantics. The same principles that apply to human languages hold true for any language you want to learn, and they make learning languages a lot easier.
     
    awesomedata likes this.
  15. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    Oh god. I've heard horror stories about one of those, though I don't remember which one.

    But no, not personally.
    I did, however, play around with a couple of different versions of the RPG Maker engine (PS1 and RPG Maker 2000) and had a bit of fun with those, but no, I never got too serious with them. I had already become a pro at Game Maker back in that era of my life, and so I was already looking for something more.


    Honestly, I try my best to even remove "syntax" from the equation when dealing with scripting, wherever I can.

    In fact, @neoshaman made a great point earlier about the fact that there's a difference between "logic" and "syntax".
    Whether they like to admit it or not, most programmers are always fighting with that distinction, because "programming is logical" -- except... it's not.

    Programming is overly-complicated -- and needlessly so.
    The worst part to me is that so-called "OOP languages" don't even solve the issues they set out to solve, such as is the case with languages like C# (see my OOP rant above).

    Honestly, while I understand your perspective -- I believe that even semantics are overcomplicated.

    You generally need just three parts to any "sentence" -- Subject, Verb, and some Object -- regardless of its relationship to the Verb or Subject -- and the general simplicity of context and/or tone easily defines in what way that Object applies (or what applies to that and other Objects), defining the scope of the subject/verb/object. For example, a caveman carrying a club, pointing at a cave-woman saying "You, Me, Babies!" while pointing at the ground is pretty clear about what he thinks is going down -- Despite only using three words, his intentions are very clear. I don't see why programming can't be just as clear (in a less-sexist way, of course) while also being less-wordy.


    Visual Scripting, for example, has almost exclusively been focused on making semantics "less wordy" rather than on making logic "more clear" -- and this distinction is extremely important.
    Logic, at the end of the day, is what code is supposed to boil down to -- That is, how the "data" actually "behaves".


    Just my $0.02 -- It seems a lot of people's opinion differs on that for some reason. D:
     
    Last edited: Aug 24, 2020
    eggeuk likes this.
  16. Nyanpas

    Nyanpas

    Joined:
    Dec 29, 2016
    Posts:
    379
    You cannot write compiling code without correct syntax. Try replacing {} with () in C#. Syntax errors will always be important to take into consideration until there is an interpreter that will also deal with sloppy writing.
     
  17. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    5,591
    What he meant is that it can be abstracted away with an interface. Node coding has syntax in the absolute, but you can't really do "colorless green ideas sleep furiously" due to the baked-in affordances of how you interact with it -- so in a way it removes the exact concern you used as an example. You can omit ";", therefore syntactic concern is abstracted away from the user.

    That's why I mention rpg maker too, but I need to elaborate with visual ...
     
    MegamaDev, Kennth and awesomedata like this.
  18. Gekigengar

    Gekigengar

    Joined:
    Jan 20, 2013
    Posts:
    485
    Hey there -- seeing the new VS Roadmap, I am starting to lose faith in Unity's VS future. Dropping Bolt 2 is a mistake; merging DOTS and MonoBehaviour VS into a single workflow is a mistake. Their "Snippet Nodes" are not the way to go. This is not performance by default.

    I say that instead of trying hard to tell Unity they should be hiring you in a lot of these VS threads, you should do what Ludiq did: create your own solution, prove it to be the best, and get Unity to acquire it. I support you, and the community probably would too. I would love to see this come into reality -- you're sitting on a gold mine if all of your theories are indeed true.
     
  19. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    Thank you for that. -- It really means a lot.
    ...It just so happens I am making a few strides in that particular direction. :)


    And yes, I totally agree about the "Snippet Nodes" thing for sure. This is naive. Inline code is (initially) a convenient way of programming -- in practice, though, one quickly realizes that it SCALES like S*** -- and even with DOTS, IT'S NOT PERFORMANT (by default) due to the affordances provided by this design. There are ways inline performance could potentially be improved (in a limited way) without ECS-like structuring, but the overall "affordances" of the inline nature of the "Snippets" design actually _discourage_ performance (by default).

    Honestly, I can _sort_ of see how Unity could merge the DOTS / ECS workflow and something like Bolt 2 together (they are not inherently incompatible workflow-wise, if following the Bolt 2 design)
    -- BUT! --
    The question is -- should they?



    But before I go any further:

    (To avoid this becoming yet another thread debating the direction of Unity's VS, I would prefer to discuss what this decision means in terms of the overall structure of a language.)​


    So in that vein:
    By not focusing on the language aspect of how their tools communicate with their users (since all tools have a "language" they speak), @Unity is being naive in their approach.

    Initially, converging the workflows of the two tools sounds like a great idea -- BUT -- offering both workflows in the same tool will inevitably lead to confusion as to which workflow is _suggested_ (and therefore most _supported_ -- i.e. in America, is it better to know English or Spanish?). A separate (siloed) tool definitely looks more appealing to the end-user (in terms of an understandable UX) because of the language it speaks.
    For Unity, a siloed approach could be both a blessing and a curse, depending on their technology goals in this specific case. But for the user to understand that technology, not having that siloed UX would inevitably lead to a nightmare experience (i.e. "Go back to Mexico if you want everything to be in Spanish! -- Americans speak and read English!").

    Does anyone here remember the "C# versus UnityScript" debate?

    It will be like that for users -- except much more vague. Rather than "This language looks easier -- I'll use this!", you'll have "Which nodes do I use for this task? -- Do I need to focus on scripting in snippets with difficult-to-use C#? -- Does this mean I need to know how to code? -- How do I avoid writing code? -- What is the purpose of this tool??? -- etc. etc." Talk about turning off new users (especially artist/designers -- the very users Unity plans to cater to).

    Unless Unity fully understands the necessary design requirements first (from the user-perspective), this problem can't be resolved properly -- Unity will waste a lot of money trying to do so.

    "I want Bolt 2" doesn't help them here at all -- They have to pick and choose the features that align with their own goals... and their own rationalizations of what USER goals are (and make those two align somehow.)
    @Unity sometimes seems to forget -- "People don't know what they want. Except when they do."
    But even if they do, it's not like people inherently know how to design their own tools. To design tools, people have to know about tool design to begin with -- and so far, @Unity doesn't seem to have anyone who understands this well enough overall (leading to many subpar tools). Unity has been taking the easy road and trying to get users to inadvertently design its tools/featureset for it. But when the features users want don't align with Unity's goals (or when Unity wants features a majority of users don't want or need), very important features and workflows tend to get ignored or overlooked -- i.e. Timeline Events, and Mecanim's StateMachine nightmare UX/API (which was so bad someone had to build an asset just to sidestep its "intended workflow" -- see Animancer, if you don't believe me).

    Ultimately, knowledgeable (and creative) tool designers for practical situations are hard to come by -- but this is why I've offered my services, wisdom, (and knowledge) of tool design up until now -- for free.
    I love Unity -- its potential for being the ultimate tool for game design is improving every day. As a user, I simply want to evolve its tool designs because I am a designer (by choice) -- rather than a programmer (by necessity). We're going at a snail's pace with our tool designs in this industry when we can be going at lightspeed (with a proper understanding, of course!) -- and this extends far beyond Visual Scripting tools!

    And simply understanding the inherent structure of language, logic, and the communication of affordances is the key.
     
    eggeuk and mattdymott like this.
  20. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    TL;DR:

    All tools (that work w/data) have a "language"

    A tool's "language" is extremely important for fast/effective communication w/users.

    It's up to the tool designer to structure their tools with "language" and affordances according to the particular nature of the tasks the tool may be called upon to solve, and then to communicate these affordances well. This is really the essence of any truly elegant, simple-to-understand, simple-to-use UX.

    For more detail, see below.


    The "Language" of Tools (and Data!)


    A language is a set of particular 'rules and conventions' (i.e. specific operations applied to specific data in a specific way) applied by way of structural elements, such as 'words and phrasing' (i.e. data malleability -- that is, the particular operation flow / bridging / transformation on particular sets of data in a particular way), that results in 'ideas' (main ideas) arising from the inherent structure and organization (i.e. from "language conventions") that defines a particular idea -- and therefore scope -- in the process.

    If that makes ZERO sense to you, don't worry -- I wrote it from a tool's "data-flow" point of view.

    Now I'll try phrasing it a different way:

    A tool operates. Said operation must rely on a set of conventions to establish an expected operational result (i.e. main idea) when applied to the intended target(s), which inherently define the intended scope, plus the conventions that should be afforded to the tool's operational capacities.
    This "operational capacity" results in the ideal "language" a tool should ultimately use and be structured around.


    Photoshop's "Language"

    To understand the "language" concept better, let's start with Photoshop's "language" as an example.

    Photoshop's language is pretty simplistic.
    It revolves around only a few concepts:

    Layers (scope) -> filters/tools & masks/selections (operational capacity) -> pixel color "correction" results (operational result).
    or
    Subject (operational scope) -> Verb (operation + operational capacity) -> Main-idea (operational result)


    Starting to see a pattern yet?



    Blender's "Language" is very similar to Photoshop's

    Whether you're UV-ing, Sculpting, or Modeling in Blender -- it's universal:
    Again, depending on the operational conventions used (and defined scope), whether manually or automatically, a result is always produced. This result is the "main-idea" of the tool's operational capacity + operation combined, applied to a particular scope (i.e. the tool's "Action" -- as it is commonly-called).

    If this sounds too simple to matter -- you are probably missing the point.

    While clearly ALL "tools" are supposed to 'operate', most BIG and multi-purpose "tools" (i.e. Blender / Unity) only tend to 'function'. Thankfully, Blender intuitively realized this and changed its ways from 2.80+ -- Yet Unity still seems to miss the point. So let me make it more clear:


    function != operation

    While Blender does have many unique "functions", it performs only one 'operation' -- 3D modeling / visualization.
    Photoshop is similar -- except its 'operation' is 2D photo-editing.
    Unity's 'operation', however, is much less defined right now because its 'operational capacity' and scope are seemingly so vast. The powers-that-be at Unity simply don't understand the difference between 'operation' and 'function' when it comes down to their tools' 'scope' and 'operational capacity'.
    Unity probably believes that since it has many functionally-different 'tools', it is _not_ a traditional "tool" due to the many functions of its tool. If this is not a self-fulfilling prophecy arising from some manager/administration/leadership's failure to understand its own product (or the concept of tools) thoroughly enough, it is confirmation bias at minimum. As I established above with Blender (which is also so functionally complex as to have had its own game engine included at one point!), there is a HUGE difference between operation and function. Blender found its roots and went back to operating as an art and visualization package, letting its operational capacity and scope arise organically from there. It is no longer an art package + game engine -- it is a 3D modeling and visualization tool.

    Unity, too, is not a 'game engine' -- Unity, imo, should operate as a powerful, user-friendly, game and visualization design and production tool. Anything beyond that is too much. Anything less than that is not enough.


    What's "Language" got to do with Tools/Data?

    Despite every tool (and "language") having its own unique operational conventions, the following pattern always exists:

    [subject -> verb -> main-idea]
    or:
    [operational scope -> operational capacity + actual operation -> operational result]

    The above pattern is what I've previously called a "Data Sentence". Regardless of the specific conventions of a language, the underlying structural "language" (and therefore the logic and operational conventions) that all tools must follow (and be built upon) should never change in and of itself. If anything must change in the tool's fundamental operation, then the entire "language" (i.e. operations and operational conventions) of the tool must change. You should consider these Data Sentences the structural "DNA" of any digital tool you'll ever want to build -- it is that important to understand.
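
    To make the "Data Sentence" pattern concrete, here is a minimal sketch in plain Python. All names here are invented for illustration (this is not any real tool's API) -- it simply shows the [scope -> operation -> result] shape applied to a Photoshop-style layer:

```python
# Hypothetical sketch of a "Data Sentence":
# [subject -> verb -> main idea], i.e. [scope -> operation -> result].

def data_sentence(scope, operation):
    """Apply an operation to a defined scope; the return value is the 'main idea'."""
    return operation(scope)

# Subject: a Photoshop-style "layer" of pixel values (the operational scope).
layer = [0.1, 0.5, 0.9]

# Verb: a filter drawn from the tool's operational capacity.
def brighten(pixels, amount=0.1):
    return [min(p + amount, 1.0) for p in pixels]

# Main idea: the operational result.
result = data_sentence(layer, brighten)
print(result)
```

    The point of the sketch is the fixed shape, not the specific filter: any tool operation -- a Photoshop filter, a Blender modifier, a shader blend -- can be read as this same three-part sentence.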



    ShaderGraph -- A more "complex" scenario for 'Language'

    Let's take a seemingly "complex" (visual-scripting) scenario -- shaders -- as another example.

    ShaderGraph data flows with heavily parallel and interdependent data relationships.

    However, the "language" is still quite simple:

    You have a defined operational scope (i.e. a particular color and texture), operate on this scope (i.e. using operational conventions such as combining "add/multiply/etc" operations within the "color or texture" data operational capacity), and you get your operational result: the "main idea", which is textures operationally-blended with the color/texture data.

    "Boom."
    Subject (operational scope), Verb (operational capacity + operation), and Main Idea (operational result).
    Just as promised.
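
    The shader "sentence" above can be written out the same way. This is a purely illustrative Python sketch (not ShaderGraph's actual node API) of sequential add/multiply operations on RGB tuples:

```python
# Illustrative only: a shader-style "data sentence" as plain sequential ops.

def add(a, b):
    # Component-wise add, clamped like typical color math.
    return tuple(min(x + y, 1.0) for x, y in zip(a, b))

def multiply(a, b):
    # Component-wise multiply (a standard blend operation).
    return tuple(x * y for x, y in zip(a, b))

base_color = (0.2, 0.4, 0.6)   # subject: the operational scope
tint       = (1.0, 0.5, 0.5)

# verb(s): operations combined within the color/texture operational capacity
blended = multiply(add(base_color, (0.1, 0.1, 0.1)), tint)
print(blended)                 # main idea: the operational result
```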



    What about Complexity?

    All "Languages" are designed with Affordances.

    The sticking point with data-flow and the level of abstraction has always been the (understandable) fixation on the main "operation" portion of the problem (the "verb", as I casually call it).

    It is human nature to want to attach the 'action' itself either to the thing that performs the action (or to the thing upon which the action is performed).
    Logically, you cannot have the "main idea" without "the thing" _and_ "the thing that happens/exists as a result," right?

    The problem with this mindset is that all languages have their nuances.

    This is where things get sticky for most people.
    While the exact "phrasing" and "pronunciation" can differ greatly from one language (or one tool) to another (i.e. game logic vs shader code), the exact details of the resulting form and structure they should employ should be derived from the particular "Affordances" the particular nuances that the tool (or data) lends itself to.

    In other words, since shaders are all about combining images / colors / visual-effects, the "phrasing" (the nuances and language conventions used to form the shape of the "words" -- i.e. the operations [verb] and operational capacity [subject]) that are used in the particular data sentence / language structure should reflect the affordances the language needs in order to operate -- i.e. they should be "phrased" with the idea of the subject (scope) combining (and being combined with) many different sources of -- in the shader example -- images, colors, or effects.


    An Alternative ShaderGraph??

    Keeping the "affordances" of the language structure of the tool in mind, ShaderGraph would've been much easier and better to work with as a "task-list" (i.e. Layer-based tool) rather than as a complicated spaghetti-node network. Think Substance Alchemist or Quixel Mixer -- but with shader commands / references -- and nice thumbnail previews!
    ShaderGraph's structural and operational conventions -- its "language" (that is, its heavily "branching" interdependent visual structures, with no real / true logical hierarchies -- only hierarchies of operations) -- clearly do not lend themselves well to the actual affordances ShaderGraph needs for combining images / colors / textures from different sources in a clear and sequential way, which makes it less friendly when trying to understand complex graphs. The node graph, for example, was clearly chosen for ShaderGraph only because it was a convenient design copy/pasted from other popular tools at the time. The abundant hierarchical "affordances" a node editor provides have no relevance to the kind of data or data operations that a shader requires -- and as such, these affordances were wholly ignored in the tool design process, resulting in a "visual" tool that makes little "visual" sense, especially on more complex graphs.

    Beyond that, a node graph actually (inherently) convolutes the primary operations as they relate to ShaderGraph's overall operational capacity. In English, that means ShaderGraph's TRUE "language" structure (i.e. its data-referential and sequential operational format -- or, in other words, the actual conventions it uses for its operations / operational capacity) is ignored in favor of the hierarchical / spaghetti-reference structure it takes on in its current form. This is quite detrimental -- it convolutes the user's understanding of what is actually happening behind the nodes -- and therefore shouldn't have even been considered (much less used) as the central focus for designing the data flow for a sequence of (again, sequential) shader operations. This is its true "language" -- and its affordances and structural conventions should have been based upon that sequential set of operations and simultaneous data references.
    ShaderGraph simply imports data from some other place (perhaps a dynamic blackboard?) and sequentially operates on that data in a linear fashion. Who cares where that data comes from three nodes ago? When you're just glancing at a list of sequential shader operations, all you care about is the operational result.
    As you can hopefully see, true hierarchical branching is relatively rare in shaders, and the operational result of the tool could have (just as effectively -- and probably a lot more easily and legibly) been visualized as a set of operations (i.e. Layers) with a dropdown that links to the texture input / sources from viable candidates (i.e. a dynamic blackboard). In the end, since you can only import an RGBA value from a Vector4 -- not Vector2 -- listing only those [named] inputs in the Layer's dropdown to blend with would be much more legible and easy to understand than wrangling huge spaghetti node strings from one side of your graph to all over the place in more complex shaders.
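
    A rough sketch of that "task-list" alternative, in plain Python with invented names (this is a design thought-experiment, not any shipping tool): a linear stack of layers, each pulling a named input from a shared blackboard instead of wiring node-to-node spaghetti.

```python
# Hypothetical layer-stack shader: sequential operations + named inputs.

blackboard = {
    "BaseColor": (0.5, 0.5, 0.5),   # named sources a layer's dropdown could list
    "Tint":      (1.0, 0.0, 0.0),
}

def multiply(a, b):
    return tuple(x * y for x, y in zip(a, b))

def add(a, b):
    return tuple(min(x + y, 1.0) for x, y in zip(a, b))

# Each layer = (operation, name of the blackboard input it blends with),
# read top-to-bottom like a Photoshop layer stack -- no hierarchy needed.
layers = [
    (multiply, "Tint"),
    (add,      "BaseColor"),
]

def evaluate(start, layers, blackboard):
    color = start
    for op, source in layers:       # strictly sequential, like shader code
        color = op(color, blackboard[source])
    return color

result = evaluate((1.0, 1.0, 1.0), layers, blackboard)
```

    Where the data "comes from" is a name lookup, not a wire -- which is the legibility argument above: when glancing at the stack, you only care about the sequence of operations and their result.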

    Hopefully this provides a little more insight into how "data" and "tool" design should work -- and why these much more 'subtle' aspects need to be considered in the overall structure/design.

    Until next time.


    ;)
     
    Last edited: Sep 21, 2020
    eggeuk and mattdymott like this.
  21. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    General thoughts about Visual Scripting progress:

    I was excited when I saw Drop 6 in DOTS VS -- but then it changed direction entirely after one (unreleased) build. At that point, sh*t hit the fan for me. Instead of something new and original -- something I could actually make games with for once -- it was looking a lot like Bolt 1 suddenly. It was as if @Unity had trapped lightning in a bottle -- and then let it go as quickly as they caught it because they were afraid it might somehow 'shock' them. :/

    I think @Unity has a decent "skeleton" of a plan going forward with their position on integrating the UI and underlying technologies for graph tools, but the "meat" of this plan (and its eventual shape) can still make or break it.


    Scripting Workflow Concerns:

    User workflows must remain both data-oriented _and_ flexible to be extremely performant (even if this is all under the hood). But they also need to be user-friendly and intuitive when the user might want to modify their behavior.

    My biggest fear? I don't get the feeling that anyone @Unity really has a sense of the overall workflow just yet. This is why I am working on my own Visual Scripting tool's design (considering Unity technology at its core). I am familiar with every workflow inside (and outside) of Unity as far as where design friction exists -- so if Unity doesn't see the same friction I do, and the more inflexible the underlying system is (or the harder THAT is to modify), the harder I must work to redesign the workflow in my own vision.


    Visual Scripting workflows are notorious for design friction -- even the few that have been battle-tested in recent years. There are always unexpected (extremely varied) use-cases _and_ performance-sapping routines that target EVERY single (existing) Visual Scripting language's Achilles heel.




    Examples of the Achilles heel of various Visual Scripting languages:

    • "State-based" or "Flow-based" languages (like Playmaker) are extremely "containerized" and can be extremely inflexible without constant coding (in larger projects), requiring specially-written nodes to extended.
      The Achilles heel for Playmaker's design (for example) is mostly in the ergonomics of scalability.


    • "Unit-based" or "Snippet-based" languages like Bolt 1 or Fungus or even GameMaker whose "Units" (or 'code' "snippets") are extremely easy to extend, also ultimately tend to face extremely performance-intensive requirements very early on, since these kinds of languages require an unpredictable data scope in order to remain "flexible" and 'easy' to use and extend. While this can sort of make them more "scalable" than "State-based" languages to an extent, performance (thanks to unpredictable structural design and data scope) is often the bottleneck that prevents these from being viable in all cases, which ends up being the Achilles heel of something like Bolt 1 -- even with a Job-based backend.


    • "Optimization-based" languages (like Blueprints or even art-tools like ShaderGraph and/or Houdini) that use an (essentially "blackboxed") set of nodes are generally designed this way to be more "performant" than more freeform "Unit-based" languages like Bolt 1 -- BUT in order to do that, they ALSO require you to know and follow strict rules and (usually extremely unclear) methodologies that (ironically) requires you to study and learn how to use "magic" combinations of nodes to do certain tasks, which can only be learned by (sometimes years) of intense study, learning each and every ("blackbox") node, its capabilities, and the (ironically internal) shortcomings of the node's processing methodologies in regards to performance -- inside and out -- in order to use them in a truly "performant" way.
      Many intensive data type conversions are inherently required to keep things "performant", but will (more-often-than-not) actually bottleneck much of the performance gains you would receive from strict optimization, syntax, and data conversion requirements due to the simple number of conversions required in a system, as well as the increasing bloated complexity that quickly devolves from this requirement in both project-size and memory and CPU performance requirements too, as wildly-duplicated data is shifted around to be converted non-stop.
      While caching and pre-processing data can help complex graph performance -- this kind of thing cannot always be predicted beforehand (especially when it is necessary at runtime) and cannot be relied upon without clever design affordances (i.e. by processing variations of the whole graph once and "shrinking" the data required to process it through an "interpreter" later) or by focusing on convenient "cache" points in the design (i.e. Houdini caches open-world geometry that takes hours to "bake" so that it can be further processed in the graph), which, again, has very little use in realtime applications.

      The true Achilles heel of "Optimization-based" VS languages is a combination of the learning curves required by the "blackbox" nature of the nodes and methodologies themselves, and the fact that node behavior often cannot be altered or tweaked without resorting to either a (slower) interpreter-style implementation (Bolt 1) and/or coding in the "original language" (to keep performance) -- which defeats the purpose of any "Visual Scripting" to begin with. This is why Bolt 2 wouldn't work. Still, workflows (and even the systems using the 'optimized' nodes for practical applications) are often slower, more complex, and require more data "conversion" operations than would otherwise be necessary with more malleable nodes. The nodes take a performance hit regardless of the "optimization" approach (which can be huge in some cases), and you end up with bloated and convoluted systems after the inherent complexity is silently shifted onto your shoulders (in order to maintain performance). "Designing" performant node combinations is simply not your job as a designer. This is the true Achilles heel of this kind of Visual Scripting editor.
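
    To make the interpreter-style overhead described above concrete, here is a purely illustrative Python sketch (all names invented -- this is not any real VS tool's runtime). Each "unit" receives an untyped bag of inputs, so every step pays for generic dispatch and packing that the hand-written equivalent avoids:

```python
# Hypothetical "unit-based" graph interpreter vs. hand-written code.

class Unit:
    """A generic node: inputs arrive as an untyped dict (unpredictable scope)."""
    def __init__(self, fn):
        self.fn = fn
    def execute(self, inputs):
        return self.fn(inputs)

add_unit = Unit(lambda i: i["a"] + i["b"])
mul_unit = Unit(lambda i: i["a"] * i["b"])

def run_graph(x):
    # Interpreter walk: dict packing/unpacking per node, per invocation.
    s = add_unit.execute({"a": x, "b": 2})
    return mul_unit.execute({"a": s, "b": 3})

def run_native(x):
    return (x + 2) * 3   # the same logic, with a fixed, known data scope

assert run_graph(4) == run_native(4) == 18
```

    The logic is identical; the difference is that `run_native` has a predictable scope a compiler can optimize, while `run_graph` must stay generic to remain easy to extend -- exactly the flexibility/performance trade-off described above.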

    A language that places a major priority on performant, intuitive, speedy iteration for designers is something that just doesn't exist yet. Solutions designed around these exact principles can get close (Bolt 2 and the early version of DOTS VS were a good start, though for reasons already mentioned they didn't pan out, thanks to a design limitation or three), but a proper solution like this has knowledge requirements about design affordances and deep-level technology workflow interrelationships that, as far as I've seen, you won't ever find in your standard college grad.

    I think Unity (and everyone else by now) really hates me for saying this kind of stuff -- but I can't help it.
    It really breaks my heart to know how a single failed design is all it takes to crush a person's hopes and dreams.


    I take that very seriously.
     
    Last edited: Nov 4, 2020
    eggeuk and MegamaDev like this.
  22. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    8,637
    Can someone explain in a few short sentences what this thread is about?
    I scanned briefly through it; I can only deduce so far that there are some theories, crossed with ramblings, without backing in practical experience.
    What I miss in all of these, is grouping and creating subsystems nodes.
    Of course, I may be completely missing the point of the subject.
     
  23. DreamingImLatios

    DreamingImLatios

    Joined:
    Jun 3, 2017
    Posts:
    2,262
    I've only skimmed through but I will try to summarize from a programmer's perspective.

    TL;DR
    DOTS re-evaluates the fundamental way code is written for games and brings a lot of benefits. A similar approach can be taken in visual scripting and this thread tries to explore that. Some early visual scripting prototypes aligned with some of the ideas presented, but either took a different direction or were dropped.

    More elaboration:
    When you break apart programming concepts and don't fixate on existing standards and conventions (OOP), you can find something that may be a more optimal solution (DOD and ECS) -- at the cost of it being different. DOTS leads to not just better performance, but also better control over control flow and execution order, better control over deferred execution, and better decoupling throughout the solution. However, it comes at the cost of being initially unintuitive to a lot of people used to OOP. Theoretically, anyone new to programming could grasp DOD concepts just as easily as (if not more easily than) OOP, were it not for the much larger pool of resources aimed at OOP (and, in the case of Unity, the fact that DOTS is a pretty advanced form of DOD that requires more background knowledge than is practically necessary to learn DOD programming from scratch).

    In a similar way, by breaking apart visual scripting -- examining it both as a visual tool and as a language -- there may be new ways of designing visual scripting that deviate from modern conventions. Such a solution could be more intuitive, flexible, and efficient compared to what exists today. This thread explores this by breaking apart both the visual elements and the concept of languages in general as a means to express desired actions within a tool.

    There's also a lot of rambling about how Unity may have started off in a direction that aligned with the ideas presented, but then stopped. I don't really work with designers who need visual scripting, so it is hard to be invested, but I can sort of relate to the frustration behind this:
    This happens in the programming world with DOTS too. While I really like the core of ECS in Unity along with jobs and Burst, a lot of the higher-level functionality (Physics, Audio, Animation, Networking, etc.) has been pretty disappointing. I have been building my own solutions for these things. Why?

    There's an old saying: "If you want something done right, do it yourself."
     
    GliderGuy, MehO and awesomedata like this.
  24. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    While I cannot explain as clearly as @DreamingImLatios -- I'll offer a bit from my side as well:


    This is a semi "blog-like" thread I've created for informational purposes.

    As an artist/designer -- I hate that all my tools tend to suck. I've found that, in general, there is ZERO information on the internet about what makes good Visual Scripting tools. Artists like me suffer constantly because programmers rarely have the requisite skills / design experience to put together decent (visual) toolsets with artists/designers in mind -- toolsets that actually speak our "language". As a result, this thread is basically a record of my thoughts/experiences/insights about Visual Scripting (and Visual Toolset Design "Language" as a whole) -- in particular where it should be, but isn't currently, perceived as a "Language" by programmers. I try to explore different ways of thinking or talking about things that are too "standardized" to be looked into by those who don't generally "need" to -- i.e. programmers.


    This thread includes some generalized results from my own practical experiences with Visual Scripting and the tools I've used over the years -- and in particular I aim to show how it all relates to potential data-driven visual tool implementations.


    This is heavy, weighty, content -- and is not meant to be "skimmed-over" in general.
    These are holistic lessons -- not specific, prepackaged "modules" of step-by-step information with a "do-it-yourself!" section at the end. The insights shared here are learned the long way around -- and can't be understood too easily if you haven't done some of the footwork yourself to bring the point home.

    This is (most likely) why you didn't get much from it.




    This was intentional.

    I'm actually working on providing that aspect as we speak, though -- Stay tuned.



    No -- I think you're on the right track.

    Again -- I intentionally didn't get too deeply into that bit, as it is important to what separates my design from anyone else's. Since you can't copyright an idea though, I'd rather not have anyone else know all my secrets -- at least not yet.

    But to put any curiosity at ease -- I share enough that, if you really understood the ideas holistically (like I suggest), you could easily put the pieces together yourself (with enough effort), and form your own (equally badass) system.
    However, looking at my pieces in a localized, step-by-step fashion won't get you there -- you've got to think globally.
    That's all the hints I'll provide for now though. I am probably not as smart as you are, so I need to hold on to what few advantages I do have in the world for now. But only for now.

    If you want something more explicit, you'll just have to wait. :D
     
    Last edited: Nov 3, 2020
  25. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    8,637
    That is a fair point. :)

    I am good with your answer, being relatively short and concise. Thx :)
     
    awesomedata likes this.
  26. MehO

    MehO

    Joined:
    Apr 23, 2016
    Posts:
    15
    @awesomedata Did you consider publishing a package on the Asset Store?
    Best way I know to validate an idea is to show it to the world (the market) and let it decide whether it's suitable or not.

    Personal thought: I need to come back at some point later to read everything you wrote. I love reading about designs and architecture, but man, you are quite verbose.
     
  27. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    Lol -- Fair enough!

    Though, also to be fair, there's A LOT of information to unpack (in a simple way) when you're essentially describing how to (visually) portray (and process) LOGIC and DATA properly. Those are pretty deep subjects -- even without the "visual" aspect.


    I've got something up my sleeve for this. No worries. :)

    And while I recognize this stuff is great (in theory), I know I need to bring some of the density down into a full-on (provable) prototype showing these concepts in action. I simply figured somebody would beat me to the punch as soon as I shared the info.

    Any DOTS programmer interested in assisting me with this -- I am totally up for a collab!
     
    Last edited: Nov 22, 2020
    eggeuk, stuksgens and Lukas_Kastern like this.
  28. Rujash

    Rujash

    Joined:
    Jan 3, 2014
    Posts:
    29
    Your post in the Bolt 2 thread was great.
     
    GliderGuy and awesomedata like this.
  29. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    Thanks!! :) -- Sadly, I wish I remembered which one that was!! -- And what it was referring to!


    (I'm fighting multiple battles on the future of Unity's design here, so things can get lost sometimes!)
     
    GliderGuy likes this.
  30. Rujash

    Rujash

    Joined:
    Jan 3, 2014
    Posts:
    29
    This one, specifically the design issues when making a visual scripting tool and how it's organized; subject/verb/etc.
    https://forum.unity.com/threads/vis...update-august-2020.951675/page-2#post-6205770
     
    awesomedata likes this.
  31. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    Structural Debugging -- Data-Oriented Visual Scripts

    That's a great question -- and the answer is: "It would lend itself extremely well (probably even better) to the idea of debugging (by humans) because it would happen at the rate of human-based logic, readability, reasoning, and understanding -- not at the rate of understanding endless lines of meaningless instructions without any real (global or local) context or meaning toward the (usually more ambiguous) concepts involved."
    The assumption modern computer science has made is that dumb, localized, "step-by-step" logic is somehow superior to "global simultaneous access to containerized logic and reasoning" -- yet any supercomputer (with its thousands of CPUs) will tell you that is simply not true. In fact -- "global simultaneous access to containerized logic and reasoning" is exactly what the human brain is capable of -- and where computers currently fail us.
    However, if we give the brain even a little (visual) assistance to more smoothly access the (containerized) "scope" of operations, making the effort faster and more frictionless, you would be amazed at how fast the brain is capable of working out problems -- and the enormity of the data it can access (and process) simultaneously to help it do that -- letting one debug any issue, at any moment in time, with next to zero effort.

    In fact, a great metaphor for understanding the difference in debugging my way (versus the method of the status quo) is how "easy" it is to understand the matrix transformations of a 3d object that was moved, rotated, and scaled through calculations on paper -- versus actually using a 3d application like Blender and visually performing that operation with a widget to move, rotate, and scale the object. Clearly, you don't always need to know (step by step) what specific instruction causes the 3d object to move, rotate, and be scaled -- but you can clearly see when it was (properly) moved, rotated, or scaled -- and when it wasn't.
    Who needs (step-by-step) syntax to understand a problem when pure logic and reasoning will suffice? Having a tool that is structured around an idea of the human having global simultaneous access to the whole project's logic and reasoning (and behavior) isn't only 'good enough' (and ultimately is what fixes the most bugs already) -- but it is oftentimes many times faster too -- especially when combating the heavily-contrived organizational structures based around complex (and error-prone) syntax and styling details.


    ------

    To push this point further -- In general -- it is syntax-heavy language, along with many layers of abstraction (and therefore, obfuscation) that often leads to the majority of bugs in the first place. This is a failure of communication with the human brain -- not a problem with the step-by-step (logical) operation of the computer.
    As you should now be able to see -- modern-day debugging is "brute-forcing" technology to help it solve the wrong problems.
    To make the logical intent and pacing (and therefore the proper delineation of each element) of a game project frictionless and understandable to the mind is my ultimate goal -- and I assure you, it is a valid (and effective) goal to have, even when it comes down to debugging, since you are no longer debugging the computer -- you are debugging the logic itself.


    To go further on illustrating this argument, if you're interested:

    Continuing from my example above -- when your 3d object is suddenly stretched out, partially upside-down, and not at all where you intended it to be after operating the widget, it's pretty clear the widget's 3d matrix transform operation has failed. Rather than needing to see the exact computer instruction that failed to compute the proper transform operation in the proper way and order, one can still (intuitively) gather from what they've seen so far that matrix mathematics has failed them somewhere (as long as they know how matrix transformations should work mathematically, of course -- and what it might look like when they aren't done properly). At this point, the computer's computation process is not the problem anyway. The computer is 1000x more capable at math calculations than most users. The user's own (step-by-step) process of telling the computer exactly _what_ to calculate -- and when -- is what has failed. 80% of debugging is user error (i.e. not understanding the exact step-by-step logic being fed to the computer), and the other 20% has to do purely with confusion arising from syntax-heavy instructions and/or layers of (sometimes conflicting) behavioral abstraction in the design.
    In the matrix example above, the only recourse for the user to "debug" their work is to go back and double-check their logic (step by step) until they find the unintended flaw in the flow of their matrix transformation's operational logic. This can be an immense task in syntax-heavy projects with lots of abstraction (added to keep things 'readable'). On the other hand, if there is next to zero abstraction, they don't need to look _everywhere_ in their project to find the 'bug' in their logic 'flow' -- they need to look _exactly_ in the place where the mathematics were (logically) necessary to affect that particular 3d object in that particular way. That is _always_ where the operation actually performs transformations on the data in question -- at least, that's where it _should_ be, IF it were located in the proper scope to begin with.
    That's the trick to debugging, though -- scope is exactly where one should be looking, and the literal, step-by-step logic of your operations is exactly what you should be looking at. Always.


    Debugging is simple when you have direct control over (and frictionless access to) the global context and reasoning -- that is, the particular scope of the data operations -- for the step-by-step (logical) transformations on all of your data across your project. This, sadly, is not what typical programming languages offer. Being able to visually gather (or make sense of) logical operations quickly is critical to debugging quickly. Therefore, any scripting structure that visually (and smoothly) defines proper (logical) scope -- one that champions the visual flow of logical operations on the data, and funnels that data into a sensible scope (and therefore a proper logical context for behavior within that scope) -- has a huge advantage, whether said scope is local or global (or some bridged combination of the two, given a sensible and clear UI that makes the 'bridge' visually intuitive). That kind of structure is extremely fast and easy to follow, making it simple to decipher specific step-by-step logical flow and behavior quickly and globally, with the brain alone. The brain is further assisted by timely, properly-designed visual cues and delineation, which make global intuitive understanding even more frictionless -- especially for someone who hasn't yet built a mental model of the project in question (and its logical structure). This helps newbies quickly decipher and debug portions of a project they have only glanced at (and therefore easily understood) -- which should be the goal in all Visual Scripting endeavors, imo.

    It's hard to explain something like this -- so please forgive my verbosity. Hopefully what I've offered so far makes sense well enough though. :)
     
    Last edited: Dec 17, 2020
    GliderGuy, stuksgens, eggeuk and 2 others like this.
  32. awesomedata
    Data-Bridges -- "Bridge Systems"


    A bridge system is a sort of freeform scripting architectural structure that simply links data from two (or more!) data-oriented systems into a common data repository. Those systems (and others) can subscribe to that repository -- or be subscribed to it by other systems, including the 'bridge' system itself -- to gain access to its data or tagging features for as long as they wish.
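    To make this concrete, here is a minimal sketch (in plain Python, purely for illustration -- the thread proposes no concrete syntax) of a 'bridge' system as a shared repository plus subscriptions. Every name here (`BridgeSystem`, `subscribe`, `write`) is hypothetical, not a real API:

```python
# Hypothetical sketch of a 'bridge' system: a user-defined, common data
# repository that other systems can subscribe to (or be subscribed to).

class BridgeSystem:
    def __init__(self):
        self.data = {}          # the common data repository
        self.subscribers = {}   # key -> callbacks of interested systems

    def subscribe(self, key, callback):
        # A satellite/'child' system asks to hear about changes to `key`.
        self.subscribers.setdefault(key, []).append(callback)

    def write(self, key, value):
        # Write bridged data, then notify any interested systems.
        self.data[key] = value
        for callback in self.subscribers.get(key, ()):
            callback(value)

    def read(self, key):
        # Any subscriber may read bridged data whenever it wants.
        return self.data.get(key)

# Two systems share data through the bridge, never through each other:
bridge = BridgeSystem()
seen = []
bridge.subscribe("player_health", seen.append)  # e.g. a UI system listens
bridge.write("player_health", 100)              # e.g. a combat system writes
print(bridge.read("player_health"), seen)       # → 100 [100]
```

    The point of the sketch: neither 'system' ever holds a reference to the other -- both only know about the repository.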


    'Architectural' Overhead

    The "bridge system" construct replaces a few standard scripting elements to reduce 'architectural' overhead.

    Some constructs to be replaced include:

    - Class Instances (and references)
    - Explicit Data References / Associations / Code Behaviors
    - Return (Types/Values)
    - Events / Delegates
    - Functions / Arguments

    Class instancing and 'event' systems handle most of what data bridging does in C# right now, as far as functionality is concerned. That kind of 'bridging', however, requires lots of type limitations, variable referencing, and time/architectural overhead in determining scope, as well as data backtracking (i.e. to ensure variable types stay 'generic'). In actuality, it would be far simpler (on the architectural front) to keep commonly-referenced data in a single repository the user defines themselves -- that is, in a "bridge" system -- where any user of that 'bridged' data can reference whatever they want, whenever they want. Since this is _also_ a "system" (in both the ECS and the 'scripting' sense), the satellite or 'child' systems that subscribe to a 'bridge' system (i.e. via system associations with dataless tags) can also have that bridge system explicitly notify another particular system (i.e. by 'tagging' themselves to that system to be notified when something in particular changes). The beauty of a 'bridge' system is that it can simply sit there and do nothing but contain and/or sort data (i.e. data that can be referenced or changed in bulk from elsewhere), OR it can actively behave like an 'event' notification system that notifies interested parties when data changes in a particular way -- thus centralizing everything that uses or manipulates multiple systems' data in (and from) ONE place.


    Monolithic Structures

    Most 'monolithic' functionality can be quickly broken down into multiple 'bridge' systems or child systems -- without what's commonly referred to as 'abstraction', which is usually needed to sweep all the (apparent) complexity away under the rug.

    Abstraction, in general, is like sweeping the whole cat under the rug in order to prevent its hair from getting all over the place. Bridge Systems, in contrast, carefully groom the cat with a brush designed to remove excess fur -- letting you enjoy having a soft kitty to pet -- without having to hide (or shave) the cat completely (just to have a clean home).


    Building a Bridge System

    Let's try a typical complex use-case -- reading/writing mesh data.

    For example, let's assume one needs to access vertex position information -- and wants to offset some vertex color for a mesh when a part is clicked on, 'moved' around, and then 'painted' based on its new position. A 'bridge' system could be constructed to first hold an initial 'buffer' of vert indexes, as well as (initially) empty buffers of vert positions and colors. These buffers get filled up by an external system as verts are moved to a new position (or have entries appended before the next frame is rendered), based on information provided to the buffer by that external system -- i.e. the child system, or in other words, the 'functional' part of the bridge system.


    Next, the 'bridge' system could be given (or give itself) a dataless 'tag' that tells it to process the color calculations -- i.e. when the buffer reaches a certain data size, or when another external 'bridged' system tells it to. For example, the vertex coloring system says it's done calculating the new colors, so the 'bridge' system can continue its process by clearing the 'vertex' transform buffer and applying the new color buffer provided by the external 'color' system.

    Finally, the 'bridge' system takes its buffers (3 buffers so far) and perhaps passes that job off to a renderer/shader by notifying a separate (external) system responsible for this function. This other system was subscribed to the 'bridge' system by having a 'GetVertexData' tag attached to its own system data, inherited through its association with the parent 'bridge' system. The child system then gets a 'SendVertexDataToShader' tag added to its system data by the parent 'bridge' system. This 'association' means the external (child) system can carry the 'GetVertexData' tag as well as the 'SendVertexDataToShader' tag (both provided by the 'bridge' system to its children). Since both tags have a system in common (i.e. the 'bridge' system), dataless tags can be 'passed around' without any actual 'references' to the tags themselves -- the system inherits them, based on its loose association, before it runs for the first time.
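    The walkthrough above can be sketched roughly as follows -- a hypothetical illustration only, with dataless 'tags' modeled as plain strings that a child inherits from its parent 'bridge' system by association (all names, including 'SendVertexDataToShader', follow the example in the text):

```python
# Hypothetical sketch of the vertex walkthrough above. Dataless 'tags' are
# plain strings in a set; systems inherit tags from the parent 'bridge'
# system by association, never by holding references to each other.

class VertexBridge:
    def __init__(self):
        self.vert_indexes = []    # initial buffer of vert indexes
        self.vert_positions = []  # filled by an external (child) system
        self.vert_colors = []     # filled by the 'color' child system
        self.children = []

    def adopt(self, child, *tags):
        # Associate a child system; it inherits tags from this bridge.
        child.tags.update(tags)
        self.children.append(child)

    def notify(self, tag):
        # Tell every child carrying `tag` to run its 'functional' bit.
        for child in self.children:
            if tag in child.tags:
                child.run(self)

class ShaderFeeder:
    # Child system: ships the three buffers off to a renderer/shader.
    def __init__(self):
        self.tags = set()
        self.sent = None

    def run(self, bridge):
        self.sent = (bridge.vert_indexes, bridge.vert_positions,
                     bridge.vert_colors)

bridge = VertexBridge()
feeder = ShaderFeeder()
bridge.adopt(feeder, "GetVertexData", "SendVertexDataToShader")

# An external system moves a vert, and the color system paints it:
bridge.vert_indexes.append(7)
bridge.vert_positions.append((0.0, 1.0, 0.0))
bridge.vert_colors.append((255, 0, 0))
bridge.notify("SendVertexDataToShader")  # hand the job to the feeder
print(feeder.sent)
```

    Notice the feeder never asked for a reference to the mover or the color system -- it only carries tags inherited from the bridge.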



    'Permission' Scope of a Bridge System

    The 'bridge' system, as indicated above, can be set as the 'parent' system. It does not need access to any of its 'child' systems' dataless tags (except the ones it provides itself), as in the above example. However, that doesn't mean it cannot be set to a state where it can utilize any of its children's dataless tags too.
    After all, some parents have access to their children's phones. If they want to make a call on one, what's stopping them? They pay the bill, after all. Therefore, why can't a parent system enforce that control too? While some parents might be more respectful of their children's autonomy, it tends to depend on the child (and sometimes the parent), yo.


    In general, the 'children' systems described here are usually the 'functional' bits of the 'bridge' system. The parent or 'bridge' system tends to simply guide or orchestrate their children's movements based on 'events' their children need to be concerned with.

    Parenting aside -- I used the word 'functional' above because external systems (the ones outside the 'bridge' systems) act like a method or 'function' that writes (essentially) directly to the memory location of the data in the 'bridge' system's domain.

    To clarify what I mean by writing 'directly' to a memory location:

    Nothing else should be able to write to the same location in the same cycle/frame. Generally, a specialty 'subsystem' should exist to handle these kinds of unsafe 'crossover' references -- that is, when two or more systems need to write to the same value/memory location at the same time, the data is temporarily stored in special buffers by one special subsystem, which then writes sequentially to the given memory location in the next frame/CPU cycle.
    If you make a special exception (i.e. you need the data written now, and you promise nothing else will write to this value), then you intend to be responsible for the data. That makes it possible to write data THIS frame/cycle, because you plan to carefully orchestrate what can and can't write data -- and when (which is much of the work we do as game designers / programmers anyway).
    If you don't care how many systems write to the data (i.e. you want data written as fast as possible, but want it to be accurate and don't want to worry about timing -- such as when updating a mesh), you let the subsystem automatically queue that data into special buffers (i.e. buffers for ints/floats/strings/etc.) that will be written to a specific memory location in the next frame/cycle.
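    These write policies (write now as a special exception, or queue for the next frame) can be sketched with a hypothetical `DeferredWriter` subsystem -- again, an illustration of the idea described above, not a real API:

```python
# Hypothetical sketch of the 'crossover' subsystem described above: when
# several systems want to write the same location in one frame, the writes
# are queued into a buffer and applied sequentially on the next frame.

class DeferredWriter:
    def __init__(self, store):
        self.store = store  # the shared data (e.g. a bridge's repository)
        self.queue = []     # (key, value) pairs queued this frame

    def write_later(self, key, value):
        # Safe path: queue the write; it lands on the next frame/cycle.
        self.queue.append((key, value))

    def write_now(self, key, value):
        # Special exception: you promise nothing else writes this frame.
        self.store[key] = value

    def end_of_frame(self):
        # Apply queued writes sequentially, in arrival order.
        for key, value in self.queue:
            self.store[key] = value
        self.queue.clear()

store = {"hp": 100}
writer = DeferredWriter(store)
writer.write_later("hp", 90)  # two systems both touch 'hp' this frame...
writer.write_later("hp", 80)  # ...without clobbering each other mid-frame
assert store["hp"] == 100     # nothing has landed yet this frame
writer.end_of_frame()
print(store["hp"])            # → 80 (last sequential write wins)
```

    Deterministic ordering is the point of the design: mid-frame readers always see last frame's value, and conflicts resolve in one known place.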


    'Debugging' Problematic Bridge Systems

    When something seems random, or when something appears to have written over the data you expect to be there (i.e. when you use multiple bridge systems), you are looking for the one child whose data/functionality you have not given proper 'structure' to. Even if data is coming in from many different systems, there is only ONE place to write data to and retrieve it from (the 'bridge' system repository), and only two ways to do it (i.e. now or later). And since a child is directly related to its parent (and cannot have a 'hierarchy' of parents -- only, perhaps, 'aunts/uncles'), you are only looking at inputs to these bridge systems when debugging your code -- not outputs, nor obscure, interrelated references.
    This means you are debugging logic only.
    Any hidden pathways and/or syntax errors are simply not related to the data problem. You finger-fudged some input somewhere -- that's the only possibility (outside of structural issues with the logic itself, which need fixing anyway; what 'works' only 'works' up to a point -- scale beyond that, and it doesn't 'work' anymore). Thankfully, rewriting entire systems is a breeze with a dataless 'tag' system, as these are very simple logical structures at their core -- and the visual side is fun and rewarding when you see things working as they should, especially given how minimal the effort was in defining their logic and functionality to begin with.




    Bridge Systems -- When to Use?

    This question is a good one.

    The best answer is when external data references and array-like input/output are essentially unavoidable.


    These can be avoided in simple game designs. However, for something like a character controller, you are always going to have to reference joint positions, camera positions/direction, collider angles/locations, hit detection, and a lot of other stuff.

    Therefore, you will be using bridge systems A LOT.

    However, despite seeming complex at first glance, this architecture can save your bacon when you need more complexity (and scalability). You can easily break off a piece of a 'bridge' system and make it into its own 'child' function that inherits from particular 'parent/aunt/uncle' bridge systems.

    Since you don't have a hierarchy to keep up with, this is not only performant, but also easier to keep track of when you need special functionality -- or when you need to modify the functionality of some other portion of an (otherwise monolithic) entity that consists of many different modules and system structures -- such as a player controller.

    Since these 'bridge' systems can give your child systems as much or as little special attention as each child needs, they have no need to do anything but sit there and orchestrate (or be a source of) data for their children.
    Therefore there is no need to 'return' anything -- you ask the 'bridge' system for the data when you want it, or ask it to give you the data when you don't _know_ when you might want it.

    This is, essentially, what a good 'parent' or 'mentor' does anyway. :)
     
    Last edited: Apr 1, 2021
    landonth and stuksgens like this.
  33. xVergilx

    xVergilx

    Joined:
    Dec 22, 2014
    Posts:
    2,614
    Not sure what problem you're trying to solve here. And I don't think you're actually aware of why there's no "ideal" visual scripting out there.

    The real problem here is the human factor. People are lazy and have an insane lack of patience for learning.
    You can't bypass that by providing "communication", since each individual's level of knowledge differs vastly -- even among "programmers".

    Simplify VS too much -- hidden complexity and unexpected bugs due to unexpected behaviour (and potential loss of performance).
    Make a middle-ground VS -- suddenly it's too hard to learn / understand.


    So, TL;DR: it's pretty much yet another "Make Game" button thread.

    I'm not saying you shouldn't try -- maybe, out of all the solutions, yours would be best.
    Just wanted to say: don't start a religion on a forum. Use your own blog.
    There's no need to feed unaware users useless information.
     
  34. GliderGuy

    GliderGuy

    Joined:
    Dec 14, 2018
    Posts:
    168
    In my opinion — that statement couldn't be farther from the truth.

    @DreamingImLatios summarized this thread rather beautifully:
    And that summary does not at all sound like this thread is asking for a 'Make Game' button.


    While the OP may be advised to shorten, condense, or spoilerize his text walls, he is in no way starting a religion by delving into a topic deeply. The OP making a blog would be awesome — but I don't believe this thread is breaking any rules by simply being text-heavy.


    How, exactly, is he doing that? The OP is deconstructing the concepts of Visual Scripting and suggesting an alternative paradigm for it. For anyone that wants to understand Visual Scripting deeply, this thread seems quite far from 'useless'.



    The way I see it — this thread is advocating for excellence rather than settling for mediocrity.
    Should we not all be doing that? ;)
     
    Last edited: Apr 1, 2021
  35. awesomedata

    Some good points of 'at a glance' feedback -- so let me address those first:




    Sorry again for my text-walls.

    As GliderGuy mentioned above, @DreamingImLatios did an excellent job at summarizing things.


    But out of my own mouth:

    The problem I am aiming to solve is that the 'structure' of every programming language is convoluted and messy, and isn't anywhere close to human-readable logic (or even 'intent') in its current (non-visual / non-intuitive) form -- especially when it comes to scaling up and down for games (and teams) of any size or complexity.






    If you had a chance to read any of my posts condemning Unity's choice of using Bolt 1 for Visual Scripting, this is exactly what I am advocating AGAINST.

    This unintuitive learning curve of mid-range (to large) VS systems is something I am equally AGAINST as well.

    As you said, humans are lazy -- but perhaps I'm the laziest of all.
    With a few simple (data-oriented) rules, and a slightly different way of looking at the problem of complex architecture -- in terms of inherent visual problems, rather than just looking for new syntax or style -- numerous (effortless for the end-user) solutions can be achieved. I simply want to program game logic easily and effortlessly. As long as a solution focuses on the visual structure of the architecture as a language, while keeping that language as close to logic (and as far from style or syntax) as possible, some truly easy-to-manage systems could exist -- systems that might one day even outclass my own meager offerings.

    Visual Scripting is just the best medium possible to achieve the necessary separation of syntax from logic. After all -- a picture (or visual representation of something) is worth a thousand (or a million, in my case) words.



    Oh, there are lots of reasons no 'ideal' solution exists.
    Visual Scripting is a HARD subject -- too complex (and undocumented) for most programmers to wrap their minds around. The one reason I code as rarely as possible is that I can't afford to get wrapped up in the thinking that a text-based editor is "good enough" for logic, especially when it relies heavily on (mostly arbitrary) syntax styles -- instead of logic -- at its core.
    Ideally, that 'logic' would map as close to 1:1 with a human's visual intuition and understanding of affordances as is physically possible through its chosen visual language. So perhaps even VR/AR could play a role in this one day.

    We just need to focus on understanding structure (and the affordances it provides -- or not -- to logic) for now.



    People aren't lazy -- they just hate to think too much outside of their paradigms, and they want everything given to them on a silver platter YESTERDAY. And reading long paragraphs to ingest information is the exact opposite of that.

    Which is why this thread has gotten so little traction.
    And frankly, I'm okay with that.

    The point of this thread is to put the knowledge out there in some form. Then, later, I will have the chance to prove my ideas in practice. At that point, I can look back at what I _thought_ was correct, and see where I was right -- but more importantly -- where I was wrong (and therefore where I can improve my thinking).



    Not my goal.

    This is more of a public library/interface for my ideas. Anyone can chime in -- especially those who disagree.
    This is what the internet is good for after all.

    Since visual scripting is subjective, what better place to test the mettle of my (hopefully) objective ideas, before they have a concrete form, than to say "look, this is probably great!" to the internet -- and then instantly be shot down with reasons x, y, and z why these ideas aren't so great? Sometimes one gets lucky and those reasons are actually valid (and objective), and sometimes they point to things one would never have noticed through one's own (limited) experience of the world. Therefore, a forum like this has far more value for testing objectivity than a blog (or even the finished thing, in and of itself) does.

    Therefore, like you said, if I was solely interested in hearing my own voice -- I would write a blog.

    However, I am doing legit research. I am putting this information out for others to reference and debunk as well as for those who want to look for a different way to 'code' games -- and for those who simply want to challenge my ideas.


    I honestly couldn't imagine where this idea came from.

    I know the thread is long, but that in itself should discourage that thinking enough on its own -- even if you didn't read a single word in this thread.
    Giving instructions on how to think about architecture and structure automatically implies there is more to a solution than a "press button; make thing" approach.

    Nowhere across the forums or discord or anywhere else am I pointing at a "press here to make game" sort of solution. This thread is no different.


    GliderGuy is spot-on here.

    I am simply trying to spread awareness that there are different (and better!) ways out there than what we commonly think of as 'workable' solutions.

    I am, by far, not the only person who thinks the current VS solutions suck -- and it sounds like you are the same way.
    I think the only difference between you and I is that I believe they can be improved -- and I am working toward improving them. You, on the other hand, sound like you feel that 'improvement' is hopeless and probably fleeting, so being on the 'winning' side -- even if the negative thinking it requires isn't the side you really want to be on -- feels like the 'best' answer to you. At least you can say "See? I told you it wouldn't work!" when efforts like mine 'inevitably' fail. What people like that don't count on, however, is that I am damn adamant about making the best possible scripting (and game development) workflows all-around -- even if I have to go to the head of Unity and lay out everything I have across their desk.
    New users are no less ambitious than experienced veterans in the games industry. The main difference is that new users do not have the tools and methodologies available in an "easy-to-digest" manner -- and there's no excuse for this besides a lack of knowledge of the overall goals of the individual taking on the task. That shouldn't matter, whether they're a veteran game developer -- or a newbie just starting out.



    Information is only 'useless' when one (subjectively) deems it so.

    Any deep knowledge / wisdom I have comes from the fact that I rarely deem _any_ information 'useless' -- I always find some merit in its existence (i.e. by seeing how it relates to other things I already believe myself to 'know').

    To be able to say so absolutely that any information is 'useless' makes one wonder how 'useful' that statement of yours really is. If I were to judge you on that statement alone (to be clear, I am not judging you, so please don't take offense), I'd bet you're the kind of person who finds old people in nursing homes 'useless' too, since they likely don't seem to provide any real or perceived value to you. However, since I think you are probably still _human_ underneath that veil of absolutes, that isn't my real opinion of you -- just the opinion someone with your mindset might form, if they thought the same way you apparently do.

    It's worth it to take the time to understand the value of something that does not seem to inherently have any value.
    You'd be surprised at how rich and full your world will effortlessly become -- both inside and out.
     
    Last edited: Apr 1, 2021
  36. davenirline

    davenirline

    Joined:
    Jul 7, 2010
    Posts:
    729
    How do you solve the "only text programmers can modify the tool" problem (if you have considered it)? What I mean by this is that visual programming tools are only ever going to be modified by text programmers, because that's how they were made. Their makers didn't use a visual programming language.

    I think this is a major reason why visual programming tools did not explode. The programmers that made them don't really have a use for them. They'd still stick to text code because it's what they know. They're not motivated to update the tool, unless they are being paid. Compare this to something like JavaScript. It was a hastily made language with not a lot of design put into it. Yet, it became so popular because people used the same language to make tools/frameworks/libraries to fill in the holes. They used the language to "improve" itself. I don't see this happening in visual programming space.
     
  37. stuksgens

    stuksgens

    Joined:
    Feb 21, 2017
    Posts:
    134
    That's an interesting point of view...

    A "perfect" visual scripting tool would be one that can be created, modified, and evolved on top of itself without depending on changes made through code. At that point, scripts would have become obsolete.
     
  38. awesomedata

    This is an excellent question -- and is definitely one that can be solved.

    My answer to that kind of unknown complexity is often "generated" or "precompiled" code modules, handled in a compiler/interpreter layer through a great structural design -- because, as any artist/designer knows, form typically follows function. Tooling (and code) is no different, even when the very 'form' of the tool itself is being modified. As long as the tool is designed to be modified (or at least aware of modification and affording it in the first place), that 'meta' tool's form is still "following" its "function".

    To put an objective 'point' on this -- Unity is doing this very thing with ShaderGraph (and even Visual Scripting -- at least according to what I gather about 'snippet' nodes.)
    Having a layer between the logic and the scripting layer (to interpret the scripting based on the intended logic), prior to the direct execution of the code, generally does what you need in terms of making logic flexible and easy to use.
    Aside from ShaderGraph, Unity has already used this same trick in other areas of the application in the past (e.g. DOTS prefabs).
    Houdini regularly uses this trick with Python and other (external) applications like Unity to handle UX and UI problems: it 'teaches' those applications how to interpret its in-house Visual Scripting logic (and therefore replicate its internal results) through things that modularize its architecture -- such as digital assets and session synchronization -- translating its entire node architecture into equivalent functionality in other software. Houdini has proven this is a viable path for its own Visual Scripting implementation; it has been tried and true in bringing Houdini to other software like Maya and Unreal -- and even to Unity itself.



    This is a good point, but it's missing a key component: realtime.

    So what about performance? What if things need to be changed on-the-fly? Visual Scripts always need to be interpreted, right? The short answer is: yes. And so does any language -- whether it's C# or C++ -- but for those, this is done at the compiler level. Unity works around that limitation by layering the 'compiler' optimization process, similar to how it interprets and compiles C# to give it near-C++ performance -- i.e. when building the game.

    The long answer to the performance/realtime question is the following:

    First, you design and program your systems and logic, then make them flexible enough to perform different kinds of functionality at runtime by swapping out common functionality -- all without relying on backend functionality.
    That, however, is the job of the interpreting layer.
    It is what turns human-readable code into machine code.
    The problem Visual Scripting languages have here is not the myth that Visual Scripts are simply a "walled garden" with no escape from the interpreter -- the problem is that nothing has ever bothered to interface with them. This is because their logic is extremely varied and arbitrary most of the time -- which is why I disliked the Bolt 1 approach at first.
    Visual scripts are not yet considered their own 'language' (because they are almost NEVER designed in any way to be data-oriented), and therefore they don't tend to talk to (or get interpreted by) other languages easily. So they generally tend to BE walled gardens in practice, despite the power and flexibility available to them in a slightly different form.
    The fact that these Visual Scripting languages are not 'text-based' has no impact on their inherent functionality. All text-based languages, again, struggle with the same issue of not being data-oriented enough. In Visual Scripting, at least, that struggle can be handled by using the interpreter layer creatively. If the interpreter is good at its job, the language is translated (and optimized) as accurately and efficiently as any other language (text-based or not), and can therefore be translated into other languages -- surprisingly well, actually, even into "weird" languages like Python with special rulesets. As I say all the time, "Data is just Data" -- all languages work with data via input/output, so, apart from their syntax-based rules, they all work in surprisingly similar ways at their core. The major differences between them are syntax, and where their own 'interpreter' layer(s) might exist (which is at the 'compiler' level for most languages). Regardless of where this 'interpreter' layer is, or what syntax rules are used to decorate the data transactions -- data is still data.


    If you're still not convinced, let's try another analogy:
    Keep in mind that Visual Script "data" conversion is not much different from a 3d model being converted from one file format to another. Clearly .OBJ doesn't have 'bones' -- so you use this form of a 3d model only when you need a 'dumb' 3d model. When you need slightly more functionality, .FBX is probably the format you need. The point being -- these are still essentially from the same 'data' source. You simply toss away the 'extra' data you don't need. The identical act of discarding 'extra' data happens when narrowing down the scope of your runtime data with endless 'if' statements.
    The same is true when writing a new class in C# -- data is just data. The 'int32' you use in an instance of your class to represent health, and the 'int32' a method from a different class instance uses to subtract from that health, are still just two ints, each with 32 bits of data. If these can be processed in parallel (and therefore faster), who cares whether they represent a number, a letter, or a series of binary on/off switches? The answer is always: only the functions / methods you chose to use that data with. If the function/method doesn't care where the data comes from (only that it has enough of it to work with, and that it is in the proper scope), and if the scope of that data (i.e. its location in memory) can easily be guaranteed -- not likely to be written over, or to refer to some other data in the next frame/cycle -- why would anybody else care about the 'type' of the data, especially your Visual Script? After all, if you don't need 'bones', you don't need 'bone' data. As long as you properly define the scope of your data, any language can interpret that scope. That is, any language can see that this is 'bone' data and not 'vertex' data. You only want the 'transforms' of the 'vertex' data in an .OBJ file, for example, to be modified -- not the 'bone' data -- so the latter can be tossed or ignored. This is exactly what "queries" do in databases -- as well as Linq-style queries. Those are just a LOT more complicated than they actually have to be at their core.
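    A tiny sketch of that point -- a hypothetical operation that declares only the data scope it needs ('vertex' data) and simply ignores everything else ('bone' data), much like a stripped-down query (the `model` layout and `translate_vertices` name are invented for illustration):

```python
# Hypothetical sketch of "data is just data": an operation that only touches
# the 'vertex' scope, ignoring the 'bone' data entirely -- much like bones
# are simply discarded when exporting a rigged model to .OBJ.

model = {
    "vertex": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    "bone":   ["spine", "arm_l"],  # extra data this operation never touches
}

def translate_vertices(data, offset):
    # Operates only on the 'vertex' scope; cares nothing about the rest.
    dx, dy, dz = offset
    return [(x + dx, y + dy, z + dz) for (x, y, z) in data["vertex"]]

moved = translate_vertices(model, (0.0, 1.0, 0.0))
print(moved)  # the 'bone' data was tossed/ignored, exactly like a query
```

    The function never needs to know 'bone' data exists -- declaring the scope of the data is what makes that indifference safe.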

    Back to the original questioning though:

    This is a pretty good point. However, that brings me back to what I said at the end of my previous post about value:

    Scripts (Visual or otherwise) can be translated the same way as any other data -- the problem is that we prioritize _existing_ (read: outdated) technology, rather than prioritizing overcoming the limitations of technology in general when defining a 'solution' going forward.
    The (unspoken) programmer mantra "if it works for _somebody_, then it can work for _everybody_" is a selfish and outdated point of view -- and it is simply not true at its core. This way of thinking serves only the programmers who are (unknowingly) making their own lives harder with each new generation of technology (and 'languages') that gets layered on top of the last to 'solve' problems that an increasingly outdated mindset keeps creating. This generally happens because few ever try to understand what a 'code editor' is actually meant to do.

    [Hint: It isn't meant to make one an elitist bastard that prides oneself on the fact that hours (or decades) in a text-editor somehow makes one more of a 'coder' (or even simply more 'logic-driven') than one who needs (or simply prefers) a Visual Editor for the same task.]



    "Filling in the holes" is usually okay. Again, data is just data. Not every use-case can be built directly into the foundation of the language itself, and therefore that language definitely has to be flexible enough at its core to provide for that.

    Visual Scripting, historically, has not been a very flexible language at its core -- but that can change. Houdini is doing a pretty good job of trying -- and hopefully Unity is following suit with the 'language' part in some way.


    That said, the one thing any 'language' needs to function is data and scope (i.e. the verb action and the subjects to act -- or be acted -- upon). This must sit alongside a way to define, alter, and move between these data scopes effortlessly (and in an understandable way, if you're human). Everything else is just icing on the cake -- or unnecessary syntax.
    There is no "in-between" on this.
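    As a toy illustration of that claim (data plus scope being the bare minimum), here's a hypothetical sketch in Python where every instruction is just a verb acting on a scope -- all names are invented:

```python
# Toy sketch: a 'language' whose instructions are (verb, scope, arg)
# triples. The interpreter applies the verb's action to exactly the
# data inside that scope -- nothing else is needed to 'run' it.

data = {"player": {"health": 10}, "enemy": {"health": 7}}

def damage(subject, amount):
    # The verb: an action applied to whatever subject is in scope.
    subject["health"] -= amount

VERBS = {"damage": damage}

def run(program):
    for verb, scope, arg in program:
        VERBS[verb](data[scope], arg)  # verb acts on the scoped data

run([("damage", "enemy", 3), ("damage", "player", 2)])
assert data["enemy"]["health"] == 4
assert data["player"]["health"] == 8
```

    Everything beyond that pairing of action and scoped data -- syntax, keywords, punctuation -- really is just icing.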

    The scary part of any language comes when building upon new frameworks and tools so much that these 'extensions' to the language become a full-on 'language' in and of themselves (at least functionally).

    For example, if we were talking about webdev, jQuery and/or AJAX come to mind here -- each might as well have been a new language.

    I go further into that thought here:

    Once I start adding if/then/else to the core functionality of the original language, I cause either performance issues or more complexity in my logic than the language itself can keep track of -- because the 'function' no longer determines the 'form' of the language.
    The language's 'function' has changed so drastically that its 'form' no longer suits it. It has truly become 'code' that suddenly needs to be 'decoded' in order to be understood.
    At this point, the 'form' follows some random offshoot 'functionality' and bastardizes the original 'form' by splitting it between two (or more) 'functions'. If those functions are not true to one another at the core of their functionality, the 'form' becomes split (and its core functionality is in conflict). A concrete example of this was webdev: HTML plus CGI / ASP / PHP / etc., and eventually Javascript (etc.) as a 'better' HTML. Webdev tools have always been a nightmare, as we used AJAX, MySQL, and even PHP, etc. to 'fill in the gaps' when the languages lacked good database support and the technology/hardware behind them wasn't up to snuff yet.
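    A small (hypothetical) Python sketch of that split: piling if/then/else onto the core grows a branch per feature that must be 'decoded', whereas keeping features as data leaves the original form intact -- the node kinds here are made up:

```python
# Form vs. function: an if-chain that grows with every feature, next to
# a data-driven table where features are just data. Node kinds invented.

def render_if_chain(node):
    # Each new feature adds another branch the reader must decode.
    if node["kind"] == "text":
        return node["body"]
    elif node["kind"] == "link":
        return f"<a href='{node['url']}'>{node['body']}</a>"
    elif node["kind"] == "image":
        return f"<img src='{node['url']}'>"
    else:
        raise ValueError(node["kind"])

# Data-driven alternative: the 'form' stays a single lookup forever.
RENDERERS = {
    "text":  lambda n: n["body"],
    "link":  lambda n: f"<a href='{n['url']}'>{n['body']}</a>",
    "image": lambda n: f"<img src='{n['url']}'>",
}

def render(node):
    return RENDERERS[node["kind"]](node)

node = {"kind": "link", "url": "x", "body": "hi"}
assert render(node) == render_if_chain(node)
```

    Both produce the same output -- but only one of them keeps its form as the feature list grows.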

    This is something a proper approach to Visual Scripting could have solved YEARS ago, with its 'interpreter' layer sitting either at the local compiler level (i.e. HTML/Javascript) or at the server-based compiler layer (PHP/MySQL), without the need for technology like AJAX/etc. These could all have been the same data-oriented language; instead, XML and various other technologies were employed to shuttle data back and forth, further complicating our tools/technologies rather than simplifying and optimizing them with Visual Scripting as an approach.

    To "improve" a language within the language itself is a double-edged sword.
    While it can make the 'language' more flexible -- that 'flexibility' has a huge (hidden) cost in terms of the form and the core functionality it was structured for.

    Again, it is the structural design that matters -- and the affordances (and techniques employed to offer those affordances) that matter at the end of the day to the design of the core language functionality.
    In other words, if the intent of the Visual Scripting language is to be general-purpose, then the language itself must be designed to be 'general-purpose' -- on the purest level possible. That design must be aware of the potential use-cases it may one day face, in a way that languages like HTML/Javascript/PHP/etc. were not (yet could have been). Building a tool that combines languages for you (think Adobe's Dreamweaver) was a great idea, akin to Visual Scripting for Unity -- but the core problem, that you can't design a webpage visually as easily as you can design a Photoshop image, has always lived in the languages it tried to employ behind the scenes. In the early days of web design, that kind of performance would have made a huge difference -- if only the languages weren't so heavily integrated with (and dependent upon) one another's inherent design.

    For example, imagine, back in the days of 28k modems, a webpage with an HTML version and a PHP version that operated locally 90% of the time and only requested special rendering data in bits/bytes -- data (such as which image indexes to download) that could be uploaded to your webserver automatically when designing a page, including only the parts the client-side actually needed. Sending the initial link to the website was all we would have needed to do. The structure of the site would remain mostly the same; selected data could be requested behind the scenes (and prioritized) as the page loaded, without explicit URLs for everything, then stored in a cache to deliver almost instantaneous page changes -- on a 28k modem. D:

    Had these languages been designed with structural design in mind -- so much more could have been possible (back then -- as well as now) with only a little more creativity -- and effort.

    To put it simply though -- if a language is designed so that its form follows its desired function from the outset, to the fullest of its abilities (and that intended function supports a wide enough variety of use-cases at the core of its design foundation), then the form would never need to change much (if at all) -- unless some radically new and different form fits the overall functionality better than the one that currently exists. Even in that case, structural design is still key (form follows function is still being upheld) -- and therefore structural design is being followed to the utmost.

    Maya, on the other hand, and even Web Development -- fail to adhere to these rules (which continuously dates them), whereas Blender and Houdini (or Apps and Unity) continue to evolve their structural design to become something a lot better (as a whole) than anything else that ever existed before them. :)
     
    Last edited: Apr 7, 2021
    mattdymott, stuksgens and GliderGuy like this.
  39. Gekigengar

    Gekigengar

    Joined:
    Jan 20, 2013
    Posts:
    485
    It's been a while -- has there been any progress, concept art, or even a working prototype yet? If you start explaining with diagrams, or even a public repo, you might gain vital feedback and help, so you can find out whether your theories truly align with your audience and work practically in real environments.

    I say take a shot at it, you might even gain investors if it truly is the ultimate VS. (Like the competing engine that has made all the right investments these past few months ;))
     
    awesomedata and GliderGuy like this.
  40. awesomedata

    awesomedata

    Joined:
    Oct 8, 2014
    Posts:
    1,293
    Not sure what 'competing engine' you're referring to here -- so, if you would, please elaborate. :)
    As far as I know, I'm not 'competing' with anyone -- at least not yet.



    Definitely progress, but no concept 'art' (that I'm ready to release yet). The working prototype is in the works, but won't be ready until I see the base form of the approach by Unity (which, I'm hoping, should be the next release). The idea is to leverage the Unity VS engine to create a working prototype (which the Unity team assures me would be possible) to showcase the new workflows at some point.


    Indeed -- I'm not entirely sure I'll go github on this one (since it's a bit of a paradigm shift and needs more of a focus-test than a widespread release), but I'll definitely consider explaining things visually when I feel it's time. Right now, if you know about this 'project', it's only because you're following my commentary. I've made no effort to 'hide' much; however, I can't just shoot myself in the foot because people are curious about my approach. So far, most of what I am working on (and the approach itself) has been implied (and sometimes outright stated) if you seek out breadcrumbs around the Unity forum -- especially here in this thread. The obfuscation is intentional, though. I'm afraid that if I release my designs too early, someone will steal them and implement them before I could consider them 'done' enough to release.

    I've already had ideas from my main Unity innovation so far (Snapcam) stolen from me and promoted as parts of (and even as the core idea of) other 'assets'. So, to avoid being burned on something this huge to me, I don't want to release anything before I'm 100% at the production-ready stage.



    Your interest definitely motivates me!

    Sadly, I'm a bit of a ways away from being prepared to take on investors, haha. :)
    Besides -- I would need a team first!
    This might really be the direction I go -- if the basic prototype is a strong enough contender for people's attention.

    No promises though. Right now, I am still debating how far I might want to take this as a business if it actually blows up into something that would be difficult to maintain (as things like this tend to be). A flagship product is great, but no business can run on one product alone. And so far, I'm not seeing a huge amount of interest yet, so we'll see.
    Besides, evaluating something like this as a business (instead of just a one-shot product) takes a lot of planning and preparation (because of the risk involved) -- and that's not something one should rush into and 'cash-out' on -- if you want the products to actually be GOOD - and long-lasting - investments of your (and your customers') time.
    So, outside of time itself being a factor (since I can't yet do this full-time), the above is my current conundrum.

    That said -- hopefully the radio silence makes sense now.
     
    GliderGuy likes this.