As Moore's Law slows do we need to start learning assembly language?

Discussion in 'General Discussion' started by Arowx, Jan 18, 2016.

  1. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Software developers have been living off the cornucopia of Moore's Law (we get 2x faster, better processors every 1-2 years) for about 50 years now.

    Have you ever heard a programmer say that they don't need to optimise the software, just delay the release and raise the hardware specs? Businesses are built on this: software is slow and buggy on release, then the bugs get fixed and the hardware gets upgraded.

    But as chips shrink, new problems arrive: quantum effects (leakage of electrons) and heat build-up.

    Even if you think we can get around these issues, features on a chip are getting down to 20-30 atoms in size. So even if we can overcome the rising heat (it can literally burn out the transistors) and the quantum effects (electrons jumping over transistor gates, or being in two places at once), we can't go smaller than a single atom (32, 16, 8, 4, 2, 1: five halvings away, or roughly 5-10 years at one halving every one to two years).
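
    A rough back-of-the-envelope version of that scaling argument (the 32-atom starting width and the one-to-two-year halving cadence are assumptions, not measured figures):

    Code (CSharp):
    using System;

    class FeatureSizeEstimate
    {
        static void Main()
        {
            int atomsWide = 32;   // assumed current feature width in atoms
            int halvings = 0;

            // Each halving of the feature width is assumed to take 1-2 years.
            while (atomsWide > 1)
            {
                atomsWide /= 2;
                halvings++;
            }

            Console.WriteLine($"Halvings until single-atom features: {halvings}");   // 5
            Console.WriteLine($"Estimated time: {halvings}-{halvings * 2} years");   // 5-10
        }
    }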

    And current chip manufacturing requires a lithographic etching stage, so feature size is limited by the wavelength of light you can focus onto the silicon wafer.

    So should we be asking UT to start building an IL2ASM system?

    Just kidding but it got me thinking...

    Nvidia is touting its next-generation hardware for the self-driving car market, using deep learning systems to recognise people, cars, obstacles and the road from live video feeds.

    Deep learning uses neural networks: software and hardware that mimic how neurons work in the brain.

    But could this powerful pattern recognition technology be used to optimise software in an era where we won't be getting a faster processor year on year?

    UT: we bake lighting and navigation, you have a profiler, and most users have a GPU. Have you considered doing some research into the potential of using deep learning to optimise Unity and our games?
     
    Last edited: Jan 20, 2016
  2. goat

    goat

    Joined:
    Aug 24, 2009
    Posts:
    5,182
    No, most programmers create slower code using assembly.
     
    wccrawford, kaiyum, Ryiah and 3 others like this.
  3. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    I struggle to see what this topic is supposed to be about. To me it's a bunch of topics mixed together that only seem related as long as you don't look carefully enough. I don't get it.

    Edit: Is the goal to get a deep learning solution for the GPU written in assembly?
     
    Last edited: Jan 18, 2016
  4. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,033
    Learning assembly language is, I think, something every expert programmer should do.
    Not because you'll really need to use it, but because you'll learn how the CPU actually works, what kinds of things to avoid, and how to program without relying too heavily on a specific language feature.
     
    Socrates likes this.
  5. Tautvydas-Zilys

    Tautvydas-Zilys

    Unity Technologies

    Joined:
    Jul 25, 2013
    Posts:
    10,507
    That would be slower than IL2CPP.
     
  6. orb

    orb

    Joined:
    Nov 24, 2010
    Posts:
    3,033
    We get *twice the density of transistors* every cycle. Not the same thing, usually not a big leap in speed ;)

    No need to learn assembly (although it's useful for understanding), since compilers are so smart now. Compile times are also way down for many fields. Clang was a massive boost compared to the old GCC in both build speed and the speed of the resulting binaries, which spurred major improvements in GCC. Other languages' specialty compilers generate large chunks of code in no time.

    Offloading maths to the GPU might be a nice thing to try, where applicable. I'm sure baking could be accelerated with some GPGPU help.
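
    As a rough sketch of what that offloading could look like from the C# side (the compute shader asset and its "Multiply" kernel here are hypothetical placeholders, not anything Unity ships):

    Code (CSharp):
    using UnityEngine;

    public class GpuMathSketch : MonoBehaviour
    {
        // Hypothetical compute shader asset with a kernel named "Multiply".
        public ComputeShader shader;

        void Start()
        {
            const int count = 1024;
            float[] data = new float[count];
            for (int i = 0; i < count; i++) data[i] = i;

            // Upload the data to the GPU.
            var buffer = new ComputeBuffer(count, sizeof(float));
            buffer.SetData(data);

            int kernel = shader.FindKernel("Multiply");
            shader.SetBuffer(kernel, "values", buffer);

            // One thread group per 64 elements (assumes [numthreads(64,1,1)] in the shader).
            shader.Dispatch(kernel, count / 64, 1, 1);

            // Read the results back and clean up.
            buffer.GetData(data);
            buffer.Release();
        }
    }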
     
  7. Mwsc

    Mwsc

    Joined:
    Aug 26, 2012
    Posts:
    189
    Take a look at the speed of CPUs over the past 10 years. The speed has not been following Moore's law for a long time.
     
  8. orb

    orb

    Joined:
    Nov 24, 2010
    Posts:
    3,033
    "Moore's law is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years."
    Nothing about speed, which never grows linearly alongside density anyway :)
     
    wccrawford likes this.
  9. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Did a fair bit of this back on the Amiga, where it kind of mattered. These days, though, ASM isn't easy: what's fast on one CPU might not be optimal on another. By contrast, C++ compilers are the best we have; they've had the most investment and engineering hours behind them, so IL2CPP is the fastest general-purpose solution.

    In other cases, like third-party middleware, we often see bits of hand-optimised ASM where it counts on *some* platforms, sound middleware for example.

    And if you're not careful, you will be missing out on much cooler optimisations. The whole Moore's law thing relates to hardware, and we might all be quantum and thingamajig-powered before long anyway.

    The slowest part is usually me anyway.
     
    lorenalexm, Kiwasi and GarBenjamin like this.
  10. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Some compilers do have options along these lines, but with mixed success, as most CPUs do pretty deep branch prediction on the fly these days. Far bigger, more reliable gains with less work come from making the work parallel, and optimisations like that must come first, obviously. We're not at the point where deep analysis offers the best performance gains yet.

    The reason is that in this industry you typically do simple maths, over and over and over again; it's really only outside games, in science, that you would gain from a deep analysis of program behaviour, and even then something like quantum computing or GPU compute would trounce it, since you will already have a pretty good idea of the behaviour. Most things aren't even SIMD-optimised, and the easily reachable optimisations should be done first.
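
    A minimal sketch of that "make the work parallel first" point, using plain .NET Parallel.For over some made-up per-element maths (nothing Unity-specific assumed):

    Code (CSharp):
    using System.Threading.Tasks;

    static class ParallelMathSketch
    {
        // The same simple maths applied to every element, over and over:
        // spreading it across cores is the easy, reliable win.
        public static void Scale(float[] values, float factor)
        {
            Parallel.For(0, values.Length, i =>
            {
                values[i] *= factor;
            });
        }
    }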

    In scientific fields, the solution is simply to build less general-purpose CPUs for the task, i.e. an entire function baked into silicon. That offers by far the biggest performance upgrade, well beyond typical analysis of a program's behaviour.
     
  11. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    So UT might be better off using a deep learning neural network to analyse all of the games made with its technology and generate a dedicated Unity Engine Processor.

    That's deep, and wow, it would be fast: your game running directly on dedicated hardware, a Unity Processor, or UP chip.
     
  12. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    That's pretty much the opposite of what I said.

    I don't think deep learning will help in a field where there is nothing to learn. As Unity's component parts, such as Mecanim, are self-contained, there is no optimisation benefit from deep learning that part. Or any single part.

    When you apply it to your own code, where such a thing might be useful (because we don't know what people's code will be), it is not very beneficial in game development, because the maths and branching in game development is typically the same stuff, over and over again. You gain very little from a deep analysis. This is something CPUs already excel at with branch prediction, regardless.

    Scientific and heavy computation does benefit from it, and the best result for them is always obtained by implementing the same function in hardware, typically in the form of a plug-in card to accelerate key areas.

    I'm not saying it's useless; I'm saying it isn't useful compared to all the other things Unity could do in the meantime to make things faster first, a job which would probably never be finished.

    Sure, they could analyse a lot of code paths, and they do for things like IL2CPP, where it matters, but deep learning and analysis aren't the same thing, and once you have the data, it's going to be hard or impossible to act on it because everything is in component parts. You are better off just optimising what is obviously slow in this particular field.
     
  13. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    OK, but we could use the pattern recognition properties of a deep learning neural network, combined with performance and profiling data, to convert slow code into highly optimised code automatically.

    For instance, what if you could click a cloud build-and-optimise button in Unity, and your project would be whisked up to the cloud, built, profiled, analysed and optimised auto-magically using deep learning neural network technology?

    You get a report of the improvements and the option to review and accept the changes, knowing that a cloud based copy of your original code is still stored.

    Now that sounds like a cool reason to go Pro to me.

    Or not just speed: the system could also check for logic flaws and for opportunities to minimise garbage collection.

    In effect, a deep learning neural network could be like a master Unity game developer we could use to empower our games.

    It's only an idea, but the field is making progress, and a limited knowledge domain like programming Unity could be an ideal application for a deep learning system.

    Of course, there could come a time when Unity adds a node-based game development system and this system takes our programmer jobs!!!
     
    darkhog likes this.
  14. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    @Arowx, when I read your description, it sounds as if there are already implementations of this available that have been shown to be practically usable in large-scale applications, and since Unity is a multi-platform solution, they will most likely work for those platforms as well.
    Do you have links to such a solution?
     
  15. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    last line in first post...
    As with most information technology, the last field it gets applied to is IT itself (check out the history of using a program to compile a program).

    But deep learning systems are very good at image identification and pattern recognition. If you view code as a pattern of symbols, then the same neural network technology should be able to learn from examples of good code and bad code, and even classify code into groups. And, with more training, suggest optimisations or improvements to bad code patterns.

    The system would need expert tutoring and lots of examples but with all the people using Unity that should not be a problem.
     
  16. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    There are standard solutions these days, like static analysis, that help coders to e.g. find bad code patterns. Those tools already exist and can be used.
    Neural networks for software analysis have not yet proven useful for software development. There is no solution yet, and even the research doesn't seem to be advanced enough for that. I am not an expert in this field, but I haven't seen anything that is even close.
    Being first in such an area requires a huge investment, and if the research hasn't reached the point where serious practical applications are almost within reach, it is simply a waste of resources. They would need to invest millions without knowing whether there will be any practical benefit.
    Since decision makers are usually not willing to take even a fraction of this kind of risk, it is simply not realistic that it will happen.
     
    Kiwasi likes this.
  17. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    So what you're saying is... if you're a hobbyist targeting your own machine, then assembly is the right way to go?
     
  18. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Actually, research is cheap. UT could fund a couple of PhD students to work on the project, and if they can get AMD, Nvidia, IBM, Intel or even Microsoft to help fund it or support the researchers with hardware, then a small basic research team could probably be run for the outlay of a couple of in-house employees or one manager.
     
  19. Jaimi

    Jaimi

    Joined:
    Jan 10, 2009
    Posts:
    6,171
    In today's world, modern apps are so far divorced from the hardware that assembly language is likely not an option, nor would it be helpful. It's sad to see apps that do hardly anything taking up megabytes of memory, and relying on megabytes of library code to do it. But it's the world we live in. Even if you write assembly code, you still have to rely on enormous amounts of library functions to do anything useful, and much of people's programs isn't even their own code anymore.

    Moore's law, if I recall correctly, was that the number of transistors on a chip would double every year. This has not held up for the past 10 years. Neither has "speed doubling every year" or "every two years".
    In truth, the increase in processing speed is slowing down, and will continue to slow down. What will happen instead is that the number of processors will increase. We've seen this done in fantastic fashion in the video card, which is really just a specialised computer for graphics with hundreds or even thousands of processors. I think this is well known, and everyone will continue to shift toward parallel processing instead of faster CPUs.
     
  20. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    The research may be cheap, but would bringing it to a state where it can be used for practical purposes be cheap too? The research might take a few years, and since neural networks tend to be pretty performance-hungry, investment in some kind of computation power would be unavoidable besides the cost of the researchers. Let's be optimistic and assume they have something valuable after three years. After that, there would need to be a team to implement an actual solution that is e.g. integrated into Visual Studio. Let's be optimistic and say they have something within another three years. Maybe they can start to think about a larger-scale solution at that point and test it with actual production environments. Let's be optimistic again and assume it only takes them another three years to get there.
    Being optimistic, they might have something after about nine years, though I am rather skeptical whether this is possible. At that point they might have developed something that is not related to their actual business at all, and they don't know whether it is going to be valuable for their core business at all.
    No decision maker would do that! Unity would invest in many other research topics long before something like that. It is not their core business!
     
  21. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Or we could take the route of the X Prize: minimum outlay, but a prize for the team that wins a challenge.
    Or do something similar to the way current self-driving car deep learning technology is being worked on at multiple universities and companies with shared, open resources.

    The X Prize-style route could focus on Unity games.

    The open university approach would probably be better if opened up to multiple programming languages, but challenges or some official deep learning code optimisation forum would probably be needed.
     
  22. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    And Unity should contribute to that just because it is a cool technology?
     
  23. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Not so much that it is cool as the fact that the first engine/tool/language providers that add this feature set will gain a massive advantage over their competition.

    It doesn't have to be UT; any company that provides a deep learning service that can reduce bugs and improve software performance will probably do very well, especially as Moore's law slows down.
     
  24. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    8,986

    Ugh... Moore's law doesn't slow down. Moore's law is just an observational prediction about the number of components in an IC over time.
     
    Kiwasi, Ryiah and McMayhem like this.
  25. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    8,986
    Also, it appears that most indie devs are fundamentally opposed to the use of deep learning.
     
    Ryiah likes this.
  26. Frpmta

    Frpmta

    Joined:
    Nov 30, 2013
    Posts:
    479
    What you are saying will one day be implemented.

    But it won't be by Unity.
    It will be either Microsoft or Google or some start-up.
     
  27. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,327
    No. Where did you get that idea?

    First, a good portion of gaming projects don't need much computational power. Graphically taxing games load the GPU first, CPU second.

    And if you need every single bit of performance, then instead of writing assembly yourself you'll want to write a tool that exploits the features of your platform, because a tool is not subject to human error, while you'll surely mess up somewhere.

    That's a pipe dream, and it's not going to fly for the next 50 years or more. Deep learning networks return results expressed as floats/probabilities, such as "I'm 90% sure this is a cat, but I'm also 85% sure it is a banana". That works for pattern recognition, not for optimisation. Also, you'd need to train the network for a few months, preferably with a couple of terabytes of sample data (where would you get that?), and preferably on something like a compute cluster.
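
    A toy illustration of the shape of those outputs (made-up labels and scores; softmax shown only to demonstrate that the answer is always a set of confidences, never a yes/no):

    Code (CSharp):
    using System;
    using System.Linq;

    static class ClassifierOutputSketch
    {
        // Softmax turns raw network scores into probabilities that sum to 1.
        static float[] Softmax(float[] scores)
        {
            float max = scores.Max();   // subtract the max for numerical stability
            float[] exps = scores.Select(s => (float)Math.Exp(s - max)).ToArray();
            float sum = exps.Sum();
            return exps.Select(e => e / sum).ToArray();
        }

        static void Main()
        {
            string[] labels = { "cat", "banana", "car" };   // made-up classes
            float[] scores = { 4.1f, 2.3f, 0.5f };          // made-up raw outputs

            float[] probs = Softmax(scores);
            for (int i = 0; i < labels.Length; i++)
                Console.WriteLine($"{labels[i]}: {probs[i]:P1}");
            // Prints roughly "cat: 84%", "banana: 14%", "car: 2%" -- confidences, not certainty.
        }
    }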

    Do not treat machine learning as a "make a program" button or a genie lamp.
     
  28. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,327
    Code is completely different from an image.
    In an image, you can nuke 90% of the pixels and a human will still recognise what it's about.
    In code, change one letter and the program will break.

    You'd need true AI here, not deep learning.
     
  29. hopeful

    hopeful

    Joined:
    Nov 20, 2013
    Posts:
    5,628
    What Moore's Law means to me is that when I finally get my desktop game finished, it will run on mobile. ;)
     
    Kiwasi and Manny Calavera like this.
  30. kaiyum

    kaiyum

    Joined:
    Nov 25, 2012
    Posts:
    686
    Unless you are writing a game engine, you won't need that. It's true that modern CPUs are not getting the exponential leap in power over time that we expect. I have heard that a material called graphene may lead us forward next.

    If you do C/C++ coding (for UE4 or whatever else), then a bit of ASM is always helpful. In engine programming we use ASM for the tougher, performance-hungry parts, say transform handling, matrix skinning and so on. SSE intrinsics always help in a maths library. Plus, while learning ASM you will surely get a good overview of the hardware. But if you intend to program a whole game in ASM, I will call you straight-up crazy.

    In most cases the compiler will optimise better than you, which is why it is crucial to use ASM carefully. At the end of the day, for us Unity programmers, ASM brings little value to the table.
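
    For the curious, this is roughly what SIMD-friendly maths looks like from C# via System.Numerics (a generic illustration, not Unity's or UE4's actual math library):

    Code (CSharp):
    using System.Numerics;

    static class SimdSketch
    {
        // Scales an array in SIMD-width chunks; the JIT maps Vector<float>
        // operations to SSE/AVX instructions where the hardware supports them.
        public static void Scale(float[] values, float factor)
        {
            int width = Vector<float>.Count;   // e.g. 4 floats with SSE, 8 with AVX
            int i = 0;

            for (; i <= values.Length - width; i += width)
            {
                var v = new Vector<float>(values, i) * factor;
                v.CopyTo(values, i);
            }

            // Scalar tail for the leftover elements.
            for (; i < values.Length; i++)
                values[i] *= factor;
        }
    }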
     
    zombiegorilla and Ryiah like this.
  31. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Not opposed, just practical: in the case of code optimisation, there are a lot more real-world things Unity would need to do long before this kind of data matters.

    Deep learning for your boss, however, will probably result in even better Star Wars movies, so please continue :)
     
    zombiegorilla likes this.
  32. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Have you ever worked in R&D? There is a massive, and expensive, gap between a PhD student proving something is possible and actually making the thing.

    People are already exploring this topic. But it's probably not Unity's place to step in and do the research. Unity is a game engine; in many cases that simply means stitching together other implementations, not so much inventing them.
     
  33. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    This topic is so far away from being practically relevant that a lot of universities, and very likely companies like Microsoft, IBM and Google too, are going to spend millions before there is the slightest chance it ends up on the agenda of a company like Unity.
     
    Kiwasi and zombiegorilla like this.
  34. Eric-Darkomen

    Eric-Darkomen

    Joined:
    Jul 18, 2015
    Posts:
    99
    I think we're too focused on and concerned with linear progress. Siri relies on recording sound clips and relaying them to a data center. We've been able to rent computational power from server farms for years and we live in a golden age of video game streaming.

    I think when Moore's law collapses, PCs will start getting bigger again (looking at today's offerings, we can afford the space), until we head down the game-console-as-a-streamed-service route (which seems inevitable), where processing optimisations and engineering problems will happen at a different scale and the clients will be dumber and cheaper than ever...
     
  35. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,327
    Most likely that won't happen, because developers will hit the human-resource limit. AAA game dev already approaches the cost of a space program.

    Higher processing power and better graphics increase project budgets (armies of artists cost money), so even with all-powerful hardware, at some point a project will simply cost too much to bother with, and simpler game dev projects will not require much CPU power.
     
  36. Eric-Darkomen

    Eric-Darkomen

    Joined:
    Jul 18, 2015
    Posts:
    99
    Not likely, that's how we got from the abacus to punch cards to small teams using assembly to C# and a cast of thousands in the first place. Complexity has a tendency to increase over time. The worth of human resources is determined by supply and demand, the same as everything else.
     
  37. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,141
    Chances are most of the processing power will simply be put into rendering techniques that are currently impractical. We still have a long way to go before we'll see the quality of current-generation raytracing in real time.
     
    Martin_H likes this.
  38. zenGarden

    zenGarden

    Joined:
    Mar 30, 2013
    Posts:
    4,538
    Will this help you find great game ideas? Will this help you create better gameplay and graphics? :rolleyes:
     
  39. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,327
    No. It will look cool on paper as a buzzword, though.
     
    Tomnnn likes this.
  40. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    That was the same kind of BIG thinking at IBM when Bill Gates wrote them an operating system. ;)

    OK, but imagine what it would be like to have a deep learning Unity service. Let's name it Chan, or whatever the sci-fi guy is called.

    So you're working on your game and it works, but UT has 'Chan', or Unity Guy, a deep learning system that can analyse and optimise your code at the click of a button.

    Would you press the button, or buy Pro to access that button?

    And with VR games needing 90+ fps, we probably need this technology. Maybe not as a service, but what if a research version could be used by Unity to improve the core engine and API?
     
  41. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    That can get you pretty far before you have to actually demonstrate anything.
     
  42. I_Am_DreReid

    I_Am_DreReid

    Joined:
    Dec 13, 2015
    Posts:
    361
    I actually did a lot of reading on assembly, back then. Man, it's pretty hardcore S*** if you ask me.
     
    kaiyum likes this.
  43. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    There were already existing operating systems at that point. Even though it wasn't and isn't easy to create an operating system, the subject was already pretty well understood.

    Why would exactly that technology be needed? There are a lot of simpler ways to improve performance that are more likely to succeed.
     
  44. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    And deep learning systems are popping up all over: IBM's Watson, Google's AI work, Nvidia's car systems.

    Well, I suppose the core systems of Unity can be optimised and improved: multi-threaded, moved to leaner and faster graphics APIs, made to use SIMD and GPU compute optimisations, with garbage collection reductions and improvements on top.

    But even with all of the above, a Unity game's performance is limited by the knowledge and skills of the developers using it.

    Some Unity developers will write amazing, performant code that they have optimised, while others might struggle and build games that don't run as well. End users see the Unity logo, play a poorly written, under-optimised game and associate the two. With a service that helps developers make the most of Unity, UT could make sure their logo is at least associated with games that are not slow or buggy.
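
    As a concrete example of the kind of thing such a service might flag, here is the classic per-frame allocation pattern and the usual fix (hypothetical scripts, but standard Unity physics API):

    Code (CSharp):
    using UnityEngine;

    // Allocates a new array every frame: creates garbage and, eventually, a collection spike.
    public class AllocEveryFrame : MonoBehaviour
    {
        void Update()
        {
            Collider[] hits = Physics.OverlapSphere(transform.position, 5f);
            // ... use hits ...
        }
    }

    // Reuses a pre-allocated buffer: no per-frame garbage.
    public class PooledBuffer : MonoBehaviour
    {
        private readonly Collider[] hits = new Collider[32];

        void Update()
        {
            int count = Physics.OverlapSphereNonAlloc(transform.position, 5f, hits);
            // ... use the first 'count' entries of hits ...
        }
    }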
     
  45. darkhog

    darkhog

    Joined:
    Dec 4, 2012
    Posts:
    2,218
    Or we'll get bigger processors. As in physically bigger (3-5 inches). Bigger area means more transistors, while transistor size would remain the same and heat produced similar.
     
  46. Dantus

    Dantus

    Joined:
    Oct 21, 2009
    Posts:
    5,667
    @Arowx, I have said everything I had to say. You make the assumption that this problem could be relatively easily solved for software development or at least within a meaningful amount of time. I don't agree with that at all.
     
  47. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    A coworker of mine and I plan to make a gba rom. He says there will be assembly. Lots of assembly.
     
    kaiyum, Ryiah and darkhog like this.
  48. zenGarden

    zenGarden

    Joined:
    Mar 30, 2013
    Posts:
    4,538
    What is better than a hexadecimal editor for your game? :D
     
  49. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    I'm OK with hex; I'll probably push the majority of the assembly code to him.
     
  50. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,327
    The problem here is that you, and even Unity Technologies, aren't IBM or Google. So neither you nor Unity will have the resources to make another Watson.

    Deep learning is incredibly resource-hungry if you want it to be useful, and there's little or no practical application of it in games at this point.

    Frankly, you just seem to think "deep learning == magic, magic will solve all the problems". That's not how it works.
     
    Stardog and Ryiah like this.