
Are computers about to be turned inside out?

Discussion in 'General Discussion' started by Arowx, Nov 23, 2016.

  1. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,026
Only if you're planning on keeping the same components throughout the entire lifetime of their respective sockets and slots.

    At best there are only ways to maximize the potential for code to end up in caches, but even if it were directly accessible by programmers you still wouldn't see it used by the majority of coders. More likely than not it'd be relegated to those who work with very low-level code, like driver and OS developers.
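
    As a rough illustration of what "maximizing the potential to end up in caches" means in practice (my sketch, not anything the hardware exposes directly): the cache isn't addressable from code, but memory layout decides how much of each cache line fetched is actually useful.

        #include <vector>

        struct Particle { float x, y, z; float pad[13]; };  // 64 bytes: one cache line each

        // Array-of-structs: summing x loads a full 64-byte line per particle
        // but uses only 4 of those bytes.
        float sum_x_aos(const std::vector<Particle>& ps) {
            float s = 0.0f;
            for (const Particle& p : ps) s += p.x;
            return s;
        }

        // Struct-of-arrays: the x values are contiguous, so every line fetched
        // carries 16 useful floats. Same result, far fewer cache misses.
        float sum_x_soa(const std::vector<float>& xs) {
            float s = 0.0f;
            for (float x : xs) s += x;
            return s;
        }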
     
    Last edited: Nov 25, 2016
  2. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    HP successfully tests its vision of memory-focused computing
    https://www.engadget.com/2016/11/28/hp-successfully-tests-memory-computer/

    But...
    Sounds like we are still waiting on a fully working and fast Memristor RAM system.

    They are not calling it Smart RAM; it's called Memory-Driven Computing (MDC).
     
  3. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,301
    Which is still very different from what you were talking about.

    If you dig through the article...
    Basically, you're looking at a paradigm shift: a new OS and a new CPU architecture.
     
    angrypenguin likes this.
  4. boxhallowed

    boxhallowed

    Joined:
    Mar 31, 2015
    Posts:
    513
    As a CS hobbyist, please take the advice of the people here and take a CS 101 course. This is either beautiful trolling or embarrassing.
     
    Lightning-Zordon and Kiwasi like this.
  5. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Well, they are trying to tackle server-level data-access latency with a memory-centric approach using photonics and NVM.

    I was just referring to the rise of memory-centric systems, with HBM and GPUs, and how you could take the computing to where the data is, or bring the data closer to the computing system.

    And how we could end up with combined CPU/GPU/memory blocks that provide much higher processing speeds due to reduced latency.

    Both approaches are attempting to massively reduce data latency, one physically and the other photonically.
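
    For rough scale (ballpark figures, not measurements of any specific system): an L1 cache hit costs on the order of a nanosecond, a main-memory access roughly a hundred nanoseconds, and going off-board to storage or another node costs microseconds to milliseconds. Those orders-of-magnitude gaps are what both approaches are chasing.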
     
  6. Aiursrage2k

    Aiursrage2k

    Joined:
    Nov 1, 2009
    Posts:
    4,835
    Arowx, you should get a job as a blogger or something.
     
  7. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    If you view the brain's synapses as the ultimate combination of memory and processing then it looks like the memristor could provide a digital version of the synapse (see article link below).

    https://www.extremetech.com/extreme...t-can-be-conditioned-just-like-a-real-synapse

    Summary: a memristor circuit remembers its previous state, but can be made so that it slowly forgets, which is how neurons in the brain work: the strength of a neuron's signal depends on how sensitised it has become to the input. This lets people remember things they do often (e.g. habits) and forget things they do infrequently. A toy model of that behaviour is sketched below.
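
    A minimal software sketch of that strengthen-and-forget behaviour (an illustrative model of mine, not how a real memristor is driven):

        #include <algorithm>
        #include <cmath>
        #include <cstdio>

        // Toy 'forgetting synapse': pulses strengthen the weight, idle time
        // lets it decay exponentially back toward its resting value.
        struct ToySynapse {
            double weight = 0.1;                    // normalised conductance, 0..1

            void pulse(double strength = 0.2) {     // stimulation sensitises it
                weight = std::min(1.0, weight + strength * (1.0 - weight));
            }
            void idle(double seconds) {             // no input: slow forgetting
                weight = 0.1 + (weight - 0.1) * std::exp(-seconds / 10.0);
            }
        };

        int main() {
            ToySynapse s;
            for (int i = 0; i < 5; ++i) s.pulse();  // a 'habit': frequent input
            std::printf("after training: %.2f\n", s.weight);
            s.idle(30.0);                           // left alone, it forgets
            std::printf("after idling:   %.2f\n", s.weight);
        }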

    Which is the basic building block for synaptic or neuromorphic 'brain' chips, which are the next era in computing, according to the article below.

    http://www.extremetech.com/extreme/...in-chips-will-begin-the-next-era-in-computing

    Summary: our brains are great at pattern recognition and at dealing with a 'fluid', changing, complex world. Digital logic computers are poor at this; they have to be programmed for every contingency, or use more elaborate techniques to cope, e.g. Bayesian inference, genetic algorithms, fuzzy logic or simulated neuromorphic behaviour.

    But there could be a spin-off from this. FPGAs (field-programmable gate arrays) allow chips to be reprogrammed at the hardware level for a new task. Think about it: you get the speed benefit of running logic directly in hardware. General-purpose CPUs take instructions that then trigger logic circuits acting on data, so they are highly flexible, but slower than a dedicated block of logic circuits running that algorithm (see the sketch below).
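
    A software analogy for that flexibility-versus-speed trade-off (my illustration, not how FPGAs are actually configured): a general CPU is like an interpreter paying a decode-and-dispatch cost per step, while dedicated logic is like a routine built for exactly one job.

        #include <cstdint>
        #include <vector>

        enum class Op : std::uint8_t { Add, Mul };

        // 'General purpose': every element pays for dispatch, like a CPU
        // decoding an instruction before the useful work happens.
        int run_interpreted(const std::vector<Op>& program, int value) {
            for (Op op : program) {
                switch (op) {                     // per-step dispatch overhead
                    case Op::Add: value += 1; break;
                    case Op::Mul: value *= 2; break;
                }
            }
            return value;
        }

        // 'Dedicated logic': the algorithm is baked in, no dispatch at all;
        // the software analogue of a circuit wired for one task.
        int run_dedicated(int value) {
            return (value + 1) * 2;
        }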

    The memristor circuit could revolutionise FPGAs so that, instead of a general-purpose CPU running code, an MFPGA could be dynamically reconfigured to the needed algorithm in real time.

    And the cruncher: what if MFPGAs are combined into memory chips? The memristor's first use is probably going to be memory, and its ability to remember its state without power would revolutionise computer memory on its own, but combined with MFPGA technology you could truly have Smart RAM (dynamic processing power where the data is).

    The thing is, if any fab that can create RAM can adopt MFPGA technology, what happens to Intel, ARM and AMD, who have dedicated themselves to central processing units? They will still be needed, but will they become more like ARM, a company that develops CPU technology and lets others create the hardware?
     
    Last edited: Nov 30, 2016
  8. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    2,980
    Things like FPGAs and microcontrollers can be reprogrammed, but only a certain number of times. If you had an automated procedure to reprogram them each time you wanted to push new logic out to your memory, you would quickly burn out the FPGAs and/or microcontrollers. And you would still be creating a new bottleneck, because FPGAs and microcontrollers are much slower than a fast CPU.

    Doing some math on the memory sticks is just dumb. It would require a new set of programming tools, and it would create a bottleneck. By contrast, moving the memory (HBM) onto an already fast CPU is an awesome idea.

    In addition to that, you have not explained how you would solve the common problem that multi-CPU systems have always faced, which is bandwidth needed to access memory in another unit. With two CPUs, each CPU is responsible for some RAM. CPUs frequently need to grab data from each other's RAM segments over very fast interconnects. If you place a slow ARM processor on each memory stick, you would need to devise a solution for how each slow ARM processor would grab data from other memory sticks.
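
    A toy model of that cross-stick cost (illustrative numbers and names of mine, not measurements): each stick owns a slice of the address space, and touching another stick's slice pays an interconnect penalty on top of the local latency.

        #include <cstddef>

        constexpr std::size_t kStickBytes = std::size_t{1} << 30;  // each stick owns 1 GB

        // Estimated read latency in nanoseconds for a given address,
        // as seen from the processor sitting on stick `my_stick`.
        double read_latency_ns(std::size_t address, std::size_t my_stick) {
            std::size_t owner = address / kStickBytes;  // which stick holds it
            double local = 100.0;                       // plain DRAM access
            return owner == my_stick ? local
                                     : local + 500.0;   // plus an interconnect hop
        }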

    Sorry to constantly throw cold water on your idea, but there are plenty of technical reasons nobody is doing memory sticks with ARM processors on them. Your idea creates new bottlenecks without solving anything.
     
  9. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Are you sure FPGAs are slow? Intel has coupled one with a Xeon processor and is touting a 20x performance boost (2014) -> http://www.extremetech.com/extreme/...h-integrated-fpga-touts-20x-performance-boost

    And memristors are already on their way to becoming a new non-volatile memory standard, so they must have high rewrite endurance; what use would NVM be if it wore out after a few thousand cycles? So, coupled with FPGA technology, they should allow for a long-lasting and adaptable MFPGA processor.
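
    To put rough numbers on that (ballpark endurance figures, not specs for any particular part): flash-style configuration memory is typically rated around 10^4 to 10^5 write cycles, so reprogramming once a second would wear it out in roughly a day. A technology aimed at replacing DRAM needs endurance many orders of magnitude higher, which is why endurance is the make-or-break spec here.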

    You do raise a good issue here: data needs to move. For instance, player inputs are data fed through Unity to graphical and audio outputs.

    Can I correct you on the point about slow ARM processors:
    • One, they are not slow: the ARM A10 has four cores and runs at 2.3 GHz, and other ARM chips have eight cores.
    • Two, the processors could be from any manufacturer, e.g. Intel, AMD, IBM, Qualcomm, Samsung...
    There are two options: move the data to the logic, or move the logic to the data. A smart system would assess the sizes of both and choose whichever moves less data, since the latency of moving data is the most expensive part of any computing system (see the sketch below).
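
    Sketched as a decision rule (my illustration; a real scheduler would weigh far more than byte counts):

        #include <cstddef>

        // Naive placement heuristic: ship whichever is smaller, the data or
        // the code, since bytes moved dominate the cost in this model.
        enum class Placement { MoveDataToLogic, MoveLogicToData };

        Placement choose(std::size_t data_bytes, std::size_t code_bytes) {
            return data_bytes < code_bytes ? Placement::MoveDataToLogic
                                           : Placement::MoveLogicToData;
        }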

    But look at modern HBM2 memory sizes: they could allow 32 GB of memory next to the processors. So it also depends on the processor/memory granularity and balance in a system.

    Note: HP is looking at an optical, light-based data-transfer bus.
     
  10. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,301
    You do know that GHz is not a very useful metric today, right?
    A CPU can execute a different number of instructions per second regardless of the clock, and its instruction set also affects what can be done with it.
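
    As a worked example with made-up figures: throughput ≈ clock × instructions per cycle. A 2.3 GHz core averaging one instruction per cycle retires about 2.3 billion instructions a second; one averaging four retires about 9.2 billion. Same GHz, four times the work.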

    If anything, this thread has convinced me that reading technology news that is not directly related to the problem you're working on is a waste of time.

    There's simply too much information to reasonably process. It is a sea of numbers and data, and you can waste eternity getting excited and imagining "all the possibilities"... while doing nothing useful. There are many things that would be interesting to work with (as in "participate in development"), but reading news about them seems to be quite pointless.
     
  11. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    If you work in an industry based on technology, not being interested in new technology trends is rather short-sighted, isn't it?
     
  12. boxhallowed

    boxhallowed

    Joined:
    Mar 31, 2015
    Posts:
    513
    That's not what they said. They are speaking of tech "trends": narratives woven by "tech journalists" who either have a very poor working knowledge of the subject or are paid off. "Trends" are not where the advancements are being made; well-known techniques being improved are where we are succeeding with new technology.
     
  13. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,301
    No. It is the reasonable thing to do. There are things that you need to do, things that are related to the task at hand, and things that are not related to it.

    If you're building a game, optical computers, memristors, alternative architectures, silicon brain chips and all that jazz are distractions that have nothing to do with your project, because you'll see the effect of those advances (assuming they gain popularity and achieve mainstream adoption) in ten years or more. Meanwhile, you'll have very specific needs, very specific APIs and very specific tasks to do. Possibilities are things that do not exist right now.

    For example, the article about brain chips spends most of its time trying to wow the reader without providing any substantial info. They want you to start thinking about an amazing possible future with smart machines. Meanwhile, to figure out how much bullshit the article contains, you'd need to spend a few hours on research, and in the end you'd gain no useful info you can apply right now. So you'd have wasted your time doing searches that result in nothing useful.

    Sherlock Holmes (from Conan Doyle's books) said:
    I think he had a point, despite being a fictional character.
     
    Ryiah likes this.
  14. ShilohGames

    ShilohGames

    Joined:
    Mar 24, 2014
    Posts:
    2,980
    Performance is relative. How does that 2.3GHz ARM chip compare to a similarly clocked Intel Xeon? The ARM is slower than a Xeon.

    I am very excited about a 32GB APU with HBM2, but that is different from your idea of an ARM chip on each memory stick. A powerful CPU/APU with 32GB of HBM2 is going to be massively faster than your ARM chip on a memory stick.

    Next question: how would your ARM on a memory stick deal with operations that require data from multiple memory sticks? For example, say you wanted to add two Vectors, and the Vectors were located in memory on different sticks. Which ARM chip would handle the Vector addition, and how would it gather the data from the other stick? This would come up constantly during normal operation, and it would put load on the bus between the memory sticks, which negates the main benefit of doing a compute operation on a memory stick in the first place (see the sketch below).
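
    To make the objection concrete (a hypothetical sketch; the names are invented for illustration):

        struct Vector3 { float x, y, z; };

        // Stub standing in for a read across the inter-stick bus; in a real
        // system this is exactly the traffic on-stick compute was meant to avoid.
        Vector3 remote_fetch(const Vector3& value_on_other_stick) {
            return value_on_other_stick;
        }

        // One stick must 'own' the operation; the other operand crosses the
        // bus anyway, so the locality win evaporates.
        Vector3 add_across_sticks(const Vector3& local_a,
                                  const Vector3& remote_b) {
            Vector3 b = remote_fetch(remote_b);
            return {local_a.x + b.x, local_a.y + b.y, local_a.z + b.z};
        }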

    And would your "smart memory" be smart enough to handle operations on custom data classes (like Vector3) and structs, or would it only be able to push compute operations for standard data types (like int and float) out to the sticks?

    Just a quick note: I have actually designed circuits (including complex embedded systems with microcontrollers and more), so I am not just reading press releases and then posting nonsense in a forum.
     
    Ryiah likes this.