
When Moore's Law Ends?!

Discussion in 'General Discussion' started by Arowx, Dec 15, 2014.

  1. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Nope, because there's another 10 years of life in the old dog yet. Plus there's this whole GPU computing thing that's only just getting started, which Apple has planned ahead for with the Mac Pro. Using the GPU as a processor is probably going to dominate where things go for the next few years, with CPUs becoming 24-48 core monsters, etc. It's already happened with Xeons.

    Intel and Nvidia are at least 5 years ahead of what you're getting drip-fed. You're given a 10-15% upgrade each time a new cycle of products comes out. It's in everyone's best interests to drip-feed, because they can sell you several CPUs and GPUs as opposed to one that lasts 5 or more years, and this is why Nvidia won't suddenly bring out a card 100x better than AMD's, since they'd both lose money then.

    Moore's law? More like market law.
     
    lorenalexm likes this.
  2. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,071
    Little to no meaningful impact in the short term. We have so many people, both individuals and companies, that are still using outdated OSes and older hardware. Intel may very well hit the upper limits of silicon in a few years, but it will take a few additional years for most people to actually migrate to the resulting hardware.

    By the time we reach anything that could be considered "long term", we'll likely have alternatives that allow us to continue moving forwards. It isn't like silicon is the only possible way to go and it isn't like we're even remotely close to the true upper limits. We're simply close to the practical upper limits.
     
    aaronhm77 likes this.
  3. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,789
    The graphene transistor speed record currently stands at 100 GHz. IBM has successfully been able to manufacture large sheets by growing graphene on substrates with few flaws. It also makes great supercapacitors that you can charge in seconds and discharge over hours. Coming soon to your cellphone and desktop.
     
  4. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    It's OK guys, I had a mathematical epiphany: you can get 28 nm transistors into a less-than-5 nm square, you just have to stack them up.

     
  5. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    So you could have a chip that powers itself?! ;0)
     
  6. BFGames

    BFGames

    Joined:
    Oct 2, 2012
    Posts:
    1,543
    Did anyone mention Quantum computers? :D
     
    aaronhm77 likes this.
  7. drhousemd

    drhousemd

    Joined:
    Dec 6, 2012
    Posts:
    24
    It ends in the Singularity, yes that's right, like when Johnny Depp cybernetically merges with Google.
     
    Gigiwoo, aaronhm77 and Ony like this.
  8. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    No, that's Ray Kurzweil. Johnny Depp is the actor; Kurzweil is the Singularitarian working at Google to make it smarter than all of us.
     
  9. Tomnnn

    Tomnnn

    Joined:
    May 23, 2013
    Posts:
    4,148
    @Arowx can you comment on computers like mine? The graphics card is connected to the integrated card, not the display. It's dedicated to doing 3D / graphics calculations and then feeding them to the integrated card to be rendered on the screen. Is this a step in the right direction for performance?

    I treat my laptops like desktops that are easy to move, they're always plugged in. I've only heard that this kind of setup saves power, which will greatly annoy me if it's trading performance to do so!! :mad:
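
    For reference, a minimal sketch of how to check which adapter Unity actually ends up rendering on with a setup like that (the GpuReport class name is just for illustration):

    Code (CSharp):
    using UnityEngine;

    // Logs the adapter Unity is rendering with. On a hybrid (integrated +
    // dedicated) laptop this tells you whether the driver routed the game
    // to the dedicated chip or left it on the integrated one.
    public class GpuReport : MonoBehaviour
    {
        void Start()
        {
            Debug.Log("GPU: " + SystemInfo.graphicsDeviceName +
                      " (" + SystemInfo.graphicsDeviceVendor + ")");
            Debug.Log("VRAM: " + SystemInfo.graphicsMemorySize + " MB");
            Debug.Log("API: " + SystemInfo.graphicsDeviceVersion);
        }
    }

    If the log names the integrated chip instead of the dedicated one, the driver's switching profile usually needs to be pointed at the game's executable.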
     
  10. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,789
    They might be good for lighting or AI decisions where you want the lowest possible energy to decide among an array of possibilities, which is the sort of thing they excel at, like decryption, but classical binary computing platforms will kick their ass for the type of stuff we do with vectors and transforms. But yes, you could use the supercapacitor films and the graphene transistors on a single chip. They are constantly making big breakthroughs with this stuff, at least once a week these days, and can now manufacture tons instead of beakerfuls a year.
     
  11. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,789
    Kurzweil can kiss my royal behind. I don't want to download my arse into anything, thanks. Being human means I have ennui... I am a creator. This guy is Captain Borg. He shoulda stuck to synthesizers. I can outsmart a machine any day of the week.
     
  12. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
  13. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    As usual people entirely miss the points I make in each thread early on. Market law, not Moore's law. Our consumer rate of advancement is entirely a financial concern.
     
  14. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Well, unless you explain it clearly to us dunderheads, how do you expect to sound wise and insightful?
     
  15. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    I thought it was pretty clear: the tech for way better computing is available right now and could be brought to market, but it would be at their expense. As I said before, the trend is a 10% to 15% performance increase with each new CPU over the last within the same market category. You don't get a bigger jump because of money.

    Money dictates and drives most of the 'innovation' and 'trends' that you see in computing. If tomorrow, nvidia had a gpu 5,000 times better for the same price, they simply would not sell it, ever. Nor would AMD. That is a loss of tonnes of cash. Progress is a fixed rate, and it's deliberate, not constrained by technology.

    Size doesn't matter, nor does it make sense. Performance matters though. And you get that drip fed.
     
  16. Ony

    Ony

    Joined:
    Apr 26, 2009
    Posts:
    1,973
    I would think that if tomorrow nVidia had a GPU that was 5,000 times better than anything else, they would most certainly sell it. Then they would continue to improve from that point onward. There would be no sound business reason not to. It wouldn't be a loss of tonnes of cash, it would be a gain of total market share while their competitors fought to catch up.

    If they can hit 5,000 times better, they can certainly go further from there. The bar would simply be reset. I don't think they purposely keep the bar low just to drip feed. I think they purposely keep trying to do better to sell more powerful GPUs to consumers every chance they get. As you say, performance matters. People care about performance. Up that performance by 5,000 times the norm, and you're sitting on a gold-mine.
     
    Kiwasi, Gigiwoo and hippocoder like this.
  17. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    That's the problem. If they have that threshold, then there's not much room for improvement. Why do it? Pointless. Assuming they could go further, then yeah, they would. I was assuming 5,000 times better would be the ceiling.

    If they sell the 5k wonder GPU, then this forces AMD to do the same and they're back to square one. Sure, everyone benefits, but the profits were only for a single product cycle within a 6 to 12 month period.

    I guess it's an argument that's easy to pull in different directions.

    Ask yourself though: could Nvidia release a GPU right now that is 5 times better than what's on the market? They probably could, but they won't. Where is the number that makes sense?
     
  18. Ony

    Ony

    Joined:
    Apr 26, 2009
    Posts:
    1,973
    But that's the thing. Why would they not? You ask where the number is that makes sense, and I ask where the number is that determines the rate at which they have supposedly "conspired" to drip feed at. If they could release a GPU tomorrow that is 5 times faster than anything else out there, what possible reason would there be to hold back? Where is the number where they should? I don't believe there is one. I could of course be wrong, but so far I'm not convinced I am.
     
  19. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Because their manufacturing relies on using the same layout and prefab for several revisions. It's a massive cost saving on their part. They could still turn a profit, but it wouldn't be anywhere near as big as, say, doing 6 different GPUs from the same prefab with minor tweaks. They've been doing this forever. That by default is drip-feeding. There's a ton of additional reasons why it makes sense to bring smaller and more frequent upgrades as opposed to giant leaps, not least that the other component parts need to be up to scratch. Say they make a GPU 5x as fast? Without customers also having a CPU capable of throwing that much data at it quickly enough, it's not going to show up well on benchmarks.

    I've got a hydro-cooled 780 Ti which is grotesquely overclocked, with the same deal for the 4770K, which is running at 4.6 GHz. That's fast, but they both need to be good to get the most from the tech. Nvidia's new line-up is capable of being much faster and already is. But we're being sold the 970... the 980... the 990 or whatever, each one a good 10-15% faster than the previous, with a price to match.

    As time goes on, they take the previous fast one and rebrand it at a lower price point (still the same prefab and GPU).

    The difference between the 970, the 980 and whatever is what they disable and remove. The power is there, the design is there, it's all worked out. But they make more money this way. This is their business model, not the business model we'd prefer. And that's fine.

    Still drip-fed though. Proof is the fact it's the same GPU with bits removed, frequency knocked down, etc. :)
     
    Last edited: Dec 22, 2014
  20. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    LOL, absurd: 5,000x the performance of a Titan Z GPU.

    LOL, isn't that what Mantle and tessellation are for: to reduce the CPU load and untether the GPU from having to wait on a single rendering thread?
     
  21. Ony

    Ony

    Joined:
    Apr 26, 2009
    Posts:
    1,973
    I'm going to agree with some of your points and disagree with others. I don't think the issue of whether they would release a GPU that powerful tomorrow or not can be simply put in black and white. There are a ton of factors involved, most of which neither of us is privy to (assuming that you, like me, do not work at a video card research center).

    And now I have to get back to what I should be doing, which is finishing my game instead of sitting in the forum talking about GPUs. :p
     
    hippocoder likes this.
  22. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,071
    Except that customers will start to expect such improvements. Part of why technology does not come out in leaps and bounds is that it has to keep pace with the speed of research and development.
     
  23. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,500
    If that were true then releases wouldn't work as they do unless there were some form of massive collusion conspiracy consistently applied across all industries. In my time there have been multiple occasions where one vendor has released a significant upgrade and the other has had to compete on price rather than performance, or by putting out wildly inefficient/hot chips, or by "refreshing" their current line. If they were truly so far ahead of what they're releasing then that would never happen: whoever released first would always be at a major disadvantage, because the other could just wait to see what gets released, set their own yardstick a little further ahead, and start the production lines. With that in mind, even if both did have a healthy R&D head start on the market, they'd eat that up just by competing with each other.

    Of course it's possible that they all do something shonky every once in a while to put on the appearance of tight competition...
     
    Ony likes this.
  24. Ony

    Ony

    Joined:
    Apr 26, 2009
    Posts:
    1,973
    If nVidia technology made a leap tomorrow and was suddenly capable of making a GPU that was 5,000 times faster than any other, then that is where the new bar would then be set to improve from. It's not as if they could reach a technological leap like that and then think, "Meh, it was a fluke, and we certainly can't do better than this. Let's not do anything with it." No, the tech that allowed them to reach that height would then be the bar they would use to improve from.

    As far as customers expecting improvements all the time after that, what else is new? Customers always expect that.
     
  25. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,071
    That's just it. A huge leap could very well be possible right now.

    http://www.extremetech.com/extreme/...00-times-faster-using-standard-cmos-processes

    Changing from silicon to graphene, though, is a pretty drastic step and isn't something that's easily repeatable.
     
  26. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,500
    Sure, but I have to agree with Hippo there that they probably wouldn't release that all at once, unless they believed the competition had made similar advancements (I called it a conspiracy before based on the idea that everyone was 5 years ahead and rationing it out). They could instead give us a 10x improvement - enough to knock the competition about - and then a bunch of small steps over time. This would partially be dictated by the market, too - nobody is going to pay 5000x more for a video card even if it's that much more powerful (based on my GPU that'd be about two million bucks...), and at current prices they probably can't make their money back if it's so powerful that nobody ever needs to buy another one, so they'd have to do something to get people to buy a bunch of them to be able to maintain and increase profitability.

    Edit: On a personal note, I'd happily pay more money for tech that was less disposable, even if only to increase long-term efficiency of environmental resources. I was pleasantly surprised when I realised that my current card is 4 years old and still going pretty well. Even that only cost me around $350, in Australia no less.
     
  27. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,500
    Plus, there's also the issue that having faster chips isn't the whole picture. You need other hardware and standards that can interoperate with those chips. You need software, compilers and (potentially) language support for the features that help make them fast. So it could well be true that research labs have stuff that's "5 years ahead" of what's in a new PC today, but that's no good to anyone until those research labs - and the many and varied vendors, suppliers, and developers they work with - have got together to figure out not only how to make that stuff work with each other, but also how to successfully introduce it to a market that's already cool with what it's got.

    It's far easier to sell an incremental upgrade to existing stuff than it is to tell people to throw out their whole system 'cause, hey, this new stuff is way cooler - it can render a web page 100x faster! (Who cares?)
     
  28. Ony

    Ony

    Joined:
    Apr 26, 2009
    Posts:
    1,973
    OK, I'm convinced, thanks everyone for your thoughts.

    The problem is, there's an alien sitting here next to me who has the information on how to create a GPU that's 50,000 times the speed of the current ones and he wanted me to find someone who could use it. I feel bad telling him to go home. :(

    Maybe if I give him some Reese's Pieces it won't be so bad.
     
    Kiwasi and hippocoder like this.
  29. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,071
    Yes, same here. My current card (GTX 460 w/ 1GB) will be four years old starting next year. It has pretty much outlasted any other card I've had, and the only real reason I need an upgrade is that drivers beyond a certain version are known to have stability issues with my model.
     
  30. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,789
    Massively faster GPUs or CPUs will show up only after the US military has ensured that their gear is faster. If one assumes pro-gear tech is the top of the current line, the insider word is that we are about 30 years behind. Couple that factoid with what hippocoder has stated and that is about the current reality. Graphene is getting pretty close to being prosumer tech, and pretty close means 3 to 5 years. IBM can grow large sheets of flawless graphene on a silicon-germanium substrate, IIRC, and manufacturing processes have progressed from the beakerful level to tons yearly. This is moving along at a fast clip as previous flaws are being overcome by teams ranging from university labs to corporate and large private or foundational materials research labs, all competing for the patents that will bring in very tidy sums for a long time.
     
  31. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Things are slowing down: CPUs and GPUs are getting faster and cheaper more slowly, or are getting faster but not cheaper.

    It is unrealistic to expect an exponential trend to continue forever.

    Think of the wheat-on-the-chessboard analogy: every square you double the number of grains, and by the time you get to the last square there has never been that amount of grain (you would need about 1,676 years' worth of grain at 2013 global production levels to fill the last square).
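
    The numbers bear that out if you run them; a quick sketch, where the per-grain weight and the 2013 harvest figure are rough assumptions of mine:

    Code (CSharp):
    using System;

    // Quick sanity check on the chessboard doubling. Double precision is
    // plenty for an order-of-magnitude check.
    class Chessboard
    {
        static void Main()
        {
            double grainsOnLast = Math.Pow(2, 63);        // grains on square 64
            double grainsTotal  = Math.Pow(2, 64) - 1;    // grains on the whole board

            // Rough assumptions: ~0.065 g per wheat grain, ~713 million tonnes
            // of wheat produced worldwide in 2013.
            double tonnesTotal    = grainsTotal * 0.065 / 1e6;  // grams -> tonnes
            double yearsOfHarvest = tonnesTotal / 713e6;

            Console.WriteLine($"Last square: {grainsOnLast:E2} grains");
            Console.WriteLine($"Whole board: {grainsTotal:E2} grains");
            Console.WriteLine($"~{yearsOfHarvest:F0} years of 2013 world wheat production");
        }
    }

    That prints roughly 1,680 years, the same ballpark as the figure above.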

    But run it in reverse: you keep halving the size of something every 1-2 years, and eventually things get too small to divide, or in the case of transistors, too small to build reliable computer chips from. Even now chip makers are having to build in error correction to compensate for erratic behaviour at smaller scales.

    Graphene still hits physical nanoscale limits that impact production, reliability and durability. Also, let me know when they solve the band gap problem with graphene, as this prevents it from being used as a transistor and a real replacement for silicon.
     
  32. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    But on the plus side, GPUs are at about 28 nm and can gain a speed boost going down to 14 nm and then 7 nm, so that's roughly 8x the performance of current GPUs. They are also starting to stack memory in 3D on the chip, which improves access speed, reduces power needed and increases bandwidth, so we should still get 4K and probably 8K at 60 fps gaming in the future.
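
    Back-of-the-envelope, the headroom from those two shrinks is just area scaling; a rough sketch, assuming density alone and ignoring clocks, power and yield:

    Code (CSharp):
    using System;

    // Naive density scaling from process shrinks: transistors per unit area go
    // roughly with the inverse square of the feature size. Real node names don't
    // track geometry this cleanly, so treat it as a ballpark only.
    class NodeShrink
    {
        static void Main()
        {
            double[] nodes = { 28, 14, 7 };   // nm
            foreach (double node in nodes)
            {
                double density = Math.Pow(28.0 / node, 2);  // relative to 28 nm
                Console.WriteLine($"{node} nm: ~{density:F0}x the transistors per mm^2 of 28 nm");
            }
        }
    }

    So two shrinks give roughly 16x the transistors in the same area; how much of that turns into real frame rate depends on clocks, memory bandwidth and power budgets, which is where the more conservative 8x guess comes from.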

    So I think desktop- or console-powered VR could really be amazing in the future, e.g. 4-8K HMD resolution panels.

    But mobile VR may be limited to 4K or lower resolutions due to mobile GPUs being limited. I hope I'm wrong, as a future where mobile phones can be plugged into or linked to a pair of VR/AR glasses and immerse you in a 'you can't believe it's not reality' experience would be amazing.
     
  33. high-octane

    high-octane

    Joined:
    Oct 16, 2014
    Posts:
    84
    These faster and smaller GPUs and CPUs will be wasted on Flappy Bird in Ultra HD and more godawful endless runners and zombie apocalypse shooters, only now on head-mounted displays. The usual crap, but massively parallel and GPU accelerated.
     
    Kiwasi and darkhog like this.
  34. Centigrade

    Centigrade

    Joined:
    Dec 22, 2014
    Posts:
    63
    The atomic nature of matter and the speed of light mean there is an upper limit. Building processors in the third dimension (e.g. extending them into cubes) is potentially limited by the constraints of heat production/dissipation.

    Edit: I believe the projected end is about fifty years from now.
     
    Last edited: Dec 23, 2014
  35. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Oh, I don't think it will all just stop; there's too much money in the next faster CPU/GPU. It will just slow down as marketing realise that the engineering guys can only manage a couple more steps down and then a few stacking tricks. Intel plan to produce 7 nm chips by 2018, and below 5 nm is where it all gets a bit quantum, according to the physicists. Three years to 2018, maybe 5 to 10 years to push out as much as they can from it, and then it will be a case of diminishing returns and the search for new technologies.

    Wow, by 2020 we could be at peak silicon computing, where everything plateaus.
     
  36. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,071
    Which, as ippdev has stated, is roughly when we should have made the jump over to graphene. Assuming Intel stays on their timetable, they will need options by that point. I'm expecting at least some time spent optimizing their architectures.
     
  37. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Yep, you can say what you like on a forum, but it's better to link to factual news items that cover the topic.

    IBM expects chips using carbon nanotubes (CNTs) by 2020, providing a chip 6 times faster for the same amount of power as a modern chip.

    http://www.extremetech.com/computin...rbon-nanotubes-can-restore-moores-law-by-2020

    Last I heard, graphene, although an amazing material, has still not been used to produce any actual binary transistors; it was only used in an analog RF circuit. It does sound amazing, as it could give a massive speed bump to computer chips (GHz), but there is a problem giving graphene a band gap. Maybe CNTs could be used with graphene?
     
  38. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,071
    Which may not even have all the details. The article states that it is "unclear whether IBM has made a specific breakthrough that led it to this announcement or just a general feeling of progress".
     
  39. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,789
    I had linked to a paper earlier. Earlier, say nine months ago, they had issues with RF leakage due to band gap issues. That has been overcome in a number of ways, as in this link: http://arxiv.org/pdf/0801.2744.pdf. Several other methods, such as two-lane "highways" whose edge properties have unique characteristics, also limit RF leakage and provide a band gap if deposited on certain substrates. I get a lot of the graphene updates from the site stardrive.org, which today had an article about germanium transistors supplanting silicon at this link: germanium transistors. This is the latest work from a quick Google search using the past month as a limiter: http://www.sciencedirect.com/science/article/pii/S0379677914004317. The links are out there if you use Google properly and can understand WTF some of these papers are conveying.
     
    Ryiah likes this.
  40. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    At some point there will be a tipping point and everyone will go boom and have a massive leap forward, but until then, make as much money as possible with existing tech. I remember a leaked Intel memo that did the rounds saying they'd be OK with milking the existing CISC architecture for another 10-20 years (which I read about 10 years ago, if memory serves).

    It'll be about cores for a long time - we'll have 32-core monsters, by which time diminishing returns will start to kick in, and there's not going to be much point in going higher for general consumers unless you want to replace the GPU. General consumers will probably be happy with 8 cores for a long time yet, so those will get milked.

    What happens for consumers when extra cores aren't utilised? That is approaching, and we can see that game developers want 8 cores, and that it's probably a sweet spot if the new consoles are any judge. MS and Sony didn't pull this number out of their asses; they polled their developers.

    Having more than 8 cores at this point probably isn't as desirable as doing everything faster and running those cores more efficiently. To this end, we are seeing GPUs step up to the plate with GPU compute programs.
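
    For anyone who hasn't poked at it, the C# side of GPU compute in Unity is already only a few lines; a minimal sketch, where the Square.compute asset, the CSMain kernel and the Result buffer name are made up for illustration:

    Code (CSharp):
    using UnityEngine;

    // Fills a buffer on the CPU, hands it to a compute shader, and reads the
    // results back. The kernel itself lives in a separate .compute (HLSL) asset.
    public class GpuComputeExample : MonoBehaviour
    {
        public ComputeShader shader;   // assign the (hypothetical) Square.compute asset

        void Start()
        {
            const int count = 1024;
            var data = new float[count];
            for (int i = 0; i < count; i++) data[i] = i;

            var buffer = new ComputeBuffer(count, sizeof(float));
            buffer.SetData(data);

            int kernel = shader.FindKernel("CSMain");
            shader.SetBuffer(kernel, "Result", buffer);

            // 1024 elements / 64 threads per group (assuming [numthreads(64,1,1)] in the kernel).
            shader.Dispatch(kernel, count / 64, 1, 1);

            buffer.GetData(data);   // read the results back to the CPU
            buffer.Release();
        }
    }

    The point isn't the squaring, it's the dispatch model: fire off thousands of identical little jobs in bulk, which is exactly the direction the hardware is heading.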

    In 10-15 years, this will probably be mainstream on mobile phones (thanks Imagination Technologies for putting the scares out!) because of custom hardware. GPUs and CPUs are getting limited only in the general-purpose sense.

     
    Ryiah likes this.
  41. derf

    derf

    Joined:
    Aug 14, 2011
    Posts:
    354


    Not sure if this has been mentioned.
     
  42. aaronhm77

    aaronhm77

    Joined:
    Nov 22, 2012
    Posts:
    65
    Instead of flat 2D CPU chips, the future will have 3D cube CPU chips.

    Just like you are saying.

    And they will be made of crystals, just like in sci-fi movies.

    Japan is already working on this stuff.

    Silicon Valley will be slow in following Japan's progress.

    Also, a common affordable PC in the future will have one central computer unit surrounded by multiple smaller USB-exchangeable satellite computers (like Raspberry Pis), so that a PC in the future will be a hundred times more powerful than today's single tower unit.

    This actually should and could be happening today, but because people are so full of sh$t it ain't happening.
     
  43. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,071
    Which most applications, games very much included, very likely still won't be able to take full advantage of. More cores do not automatically equate to more power; they merely equate to more tasks running at once. Some aspects of a program can benefit, but others simply cannot.
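
    The usual way to put a number on that is Amdahl's law: if only part of the work can run in parallel, the serial part caps the speedup no matter how many cores you add. A tiny sketch, where the 70% parallel fraction is just an assumption for illustration:

    Code (CSharp):
    using System;

    // Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
    // the work that parallelises and n is the number of cores.
    class Amdahl
    {
        static void Main()
        {
            double p = 0.70;   // assume 70% of a frame parallelises cleanly
            foreach (int cores in new[] { 2, 4, 8, 16, 64 })
            {
                double speedup = 1.0 / ((1.0 - p) + p / cores);
                Console.WriteLine($"{cores,2} cores: {speedup:F2}x");
            }
            // Even with infinite cores the cap is 1 / (1 - p), about 3.3x here.
        }
    }

    Which is why throwing 32 cores at a mostly serial main thread doesn't buy you much.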
     
    hippocoder likes this.
  44. aaronhm77

    aaronhm77

    Joined:
    Nov 22, 2012
    Posts:
    65
    How about biological computers, where we grow them in our gardens, harvest them and use them for computer stuff?

    Someone just invented the first non-biological LEAF that works exactly like a real leaf, but it is not biological.

    Intelligent design is what created this universe; as children of the universe, we are merely REMEMBERING and re-learning what we forgot long ago how to do...

    ... when WE created this world as the CREATORS of it and then jumped into it as explorers of OUR creation. :)
     
  45. shaderop

    shaderop

    Joined:
    Nov 24, 2010
    Posts:
    942
    The lizard people. They are all controlled by the lizard people.
     
    Gigiwoo, Ony and aaronhm77 like this.
  46. aaronhm77

    aaronhm77

    Joined:
    Nov 22, 2012
    Posts:
    65
    They (the genius geeks on high) know that a computer works faster, better, more efficiently and with less energy when memory can be pulled and gathered to the CPU from all directions (3D), rather than from a 2D chip where memory has to travel from further away and wait in line to reach the CPU.

    This is only common sense.

    I think that they are probably experimenting and EXHAUSTING all the possible 2D methods of 2D CPU chips before they PROGRESS to 3D cube chips.

    They will be built (of course) with 3D-printer-type CPU-chip-making robot machines, etc. :)
     
  47. aaronhm77

    aaronhm77

    Joined:
    Nov 22, 2012
    Posts:
    65
    and you are not joking either... because they are
     
  48. Gigiwoo

    Gigiwoo

    Joined:
    Mar 16, 2011
    Posts:
    2,981
    Brilliant! Absolutely brilliant!
    Gigi
     
  49. Gigiwoo

    Gigiwoo

    Joined:
    Mar 16, 2011
    Posts:
    2,981
    To all this, I say, "Innovator's Dilemma". Three years ago, my engineers told me, "No way! Mobile devices will never have the gaming power of ...blah, blah, blah." They didn't understand the way DISRUPTIVE technology works. The proof is that my wife's new phone has a quad-core 2.7 GHz processor, 3 GB of RAM, and a higher resolution than my best GAMING desktop.

    In my lifetime, we've gone from this:



    To this:



    Pretending it's all somehow going to come to a grinding halt is extremely foolish. Crazy days are fast approaching.

    Gigi
     
    Kiwasi, Ryiah and Ony like this.
  50. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    LOL, so an exponential trend that has been ongoing all your life cannot possibly hit the physical 'natural' limits that occur as the smaller and smaller sizes needed to make ever more powerful computer chips cause production, heat and durability issues?

    I'm not claiming a 'grinding halt'; I just expect things to slow down and eventually stop, or for new technologies to take their place, in the next decade or two.

    You are like the ruler who thinks he can pay the debt by putting twice as many grains of wheat on the next chessboard square as the last square.

    Now, if you could just pay me 1p on the first square, then tomorrow double it on the next square, and keep doubling every day for 64 days?! :)
     
    Last edited: Dec 23, 2014
    Gigiwoo likes this.