Are processors pushing up against the limits of physics? (arstechnica.com)
74 points by ub on Aug 15, 2014 | 43 comments



A metric that really puts things in perspective is the following: take a common consumer CPU, clocked at 3.4 GHz. That means it executes 3,400,000,000 cycles per second. Divide the speed of light by this number, and you obtain approximately 0.088 m.

During the time your standard desktop CPU takes to finish a cycle, light only travels 9 centimeters.
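As a back-of-the-envelope sketch (assuming light in vacuum, no code in the original, just the same division):

    # How far light travels during one cycle of a 3.4 GHz clock.
    c = 299_792_458          # speed of light in vacuum, m/s
    f = 3.4e9                # clock frequency, Hz (cycles per second)
    distance_per_cycle = c / f
    print(f"{distance_per_cycle * 100:.1f} cm per cycle")   # ~8.8 cm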


Just a side note that light travels slower in media other than air. It's not ~300,000 km/s but ~200,000 km/s in FR4 microstrip (6"/ns, where 1 ns is one cycle of a 1 GHz clock).


translated: "Just an aside: note that the speed that light travels in any medium is slower than it travels in vacuum. Its speed is no longer 3e8 m/s, but a third less in, for example, FR-4 microstrip (a popular prototype board dielectric). The effective speed is about 6 inches per ns, or at least, 6 inches per cycle of a 1 GHz clock."
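A minimal sketch of where that figure comes from, assuming a bulk relative permittivity of roughly 4.4 for FR-4 (the effective permittivity of a microstrip trace is a bit lower, which is why the quoted number lands nearer 6 in/ns):

    import math

    # Signal speed in a dielectric is roughly c / sqrt(relative permittivity).
    c = 299_792_458            # m/s in vacuum
    er = 4.4                   # assumed bulk permittivity of FR-4 (approximate)
    v = c / math.sqrt(er)      # ~1.4e8 m/s
    inches_per_ns = v * 1e-9 / 0.0254
    print(f"{v/1e3:.0f} km/s, about {inches_per_ns:.1f} in/ns")   # ~143,000 km/s, ~5.6 in/ns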


I'm not really sure how that puts things into perspective. I actually have no concept of how fast photons are whizzing around the room.


Takes light 8 minutes to get from the sun to here. (To put it in perspective.)


That's effectively how I see it: as a simple human, I consider light an instantaneous phenomenon. Its speed seems to be the maximum attainable for just about anything, yet a man-made processor can execute a simple computation in less time than it takes light to travel 9 centimeters. Isn't that impressive? Doesn't that cast doubt on how far we can improve our CPUs? Just remember that a thousand years ago, our best creations were basically piles of rock.


The pyramids, built over four millennia ago, a marvel of accuracy and the tallest man-made structures until around the year 1300, would beg to differ.

Unbelievable:

The pyramid remained the tallest man-made structure in the world for over 3,800 years,[8] unsurpassed until the 160-metre-tall (520 ft) spire of Lincoln Cathedral was completed c. 1300. The accuracy of the pyramid's workmanship is such that the four sides of the base have an average error of only 58 millimetres in length.[9] The base is horizontal and flat to within ±15 mm (0.6 in).[10] The sides of the square base are closely aligned to the four cardinal compass points (within four minutes of arc)[11] based on true north, not magnetic north,[12] and the finished base was squared to a mean corner error of only 12 seconds of arc.[13] The completed design dimensions, as suggested by Petrie's survey and subsequent studies, are estimated to have originally been 280 Royal cubits high by 440 Royal cubits long at each of the four sides of its base. The ratio of the perimeter to height of 1760/280 Royal cubits equates to 2π to an accuracy of better than 0.05% (corresponding to the well-known approximation of π as 22/7).


Not much perspective to be had here unless you are assuming a single photon's distance traveled.

The analogy breaks down when you consider thousands or millions/billions of photons traveling simultaneously; then you can measure in miles.

A single photon's travel-distance doesn't mean much in this context.


As far as I know, those 9 centimetres are still the fundamental speed limit for information to travel.


That's really what makes me worry about how far we can push our processors. I don't think Moore's law will hold much longer for CPUs, unless we manage to get very good quantum computers very soon.


I think it's a great way to put it in perspective, for me at least: light is the fastest thing in the universe, the ultimate constant for reference. We could measure computation in light-cycles, much as we use light-years for really long distances. I don't get frequencies the way I do distances; I think it's because I deal with distances all the time but less so with frequencies.
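If you want to play with that "light-cycle" unit, here's a toy sketch (the frequencies are picked arbitrarily for illustration):

    # One "light-cycle": how far light travels in vacuum during one clock cycle.
    C = 299_792_458  # m/s

    def light_cycle_cm(freq_hz):
        return C / freq_hz * 100

    for ghz in (1, 2, 3.4, 5):
        print(f"{ghz} GHz -> {light_cycle_cm(ghz * 1e9):.1f} cm per cycle")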


The answer to this question is yes. People like to talk about how the real issue is economics, not physics, but the inability to make chips much smaller economically is very much a reflection of physical constraints on the processes we're currently using to manufacture chips (particularly the lithographic process), and as of right now there's no clear successor to those processes that's going to enable them to get smaller for cheaper.


Isn't ASML still pursuing EUV?


Right now, the laws of economics are a bigger problem than the laws of physics. Field-effect transistors have been shown to work at 5 nm and even 3 nm. However, the new lithography technologies needed to reach those resolutions cheaply are nowhere near ready.


I heard a rule of thumb that fab costs go up about 40% with each reduction in size. Is that accurate?


Intel paid ASML ~$4.1 billion to deliver 10nm lithography. That was ~1.5 years ago (http://www.intc.com/releasedetail.cfm?ReleaseID=690165). ASML's stock is a good indicator of what's happening in this business.

Intel has 14nm running, which is 16% better on SRAM cells than TSMC's 16nm.


> Right now, the laws of economics are a bigger problem than the laws of physics.

The "laws of economics" and the "law of physics" are two sides of the same coin in this case.

It's the laws of physics that prevents us from producing arbitrarily cheap chips, it's not two separate factors.

The "laws of physics" are our hurdles, and the more we pass the cheaper the chips get.


Not even close, but there's good evidence they're pushing the limits of silicon transistors.


In June of this year HP announced its plan to build "The Machine". Regardless of how feasible their project is, I think they are right in pointing out that memory is the current bottleneck in computer engineering. We don't need faster processors. Focusing on the size of transistors, which are already insanely small when you think about it, may be a mistake.


The question I want to ask is, are generic CPUs now fast enough that people no longer need faster CPUs? I have an i7-4771 on my desktop (bought instead of 4770K because I wanted TSX... thanks Intel ;), and I can't really imagine much use for an even faster CPU unless I'm gaming or doing heavy compute work.


No, I don't think the things most people do with computers should require any faster hardware; the problem is that software is often being written to require increasingly more resources under the false assumption that processing speed and memory are "infinite" or close to it.

The exponential growth that started many decades ago has promoted a culture of extreme waste. From the earliest notions of "premature optimisation", and the rise of structured programming and OOP with its many-layered abstractions, to the latest trend of ultra-high-level frameworks and the web-application movement, there is this constantly present notion that "abstractions and computing power are free, but programmer time is expensive". Although opposition to this seems to have increased in recent years, it's still a prevalent attitude and is still being taught in many schools. People are being forced to frequently upgrade their hardware (with the associated waste and manufacturing costs) just so they can run the latest versions of software - often to do the same things at the same speeds they were doing them before. It's likely not too much of a stretch to say that software on average is now a few orders of magnitude larger and slower than it should be.

This trajectory follows similarly to the early part of what happened to the car industry - fuel was initially cheap so manufacturers (and consumers) concentrated little on fuel efficiency, but starting in the 70s oil shortages made for some pretty rapid changes as people became aware that what they were doing was not sustainable. There has been much growth in interest in efficient hardware recently, which is good, but the other part of the equation, software, is also very important. Thus I think anyone who still believes in that mantra about programmer time, when working on software intended for a large number of users, is as absurd as someone in the car industry saying "engineer time is expensive, but fuel is cheap". Processors may be getting limited by the laws of physics but I don't think many programmers have reached the limits of their brainpower yet. :-)


>The exponential growth that started many decades ago has promoted a culture of extreme waste.

In fairness, I look back at what I was working on ten years ago, and I am currently writing my company's database and web front end on my own, thanks to the layers of abstraction (scripting language, web framework, improvements in database technology). This database can do more than what a team of 5 could accomplish ten years ago. You can't deny that sort of efficiency is beneficial to many (most?) organisations.


It depends. For in-house software, I can fully understand that saving a few developer hours may well be enough to justify buying beefier CPUs for the couple of users who need them.

On the other hand, if thousands of people need better CPUs to load imgur comments (e.g.) instantaneously instead of forcing the browser to a grinding halt for a task that should be essentially trivial, things may look different.


I have to assume that you ignore the absolute explosion in usefulness achieved in the last two decades by this "wasteful" software.

there are too many misconceptions and strawmen in your post to write a full rebuttal, so instead I'll just thank you for the (unintentional) laughs.


> I can't really imagine much use for an even faster CPU

Come on, do you really want to admit that you have such poor imagination? Surely there should be uses for ever more processing power. The question should rather be: how can we get this processing power economically?


I think gaming is still the big issue. I have an i7-4800MQ and frequently do large compiles. At first, my computer became almost unusable during compiles, which I attributed to the CPU being hogged by the compiler. I recently switched to doing my builds on an external hard-drive and no longer notice any performance difference when I am compiling.

I don't remember the last time I have seen my CPU running near 100% (other than by a process that shouldn't be doing anything and is probably in a while(true) type loop that would eat up any amount of speed), but I do remember plenty of times when my computer was slow, and my hdd led was solid every time.

EDIT: I suppose this does suggest a use for extra CPU: hard-drive compression


My guess is that you were swapping. In addition to slowness, this explains how switching to an external hard drive for the files you're compiling would help, since it would reduce contention on a single hard drive.

Next time it happens, check if you're out of memory and using a lot of swap space/virtual memory. If so, you just need more RAM. Also, the best upgrade for any computer these days is an SSD, an HDD being the slowest part of modern computers.
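One way to check, as a sketch (this uses the third-party psutil package, which is just one option; plain tools like free or vmstat on Linux show the same numbers):

    # Quick look at memory pressure and swap use (requires `pip install psutil`).
    import psutil

    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()
    print(f"RAM used: {mem.percent}%  swap used: {swap.percent}%")
    if swap.percent > 25:
        print("Significant swapping; more RAM (or an SSD) would likely help.")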


This would be because, given the workload, your external drive isn't quick enough to keep up with the CPU while your internal is?

You could obviously use something like nice / ionice if you don't want to kludge an external drive about, but to me it seems like an inherent problem with our systems that we can make them unusable just by doing something like a compile.


I can live happily with a Core 2 Duo. The bottlenecks are memory and the HDD (an SSD fixes 80% of that). Except for HD, games and non-mainstream tasks (compiling large software, video transcoding), that is.

People are right, it's a systemic issue now; we need to rearchitect things and avoid inefficiencies (lock-free research, persistent data structures, programming paradigms [Go, Erlang...]). I may be wrong, but unless you're using Gentoo (and even then), your software will never use the hardware to its fullest.

The main reason I want a recent processor: power consumption. Having a 5W fanless piece of silicon is the main advantage over my old one.


Ah yes, the old 640K ought to be enough for anybody.


Accelerator-based computing is a tell that this is happening already. Shrinking everything down alone isn't bringing the big speedups in performance-per-watt anymore, so what chip manufacturers do is put as many ALUs as they can on the same die size, curbing in the process a lot of the built-in management features that our software programming models have been built upon over the decades. Hardware is still getting faster at Moore's-law rates, but only given constantly adapting software, i.e. "The Free Lunch Is Over"[1].

[1] http://www.gotw.ca/publications/concurrency-ddj.htm


They say we can't get smaller than an atom, but electrons are smaller than an atom, and we don't even have to use just their charge: we can also use properties like spin and momentum to get more values out of them, i.e. spintronics. Then of course there are photons as well. The article itself mentions that we nowadays use light to etch features even smaller than the wavelength of that light. Sometimes you need a big read-write head or something, but then you can just push magnetic domains past it on a wire, etc., so that isn't necessarily the size of a unit of computation in the device if we move beyond transistors.


I agree with you there. It's not unreasonable to me that people might devise a way to push against the Planck limit in terms of the space that computation takes up.


Depends on exactly what you mean. You can keep adding cores until the cows come home. And I don't really buy the "we don't know how to use all those extra cores" argument. Multi-threaded code isn't the rocket science it's portrayed to be in the press.

One thing that may become practical is die stacking, depending on what they can do about extra heat.
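As a minimal sketch of "using all those extra cores" (stdlib Python only, and using processes rather than threads so CPU-bound work actually runs in parallel; the worker function here is just a placeholder):

    # Spread a CPU-bound function across all available cores.
    from concurrent.futures import ProcessPoolExecutor
    import os

    def busy_work(n):
        # Placeholder CPU-bound task.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            results = list(pool.map(busy_work, [2_000_000] * os.cpu_count()))
        print(len(results), "chunks done")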



Except that's not how multiple cores are normally used.


And now people are rethinking processor architecture: http://millcomputing.com/docs/


I love how the engineers in "Halt and Catch Fire" are always talking about "the laws of physics".


No. Every single time someone supposes that any piece of technology is approaching its limits, the answer is no. Technology will continue to improve and advance as we make new discoveries. The simple fact is we do not know the future, so pretending like we know what things will be like in 20 or 50 years is just pointless. What's with this obsession of taking today's technological knowledge and assuming that things won't drastically change? Nobody should ever pretend to know what the future holds.


That's a very unreasonable point of view. Sure, it's true that in the past 200 years we've had an explosion of technological progress but there's absolutely no reason to assume that it's possible to continue it for an infinite amount of time. It's rather more likely that over time we'll discover the lines that separate "hard but possible" from "theoretically impossible given the physics of this universe". The only real question is when that will happen for any given area of technology.

Asserting that technology will just continue to improve and advance forever simply makes you sound like a stock-market broker in 2007.


In terms of the limits of computer development, we have a way to go yet before we make something like a Matrioshka brain.


Things will drastically change. But a given piece of technology has its limits. Assuming it does not is like expecting, some time ago, that horses would one day run faster than sound.


"If you immediately know the candlelight is fire, then the meal was cooked long ago."



