
Are Computers Still Getting Faster? [video] - Jerry2
https://www.youtube.com/watch?v=IuLxX07isNg
======
Houshalter
Most computing power advancement is now in GPUs, which have had tremendous
increases:

    
    
        Approximate cost per GFLOPS
        
        Date          | 2013 US dollars
        1961          | $8.3 trillion
        1984          | $42,780,000
        1997          | $42,000
        2000          | $1,300
        2003          | $100
        2007          | $52
        2011          | $1.80
        June 2013     | $0.22
        November 2013 | $0.16
        December 2013 | $0.12
        January 2015  | $0.08
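
For a rough sense of the trend, here's a back-of-the-envelope sketch in Python
of the implied halving time, using just the table's endpoints:

    import math
    
    # Endpoints from the table above.
    years = 2015 - 1961          # 54 years
    ratio = 8.3e12 / 0.08        # ~1e14 overall drop in $/GFLOPS
    
    halvings = math.log2(ratio)  # the price halved ~46.5 times
    print(years / halvings)      # implied halving time: roughly 1.2 years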

~~~
pedrocr
Is that actual data or just an exponential approximation?

~~~
kabdib
The 1961 number has gotta be extrapolation, not actual data, or else there was
some serious underground-city-scale computing center buried in Antarctica . . .
Man from U.N.C.L.E., anyone? :-)

~~~
sigmar
8.3 trillion per GFlop is equivalent to $8300 per Flop

No underground city required

~~~
swimfar
Not sure why you got downvoted, but just a small nitpick. It's FLOPS for
FLoating-point Operations Per Second. Otherwise $8300 per FLoating-point
OPeration would be quite pricy for almost any time period. ;)

~~~
0_00_0
Cheaper to hire actual human computers at that point.

~~~
Houshalter
How many humans can do a floating point operation in a single second? And do
them 24/7 without error?

------
mdasen
One thing that they didn't quite touch on is the incredible increase in the
performance of web browsers.

They did talk about more and more people using a computer as a window to the
internet, but didn't talk about how much more efficient the software powering
that window had become. Apple launched Safari in 2003 and Chrome/V8 was
released in 2008. So even as we've wanted our web apps to do more, the engines
running them have gotten so much better.

When most of the software people used was written in C/C++, an increase in the
amount of work a program wanted to do had to be matched by an increase in
hardware performance. Compilers have gotten better, but nothing near the impact
that modern browser engines have had. So older computers have benefited from a
huge increase in browser performance (the browser being the thing executing
most "applications") over the past decade, in a way that we never really saw in
previous computing generations.

~~~
davorb
> One thing that they didn't quite touch on is the incredible increase in the
> performance of web browsers.

This has largely been offset by the fact that we're sending more and more data
over the wire. The average web site is now 2.1MB in size.[0]

[0] [http://money.cnn.com/2015/06/16/technology/web-slow-big/](http://money.cnn.com/2015/06/16/technology/web-slow-big/)

~~~
rahimnathwani
"The average web site is now 2.1MB in size."

s/site/page

------
jacobolus
I really wish people would use log scale for this kind of graph.

Trying to study rates of growth on a linear scale is basically impossible
during exponential-ish growth periods. All but the rightmost end of the chart
gets squished flat along the horizontal axis.

Similarly, charts of e.g. currency exchange rates, stock indices, comparative
economic growth between countries, etc. should most of the time be plotted
with a log-scaled vertical axis. Otherwise there’s no way to accurately
compare slopes in different parts of the chart, which can be deeply
misleading.

~~~
Houshalter
Log scales are extremely misleading. They make it look like computing power has
only increased a bit since the 1980s, when in fact it has increased by orders
of magnitude.

Especially for a general audience, but even for people familiar with log
scales, they don't give an intuitive feel for the actual numbers.

~~~
jacobolus
I agree with you that not enough people are familiar with log-scale charts,
but that’s a problem with experience, not with “intuition”. Log-scaled charts
give perfectly fine “intuitive feeling” for anyone who is used to reading
them. Which would include most people if log-scaled charts were more
ubiquitous.

I am lamenting that they are not, because for many, many purposes they are
easier to read and facilitate more useful intra-chart comparisons than linear-
scaled charts. In particular, they make the slope of lines meaningful in many
cases where linear-scaled charts do not. They also fit much more useful
information about relative magnitude and allow us to read a couple significant
digits of numbers across a wide range of scales. Any time there is more than
two orders of magnitude difference among the values in a chart, a linear scale
becomes nearly useless.

Reading charts at all is an acquired skill, and takes a lot more practice than
you might expect. Just like doing arithmetic with fractions, decimals, or
angles of a circle is an acquired skill, or driving a car is an acquired
skill. If you study young children or people from non-literate cultures,
you’ll see all kinds of difficulty reading linear-scaled charts.

Edit: here, I spent a couple minutes making a linear-scaled and a log-scaled
chart of the wikipedia $/gigaflops table from elsewhere in this thread, using
Matlab. See how much useful information you can get from the linear-scaled
chart:

[http://i.imgur.com/A80Tr6V.png](http://i.imgur.com/A80Tr6V.png)
[http://i.imgur.com/xG0mngX.png](http://i.imgur.com/xG0mngX.png)

(On the log-scaled chart, if we penciled in a grid, you could get at least one
significant figure out of each data point, despite the data spanning 11 orders
of magnitude. On the linear-scaled chart, you get 2 significant figures for a
single data point, but there’s no way to make any distinction at all between
the rest of the values, which span about 8 orders of magnitude.)
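
A minimal Python/matplotlib sketch of the same comparison, for anyone who wants
to reproduce the two charts (not the Matlab script used above; dates from the
table approximated as fractional years):

    import matplotlib.pyplot as plt
    
    # $/GFLOPS table from upthread.
    years = [1961, 1984, 1997, 2000, 2003, 2007, 2011,
             2013.45, 2013.87, 2013.96, 2015.0]
    cost = [8.3e12, 4.278e7, 4.2e4, 1.3e3, 1.0e2, 52, 1.80,
            0.22, 0.16, 0.12, 0.08]
    
    fig, (lin, log) = plt.subplots(1, 2, figsize=(10, 4))
    lin.plot(years, cost, marker="o")      # everything pre-2010 squashes flat
    lin.set_title("linear scale")
    log.semilogy(years, cost, marker="o")  # near-straight line over ~13 decades
    log.set_title("log scale")
    for ax in (lin, log):
        ax.set_xlabel("year")
        ax.set_ylabel("2013 US dollars per GFLOPS")
    plt.tight_layout()
    plt.show()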

~~~
Houshalter
IMO neither chart gives an intuitive feel for the data. That nice straight
line does nothing to emphasize just how rapidly the price dropped over the
last 10 years. That's why I listed a table of the raw numbers instead of a
graph. People understand the scale of numbers, and the scale isn't lost by
trying to cram it into a chart.

The video used a zooming chart, which solves both problems.

~~~
jacobolus
> _That nice straight line does nothing to emphasize just how rapidly the
> price dropped over the last 10 years_

Sure it does. The number goes from 10^12 down to 10^-1, 13 orders of
magnitude. You can easily read these off the side or count the number of grid
lines crossed. (Whoops, I wrote 11 in my comment before.) If you’re used to
reading a log-scaled chart, counting the powers of ten along the vertical axis
is perfectly “intuitive”, or at any rate just as “intuitive” as any other way
you could write these.

In distance scale terms, this is the same as going from nanometers to tens of
kilometers. Or in time scale terms from milliseconds to hundreds of years.

Numbers with such a big difference in scale are to some extent inherently
incomparable: We don’t interact with such a range of scale with most of our
naked senses, but need tools to understand and compare such divergent numbers.
(Though if you’re hunting for an analogy, comparing volume or mass might be
slightly easier, as they scale with length cubed. 13 orders of magnitude gets
you from the mass of a human egg cell up to the mass of a tank.)

------
AshleysBrain
To me the obvious answer is: you only need a certain amount of system
resources to do most things, and consumer computers reached that point around
2005. (Sort of like his theory, but the software doesn't need to use any more
resources at all anymore.)

Later improvements make it faster, easier to use, higher-resolution, better at
multitasking etc., but none of that changed the yes/no question of "Will it
work?"

~~~
yogthos
That's a rather unimaginative future you're painting. Things don't just keep
getting incrementally better, but instead we get paradigm shifts.

We're currently on the cusp of the machine learning paradigm shift. As ML
keeps getting better, it will start showing up more and more in consumer
interfaces. I'm willing to bet that within a few decades natural language
interfaces will become the norm. The desktop paradigm will simply go away at
that point.

You'll just tell the computer what you want to do instead of having to juggle
windows using the mouse and keyboard. You'll be able to say "find that YouTube
video", "play this song", or "reply to that email".

The computer will most likely turn into a personal assistant, and these kinds
of machines will require a lot more power than what we currently have in
desktops.

~~~
Joeri
Or it could be that natural language interfaces aren't going to get much
better than they are. Siri and Cortana have been around for a few years now,
but they haven't actually gotten all that much smarter. We've got the most
powerful companies in the world investing the resources of thousands of the
smartest programmers to build these agents, and the best they can do is Siri?
That doesn't bode well for the intelligent agent future.

The primary bottleneck for intelligent agents is not voice recognition, it's
semantic insight. They need to have an innate ability to learn the meaning of
new things without being programmed to do so. As far as I know, Siri, Cortana
and Google do not do this. Meaning must be programmed into them explicitly.
That doesn't scale. Yes, they can learn new things, but they learn them
slowly. You can even build advanced intelligent things like self-driving cars
(with great effort) because the required level of understanding about the
world is very constrained. However, you cannot scale it up to general purpose
assistants because the amount of code required cannot be built in any
reasonable amount of time. Until we have algorithms that create algorithms,
software rewriting itself and evolving based on higher goals, we won't see the
promised star-trek-level personal assistants come to fruition.

It could go either way. Let us hope you are right.

~~~
IanCal
> Until we have algorithms that create algorithms, software rewriting itself
> and evolving based on higher goals, we won't see the promised star-trek-
> level personal assistants come to fruition.

There's a massive jump you're making here, I feel. From where we are to
"computer, invent a new novel for me" there's a huge range of useful mid-
points.

One level is just understanding more about human-written unstructured data to
answer questions.

Another is to be better at sending commands to _other_ humans.

Third there's linking these things up. Understanding where there are gaps in
the knowledge, who might be able to fill them in and then asking them the
right questions. I know there was work done in this area as part of CoSY back
in 2004-2008
[http://www.cs.bham.ac.uk/research/projects/cosy/](http://www.cs.bham.ac.uk/research/projects/cosy/)
which was followed with
[http://www.cs.bham.ac.uk/research/projects/cogx/](http://www.cs.bham.ac.uk/research/projects/cogx/)

This then gets you to the level of essentially free personal assistants for a
wide range of general tasks. That would be valuable to a very large number of
people.

------
Svenstaro
The author left out a comparison of graphics cards. There is a recent trend in
computing (starting around 2008, when GPUs became freely programmable) of GPUs
being the real new measure of horsepower in a high-performance computer, due
to their raw speed and usefulness in scientific computing and games.

~~~
marshray
Maybe GPUs are the measure for horsepower _because_ CPUs have dropped the
torch so badly.

~~~
Svenstaro
Well, to be fair, these devices have fairly different purposes. One might argue
that CPUs are in fact sufficient for what they do in most cases, and that all
other cases should likely be ported to accelerator devices such as GPUs.

------
tibbon
For non-gaming, non-video-editing tasks, I partially see it as the core tasks
just not changing that much. An operating system isn't doing _that_ much more
than it was 5-7 years ago.

I'm using my 2011 MacBook Pro for Pro Tools, and it feels just as fast as my
2014 model does for the rather paltry tasks I throw at it (I use outboard
hardware for almost everything; mixing 16 tracks of audio was something a G3
Mac could handle easily). I don't see any reason I'll realistically need to
upgrade it anytime soon (loaded with memory, SSD, etc.).

------
pippy
The thing that stuck out to me was the Geekbench score. 2,287 vs 6,350 over
the course of a decade isn't that impressive an improvement. Given that it's
reflective of what a computer can _do_ , comparing metrics such as GHz or
gigaflops seems superfluous in the context of the average user. Compared to
the exponential improvement of the '80s and '90s, it seems like we have hit a
brick wall.

That's not to say there haven't been massive strides of improvement over the
last decade; SSDs and GPUs in particular have been spectacular. But many
consumers have decided they'd rather have a power-efficient processor than a
heavy-duty CPU.

As developers, it means being conscious of bloat and writing more efficient
software. Already we're seeing many libraries that are closer to the hardware,
or 'to the metal'.

------
bobajeff
The biggest reason old computers are still usable is that computing power
overshot the market long ago.

Most people simply don't need a lot of machine for what they do on a computer.
Whereas early on computers struggled to do things like display text and
images, the peak of user demands was reached around the point where all
computers could run Flash Player 9.

~~~
trgn
True. The biggest advance in usable performance the last few years didn't come
from faster processors but from SSDs.

~~~
BurningFrog
But that wouldn't be important if people didn't want/need faster computers.

See the "circular argument" argument above.

~~~
tamana
Moore's law is about the CPU. CPUs got so far ahead of I/O that they were
starved; I/O (SSDs) became the bottleneck.

~~~
Tiksi
Moore's law is about transistor count/size/density, which SSDs benefit from as
well.

------
jostmey
Transistor counts keep going up, although clock speeds and single-core
performance plateaued about ten years ago. Check out this graphic:

[http://www.extremetech.com/wp-content/uploads/2014/09/DennardScaling.png](http://www.extremetech.com/wp-content/uploads/2014/09/DennardScaling.png)

Since then there has been an explosion in the number of cores. It seems like
coding for distributed systems is the way to go. I suspect that increasing
transistor counts will remain beneficial to things like scientific computing
and machine learning.

~~~
mikeash
Is there one with data past 2008 for single-threaded performance? I thought
that was improving quite a bit, just with more instructions per clock rather
than higher clock rates. Core counts don't seem to be going up that fast. A
typical computer now might have four cores, which is far from an explosion.

~~~
seanp2k2
6, 8, and 10-core configurations are coming to the higher-end desktop market
soon: [http://wccftech.com/intel-broadwell-e-hedt-computex-2016/](http://wccftech.com/intel-broadwell-e-hedt-computex-2016/)

22 and 24 cores coming to Xeons in 2016:
[http://www.kitguru.net/components/cpu/anton-shilov/intel-xeon-e5-e7-broadwell-processors-with-up-to-24-cores-due-in-2016/](http://www.kitguru.net/components/cpu/anton-shilov/intel-xeon-e5-e7-broadwell-processors-with-up-to-24-cores-due-in-2016/)

~~~
mikeash
They've been available on the high end for a long time. I have a 2013 Mac Pro
with 6 cores, for example. But it seems like the more average computer has not
changed nearly as much in this area.

------
petke
Quite frankly, I'm surprised the reason behind this isn't common knowledge, at
least among us geeks.

The reason is that hardware designers have hit a wall: they can't make single-
threaded applications run any faster.

So your brand new gaming PC is no faster at running single-core applications
than your 10-year-old computer is. You can copy any single-core application
from your new PC over to your old one, and it will run just as fast.

What your new computer does have is more cores. Ideally you want linear
scaling: with 2 cores your program should run twice as fast, and with 256
cores it should run 256 times as fast. This kind of scaling is almost
impossible to achieve. Most programs don't scale beyond a certain number of
cores.

Your multithreaded program might run a bit faster if you throw an extra core
at it, but very quickly throwing more cores at it stops making a difference.
It might even get slower. You have all those extra cores idle, or worse,
busy-waiting.

Most programs don't scale well to many cores because any synchronization
between cores kills scalability. If you have ANY sequential steps in your
application, that will put a limit on scalability.

As an example, say you have 4 cores. You have a main loop on one core that
posts expensive work to the other 3 cores, and when they are done, you collect
all the results and use them somehow. This final step is sequential, and it
kills your scaling: your application won't run 4 times faster, and it probably
won't run any faster if you throw 16 cores at it. This is because that one
core is synchronizing with all the other cores, so those other cores do a lot
of waiting around, and any sequential step on that one core becomes the
bottleneck, as the other cores all end up waiting for it.
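
This limit is essentially Amdahl's law. A minimal sketch of the arithmetic,
assuming some fraction of the run is sequential:

    def amdahl_speedup(cores, sequential_fraction):
        """Best-case speedup when part of the work can't be parallelized."""
        return 1.0 / (sequential_fraction + (1.0 - sequential_fraction) / cores)
    
    # Even with only 10% sequential work, extra cores quickly stop helping:
    for n in (2, 4, 16, 256):
        print(n, round(amdahl_speedup(n, 0.10), 1))
    # 2 -> 1.8x, 4 -> 3.1x, 16 -> 6.4x, 256 -> 9.7x (never more than 10x)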

This message is already too long, so long story short: single-threaded
programs are no faster on new computers, and most multithreaded programs don't
scale well to many cores. It's a huge wasted opportunity to let cores sit idle
or underutilized. We have to fundamentally change our programming style and
tools to make use of all available cores.

More info by Herb Sutter:

[http://www.gotw.ca/publications/concurrency-ddj.htm](http://www.gotw.ca/publications/concurrency-ddj.htm)

[http://herbsutter.com/welcome-to-the-jungle/](http://herbsutter.com/welcome-to-the-jungle/)

~~~
nisa
> Single-threaded programs are no faster on new computers, and most
> multithreaded programs don't scale well to many cores.

There is also some progress -
[http://www.cpubenchmark.net/singleThread.html](http://www.cpubenchmark.net/singleThread.html)

I'm writing this on a Core2Duo E6750 that has a single-thread score of 1000 -
most new CPUs are at least twice as fast. Instructions per cycle are also
going up.

It's really a hard problem. You can't just spawn threads and put mutexes in
front of your data structures; that is terribly slow, for some tasks even
slower than a single core without locking.

We need better lockless data structures that minimize synchronisation.
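
A toy Python sketch of the general idea (Python's GIL means this won't show a
real parallel speedup; the point is the structure - synchronize once per
thread instead of once per operation):

    import threading
    
    N_THREADS, N_ITEMS = 4, 1_000_000
    
    # Naive pattern: every increment takes a shared lock, so the
    # threads spend their time contending instead of computing.
    shared = 0
    lock = threading.Lock()
    
    def worker_locked(n):
        global shared
        for _ in range(n):
            with lock:
                shared += 1
    
    # Less synchronization: each thread accumulates privately, publishes once.
    def worker_local(n, results, i):
        local = 0
        for _ in range(n):
            local += 1
        results[i] = local  # one synchronization point per thread
    
    results = [0] * N_THREADS
    threads = [threading.Thread(target=worker_local, args=(N_ITEMS, results, i))
               for i in range(N_THREADS)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(sum(results))  # 4000000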

~~~
Tepix
The E6750 is a 2.66GHz CPU that was introduced in Q3 2007. That means single-
core performance went up only about 8% per year on average.
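
(The arithmetic: doubling over the roughly nine years from 2007 to 2016 works
out to 2^(1/9) ≈ 1.08, i.e. about 8% compound growth per year.)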

Compared to the improvements we made in the decades before that, that's really
poor.

I have an i7-2600k (quad-core 3.4GHz, 95W TDP, HT) that was introduced in
2011. Five years later, it still is not significantly slower than recent Intel
CPUs, unless they can take advantage of new CPU features. It sold for around
250€ back then. Today, for 250€, you can buy an i5-6600k (quad-core 3.5GHz,
91W TDP, no HT). The only big improvement in the i5-6600k is the integrated
GPU (which I don't use).

------
onli
The video already makes it pretty clear that there is more than one effect at
work. What it doesn't mention clearly enough, or leaves out, is how much
harder the newer processors and their additional cores are to make use of.

You can see it in games. Until recently it was equally possible to play on an
overclocked Pentium G3258 as on an i5-4690, a much more powerful quad-core
processor, because most games just did not use the additional threads the i5
provided. That is changing now, to the point that even the Hyper-Threading of
an i7 is becoming very useful in games.

If that was true in games, it was probably also true for other software.
Meaning that the new processors were not that much more powerful for all the
software unable to use their multi-threading capabilities.

------
aaron695
How about this: because operating systems, browsers (and web sites), and a lot
of other things have to run on mobile devices, programmers can no longer rely
on specs increasing; they have to keep code more efficient.

------
Benjaminsen
As other have touched on, the apparent slowdown in processor speeds is due to
computers have enough CPU power for most users needs.

Today computing power in personal computers is basically a commodity, with
companies now having to focus on other aspects to be able to sell their
computers. Namely weight, battery life, network connectivity etc.

Having said that, the chips are still developing, but now the focusing is on
the new user demands rather than raw speed. E.g. we now have native decoding
of audio, video, networking, low power modes, etc to increase battery life.
(And thereby decrease weight)

While the consumer GPU's are also getting faster, it's not due to more
advanced use cases, but rather because computers now have screens with higher
screen resolutions.

I believe that processing power became a commodity around a single core
GeekBench Browser score of about 2500 (Think Late 2011 Macbook pro - note
using browser score here to be able to compare apples to apples).
Interestingly, this is now pretty much exactly where the newest iPhones are.
(Android only just now reaching the 2k's).

In essence, with the latest and greatest chips on mobile, we are reaching the
point where processing speed is now a commodity on this platform. However as
with PC's we will likely see a bit of overshooting, so expect the mobile CPU
race to continue until around a GeekBench Browser score of 3000-3200.

Hopefully this will result in companies starting to compete on battery life of
the device as was seen with laptops. E.g. the newest MacBook Pro claims 9
hours of battery, compared to the 4 hours promised for the Late 2011 Macbook
Pro.

The only obvious unknown I can think of, is if the average consumer embraces
Virtual Reality, rather than it becoming a niche such as high quality PC
gaming. Then we might once again see a new strong focus towards raw single
core computing power, both on Desktop and Mobile.

------
whistlerbrk
One thing that was not mentioned is the scheduling of processes, especially on
mobile devices. Many operating systems employ timer coalescing, which, along
with other techniques, serves to maximize power efficiency instead of CPU
utilization.
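
A toy sketch of the idea behind timer coalescing (hypothetical code, not any
real kernel's API): timers whose deadlines fall within a slack window are
deferred to share a single CPU wakeup.

    def coalesce(deadlines, slack):
        """Group sorted deadlines so timers within `slack` of the first
        deadline in a group are deferred to one shared wakeup."""
        wakeups, group_first, group_last = [], None, None
        for t in deadlines:
            if group_first is None or t - group_first > slack:
                if group_last is not None:
                    wakeups.append(group_last)
                group_first = t
            group_last = t
        if group_last is not None:
            wakeups.append(group_last)
        return wakeups
    
    deadlines = [1.00, 1.02, 1.05, 2.50, 2.52, 4.00]
    print(coalesce(deadlines, slack=0.1))  # [1.05, 2.52, 4.0]: 3 wakeups, not 6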

------
hyperpallium
In most features, my smartphone is about x1,000 more powerful than my ZX81
(from '81): RAM, "ROM", clock speed. Accounting for the Z80 having no hardware
floating point, I estimated that my smartphone's GPU has about x1,000,000 the
FLOPS.

And high-end consumer video cards are about x1,000,000,000 ... which is about,
or a little less than, what one of the Moore's law corollaries predicts for 35
years, 1981 to 2016, per $.
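
(As a sanity check against the $/GFLOPS table at the top of the thread: that
table implies FLOPS per dollar doubling roughly every 14 months, and
2^(35 years / 1.17 years) ≈ 2^30 ≈ 10^9.)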

~~~
johansch
RAM: x1,000,000 (1 GB is medium-end in phones these days, and the ZX81 had
1kB.)

~~~
hyperpallium
whoa, that was a significant error.

------
emgram769
a computer increases performance 10 fold: what takes 10 minutes now takes 1.
we notice a difference of 9 whole minutes! wow! a new computer increases
performance 10 fold: whoaaaa, that minute of waiting is now merely SECONDS! we
saved like almost a whole minute of waiting! an even newer computer increases
performance 10 fold: nice, thats like noticeabley faster: today, the newest
computer is available: meh something from like 10 years ago would run this
fine...

we are getting to a point where other aspects of performance are also getting
harder and harder to notice. things like rendering, supported display sizes
and refresh rates, the ability for machines to understand us (voice
recognition and computer vision)

------
inesf
My 2008 PC is still performing well. I think engineers should work more on the
OS, since sometimes we need to upgrade just because of the trash and useless
things the OS keeps around.

------
tim333
The guy says, roughly:

> The question I'm asking is why does a 10-year-old computer basically run
> current software OK, while that wasn't true in the past.

I've got a theory that it's a bit related to human brain I/O limits. There's
only so much text, sound and video we can take in, video being the highest
bandwidth, and computers got to a stage where they could do video OK a decade
or so ago. So increasing the output dramatically does not have a huge effect
on user experience. I mean, 4k video is nice, but it's a similar experience to
360p.

~~~
Pharaoh2
Or maybe we are approaching a wall that is harder to climb than usual.

With unlimited computing power we could render, in real time, highly
interactive, super-accurate simulations that currently take months to render.
But we can't, because we don't have the computing power.

The truth is, we are hitting multiple walls at the same time: it's harder to
shrink gates any further, it's harder to increase clock speeds any more, and
it's harder to dissipate the heat generated by the CPUs. All the increases we
are now getting are incremental, not exponential, and that is why we can run
current software on a 10-year-old CPU, as long as the software is not
optimized for multicore CPUs.

~~~
tim333
Or a bit of both. You have a point that the performance of a single core has
stopped increasing so fast.

On the other hand, if people really wanted performance they could get high-
powered desktop computers, but the trend has been more towards using phones.
About the most processor-intensive thing I do personally is edit video, and
even that works OK on my two-year-old phone.

~~~
Pharaoh2
I have no idea what type of video editing you are doing, but editing 2.7K
60fps video on my high-end, GPU-accelerated desktop is a pain in the ass.

The adoption of mobile phones as a computing platform is a very different
event: it allows the use of computing in a very versatile manner. That is why
its usage exploded.

------
shultays
Can we preserve Moore's law by duct-taping two computers together every
decade?

