
40 Years of Microprocessor Trend Data (2015) - sytelus
https://www.karlrupp.net/2015/06/40-years-of-microprocessor-trend-data/
======
wolfgke
Also see the updated text

42 Years of Microprocessor Trend Data

> [https://www.karlrupp.net/2018/02/42-years-of-microprocessor-...](https://www.karlrupp.net/2018/02/42-years-of-microprocessor-trend-data/)

from 2018-02.

------
francoisLabonte
As one of the authors of the original paper the data is based on, I have fond
memories as a graduate student of standardizing the performance data to
SPECint, which did not exist till the 2000s. So for the older processors that
were no longer available I looked hard for processors that had scores on both
SPECint and older benchmarks like Dhrystone and MIPS, and used those to
normalize them to SPECint performance.

The same data was used by other students of my advisor to create the CPU
database, which has some more data like cache sizes and FO4.

[http://cpudb.stanford.edu/](http://cpudb.stanford.edu/)

~~~
igravious
Excellent work!

From the latest data ([https://www.karlrupp.net/2018/02/42-years-of-microprocessor-...](https://www.karlrupp.net/2018/02/42-years-of-microprocessor-trend-data/)) you can definitely see that single-threaded
performance is still increasing, but not at the exponential clip it was. The
inflection point is around the mid-2000s. It is hard to eyeball because the
chart is logarithmic, but the rate of increase looks to be sub-exponential for
the past decade or so.

MHz and Watts have clearly levelled off.

Number of logical cores looks to be on an exponential path, so clearly we need
to be tracking multi-threaded performance gains. Unlike your work this only
needs to be backdated to the mid-2000s (multi-core ramps up when single-
threaded starts ramping down), so it should be far easier.

Pretty much all applications take advantage of parallelism now, be it via
multi-processing or multi-threading. Erlang is the only _language_ I know of
that has the multi-threaded paradigm baked into its core. Ruby has had Fibers
since 1.9, but they are very much a tacked-on idea. Perhaps what we need is to
retrofit them to C: come up with an abstract machine model and language
primitives that fit our multi-core world. We could call it C[].

~~~
rijoja
What are your thoughts on Golang? From my shallow investigations they seem to
have a fairly good model readily available, built into the language.

------
raws
I don't understand this article's (2015) graph extrapolations bending back
downwards on gains that are certainly not going away in future processor
iterations.

Clearly some curves have more or less flatlined, but aside from the power-
consumption one, which could logically go down and definitely has in terms of
the power/transistor ratio, the other ones are certainly not going down any
time soon, as the graphs suggest.

------
exodusorbitals
Fairly illustrative example of a technology S-curve. We haven't run out of
Moore's law yet, but the only available direction left for performance
improvement is parallelism.

~~~
dano
That a return to exponential performance gains will come from parallelism is
how I read your comment, and I agree. Multiple cores running at 5 GHz for a
few hundred dollars is an amazing amount of processing capacity.

~~~
wiz21c
Is it just me, or do we already use that parallelism a lot, in that the
operating system runs several single-threaded apps in parallel?

~~~
ido
ok, but how many of these do you need? would you notice a jump from a 16-core
CPU to a 128-core CPU just due to a bunch of single-threaded apps?

Anecdotally, my several-years-old iMac [0] has 4 cores/8 threads and I already
only notice the extra threads when parallel-compiling (compared to my 2C/4T
Skylake i7 laptop).

[0] [https://everymac.com/systems/apple/imac/specs/imac-core-i7-3...](https://everymac.com/systems/apple/imac/specs/imac-core-i7-3.3-21-inch-aluminum-retina-4k-late-2015-specs.html)

~~~
marcosdumay
> ok, but how many of these do you need?

For what? A web server (the most common application around) can use one for
each simultaneous connection. They don't even have to be on the same box. A
database server is limited by IO, but can use many CPU cores to speed things
up. A component of the latest "productivity" suite can probably use only one,
but will demand an entire one anyway, and your usual Win10 desktop has plenty
of stupid stuff to run on its own, while your usual Linux desktop was already
idling 90% of the time a decade ago.

It is always a matter of what you are doing. If your task is inherently
serial, do you need it done only once?

~~~
ido
For a ‘normal’ end-user desktop/laptop (not a server) I think you’d hit
diminishing returns quickly after about 8 threads. If we’re talking long-term
growth, we’d want e.g. a 128-core CPU (possibly only a few years from today)
to still be an improvement for the common use case compared to the previous
64-core generation.

Also, for the billions of computers around, the most common application is
probably a web _browser_, not a web server.

