
The Era of General Purpose Computers Is Ending - jonbaer
https://www.nextplatform.com/2019/02/05/the-era-of-general-purpose-computers-is-ending/
======
Animats
There are only a few things that parallelize so well that large quantities of
special purpose hardware are cost effective.

• 3D graphics - hence GPUs.

• Fluid dynamics simulations (weather, aerodynamics, nuclear, injection
molding - what supercomputers do all day.)

• Crypto key testers - from the WWII Bombe to Bitcoin miners

• Machine learning inner loops

That list grows very slowly. Everything on that list was on it by 1970, if you
include the original hardware perceptron.

~~~
p1esk
The list might grow slowly, but the last item on it - ML - grows like crazy
right now. It's not unreasonable to expect that in 20 years the vast majority
of all computation (from tiniest IoT devices to largest supercomputers) will
be running ML models (from simplest classifiers to whole brain simulations).

~~~
scottlocklin
Can you name an existing, large business which depends on ML for its
existence?

~~~
currymj
Fair Isaac Corporation

~~~
scottlocklin
This is the only acceptable answer here, and of course gradient boosted
decision trees don't require special hardware, contra the original article on
general purpose computing.
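
A minimal sketch of the point, assuming scikit-learn and a synthetic dataset
(parameters are illustrative, not anything FICO actually runs): gradient
boosted trees train happily on ordinary CPU cores.

    # Hypothetical example: gradient boosted trees on a plain CPU,
    # no GPU/TPU/accelerator anywhere in the loop.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=10_000, n_features=20,
                               random_state=0)
    model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
    model.fit(X, y)
    print(model.score(X, y))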

~~~
currymj
Incidentally, AMD actually advertised their latest architecture as using
"neural networks" for branch prediction. (Though IIRC it was actually just a
linear model, aka a neural net with one layer, i.e. a perceptron.)

So if that technology were to catch on, a pedant could argue that most
computing workloads really are machine learning.
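
For the curious, a toy sketch of such a predictor (a hashed perceptron over
global branch history; the table size, history length, and threshold here are
made up, not AMD's actual design):

    HISTORY_LEN = 16       # bits of global branch history
    NUM_PERCEPTRONS = 256  # hashed table of weight vectors
    THRESHOLD = 32         # keep training while |output| is below this

    weights = [[0] * (HISTORY_LEN + 1) for _ in range(NUM_PERCEPTRONS)]
    history = [1] * HISTORY_LEN  # +1 = taken, -1 = not taken

    def predict(pc):
        w = weights[pc % NUM_PERCEPTRONS]
        # Bias weight plus dot product of weights with history bits.
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
        return y, y >= 0  # predict taken when output is non-negative

    def train(pc, y, taken):
        w = weights[pc % NUM_PERCEPTRONS]
        t = 1 if taken else -1
        # Update only on a misprediction or a low-confidence output.
        if (y >= 0) != taken or abs(y) <= THRESHOLD:
            w[0] += t
            for i, hi in enumerate(history):
                w[i + 1] += t * hi
        history.pop(0)
        history.append(t)

    # Usage: y, guess = predict(pc); later, train(pc, y, actually_taken).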

------
newnewpdro
Isn't this more appropriately described as "The Era of General Purpose
Microprocessors Is Ending"?

A general purpose computer is the entire machine, reprogrammable to perform a
variety of tasks, hence general-purpose. While I do think it's potentially
coming to an end as well, I think it's doing so for entirely different
reasons.

The general purpose computer has become a somewhat niche device in that the
public is increasingly interested in consumer-oriented appliances which just
happen to contain microprocessors, like phones and tablets. They're often
locked down and only capable of running a blessed subset of applications
available from select suppliers through a walled garden.

That's threatening the demise of the general-purpose computer as we know it.
I'm genuinely concerned that we may one day find ourselves limited to very
expensive niche machines, produced in low volumes, with general-purpose
capabilities targeting STEM-oriented uses. I hope I'm wrong here, but given
that we're already seeing young people not even learning how to type because
they've never used a keyboard, it doesn't seem impossible.

The linked article is talking about processors, not computers.

~~~
tenebrisalietum
The general-purpose computer was something most of the public never wanted. It
just happened that for a while, starting in the mid-90s, you needed a computer
to use the Internet, "AOL", or email in any way.

> I'm genuinely concerned that we may find ourselves one day limited to very
> expensive niche machines produced in low volumes having general-purpose
> capabilities targeting STEM-oriented uses.

General purpose computers belonged to tech nerds in the '70s, and to them plus
professionals/creatives through most of the '80s, and it sounds like they're
going to go back to those groups. Honestly, I can see benefits to this: it was
nice to have dirt-cheap hardware for a while, but maybe things will get back
to being more modular and expandable again.

Regarding general purpose _processing_, I think RISC-V is going to save us
here and keep a general purpose microprocessor around as long as anyone wants.

------
dahart
While the article makes many valid points, it puts GPUs in the specialized
processor category to seal its argument. That's technically true, but the
trend in GPUs is and has always been toward more general computing, and most
computers have GPUs in them. I expect to see commercially viable CPUs with
GPU-class wide SIMD units (Intel tried with Larrabee/Xeon Phi and bailed;
maybe they'll try again...) as well as GPUs with virtual & shared memory any
day now.

~~~
mattnewport
GPUs already have virtual addressing and the ability to share memory with the
CPU and other GPUs in at least some circumstances. What they don't have is
automatic page faulting to persistent storage, or fully shared memory with the
CPU by default, but that is for performance reasons. For most applications of
GPUs, performance is too important to want either behavior by default.

All GPUs are basically full of giant SIMD units and the programming models
increasingly expose this. They just don't have a common standard ISA.

~~~
dahart
Totally agreed: automatic page faulting to disk isn't something you want in a
GPU application right this second, but with Moore's law dying, I just expect
buses to catch up.

And you're also right that the programming models do expose the SIMD nature,
but at the same time they are becoming easier, more flexible, and more general
purpose. I'm thinking of CUDA and NVIDIA's more recent independent thread
scheduling, which tolerates higher levels of divergence. I'm less familiar
with AMD hardware, but I assume the trends are the same.

Really, I'm looking at the trend more than any specifics or constraints of
today. GPUs used to be only for rendering triangles, and only a small number
of gamers and an even smaller number of graphics researchers cared. Today, the
applications are way more mainstream, everyone has GPUs, and interest in AI
and crypto and general high performance computing is rivaling that of graphics
& games.

~~~
mattnewport
The thing is GPUs achieve their performance by embracing certain realities
that aren't likely to change any time soon. The classic model of memory
reflected in conventional CPU architectures tries to keep up the pretense that
all memory accesses are equally fast through lots of levels of caches. GPUs
have a model that recognizes memory locality as fundamental and treats general
memory access more like IO - something asynchronous that you want to issue a
request for and then find other work to do while you wait for a response.
Fine-grained thread context switching with massive parallelism mostly hides
that
from the programmer and saves them having to write an await on every memory
fetch.
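
A loose CPU-side analogy of that model, sketched with asyncio (names and
latencies invented; a real GPU's warp scheduler does this switching in
hardware on every stalled access):

    import asyncio, random

    async def fake_memory_fetch(addr):
        # Stand-in for an asynchronous DRAM access.
        await asyncio.sleep(random.uniform(0.001, 0.01))
        return addr * 2

    async def gpu_style_thread(tid):
        value = await fake_memory_fetch(tid)  # stall; others keep running
        return value + 1

    async def main():
        # Massive parallelism: thousands of cheap threads hide each stall.
        results = await asyncio.gather(
            *(gpu_style_thread(t) for t in range(4096)))
        print(sum(results))

    asyncio.run(main())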

CPUs also go to heroic lengths to try and make shared memory transparent with
complex cache synchronization logic. GPUs allow you to do that to an extent
with atomics but have a programming model that discourages it. That's a model
that better aligns with the realities of hardware.

Moore's Law was always about the number of transistors, not clock speed or
single-thread performance, and GPUs have kept it fairly alive in that regard.
Even CPUs continue to increase transistor counts quite effectively; they've
just stalled out on clock speed increases and on ways to use those transistors
to increase single-thread performance, and a lot of software still can't scale
well by just adding cores.

~~~
dahart
You're right, and I'd agree with all of that.

Maybe one way of expressing my point of view that involves fewer potentially
wrong predictions of the future is: I feel like the SIMD programming model is
now becoming accepted as general purpose computing, rather than GPUs being
particularly specialized. The GPU model is different from the CPU model, but
not that specialized anymore. I may be drawing a subjective line. I do expect
GPU programming to continue getting easier, both hardware- and software-wise,
but I also feel that, for exactly the reasons you mention, more and more
people are aware of and accepting of GPU limitations in search of that
performance.

------
jostmey
Perhaps the era of general purpose computing will come to an end. But what I
see is a shift away from single CPUs supporting thousands of complex
instructions toward GPUs with simpler instruction sets capable of running
calculations in parallel. It's more of a shift from serial to parallel
computing than a shift from general purpose to special purpose computing.
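
A small sketch of that shift in mental model, assuming NumPy: the same
reduction written element by element and as one bulk data-parallel operation
that maps naturally onto wide SIMD (or GPU) hardware.

    import numpy as np

    x = np.random.rand(1_000_000).astype(np.float32)

    # Serial mental model: one element at a time on one core (slow in
    # pure Python, but it illustrates the style).
    total_serial = 0.0
    for v in x:
        total_serial += v

    # Parallel mental model: one bulk operation the library can run on
    # vectorized hardware.
    total_parallel = x.sum()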

~~~
mattnewport
GPUs typically have instruction sets of similar complexity to many CPUs, plus
additional specialized instructions related to their SIMD model, plus a bunch
of specialized hardware for particular functionality. I don't really think
it's accurate to describe GPUs as having simpler instruction sets than CPUs.

------
tyingq
Another interesting trend is dwindling desktop and laptop sales:
[https://www.statista.com/statistics/263393/global-pc-shipmen...](https://www.statista.com/statistics/263393/global-pc-shipments-since-1st-quarter-2009-by-vendor/)

Plus, most non-tech people don't need anything high end, so the percentage of
Chromebooks, Celerons, etc., goes up.

Then, on the server side, a lot of the chip sales are going directly to the
FAANG group rather than to someone like Dell or HP.

Those two things take a lot of wind out of the sails of better, generally
available, general purpose devices for regular people and companies. A
shrinking market doesn't usually improve quality.

~~~
redisman
FAANG (at least Amazon so far) is also starting to roll out ARM-based servers,
which is interesting.

~~~
scarface74
Microsoft is also moving to ARM servers.

[https://www.theverge.com/2017/3/9/14867310/arm-servers-micro...](https://www.theverge.com/2017/3/9/14867310/arm-servers-microsoft-intel-compute-conference)

And of course Apple ships 10x more ARM chips than x86 chips.

------
swagasaurus-rex
Very relevant blogpost:

[https://herbsutter.com/welcome-to-the-jungle/](https://herbsutter.com/welcome-to-the-jungle/)

------
jmount
Eventually the specialized processing units get re-classified as general
processing units, covered in "On the Design of Display Processors":
[http://cva.stanford.edu/classes/cs99s/papers/myer-sutherland...](http://cva.stanford.edu/classes/cs99s/papers/myer-sutherland-design-of-display-processors.pdf)

------
zdw
"Special Purpose" can mean so many things that it really depends on the
purpose to tell if they're going to be replaced.

For example, traditional RAID controllers were replaced with software-based
solutions once there was surplus compute in the multicore era. If your
workload can be viewed as "offload the CPU", it's only a matter of time before
general purpose CPU cores are plentiful enough that the need to offload goes
away.
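
A toy illustration of why that particular offload went away (block and stripe
sizes made up): RAID-5-style parity is just XOR, which a modern general
purpose core handles easily in software.

    import os

    def xor_parity(blocks):
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                parity[i] ^= b
        return bytes(parity)

    data_blocks = [os.urandom(4096) for _ in range(4)]  # one 4-disk stripe
    parity = xor_parity(data_blocks)

    # Rebuild a "failed" disk from the survivors plus parity.
    recovered = xor_parity(data_blocks[1:] + [parity])
    assert recovered == data_blocks[0]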

Pure compute (be it on traditional CPUs or the vector variants that GPUs/TPUs
offer) and latency-sensitive tasks (some networking, other FPGA- or
ASIC-accelerated tasks, etc.) are the only areas where non-general-purpose
hardware can maintain a long-term foothold.

~~~
votepaunchy
> there's only a matter of time before general purpose CPU cores are more
> plentiful and the need to offload goes away

The end of Moore’s Law means no additional transistors and therefore no
additional cores without simplifying or otherwise reducing the architecture.

------
peapicker
The article says developing the TPU was “very expensive for Google” at tens of
millions of dollars. That’s between one one-hundredth and one tenth of one
percent of Google’s 2018 revenue. Not expensive in my book at that scale.

------
deevolution
It's ending because we're approaching maximum transistor density. The market
demands increasingly faster computers, and if we're reaching the limit of how
many transistors we can cram into a single CPU, ASICs seem like a logical
evolutionary step.

------
ianai
“That’s mainly because the cost of developing and manufacturing a custom chip
is between $30 and $80 million.”

I’ve heard figures an order of magnitude smaller for ARM. If so, the processor
market needs to move beyond the Intel/x86 corner of the market before
generalizations about CPU/GPU economics can be made.

One source, not fully vetted: [https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...](https://www.anandtech.com/show/7112/the-arm-diaries-part-1-how-arms-business-model-works/2)

------
patrickg_zill
The thing is, before Windows this was already the case.

The sound card had its own MIDI and sound-effects chips, for example. Now
that's reduced to one chip, if that, on an AC97-capable motherboard.

Modems had their own discrete processor to handle the communication over the
phone line. Now that's again reduced to a WinModem chip and/or a NIC or WiFi
chip.

------
amelius
General purpose computing is ending for another reason: Apple controls its
entire supply chain and dictates what its computers can be used for. If market
developments continue along this line and competitors follow suit, then buying
a PC for your research will soon cost you a lot more.

------
npx
Based solely on the title, I assumed this article was going to be about Jeff
Bezos. We're entering a brave new world where all compute is rented from Bezos
and can only be used for the furtherance of his agenda. The recent tabloid
scandal kinda speaks to the underlying problem. When given documentary
evidence of a tryst between Bezos and a married woman, these people did the
right thing and tried to blackmail him. Bezos somehow managed to turn this
into a story about his endless accomplishments and his courage in the face of
adversity! Bezos isn't even competing against other companies anymore because
that would be too easy. Bezos is actually competing against the rest of
humanity now. We're entering the era of Bezos Purpose computing.

