The Era of General Purpose Computers Is Ending (nextplatform.com)

There are only a few things that parallelize so well that large quantities of special purpose hardware are cost effective.

• 3D graphics - hence GPUs.

• Fluid dynamics simulations (weather, aerodynamics, nuclear, injection molding - what supercomputers do all day.)

• Crypto key testers - from the WWII Bombe to Bitcoin miners

• Machine learning inner loops

That list grows very slowly. Everything on that list was on it by 1970, if you include the original hardware perceptron.


Parallelization is not the only reason to pursue specialized hardware [0]. The real benefit of specialized hardware is that it can do one thing very well. Most of what a CPU spends its energy on is deciding what computation to do. If you are designing a circuit that only ever does one type of computation, you can save vast amounts of energy.

[0] It's not even a particularly good reason. The only reason we don't have massively parallel general purpose CPUs is how specific the problems that can benefit from them are. Even then, modern GPUs are pretty close to being general purpose parallel processors.


One other item for that list is packet offloading for networking cards. That is, taking the work of checksum calculation, and even the wrapping/unwrapping of data (converting between streams and packets), and pushing that into the NIC's hardware.
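
For a sense of the per-packet work being offloaded, here's a minimal Python sketch of an RFC 1071 style one's-complement checksum (the kind of sum used in TCP/IP headers); the function name and test payload are mine, and real NICs do this in dedicated hardware:

    def internet_checksum(data: bytes) -> int:
        """One's-complement sum over 16-bit words, RFC 1071 style."""
        if len(data) % 2:                              # pad odd-length payloads with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]      # next 16-bit big-endian word
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return ~total & 0xFFFF

    print(hex(internet_checksum(b"an example payload")))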

I was also thinking about including the work that hard-drive controllers do (like checksumming, and handling 512/4K logical/physical sectors), but the difference there is that for NIC offload the kernel already has that functionality, whereas for hard drives the kernel never does the work of the hard-drive controller.


The list might grow slowly, but the last item on it - ML - is growing like crazy right now. It's not unreasonable to expect that in 20 years the vast majority of all computation (from the tiniest IoT devices to the largest supercomputers) will be running ML models (from simple classifiers to whole-brain simulations).

It's pretty unreasonable to expect that if you're aware of what an AI winter is, and of the fact that we're probably on the cusp of another one: https://en.wikipedia.org/wiki/AI_winter

Once everyone realizes every practical use of their AI technology is more than adequately met by conventional code, and that there's no grand breakthrough into general artificial intelligence coming anytime soon, the hype cycle will end.


> Once everyone realizes every practical use of their AI technology is more than adequately met by conventional code

I'm all for skepticism of the current hyperbole, but there's no need to be hyperbolic in the other direction.

There are some applications in which deep learning really does work better than alternatives. The 2018 Gordon Bell Prize, after all, went to a team that did deep learning on Summit for climate analysis.

There is a nontrivial list of applications for which you would have a hard time convincing experts that they would be better off with conventional code.


I think we're set for an AI fall, not winter. There truly are some tasks that current ML implementations handle quite well and that conventional code cannot adequately handle. Speech recognition, machine translation, and image processing are the main examples.

Can you name an existing, large business which depends on ML for its existence?

It's like asking "name a large business which depends on the internet for its existence" back in 1995.

A better question to ask back then would have been "which large business will depend on the internet in 2015".

Note that the internet in 1995 was pretty bad (in terms of applications and from a technical point of view), and the hype led to the dot-com crash 5 years later. Yet it's hard to overestimate the importance of the internet in today's world.


Fair Isaac Corporation

This is the only acceptable answer here, and of course gradient boosted decision trees don't require special hardware, contra the original article on general purpose computing.

Incidentally, AMD actually advertised their latest architecture as using "neural networks" for branch prediction. (Though IIRC it was actually just a linear model, aka a neural net with one layer.)

So if that technology were to catch on, a pedant could argue that most computing workloads really are machine learning.
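
For the curious, a perceptron branch predictor of the kind AMD described really is just a dot product over recent branch history plus a simple training rule. A toy Python sketch - a single perceptron with made-up sizes and threshold; real hardware keeps a table of perceptrons indexed by branch address:

    class PerceptronPredictor:
        """Toy perceptron branch predictor: one weight per bit of global history."""

        def __init__(self, history_len=16, threshold=30):
            self.weights = [0] * (history_len + 1)   # index 0 is the bias weight
            self.history = [1] * history_len         # +1 = taken, -1 = not taken
            self.threshold = threshold               # training threshold (made up here)

        def predict(self):
            y = self.weights[0] + sum(w * h for w, h in zip(self.weights[1:], self.history))
            return y, y >= 0                         # predict taken if the sum is non-negative

        def update(self, taken):
            y, predicted_taken = self.predict()
            outcome = 1 if taken else -1
            # train only on mispredictions or low-confidence predictions
            if predicted_taken != taken or abs(y) <= self.threshold:
                self.weights[0] += outcome
                for i, h in enumerate(self.history):
                    self.weights[i + 1] += outcome * h
            self.history = [outcome] + self.history[:-1]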


Not for its existence, but the USPS would have a much harder time routing mail without ML-based recognition of handwritten addresses.

Google.

All of Google's market advantage happened before their ML kick, and none of their continued success can be ascribed to it.

I don't really want to get into an argument over definitions, but IMO pagerank is pretty obviously machine learning.

That was my intention when writing the reply, yeah.

You can even argue the opposite: since they went all-in on ML, the product quality has at best stagnated.

Their product isn’t search, it’s targeted ads. Have the ads gotten worse?

They have not become better (in my experience). The ads are still mostly irrelevant for me. (With no anti-tracking attempted by me.)

It's worth noting that all four of your examples routinely run on GPUs.

3D graphics? Check (freebie).

Fluid dynamics? Check - supercomputers increasingly get most of their compute from GPUs.

Cryptography? Check - this is the only one that really got specialized hardware.

Machine learning? Check.

So "large quantities of special purpose hardware" wasn't even used for these. Just large quantities of general purpose parallel processors, known for historical reasons as "graphics processing units."


Isn't this more appropriately described as "The Era of General Purpose Microprocessors Is Ending"?

A general purpose computer is the entire machine, reprogrammable to perform a variety of tasks, hence general-purpose. While I do think it's potentially coming to an end as well, I think it's doing so for entirely different reasons.

The general purpose computer has become a somewhat niche device in that the public is increasingly interested in consumer-oriented appliances which just happen to contain microprocessors, like phones and tablets. They're often locked down and only capable of running a blessed subset of applications available from select suppliers through a walled garden.

That's threatening the demise of the general-purpose computer as we know it. I'm genuinely concerned that we may find ourselves one day limited to very expensive niche machines produced in low volumes having general-purpose capabilities targeting STEM-oriented uses. I hope I'm wrong here, but given that we're already seeing evidence of young people not even learning how to type because they've never used a keyboard, it doesn't seem impossible.

The linked article is talking about processors, not computers.


The general-purpose computer was something most of the public never wanted. It just happened that, for a while starting in the mid-90s, you needed a computer to use the Internet, "AOL", or email in any way.

> I'm genuinely concerned that we may find ourselves one day limited to very expensive niche machines produced in low volumes having general-purpose capabilities targeting STEM-oriented uses.

General purpose computers belonged to tech nerds in the 70's, and to them plus professionals/creatives through most of the 80's, and it sounds like they're going to go back to them. Honestly, I can see benefits to this: it was nice to have dirt-cheap hardware for a while, but maybe things will get back to being more modular and expandable again.

Regarding general purpose processing, I think RISC-V is going to save us here and keep a general purpose microprocessor around as long as anyone wants.


Yes, I agree. I found the title misleading; it led me to think about this https://boingboing.net/2011/12/27/the-coming-war-on-general-... as well.

While it makes many valid points, the article puts GPUs in the specialized processor category to seal its argument. That is technically true; however, the trend in GPUs is and has always been toward more general computing, and most computers have GPUs in them. I expect to see commercially viable CPUs with SIMD units (Intel tried and bailed, maybe they’ll try again...) as well as GPUs with virtual & shared memory any day now.

GPUs already have virtual addressing and the ability to share memory with the CPU and other GPUs in at least some circumstances. What they don't have is automatic page faulting to persistent storage or fully shared memory with the CPU by default, but that is for performance reasons. For most applications of GPUs, performance is too important to want either behavior by default.

All GPUs are basically full of giant SIMD units and the programming models increasingly expose this. They just don't have a common standard ISA.


Totally agreed; automatic page faulting to disk isn't something you want in a GPU application right this second, but with Moore's Law dying, I just expect buses to catch up.

And you're also right that the programming models do expose the SIMD nature, but at the same time they are becoming easier, more flexible, and more general purpose. At least, I'm thinking of CUDA and NVIDIA's more recent independent thread scheduling, which tolerates higher levels of divergence. I'm less familiar with AMD hardware, but I assume the trends are the same.

Really, I'm looking at the trend more than any specifics or constraints of today. GPUs used to be only for rendering triangles, and only a small number of gamers and an even smaller number of graphics researchers cared. Today, the applications are way more mainstream, everyone has GPUs, and interest in AI and crypto and general high performance computing is rivaling that of graphics & games.


The thing is, GPUs achieve their performance by embracing certain realities that aren't likely to change any time soon. The classic model of memory reflected in conventional CPU architectures tries to keep up the pretense that all memory accesses are equally fast through lots of levels of caches. GPUs have a model that recognizes memory locality as fundamental and treats general memory access more like IO - something asynchronous that you want to issue a request for and then find other work to do while you wait for a response. Fine-grained thread context switching with massive parallelism mostly hides that from the programmer and saves them from having to write an await on every memory fetch.
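
A toy model of that latency hiding - not any real GPU's scheduler, just a Python illustration with made-up cycle counts of why utilization climbs as you add warps:

    import collections

    def alu_utilization(num_warps, mem_latency=400, loads_per_warp=4):
        """Each 'warp' alternates one cycle of compute with a slow memory request;
        the scheduler always runs whichever warp is ready instead of stalling,
        so enough warps keep the ALUs busy despite slow individual loads."""
        ready = collections.deque(range(num_warps))
        waiting = {}                                   # warp id -> cycle its load returns
        remaining = {w: loads_per_warp for w in range(num_warps)}
        busy = cycle = 0
        while remaining:
            for w, done_at in list(waiting.items()):   # wake warps whose loads came back
                if done_at <= cycle:
                    del waiting[w]
                    ready.append(w)
            if ready:
                w = ready.popleft()
                busy += 1                              # one cycle of useful compute
                remaining[w] -= 1
                if remaining[w] == 0:
                    del remaining[w]                   # this warp is finished
                else:
                    waiting[w] = cycle + mem_latency   # issue the next load, don't stall
            cycle += 1
        return busy / cycle

    for n in (1, 8, 64, 512):
        print(f"{n:4d} warps -> {alu_utilization(n):.1%} ALU utilization")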

CPUs also go to heroic lengths to try and make shared memory transparent with complex cache synchronization logic. GPUs allow you to do that to an extent with atomics but have a programming model that discourages it. That's a model that better aligns with the realities of hardware.

Moore's Law was always about the number of transistors, not clock speed or single-thread performance. GPUs have kept it fairly alive in that regard. Even CPUs continue to increase transistor counts quite effectively; they've just stalled out on clock speed increases and on ways to use those transistors to increase single-thread performance, and a lot of software still can't scale well by just adding cores.


You're right, and I'd agree with all of that.

Maybe one way of expressing my point of view that involves fewer potentially wrong predictions of the future is: I feel like the SIMD programming model is now becoming accepted as general purpose computing, rather than GPUs being particularly specialized. The GPU model is different from the CPU model, but not that specialized anymore. I may be drawing a subjective line. I do expect GPU programming to continue getting easier, both hardware- and software-wise, but I also feel like, for exactly the reasons you mention, more and more people are aware of and accepting the GPU limitations in search of that performance.


> I expect to see commercially viable CPUs with SIMD units (Intel tried and bailed, maybe they’ll try again...)

I don't think you meant what you wrote here. ARM(/AArch64), PowerPC, MIPS, RISC-V, SPARC, and x86 all have SIMD units and ISA extensions for them. In fact, on x86-64, I'm annoyed because scalar floating point uses the vector registers anyway, so just glancing for use of %xmm is insufficient to tell you whether your code got vectorized.

What Intel tried and axed was the many-dumb-x86-cores model of Larrabee and the Xeon Phi stuff. It's arguable that a lot of that failure was due to Intel stupidly asserting that you wouldn't need to rewrite your code to make it run on that sort of architecture.


You’re absolutely right, totally fair point. I was thinking about SIMD-only GPUs with many thousands of parallel threads, but yes of course, small scale SIMD is ubiquitous. Cray did a SIMD CPU before, so it has been commercially viable. I don’t know who else has done it. But don’t you think we’ll start to see mixed mode CPUs that have a lot more GPU style SIMD in them? Like, not just some vector instructions that you sprinkle into CPU code, but maybe the ability to launch a big batch of fully SIMD threads.

What I had in my head in my comment was not to suggest nobody has done it; I'm just thinking that in the future we will probably see the CPU and GPU merge together. It seems like a strange byproduct of corporate chip-maker history that we currently have CPUs and GPUs as separate things. Considering the widespread adoption of GPUs for high performance and general purpose computing, I guess I just expect to see processors become dual-mode in the future, rather than have two different single-mode chips.


Perhaps the era of general purpose computing will come to an end. But what I see is a shift away from single CPUs supporting thousands of complex instructions to GPUs with simpler instruction sets capable of running calculations in parallel. It's more of a shift from serial to parallel computing than a shift from general purpose to special purpose computing.

GPUs typically have instruction sets of similar complexity to many CPUs, plus additional specialized instructions related to their SIMD model, plus a bunch of specialized hardware for particular functionality. I don't really think it's accurate to describe GPUs as having simpler instruction sets than CPUs.

Another interesting trend is dwindling desktop and laptop sales: https://www.statista.com/statistics/263393/global-pc-shipmen...

Plus, most non-tech people don't need anything high-end. So the percentage of Chromebooks, Celerons, etc., goes up.

Then, on the server side, a lot of the chip sales are going direct to the FAANG group, rather than to someone like Dell or HP.

Those two things take a lot of wind out of the market for better, generally available, general-purpose devices for regular people and companies. A shrinking market doesn't usually improve quality.


FAANG (at least Amazon so far) is also starting to roll out ARM-based servers, which is interesting.

Microsoft is also moving to ARM servers.

https://www.theverge.com/2017/3/9/14867310/arm-servers-micro...

And of course Apple ships 10x more ARM chips than x86 chips.



Eventually the specialized processing units get re-classified as general processing units, as covered in "On the Design of Display Processors" http://cva.stanford.edu/classes/cs99s/papers/myer-sutherland...

"Special Purpose" can mean so many things that it really depends on the purpose to tell if they're going to be replaced.

For example, traditional RAID controllers were replaced with software-based solutions once there was surplus compute in the multicore era. If your workload can be viewed as "offload the CPU", it's only a matter of time before general purpose CPU cores are more plentiful and the need to offload goes away.
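
For instance, the parity math a RAID-5 controller used to do in dedicated silicon is just byte-wise XOR, which spare general-purpose cores now handle easily. A rough Python sketch of the idea (function names and data are mine, and this ignores striping layout entirely):

    from functools import reduce

    def parity_block(blocks):
        """RAID-5 style parity: byte-wise XOR across equal-sized blocks."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    def rebuild_lost_block(surviving_blocks, parity):
        """XOR the parity with the survivors to recover the missing block."""
        return parity_block(surviving_blocks + [parity])

    stripe = [b"block aa", b"block bb", b"block cc"]   # equal-sized data blocks
    parity = parity_block(stripe)
    assert rebuild_lost_block([stripe[0], stripe[2]], parity) == stripe[1]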

Pure compute (be it on traditional CPUs or the vector variants that GPUs/TPUs offer) and latency-sensitive tasks (some networking, plus FPGA- and ASIC-accelerated tasks, etc.) are the only areas where non-GP hardware can maintain a long-term foothold.


> it's only a matter of time before general purpose CPU cores are more plentiful and the need to offload goes away

The end of Moore’s Law means no additional transistors and therefore no additional cores without simplifying or otherwise reducing the architecture.


The article says developing the TPU was “very expensive for Google” at tens of millions of dollars. That’s between one one-hundredth and one tenth of one percent of Google’s 2018 revenue. Not expensive in my book at that scale.
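
Rough arithmetic behind that, assuming Alphabet's 2018 revenue was on the order of $137B (my approximation) and taking the article's $30-80M development cost range:

    revenue_2018 = 137e9              # Alphabet's 2018 revenue, roughly (my assumption)
    for cost in (30e6, 80e6):         # the article's "$30 to $80 million" range
        print(f"${cost / 1e6:.0f}M is {cost / revenue_2018:.3%} of revenue")
    # -> about 0.022% and 0.058%, i.e. between a hundredth and a tenth of a percent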

Thing is, before Windows this was already the case.

The sound card had its own MIDI and sound-effects chips, for example. Now that's reduced to one chip, if that, on an AC97-capable motherboard.

Modems had their own discrete processor to handle the communication over the phone line. Now that's again reduced to a WinModem chip and/or a NIC or WiFi chip.


It's ending because we're approaching maximum transistor density. The market demands increasingly faster computers, and if we're reaching the limits of how many transistors we can cram into a single CPU, ASICs seem like a logical evolutionary step.

“That’s mainly because the cost of developing and manufacturing a custom chip is between $30 and $80 million.”

I’ve heard figures an order of magnitude smaller for ARM. If so, the processor market needs to move beyond the Intel/x86 corner of the market before generalizations about CPUs/GPUs can be made.

One source, not fully vetted: https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...


General Purpose Computing is ending for another reason: Apple controlling its entire supply chain, and dictating what computing can be used for. If market developments continue along this line and competitors follow suit, then soon buying a PC for your research will cost you a lot more.

Based solely on the title, I assumed this article was going to be about Jeff Bezos. We're entering a brave new world where all compute is rented from Bezos and can only be used for the furtherance of his agenda. The recent tabloid scandal kinda speaks to the underlying problem. When given documentary evidence of a tryst between Bezos and a married woman, these people did the right thing and tried to blackmail him. Bezos somehow managed to turn this into a story about his endless accomplishments and his courage in the face of adversity! Bezos isn't even competing against other companies anymore because that would be too easy. Bezos is actually competing against the rest of humanity now. We're entering the era of Bezos Purpose computing.


