• 3D graphics - hence GPUs.
• Fluid dynamics simulations (weather, aerodynamics, nuclear, injection molding - what supercomputers do all day).
• Crypto key testers - from the WWII Bombe to Bitcoin miners
• Machine learning inner loops
That list grows very slowly. Everything on that list was on it by 1970, if you include the original hardware perceptron.
It's not even a particularly good reason. The only reason we don't have massively parallel general-purpose CPUs is how specific the problems that benefit from them are. Even then, modern GPUs are pretty close to being general-purpose parallel processors.
I was also thinking about including the work that hard-drive controllers do (like checksumming, and handling 512/4K logical/physical sectors), but the difference there is that, for NIC offload, the kernel already has that functionality, whereas for hard drives the kernel does not do the work of the hard-drive controller.
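To make the NIC case concrete, here is a minimal sketch (the function name is mine) of the ones'-complement Internet checksum from RFC 1071 - the kind of per-packet work the kernel computes in software on a general-purpose core whenever the NIC doesn't offload it:

```c
#include <stddef.h>
#include <stdint.h>

/* Ones'-complement Internet checksum (RFC 1071) over a byte buffer.
 * A NIC with checksum offload does this in hardware; otherwise the
 * kernel does it on a general-purpose core. */
static uint16_t internet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    /* Sum 16-bit words. */
    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    /* A trailing odd byte is padded with zero. */
    if (len == 1)
        sum += (uint32_t)data[0] << 8;

    /* Fold carries back into the low 16 bits. */
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;
}
```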
Once everyone realizes every practical use of their AI technology is more than adequately met by conventional code, and that there's no grand breakthrough into general artificial intelligence coming anytime soon, the hype cycle will end.
I'm all for skepticism of the current hyperbole, but there's no need to be hyperbolic in the other direction.
There are some applications in which deep learning really does work better than the alternatives. The 2018 Gordon Bell Prize, after all, went to a team that did deep learning on Summit for climate analysis.
There is a nontrivial list of applications for which you would have a hard time convincing experts that they would be better off with conventional code.
A better question to ask back then would have been "which large business will depend on the internet in 2015".
Note that the internet in 1995 was pretty bad (both in terms of applications and from a technical point of view), and the hype led to the dot-com crash 5 years later. Yet it's hard to overstate the importance of the internet in today's world.
So if that technology were to catch on, a pedant could argue that most computing workloads really are machine learning.
3D graphics? Check (freebie).
Fluid dynamics? Check - supercomputers increasingly get most of their compute from GPUs.
Cryptography? Check - this is the only one that really got specialized hardware.
Machine learning? Check.
So "large quantities of special purpose hardware" wasn't even used for these. Just large quantities of general purpose parallel processors, known for historical reasons as "graphics processing units."
A general purpose computer is the entire machine, reprogrammable to perform a variety of tasks, hence general-purpose. While I do think it's potentially coming to an end as well, I think it's doing so for entirely different reasons.
The general purpose computer has become a somewhat niche device in that the public is increasingly interested in consumer-oriented appliances which just happen to contain microprocessors, like phones and tablets. They're often locked-down and only capable of running a blessed subset of applications available from select suppliers through a walled-garden.
That's threatening the demise of the general-purpose computer as we know it. I'm genuinely concerned that we may one day find ourselves limited to very expensive, low-volume niche machines with general-purpose capabilities aimed at STEM-oriented uses. I hope I'm wrong here, but given that we're already seeing young people who never learned to type because they've never used a keyboard, it doesn't seem impossible.
The linked article is talking about processors, not computers.
> I'm genuinely concerned that we may find ourselves one day limited to very expensive niche machines produced in low volumes having general-purpose capabilities targeting STEM-oriented uses.
General-purpose computers belonged to tech nerds in the '70s, and to them plus professionals and creatives through most of the '80s, and it sounds like they're going to go back to that crowd. Honestly I can see benefits to this; it was nice to have dirt cheap hardware for a while, but maybe things will get back to being more modular and expandable again.
Regarding general purpose processing, I think RISC-V is going to save us here and keep a general purpose microprocessor around as long as anyone wants.
All GPUs are basically full of giant SIMD units and the programming models increasingly expose this. They just don't have a common standard ISA.
And you're also right that the programming models do expose the SIMD nature, but at the same time they are becoming easier, more flexible, and more general purpose. I'm thinking in particular of CUDA and NVIDIA's more recent independent thread scheduling, which tolerates higher levels of divergence. I'm less familiar with AMD hardware, but I assume the trends are the same.
Really, I'm looking at the trend more than any specifics or constraints of today. GPUs used to be only for rendering triangles, and only a small number of gamers and an even smaller number of graphics researchers cared. Today, the applications are way more mainstream, everyone has GPUs, and interest in AI and crypto and general high performance computing is rivaling that of graphics & games.
CPUs also go to heroic lengths to try and make shared memory transparent with complex cache synchronization logic. GPUs allow you to do that to an extent with atomics but have a programming model that discourages it. That's a model that better aligns with the realities of hardware.
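As a rough CPU-side illustration of what that explicit model looks like, here's a minimal C11 sketch (the names and counts are mine): threads combine their results through an atomic read-modify-write instead of assuming transparently coherent plain stores will sort it out, which is essentially the shape GPU kernels encourage with their own atomic intrinsics:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    100000

static atomic_long counter;   /* explicitly shared accumulator */

/* Each thread adds into the shared counter with an atomic RMW,
 * rather than relying on implicit coherence of plain loads/stores. */
static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);

    /* Prints 400000: every increment is visible exactly once. */
    printf("counter = %ld\n", (long)atomic_load(&counter));
    return 0;
}
```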
Moore's Law was always about the number of transistors, not clock speed or single-thread performance. GPUs have kept it fairly alive in that regard. Even CPUs continue to increase transistor counts quite effectively; they've just stalled out on clock-speed increases and on ways to use those transistors to improve single-thread performance, and a lot of software still can't scale well by just adding cores.
Maybe one way of expressing my point of view that involves fewer potentially wrong predictions of the future is: I feel like the SIMD programming model is now becoming accepted as general purpose computing, rather than GPUs being particularly specialized. The GPU model is different from the CPU model, but not that specialized anymore. I may be drawing a subjective line. I do expect GPU programming to continue getting easier, both hardware- and software-wise, but I also feel like, for exactly all the reasons you mention, more and more people are aware of and accepting of the GPU limitations in search of that performance.
I don't think you meant what you wrote here. ARM(/AArch64), PowerPC, MIPS, RISC-V, SPARC, and x86 all have SIMD unit ISAs on them. In fact, on x86-64, I'm annoyed because scalar floating point uses the vector registers anyways, so just glancing for use of %xmm is insufficient to tell you if your code got vectorized.
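To make that concrete, here's a small C sketch (the function names are mine): both versions keep their data in %xmm registers on x86-64, but only the second is actually SIMD, using packed adds via SSE intrinsics, which is why looking at the register names alone isn't enough to tell whether code got vectorized:

```c
#include <immintrin.h>
#include <stddef.h>

/* Scalar version: on x86-64, `s` lives in an %xmm register and the
 * adds compile to scalar addss -- %xmm use alone doesn't mean SIMD. */
float sum_scalar(const float *a, size_t n)
{
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Explicit SSE version: the same %xmm registers, but packed addps
 * operating on four floats per instruction. */
float sum_sse(const float *a, size_t n)
{
    __m128 acc = _mm_setzero_ps();
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));

    /* Horizontal reduction of the four lanes, then the scalar tail. */
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; i++)
        s += a[i];
    return s;
}
```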
What Intel tried and axed was the many-dumb-x86-core model of Larrabee and the Xeon Phi stuff. It's arguable that a lot of that failure was due to Intel stupidly asserting that you wouldn't need to rewrite your code to make it run on that sort of architecture.
What I had in my head in my comment was not to suggest nobody has done it, I’m just thinking that in the future we will probably see CPU and GPU merge together. It seems like a strange byproduct of corporate chip-maker history that we currently have CPUs and GPUs that are separate things. Considering the widespread adoption of GPUs for high performance and general purpose computing, I guess I just expect to see processors become dual-mode in the future, rather than have two different single-mode chips.
Plus, most non tech people don't need anything high end. So the percentage of Chromebooks, Celerons, etc, goes up.
Then, on the server side, a lot of the chip sales are going direct to the FAANG group, rather than to someone like Dell or HP.
Those two things do take a lot of wind out of better, generally available, general purpose devices for regular people and companies. A shrinking market doesn't usually improve quality.
And of course Apple ships 10x more ARM chips than x86 chips.
For example, traditional RAID controllers were replaced with software-based solutions once there was surplus compute in the multicore era. If your workload can be viewed as "offload the CPU", it's only a matter of time before general-purpose CPU cores are plentiful enough that the need to offload goes away.
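The parity math a RAID controller used to do in dedicated hardware illustrates why: here's a hedged sketch (names and layout are illustrative) of RAID-5-style XOR parity over a stripe, the kind of loop software RAID now just runs on spare host cores:

```c
#include <stddef.h>
#include <stdint.h>

/* RAID-5-style parity for one stripe: parity = d0 ^ d1 ^ ... ^ d(n-1).
 * A lost data block can later be rebuilt by XOR-ing the parity with the
 * surviving blocks. Hardware RAID did this on the controller; software
 * RAID runs the same loop (often SIMD-accelerated) on the host CPU. */
void raid5_parity(uint8_t *parity,
                  const uint8_t *const *data_blocks,
                  size_t nblocks, size_t block_len)
{
    for (size_t i = 0; i < block_len; i++) {
        uint8_t p = 0;
        for (size_t b = 0; b < nblocks; b++)
            p ^= data_blocks[b][i];
        parity[i] = p;
    }
}
```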
Pure compute (be it on traditional CPUs or on the vector variants that GPUs/TPUs offer) and latency-sensitive tasks (some networking, plus other FPGA- and ASIC-accelerated work, etc.) are the only areas where non-general-purpose hardware can maintain a long-term foothold.
The end of Moore’s Law means no additional transistors and therefore no additional cores without simplifying or otherwise reducing the architecture.
I've heard figures an order of magnitude smaller for ARM. If so, the processor market needs to move beyond the Intel/x86 corner before generalizations about CPUs vs. GPUs can be made.
One source, not fully vetted: https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...
The sound card had its own MIDI and sound-effects chips, for example. Now that's reduced to one chip, if that, on an AC'97-capable motherboard.
Modems had their own discrete processor to handle communication over the phone line. Now that's again reduced to a WinModem chip and/or a NIC or Wi-Fi chip.