
There are only a few things that parallelize so well that large quantities of special purpose hardware are cost effective.

• 3D graphics - hence GPUs.

• Fluid dynamics simulations (weather, aerodynamics, nuclear, injection molding - what supercomputers do all day.)

• Crypto key testers - from the WWII Bombe to Bitcoin miners

• Machine learning inner loops

That list grows very slowly. Everything on that list was on it by 1970, if you include the original hardware perceptron.




Parallelization is not the only reason to pursue specialized hardware [0]. The real benefit of specialized hardware is that it can do one thing very well. Most of what a CPU spends its energy on is deciding what computation to do. If you are designing a circuit that only ever does one type of computation, you can save vast amounts of energy.

[0] It's not even a particularly good reason. The only reason we don't have massively parallel general-purpose CPUs is how specific the problems that benefit from them are. Even then, modern GPUs are pretty close to being general-purpose parallel processors.


One other item for that list is packet-offloading for networking cards. That is, taking the work of checksum calculation, and even the wrapping/unwrapping of data (converting from streams to/from packets), and pushing that into the NIC's hardware.
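For concreteness, the per-packet arithmetic being offloaded here is basically the RFC 1071 one's-complement checksum. A rough Python sketch (simplified - real stacks also fold in pseudo-headers per protocol, and the NIC does this at line rate in silicon):

  def internet_checksum(data: bytes) -> int:
      # RFC 1071: one's-complement sum over 16-bit words, then complement.
      if len(data) % 2:
          data += b"\x00"                              # pad odd-length input
      total = 0
      for i in range(0, len(data), 2):
          total += (data[i] << 8) | data[i + 1]
          total = (total & 0xFFFF) + (total >> 16)     # end-around carry
      return ~total & 0xFFFF

  # e.g. internet_checksum(ip_header_with_checksum_field_zeroed)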

I was also thinking about including the work that hard-drive controllers do (like checksumming, and handling 512/4K logical/physical sectors), but the difference there is that, for NIC offload, the kernel already has that functionality, whereas for hard drives the kernel does not do the work of the hard-drive controller at all.


The list might grow slowly, but the last item on it - ML - grows like crazy right now. It's not unreasonable to expect that in 20 years the vast majority of all computation (from tiniest IoT devices to largest supercomputers) will be running ML models (from simplest classifiers to whole brain simulations).


It's pretty unreasonable to expect that if you're aware of what an AI winter is, and the fact that we're probably on the cusp of another one: https://en.wikipedia.org/wiki/AI_winter

Once everyone realizes every practical use of their AI technology is more than adequately met by conventional code, and that there's no grand breakthrough into general artificial intelligence coming anytime soon, the hype cycle will end.


> Once everyone realizes every practical use of their AI technology is more than adequately met by conventional code

I'm all for skepticism for the current hyperbole, but there's no need to be hyperbolic in the other direction.

There are some applications in which deep learning really does work better than the alternatives. The 2018 Gordon Bell Prize, after all, went to a team that did deep learning on Summit for climate analysis.

There is a nontrivial list of applications for which you would have a hard time convincing the experts that they would be better off with conventional code.


I think we're set for an AI fall, not a winter. There are some tasks that current ML implementations handle quite well and that cannot be adequately addressed with conventional code. Speech recognition, machine translation, and image processing are the main examples.


Can you name an existing, large business which depends on ML for its existence?


It's like asking "name a large business which depends on the internet for its existence" back in 1995.

A better question to ask back then would have been "which large business will depend on the internet in 2015".

Note that the internet in 1995 was pretty bad (in terms of applications and from a technical point of view), and the hype led to the dot-com crash 5 years later. Yet it's hard to overestimate the importance of the internet in today's world.


Fair Isaac Corporation


This is the only acceptable answer here, and of course gradient boosted decision trees don't require special hardware, contra the original article on general purpose computing.


Incidentally, AMD actually advertised their latest architecture as using "neural networks" for branch prediction. (IIRC it was actually just a linear model, aka a neural net with one layer - i.e. a perceptron.)

So if that technology were to catch on, a pedant could argue that most computing workloads really are machine learning.
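For the curious, here is a rough sketch of the perceptron-style predictor described in the academic literature (Jimenez & Lin's design; whether AMD's hardware looks exactly like this isn't public, and the parameters below are hypothetical). The point is that the "neural net" is just a dot product of learned weights against the global branch history - a linear model:

  # Sketch of a perceptron branch predictor (after Jimenez & Lin, 2001).
  # Real designs hash the branch PC into a table of weight vectors.
  HIST_LEN  = 16
  THRESHOLD = int(1.93 * HIST_LEN + 14)    # training threshold from the paper
  N_ENTRIES = 1024

  weights = [[0] * (HIST_LEN + 1) for _ in range(N_ENTRIES)]  # +1 bias weight
  history = [1] * HIST_LEN                                    # +1 = taken, -1 = not taken

  def predict(pc):
      w = weights[pc % N_ENTRIES]
      y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))  # plain dot product
      return y >= 0, y                                           # (taken?, confidence)

  def update(pc, taken, y):
      t = 1 if taken else -1
      w = weights[pc % N_ENTRIES]
      if (y >= 0) != taken or abs(y) <= THRESHOLD:   # train on mispredict or low confidence
          w[0] += t
          for i in range(HIST_LEN):
              w[i + 1] += t * history[i]
      history.pop(0)
      history.append(t)                              # shift the real outcome into history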


Not for its existence, but the USPS would have a much harder time routing mail without ML-based recognition of handwritten addresses.


Google.


All of Google's market advantage happened before their ML kick, and none of their continued success can be ascribed to it.


I don't really want to get into an argument over definitions, but IMO PageRank is pretty obviously machine learning.
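For what it's worth, the core of PageRank is just power iteration toward the stationary distribution of a random surfer over the link graph; whether you call fitting that distribution from data "learning" is exactly the definitional argument. A toy sketch (made-up three-page graph, standard 0.85 damping):

  # Toy PageRank by power iteration (illustrative only; graph is hypothetical).
  links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
  d, n = 0.85, len(links)
  rank = {p: 1.0 / n for p in links}

  for _ in range(50):
      new = {p: (1 - d) / n for p in links}
      for page, outs in links.items():
          share = d * rank[page] / len(outs)
          for out in outs:
              new[out] += share
      rank = new

  print(rank)   # pages linked from important pages end up ranked higher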


That was my intention when writing the reply, yeah.


You can even argue the opposite: since they went all-in on ML, the product quality has at best stagnated.


Their product isn't search, it's targeted ads. Have the ads gotten worse?


They have not become better (in my experience). The ads are still mostly irrelevant for me. (With no anti-tracking attempted by me.)


It's worth noting that all four of your examples routinely run on GPUs.

3D graphics? Check (freebie).

Fluid dynamics? Check - supercomputers increasingly get most of their compute from GPUs.

Cryptography? Check - this is the only one that really got specialized hardware.

Machine learning? Check.

So "large quantities of special purpose hardware" wasn't even used for these. Just large quantities of general purpose parallel processors, known for historical reasons as "graphics processing units."



