
AI Chips: Past, Present and Future - Lind5
https://semiengineering.com/artificial-intelligence-chips-past-present-and-future/
======
SemiTom
There are something like three dozen startups working on different chips that
can speed up this whole process. The big question for all of these companies
and their architectures is how flexible they will be for anything beyond a
very specific implementation of an algorithm. This is particularly important
in the AI/ML world because the algorithms are changing so rapidly. The
challenge here goes well beyond the chip architecture. So if you think about
the benchmarks for CPUs today, those can be blazing fast, but when you try
them out on real applications, they run a lot slower. Those differences may be
magnified with parallel approaches. Different applications exploit different
levels of parallelism, with uneven performance improvements. It's not just how
the processing is partitioned, but also how effectively the results can be
recombined after processing.
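
The point about uneven gains from parallelism is captured by Amdahl's law: overall speedup is capped by the serial fraction of the work, no matter how many parallel units a chip adds. A minimal sketch in Python (not from the article, just the standard formula):

```python
# Amdahl's law: speedup is limited by the fraction of work
# that cannot be parallelized.
def amdahl_speedup(parallel_fraction: float, n_units: int) -> float:
    """Overall speedup when only `parallel_fraction` of the work
    runs on `n_units` parallel units; the rest stays serial."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

# Even 95%-parallel work on 1024 units tops out near 20x,
# which is why partitioning and recombination overhead dominate.
print(round(amdahl_speedup(0.95, 1024), 1))  # prints 19.6
```

This is why two applications on the same chip can see wildly different improvements: the limiting term is their own serial fraction, not the hardware's peak throughput.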

~~~
cbHXBY1D
Well, that's why Facebook, Microsoft, and Amazon are collaborating to create
ONNX, an open-source graph model format. It's currently looking like ONNX will
be the "LLVM IR" of the deep learning world.

------
jl2718
Disappointing article. The history and future of neural network chips are
much, much larger than GPUs and TPUs. There was a post a few months ago from
IEEE about neural chip architectures, which was pretty good for its scope, but
there is actually a lot more once you add in the materials research.

~~~
joe_the_user
It may be a long time before special-purpose parallel processing units become
more useful than general-purpose parallel processing units.

The advantages of general-purpose parallel chips are both economies of scale
(GPUs serve purposes from games to cryptomining to AI) and AI's tendency
to switch from algorithm to algorithm. There's a long history of "AI
chips" that prematurely chose the particular AI algorithm to be
optimized.

