
Neural Algorithms and Computing Beyond Moore's Law - aidanrocke
https://cacm.acm.org/magazines/2019/4/235577-neural-algorithms-and-computing-beyond-moores-law/fulltext
======
doctorpangloss
It's an interesting point of view, for sure.

> There are cases of non-neural computer vision algorithms based on Bayesian
> inference principles [22], though it has been challenging to develop such
> models that can be trained as easily as deep learning networks.

Maybe. The paper he cites, "Human-level concept learning through probabilistic
program induction," has "On a challenging one-shot classification task, the
model achieves human-level performance while outperforming recent deep
learning approaches" right in its abstract. One-shot sounds like exactly what
"easy to train" means.

Maybe he meant _accelerated_ training, but that kind of gets to the core of
what's flawed about this analysis. There's no economic incentive to build
specialized, accelerated hardware for one of Tenenbaum's research projects,
until there _is_. There's enormous economic incentive to build video cards for
games, which is what all those deep learning advances were predicated on, and
it remains to be seen if there's any economic incentive for TPUs or whatever
specialized hardware he's imagining, of whatever architecture.

Looking at computing architectures the way he does, CPUs versus GPUs, is a
post-hoc analysis, and nobody except those deep in the research community and
paying attention to NVIDIA papers could have anticipated how GPUs would affect
research.

There isn't, and there is, an architectural difference between CPUs and GPUs
that matters. He's picking and choosing which architectural differences matter
in a way that favors R&D that he concedes has problems with "falsifiability."

If anything, we still need better, cheaper CPUs! They're still very useful for
R&D. I'd rather get a slow supercomputer built today than nothing built
tomorrow, if Tenenbaum is telling me he's chasing something now and needs it
now.

Conversely, why should we listen to R&D people about problems that are
fundamentally economically motivated? It would be a bad bet. We're getting
low-power CPUs now not because the world is "data oriented" or whatever he's
saying, but because the iPhone is so fucking popular that there's immense
demand for them. Ask Paul Otellini, an expert on the CPU business, what he
thinks about that! So maybe we should actually be doubling down on the needs
of consumer electronics manufacturers?

------
aidanrocke
So if I had to write a tl;dr I would say that there are two economic
incentives for neuromorphic computing:

1\. Energy efficiency: More brain-like computers will eventually reach an
energy efficiency comparable to the human brain. Consider that AlphaGo Zero
used ~200 kilowatts for learning Go vs. a human that uses about ~20 watts for
learning everything that humans can learn (a rough back-of-the-envelope
comparison follows after this list).

2\. AI & neuroscience research: What better way to test a neuroscience theory
than to physically build a brain? As the saying goes, if you can't build it,
you don't understand it.
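
To make that gap concrete, here is a rough back-of-the-envelope sketch in
Python. The ~200 kW and ~20 W figures are the approximate ones above; the
training and study durations are illustrative assumptions, not measured
values.

    # Rough power/energy comparison using the approximate figures above.
    ALPHAGO_ZERO_POWER_W = 200_000   # ~200 kW draw during training (approximate)
    HUMAN_BRAIN_POWER_W = 20         # ~20 W continuous (approximate)

    # Instantaneous power ratio: roughly four orders of magnitude.
    print(f"Power ratio: ~{ALPHAGO_ZERO_POWER_W / HUMAN_BRAIN_POWER_W:,.0f}x")

    # Energy over an assumed multi-day training run vs. an assumed decade of
    # human study (both durations are illustrative, not measured).
    SECONDS_PER_DAY = 86_400
    machine_energy_kwh = ALPHAGO_ZERO_POWER_W * 3 * SECONDS_PER_DAY / 3.6e6
    human_energy_kwh = HUMAN_BRAIN_POWER_W * 10 * 365 * SECONDS_PER_DAY / 3.6e6
    print(f"Machine energy (3-day run): ~{machine_energy_kwh:,.0f} kWh")
    print(f"Human energy (10 years): ~{human_energy_kwh:,.0f} kWh")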

~~~
hollerith
>Consider that AlphaGo Zero used ~200 kilowatts for learning Go vs. a human
that uses about ~20 watts

Maybe not the best example, since to have any chance at all of taking even a
single game off AlphaGo Zero, a human would need to study Go for at least a
decade.

Another big advantage the machine has here is that the result of the AlphaGo
Zero training can be deployed to as many Go-playing computers as you like with
no additional training.

~~~
kodz4
Point taken, but that 20-watt brain can learn a little more than one game.

~~~
p1esk
HW that runs the AlphaGo Zero code is a general-purpose computer, so it can
learn anything given the right software.

