
DeepDream: How Alexander Mordvintsev excavated the computer’s hidden layers - DamnInteresting
https://thereader.mitpress.mit.edu/deepdream-how-alexander-mordvintsev-excavated-the-computers-hidden-layers/
======
colah3
I've been incredibly lucky to work with Alex on several projects, including
DeepDream. He's amazing. If you think you have a new idea about how to
understand neural networks, there's a decent chance Alex did a prototype of it
five years ago.

Regarding DeepDream, it often feels to me -- I don't wish to speak on behalf
of Alex or Mike -- that we didn't really understand what our results meant
when we published DeepDream. It was kind of like discovering that warped glass
can distort and magnify images: a really interesting discovery, but a lot more
work was needed to turn it into a scientific instrument, the way glass can be
shaped into a microscope. As the community developed single-neuron and
direction feature visualizations that worked well, lots of research
possibilities began to open up. And in retrospect, one of the most important
tricks was jitter, which Alex
introduced. This style of feature visualization is probably the single tool I
rely on most in my research to this day.
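For anyone unfamiliar with the jitter trick mentioned above: the idea is to randomly shift the image a few pixels before each gradient-ascent step (and shift it back afterward), so the optimization can't lock onto pixel-grid artifacts. Here's a minimal NumPy sketch of that loop; note it stands in a toy linear "neuron" (a hand-made edge filter) for a trained conv net, and the function name and parameters are made up for illustration:

```python
import numpy as np

def visualize_feature(weight, steps=200, lr=0.1, jitter=2, seed=0):
    """Gradient-ascent feature visualization with random jitter.

    `weight` plays the role of a single neuron's filter. The toy
    activation is sum(img * weight), so the gradient with respect
    to the image is just `weight`; in the real technique the
    gradient comes from backprop through a trained network.
    """
    rng = np.random.default_rng(seed)
    img = rng.normal(scale=0.01, size=weight.shape)  # start from noise
    for _ in range(steps):
        # Jitter: randomly shift the image before the step so the
        # optimization can't latch onto fixed pixel-grid artifacts.
        dx, dy = rng.integers(-jitter, jitter + 1, size=2)
        img = np.roll(img, (dx, dy), axis=(0, 1))
        # Gradient-ascent step on the toy neuron's activation.
        img += lr * weight
        # Undo the shift so the image stays registered across steps.
        img = np.roll(img, (-dx, -dy), axis=(0, 1))
    return img

# Toy filter: a vertical-edge detector (hypothetical example).
w = np.zeros((8, 8))
w[:, :4], w[:, 4:] = -1.0, 1.0
result = visualize_feature(w)
```

The same jitter/un-jitter structure wraps the backprop step when doing this against a real conv net; the random shifts average away high-frequency noise and make the resulting visualizations much cleaner.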

(If you're curious what this has led to as we've continued to pursue it, check
out Circuits (https://distill.pub/2020/circuits/zoom-in/), Building Blocks
(https://distill.pub/2018/building-blocks/) and Activation Atlases
(https://distill.pub/2019/activation-atlas/).)

I'd also encourage people to check out Alex's new line of research, Neural
Cellular Automata (https://distill.pub/2020/growing-ca/). I think it's a
really interesting line of exploration. And as usual, Alex has an incredibly
deep trove of small fascinating results relating to NCA if you talk to him
about it.

------
2bitencryption
> The crucial point is that the machine does not see a cat or dog, as we do,
> but a set of numbers.

This seems to miss the point. By the same logic, "Humans do not see a cat or
a dog; they receive a set of neural impulses."

If a human "knows" those impulses represent a cat, you could surely say an
artificial neural net "knows" those numbers represent a cat. And if you ask
"how" a human or NN knows this, I guess the answer is the same: different
levels of visual abstraction (numbers/impulses trigger neurons that recognize
edges and shapes, which become eyes, become faces, become bodies, become
animals...) trigger different levels of the network that are familiar with
those abstractions and turn them into the end result: "That is a cat."

------
colordrops
Ugh, I really dislike articles that seem about to tell you the key idea(s) at
the beginning, then veer off into a personal-interest story before doing the
reveal. It's enough to get me to quit the article.

~~~
dang
Ok, but please don't post unsubstantive comments to Hacker News.

~~~
colordrops
I see comments on how articles or websites are presented all the time; I
didn't realize it was disallowed. Apologies.

~~~
dang
Thanks—appreciated.

