
Inside Google's Deep Dreams project - steven
https://medium.com/backchannel/inside-deep-dreams-how-google-made-its-computers-go-crazy-83b9d24e66df#.o7ltyrj98
======
jmcmahon443
Really enjoyed the humble approach to describing the engineering mindset.

Also a great look into how awesome Google management is for letting their
software engineers explore spontaneous interests.

~~~
daveguy
Also a great look into how awesome Google management is for letting their
software engineers explore spontaneous interests+.

+ at 2am at home :)

~~~
michael_h
My company lets me spend 66% of my time on whatever I want.

~~~
pearjuice
Try self-employment; you will approach 100%.

~~~
jacquesm
Not if you want to stay afloat. 100% of your time will go to serving your
customers, saving for a downturn, doing admin, customer acquisition, staying
in touch with old customers who are not currently in the market but who may
ping you for the occasional question and so on. The _other_ 100% of your time
you can spend on whatever you want.

------
lawpoop
Question for the HN audience: what exactly would Deep Dream do if it were
trained on 'everything'?

When it came out, people asked why its output was full of puppyslugs, and the
answer was "Because it was trained primarily on a corpus of dog pictures."

Well, suppose that it was trained on a corpus of pictures of 'everything'.
What would its output look like then? Would it look more or less like the
input image?

~~~
thenewwazoo
I'm not a CNN guy, but your question doesn't quite make sense given my
understanding of neural networks. In short, the output step of a NN has to
include a classifier. That is, the NN generates a vector of probabilities,
one per category: the higher a given probability, the more likely the input
is to match the category associated with that value. For example, a NN
trained to distinguish "light" from "dark" may output the vector [0.3, 0.7].

To train a CNN on "everything", you would need an arbitrarily large output
vector. Can you list every possible category of everything? Probably
not. Even if you could take a swing at it, it's hard to get enough data (and
time!) to train the net on each category. Small datasets result in overfitting
of the training data, and poor overall performance. How big a data set would
you need to properly train a sufficiently-sized CNN with an arbitrarily-sized
classifier?

~~~
pigscantfly
I think I understand what you're getting at - that network architectures have
a fixed size output - but you're incorrect in saying that the final layer must
be a classifier. In general, you can optimize any differentiable function with
gradient descent, and the output does not have to be a probability
distribution.

The original poster's question does make sense; he's asking what would happen
if you trained the network on something like the ILSVRC dataset.
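
That is, in fact, how Deep Dream itself works: the quantity it climbs is an
inner layer's activation, not a classifier probability. A rough PyTorch
sketch of that gradient-ascent loop (the layer, learning rate, and step
count are my own illustrative choices, not what Google used):

    import torch
    from torchvision import models

    # Pretrained GoogLeNet; Deep Dream used a GoogLeNet-style
    # ImageNet model. The layer picked below is illustrative.
    model = models.googlenet(weights="DEFAULT").eval()

    # Capture one inner layer's activations with a forward hook.
    acts = {}
    model.inception4c.register_forward_hook(
        lambda mod, inp, out: acts.update(act=out))

    # Start from random noise; real Deep Dream starts from a photo.
    img = torch.rand(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([img], lr=0.05)

    for _ in range(100):
        opt.zero_grad()
        model(img)                  # fills acts["act"] via the hook
        loss = -acts["act"].norm()  # ascend: maximize activation
        loss.backward()
        opt.step()

Run long enough, the image drifts toward whatever patterns that layer
responds to, which is where the puppyslugs come from.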

------
sabujp
tldr; it perhaps helps us understand how we recognize the world around us,
esp. when we're dreaming or on psychedelics

