
Deepdreaming without the Slugdogs - geb
http://blog.thehackerati.com/post/128746247346/deepdreaming-without-the-slugdogs
======
ericjang
I find it interesting that these hallucinations do not maintain a consistent
3D perspective: the images look really flat and any noticeable three-
dimensionality is localized to a small part of the image.

Intuitively, this makes some sense - one would expect an object classifier not
to care much about viewpoint, so the amplified representation of a dog or a
slug comes out flat. I suspect the fact that the bottom-most layers are
convolutional also has something to do with it.

My dreams are a lot more perspective-correct, though. Deepdream certainly
entertains the idea that biological dreaming might be somehow similar to
gradient ascent. Even if that were so, it would mean that the sensory
experiences we feel in our dreams somehow integrate a much more unified
"reality" than what we would experience if we were dreaming with only an
object classifier.

~~~
grrowl
I think these networks can only "express" (input or output) in terms of a 2D
image, whereas it seems to me that human dreaming is in terms of abstract
thoughts. We're not really dreaming in 3D, but dreaming in perception. If we
could train a computer on 3D scenes in the same manner, it could express its
dreams in 3D, but we don't have the data to feed it.

~~~
albertzeyer
We don't have the resources yet to train such a system on high-resolution
video, but that's probably coming soon. There is already some work on action
recognition in videos.

------
chippy
Try the interactive version with one or two objects from the object list at
Twitch.tv: http://www.twitch.tv/317070

"Instead of using it for classification, we are showing it an image and asking
it to modify it, so that it becomes more confident in what it sees. This
allows the network to hallucinate. The image is continuously zooming in,
creating an interesting kaleidoscopic effect."
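The "modify the image so the network becomes more confident" step is gradient
ascent on the input. Here is a minimal sketch of that idea, assuming a toy
linear "class score" in place of a trained convnet (a real DeepDream setup
would backpropagate through a network such as GoogLeNet):

```python
import numpy as np

# Toy stand-in for a classifier: a fixed linear "class score" w . x.
# (Assumption: a real setup would use a trained convnet and autodiff;
# this sketch only illustrates the ascent-on-the-input loop.)
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))       # weights of the made-up class score
image = rng.normal(size=(8, 8))   # starting "image"

def class_score(x):
    # The network's confidence in its favourite class (toy version).
    return float(np.sum(w * x))

# For a linear score the gradient w.r.t. the input is just w, so each
# step nudges the *image* (not the weights) toward a higher score.
step_size = 0.1
before = class_score(image)
for _ in range(20):
    image += step_size * w        # gradient ascent on the input
after = class_score(image)
```

The twitch stream adds one more ingredient on top of this loop: between
ascent steps the frame is slightly zoomed and re-fed to the network, which
produces the kaleidoscopic effect the quote describes.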

