Deep Dreams with Caffe (github.com/google)
245 points by miket on July 1, 2015 | hide | past | favorite | 55 comments

One thing people might not realize (I'm not sure how obvious it is) is that these renders depend strongly on the statistics of the training data used for the ConvNet. In particular you're seeing a lot of dog faces because there is a large number of dog classes in the ImageNet dataset (several hundred classes out of 1000 are dogs), so the ConvNet allocates a lot of its capacity to worrying about their fine-grained features.

In particular, if you train ConvNets on other data you will get very different hallucinations. It might be interesting to train (or even fine-tune) the networks on different data and see how the results vary. For example, different medical datasets, or datasets made entirely of faces (e.g. Faces in the Wild data), galaxies, etc.

It's also possible to take Image Captioning models and use the same idea to hallucinate images that are very likely for some specific sentence. There are a lot of fun ideas to play with.
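The underlying trick (gradient ascent on the input image to amplify whatever a layer responds to) can be sketched with a toy stand-in for the network. Everything here is invented for illustration: `toy_layer` is just a linear map, not a real ConvNet, and the actual deepdream code runs forward/backward passes through Caffe instead.

```python
import numpy as np

def toy_layer(img, weights):
    """Stand-in for a ConvNet layer: a plain linear map (an assumption
    for illustration, not the real network)."""
    return weights @ img.ravel()

def dream_step(img, weights, lr=0.1):
    """One step of gradient ascent on 0.5 * ||W x||^2.
    Its gradient w.r.t. the image x is W.T @ (W @ x)."""
    act = toy_layer(img, weights)
    grad = (weights.T @ act).reshape(img.shape)
    # Normalize the step size, as the real deepdream code also does.
    return img + lr * grad / (np.abs(grad).mean() + 1e-8)

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
W = rng.standard_normal((16, 64))
before = np.linalg.norm(toy_layer(img, W))
for _ in range(20):
    img = dream_step(img, W)
after = np.linalg.norm(toy_layer(img, W))
# The image has been nudged toward whatever the "layer" responds to,
# so its activation norm grows: after > before.
```

With a real network, the layer you maximize determines the flavor of the hallucinations, which is why the training data matters so much.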

So how much computational effort would it take to train with a different set of images, to reach the same level of training as this existing data?

Would it be possible on a simple commercial computer?

OpenCL support is coming, but for Caffe it's not yet as performant as the CUDA support.

Grab a couple of video cards and have fun!

Might finally be putting some bitcoin GPUs to use.

http://deepdreams.zainshah.net spun up a simple web server so you can try your own! Please be gentle :)

I tried to run the same image in it a few times. http://imgur.com/5Hsuoiy

Since you have this all set up, can you make some feedback-loop animations, for example with zooming? Or apply this to each frame of a movie? For example, something famous like Charlie Bit My Finger. Hopefully using the deeper, more horrifying setting.

Here is a zooming example. I definitely noticed that it makes people's eyes look evil. Maybe it's hallucinating animal eyes on top of human eyes...
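A minimal sketch of that zoom feedback loop, assuming a `dream` function that stands in for the actual deepdream pass (here it's an identity placeholder, and frames are small 2D grayscale arrays):

```python
import numpy as np

def dream(frame):
    """Placeholder for the actual deepdream pass (in the real pipeline
    this would be gradient ascent through the network)."""
    return frame  # identity here, just to show the loop structure

def zoom(frame, scale=2):
    """Crop the center of the frame and blow it back up to full size
    with nearest-neighbor upsampling."""
    h, w = frame.shape
    ch, cw = h // scale, w // scale
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = frame[top:top + ch, left:left + cw]
    return np.kron(crop, np.ones((scale, scale)))  # back to h x w

frames = []
frame = np.arange(64, dtype=float).reshape(8, 8)
for _ in range(5):
    # Dream, then zoom into the result and dream again: artifacts from
    # each pass get amplified on the next, which is what makes the
    # animations so trippy.
    frame = zoom(dream(frame))
    frames.append(frame)
# Every frame keeps the original 8x8 size while the view zooms in.
```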


That's a bit freaky

Nice! You might consider adding support for one of the MIT Places networks (http://places.csail.mit.edu/downloadCNN.html). That's how we got a lot of the pictures we used in the original blog post. For example, these were made that way: http://1.bp.blogspot.com/-XZ0i0zXOhQk/VYIXdyIL9kI/AAAAAAAAAm...

Awesome! I was just going to suggest that someone do this!

Check out the tiger, and super weird tiger: https://twitter.com/radiofreejohn/status/616490624095621120 :) :)

I think animals are particularly conducive to getting weird. I tried a picture of a sailboat and it wasn't that weird.

Then I tried a picture of a puppy and it sprouted another face on its paw and subtle eyes throughout its fur: http://i.imgur.com/N8Izxm1.jpg

Yeah, it seems to pick up on eyes very heavily.

I did one of a landscape of the Blueridge Mountains and it added a bunch of buildings on the mountain tops, and insects in the sky :P

Looks great, thanks for this.

Some of us are going to be putting our own on http://reddit.com/r/deepdream

Maybe add some fields to manipulate the other parameters :-)

Looks like you got slashdotted. Did you happen to create any kind of packaged installation? I'd like to try it over the weekend.

Did the site crash or something? I'm not getting anything back.

Nooo:( Was just starting to do that!

Still do it! The parent comment's instance is down!

The visuals generated by the neural network remind me of visuals experienced under the influence of psilocybin or LSD. I wonder if I am making an unjust leap or if there is a similar organic process (searching for familiar patterns) taking place in the mind? Fascinating, thanks for sharing.

No hypothesis is unjust! It could also be related to some of the experiences people have in sensory deprivation tanks: your brain attempts to find structure in noise and hallucinates. One hypothesis would be that on LSD, and other psychoactive substances, this feedback loop is somehow enhanced. There might be a few doctorates to be earned in testing these hypotheses.

It would make sense if the brain did use similar mechanisms to search for patterns.

"Be careful running the code above, it can bring you into very strange realms!"

Reminds me of Charlie Stross's new novel,

"A brief recap: magic is the name given to the practice of manipulating the ultrastructure of reality by carrying out mathematical operations. We live in a multiverse, and certain operators trigger echoes in the Platonic realm of mathematical truth, echoes which can be amplified and fed back into our (and other) realities. Computers, being machines for executing mathematical operations at very high speed, are useful to us as occult engines. Likewise, some of us have the ability to carry out magical operations in our own heads, albeit at terrible cost."


Stross definitely has the sight. As for the Platonic realm, well, that's just the hypervisor he's referring to. :)

You might also like Shadowfist (http://shadowfist.com), a card game that used to have the Purists, a playable faction powered by esoteric, math-centric magic.

Those Teletubbies are perfect. Best I've seen yet.

At the very bottom of the uncanny valley

Wow, these are way better than the Google Inceptionism originals.

Those are great! Which model did you use for the Mona Lisa picture? Thanks.

Time to #DeepDream some minecraft texture packs :)

Great, I got the dependencies installed on OSX and I'm already monsterifying a head shot for LinkedIn. Now, to find a way to get this working in real time with a webcam...

I'm stuck at compiling Caffe :\

I had a lot of difficulty getting it compiled too, but this helped a lot:


That guide is mostly correct, assuming your reply here means that you're also using OS X. It should get you all the way to a working Caffe install. The one thing it doesn't get right is that your PYTHON_INCLUDE and PYTHON_LIB variables should both point to the relevant folders from your Homebrew Python install (I had no luck attempting to compile pycaffe against either Anaconda or system Python, both would just segfault when I imported the module). In my case, that was (assuming you've already installed numpy with Homebrew pip):

PYTHON_INCLUDE := /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/include/python2.7 \
                  /usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include

PYTHON_LIB is exactly as it is in the example Makefile.config on that page, except adjusted for version number if you've installed Python via Homebrew since 2.7.10 was released.
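If you'd rather not hard-code the Cellar version, the interpreter can report its own header locations (this is a general Python facility, not something specific to the Caffe docs):

```python
import sysconfig
import numpy

# Ask the interpreter where its headers live, instead of hard-coding
# the Homebrew Cellar version path:
print(sysconfig.get_paths()["include"])  # use for PYTHON_INCLUDE
print(numpy.get_include())               # numpy headers, also for PYTHON_INCLUDE
```

Paste the two printed paths into the `PYTHON_INCLUDE` line of Makefile.config; they update automatically when Homebrew bumps the Python version.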

Same here, keep it up man!

We sort of reverse-engineered this last week and set up a stream with live interactive "hallucinations": http://www.twitch.tv/317070

You can suggest what objects the network should dream about (combinations of two are also possible).

Our code will be published on GitHub later today!

I'm very excited to see the code. I read your blog post earlier this week and am very intrigued.

Amazing that it runs easily on consumer hardware; this dispels suspicions that a Google cluster was necessary for these results.

I'm wondering if it's possible to use this with a model that was trained on a database without labels, just pictures. Is such a thing even possible? For this particular application, labeling and categories are ultimately superfluous, but are they required in order to get there?

Can someone please create a SaaS interface to play with it? Would love to send this to family/friends who can't easily spin up the code.

A simpler version of this idea (making an image A out of matching pieces of a set of images B) was implemented in the early 90s and released as open source: http://draves.org/fuse/
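That matching-pieces idea can be sketched as a toy patch mosaic: for each block of the target image, pick the closest tile by L2 distance. All names and sizes here are made up for illustration:

```python
import numpy as np

def mosaic(target, tiles, patch=4):
    """Rebuild `target` by replacing each patch-sized block with the
    closest tile (L2 distance) from `tiles`."""
    h, w = target.shape
    out = np.empty_like(target)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = target[i:i + patch, j:j + patch]
            dists = [np.sum((block - t) ** 2) for t in tiles]
            out[i:i + patch, j:j + patch] = tiles[int(np.argmin(dists))]
    return out

rng = np.random.default_rng(2)
target = rng.random((8, 8))
tiles = [rng.random((4, 4)) for _ in range(10)]
result = mosaic(target, tiles)
# Every 4x4 block of `result` is literally one of the tiles.
```

The deepdream approach differs in that it synthesizes new pixels via the network's gradients rather than reusing existing image pieces.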

I always wonder why sometimes the system finds faces and other elements in essentially untextured / homogeneous parts of images. Wouldn't there be some sort of "data term" in the energy functional that would suppress these results and/or move them to other parts of the image?

Perhaps this is working entirely differently and I'm thinking too much in the classical computer vision realm. Would love some explanation though.

I imagine the chance of an input that would result in zero confidence in all output nodes is damn near zero.

There will basically always be an output node with the highest confidence, no matter how low.
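That's easy to see with softmax, which is what these classifiers typically put on their output layer: its outputs are strictly positive and sum to 1, so even pure-noise logits always yield a top class:

```python
import numpy as np

def softmax(logits):
    """Standard softmax; outputs are strictly positive and sum to 1."""
    z = logits - logits.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Even for random-noise logits, the probabilities can never all be zero,
# so argmax always names a "most likely" class, however unconfident.
rng = np.random.default_rng(1)
probs = softmax(rng.standard_normal(1000))
top = probs.argmax()
# probs sums to 1, and the winner is at least the uniform 1/1000.
```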

This is really cool. I wonder what it would look like applied to video.

Also I didn't know that github displays .ipynb, that's pretty awesome.

This should be combined with the oculus with a camera on the front.

Simulated lsd?

Does anyone know if this technique can be used to slurp up a database and produce "typical" records for populating a test database? This is a problem that I struggled with a few years ago and still haven't found a good automated solution.

Could you refine your question? This is a post about image processing via neural network. Do you mean take an existing database, learn via neural network, and populate a fresh one with "learned" attributes?

Yes, that sounds correct. I'm thinking of something where I take an existing db and train a NN on it, then populate a test db with things like "typical account", "typical delinquent account", etc. This db could then be used for automated testing. I have seen approaches like Factory Girl in Rails, but the new rows just have incrementing fields. Another approach would be to model a column statistically, then generate random values that conform to that model. I'm thinking of something so general it can find and model relationships in a db. For example, it should be able to see that most people have 2 or 3 credit cards on file and generate test data like that. This may not be a problem for a NN, but the idea of running the networks backwards and "imagining" things they have learned seems like a good fit. I have played around with Markov chains trained on first + last names that could generate made-up names, but that is as far as I got with it.
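A minimal sketch of that character-level Markov chain idea (the training names and helper names here are made up for illustration):

```python
import random

def train_chain(names):
    """Build a character-bigram chain: for each character (plus a start
    marker '^'), record which characters were seen to follow it."""
    chain = {}
    for name in names:
        prev = "^"
        for ch in name + "$":  # '$' marks end-of-name
            chain.setdefault(prev, []).append(ch)
            prev = ch
    return chain

def generate(chain, rng, max_len=12):
    """Walk the chain from the start marker until the end marker."""
    out, prev = [], "^"
    while len(out) < max_len:
        ch = rng.choice(chain[prev])
        if ch == "$":
            break
        out.append(ch)
        prev = ch
    return "".join(out)

rng = random.Random(42)
chain = train_chain(["alice", "albert", "alison", "carol", "carla"])
made_up = [generate(chain, rng) for _ in range(5)]
# Every generated name uses only character transitions seen in training,
# so the output "looks like" the training names without copying them.
```

The same idea generalizes to table rows: model each column's value transitions (or joint statistics) and sample new, plausible records.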

The dogs, eyes, and Dali-like bird-dogs are really cool. I've seen some insects, too, but not very often.

Are there any other flavors of hallucination? Why all the dogs? I suppose ImageNet has a lot of dog varieties in its category list.

A Trip To The Moon - http://imgur.com/a/EkAkv

So awesomely trippy, love it.

Ugh, so annoying to compile. Can someone make this easier?

http://ryankennedy.io/running-the-deep-dream/ installs a Docker container, couldn't be easier.

But... How do you do this?

Such things are the reason why I like the science-friendly Python community.
