If you add a few constraints, you can run a neural network "in reverse". Crank up the "dog" output (which is now an input) and it will generate an image that would be recognized as a dog. You can also feed in images like normal, then modify the output and run it backwards to make the image more "doggy". And you're not limited to the first and last layers; you can enhance the values of different layers in the middle to bring out smaller or larger features.
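A toy sketch of that "run it in reverse" idea, as gradient ascent on the input: pick an output unit, compute how the input should change to increase it, and step the image in that direction. Everything here (the single linear layer, the weights, the "dog" class index) is made up purely for illustration, not the actual DeepDream model.

```python
import numpy as np

# Hypothetical toy "network": one linear layer mapping a 4-pixel
# image to 3 class scores. Weights are random, just for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # rows are classes; say row 1 = "dog"

def scores(x):
    return W @ x

# Gradient ascent on the INPUT: nudge the image so the "dog" score
# (class index 1) grows. For a linear model, d(score_dog)/dx is just W[1].
def dreamify(x, class_idx=1, steps=50, lr=0.1):
    x = x.copy()
    for _ in range(steps):
        x += lr * W[class_idx]  # gradient of scores(x)[class_idx] w.r.t. x
    return x

x0 = rng.normal(size=4)
x1 = dreamify(x0)
print(scores(x0)[1], scores(x1)[1])  # the "dog" score rises
```

Real DeepDream does the same thing with a deep convnet and backprop, and instead of an output class it often maximizes the activations of some intermediate layer, which is why different layers bring out smaller or larger features.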
It's a single image of a "cloud" as interpreted by a neural network trained to recognize animals... so it interpreted many of the cloud shapes as different animals!
It depends on what the networks have been trained to recognise - if they have been trained to recognise images of dogs then they are more likely to detect dogs in clouds. See also https://news.ycombinator.com/item?id=9818077.
It is; psychedelic-induced hallucinations (as opposed to hallucinations resulting from delirium) are produced in much the same way as DeepDream, and the results are strikingly similar.
I was thinking about this. All the DeepDream images we've been seeing look really similar because they all use the same network; it would be interesting to use custom-trained networks for more variation.
Well, really they look similar because nobody bothers to use anything but the default training set it comes with - which happens to contain tons of dogs.
I'm not sure what the goal is here, or the method for achieving it.