Deep Dream: Artificial intelligence meets hallucinations

Artificial intelligence is all the rage these days, with AI solutions for everything from scheduling meetings to Big Data mining in gastronomy appearing at an unprecedented rate. Many of these products rely on a technique called “deep learning,” which Wikipedia defines as:

… a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using model architectures, with complex structures or otherwise, composed of multiple non-linear transformations.

One of the key technologies used to implement deep learning systems is artificial neural networks, or ANNs:

… a family of statistical learning models inspired by biological neural networks (the central nervous systems of animals, in particular the brain) and are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are generally presented as systems of interconnected “neurons” which exchange messages between each other. The connections have numeric weights that can be tuned based on experience, making neural nets adaptive to inputs and capable of learning.

But ANNs present researchers and engineers with a problem: how they work is next to impossible to understand. In an attempt to grok how ANNs function, Google engineers created Deep Dream, an ANN that, when given an image, looks for things it’s already been taught and inserts them into the image. It performs this operation iteratively, and the end results are, to say the least, psychedelic; Google’s geeks have dubbed the effect “Inceptionism” (see the main image above).
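The core of that iterative loop can be sketched in a few lines. This is a toy illustration, not Google's actual code: here a single "neuron" is just a fixed linear feature detector `w`, and each step nudges the image by gradient ascent so that the neuron fires harder — which is, in miniature, how Deep Dream amplifies the patterns a network has already been taught. The array sizes, step size, and random data are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.standard_normal(64)       # the "learned" feature the network looks for
image = rng.standard_normal(64)   # the input image (flattened to a vector)

def activation(x):
    # the neuron's response to the image
    return w @ x

def dream_step(x, lr=0.01):
    # objective: 0.5 * activation(x)**2 ; its gradient w.r.t. x is activation(x) * w
    grad = activation(x) * w
    # gradient *ascent*: change the image to make the neuron fire harder
    return x + lr * grad

before = abs(activation(image))
for _ in range(20):
    image = dream_step(image)
after = abs(activation(image))
# after repeated steps the image contains more of the pattern w than it started with
```

In the real system the feature detectors are deep convolutional layers and the gradient is computed by backpropagation through the whole network, but the loop is the same: pick a layer, measure how strongly it responds, and repeatedly modify the image to strengthen that response.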

[Image: Deep Dream ibis example. Credit: Google]

Many commentators have pointed out that the resulting images have a sort of Hieronymus Bosch feel, but that’s very much a result of what Google’s ANN had been taught, which apparently included lots of dogs (as above) and pagoda-like buildings.