Monday, June 22, 2015

The “dreams” of Google’s AI are equal parts amazing and disturbing - Quartz







American sci-fi novelist Philip K. Dick once famously asked, Do Androids Dream of Electric Sheep? While he was on the right track, the answer appears to be, no, they don’t. They dream of dog-headed knights atop horses, of camel-birds and pig-snails, and of Dali-esque mutated landscapes.


Google’s image-recognition software, which can detect, analyze, and even auto-caption images, uses artificial neural networks loosely modeled on the human brain. In a process they’re calling “inceptionism,” Google engineers set out to see what these artificial networks “dream” of—what, if anything, do they see in a nondescript image of clouds, for instance? What does a fake brain that’s trained to detect images of dogs see when it’s shown a picture of a knight?


Google trains the software by feeding it millions of images, eventually teaching it to recognize specific objects within a picture. When it’s fed an image, it is asked to emphasize the object in the image that it recognizes. The network is made up of layers—the higher the layer, the more precise the interpretation. Eventually, in the final output layer, the network makes a “decision” as to what’s in the image.
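To make that concrete, here is a minimal sketch of such a classification pass in PyTorch. The pretrained GoogLeNet from torchvision is only a stand-in for Google's internal network, and the input filename is hypothetical:

import torch
from torchvision import models, transforms
from PIL import Image

# A pretrained Inception-style classifier stands in for Google's network.
model = models.googlenet(pretrained=True).eval()

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

x = preprocess(Image.open("knight.jpg")).unsqueeze(0)  # hypothetical photo
with torch.no_grad():
    logits = model(x)               # activations of the final output layer
print(logits.argmax(dim=1))         # the network's "decision" about the image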
But the networks aren’t restricted to only identifying images. Their training allows them to generate images as well. Here’s what the software produced when it was asked to create images of a few specific objects:


[Image: Google]
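Under the hood, that kind of generation is typically gradient ascent on a class score: start from random noise and nudge the pixels until the network rates the chosen class more and more likely. A hedged sketch along those lines, again using torchvision's GoogLeNet as a stand-in (class index 954, ImageNet's "banana," is an arbitrary example):

import torch
from torchvision import models

model = models.googlenet(pretrained=True).eval()

# Start from random noise and optimize the pixels themselves.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    score = model(x)[0, 954]   # how strongly the net "sees" a banana
    (-score).backward()        # minimizing the negative ascends the score
    optimizer.step()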
Cool, right? And it gets a lot more interesting. Google engineers decided that instead of asking the software to generate a specific image, they would simply feed it an arbitrary image and then ask it what it saw. Here’s how Google describes the experiment:


We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.
When an image was fed into the first layer, the network created something akin to a familiar photo filter:


[Images: Google]
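Mechanically, "pick a layer and ask the network to enhance whatever it detected" can be sketched with a forward hook: capture the chosen layer's activations, then take a gradient step on the image that increases them. The module names below (conv2, inception4c) exist in torchvision's GoogLeNet but are illustrative choices, not Google's:

import torch
from torchvision import models

model = models.googlenet(pretrained=True).eval()
feats = {}

# Capture activations of the chosen layer. An early layer such as
# model.conv2 yields the stroke- and ornament-like patterns; a later
# one like model.inception4c responds to more complex features.
model.inception4c.register_forward_hook(
    lambda mod, inp, out: feats.update(feat=out))

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo
model(x)
feats["feat"].norm().backward()       # "enhance whatever it detected"
with torch.no_grad():
    x += 0.02 * x.grad / (x.grad.abs().mean() + 1e-8)  # one ascent step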
Then things got really weird. Google started feeding images into the highest layer—the one that can detect whole objects within an image—and asked the network, “Whatever you see there, I want more of it!”


This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.
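That feedback loop is short to express in code: run the modified image through the network again and again, each pass boosting whatever the high layer responds to. A self-contained sketch, with the layer, step size, and iteration count as illustrative guesses rather than Google's settings:

import torch
from torchvision import models

model = models.googlenet(pretrained=True).eval()
feats = {}
model.inception4c.register_forward_hook(   # a high, object-level layer
    lambda mod, inp, out: feats.update(feat=out))

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a cloud photo

for _ in range(20):                        # each pass amplifies the pattern
    if x.grad is not None:
        x.grad.zero_()
    model(x)
    feats["feat"].norm().backward()        # "whatever you see, more of it!"
    with torch.no_grad():
        x += 0.02 * x.grad / (x.grad.abs().mean() + 1e-8)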
The result is somewhat akin to looking into the subconscious of an AI. When an image of clouds was fed to a network trained to identify animals, this is what happened:


[Image: Google]
Here are some close-ups of details from the second image:


[Image: Google]
Show an artificial neural network a normal, cloudy sky, and it’ll tell you there are dog-fish and pig-snails floating around out there. It’s what one imagines an AI might see on the computing equivalent of an acid trip.


Not only does “inceptionism” teach Google a lot more about artificial neural networks and how they operate, but it also reveals some interesting new applications for the technology. As the Google engineers put it, the process “makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general.”


Below are some more images the networks created in their feedback loops (in addition to the one at the top of this story). You can see the entire gallery here.





