Daily Bulletin


The Conversation

  • Written by The Conversation
Image: One of the psychedelic nightmares generated by Google's Inceptionism system. (Google Research)

You may have seen some of the “nightmarish” images generated by Google’s aptly named Inceptionism project. Here we have freakish fusions of dogs and knights (as in the image above), dumbbells with arms attached (see below) and a menagerie of Hieronymus Bosch-ian creatures:

Image: Coming soon to a nightmare near you. (Google Research)

But these are more than just computerised curiosities. The process that generated these images can actually tell us a great deal about how our own minds process and categorise images – and what it is we have that computers still lack in this regard.

Digging deep

Artificial neural networks, or “deep learning”, have enabled terrific progress in the field of machine learning, particularly in image classification.

Conventional approaches to machine learning relied on top-down, rule-based programming, with explicit stipulation of which features particular objects had. These approaches were typically brittle and error-prone.

An alternative approach uses artificial neural networks, which learn bottom-up from experience. They typically consist of many interconnected information-processing units, or neurons. Each neuron applies a weighted mathematical function to its inputs, which effectively tells it what to look for, whether that be edges, boundaries, textures or shapes.

The neurons send information throughout the network, creating layers of interpretation, eventually arriving at a conclusion about what is in the image.
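The layered flow described above can be sketched as a tiny feedforward network in NumPy. This is a minimal illustration, not Google's actual system: the weights here are random, the layer sizes are arbitrary, and the function name `classify` is invented for the example (a real network would learn its weights from millions of images).

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer: a weighted sum of inputs followed by a ReLU non-linearity."""
    return np.maximum(0.0, w @ x + b)

# A toy 3-layer network: 8 "pixels" -> 16 low-level features -> 8 shapes -> 3 classes
w1, b1 = rng.standard_normal((16, 8)), np.zeros(16)
w2, b2 = rng.standard_normal((8, 16)), np.zeros(8)
w3, b3 = rng.standard_normal((3, 8)), np.zeros(3)

def classify(image):
    h1 = layer(image, w1, b1)  # early layer: simple features (edges, boundaries)
    h2 = layer(h1, w2, b2)     # deeper layer: combinations of features (shapes, parts)
    scores = w3 @ h2 + b3      # final layer: one score per category
    return int(np.argmax(scores))

print(classify(rng.standard_normal(8)))  # index of the winning category
```

Each layer reinterprets the previous layer's output, which is the "layers of interpretation" the text describes.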

Google’s Inceptionism project tested the limits of its neural network’s image recognition capacity. The Google research team trained the network by exposing it to millions of images and adjusting network parameters until the program delivered accurate classifications of the objects they depicted.

Then they turned the system on its head. Instead of feeding in an image – say, a banana – and having the neural network say what it is, they fed in random noise or an unrelated image, and had the network look for bananas. The resulting images are the network’s “answers” to what it has learned.

Image: Starting with random noise, Google's artificial neural network found some bananas. (Google Research)
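Running the network "backwards" like this is usually done by gradient ascent on the input: start from noise, then repeatedly nudge the pixels in the direction that raises the score the network assigns to the target class. The sketch below makes the same move with a single fixed linear "banana detector" standing in for a trained network; `banana_detector` and `dream` are illustrative names, not anything from Google's code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for one trained "banana" neuron: a fixed weight per "pixel".
# (In a real network the update direction comes from backpropagating
# through many layers, not from a single weight vector.)
banana_detector = rng.standard_normal(64)

def dream(steps=200, lr=0.1):
    """Start from faint noise, then nudge the pixels to raise the banana score."""
    x = rng.standard_normal(64) * 0.01   # random noise input
    for _ in range(steps):
        # The score is banana_detector @ x; its gradient with respect to x
        # is just banana_detector, so step in that direction.
        x += lr * banana_detector
        x = np.clip(x, -1.0, 1.0)        # keep "pixels" in a valid range
    return x

img = dream()
print(banana_detector @ img)  # much higher than the score of raw noise
```

After enough steps the input saturates toward whatever pattern excites the detector most, which is exactly why the generated images exaggerate the features the network has learned to associate with a class.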

What it tells us about machine learning

The results of the Inceptionism project aren’t just curiosities. The psychedelic interpretations made by the program suggest the system lacks something unique to information processing in biological systems. For example, the results show that the system is vulnerable to over-generalising features of objects, as in the case of the dumbbell requiring an arm:

Image: Dumbbells often have arms attached, but not like this.

This is similar to believing that cherries only occur atop ice cream sundaes. Because the neural network operates on correlation and probability (most dumbbells are going to be associated with arms), it lacks the capacity to distinguish contingency from necessity in forming stable concepts.

The project also shows that over-reliance on feature detection hampers the network’s ability to judge probable co-occurrence. The result is a tendency towards over-interpretation, similar to how Rorschach inkblots reveal images, or how inmates in Orange is the New Black see faces in toast.

Similarly, Google’s neural network sees creatures in the sky, such as the strange “Camel-Bird” and “Dog-Fish” above. It even picks up oddities within the Google homepage:

Image: More than meets the eye. (Google)

A stable classification mechanism so far eludes deep learning networks. As described by the researchers at Google:

We actually understand surprisingly little of why certain models work and others don’t. […] The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training.

What it tells us about ourselves

The Inceptionism project also tells us a little about how our own neural networks function. In humans, perceptual information about an object, such as its shape, colour and size, is integrated from various inputs and then transformed into a concept of that thing.

For example, a “cherry” is red, round, sweet and edible. And as you discover more things like a cherry, your neural network creates a category for things like cherries, and broader categories to which a cherry belongs, such as “fruit”. Soon, you can picture a cherry without actually being in the presence of one, owing to your grasp of what a cherry is like at the conceptual level.

Conceptual organisation enables us to perceive drawings, photos and symbols of a cloud as referring to the same “cloud” concept, regardless of how much the cloud’s features may suggest the appearance of a Dog-Fish.

Image: Google's artificial neural network discovered all sorts of bizarre creatures lurking in the clouds. (Google Research)

It also enables you to communicate about objects you have never experienced directly, such as unicorns.

Image: You can recognise this as a unicorn even though you've never met one in real life.

One implication of Google’s research is that simulating intelligence requires an organisational component beyond consolidated feature detection. Yet it is still unclear how to replicate this function within deep learning models.

While our experimental artificial neural networks are getting better at image recognition, we don’t yet know how they work – just like we don’t understand how our own brains work. But by continuing to test how artificial neural networks fail, we will learn more about them, and us. And perhaps generate some pretty pictures in the process.

Image: Not all the images generated by Inceptionism are sinister. (Google Research)

Jessica Birkett does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.


Read more http://theconversation.com/what-the-dog-fish-and-camel-bird-can-tell-us-about-how-our-brains-work-43904
