Tuesday 4 September 2012

Robot learns to recognise itself in a mirror

Nico the robot looking at its reflection in the mirror. Image: Justin Hart/ Yale University
Self-awareness is the next big step for thinking robots.
A robot at Yale University in the US has passed an important milestone of self-awareness: by observing its reflection in a mirror, it can work out the perspective from which that reflection is seen and use the information to reason about the real world. It’s not quite up to the standards of Ridley Scott’s replicants or WALL-E, but it’s a notable step in the world of artificial intelligence.
“Self-awareness is a key developmental step on the road to social intelligence,” says Dr Mary-Anne Williams from the University of Technology, Sydney, who was not involved with the study. “Society does not want roboticists to develop robots with cognitive disorders and deficiencies, so robotics researchers are spending more effort developing social skills for robots so that they can interact and collaborate with people more effectively.”
The robot, named Nico, is part of an experiment to see whether a robot can pass a classic test of self-awareness called the ‘mirror test.’ This test usually assesses whether an animal can recognise that a mark it sees on a body in the mirror is on its own body. The alternative is that the animal perceives the mirror as a window onto another room, which is why pets will often make a fuss when they see their reflection, believing it to be a stranger rather than their own image. So far, only dolphins, orcas, elephants, magpies, humans and a few other apes have passed.
Nico has passed the test: it can look in the mirror and see its hand, which it recognises thanks to a visual token attached to it. “The robot watches its arm move in its visual field, and learns about the structure of the arm, how it moves through space, and the relationship between the arm and the visual field,” says PhD student Justin Hart, who conducted the research under the supervision of Professor Brian Scassellati. “It builds an expectation of where it expects to see its hand, visually, in both 3D and 2D—as imaged by the robot’s cameras—after a motion.”
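To give a flavour of what such a forward model involves, here is a minimal Python sketch that predicts where a hand should appear in a camera image after a motion. The two-link arm geometry, base offset and camera intrinsics are illustrative assumptions for this example, not Nico’s actual parameters or the method used in the study.

```python
import numpy as np

# Assumed arm geometry and camera intrinsics (illustrative values only).
ARM_BASE = np.array([0.0, 0.2, 0.5])   # arm base in camera coordinates (metres)
L1, L2 = 0.30, 0.25                    # link lengths (metres)
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0   # pinhole camera intrinsics (pixels)

def hand_position_3d(theta1, theta2):
    """Forward kinematics: 3D hand position in the camera frame for two joint angles."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    z = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return ARM_BASE + np.array([x, 0.0, z])   # arm assumed to sweep in a horizontal plane

def hand_position_2d(point_3d):
    """Pinhole projection: predicted pixel location of the hand in the image."""
    X, Y, Z = point_3d
    return np.array([FX * X / Z + CX, FY * Y / Z + CY])

# After commanding a motion, the robot can compare this prediction with where
# it actually sees the marker on its hand.
p3d = hand_position_3d(np.radians(45), np.radians(30))
print(p3d, hand_position_2d(p3d))
```

In practice the mapping from joint angles to image position would be learned from the robot’s own observations rather than hard-coded, but the idea of “expecting” the hand at a particular 3D and 2D location after a motion is the same.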
From here, it is able to infer the perspective of reflections in the mirror based on watching the motion of its arm. Knowing this perspective allows the robot to accurately estimate where objects are in space based on their reflections. “It is important to distinguish that this is unique from the traditional mirror test, as proposed by Gordon Gallup in 1970,” Hart says. “This test is specifically about the robot being able to use a mirror for spatial reasoning.”
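The spatial reasoning step can be illustrated with a simple geometric sketch: once the mirror’s plane has been estimated, an object’s apparent position “inside” the mirror can be reflected back across that plane to recover its true position. The plane and point values below are made-up examples, not data from Nico’s experiments.

```python
import numpy as np

def reflect_across_plane(point, plane_point, plane_normal):
    """Householder reflection of a 3D point across the plane defined by
    a point on the plane and its normal vector."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return point - 2.0 * np.dot(point - plane_point, n) * n

# A mirror standing 1 m in front of the robot, facing back towards it.
plane_point = np.array([1.0, 0.0, 0.0])
plane_normal = np.array([-1.0, 0.0, 0.0])

# An object appears 1.4 m away, "behind" the mirror surface...
apparent = np.array([1.4, 0.3, 0.2])
true_position = reflect_across_plane(apparent, plane_point, plane_normal)
print(true_position)   # [0.6, 0.3, 0.2] -- the object actually sits 0.6 m away
```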
Hart believes that this work will help bring robots into people’s everyday lives. He has demonstrated that the robot can already adjust its self-model in order to use tools, and similar adaptations could compensate for damage to the robot. For example, a robot could perform tasks unforeseen by the engineers who developed it by learning and reasoning about its own body.
“What makes this exciting, in terms of self-awareness, is that the robot is able to use knowledge that it has learned about itself in order to reason about a thing in its environment—the mirror—in a way that robots really haven’t been able to do before,” Hart says. “I believe that self-awareness is an important part of the picture in artificial intelligence, but it is not the endgame. It is just a step along the way.”