Which means that what looks like a slightly sad (if persistent) hunk of metal making its way across a hard floor represents something much bigger, actually. New research published on May 28, 2015 in Nature finds that machines can change their behavior to adapt to being broken—they can learn and iterate based on self-reflection. In other words, they can act like animals.
“Animals understand the space of possible behaviors and their value from previous experience,” the researchers Antoine Cully, Jeff Clune, Danesh Tarapore, and Jean-Baptiste Mouret wrote in Nature. “The key insight here is that robots could do the same.”
The authors of the study refer to their work as an “intelligent trial-and-error algorithm,” and they emphasize how different it is from earlier research in the realm of what’s known as reinforcement learning. The way it works: A robot realizes it isn’t moving the way it ought to, so it tests alternative ways of getting where it needs to go based on an extensive database of movements. “The robot does not know exactly that it is broken,” the researchers wrote in a statement about their work. “It only knows that its performance has suddenly dropped. It has no internal sensors to detect whether any of its components are damaged.” (The lack of sensors is key because it means building these systems would be much cheaper, they say.)
As the robot tests alternative movements, it continuously updates its database of options—having used a computer simulation of itself to create a sort of how-to-walk map ahead of time. Researchers call this testing phase a “simulated childhood,” and it’s a little bit like what a baby does when she’s learning how to crawl. Only the robot takes minutes—not weeks or months—to test and determine the movements that will work best.
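The loop the researchers describe can be sketched in miniature. The toy code below is an illustration, not the paper’s actual algorithm (which builds its map with an evolutionary search and refines it with Bayesian optimization): it fakes a pre-computed behavior map, then has the damaged robot repeatedly try its most promising untested gait, measure how it really performs, and stop once a tested gait comes close to the best the map still promises. Every name and number here is invented for illustration.

```python
import random

random.seed(0)

# Toy stand-in for the "simulated childhood" map: each behavior ID maps to
# the walking speed the simulation predicted for it. (The real map holds
# roughly 13,000 high-performing gaits; we fake 300.)
behavior_map = {i: random.uniform(0.1, 1.0) for i in range(300)}

def real_performance(behavior_id, broken_ids):
    """Speed measured on the damaged robot. The robot has no damage
    sensors; gaits that relied on the broken part simply turn out to
    perform far worse than the simulation predicted."""
    predicted = behavior_map[behavior_id]
    return predicted * (0.1 if behavior_id in broken_ids else 0.95)

def adapt(broken_ids, good_enough=0.9):
    """Heavily simplified trial and error: always test the most promising
    untried behavior, then stop once the best measured gait is within
    `good_enough` of anything the map still promises."""
    expected = dict(behavior_map)          # current belief per behavior
    best_id, best_perf, trials = None, 0.0, 0
    while expected:
        candidate = max(expected, key=expected.get)
        measured = real_performance(candidate, broken_ids)
        expected.pop(candidate)
        trials += 1
        if measured > best_perf:
            best_id, best_perf = candidate, measured
        if best_perf >= good_enough * max(expected.values(), default=0.0):
            break
    return best_id, best_perf, trials
```

Break the five gaits the simulation liked best, and the loop still finds a fast working gait within a handful of physical trials—which is the study’s central point: the map turns adaptation from a search over everything into a search over a short list of already-promising candidates.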
This is much faster, the authors say, than previous attempts. And that’s in part because although the robot is sifting through about 13,000 possible movements, they are all options that the robot has already deemed potentially useful. “The space of all possible behaviors that is searched to find these 13,000 high-performing behaviors is unimaginably vast,” they wrote. “In fact, it contains 10^47 possible behaviors, which is about how many atoms make up the planet Earth!”
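That headline number is easy to sanity-check. As an illustration only—the paper describes a 36-parameter gait controller for the hexapod, but the discretization below is an assumption of mine, not theirs—giving each of 36 parameters just 20 possible values already lands in that range:

```python
import math

# Illustrative arithmetic, not the paper's exact encoding: 36 gait
# parameters with 20 discrete values each.
parameters, values_each = 36, 20
space = values_each ** parameters
print(round(math.log10(space)))  # prints 47: on the order of 10^47 gaits
```

Of that vastness, the map keeps only the 13,000 or so gaits that proved fast in simulation—a compression of more than forty orders of magnitude before the damaged robot ever takes its first trial step.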
Researchers experimented with both a hexapod robot and a robotic arm, and they believe their algorithm could be used to enable any kind of robot to adapt to damage and complete a mission. Over the course of hundreds of tests, the six-legged robot was able to adapt to at least six different types of damage—including completely losing two legs—and the robot arm was able to adapt to at least 14 kinds of damage, including having two of its motors broken.
Perhaps all this evokes images of Westworld, or of The Terminator, or at least of an assembly line that never breaks down. A robot that can break and keep going anyway is, after all, a robot that doesn’t need people. Or needs them less than its robot predecessors did, anyway. The potential uses for such machines are incredible to imagine. These things could skitter across Mars, or explore an ocean trench, or crawl over rubble to help search for victims after an earthquake.
Scientists and engineers have been working on perfecting such algorithms for more than a decade. And in 2006, when researchers at Cornell built a robot that could teach itself how to limp, one scientist said the behavior was a form of consciousness. "Whether humans or animals are conscious in a similar way—do we also think in terms of a self-image, and rehearse actions in our head before trying them out—is still an open question," researcher Josh Bongard told the university’s news service at the time. Elsewhere, researchers have designed robots with squishy, self-healing muscles, robotic cubes that can apparently clone themselves, tiny robots that can assemble themselves in the first place, and giant robots that can hurl cinder blocks. The fields of robotics, machine learning, and artificial intelligence are making gains so rapidly it can be hard to keep track.
The work that culminated in Wednesday’s Nature paper began in 2011.
Here’s a story they like to tell that explains why: To build a map of potential movements for the robot, they used a formula that would generate different ways of walking. It was based on the percentage of time the robot’s legs spent touching the ground. So the robot would figure out how to move one way when its legs touched the ground 50 percent of the time, another way when they touched the ground 75 percent of the time, and so on. Naturally, researchers figured the robot wouldn’t do anything when its feet weren’t touching the ground at all.
They were wrong.
“It surprised us!” the researchers wrote. “It flipped over on its back and crawled on its elbows with its feet in the air.”
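The descriptor behind that story—the fraction of time each leg spends on the ground, sometimes called a gait’s “duty factor”—is simple to compute. Here is a minimal sketch, with function and variable names invented for illustration:

```python
def duty_factors(contact_log):
    """Fraction of time each leg touches the ground, given a log of
    per-timestep tuples of booleans (one entry per leg)."""
    steps = len(contact_log)
    legs = len(contact_log[0])
    return tuple(sum(step[leg] for step in contact_log) / steps
                 for leg in range(legs))

def descriptor_cell(factors, bins=4):
    """Coarsen each leg's factor into map cells (0%, 25%, 50%, 75%,
    100%), so each cell keeps only its single fastest gait."""
    return tuple(round(f * bins) / bins for f in factors)

# A classic alternating-tripod gait on six legs: legs 0, 2, 4 are down on
# even timesteps, legs 1, 3, 5 on odd ones, so every leg is down half
# the time and the gait lands in the all-50-percent cell of the map.
tripod = [tuple((t + leg) % 2 == 0 for leg in range(6)) for t in range(100)]
```

The all-zeros cell—legs never touching the ground—is exactly the one the researchers expected to stay empty. The map-building search filled it anyway, with the elbow-crawling gait that surprised them.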
Full text of this article by Adrienne LaFrance in The Atlantic.