A Robot Teaches Itself To Play Jenga. But This Is No Game.
GLOBAL THERMONUCLEAR WAR. The slight possibility that a massive asteroid could boop Earth. Jenga. These are a few of the things that give humans debilitating anxiety.
Robots can’t solve any of these problems for us, but one machine can now brave the angst that is the crumbling tower of wooden blocks: Researchers at MIT reported in Science Robotics that they’ve engineered a robot to teach itself the complex physics of Jenga. This, though, is no game—it’s a big step in the daunting quest to get robots to manipulate objects in the real world.
The process went like this. The researchers equipped an industrial robot arm with a force sensor in its wrist and a two-pronged manipulator, and sat it down in front of a Jenga tower. The robot got its sense of sight from a camera trained on the tower.
But the researchers didn’t teach it how to win against a human. Instead, the researchers asked the robot to do some exploring, probing blocks at random. “It knows what the blocks look like and where they are, but it doesn’t really understand how they interact with each other,” says MIT roboticist Nima Fazeli, lead author on the new paper.
As the robot explored, it discovered that some blocks are looser and require less pressure to move, while others are harder to budge. Like a human Jenga player, the robot has no way of knowing by sight alone what is going to be a good brick to tackle. “You look at the tower and your eyes don’t tell you anything about which piece you should touch,” says MIT mechanical engineer Alberto Rodriguez, coauthor on the paper. “That information comes from probing it—it requires interactive perception.” With both sight and touch, the physics of a Jenga tower become more apparent.
At least that was this robot’s experience. “We found that with about 200 to 300, sometimes 400 pushes, it builds a sufficiently rich model of physics that it can then play the game with,” says Fazeli. So like a human child, the robot learns basic physics not by going to school to get a Ph.D., but through real-world play. (For now, though, it’s only playing against itself.)
In this way, the robot builds a fundamental understanding of the dynamics of Jenga. “So when it sees a new instance of the tower, when it sees a new block, it has a new kind of interaction,” says Fazeli. “It falls back on the model it has and uses that to do predictions about the next action.” It doesn’t need a human to tell it, no, that’s a dumb way of doing things, or yes, you’re on the right track.
This approach is a departure from how other roboticists are tackling the problem of teaching robots how to interact with objects. Researchers at UC Berkeley, for instance, are using something called reinforcement learning, which relies on lots of random movements on the part of the robot and a system of rewards to give it feedback. If the robot moves its arm in some arbitrary way that gets it closer to some predetermined goal, it gets a digital reward, which essentially tells it, “Yes, do that sort of thing again.” With lots of trial and error, over time the robot learns a manipulation task. But it doesn’t have that understanding of physics that the Jenga-playing robot does.
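That reward loop can be sketched in a few lines of Python. This is a toy example, not the Berkeley researchers' actual code: the number-line "world," the goal, and the reward rule here are all invented for illustration of the basic idea.

```python
# A toy illustration of the trial-and-error reward loop behind
# reinforcement learning: an agent on a number line learns that
# stepping toward a goal position earns a digital reward.
import random

def train(goal=5, episodes=200, seed=0):
    rng = random.Random(seed)
    # Value estimates for the two possible moves; the agent starts knowing nothing.
    q = {+1: 0.0, -1: 0.0}
    for _ in range(episodes):
        pos = 0
        while pos != goal:
            # Mostly repeat what has paid off so far; sometimes explore at random.
            explore = rng.random() < 0.2
            action = rng.choice([+1, -1]) if explore else max(q, key=q.get)
            new_pos = pos + action
            # The digital reward: +1 for moving closer to the goal, -1 otherwise.
            reward = 1.0 if abs(goal - new_pos) < abs(goal - pos) else -1.0
            q[action] += 0.1 * (reward - q[action])  # nudge the estimate toward the reward
            pos = new_pos
    return q

q = train()
```

After enough episodes, "step toward the goal" carries a higher learned value than "step away" — the agent is told nothing about physics or geometry, only "yes, do that sort of thing again."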
As this new robot is Jenga-ing, its software compares each experimental prod to previous attempts and evaluates its success. The robot knows what all those attempts looked and felt like, thanks to the camera and the force sensor. So when it starts pushing on a sticky block that looks and feels like a block it couldn’t extract before without the tower twisting or collapsing, it backs off. (If it has to apply more pressure, that indicates it’s working against more friction, which is where its understanding of physics comes in handy.) If it feels and sees a loose block, it continues, because it knows that’s worked before.
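That back-off rule can be caricatured in code. The following is a hypothetical simplification, not code from the MIT paper: the function names, the force threshold, and the crude "similarity" test are all invented to illustrate the shape of the decision, which in the real system runs over a learned model rather than hand-set numbers.

```python
# Hedged sketch of the decision described above: compare a probe's
# feel against remembered attempts, and back off from risky blocks.

FORCE_LIMIT = 2.0  # hypothetical resistance threshold, in newtons

def similar(a, b, tol=0.5):
    """Crude stand-in for look-and-feel similarity: nearby force readings."""
    return abs(a - b) < tol

def decide(force_reading, past_attempts):
    """past_attempts: list of (force, succeeded) pairs from earlier probes."""
    # High resistance means high friction -- a risky block to keep pushing.
    if force_reading > FORCE_LIMIT:
        return "back off"
    # Feels like a block that previously twisted or toppled the tower?
    for past_force, succeeded in past_attempts:
        if similar(force_reading, past_force) and not succeeded:
            return "back off"
    return "keep pushing"  # feels loose, and loose has worked before
```

So a loose-feeling block that resembles past successes gets pushed; anything that feels like a past failure, or resists too hard, is abandoned.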
While playing Jenga may not seem like a mission-critical skill for robots to master, the underlying strategy of combining sight and touch is one that’s common in everyday life. Take brushing your teeth. You can visually understand that you’re scrubbing your front teeth, but you also need to detect that you’re not scrubbing too hard, which is difficult to determine from sight alone. Not that we need robots to be brushing our teeth, but there are a lot of manipulation problems out in the real world that they’ll need to parse by combining both sight and touch. Handling particularly delicate objects, for example.
This Jenga bot is also signaling a shift in how some robots learn. For years roboticists have trained their creations by running their software in simulations, allowing the robots to accrue experience faster than they would in the real world. But that approach has natural limits.
Consider how complicated the physics of a walking robot are, and how difficult that would be to model with perfect precision. “If you wanted to walk on different surfaces, you won’t know the friction, you don’t know the center of mass,” says Caltech AI researcher Anima Anandkumar, who wasn’t involved in this new work. “All these minor details add up rather quickly. That’s what makes it impossible to exactly model these parameters.” Experimenting with Jenga in the real world, on the other hand, skips all that modeling and forces the robot to get a grasp on the physics firsthand.
Which is not to say that working in simulation isn’t useful. Researchers at Elon Musk’s OpenAI lab, for instance, are getting physical robot hands to more seamlessly bridge the gap between what they learn in simulation and the conditions of the physical world. In these early days of robot learning, there’s no one right way to go about things.
As for robots that can beat you at Jenga, don’t hold your breath—they’re still learning the basics here. But at least they’ll have something to keep themselves occupied after that global thermonuclear war of ours.
*This article was written by Matt Simon and was originally published on Wired.com.*