WASHINGTON (TIP): Disney researchers have developed a new humanoid robot that can recognize when a person is handing it an object and predict where the hand-off will occur. The robot can receive an object handed to it by a person in a natural way, researchers say. Recognizing that a person is handing something over and predicting where the human plans to make the hand-off is difficult for a robot, but the researchers from Disney Research, Pittsburgh and the Karlsruhe Institute of Technology (KIT) solved the problem by using motion-capture data from pairs of people to create a database of human motion.
By rapidly searching the database, the robot can determine what the human is doing and make a reasonable estimate of where he is likely to extend his hand. People handing a coat, a package or a tool to a robot will become commonplace if robots are introduced into the workplace and the home, said Katsu Yamane, senior research scientist. But the technique he developed could apply to any number of situations where a robot needs to synchronize its motion with that of a human, such as in a dance.
“If a robot just sticks out its hand blindly, or uses motions that look more robotic than human, a person might feel uneasy working with that robot or might question whether it is up to the task,” Yamane said. “We assume human-like motions are more user-friendly because they are familiar,” he added. Human-like motion is often achieved in robots by using motion-capture data from people. But that is usually done in tightly scripted situations, based on a single person’s movements.
For the general passing scenarios envisioned by Yamane, a sampling of motion from at least two people would be necessary, and the robot would have to access that database interactively, so it could adjust its motion as the person handing it a package progressively extended her arm. To enable a robot to access a library of human-to-human passing motions with the speed necessary for robot-human interaction, the researchers developed a hierarchical data structure.
Using principal component analysis, the researchers first developed a rough estimate of the distribution of various motion samples. They then grouped samples of similar poses and organized them into a binary-tree structure. With a series of “either/or” decisions, the robot can rapidly search this database, so it can recognize when the person initiates a handing motion and then refine its response as the person follows through.
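The article does not publish the researchers' code, but the approach it describes can be sketched in broad strokes: project pose samples onto their principal components to get a rough picture of the distribution, recursively split similar samples into a binary tree, then answer a query by descending the tree with either/or decisions and scanning only the small leaf reached. The function names, the median-split rule, and the SVD-based PCA below are illustrative assumptions, not the actual Disney Research implementation.

```python
import numpy as np

def build_tree(samples, min_leaf=4):
    """Recursively split pose samples along their top principal component.

    Hypothetical sketch: each internal node stores the splitting axis, the
    sample mean, and a median threshold, yielding an either/or decision.
    """
    if len(samples) <= min_leaf:
        return {"leaf": samples}
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Top principal component via SVD (a rough estimate of the distribution).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    proj = centered @ axis
    threshold = np.median(proj)
    left = samples[proj <= threshold]
    right = samples[proj > threshold]
    if len(left) == 0 or len(right) == 0:
        return {"leaf": samples}  # degenerate split: stop here
    return {
        "axis": axis,
        "mean": mean,
        "threshold": threshold,
        "left": build_tree(left, min_leaf),
        "right": build_tree(right, min_leaf),
    }

def search(tree, query):
    """Descend the tree with either/or decisions, then scan the small leaf."""
    while "leaf" not in tree:
        side = (query - tree["mean"]) @ tree["axis"]
        tree = tree["left"] if side <= tree["threshold"] else tree["right"]
    leaf = tree["leaf"]
    dists = np.linalg.norm(leaf - query, axis=1)
    return leaf[np.argmin(dists)]

# Usage: index 200 synthetic 6-dimensional "poses" and look one up.
rng = np.random.default_rng(0)
poses = rng.normal(size=(200, 6))
tree = build_tree(poses)
match = search(tree, poses[17])
```

Because each comparison halves the candidate set, a query touches only a logarithmic number of nodes plus one small leaf, which is what makes the database fast enough to re-query as the human's reaching motion unfolds. A greedy descent of this kind can miss the true nearest neighbor near split boundaries; for an early estimate that is refined as the hand-off progresses, an approximate match is typically acceptable.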