Autonomous Robotic Manipulation
The Autonomous Robotic Manipulation Software (ARM-S) team developed software that autonomously performs complex manipulation tasks. The team competed in a DARPA program to build an autonomous robotic manipulator that serves as human eyes, arms and hands. The goal was manipulation software that carries out high-level tasks, interacts intelligently with its surroundings, adapts to real-world environments and requires little supervision.
Robotic manipulators have a number of problems that limit their widespread use:
- They have poor force and touch feedback.
- Restrictions on camera field-of-view, perspective, and communications bandwidth make non line-of-sight teleoperation a challenge.
- They are difficult to control and demand the operator’s complete attention, making ordinary tasks like opening doors and picking up objects unnecessarily slow, tedious and error-prone.
The team at NREC developed manipulation software with the ability to follow high-level commands like “pick up a briefcase” or “open a door.” The robot uses its sensors to detect the item (or items) in question, then intelligently carries out the task.
The ARM-S software runs on Andy, CMU’s ARM robot, and controls its every move. Andy can grasp and manipulate ordinary objects with its versatile manipulator arm and three-fingered hand (or end effector). The robot constantly surveys the scene with its vision sensors (stereo camera and range finder) to understand what objects are within reach and how to grab them. It can identify components that need to be assembled (e.g. placing a battery into a drill) and use two hands for complex tasks on heavier objects (e.g. removing a wheel from a vehicle).
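The behavior described above follows a classic sense-plan-act pattern: perceive the scene, match detections against the commanded object, then plan and execute a grasp. The sketch below is a minimal, hypothetical illustration of that loop; all names (`perceive`, `plan_grasp`, `execute`) and the string-based plan output are invented for illustration and are not the actual ARM-S software, which fuses stereo-camera and range-finder data into full 3D models.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    name: str
    position: tuple  # (x, y, z) in the robot's frame, meters

def perceive(scene):
    """Stub perception step: a real system would fuse stereo camera
    and range-finder data; here we just return labeled detections."""
    return [DetectedObject(name, pos) for name, pos in scene]

def plan_grasp(target_name, detections):
    """Pick the detection matching the commanded object, if any."""
    for obj in detections:
        if obj.name == target_name:
            return f"grasp {obj.name} at {obj.position}"
    return None

def execute(command, scene):
    """Sense-plan-act loop for a high-level command like 'briefcase'."""
    detections = perceive(scene)
    plan = plan_grasp(command, detections)
    return plan if plan else "object not found; rescan scene"

scene = [("briefcase", (0.4, 0.1, 0.0)), ("drill", (0.2, -0.3, 0.0))]
print(execute("briefcase", scene))
```

In a real manipulator, the "rescan" branch matters as much as the success path: constantly re-surveying the scene is what lets the robot adapt when objects move or detections fail.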