Robotic Learning Locomotion

The goal of the Learning Locomotion program is to develop a new generation of learning algorithms that enable unmanned vehicles to traverse large, irregular obstacles. By learning from experience, these algorithms will allow robots to function in more variable and unexpected terrain than hard-coded motion ever could.

DARPA has selected six university research teams, including ones at MIT and Stanford, to compete to develop the best algorithms for controlling the robot puppy. The agency hopes this will help identify the best adaptive strategy for moving over irregular surfaces.

From DARPA's website:

"Large, irregular obstacles such as urban rubble, rock fields, and fallen logs present minor challenges to dismounted forces, slowing but not stopping them. These Slow-Go areas for dismounted forces are No-Go areas for today?s small unmanned vehicles, limiting their effectiveness on the battlefield. Enabling future unmanned vehicles to traverse large, irregular obstacles will allow robots to better contribute to military operations.

Locomotion over extreme terrain requires deliberately planned, precisely coordinated movements. Like a hiker traversing a boulder field, the unmanned vehicle in extreme terrain will succeed not by flailing, but by meticulously sequencing its motions. The complexity of the planning and the required sensorimotor coordination presents significant challenges for the design and implementation of control systems. Handcrafting the control laws and parameters may not even be possible with reasonable effort.

Automatic learning offers a promising alternative. In the Learning Locomotion program, algorithms will be created that learn how to locomote based on the experience of a legged platform confronting extreme terrain. It is expected that the performance of these algorithms will far exceed the performance of handcrafted systems, creating a breakthrough in locomotion over extreme terrain. Further, it is expected that these algorithms will be broadly applicable to the class of 'agile' ground vehicles.

6 Performer teams will receive a locomotion platform called Little Dog, with 4 legs, 3 actuators per leg, and a total weight less than 7 pounds. They will also be given a board with built in terrain features acting as obstacles of varying size and spacing.

On an approximately monthly basis, beginning approximately three months into the period of performance, performer teams will upload their software to a central facility. There, independent tests will be conducted by downloading the code into a functionally identical Little Dog, and running it on a terrain board that is statistically equivalent but not physically identical to the teams' terrain boards. Performance will be measured by travel speed, and by the size of the largest obstacle traversable.

Phase I is a 15-month effort to develop learning methods that control autonomous travel over extreme terrain. The goal is for the system to travel about 0.6 in/sec and scale obstacles 2.5 in tall. Phase II will be an 18-month effort in which the desired speed will increase to 3.8 in/sec and the max obstacle height will become 5.7 in. These speeds and heights were determined by the leg length of the vehicle."
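To put the two phases' targets side by side, the short Python sketch below tabulates them and computes the relative increases. The speed and height figures come straight from the quoted program description; the names and structure are purely illustrative.

    # Phase targets as quoted from the DARPA Learning Locomotion description.
    # Units: inches and inches per second, as given in the source text.
    PHASES = {
        "Phase I (15 months)":  {"speed_in_per_s": 0.6, "obstacle_in": 2.5},
        "Phase II (18 months)": {"speed_in_per_s": 3.8, "obstacle_in": 5.7},
    }

    p1 = PHASES["Phase I (15 months)"]
    p2 = PHASES["Phase II (18 months)"]
    print("Speed target rises %.1fx (%.1f -> %.1f in/s)" % (
        p2["speed_in_per_s"] / p1["speed_in_per_s"],
        p1["speed_in_per_s"], p2["speed_in_per_s"]))
    print("Obstacle target rises %.1fx (%.1f -> %.1f in)" % (
        p2["obstacle_in"] / p1["obstacle_in"],
        p1["obstacle_in"], p2["obstacle_in"]))

Run as-is, it reports a roughly 6.3x jump in the required speed and a 2.3x jump in obstacle height between the two phases.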

LittleDog is a timid-looking four-legged robot about the size of a Chihuahua. Yet its small size makes it ideal to work with: 'mountains' for it to climb over can be built with ease, which could not be done so easily for a larger model.

The robot has three motorized joints on each leg, and its movements are precisely controlled by an on-board computer. An internal gyroscope lets the robot sense its orientation, while an external motion-capture system monitors the precise position of each limb and joint as it moves.
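Three actuators per leg means each foot's position is fully determined by three joint angles. The Python sketch below shows the kind of forward-kinematics calculation the on-board computer has to perform; the joint convention and link lengths here are assumptions for illustration, not LittleDog's real geometry.

    import math

    # Hypothetical link lengths in inches -- placeholders, not real dimensions.
    UPPER_LINK = 3.0
    LOWER_LINK = 3.0

    def foot_position(hip_abduction, hip_flexion, knee_flexion):
        # Assumed convention: the hip abducts the whole leg sideways, then the
        # hip and knee flex within the resulting sagittal plane. Angles in
        # radians; returns (x forward, y sideways, z down) in the hip frame.
        x = (UPPER_LINK * math.sin(hip_flexion)
             + LOWER_LINK * math.sin(hip_flexion + knee_flexion))
        z = (UPPER_LINK * math.cos(hip_flexion)
             + LOWER_LINK * math.cos(hip_flexion + knee_flexion))
        y = z * math.sin(hip_abduction)  # abduction tips the leg plane sideways
        z = z * math.cos(hip_abduction)
        return (x, y, z)

    # Example: leg hanging nearly straight down with a slight knee bend.
    print(foot_position(0.0, 0.1, -0.2))

The motion-capture system effectively provides the check on this mapping: it observes where each limb actually ends up, so a learning algorithm can compare commanded and achieved footholds.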

The video below shows one of the LittleDogs in action, this one from Carnegie Mellon University. It is hard to watch the first half of this video and not feel sorry for the little critter. But, like any baby, it has to learn to crawl before it can learn to walk. The robot in the second half of the video is the same one; it taught itself (a toy sketch of this learn-by-trial idea follows the video).


LittleDog
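What 'teaching itself' can mean, in the simplest possible terms, is trial and error over the parameters of a gait: try a variation, keep it if the robot gets further, discard it if not. The Python toy below illustrates that idea and nothing more; it is not the algorithm the CMU team (or any other team) actually used, and the scoring function is a dummy stand-in for running a real trial.

    import random

    # Arbitrary 'best' gait parameters for the dummy score below.
    OPTIMUM = [0.3, 0.7, 0.5]

    def evaluate(params):
        # Stand-in for a real trial: in practice this would run the candidate
        # gait on the robot (or in simulation) and return distance travelled.
        return -sum((p - o) ** 2 for p, o in zip(params, OPTIMUM))

    def hill_climb(params, trials=200, step=0.05):
        # Keep random perturbations that improve the score; discard the rest.
        best = evaluate(params)
        for _ in range(trials):
            candidate = [p + random.uniform(-step, step) for p in params]
            score = evaluate(candidate)
            if score > best:
                params, best = candidate, score
        return params

    print(hill_climb([0.0, 0.0, 0.0]))

The real systems are far more sophisticated than this, but the keep-what-works loop is the essence of learning from experience.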

This little bundle of metal and plastic, which tries so hard, is the first stage in a DARPA project to create sophisticated robotic assistants for military personnel, including automated "pack-mules" capable of hauling heavy loads over tough terrain.

Boston Dynamics has previously demonstrated a much larger four-legged robot called BigDog. Internal sensors and motors allow this robot to rapidly regain its balance after slipping or being pushed, but BigDog is unable to tackle the kind of irregular terrain faced by LittleDog. It is also incredibly loud, requiring hefty motors due to its sheer weight. The hope, however, is to use the algorithms developed on LittleDog to teach BigDog new tricks. A bare-bones sketch of the balance-recovery idea follows the video below.


BigDog
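Regaining balance after a shove is, at its core, a feedback problem: sense how far and how fast the body is tipping, and command a correction proportional to both. The Python sketch below shows a bare PD (proportional-derivative) law of that kind; the gains are made up, and Boston Dynamics' actual controllers are not public, so treat this purely as an illustration of the principle.

    KP, KD = 40.0, 4.0  # made-up gains; real values depend on the machine

    def balance_torque(angle, angular_rate, target=0.0):
        # Push back harder the further the body has tipped (proportional term)
        # and damp the motion the faster it is tipping (derivative term).
        return KP * (target - angle) - KD * angular_rate

    # Example: body tipped 0.2 rad forward, still rotating at 0.5 rad/s.
    print(balance_torque(0.2, 0.5))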

Teams Involved:

Boston Dynamics (Carnegie Mellon University)

Institute for Human and Machine Cognition (IHMC)

MIT

Stanford

University of Manchester

University of Zurich

References

DARPA Learning Locomotion
http://www.arpa.mil/ipto/programs/ll/index.htm

Boston Dynamics: LittleDog
http://www.bostondynamics.com/content/sec.php?section=LittleDog

LittleDog Learning Locomotion Project: MIT
http://publications.csail.mit.edu/abstracts/abstracts06/littledog/littledog.html
