Robotics

Project Overview

We work with a variety of robotic systems, including wheeled vehicles, flying machines, and legged robots, focusing on mobility and the coupling of perception and control. One of our emphases is on agile autonomy. We aim to enable deployment of autonomous mobile robots in previously unseen and unmapped environments. The robots should traverse these environments with no prior exposure to them, with purely onboard sensing and computation, and with great speed and agility. Much of our work leverages learning-based techniques and develops unified treatments of perception and action. A minimal sketch of this perception-to-action idea follows.
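The following is an illustrative sketch, not the method from any of the publications below: it shows the basic shape of a learned sensorimotor policy that maps an onboard observation vector directly to control commands. The observation and action dimensions, the network size, and all names are hypothetical placeholders.

```python
import jax
import jax.numpy as jnp


def init_policy(key, obs_dim, act_dim, hidden=64):
    """Initialize a small two-layer policy network (illustrative only)."""
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (obs_dim, hidden)) * 0.1,
        "b1": jnp.zeros(hidden),
        "w2": jax.random.normal(k2, (hidden, act_dim)) * 0.1,
        "b2": jnp.zeros(act_dim),
    }


def policy(params, obs):
    """Map an observation vector to normalized control commands in [-1, 1]."""
    h = jnp.tanh(obs @ params["w1"] + params["b1"])
    return jnp.tanh(h @ params["w2"] + params["b2"])


# Example: a hypothetical 32-dimensional observation (e.g. compressed depth
# features plus robot state) mapped to 4 actuator commands.
key = jax.random.PRNGKey(0)
params = init_policy(key, obs_dim=32, act_dim=4)
obs = jnp.zeros(32)
action = policy(params, obs)
print(action.shape)  # (4,)
```

In practice, the parameters of such a policy would be trained, for example with reinforcement learning or imitation learning, and the observation would come from onboard sensors; the sketch only conveys the direct coupling of perception and action.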

Publications

Reaching the limit in autonomous racing: Optimal control versus reinforcement learning

Champion-level drone racing using deep reinforcement learning

Learning robust perceptive locomotion for quadrupedal robots in the wild

Learning high-speed flight in the wild

OpenBot: Turning Smartphones into Robots

Learning Quadrupedal Locomotion over Challenging Terrain

Deep Drone Acrobatics

Trajectory Optimization for Legged Robots With Slipping Motions

Does Computer Vision Matter for Action?

Beauty and the Beast: Optimal Methods Meet Learning for Drone Racing

Learning Agile and Dynamic Motor Skills for Legged Robots

Deep Drone Racing: Learning Agile Flight in Dynamic Environments

Driving Policy Transfer via Modularity and Abstraction

Trajectory Optimization with Implicit Hard Contacts

End-to-end Driving via Conditional Imitation Learning