Sensorimotor Control and Simulation

Project Overview

We work on sensorimotor control: learning to act based on raw sensory input. Our models and algorithms aim to support flexible operation in complex and dynamic three-dimensional environments. We are inspired by applications such as autonomous driving and household robotics, as well as by scientific curiosity. Much of our work leverages immersive simulation, and we have developed simulation platforms to support research in this field.
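As a minimal illustration of the sensorimotor loop described above — repeatedly mapping a raw observation to an action that changes the environment — here is a toy sketch. All names and the hand-coded proportional policy are our own illustrative choices, not code from any of the systems listed below:

```python
# Toy sensorimotor loop: sense -> act -> environment update.
# Illustrative only; not taken from CARLA or the papers below.

def policy(observation, goal=1.0, gain=0.5):
    """Hand-coded proportional policy: act to reduce the error to the goal."""
    return gain * (goal - observation)

def step(state, action):
    """Trivial 1-D dynamics: the action directly nudges the state."""
    return state + action

state = 0.0
for _ in range(20):
    observation = state          # raw sensory input (here, the state itself)
    action = policy(observation)
    state = step(state, action)

print(round(state, 4))           # the agent converges toward the goal at 1.0
```

In a learned controller the hand-coded `policy` would be replaced by a trained model (for example, a neural network mapping images to steering commands), but the closed sense–act loop has the same shape.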


CARLA: An Open Urban Driving Simulator

Learning to Act by Predicting the Future

Learning Complex Neural Network Policies with Trajectory Optimization

Variational Policy Search via Trajectory Optimization

Guided Policy Search

Continuous Inverse Optimal Control with Locally Optimal Examples

Nonlinear Inverse Reinforcement Learning with Gaussian Processes

Feature Construction for Inverse Reinforcement Learning