Sensorimotor Control and Simulation

Project Overview

We are working on sensorimotor control: learning to act based on raw sensory input. Our models and algorithms aim to support flexible operation in complex, dynamic three-dimensional environments. We are inspired by applications such as autonomous driving and household robotics, as well as by scientific curiosity. Much of our work leverages immersive simulation, and we have developed simulation platforms to support this field.
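To make the setting concrete, here is a minimal sketch of a sensorimotor control loop: an agent repeatedly receives a raw sensory observation (here, a small synthetic image) and maps it directly to an action that changes the world. The environment and the hand-coded reactive policy below are hypothetical stand-ins for illustration only; they do not correspond to any of the platforms or learned policies in the publications listed here.

```python
import numpy as np

class ToyEnv:
    """Hypothetical toy environment: the agent observes a noisy 8x8 'image'
    whose mean brightness encodes the signed distance to a target position,
    and must move its 1-D position toward that target."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.target = self.rng.uniform(-1.0, 1.0)
        self.position = 0.0

    def observe(self):
        # Raw sensory input: pixel intensities centered on the signed error.
        signal = self.target - self.position
        return signal + 0.01 * self.rng.standard_normal((8, 8))

    def step(self, action):
        # Apply a bounded motor command; return the remaining error.
        self.position += float(np.clip(action, -0.1, 0.1))
        return abs(self.target - self.position)

def policy(observation):
    # A reactive policy acting on the mean pixel value of the raw input.
    # (In learned sensorimotor control this mapping would be trained.)
    return 0.1 * np.tanh(observation.mean())

env = ToyEnv()
errors = [env.step(policy(env.observe())) for _ in range(100)]
print(f"final error: {errors[-1]:.4f}")
```

The closed loop of observe, act, and observe again is the core structure shared by the navigation, driving, and policy-search work below; what varies is the richness of the environment and how the observation-to-action mapping is obtained.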


Semi-parametric Topological Memory for Navigation

TD or not TD: Analyzing the Role of Temporal Differencing in Deep Reinforcement Learning

End-to-end Driving via Conditional Imitation Learning

MINOS: Multimodal Indoor Simulator for Navigation in Complex Environments

CARLA: An Open Urban Driving Simulator

Learning to Act by Predicting the Future

Learning Complex Neural Network Policies with Trajectory Optimization

Variational Policy Search via Trajectory Optimization

Guided Policy Search

Continuous Inverse Optimal Control with Locally Optimal Examples

Nonlinear Inverse Reinforcement Learning with Gaussian Processes

Feature Construction for Inverse Reinforcement Learning