Sensorimotor Control and Simulation

Project Overview

We work on sensorimotor control: learning to act from raw sensory input. Our models and algorithms aim to support flexible operation in complex, dynamic three-dimensional environments. We are motivated by applications such as autonomous driving and household robotics, as well as by scientific curiosity. Much of our work leverages immersive simulation, and we have developed simulation platforms, including CARLA, Habitat, and MINOS, to support this line of research.
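To make the premise concrete, a sensorimotor policy maps raw pixels directly to an action, with no hand-designed intermediate representation. The toy sketch below (illustrative only, not code from any of the projects listed here; the rendering setup, expert, and all names are hypothetical) trains a linear policy by imitation of a scripted expert using plain SGD:

```python
import random

# Toy sensorimotor setup: observations are flattened 8x8 grayscale
# "images" (lists of floats); actions are scalar steering commands.
# A scripted expert steers toward the image center; the learner
# imitates it from raw pixels with a linear policy trained by SGD.

W, H = 8, 8  # image width and height

def render(offset):
    """Render a bright vertical bar at a horizontal offset in [0, W)."""
    img = [[0.0] * W for _ in range(H)]
    for row in img:
        row[offset] = 1.0
    return [px for row in img for px in row]  # flatten to raw pixels

def expert_action(offset):
    """Expert steering command: proportional pull toward the center."""
    return (W // 2 - offset) * 0.1

# Collect demonstrations: (raw pixels, expert action) pairs.
data = [(render(o), expert_action(o)) for o in range(W) for _ in range(10)]

# Linear policy: action = w . pixels, fit by SGD on squared error.
w = [0.0] * (W * H)
lr = 0.05
random.seed(0)
for _ in range(200):
    obs, a = random.choice(data)
    pred = sum(wi * xi for wi, xi in zip(w, obs))
    err = pred - a
    w = [wi - lr * err * xi for wi, xi in zip(w, obs)]

# The learned policy now maps raw pixels straight to a steering command.
obs = render(1)
action = sum(wi * xi for wi, xi in zip(w, obs))
```

The point of the sketch is the interface, not the model: perception and control collapse into a single mapping from pixels to actions, which is the setting the work below studies at scale with deep networks and rich simulators.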

Publications

Learning by Cheating

Habitat: A Platform for Embodied AI Research

Benchmarking Classic and Learned Navigation in Complex 3D Environments

Does Computer Vision Matter for Action?

Assessing Generalization in Deep Reinforcement Learning

On Evaluation of Embodied Navigation Agents

Motion Perception in Reinforcement Learning with Dynamic Objects

Driving Policy Transfer via Modularity and Abstraction

On Offline Evaluation of Vision-based Driving Models

Semi-parametric Topological Memory for Navigation

TD or not TD: Analyzing the Role of Temporal Differencing in Deep Reinforcement Learning

End-to-end Driving via Conditional Imitation Learning

MINOS: Multimodal Indoor Simulator for Navigation in Complex Environments

CARLA: An Open Urban Driving Simulator

Learning to Act by Predicting the Future