Sensorimotor Control and Simulation

Project Overview

We are working on sensorimotor control: learning to act based on raw sensory input. Our models and algorithms aim to support flexible operation in complex and dynamic three-dimensional environments. We are inspired by applications such as autonomous driving and household robotics, and by scientific curiosity. Much of our work leverages immersive simulation, and we have developed simulation platforms that support research in this area.
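The sensorimotor setting described above can be summarized as a loop in which an agent receives raw pixel observations from a simulated environment, selects an action, and receives the next observation and a reward. The sketch below is a minimal, hypothetical illustration of that loop; `ToyEnv`, its dynamics, and the random policy are placeholders invented for this example, not part of any of the platforms listed below.

```python
import numpy as np

class ToyEnv:
    """Hypothetical stand-in for a 3D simulator: observations are raw pixels."""
    def __init__(self, size=16, horizon=10, seed=0):
        self.rng = np.random.default_rng(seed)
        self.size = size
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return self._observe()

    def step(self, action):
        # Placeholder dynamics: a real simulator would render the next frame.
        self.t += 1
        obs = self._observe()
        reward = float(action == 0)      # toy reward signal
        done = self.t >= self.horizon
        return obs, reward, done

    def _observe(self):
        # Raw sensory input: an RGB image as a uint8 array.
        return self.rng.integers(0, 256, (self.size, self.size, 3), dtype=np.uint8)

def policy(obs, n_actions=4, rng=np.random.default_rng(1)):
    # A learned policy would map pixels to an action; here we act at random.
    return int(rng.integers(n_actions))

env = ToyEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    obs, reward, done = env.step(policy(obs))
    total_reward += reward
```

A learning algorithm (e.g. reinforcement learning or imitation learning) replaces the random policy with one trained to maximize reward or mimic expert behavior, while the simulator supplies the observation stream.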


Publications

An Extensible, Data-Oriented Architecture for High-Performance, Many-World Simulation

Habitat 2.0: Training Home Assistants to Rearrange their Habitat

Learning to Drive from a World on Rails

Megaverse: Simulating Embodied Agents at One Million Experiences per Second

Large Batch Simulation for Deep Reinforcement Learning

Rearrangement: A Challenge for Embodied AI

Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning

Learning by Cheating

Habitat: A Platform for Embodied AI Research

Benchmarking Classic and Learned Navigation in Complex 3D Environments

Does Computer Vision Matter for Action?

Assessing Generalization in Deep Reinforcement Learning

On Evaluation of Embodied Navigation Agents

Motion Perception in Reinforcement Learning with Dynamic Objects

Driving Policy Transfer via Modularity and Abstraction

On Offline Evaluation of Vision-based Driving Models

Semi-parametric Topological Memory for Navigation

TD or not TD: Analyzing the Role of Temporal Differencing in Deep Reinforcement Learning

End-to-end Driving via Conditional Imitation Learning

MINOS: Multimodal Indoor Simulator for Navigation in Complex Environments

CARLA: An Open Urban Driving Simulator

Learning to Act by Predicting the Future