
CARLA is an open-source simulator for autonomous driving research, developed to support the development, training, and validation of autonomous driving systems. It provides open digital assets such as urban layouts and vehicles, a flexible API for controlling all aspects of the simulation, and support for diverse sensor configurations (LiDAR, cameras, GPS). Key features include scalability through a multi-client architecture, fast simulation for planning and control (rendering can be disabled), map generation compliant with the ASAM OpenDRIVE standard (e.g. via RoadRunner), and traffic scenario simulation through ScenarioRunner. CARLA integrates with ROS via a ROS bridge and provides baselines such as Autoware and Conditional Imitation Learning agents. The platform is under active development; a recent collaboration with Neya Systems is upgrading CARLA to Unreal Engine 5, bringing enhanced graphics, Nanite virtualized geometry, Lumen global illumination, and MetaHumans for hyper-realistic characters, with physics capability upgrades planned. The project is supported by the Computer Vision Center and the Embodied AI Foundation.

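The fast-simulation mode mentioned above is exposed through the Python API's world settings. Below is a minimal sketch of how a client might disable rendering and switch to a fixed-timestep synchronous loop; it assumes a CARLA server already running on `localhost:2000` (the default), and the small helper function name is our own, not part of the API.

```python
# Sketch: configure CARLA for fast, headless planning/control experiments.
# `fast_sim_settings` is a hypothetical helper; the attributes it sets
# (no_rendering_mode, synchronous_mode, fixed_delta_seconds) are real
# fields of carla.WorldSettings.

def fast_sim_settings(settings):
    """Mutate a carla.WorldSettings-like object for headless, deterministic runs."""
    settings.no_rendering_mode = True    # skip rendering entirely for speed
    settings.synchronous_mode = True     # the client steps the world explicitly
    settings.fixed_delta_seconds = 0.05  # fixed 20 Hz simulation timestep
    return settings

if __name__ == "__main__":
    # Connection boilerplate, following the standard CARLA client/world pattern.
    import carla  # requires the carla package and a running simulator

    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()
    world.apply_settings(fast_sim_settings(world.get_settings()))
    world.tick()  # advance the simulation by one fixed step
```

In synchronous mode the server waits for the client's `world.tick()` call, which keeps sensor data and control commands aligned per step; combined with `no_rendering_mode`, this lets planning and control loops run much faster than real time.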