Large-scale open source dataset for autonomous driving.
Overview
Data Collection
Car Setup
Sensor Calibration
LiDAR extrinsics
Camera extrinsics
Camera intrinsic calibration
IMU extrinsics
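Each extrinsic calibration above can be summarized as a rigid transform (a rotation and a translation) that maps points from the sensor's own frame into the vehicle (ego) frame. As a generic illustration only, with placeholder values rather than the dataset's actual calibration, applying such a transform looks like this:

```python
import numpy as np

def to_ego_frame(points, R, t):
    """Apply x_ego = R @ x_sensor + t to an (N, 3) array of points."""
    return points @ R.T + t

# Placeholder extrinsic: identity rotation, sensor mounted 1 m ahead of
# the ego origin. Real calibrations come from the dataset's records.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])

pts = np.array([[0.0, 0.0, 0.0],
                [2.0, 0.0, 0.0]])
ego_pts = to_ego_frame(pts, R, t)
# Both points shift 1 m along x into the ego frame.
```

Chaining two such transforms (e.g. LiDAR-to-ego followed by ego-to-camera) is how points from one sensor are projected into another's frame.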
To achieve cross-modality data alignment between the LiDAR and the cameras, the exposure on each camera was triggered when the top LiDAR swept across the center of that camera's FOV. This method was selected as it generally yields good data alignment. Note that the cameras run at 12Hz while the LiDAR runs at 20Hz.
The 12 camera exposures are spread as evenly as possible across the 20 LiDAR scans, so not all LiDAR scans have a corresponding camera frame.
Reducing the frame rate of the cameras to 12Hz helps to reduce the compute, bandwidth, and storage requirements of the perception system.
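The 12-to-20 pairing described above can be sketched by matching each camera exposure to its nearest LiDAR sweep by timestamp. The timestamps below are synthetic placeholders, not values from the dataset:

```python
def nearest_sweep(cam_ts, lidar_ts):
    """Return the index of the LiDAR sweep closest in time to cam_ts."""
    return min(range(len(lidar_ts)), key=lambda i: abs(lidar_ts[i] - cam_ts))

# One second of synthetic timestamps: 20 LiDAR sweeps, 12 camera exposures.
lidar_ts = [i / 20.0 for i in range(20)]
cam_ts = [i / 12.0 for i in range(12)]

pairs = {cam: nearest_sweep(t, lidar_ts) for cam, t in enumerate(cam_ts)}
matched_sweeps = set(pairs.values())
# 12 of the 20 sweeps receive a camera frame; the other 8 go unmatched.
```

This makes the consequence of the mismatched rates concrete: with evenly spread exposures, 8 of every 20 sweeps have no corresponding image.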
Data Annotation
vehicle.car
human.pedestrian.adult
movable_object.barrier
movable_object.trafficcone
vehicle.truck
vehicle.trailer
vehicle.construction
vehicle.bus.rigid
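The class names above follow a dotted hierarchy (`vehicle.*`, `human.*`, `movable_object.*`), which makes it easy to group annotations at any level. A minimal sketch, using invented record dicts rather than real dataset records:

```python
from collections import Counter

# Hypothetical annotation records; in the real dataset, each
# sample_annotation carries a category name like the ones listed above.
annotations = [
    {"category_name": "vehicle.car"},
    {"category_name": "vehicle.truck"},
    {"category_name": "human.pedestrian.adult"},
    {"category_name": "movable_object.trafficcone"},
]

# Tally by the top-level segment of the dotted category name.
by_top_level = Counter(a["category_name"].split(".")[0] for a in annotations)
# → Counter({'vehicle': 2, 'human': 1, 'movable_object': 1})
```

The same `split(".")` trick works at deeper levels, e.g. grouping all `vehicle.bus.*` subclasses together.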
Ready to get started with nuScenes?
This tutorial will give you an overview of the dataset without the need to download it.
Please note that this page is a rendered version of a Jupyter Notebook.
Take the Tutorial