Asynchronous Multi-View SLAM

1Uber Advanced Technologies Group, 2University of Toronto, 3University of Waterloo
*Equal Contribution

Updates (Nov 2023)

The AMV-Bench dataset is now released! This is a seven-camera SLAM dataset consisting of 116 sequences spanning 482 km and 21 hours of driving with diverse and challenging weather conditions, environments, and motions. For instructions on using the dataset, please go to the dataset section below. For more details about the dataset, please refer to the paper and supplementary materials. The dataset is curated from the Pit30M dataset and shares the same non-commercial, academic-only dataset license. We would like to thank everyone involved in the dataset release process for making this possible.

Abstract

Existing multi-camera SLAM systems assume synchronized shutters for all cameras, which is often not the case in practice. In this work, we propose a generalized multi-camera SLAM formulation which accounts for asynchronous sensor observations. Our framework integrates a continuous-time motion model to relate information across asynchronous multi-frames during tracking, local mapping, and loop closing. For evaluation, we collected AMV-Bench, a challenging new SLAM dataset covering 482 km of driving recorded using our asynchronous multi-camera robotic platform. AMV-Bench is over an order of magnitude larger than previous multi-view HD outdoor SLAM datasets, and covers diverse and challenging motions and environments. Our experiments emphasize the necessity of asynchronous sensor modeling, and show that the use of multiple cameras is critical towards robust and accurate SLAM in challenging outdoor scenes.

Qualitative Results

We evaluate our approach on the proposed AMV-Bench dataset. We showcase all 25 qualitative trajectories in the validation set, comparing our asynchronous multi-view SLAM (AMV-SLAM) system using all 7 cameras (red), ORB-SLAM2 using the stereo cameras (sandy brown), and the ground truth (blue). Please refer to the supplementary PDF for more results on the 65 training-set sequences.

We additionally showcase some qualitative maps produced by our AMV-SLAM system.

Video (ICRA talk)

Video (short)

The AMV-Bench Dataset

The AMV-Bench dataset is hosted alongside the Pit30M dataset on AWS S3. After installing the AWS command line interface (CLI) tool, you can list the dataset contents with the command aws s3 ls --no-sign-request s3://pit30m/amv-slam/. Under the v0.2.0 sub-folder, three folders (train_halfrez, val_halfrez, and test_halfrez) store the sequences of the train/val/test split, respectively. You can download the dataset with the aws s3 cp --recursive --no-sign-request s3://pit30m/amv-slam/ /your/local/drive command. The dataset size is around 4.2 TB.
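If you prefer programmatic access over the CLI, the same unsigned-request pattern also works from Python via boto3. The sketch below is only illustrative: the prefix mirrors the v0.2.0 layout described above, the object key in the commented download call is hypothetical, and you may need to pass region_name to boto3.client depending on your environment.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) S3 client, mirroring the CLI's --no-sign-request flag.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List the split folders under v0.2.0 (train_halfrez, val_halfrez, test_halfrez).
resp = s3.list_objects_v2(Bucket="pit30m", Prefix="amv-slam/v0.2.0/", Delimiter="/")
for prefix in resp.get("CommonPrefixes", []):
    print(prefix["Prefix"])

# Download a single file; the object key below is hypothetical and only shows the pattern.
# s3.download_file(
#     "pit30m",
#     "amv-slam/v0.2.0/val_halfrez/<sequence>/poses_and_times.json",
#     "/your/local/drive/poses_and_times.json",
# )
```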

Each sequence folder (e.g., "hwy_no_rain_0") follows the same file structure.

  • The "all_camera_images" folder stores images from each of the seven cameras. Each camera image is provided at half resolution, with 960 x 600 pixels.
  • The "all_camera_info" folder stores the timestamps and calibration information for each camera. The "all_times.csv" file specifies the camera timestamps in each multi-frame. The camera_name_info.json file stores calibration information (extrinsics, intrinsics, and distortion coefficients) of each camera. The camera_times.txt specifies the camera timestamps associated with each image.
  • The "poses_and_times.json" file contains the ground-truth vehicle poses and times. It follows the format of the TUM dataset where each line is 'timestamp tx ty tz qx qy qz qw' representing the timestamp, position and quaternion of the vehicle pose.
  • The other files ("dso_mono_calib.txt", "orb_stereo_calib.yaml", etc.) are additional calibration files we created in the DSO-SLAM and ORB-SLAM format.

To help you better understand the dataset format, we provide a tutorial Python notebook here. If you'd like to run or play with it, please make sure to make a copy first. Feel free to try out the dataset, and contact Joyce and Andrei (emails in the paper PDF) if you have any questions.

BibTeX

@inproceedings{yang2021asynchronous,
  author        = {Yang, Anqi Joyce and Cui, Can and Bârsan, Ioan Andrei and Urtasun, Raquel and Wang, Shenlong},
  title         = {Asynchronous Multi-View {SLAM}},
  booktitle     = {ICRA},
  year          = {2021},
  organization  = {IEEE}
}