Asynchronous Multi-View SLAM

¹Uber Advanced Technologies Group, ²University of Toronto, ³University of Waterloo
*Equal Contribution

Abstract

Existing multi-camera SLAM systems assume synchronized shutters for all cameras, which is often not the case in practice. In this work, we propose a generalized multi-camera SLAM formulation which accounts for asynchronous sensor observations. Our framework integrates a continuous-time motion model to relate information across asynchronous multi-frames during tracking, local mapping, and loop closing. For evaluation, we collected AMV-Bench, a challenging new SLAM dataset covering 482 km of driving recorded using our asynchronous multi-camera robotic platform. AMV-Bench is over an order of magnitude larger than previous multi-view HD outdoor SLAM datasets, and covers diverse and challenging motions and environments. Our experiments emphasize the necessity of asynchronous sensor modeling, and show that the use of multiple cameras is critical towards robust and accurate SLAM in challenging outdoor scenes.
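
The continuous-time motion model is the key ingredient: because each camera in an asynchronous multi-frame fires at a different time, every shutter timestamp needs its own pose. The paper's exact parameterization is not reproduced here; the sketch below is a minimal Python illustration of one common choice, linear interpolation between two keyframe poses (geodesic interpolation on SO(3) for rotation, linear interpolation for translation). The function names, the keyframe poses, and the per-camera timestamps are illustrative assumptions, not the paper's implementation.

import numpy as np

def hat(w):
    # Map a 3-vector to its skew-symmetric matrix.
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    # Rodrigues' formula: axis-angle vector -> rotation matrix.
    theta = np.linalg.norm(w)
    if theta < 1e-10:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def so3_log(R):
    # Rotation matrix -> axis-angle vector.
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def interpolate_pose(T0, t0, T1, t1, t):
    # Pose at time t, given keyframe poses T0 at t0 and T1 at t1.
    # Rotation follows the geodesic on SO(3); translation is linear.
    # A simplification: splines or SE(3) interpolation are alternatives.
    alpha = (t - t0) / (t1 - t0)
    R0, p0 = T0[:3, :3], T0[:3, 3]
    R1, p1 = T1[:3, :3], T1[:3, 3]
    R = R0 @ so3_exp(alpha * so3_log(R0.T @ R1))
    p = (1.0 - alpha) * p0 + alpha * p1
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

# Each camera in an asynchronous multi-frame fires at its own time;
# the continuous-time model yields a pose for every shutter timestamp.
shutter_times = [0.000, 0.012, 0.025]   # hypothetical per-camera offsets (s)
T_kf0, T_kf1 = np.eye(4), np.eye(4)
T_kf1[:3, 3] = [1.0, 0.0, 0.0]          # toy keyframe 1: moved 1 m forward
poses = [interpolate_pose(T_kf0, 0.0, T_kf1, 0.1, t) for t in shutter_times]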

Qualitative Results

We evaluate our approach on the proposed AMV-Bench dataset. Below we showcase all 25 qualitative trajectories in the validation set, comparing our asynchronous multi-view SLAM (AMV-SLAM) system using all 7 cameras (red), ORB-SLAM2 using the stereo camera pair (sandy brown), and the ground truth (blue). Please refer to the supplementary PDF for results on the 65 training sequences.
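
For context on how such trajectory comparisons are typically produced: an estimated trajectory lives in its own coordinate frame and is usually rigidly aligned to the ground truth before plotting or computing error metrics. The sketch below is not the paper's evaluation code; it illustrates the standard closed-form Umeyama/Kabsch alignment followed by an absolute trajectory error (ATE) RMSE, with placeholder data and illustrative names.

import numpy as np

def align_umeyama(est, gt):
    # Rigid (rotation + translation) alignment of estimated positions
    # to ground truth via the closed-form Umeyama/Kabsch solution.
    # est, gt: (N, 3) arrays of corresponding xyz positions.
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return R, t

# Align an estimated trajectory to ground truth, then report ATE RMSE.
est = np.random.rand(100, 3)   # placeholder estimated positions
gt = est + 0.01                # placeholder ground-truth positions
R, t = align_umeyama(est, gt)
aligned = est @ R.T + t
ate_rmse = np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean())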

We additionally showcase some qualitative maps produced by our AMV-SLAM system.

Video

BibTeX

@inproceedings{yang2021asynchronous,
  author        = {Yang, Anqi Joyce and Cui, Can and Bârsan, Ioan Andrei and Urtasun, Raquel and Wang, Shenlong},
  title         = {Asynchronous Multi-View {SLAM}},
  booktitle     = {ICRA},
  year          = {2021},
  organization  = {IEEE}
}