2025 Volume 61 Issue 12 Pages 524-529
This paper introduces a flexible multi-sensor fusion SLAM framework that integrates data from LiDAR, cameras, IMUs, and wheel odometry. The framework is adaptable and does not rely on a specific sensor configuration, automatically executing optimal tightly-coupled sensor fusion when camera, IMU, and wheel-odometry data are available. As a key component, a LiDAR-Visual Temporal Alignment (LVTA) method is proposed to synchronize asynchronous LiDAR and camera data. With the synchronized sensor inputs, joint optimization is employed to deliver high-accuracy SLAM results. Experimental results demonstrate that our SLAM framework achieves state-of-the-art accuracy and operates in real time on low-cost embedded systems.
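The abstract does not describe the internals of LVTA, so the sketch below is only an illustration of one common way to pair asynchronous LiDAR and camera streams: interpolating camera-rate poses to each LiDAR scan timestamp (linear interpolation for translation, SLERP for rotation). All function and variable names are hypothetical and are not taken from the paper.

```python
# Illustrative sketch (not the paper's LVTA algorithm): interpolate camera
# poses to asynchronous LiDAR scan timestamps so both sensors can be fused
# in a common time base.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp


def align_lidar_to_camera(lidar_stamps, cam_stamps, cam_positions, cam_quats_xyzw):
    """Interpolate camera poses at the (asynchronous) LiDAR timestamps.

    lidar_stamps:    (N,) LiDAR scan times [s]
    cam_stamps:      (M,) camera frame times [s], strictly increasing
    cam_positions:   (M, 3) camera translations
    cam_quats_xyzw:  (M, 4) camera orientations as quaternions (x, y, z, w)
    """
    # Keep only LiDAR stamps that fall inside the camera timeline,
    # so we interpolate rather than extrapolate.
    mask = (lidar_stamps >= cam_stamps[0]) & (lidar_stamps <= cam_stamps[-1])
    query = lidar_stamps[mask]

    # Rotation: spherical linear interpolation between camera orientations.
    slerp = Slerp(cam_stamps, Rotation.from_quat(cam_quats_xyzw))
    rot_at_lidar = slerp(query)

    # Translation: per-axis linear interpolation.
    pos_at_lidar = np.stack(
        [np.interp(query, cam_stamps, cam_positions[:, i]) for i in range(3)],
        axis=1,
    )
    return query, pos_at_lidar, rot_at_lidar
```

Once every LiDAR scan has an interpolated camera pose at its own timestamp, the two modalities can enter the joint optimization with consistent time references; the actual LVTA method and its accuracy characteristics are described in the paper itself.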