
A Study of Sensor-Fusion Mechanism for Mobile Robot Global Localization

  • Yonggang Chen (a1) (a2), Weinan Chen (a1), Lei Zhu (a1), Zerong Su (a3), Xuefeng Zhou (a3), Yisheng Guan (a1) and Guanfeng Liu (a4)

Summary

Estimating the robot state within a known map is an essential problem for mobile robots; it is also referred to as "localization". Although LiDAR-based localization is practical in many applications, it is difficult to achieve global localization with LiDAR alone because of its low-dimensional feedback, especially in environments with repetitive geometric features. This paper introduces a sensor-fusion-based localization system capable of addressing the global localization problem. Both LiDAR and vision sensors are integrated, making use of the rich information provided by the vision sensor and the robustness of LiDAR. A hybrid grid-map is built for global localization, and a visual global descriptor is applied to speed up localization convergence, combined with a pose-refining pipeline that improves localization accuracy. In addition, a trigger mechanism is introduced to handle the kidnapped-robot problem and verify the relocalization result. Experiments under different conditions are designed to evaluate the performance of the proposed approach, together with a comparison against existing localization systems. According to the experimental results, our system is able to solve the global localization problem, and the sensor-fusion mechanism provides improved performance.
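The paper's full pipeline is not reproduced here, but as a rough illustration of the fusion idea in the summary, the sketch below shows a Monte Carlo localization weighting step that blends a LiDAR scan likelihood with a visual global-descriptor similarity, plus a simple kidnapped-robot trigger based on effective sample size. The callback names, the mixing weight alpha, and the trigger threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fused_measurement_update(particles, weights, lidar_likelihood,
                             visual_similarity, alpha=0.7):
    """Re-weight particles with both sensing cues (illustrative sketch).

    `lidar_likelihood(pose)` and `visual_similarity(pose)` are assumed
    callbacks returning scores in [0, 1]; `alpha` balances LiDAR vs. vision.
    """
    for i, pose in enumerate(particles):
        w_lidar = lidar_likelihood(pose)    # e.g. likelihood-field scan model
        w_vision = visual_similarity(pose)  # e.g. global-descriptor match score
        weights[i] *= alpha * w_lidar + (1.0 - alpha) * w_vision
    weights /= weights.sum() + 1e-12        # normalize; guard against all-zero
    return weights

def kidnapped_trigger(weights, ratio=0.2):
    """Flag a possible kidnapping when the effective sample size collapses,
    which would restart global relocalization (threshold is an assumption)."""
    n_eff = 1.0 / np.sum(weights ** 2)
    return n_eff < ratio * len(weights)
```

In such a scheme, the visual cue mainly helps disambiguate geometrically repetitive places where the LiDAR likelihood alone is multi-modal, while the LiDAR term keeps the estimate robust to lighting changes.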


Corresponding author

*Corresponding author. E-mail: weinanchen1991@gmail.com

Footnotes

The first two authors contributed equally to this work.



Supplementary materials

Chen et al. supplementary material: Video (69.9 MB)
