
SLC-VIO: a stereo visual-inertial odometry based on structural lines and points belonging to lines

Published online by Cambridge University Press:  17 January 2022

Chenchen Wei
Affiliation:
College of Mechanical and Vehicle Engineering, Hunan University, 2nd South Lushan Road, 410009, Changsha, China
Yanfeng Tang
Affiliation:
College of Mechanical and Vehicle Engineering, Hunan University, 2nd South Lushan Road, 410009, Changsha, China
Lingfang Yang
Affiliation:
College of Civil Engineering, Hunan University, 2nd South Lushan Road, 410009, Changsha, China
Zhi Huang*
Affiliation:
College of Mechanical and Vehicle Engineering, Hunan University, 2nd South Lushan Road, 410009, Changsha, China
*Corresponding author. E-mail: huangzhi@hnu.edu.cn

Abstract

To improve mobile robot positioning accuracy in built environments and to construct structural three-dimensional (3D) maps, this paper proposes a stereo visual-inertial odometry (VIO) system based on structural lines and on points belonging to those lines. Two-degree-of-freedom (2-DoF) spatial structural lines, defined under the Manhattan world assumption, are used to establish visual measurement constraints. The property of a point belonging to a line (PPBL) is used to initialize the structural lines and to establish spatial distance-residual constraints between point and line landmarks in the reconstructed 3D map. Compared with a general 4-DoF spatial straight line, the 2-DoF structural line reduces the number of variables to be estimated and introduces the orientation information of the scene into the VIO system. Exploiting PPBL allows the proposed system to make full use of the prior geometric information of the environment and thereby achieve better performance. Tests on public datasets and real-world experiments show that the proposed system achieves higher positioning accuracy and constructs 3D maps that better reflect scene structure than existing VIO approaches.
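The two ideas in the abstract can be illustrated concretely: under the Manhattan world assumption a structural line is parallel to one of three dominant axes, so only its 2D offset in the orthogonal plane must be estimated, and the PPBL constraint is then a point-to-line distance residual. The sketch below is a minimal illustration under these stated assumptions; the function names, parametrization, and world-aligned Manhattan frame are hypothetical and do not reproduce the paper's actual formulation.

```python
import numpy as np

# Assumption for illustration: the world frame is aligned with the three
# dominant (Manhattan) directions, so the axes are the identity columns.
AXES = np.eye(3)

def structural_line(params, axis_idx):
    """Recover a 3D structural line from its 2-DoF parameters.

    A structural line is parallel to Manhattan axis `axis_idx`, so it is
    fully described by its 2D intersection with the plane spanned by the
    other two axes (hypothetical parametrization for illustration).
    Returns a point on the line and its unit direction.
    """
    d = AXES[:, axis_idx]                      # line direction (1 of 3 axes)
    basis = np.delete(AXES, axis_idx, axis=1)  # plane orthogonal to the line
    p0 = basis @ params                        # anchor point from 2 DoF
    return p0, d

def ppbl_residual(point, params, axis_idx):
    """Point-belonging-to-line (PPBL) distance residual.

    A 3D point landmark detected on a line segment should lie on the
    reconstructed structural line; the residual is its perpendicular
    distance to that line.
    """
    p0, d = structural_line(params, axis_idx)
    v = point - p0
    return np.linalg.norm(v - (v @ d) * d)     # remove the along-line component

# A point exactly on a z-aligned line at (x, y) = (1.0, 2.0) gives a
# zero residual; any off-line point gives a positive residual.
r_on = ppbl_residual(np.array([1.0, 2.0, 5.0]), np.array([1.0, 2.0]), axis_idx=2)
r_off = ppbl_residual(np.array([1.5, 2.0, 5.0]), np.array([1.0, 2.0]), axis_idx=2)
```

In a full VIO back end, residuals of this form would enter the nonlinear least-squares problem alongside reprojection and IMU preintegration terms, with only two line variables optimized per structural line instead of four.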

Type
Research Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press

