
Stereo visual odometry with velocity constraint for ground vehicle applications

Published online by Cambridge University Press:  30 March 2021

Fei Liu*
Affiliation:
Department of Civil Engineering, University of Calgary, Calgary, Alberta, Canada.
Yashar Balazadegan Sarvrood
Affiliation:
Department of Geomatics Engineering, University of Calgary, Calgary, Alberta, Canada.
Yue Liu
Affiliation:
Department of Automation, Harbin Engineering University, Harbin, Heilongjiang, China
Yang Gao
Affiliation:
Department of Geomatics Engineering, University of Calgary, Calgary, Alberta, Canada.
*Corresponding author. E-mail: liu19@ucalgary.ca

Abstract

This paper proposes a novel method of error mitigation for stereo visual odometry (VO) applied in land vehicles. A non-holonomic constraint (NHC), which imposes a physical constraint on the rightward velocity of a land vehicle, is implemented as an observation in an extended Kalman filter (EKF) to reduce the drift of stereo VO. The EKF state vector includes position errors in an Earth-centred, Earth-fixed (ECEF) frame, velocity errors in the camera frame, angular rate errors and attitude errors. All the related equations are described and presented in detail. In this approach, no additional sensors are used; instead, the NHC, namely the velocity constraint in the rightward direction, is applied as an external measurement to improve the accuracy. Tests are conducted with the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) datasets. Results show that the relative horizontal positioning error improved from 0⋅63% to 0⋅22% on average with the application of the velocity constraints. The maximum and root mean square of the horizontal error with velocity constraints are both reduced to less than half of the error with stand-alone stereo VO.
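The core idea of the abstract — feeding the NHC into the EKF as a zero-valued pseudo-measurement of the rightward velocity error — can be sketched as a standard Kalman measurement update. The sketch below is illustrative only: the state layout (position, velocity and attitude error blocks), the index of the lateral-velocity component and the noise variance are assumptions, not the paper's actual implementation.

```python
import numpy as np

def nhc_update(x, P, lat_idx=4, r_var=0.01):
    """EKF measurement update using the non-holonomic constraint (NHC):
    the vehicle's rightward (lateral) velocity is observed as zero.

    x       : (n,) error-state vector (position, velocity, attitude errors)
    P       : (n, n) error covariance
    lat_idx : index of the rightward-velocity error in x (assumed layout)
    r_var   : variance of the zero-velocity pseudo-measurement
    """
    n = x.size
    H = np.zeros((1, n))
    H[0, lat_idx] = 1.0              # observe only the lateral velocity error
    z = np.array([0.0])              # NHC pseudo-measurement: v_right = 0
    S = H @ P @ H.T + r_var          # innovation covariance (1x1)
    K = P @ H.T / S                  # Kalman gain, (n, 1)
    x_new = x + (K @ (z - H @ x)).ravel()
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new

# Example: a 9-state filter with a spurious 0.5 m/s rightward velocity error
x0 = np.zeros(9)
x0[4] = 0.5
P0 = np.eye(9) * 0.1
x1, P1 = nhc_update(x0, P0)
```

After the update, the lateral velocity error and its variance both shrink towards zero, which is how the constraint bounds the VO drift between image epochs without any extra sensor.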

Type
Research Article
Copyright
Copyright © The Royal Institute of Navigation 2021

