
Scale robust IMU-assisted KLT for stereo visual odometry solution

  • L. Chermak, N. Aouf and M. A. Richardson


We propose a novel IMU-assisted (Inertial Measurement Unit) stereo visual technique that extends the Kanade–Lucas–Tomasi (KLT) tracker to large inter-frame motion. The constrained and coherent inter-frame motion acquired from the IMU is applied to detected features through a homogeneous transform, exploiting 3D geometry and stereoscopic properties. This efficiently predicts the projection of the optical flow into subsequent images. Accurate, adaptive tracking windows limit the search areas, minimizing the number of lost features and preventing the tracking of dynamic objects. This new feature tracking approach is adopted as part of a fast and robust visual odometry algorithm based on the double dogleg trust-region method. Comparisons with gyro-aided KLT and related variants show that our technique maintains minimal feature loss and low computational cost, even on image sequences exhibiting significant scale change. In certain contexts, the visual odometry solution based on this IMU-assisted KLT produces more accurate trajectories than an INS/GPS solution.
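As a rough illustration of the prediction step described above (a minimal sketch; the function name, window heuristic, and parameters are hypothetical, not taken from the paper): given the inter-frame rotation and translation estimated from the IMU, each feature's 3D point, triangulated from the stereo pair, can be transformed and re-projected to predict where the KLT tracker should centre its search window in the next frame.

```python
import numpy as np

def predict_feature_windows(points_3d, R, t, K, base_win=21, min_win=9):
    """Predict feature positions in the next frame from IMU motion.

    points_3d : (N, 3) 3D feature points in the current camera frame
                (e.g. triangulated from the stereo pair).
    R, t      : inter-frame rotation (3, 3) and translation (3,) from the IMU.
    K         : (3, 3) camera intrinsic matrix.

    Returns predicted pixel positions and per-feature window sizes.
    The window heuristic is illustrative only: nearer points move more
    in the image, so they are given larger tracking windows.
    """
    # Rigid-body (homogeneous) transform of each 3D point: X' = R X + t
    transformed = points_3d @ R.T + t
    # Pinhole projection into the next image: u ~ K X'
    proj = transformed @ K.T
    pixels = proj[:, :2] / proj[:, 2:3]
    # Shrink the tracking window for distant (slow-moving) features.
    depth = transformed[:, 2]
    win = np.maximum(min_win,
                     base_win * depth.min() / np.maximum(depth, 1.0)).astype(int)
    return pixels, win
```

The predicted pixel positions would seed the KLT search, and the per-feature windows bound the area in which the tracker may converge, which is how large inter-frame motion can be handled without enlarging the search globally.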





