
A virtual environment for evaluation of computer vision algorithms under general airborne camera imperfections

Published online by Cambridge University Press: 23 March 2021

Arshiya Mahmoudi
Affiliation:
Department of Aerospace Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran.
Mehdi Sabzehparvar*
Affiliation:
Department of Aerospace Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran.
Mahdi Mortazavi
Affiliation:
Department of Mechanical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran.
*Corresponding author. E-mail: sabzeh@aut.ac.ir

Abstract

This paper describes a camera simulation framework for validating machine vision algorithms under general airborne camera imperfections. Lens distortion, image delay, rolling shutter, motion blur, interlacing, vignetting, image noise, and light level are modelled. This is the first simulation to consider all temporal distortions jointly, together with static lens distortions, in an online manner. Several innovations are proposed, including a motion-tracking system that allows the camera to follow the flight log with eligible derivatives. A reverse pipeline, relating each pixel in the output image to pixels in the ideal input image, is developed; it is shown that the inverse lens distortion model and the inverse temporal distortion models are decoupled in this way. A short-time pixel displacement model is proposed to solve for the temporal distortions (i.e. delay, rolling shutter, motion blur, and interlacing). Evaluation is carried out in several ways, including regenerating an airborne dataset, regenerating the camera path over a calibration pattern, and assessing the ability of the time displacement model to predict other frames. Qualitative evaluations are also presented.
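To make the reverse pipeline concrete, the following is a minimal sketch (not the authors' implementation) of how each output pixel can be traced back to the ideal input image: lens distortion is inverted first, and only then is a short-time displacement applied to account for rolling-shutter readout and exposure, which is what decouples the two inverse models. The Brown-Conrady fixed-point inversion, the constant pixel-velocity field `flow`, and all parameter names (`K`, `dist`, `t_row`, `t_exp`) are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a reverse camera-distortion pipeline: for each OUTPUT pixel,
# undo lens distortion, then shift by a short-time displacement model
# (rolling-shutter row time + motion-blur sample time) and sample the ideal image.
import numpy as np

def undistort_normalized(xd, yd, k1, k2, p1, p2, iters=5):
    """Invert Brown-Conrady radial/tangential distortion by fixed-point iteration."""
    xu, yu = xd.copy(), yd.copy()
    for _ in range(iters):
        r2 = xu**2 + yu**2
        radial = 1.0 + k1 * r2 + k2 * r2**2
        dx = 2*p1*xu*yu + p2*(r2 + 2*xu**2)   # tangential terms
        dy = p1*(r2 + 2*yu**2) + 2*p2*xu*yu
        xu = (xd - dx) / radial
        yu = (yd - dy) / radial
    return xu, yu

def render_output(ideal, K, dist, flow, t_row, t_exp, n_blur=8):
    """Reverse-map an ideal input image into a distorted output frame.
    K = (fx, fy, cx, cy); dist = (k1, k2, p1, p2); flow = (vx, vy) in px/s;
    t_row = per-row readout time; t_exp = exposure time (all hypothetical names)."""
    H, W = ideal.shape[:2]
    fx, fy, cx, cy = K
    v, u = np.mgrid[0:H, 0:W].astype(np.float64)
    xn, yn = (u - cx) / fx, (v - cy) / fy          # normalised distorted coords
    xu, yu = undistort_normalized(xn, yn, *dist)   # inverse lens distortion
    out = np.zeros_like(ideal, dtype=np.float64)
    for s in np.linspace(0.0, 1.0, n_blur):        # samples across the exposure
        t = t_row * v + t_exp * s                  # rolling-shutter + blur time
        us = xu * fx + cx + flow[0] * t            # short-time pixel displacement
        vs = yu * fy + cy + flow[1] * t
        ui = np.clip(us, 0, W - 1).astype(int)     # nearest-neighbour sampling
        vi = np.clip(vs, 0, H - 1).astype(int)
        out += ideal[vi, ui]
    return (out / n_blur).astype(ideal.dtype)
```

Nearest-neighbour sampling and a constant flow field keep the sketch short; a full implementation would interpolate bilinearly, use a per-pixel displacement field, run on the GPU, and apply vignetting and noise in image space after this geometric remapping.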

Type: Research Article
Copyright: © The Royal Institute of Navigation 2021

