
Vision geometry-based UAV flocking

Published online by Cambridge University Press:  20 March 2023

L. Wang*
Affiliation:
Research Institute of Electronic Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
T. He
Affiliation:
Research Institute of Electronic Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
*Corresponding author. Email: wang_lei@uestc.edu.cn

Abstract

A distributed UAV (unmanned aerial vehicle) flocking control method based on vision geometry is proposed, in which only monocular RGB (red, green, blue) images are used to estimate the relative positions and velocities between drones. It relies neither on special visual markers nor on external infrastructure, and it requires no inter-UAV communication or prior knowledge of UAV size. The method combines the advantages of deep learning and classical geometry: a deep optical flow network estimates dense matching points between two consecutive images, a segmentation network classifies these matching points as background or as belonging to a specific UAV, and the classified matching points are then mapped into Euclidean space using depth map information. For each class of 3D matching points, also known as 3D feature point pairs, the rotation matrix, translation vector and velocity of the corresponding UAV, as well as the relative position between drones, are estimated using RANSAC and the least squares method. On this basis, a flocking control model is constructed. Experimental results in the Microsoft AirSim simulation environment show that, on all evaluation metrics, our method achieves almost the same performance as a UAV flocking algorithm based on the ground-truth flock state.
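The core geometric step described above, recovering a rotation matrix and translation vector from matched 3D feature point pairs with RANSAC and least squares, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation; the function names, the 3-point minimal sample and the inlier threshold are chosen for exposition only.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares (R, t) such that Q ~ R @ P + t, via the SVD of the
    cross-covariance of the two centred 3D point sets."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def ransac_rigid_transform(P, Q, iters=200, tol=0.05, seed=None):
    """Robust fit on noisy point pairs: repeatedly fit (R, t) to a minimal
    3-point sample, keep the hypothesis with the most inliers, then refit
    by least squares on all inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = err < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return rigid_transform(P[best], Q[best])
```

Applied per segmented UAV, the recovered translation between consecutive frames, divided by the frame interval, yields that UAV's velocity estimate, which the flocking controller then consumes.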

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Royal Aeronautical Society


Footnotes

Lei Wang and Tao He are co-first authors.
