
A CSR-based visible and infrared image fusion method in low illumination conditions for sense and avoid

Published online by Cambridge University Press: 03 July 2023

N. Ma
Affiliation:
College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing, China
Y. Cao*
Affiliation:
College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing, China
Z. Zhang
Affiliation:
College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing, China
Y. Fan
Affiliation:
Shenyang Aircraft Design & Research Institute, Aviation Industry Corporation of China, Shenyang, China
M. Ding
Affiliation:
College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing, China
Corresponding author: Y. Cao; Email: cyfac@nuaa.edu.cn

Abstract

Machine vision has been extensively researched in the field of unmanned aerial vehicles (UAVs) in recent years. However, the Sense and Avoid (SAA) capability is largely limited by environmental visibility, which endangers flight safety in low-illumination or nighttime conditions. To address this critical problem, an image-enhancement approach is proposed in this paper to improve image quality in low-illumination conditions. Given the complementarity of visible and infrared imagery, a visible and infrared image fusion method based on convolutional sparse representation (CSR) is a promising way to improve the SAA ability of UAVs. Firstly, each source image is decomposed into a texture layer and a structure layer, since infrared images are good at characterising structural information while visible images carry richer texture information. Both the structure and texture layers are transformed into the convolutional sparse domain through the CSR mechanism, and the resulting CSR coefficient maps are fused via activity-level assessment. Finally, the fused image is synthesised from the reconstructed fused texture and structure layers. In the experimental simulation section, a series of registered visible and infrared image pairs containing aerial targets is used to evaluate the proposed algorithm. Experimental results demonstrate that the proposed method effectively improves image quality in low-illumination conditions and enhances object details, outperforming traditional methods.
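
The abstract outlines a four-step pipeline: layer decomposition, CSR coding of each layer, activity-level fusion of the coefficient maps, and reconstruction. The sketch below illustrates that flow under stated assumptions, not the paper's actual implementation: Gaussian smoothing stands in for the structure-texture decomposition, SPORCO's ConvBPDN is used as a generic convolutional sparse coding solver with its stock example dictionary, and a per-pixel max-l1 coefficient selection is one plausible reading of "activity-level assessment".

# Hedged sketch of the CSR-based visible/IR fusion pipeline described in the
# abstract. Assumptions (not from the paper): Gaussian smoothing stands in
# for the structure-texture decomposition, SPORCO's ConvBPDN is the CSR
# solver, its stock example dictionary is the filter bank, and per-pixel
# max-l1 coefficient selection implements the activity-level fusion rule.
import numpy as np
from scipy.ndimage import gaussian_filter
from sporco import util
from sporco.admm import cbpdn


def decompose(img, sigma=3.0):
    """Split an image into a smooth structure layer and a residual texture layer."""
    structure = gaussian_filter(img, sigma)
    return structure, img - structure


def csr_fuse(layer_a, layer_b, D, lmbda=5e-2):
    """Sparse-code both layers over dictionary D, then keep at each pixel the
    coefficients whose l1 activity (summed over filters) is larger."""
    opt = cbpdn.ConvBPDN.Options({'Verbose': False, 'MaxMainIter': 150})
    solver_a = cbpdn.ConvBPDN(D, layer_a, lmbda, opt)
    solver_b = cbpdn.ConvBPDN(D, layer_b, lmbda, opt)
    Xa, Xb = solver_a.solve(), solver_b.solve()
    act_a = np.abs(Xa).sum(axis=-1, keepdims=True)  # per-pixel activity map
    act_b = np.abs(Xb).sum(axis=-1, keepdims=True)
    Xf = np.where(act_a >= act_b, Xa, Xb)           # choose-max fusion rule
    return solver_a.reconstruct(Xf).squeeze()       # reconstruct D * Xf


def fuse(vis, ir):
    """Full pipeline: decompose, fuse each layer in the CSR domain, recombine."""
    D = util.convdicts()['G:12x12x36']  # stock 12x12, 36-filter greyscale dictionary
    vis_s, vis_t = decompose(vis)
    ir_s, ir_t = decompose(ir)
    fused = csr_fuse(vis_s, ir_s, D) + csr_fuse(vis_t, ir_t, D)
    return np.clip(fused, 0.0, 1.0)


if __name__ == '__main__':
    # Synthetic stand-ins; replace with a registered visible/infrared pair
    # (float32 grayscale, values in [0, 1]).
    rng = np.random.default_rng(0)
    vis = rng.random((128, 128)).astype(np.float32)
    ir = rng.random((128, 128)).astype(np.float32)
    print(fuse(vis, ir).shape)  # (128, 128)

Fusing the structure and texture layers separately, as the abstract describes, lets the rule favour infrared-dominant structure and visible-dominant texture independently before the two reconstructions are summed.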

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Royal Aeronautical Society

