
Obstacle type recognition in visual images via dilated convolutional neural network for unmanned surface vehicles

Published online by Cambridge University Press:  13 January 2022

Binghua Shi
Affiliation:
School of Information and Communication Engineering, Hubei University of Economics, Wuhan, China. School of Automation, Wuhan University of Technology, Wuhan, China
Yixin Su*
Affiliation:
School of Automation, Wuhan University of Technology, Wuhan, China
Cheng Lian
Affiliation:
School of Automation, Wuhan University of Technology, Wuhan, China
Chang Xiong
Affiliation:
School of Automation, Wuhan University of Technology, Wuhan, China
Yang Long
Affiliation:
School of Automation, Wuhan University of Technology, Wuhan, China
Chenglong Gong
Affiliation:
School of Automation, Wuhan University of Technology, Wuhan, China
*Corresponding author. E-mail: suyixin@whut.edu.cn

Abstract

Recognition of obstacle type based on visual sensors is important for the navigation of unmanned surface vehicles (USVs), including path planning, obstacle avoidance, and reactive control. Conventional detection techniques may fail to distinguish obstacles that are similar in visual appearance in a cluttered environment. This work proposes a novel obstacle type recognition approach that combines a dilated convolution operator with the deep-level feature map of ResNet50 for autonomous navigation. First, visual images are collected and annotated from various USV navigation scenarios. Second, a deep learning model based on a dilated convolutional neural network is constructed and trained. Dilated convolution allows the network to learn deep features with an enlarged receptive field, which further improves obstacle type recognition performance. Third, a series of evaluation metrics is utilised to assess the trained model, including mean average precision (mAP), miss rate and detection speed. Finally, experiments are conducted to verify the accuracy of the proposed approach on visual images from a cluttered environment. Experimental results demonstrate that the dilated convolutional neural network obtains better recognition performance than the comparison methods, with an mAP of 88%.
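The abstract does not give the exact layer configuration, so the following is a minimal sketch, assuming a PyTorch implementation: a stack of 3×3 dilated convolutions appended to the final ResNet50 feature map. A 3×3 kernel with dilation d covers a (2d+1)×(2d+1) window, so the receptive field grows without additional downsampling. The class name DilatedObstacleHead, the dilation rates (1, 2, 4) and the channel widths are illustrative assumptions, not the authors' reported architecture.

```python
# Minimal sketch (PyTorch): dilated convolutions applied to the deep feature
# map of a ResNet50 backbone. Dilation rates and layer widths are illustrative.
import torch
import torch.nn as nn
from torchvision import models


class DilatedObstacleHead(nn.Module):
    """Stacks dilated 3x3 convolutions on ResNet50 features to enlarge the
    receptive field without further reducing spatial resolution."""

    def __init__(self, num_classes: int, dilations=(1, 2, 4)):
        super().__init__()
        # weights=None requires torchvision >= 0.13; older versions use pretrained=False.
        backbone = models.resnet50(weights=None)
        # Keep everything up to the last residual stage; drop global pooling and FC head.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        layers = []
        in_ch = 2048  # channel depth of ResNet50's final stage
        for d in dilations:
            layers += [
                # padding=d keeps the spatial size unchanged for a 3x3 dilated kernel
                nn.Conv2d(in_ch, 512, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(512),
                nn.ReLU(inplace=True),
            ]
            in_ch = 512
        self.dilated_block = nn.Sequential(*layers)
        self.classifier = nn.Conv2d(512, num_classes, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)           # (B, 2048, H/32, W/32)
        feats = self.dilated_block(feats)  # same spatial size, larger receptive field
        return self.classifier(feats)      # per-location class scores


if __name__ == "__main__":
    model = DilatedObstacleHead(num_classes=5)
    scores = model(torch.randn(1, 3, 512, 512))
    print(scores.shape)  # torch.Size([1, 5, 16, 16])
```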

Type
Research Article
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press on behalf of The Royal Institute of Navigation

