
Fusion of Ship Perceptual Information for Electronic Navigational Chart and Radar Images Based on Deep Learning

Published online by Cambridge University Press: 14 June 2019

Muzhuang Guo*
Affiliation: College of Marine Electrical Engineering, Dalian Maritime University, Dalian 116026, Liaoning, People's Republic of China
Chen Guo
Affiliation: College of Marine Electrical Engineering, Dalian Maritime University, Dalian 116026, Liaoning, People's Republic of China
Chuang Zhang
Affiliation: Navigation College, Dalian Maritime University, Dalian 116026, China
Daheng Zhang
Affiliation: Navigation College, Dalian Maritime University, Dalian 116026, China
Zongjiang Gao
Affiliation: Navigation College, Dalian Maritime University, Dalian 116026, China
*Corresponding author. E-mail: dmuguoc@126.com

Abstract

Superimposing Electronic Navigational Chart (ENC) data on marine radar images can enrich the information available for navigation. However, direct image superposition is limited by the accuracy of instruments such as Global Navigation Satellite System (GNSS) receivers and compasses, and instrument errors can undermine the usefulness of the combined display. We propose a data fusion algorithm based on deep learning to extract robust features from radar images. By deep learning we mean the class of machine learning algorithms, including artificial neural networks, that use multiple layers to progressively extract higher-level features from raw input. We first apply deep-learning-based target detection to identify targets in marine radar images. Image processing is then applied to the identified targets to determine reference points for consistent fusion of the ENC and radar data. Finally, a fusion algorithm merges the marine radar and electronic chart data according to these reference points. The proposed method is verified through simulations using ENC data and marine radar images recorded over a continuous period aboard real ships in narrow waters. The results suggest accurate edge matching along the shoreline and real-time applicability. The fused image provides comprehensive information to support navigation, enhancing safety.
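The pipeline outlined above (detect targets, derive reference points, estimate a transform, blend the layers) can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses OpenCV, leaves the deep-learning detector out of scope, and presumes that corresponding reference points in the ENC and radar images (the enc_pts and radar_pts inputs, hypothetical names here) have already been matched, for example from detected targets.

```python
import cv2
import numpy as np

def shoreline_points(radar_img):
    """Candidate reference points from a radar image: Otsu thresholding
    isolates strong echoes, then Canny traces the shoreline edge."""
    gray = cv2.cvtColor(radar_img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    ys, xs = np.nonzero(edges)
    return np.stack([xs, ys], axis=1).astype(np.float32)  # (N, 2) pixel points

def fuse_enc_radar(radar_img, enc_img, radar_pts, enc_pts, alpha=0.5):
    """Fit a similarity transform (rotation, scale, translation) from
    matched ENC/radar reference points, warp the ENC layer into the
    radar frame, and alpha-blend the two images."""
    matrix, _inliers = cv2.estimateAffinePartial2D(enc_pts, radar_pts)
    h, w = radar_img.shape[:2]
    warped = cv2.warpAffine(enc_img, matrix, (w, h))
    return cv2.addWeighted(radar_img, 1.0 - alpha, warped, alpha, 0.0)
```

A similarity transform with four degrees of freedom is a plausible model for this step, since a geo-referenced chart and a radar sweep differ mainly by rotation (heading), scale (range setting) and translation (own-ship position); cv2.estimateAffinePartial2D fits exactly this restricted transform, using RANSAC by default to reject mismatched point pairs.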

Type
Research Article
Copyright © The Royal Institute of Navigation 2019

