
Deep learning analysis based on multi-sensor fusion data for hemiplegia rehabilitation training system for stroke patients

Published online by Cambridge University Press: 26 July 2021

Peng Zhang
Affiliation:
Tianjin University of Science and Technology, Tianjin 300222, China
Junxia Zhang*
Affiliation:
Tianjin Key Laboratory of Integrated Design and On-line Monitoring of Light Industry and Food Engineering Machinery and Equipment, Tianjin 300222, China
*Corresponding author. Email: zjx@tust.edu.cn

Abstract

By recognizing the motion of the healthy side, a lower-limb exoskeleton robot can deliver therapy to the affected side of stroke patients. To improve the accuracy of motion intention recognition from sensor data, a study based on deep learning was carried out. Eighty healthy subjects, simulating stroke patients, performed gait experiments in five environments: flat ground, a $10^{\circ}$ upslope and downslope, and ascending and descending stairs. To facilitate training and classification of the neural network, this paper presents template processing schemes that adapt to different data formats. A novel hybrid network model combining a convolutional neural network (CNN) with a long short-term memory (LSTM) network is constructed. To mitigate the data-sparsity problem, a spatial-temporal-embedded LSTM model (SQLSTM), which combines spatial-temporal influence with the LSTM model, is proposed. The proposed CNN-SQLSTM model is evaluated on a real trajectory dataset, and the results demonstrate its effectiveness. The proposed method will be used to guide the control strategy design of the robot system for active rehabilitation training.
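To make the hybrid architecture concrete, the following is a minimal sketch (not the authors' code) of a CNN-LSTM classifier for windowed multi-sensor gait data in PyTorch. The input shape, channel count, layer sizes, and the five-class output are illustrative assumptions only, and the sketch omits the paper's SQLSTM spatial-temporal embedding.

```python
# Hypothetical CNN-LSTM sketch for gait-phase/environment classification.
# Assumed data format: 128-sample windows with 9 sensor channels, 5 classes.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_channels=9, n_classes=5, hidden=64):
        super().__init__()
        # 1-D convolutions extract local features from each sensor window
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models temporal dependencies across the pooled feature sequence
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        f = self.cnn(x)                   # (batch, 64, time / 4)
        f = f.permute(0, 2, 1)            # (batch, time / 4, 64) for the LSTM
        out, _ = self.lstm(f)
        return self.fc(out[:, -1, :])     # classify from the last time step

model = CNNLSTM()
windows = torch.randn(8, 9, 128)          # 8 windows of simulated sensor data
logits = model(windows)                   # (8, 5) class scores
```

Passing the CNN features through the LSTM, rather than classifying pooled features directly, is what lets the model capture gait dynamics across a window rather than only its instantaneous shape.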

Type: Article
Copyright: © The Author(s), 2021. Published by Cambridge University Press

