
ACT-R-typed human–robot collaboration mechanism for elderly and disabled assistance

Published online by Cambridge University Press:  29 November 2013

Shuo Xu*
Affiliation:
School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200072, China, and Shanghai Key Laboratory of Manufacturing Automation and Robotics, Shanghai 200072, China
Dawei Tu
Affiliation:
School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200072, China, and Shanghai Key Laboratory of Manufacturing Automation and Robotics, Shanghai 200072, China
Yongyi He
Affiliation:
School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200072, China, and Shanghai Key Laboratory of Manufacturing Automation and Robotics, Shanghai 200072, China
Shili Tan
Affiliation:
School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200072, China, and Shanghai Key Laboratory of Manufacturing Automation and Robotics, Shanghai 200072, China
Minglun Fang
Affiliation:
School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200072, China, and Shanghai Key Laboratory of Manufacturing Automation and Robotics, Shanghai 200072, China
*Corresponding author. Email: sxu@shu.edu.cn

Summary

This work proposes an innovative mechanism of human–robot collaboration (HRC) for mobile service robots in the application of elderly and disabled assistance. Previous studies of HRC mechanisms have usually focused on integrating the decision-making intelligence of human beings, exercised through qualitative judgment, with the reasoning intelligence of robots, exercised through quantitative calculation. The novelties of the proposed methodology are instead threefold: (1) an HRC framework constructed with reference to the Adaptive Control of Thought – Rational (ACT-R) human cognitive architecture; (2) semantic webs of cognitive reasoning, established through human–robot interaction (HRI) and HRC, for planning and implementing complex tasks; and (3) human–robot intelligence fusion realized by the mutual encouragement, connection, and integration of the human, robot, perception, HRI, and HRC modules within the ACT-R architecture. Its technical feasibility is validated by selected experiments within a “pouring” scenario. Further, although this study is oriented to mobile service robots, the modularized design of its hardware and software makes it readily extensible to other types of service robots, such as smart rehabilitation beds, wheelchairs, and cleaning equipment.
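To make the ACT-R-style module-and-buffer idea concrete, the following is a minimal, self-contained sketch, not the authors' implementation: the module names (goal, perception, hri, robot), chunk contents, and production rules are illustrative assumptions based only on the abstract's description of a "pouring" task in which robot reasoning is fused with human judgment through HRI.

```python
# Illustrative sketch only: a tiny ACT-R-style production loop for a
# hypothetical "pouring" task. All names and rules here are assumptions
# inferred from the abstract, not the paper's actual architecture or API.

from dataclasses import dataclass, field


@dataclass
class Chunk:
    """A declarative chunk in the ACT-R sense: a typed bundle of slots."""
    ctype: str
    slots: dict = field(default_factory=dict)


class Buffer:
    """Single-chunk buffer through which modules exchange state."""
    def __init__(self):
        self.chunk = None


def rule_locate_cup(bufs):
    """Perception reports a cup pose -> advance the goal to grasping."""
    g, p = bufs["goal"].chunk, bufs["perception"].chunk
    if g and g.slots.get("step") == "locate" and p and p.ctype == "cup-pose":
        g.slots["step"] = "grasp"
        print(f"robot: moving gripper to {p.slots['xyz']}")
        return True
    return False


def rule_ask_human(bufs):
    """If perception is ambiguous, defer to the human via the HRI module:
    qualitative human judgment fused with quantitative robot reasoning."""
    g = bufs["goal"].chunk
    if g and g.slots.get("step") == "locate" and bufs["perception"].chunk is None:
        bufs["hri"].chunk = Chunk("query", {"text": "Which cup should I pour into?"})
        print("hri: asking the user to indicate the target cup")
        return True
    return False


def rule_pour(bufs):
    """Once grasped, execute the pour and mark the task done."""
    g = bufs["goal"].chunk
    if g and g.slots.get("step") == "grasp":
        g.slots["step"] = "done"
        print("robot: tilting bottle to pour")
        return True
    return False


def run(bufs, rules, max_cycles=10):
    """ACT-R-style cognitive cycle: fire the first matching production."""
    for _ in range(max_cycles):
        if bufs["goal"].chunk.slots.get("step") == "done":
            break
        if not any(rule(bufs) for rule in rules):
            break  # no production matches; wait for new percepts or HRI input


if __name__ == "__main__":
    bufs = {name: Buffer() for name in ("goal", "perception", "hri", "robot")}
    bufs["goal"].chunk = Chunk("task", {"name": "pouring", "step": "locate"})
    bufs["perception"].chunk = Chunk("cup-pose", {"xyz": (0.4, 0.1, 0.8)})
    run(bufs, [rule_locate_cup, rule_ask_human, rule_pour])
```

In this toy run the perception buffer already holds a cup pose, so the robot plans and pours autonomously; clearing that buffer instead triggers the HRI query rule, which is one plausible reading of how human qualitative judgment could be folded into the robot's quantitative reasoning loop.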

Type
Articles
Copyright
Copyright © Cambridge University Press 2013 

