
Faulty robot rescue by multi-robot cooperation

  • Gyuho Eoh, Jeong S. Choi and Beom H. Lee

Summary

This paper presents a multi-robot behavior for cooperatively rescuing a faulty robot by means of a sound signal. In a robot team, a faulty robot should be recalled immediately, since it may seriously obstruct the other robots and any matter it has collected may be lost. For the rescue mission, we first developed a sound localization method that estimates the position of the faulty robot's sound source using multiple microphone sensors. Next, because a single robot cannot recall the faulty robot alone, the robots organized themselves into a heterogeneous rescue team consisting of a pusher, a puller, and a supervisor. This self-organized team succeeded in moving the faulty robot to a safe zone without the aid of any global positioning system. Finally, our results demonstrate that a faulty robot in a multi-robot team can be rescued immediately through the cooperation of its neighboring robots and interactive communication between the faulty robot and the rescue robots. Experiments are presented to verify the validity and practicality of the proposed approach.
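The summary does not specify how the microphone array resolves the faulty robot's sound signal, but a common approach to this kind of localization is time-difference-of-arrival (TDOA): cross-correlate the signals from two microphones, convert the peak lag into a time delay, and map that delay to a bearing angle. The sketch below is purely illustrative, not the authors' method; the function name, microphone spacing, and sampling rate are all assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C


def estimate_bearing(sig_left, sig_right, mic_distance, fs):
    """Illustrative TDOA bearing estimate (not the paper's algorithm).

    Cross-correlates the two microphone signals, takes the lag of the
    correlation peak as the inter-microphone time delay, and converts
    that delay to a bearing angle in radians via the far-field model
    tdoa = mic_distance * sin(theta) / c.
    """
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)  # peak lag in samples
    tdoa = lag / fs                               # delay in seconds
    # Clip to the physically attainable range before taking arcsin,
    # so numerical noise cannot push the ratio outside [-1, 1].
    ratio = np.clip(tdoa * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return np.arcsin(ratio)


# Hypothetical usage: a white-noise "sound signal" reaching the left
# microphone two samples later than the right one.
rng = np.random.default_rng(0)
source = rng.standard_normal(4096)
delay = 2                                     # samples
left = np.concatenate([np.zeros(delay), source])[: len(source)]
right = source
bearing = estimate_bearing(left, right, mic_distance=0.2, fs=8000.0)
```

A real system would refine this with generalized cross-correlation weighting and more than two microphones, but the geometry above is the core idea behind microphone-array source localization.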

Corresponding author

*E-mail: geni0620@snu.ac.kr

