
Three-Dimensional Reconstruction Based on Visual SLAM of Mobile Robot in Search and Rescue Disaster Scenarios

  • Hongling Wang (a1) (a2), Chengjin Zhang (a3), Yong Song (a3), Bao Pang (a1) and Guangyuan Zhang (a2)

Summary

Conventional simultaneous localization and mapping (SLAM) has concentrated on two-dimensional (2D) map building. To adapt it to time-critical search and rescue (SAR) environments, fast, simple global 2D SLAM must be combined with local sub-maps of three-dimensional (3D) objects of interest (OOIs). The main novelty of the present work is a method for 3D OOI reconstruction on top of a 2D map, thereby retaining the speed of the latter. A theory adapted to SAR environments is established, covering object identification, exploration area coverage (AC), and loop-closure detection at revisited spots. Proposed for the first time is image optical-flow calculation with a 2D/3D fusion method and RGB-D (red, green, blue plus depth) transformation based on Joblove–Greenberg mathematics and OpenCV processing. The mathematical theories of optical-flow calculation and wavelet transformation are applied for the first time to the robotic SAR SLAM problem. The contributions are twofold: (i) mobile robots rely on planar distance estimation to build 2D maps quickly and to provide SAR exploration AC; (ii) 3D OOIs are reconstructed using the proposed RGB-D iterative closest point (RGB-ICP) method and a 2D/3D wavelet-transformation principle. Different mobile robots are used to conduct indoor and outdoor SAR SLAM. Both the SLAM and the SAR OOI detection are validated by simulations and ground-truth experiments, providing strong evidence that the proposed 2D/3D reconstruction SAR SLAM approach is suited to post-disaster environments.
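As a rough illustration of the kind of 2D image pre-processing the summary describes (dense optical-flow calculation and a Joblove–Greenberg style hue/lightness/saturation decomposition via OpenCV), the following Python sketch shows one possible realization. It is a minimal sketch only: the file names and Farnebäck parameters are placeholders, not the authors' settings, and it does not reproduce the paper's 2D/3D fusion or RGB-ICP pipeline.

# Illustrative sketch only: dense optical flow + colour-space decomposition with OpenCV.
# File names and parameter values below are placeholders, not taken from the paper.
import cv2
import numpy as np

prev_bgr = cv2.imread("frame_t0.png")   # previous colour frame of the RGB-D stream (placeholder path)
next_bgr = cv2.imread("frame_t1.png")   # current colour frame (placeholder path)

prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)

# Dense optical flow (Farnebaeck): pyr_scale=0.5, levels=3, winsize=15,
# iterations=3, poly_n=5, poly_sigma=1.2, flags=0. Returns per-pixel (dx, dy).
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Encode flow direction as hue and magnitude as brightness for visual inspection.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv = np.zeros_like(prev_bgr)
hsv[..., 0] = ang * 180 / np.pi / 2
hsv[..., 1] = 255
hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
flow_vis = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# Joblove-Greenberg style hue/lightness/saturation decomposition of the colour frame.
hls = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2HLS)

cv2.imwrite("flow_vis.png", flow_vis)

In the approach summarized above, features of this kind would feed the fast 2D map building and OOI detection, while the 3D OOI sub-maps are obtained separately through RGB-ICP registration and the 2D/3D wavelet-transformation principle.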


Corresponding author

*Corresponding authors. E-mails: cjzhang@sdu.edu.cn, songyong@sdu.edu.cn

