
Robust and fast 3-D scan registration using normal distributions transform with supervoxel segmentation

  • Ji W. Kim (a1) and Beom H. Lee (a1)

Summary

This paper presents the supervoxel normal distributions transform (SV-NDT), a novel three-dimensional (3-D) registration algorithm that significantly improves the performance of the three-dimensional normal distributions transform (3-D NDT). The 3-D NDT partitions a model scan using a 3-D regular grid. Generating normal distributions over this regular grid causes considerable information loss because the grid ignores the local surface structure of the model scan, and the type of surface (the constituent unit of each scan) best modeled by a single normal distribution is known to be the plane. The SV-NDT reduces this loss of information by using a supervoxel-generating algorithm at the partitioning stage. In addition, it exploits the local surface structure of the data scan by replacing the Euclidean distance with a function that uses local geometry as well as the Euclidean distance when each point in the data scan is matched to its corresponding normal distribution. Experiments demonstrate that the supervoxel-generating algorithm increases the modeling accuracy of the normal distributions and that the proposed registration algorithm outperforms the 3-D NDT and other widely used 3-D registration algorithms in terms of robustness and speed on both synthetic and real-world datasets. The effect of changing the function used to create correspondences is also verified.
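The summary describes two main ingredients of SV-NDT: fitting one normal distribution per supervoxel of the model scan, and matching each data-scan point to a distribution with a cost that combines Euclidean distance with local geometry. The Python sketch below illustrates both ideas under stated assumptions: the supervoxel labels are assumed to come from an external VCCS-style segmentation (e.g. PCL's SupervoxelClustering), and the geometry-aware matching cost (the `w_geom` blend with a normal-disagreement term) is an illustrative stand-in, not the exact function defined in the paper.

```python
import numpy as np

def fit_cluster_gaussians(points, labels, min_points=5, eps=1e-3):
    """Fit one normal distribution (mean, covariance) per supervoxel label.

    points : (N, 3) array of model-scan points
    labels : (N,)   integer supervoxel label per point (from an external
                    supervoxel segmentation, assumed given here)
    Small clusters are skipped; covariances are regularized so they stay
    invertible (a common implementation detail in NDT variants).
    """
    gaussians = {}
    for lbl in np.unique(labels):
        cluster = points[labels == lbl]
        if len(cluster) < min_points:
            continue
        mean = cluster.mean(axis=0)
        cov = np.cov(cluster.T) + eps * np.eye(3)        # regularized 3x3 covariance
        # Cluster surface normal: eigenvector of the smallest eigenvalue.
        eigvals, eigvecs = np.linalg.eigh(cov)
        normal = eigvecs[:, 0]
        gaussians[lbl] = (mean, np.linalg.inv(cov), normal)
    return gaussians

def match_point(p, p_normal, gaussians, w_geom=0.5):
    """Assign a data-scan point to a supervoxel distribution.

    Instead of the plain Euclidean distance to the cluster means, the cost
    mixes distance with a local-geometry term (here, disagreement between
    the point normal and the cluster normal). The weighting is hypothetical,
    chosen only to illustrate the idea of a geometry-aware correspondence.
    """
    best_lbl, best_cost = None, np.inf
    for lbl, (mean, cov_inv, normal) in gaussians.items():
        d_euc = np.linalg.norm(p - mean)
        d_geom = 1.0 - abs(np.dot(p_normal, normal))     # 0 when normals align
        cost = (1.0 - w_geom) * d_euc + w_geom * d_geom
        if cost < best_cost:
            best_lbl, best_cost = lbl, cost
    return best_lbl

def ndt_score(p, mean, cov_inv):
    """Negative log-likelihood-style NDT score of a transformed data point."""
    d = p - mean
    return 0.5 * d @ cov_inv @ d
```

In a full registration loop, the fitted distributions would be held fixed while the rigid transform of the data scan is iteratively refined to minimize the summed NDT scores of its matched points, as in standard NDT optimization.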


Corresponding author

*Corresponding author. E-mail: kjw116@snu.ac.kr

