
Expedient range enhanced 3-D robot colour vision

Published online by Cambridge University Press: 09 March 1983

R. A. Jarvis
Affiliation:
Department of Computer Science, Australian National University, Canberra, A.C.T. 2600 (Australia)

Summary

Robotic vision is concerned with providing, primarily through image sensory data acquisition and analysis, the basis for planning robotic manipulator actions upon and within a restricted world of solid objects. Ideally, its function should correspond to the human visual system's capacity to guide hand/eye coordination or body/eye navigation tasks. Fundamental to the notion of functionality in a 3-D space partially filled with solid objects is the requirement to appreciate the depth dimension from a particular viewpoint. Human vision abounds with depth cues derivable from imagery, and many of these have been the subject of study for robotic vision applications. However, direct range recovery using time-of-flight methods (ultrasonic or light) has distinct advantages for robotics, and it is easy to justify these alternative approaches despite (and maybe even because of) their independence from visual cues.
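As a minimal illustration of the time-of-flight principle referred to above (not code from the paper itself), the range to a surface point follows directly from the measured round-trip travel time of a light pulse; the function name and the sample timing value below are assumptions introduced for the example.

# Sketch of time-of-flight range recovery (illustrative only).
# Assumes the round-trip time of a reflected laser pulse has been measured.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_time_s: float) -> float:
    """Range in metres, given the round-trip travel time of a light pulse."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a 10 nanosecond round trip corresponds to roughly 1.5 metres.
print(range_from_time_of_flight(10e-9))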

This paper presents work in progress in the Computer Vision and Robotics Laboratory at The Australian National University towards implementing a robotic hand/eye coordination system with applicability in the scene domain of brightly coloured, simply shaped objects with relatively untextured surfaces in arbitrary three-dimensional configurations. The advantages of using directly acquired range data (via a laser time-of-flight range scanner) in enhancing the scene segmentation phase of analysis are emphasised, and fairly convincing results are presented. Actual vision-driven manipulation has not yet been developed, but plans towards this end are included.
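The fragment below is a minimal sketch of the general idea of range-enhanced segmentation, assuming a range image registered with the colour image; the function name, the use of k-means clustering, and the range weighting are assumptions for illustration and are not the paper's actual method.

# Illustrative sketch: appending registered range to per-pixel colour features
# before a standard clustering-based segmentation (not the paper's algorithm).

import numpy as np
from sklearn.cluster import KMeans

def segment_colour_and_range(rgb, rng, n_regions=4, range_weight=1.0):
    """Cluster pixels on (R, G, B, range) features; returns a label image."""
    h, w, _ = rgb.shape
    features = np.concatenate(
        [rgb.reshape(-1, 3).astype(float),
         range_weight * rng.reshape(-1, 1).astype(float)],
        axis=1)
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(features)
    return labels.reshape(h, w)

Including range as an extra feature helps separate adjacent objects of similar colour that lie at different depths, which is the advantage the summary emphasises.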

Type: Article

Copyright: © Cambridge University Press 1983

