Book contents
- Frontmatter
- Contents
- Preface
- Contributors
- 1 The Complexity of Algorithms
- 2 Building Novel Software: the Researcher and the Marketplace
- 3 Prospects for Artificial Intelligence
- 4 Structured Parallel Programming: Theory meets Practice
- 5 Computer Science and Mathematics
- 6 Paradigm Merger in Natural Language Processing
- 7 Large Databases and Knowledge Re-use
- 8 The Global-yet-Personal Information System
- 9 Algebra and Models
- 10 Real-time Computing
- 11 Evaluation of Software Dependability
- 12 Engineering Safety-Critical Systems
- 13 Semantic Ideas in Computing
- 14 Computers and Communications
- 15 Interactive Computing in Tomorrow's Computer Science
- 16 On the Importance of Being the Right Size
- References
- Index
7 - Large Databases and Knowledge Re-use
Published online by Cambridge University Press: 10 December 2009
Abstract
This article argues that problems of scale and complexity of data in large scientific and engineering databases will drive the development of a new generation of databases. Examples are the human genome project, with huge volumes of data accessed by widely distributed users; geographic information systems using satellite data; and advanced CAD and engineering design databases. Databases will share not just facts but also procedures, methods and constraints. This, together with the complexity of the data, will favour the object-oriented database model. Knowledge base technology is also moving in this direction, leading to a distributed architecture of knowledge servers interchanging information on objects and constraints. An important theme is the re-use not just of data and methods but of higher-level knowledge. For success, a multi-disciplinary effort will be needed along the lines of the Knowledge Sharing Effort in the USA, which is discussed.
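As a rough illustration of the kind of sharing the abstract describes, the sketch below (the class and field names are invented for illustration; the chapter gives no code) shows an object that bundles data with a method and a constraint, so that a remote user or knowledge server could re-use behaviour as well as facts.

```python
# Minimal sketch, assuming a hypothetical GenomeSequence object: the
# database stores not only the raw bases (facts) but also a constraint
# (is_valid) and a derived method (gc_content) that travel with the data.

from dataclasses import dataclass

VALID_BASES = set("ACGT")


@dataclass
class GenomeSequence:
    identifier: str
    bases: str

    def is_valid(self) -> bool:
        # Constraint stored with the object: only A, C, G, T are allowed.
        return all(b in VALID_BASES for b in self.bases)

    def gc_content(self) -> float:
        # Method stored with the object: fraction of G/C bases.
        if not self.bases:
            return 0.0
        return sum(b in "GC" for b in self.bases) / len(self.bases)


if __name__ == "__main__":
    seq = GenomeSequence("HBB-fragment", "ACGTGGCC")
    print(seq.is_valid(), round(seq.gc_content(), 2))
```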
Introduction
The research area of databases is a very interesting testing ground for computing science ideas. It is an area where theory meets reality in the form of large quantities of data with computationally intensive demands on them. Until recently the major problems were in banking and commercial transactions. These sound easy in principle, but they are made difficult by problems of scale, distributed access, and the ever-present need to move a working system with long-term data onto new hardware, new operating systems, and new modes of interaction. Despite this, principles for database system architecture were established which have stood the test of time: data independence, serialisable transactions, two-phase commit, query optimisation and conceptual schema languages. Thanks to these advances the database industry is very large and very successful.
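One of the principles listed above, two-phase commit, can be shown in a minimal sketch. The Coordinator-style function and Participant class below are invented for illustration only: in phase one every participant is asked to prepare and vote, and only a unanimous "yes" leads to a commit in phase two; otherwise every participant aborts.

```python
# Minimal sketch of the two-phase commit idea (hypothetical names,
# no durability or failure handling).

class Participant:
    def __init__(self, name: str):
        self.name = name
        self.prepared = False

    def prepare(self) -> bool:
        # A real participant would durably log the pending update
        # before voting; here we simply vote "yes".
        self.prepared = True
        return True

    def commit(self) -> None:
        print(f"{self.name}: commit")

    def abort(self) -> None:
        print(f"{self.name}: abort")


def two_phase_commit(participants: list) -> bool:
    # Phase 1: collect a vote from every participant.
    votes = [p.prepare() for p in participants]
    # Phase 2: commit only if all voted yes, otherwise abort everywhere.
    if all(votes):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()
    return False


if __name__ == "__main__":
    two_phase_commit([Participant("accounts"), Participant("orders")])
```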
- Type: Chapter
- Information: Computing Tomorrow: Future Research Directions in Computer Science, pp. 110-126. Publisher: Cambridge University Press. Print publication year: 1996.