Book contents
- Frontmatter
- Dedication
- Contents
- Figures and tables: acknowledgements
- Contributors
- Foreword
- Preface
- 1 Interactive information retrieval: history and background
- 2 Information behavior and seeking
- 3 Task-based information searching and retrieval
- 4 Approaches to investigating information interaction and behaviour
- 5 Information representation
- 6 Access models
- 7 Evaluation
- 8 Interfaces for information retrieval
- 9 Interactive techniques
- 10 Web retrieval, ranking and personalization
- 11 Recommendation, collaboration and social search
- 12 Multimedia: behaviour, interfaces and interaction
- 13 Multimedia: information representation and access
- References
- Index
7 - Evaluation
Published online by Cambridge University Press: 08 June 2018
Summary
Introduction
My search engine is better than yours.
Statements like this are often sought in information retrieval research. If such statements are not mere opinions, they are based on information retrieval evaluation, which is sometimes referred to as a hallmark and distinctive feature of information retrieval research. No claim in information retrieval is granted any merit unless it is shown, through rigorous evaluation, that the claim is well founded. Technological innovation alone is not sufficient. In fact, much research in information retrieval deals with information retrieval evaluation methodology.
Evaluation, in general, is the systematic determination of the merit and significance of something, judged by criteria against a set of standards. Evaluation therefore requires some object that is evaluated and some goal that should be achieved or served. In information retrieval, both can be defined in many ways. The object is usually an information retrieval system or a system component – but what is an information retrieval system? The goal is typically the quality of the retrieved result – but what constitutes the retrieved result, and how does one measure quality? These questions can be answered in alternative ways, leading to different kinds of information retrieval evaluation.
In practical life, people engage with information retrieval when they need support in accessing information in order to carry out a current task. Subjective or objective benefit for task performance is the ultimate goal in this case. Practical life, with all its variability, is, however, difficult and expensive to investigate. Therefore, surrogate and more easily measurable goals are employed in information retrieval evaluation, typically the quality of the ranked result list.
In practical life, the object that would be evaluated would be the human being, augmented by technology, performing a task. Again, since this is challenging to study, a more confined and better-controlled system is normally evaluated in information retrieval. This may be a test person performing a search task, augmented by an information retrieval system, the information retrieval system alone executing a search, or a system component whose contribution to the former is of interest. The task performance process may also be cut down from a work task to a search task and down to an individual query.
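The quality of a ranked result list, mentioned above as the typical surrogate goal, is commonly quantified with measures such as precision at rank k and average precision. The following is a minimal sketch of these standard measures using hypothetical relevance judgments; the document identifiers and judgments are invented for illustration and do not come from this chapter.

```python
# Sketch of two standard ranked-retrieval quality measures, using
# hypothetical relevance judgments (not this chapter's own data).

def precision_at_k(relevant, ranking, k):
    """Fraction of the top-k ranked documents that are relevant."""
    top_k = ranking[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def average_precision(relevant, ranking):
    """Mean of precision@k taken at each rank where a relevant
    document appears, averaged over all relevant documents."""
    hits, total = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

# Hypothetical judgments: documents d1 and d3 are relevant to the query.
relevant = {"d1", "d3"}
ranking = ["d1", "d2", "d3", "d4"]  # a system's ranked result list

print(precision_at_k(relevant, ranking, 2))   # 0.5
print(average_precision(relevant, ranking))   # (1/1 + 2/3) / 2 ≈ 0.833
```

Measures like these make the surrogate goal operational: two systems can be compared on the same queries and judgments without observing real task performance.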
Type: Chapter
In: Interactive Information Seeking, Behaviour and Retrieval, pp. 113–138. Publisher: Facet. Print publication year: 2011.