
8 - Quantitative approaches

from PART 2 - DOING RESEARCH, EVALUATION AND AUDIT

Summary

‘How can quantitative research help improve my library service?’

‘I'm not good with numbers and quantitative research is all about statistics, right?’

‘You mean that the information I already collect about my service can be used for “quantitative research”?’

Introduction

You might be surprised to learn that you are already doing some quantitative research as part of your library practice: you just don't label it as such. Think of the range of routine statistical collections that your organization makes – customer statistics and resource usage figures, for example. On their own these may not be very meaningful, but what happens if you compare your statistics now with those from a year, or two years, ago? Or if you compare your organization with another organization elsewhere? Longitudinal changes in customer usage may be telling you something about the quality of the service, or about the effect of increased or reduced staff inputs, but we have to be careful about linking cause and effect. One of the key messages of the examples discussed later in the chapter is the care you need to take to avoid reaching mistaken conclusions. You may want to believe that the changes are the result of reduced resourcing, or the effect of your changes to an information literacy programme, but beware – there may be other explanations. This chapter will help you understand what can, and cannot, be achieved through quantitative research. The emphasis is on examples that illustrate key principles of quantitative research. You may also need to consult a basic statistics textbook for further explanation of some of the principles of statistical analysis, and of some of the highlighted terms. Chapter 9 offers an outline of the main ideas.
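To make the idea of a longitudinal comparison concrete, the short Python sketch below computes year-on-year percentage change in annual loan figures. The figures themselves are entirely hypothetical, invented for illustration; as the chapter stresses, a trend in such numbers is a prompt for investigation, not evidence of a cause.

```python
# A minimal sketch, using hypothetical figures, of a year-on-year
# comparison of routine usage statistics.

# Hypothetical annual loan counts for one branch library.
loans = {2020: 41_250, 2021: 38_900, 2022: 35_600}

years = sorted(loans)
for prev, curr in zip(years, years[1:]):
    change = (loans[curr] - loans[prev]) / loans[prev] * 100
    print(f"{prev} -> {curr}: {change:+.1f}% change in loans")

# Output:
# 2020 -> 2021: -5.7% change in loans
# 2021 -> 2022: -8.5% change in loans
#
# A falling trend like this raises a question; it does not answer one.
# Staffing, opening hours, or even the counting method may have changed.
```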

If you are familiar with the type of data being collected routinely, you are probably also aware of some of the problems in the collection and analysis of such data. Can you be sure of the reliability of the data collection? For example, are all the staff counting the items in the same way? If you are benchmarking your activity against that of another organization, are they counting in the same way that you are? Do you share the same interpretations of what it is you are counting?