We can't solve problems by using the same kind of thinking we used when we created them.
(Albert Einstein)

Introduction
A temporal paradox lies at the heart of digital preservation: the only sure criteria for evaluating the success of today's preservation systems depend on assessing the usability of the digital objects they preserve by future end-users – those unknown but significant others for whom we preserve data. The difficulties inherent in digital preservation techniques, and in the user-led evaluation work that assesses them, are likely to persist across time. The digital preservation (DP) community must therefore address questions about who its end-users will be 10, 50 or even 100 years from now. One way to tackle this problem is to answer the deceptively simple question 'Who are our end-users now, and what are their wants and needs?'. Only then can we begin to imagine how these will change.
When evaluated by users, DP frameworks and systems must be demonstrated to a wider range of user groups than many conventional user studies address. These include data creators, curators, consumers and a variety of potential customer organizations – not only end-users but also intermediaries from multiple sectors, each with distinct operational and conceptual requirements. As the creation, provision and complexity of digital objects increase and evolve, so too does the distribution of user types.
Preservation is inevitably intricate. Its inbuilt hazards relate to scale (e.g. the quantity and variety of digital objects requiring preservation), speed (preservation research occurs within a rapidly shifting and multifaceted technological environment) and incoherence (e.g. a lack of consensus on best practices and preferred methods).
To some extent, the significant contextual or symbolic aspects of items and their probable use have always been important to knowledge organizations. Yet in a digital library these aspects themselves become targets for preservation (Chowdhury, 2010). All of this is to say that evaluating DP work with users is fraught with complication.
While established means of user testing form an important component of the overall armoury of DP evaluation, new methods and models are being developed to keep pace with the ever-evolving thinking of users. Some of these new approaches are explored in this chapter.