  • Print publication year: 2006
  • Online publication date: June 2018

21 - A DiVA for every audience: lessons learned from the evaluation of an online digital video library

Introduction

The Open University (OU) has been in existence in the UK for more than 30 years, specializing in creating and delivering course materials in all formats for students at a distance. As a result there is a large archive of learning resources, managed by the OU library, which could potentially be reused in the making of new courses. The Digital Video Applications (DiVA) project was set up in 2000 to investigate the potential reuse of OU video footage when digitized and made searchable. What would the benefits be to staff, and possibly students, of an online digital video library?

The project's aims included the following:

  • Creating an online library of digitized OU video footage.
  • Rendering the footage searchable by content through the use of cutting-edge video-indexing technology.
  • Evaluating the system in a number of different user contexts.
  • Making recommendations for the use and future of such a system within the university.

The DiVA system and its applications

The operational specification for the DiVA digital video library system was written in consultation with a user group and project board from across the university, using the recommendations from the Informedia project evaluation (Kukulska-Hulme et al., 1999) as a starting point. We could not locate any open source solutions at that time, so a European tendering exercise was undertaken. This resulted in the purchase of the Virage suite of products.

Adding content to the video library

A ‘video logging’ station was used to encode (digitize) the videos as Windows Media files at three different streaming bit rates (56K, 512K and MPEG2) to suit all users, while simultaneously generating automatic metadata using artificial intelligence to recognize words, sounds, faces and voices. However, the system needed to be ‘trained’ to recognize all of these except the words, which was time consuming. For this reason, we focused on word recognition, which created a transcript of each video using voice-to-text technology. Because this method was inaccurate (approximately 30% accuracy), original transcripts held by the library were also imported into the system. Additional metadata were added manually. We chose IMS Learning Resource Metadata (IMS, 2001) and created our own extensions for video material.
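To make the metadata arrangement above concrete, the sketch below shows what a minimal IMS Learning Resource Metadata record for one video might look like, combining automatically generated transcript text with manually added fields. The record structure follows the general shape of IMS LRM v1.2 (which mirrors IEEE LOM), but the `diva:` extension namespace, its element names, and all the sample values are purely illustrative assumptions, not the project's actual schema.

```xml
<!-- Illustrative sketch only: the diva: namespace and its elements
     (sourceTape, transcript, streamRate) are hypothetical extensions,
     not the schema actually used by the DiVA project. -->
<lom xmlns="http://www.imsglobal.org/xsd/imsmd_v1p2"
     xmlns:diva="http://example.org/diva-extensions">
  <general>
    <title>
      <langstring xml:lang="en">Sample programme title</langstring>
    </title>
    <description>
      <langstring xml:lang="en">Manually added summary of the footage.</langstring>
    </description>
  </general>
  <technical>
    <format>video/x-ms-wmv</format>
  </technical>
  <!-- Hypothetical video-specific extension fields -->
  <diva:sourceTape>OU-ARCHIVE-0000</diva:sourceTape>
  <diva:streamRate>56K</diva:streamRate>
  <diva:transcript origin="speech-recognition">
    Automatically generated transcript text would appear here.
  </diva:transcript>
</lom>
```

A record along these lines would let the search system index the (corrected or imported) transcript alongside the curated title and description, which is what makes content-based searching of the footage possible.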