The design of methods for performance evaluation is a major open research problem in the area
of spoken dialogue systems. This paper presents the PARADISE methodology for
developing predictive models of spoken dialogue performance, and shows how to evaluate
the predictive power and generalizability of such models. To illustrate the methodology, we
develop a number of models for predicting system usability (as measured by user satisfaction),
based on the application of PARADISE to experimental data from three different spoken
dialogue systems. We then measure the extent to which the models generalize across different
systems, different experimental conditions, and different user populations, by testing models
trained on a subset of the corpus against a test set of dialogues. The results show that the
models generalize well across the three systems, and are thus a first approximation towards a
general performance model of system usability.
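
The core of this methodology — fitting a linear model of user satisfaction from dialogue-level predictors (in PARADISE, task success and dialogue costs) on one subset of dialogues and measuring predictive power on a held-out set — can be illustrated with a minimal sketch. The feature names, coefficients, and data below are synthetic assumptions for illustration only, not values from the experiments reported here:

```python
import numpy as np

def fit_performance_model(X, y):
    """Fit a linear performance model by least squares.
    X: per-dialogue predictors (e.g., task success, costs); y: user satisfaction."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return Xb @ w

def r_squared(y, y_hat):
    """Proportion of variance in y explained by the predictions."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic corpus (assumption, for illustration): satisfaction increases
# with task success and decreases with dialogue cost.
rng = np.random.default_rng(0)
n = 200
task_success = rng.uniform(0.0, 1.0, n)   # e.g., a task-completion measure
cost = rng.uniform(5.0, 30.0, n)          # e.g., number of system turns
y = 2.0 + 3.0 * task_success - 0.1 * cost + rng.normal(0.0, 0.3, n)
X = np.column_stack([task_success, cost])

# Train on a subset of the corpus, then test generalization on held-out dialogues.
train, test = slice(0, 150), slice(150, n)
w = fit_performance_model(X[train], y[train])
r2_test = r_squared(y[test], predict(w, X[test]))
```

Held-out R² here plays the role of the generalizability measure: a model that fits only its training dialogues would score well in-sample but poorly on the test set.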