Published online by Cambridge University Press: 25 January 2020
V-Dem relies on country experts who code a host of ordinal variables, providing subjective ratings of latent (that is, not directly observable) regime characteristics. Sets of around five experts rate each case, and each rater works independently. Our statistical tools model patterns of disagreement between experts, who may offer divergent ratings because of differences of opinion, variation in scale conceptualization, or mistakes. These tools allow us to aggregate ratings into point estimates of latent concepts and to quantify our uncertainty around these estimates. This chapter describes item response theory models that can account and adjust for differential item functioning (i.e., differences in how experts apply ordinal scales to cases) and for variation in rater reliability (i.e., random error). We also discuss key challenges specific to applying item response theory to expert-coded cross-national panel data, explain how we address them, highlight potential problems with our current framework, and describe long-term plans for improving our models and estimates. Finally, we provide an overview of the end-user-accessible products of the V-Dem measurement model.
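To make the two sources of disagreement concrete, the following minimal sketch simulates the data-generating process the chapter's models are built for. It is an illustration under simplified assumptions, not V-Dem's actual measurement model: each rater applies shared ordinal cutpoints shifted by a rater-specific offset (a simple form of differential item functioning) and perceives the latent trait with rater-specific noise (variation in reliability). All names (`z`, `base_cuts`, `dif`, `noise_sd`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cases, n_raters = 50, 5
z = rng.normal(0.0, 1.0, n_cases)            # latent regime trait per case
base_cuts = np.array([-1.0, 0.0, 1.0])       # shared thresholds for a 4-category scale
dif = rng.normal(0.0, 0.5, n_raters)         # rater-specific threshold shifts (DIF)
noise_sd = rng.uniform(0.3, 1.0, n_raters)   # rater-specific error scale (reliability)

# Each rater perceives the trait with idiosyncratic noise, then
# discretizes it using that rater's shifted cutpoints.
perceived = z[:, None] + rng.normal(0.0, 1.0, (n_cases, n_raters)) * noise_sd
ratings = np.empty((n_cases, n_raters), dtype=int)
for r in range(n_raters):
    ratings[:, r] = np.searchsorted(base_cuts + dif[r], perceived[:, r])

# A crude adjustment for DIF: standardize each rater's ratings before
# averaging, so rater-specific shifts in scale use are removed.
standardized = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
estimate = standardized.mean(axis=1)
corr = np.corrcoef(estimate, z)[0, 1]        # recovery of the latent trait
```

A full item response theory model goes further than this standardization step: it estimates the rater-specific thresholds and error variances jointly with the latent traits, which also yields the uncertainty estimates the chapter emphasizes.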