
On hierarchical vs. non-hierarchical comparisons in metrology and testing

Published online by Cambridge University Press: 19 April 2010

F. Pavese
Affiliation:
Istituto Nazionale di Ricerca Metrologica (INRIM), Strada delle Cacce 73-91, 10139, Torino, Italy
Correspondence: f.pavese@inrim.it

Abstract

The appropriate data treatment differs depending on whether a comparison, in particular a key comparison under the MRA (Mutual Recognition Arrangement), is of the hierarchical or the non-hierarchical type. These terms do not refer to a possible hierarchy among the participating laboratories, nor, conversely, to the absence of such a hierarchy, as in MRA key comparisons; they denote an intrinsic characteristic of the comparison measurand or design. A comparison is typically hierarchical when it involves artefact standards: in that case the summary parameters of the comparison are at a hierarchically higher level than the input dataset. In non-hierarchical comparisons the summary parameters are generally not at a hierarchically higher level than the input dataset, because the comparison data can be considered drawn from a single super-population. This happens when a single standard is circulated for measurement, when the measured samples are all drawn from a single batch of a reference material, or when the standards are all realisations of a single condition, namely a physical or chemical state. This paper discusses these two categories in detail.
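
To make the distinction concrete, here is a minimal sketch of the two cases as statistical models; the notation is illustrative and not taken from the paper. In a hierarchical comparison each laboratory i measures its own artefact, whose value a_i is itself a draw from a population of artefact values:

    x_{ij} = a_i + \varepsilon_{ij}, \qquad a_i \sim N(A, \sigma_a^2),

so the summary parameter A sits one level above the data. In a non-hierarchical comparison all laboratories measure realisations of the same measurand value \mu,

    x_{ij} = \mu + \varepsilon_{ij},

and the observations x_{ij} can be regarded as draws from a single super-population with mean \mu.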

Type: Research Article

Copyright: © EDP Sciences 2010

