
Multiple Comparison Procedures for Simple One-Way ANOVA with Dependent Data

Published online by Cambridge University Press:  10 April 2014

Guillermo Vallejo Seco* (University of Oviedo)
Ignacio Menéndez de la Fuente (University of Oviedo)
Paula Fernández García (University of Oviedo)

*Correspondence concerning this article should be addressed to Dr. Guillermo Vallejo, Departamento de Psicología, Universidad de Oviedo, Plaza Feijoo, s/n, 33003 Oviedo, Spain. E-mail: gvallejo@sci.cpd.uniovi.es

Abstract

The independence assumption, although reasonable when examining cross-sectional data from single-factor experimental designs, is seldom verified by investigators. A Monte Carlo simulation experiment was designed to examine the true Type I and Type II error probabilities of six multiple comparison procedures. Various aspects, such as patterns of means, types of hypotheses, and degree of dependence among the observations, were taken into account. Results show that, when independence is violated, none of the procedures controls the per-comparison error rate at the nominal α level. At the same time, as the correlation increases, so does the per-comparison power.

The independence assumption seems reasonable when examining data from a randomized-groups experimental design, which is probably why it is seldom verified by investigators. For that reason, we carried out a Monte Carlo simulation experiment examining the Type I and Type II error rates incurred when using different multiple comparison procedures, across different patterns of means, types of hypotheses, and degrees of dependence among the observations. The results reveal that, when independence is violated, no procedure keeps the per-comparison error rate controlled at the nominal level; at the same time, as the correlation increases even by a small amount, the per-comparison power also increases.
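As a rough illustration of the kind of simulation the abstract describes, the following Python sketch estimates the empirical per-comparison Type I error rate of ordinary pairwise t tests (pooled MSE, no family-wise adjustment) when the observations within each group are equicorrelated. The compound-symmetric dependence structure, the group sizes, the number of replications, the nominal α = .05, and the function name are illustrative assumptions; this is a minimal sketch, not the authors' simulation code or their dependence conditions.

# Minimal Monte Carlo sketch (illustrative assumptions, not the article's design):
# per-comparison Type I error of pairwise t tests when observations are
# equicorrelated *within* groups with correlation rho (groups independent).
import numpy as np
from scipy import stats

def per_comparison_error(rho, k=4, n=10, alpha=0.05, reps=5000, seed=None):
    """Empirical per-comparison Type I error rate under a true null hypothesis."""
    rng = np.random.default_rng(seed)
    # Compound-symmetric covariance for one group: unit variances, correlation rho.
    cov = (1.0 - rho) * np.eye(n) + rho * np.ones((n, n))
    chol = np.linalg.cholesky(cov)
    crit = stats.t.ppf(1.0 - alpha / 2.0, df=k * (n - 1))
    rejections = comparisons = 0
    for _ in range(reps):
        # H0 true: all k group means equal zero; columns (groups) drawn independently.
        groups = (chol @ rng.standard_normal((n, k))).T       # shape (k, n)
        means = groups.mean(axis=1)
        mse = groups.var(axis=1, ddof=1).mean()               # pooled within-group variance
        se = np.sqrt(2.0 * mse / n)
        for i in range(k):
            for j in range(i + 1, k):
                comparisons += 1
                if abs(means[i] - means[j]) / se > crit:
                    rejections += 1
    return rejections / comparisons

if __name__ == "__main__":
    for rho in (0.0, 0.1, 0.3):
        print(f"rho = {rho:.1f}: empirical per-comparison alpha ≈ {per_comparison_error(rho, seed=1):.3f}")

Under this structure the pooled MSE has expectation σ²(1 − ρ) while the variance of a difference between group means is 2σ²[1 + (n − 1)ρ]/n, so the denominator of the t statistic is too small on average and the empirical error rate drifts above the nominal level as ρ grows, consistent with the pattern the abstract reports.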

Type: Articles
Copyright: © Cambridge University Press 1999

