A model is presented for factor analysing scores on a set of psychological tests administered as both pre- and postmeasures in a study of change. The model assumes that the same factors underlie the tests on each occasion, but that factor scores as well as factor loadings may change between occasions. Factors are defined to be orthogonal between as well as within occasions. A two-stage least squares procedure for fitting the model is described, and generally provides a unique rotation solution for the factors on each occasion.
A description is given of diagrams (available separately) for computing tetrachoric correlation coefficients. The diagrams are entered with the “per cent of combined groups above dividing point” and the difference between the groups in their per cents above the dividing point.
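The same coefficient can be computed numerically today from the equivalent information in a 2 × 2 table. Below is a minimal sketch assuming the usual latent bivariate-normal model; the function, its parameterization (marginal and joint proportions rather than the diagrams' two entry quantities), and the example values are all illustrative.

```python
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def tetrachoric(p1, p2, p11):
    """p1, p2: marginal proportions 'above the dividing point' on each
    variable; p11: joint proportion above on both. Returns the latent
    bivariate-normal correlation that reproduces p11."""
    h = norm.ppf(1 - p1)   # threshold on the first latent variable
    k = norm.ppf(1 - p2)   # threshold on the second latent variable

    def joint_upper(r):
        # P(X > h, Y > k) under a standard bivariate normal with corr r
        mvn = multivariate_normal(mean=[0, 0], cov=[[1, r], [r, 1]])
        return 1 - norm.cdf(h) - norm.cdf(k) + mvn.cdf([h, k])

    return brentq(lambda r: joint_upper(r) - p11, -0.999, 0.999)

# Example: 60% above the dividing point on each variable, 45% on both.
print(round(tetrachoric(0.60, 0.60, 0.45), 3))
```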
A model is presented for evaluating the potential effectiveness of a Bayesian classification system, using the expected value of the posterior probability for true classifications as an evaluation metric. For a given set of input parameters, the value of this complex metric is predictable from a simply computed row-variance metric. Prediction equations are given for several representative sets of input parameters.
Cognitive diagnostic models (CDMs) have arisen as advanced psychometric models in the past few decades for assessments that intend to measure students’ mastery of a set of attributes. Recently, a number of studies have attempted to extend CDMs to longitudinal versions; with the exception of a few studies (e.g., Chen et al. 2018; Madison & Bradshaw 2018), they model transition probabilities from non-mastery to mastery, or vice versa, for each attribute separately. However, these pioneering works have not taken into consideration attribute relationships or the ever-changing attributes over a learning period. In this paper, we consider a profile-level latent transition CDM (TCDM hereafter), which can identify not only transition probabilities for the same attribute over time, but also transition pathways across different attributes. Two versions of the penalized expectation-maximization (PEM) algorithm are proposed to shrink the probabilities associated with impermissible transition pathways to 0 and thereby help explore attribute relationships in a longitudinal setting. Simulation results reveal that PEM with a group penalty holds great promise for identifying learning trajectories.
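To make the shrinkage idea concrete, here is a schematic sketch of an M-step that drives small profile-to-profile transition probabilities to exactly 0. This is an illustration of the effect only, not the authors' PEM algorithm; the expected counts, penalty weight, and thresholding rule are all made up.

```python
import numpy as np

def penalized_m_step(counts, lam):
    # counts[a, b] plays the role of expected transition counts from an
    # E-step; lam is an illustrative penalty weight.
    tau = counts / counts.sum(axis=1, keepdims=True)  # plain M-step
    tau = np.maximum(tau - lam, 0.0)   # soft-threshold small pathways to 0
    return tau / tau.sum(axis=1, keepdims=True)       # renormalize rows

# 4 profiles (K = 2 attributes); rare pathways are shrunk to exactly 0.
counts = np.array([[80., 12.,  5.,  3.],
                   [ 2., 70., 20.,  8.],
                   [ 1.,  4., 85., 10.],
                   [ 1.,  2.,  2., 95.]])
print(penalized_m_step(counts, lam=0.05).round(3))
```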
The test information function serves important roles in latent trait models and in their applications. Among others, it has been used as the measure of accuracy in ability estimation. A question arises, however, as to whether the test information function is accurate enough for all meaningful levels of ability relative to the test, especially when the number of test items is relatively small (e.g., less than 50). In the present paper, using the constant information model and constant amounts of test information for a finite interval of ability, simulated data were produced for eight different levels of ability and for twenty different numbers of test items ranging between 10 and 200. Analyses of these data suggest that it is desirable to consider some modification of the test information function when it is used as the measure of accuracy in ability estimation.
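The issue can be illustrated by comparing the information-based standard error 1/√I(θ) with the empirical error of the ability estimator in a short test. The sketch below does this for a 10-item Rasch test with made-up difficulties; it illustrates the question, not the paper's constant information model.

```python
import numpy as np

rng = np.random.default_rng(1)
b = np.linspace(-2, 2, 10)           # 10 item difficulties (illustrative)
theta = 0.5                          # true ability

def p(t, b):                         # Rasch response probability
    return 1 / (1 + np.exp(-(t - b)))

def mle(x, b, iters=50):             # Newton-Raphson for theta-hat
    t = 0.0
    for _ in range(iters):
        pr = p(t, b)
        t += np.sum(x - pr) / np.sum(pr * (1 - pr))
    return t

info = np.sum(p(theta, b) * (1 - p(theta, b)))
est = []
for _ in range(2000):
    x = (rng.random(b.size) < p(theta, b)).astype(float)
    if 0 < x.sum() < b.size:         # MLE finite only for interior scores
        est.append(mle(x, b))
print("info-based SE:", round(1 / np.sqrt(info), 3))
print("empirical SD :", round(np.std(est), 3))
```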
Gwendolen Bishop is a name that appears in the margins of my recent account of the English avant-garde theatre. Prior to that she barely made it even into the margins, and then often with some rather significant indecision as to how actually to spell her name. The aim of this essay is to retrieve her from the margins and bring her more centrally into view. In doing so I consciously weave together her arts practices and her personal life, for these are deeply connected. The making of an avantist culture in early twentieth-century England was done not simply by arts experiments but also by kinds of behavior that challenged dominant ideas. In our Western twenty-first century we note and make much of Edwardian behaviors that contested assumptions about gender and sexuality, but we should note, alongside that, some equally striking challenges to ideas about class. Both are apparent in the Bishop story, which I tell more or less as a biographical narrative. The danger when one recovers a person from the shadows is that, in trying to situate them among their contemporaries, one writes overmuch about those contemporaries, such that our person fades again into the mists. With our biographical focus fixed solidly on her we can, I hope, discover how Gwendolen Bishop made her very particular contribution to this exciting cultural period on the eve of modernism.
Are economic decisions affected by short-term stress, failure, or both? Such effects have not been clearly distinguished in previous experimental research, and have the potential to worsen economic outcomes, especially in disadvantaged socioeconomic groups. We validate a novel experimental protocol to examine the individual and combined influences of stress, failure, and success. The protocol employs a 2 × 3 experimental design in two sessions and can be used online or in laboratory studies to analyse the impact of these factors on decision-making and behaviour. The stress protocol was perceived as significantly more stressful than a control task, and it induced a sizeable and significant rise in state anxiety. The provision of negative feedback (“failure”) significantly lowered participants’ assessment of their performance, induced feelings of failure, and raised state anxiety.
Using the well-known strategy in which parameters are linked to the sampling distribution via an identification analysis, we offer an interpretation of the item parameters in the one-parameter logistic with guessing model (1PL-G) and the nested Rasch model. The interpretations are based on measures of informativeness that are defined in terms of odds of correctly answering the items. It is shown that the interpretation of what is called the difficulty parameter in the random-effects 1PL-G model differs from that of the item parameter in a random-effects Rasch model. It is also shown that the traditional interpretation of the guessing parameter in the 1PL-G model changes, depending on whether fixed-effects or random-effects versions of both models are considered.
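For reference, the 1PL-G response function and the odds of a correct answer on which the informativeness measures are based can be written as follows (the notation is ours, not the authors'):

```latex
P(X_{ij} = 1 \mid \theta_i)
  = c_j + (1 - c_j)\,\frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)},
\qquad
\text{odds}_{ij} = \frac{P(X_{ij} = 1 \mid \theta_i)}{1 - P(X_{ij} = 1 \mid \theta_i)}.
```

Setting c_j = 0 recovers the nested Rasch model.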
Guttman’s index of indeterminacy (2ρ² − 1) measures the potential amount of uncertainty in picking the right alternative interpretation for a factor. When alternative solutions for a factor are equally likely to be correct, then the squared multiple correlation ρ² for predicting the factor from the observed variables is the average correlation ρ_AB between independently selected alternative solutions A and B, while var(ρ_AB) = (1 − ρ²)²/s, where s is the dimensionality of the space in which unpredicted components of alternative solutions are to be found. When alternative solutions for the factor are not equally likely to be chosen, ρ² is the lower bound for E(ρ_AB); however, E(ρ_AB) need not be a modal value in the distribution of ρ_AB. Guttman’s index and E(ρ_AB) measure different aspects of the same indeterminacy problem.
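A worked example with made-up values shows the scale of these quantities:

```python
# Illustrative values only: a factor predicted well from the observed
# variables, with a 20-dimensional space of unpredicted components.
rho2 = 0.8                          # squared multiple correlation
guttman = 2 * rho2 - 1              # Guttman's indeterminacy index: 0.6
s = 20                              # dimensionality of unpredicted space
var_rho_ab = (1 - rho2) ** 2 / s    # var(rho_AB): 0.002
print(guttman, var_rho_ab)
```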
A commonly used method to evaluate the accuracy of a measurement is to provide a confidence interval that contains the parameter of interest with a given high probability. Smallest exact confidence intervals for the ability parameter of the Rasch model are derived and compared to the traditional, asymptotically valid intervals based on the Fisher information. Tables of the exact confidence intervals, termed Clopper-Pearson intervals, can be routinely drawn up by applying a computer program designed by and obtainable from the author. These tables are particularly useful for tests of only moderate lengths where the asymptotic method does not provide valid confidence intervals.
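A minimal sketch of the construction, assuming known item difficulties: the exact interval inverts the two tail probabilities of the number-correct score, whose distribution under the Rasch model is a sum of independent Bernoulli variables. The difficulties, alpha level, and search bounds below are illustrative, and the sketch applies only to interior (non-zero, non-perfect) scores.

```python
import numpy as np
from scipy.optimize import brentq

def score_dist(theta, b):
    # pmf of the number-correct score given theta, by convolution
    p = 1 / (1 + np.exp(-(theta - b)))
    dist = np.array([1.0])
    for pj in p:
        dist = np.convolve(dist, [1 - pj, pj])
    return dist

def exact_ci(r, b, alpha=0.05, lo=-8.0, hi=8.0):
    # Lower bound: P(R >= r | theta) = alpha/2; upper: P(R <= r) = alpha/2.
    lower = brentq(lambda t: score_dist(t, b)[r:].sum() - alpha / 2, lo, hi)
    upper = brentq(lambda t: score_dist(t, b)[:r + 1].sum() - alpha / 2, lo, hi)
    return lower, upper

b = np.linspace(-1.5, 1.5, 12)      # 12 made-up item difficulties
print(tuple(round(v, 2) for v in exact_ci(7, b)))
```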
An initial transformation for facilitating the analysis of a factor problem is discussed. Such a transformation is used in the rotational procedure developed by L. L. Thurstone in an accompanying article.
Partial Least Squares as applied to models with latent variables, measured indirectly by indicators, is well known to be inconsistent. The linear compounds of indicators that PLS substitutes for the latent variables do not obey the equations that the latter satisfy. We propose simple, non-iterative corrections leading to consistent and asymptotically normal (CAN)-estimators for the loadings and for the correlations between the latent variables. Moreover, we show how to obtain CAN-estimators for the parameters of structural recursive systems of equations, containing linear and interaction terms, without the need to specify a particular joint distribution. If quadratic and higher order terms are included, the approach will produce CAN-estimators as well when predictor variables and error terms are jointly normal. We compare the adjusted PLS, denoted by PLSc, with Latent Moderated Structural Equations (LMS), using Monte Carlo studies and an empirical application.
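At its core the correction applies the classical disattenuation formula: correlations between composites are divided by the square roots of their reliabilities, with PLSc supplying consistent reliability estimates. A sketch of the disattenuation step only (the values here are made up, not PLSc estimates):

```python
import numpy as np

def disattenuate(r_composite, rel_a, rel_b):
    # Correct the composite correlation for attenuation due to the
    # imperfect reliability of each composite.
    return r_composite / np.sqrt(rel_a * rel_b)

print(round(disattenuate(0.42, 0.80, 0.70), 3))   # 0.561
```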
A generalized least squares approach is presented for incorporating linear constraints on the standardized row and column scores obtained from a canonical analysis of a contingency table. The method is easy to implement and may simplify considerably the interpretation of a data matrix. The approach is compared to a restricted maximum likelihood procedure.
An individual differences model for multidimensional scaling is outlined in which individuals are assumed to weight the several dimensions of a common “psychological space” differentially. A corresponding method of analyzing similarities data is proposed, involving a generalization of “Eckart-Young analysis” to the decomposition of three-way (or higher-way) tables. In the present case this decomposition is applied to a derived three-way table of scalar products between stimuli for individuals. This analysis yields a stimulus-by-dimensions coordinate matrix and a subjects-by-dimensions matrix of weights. The method is illustrated with data on auditory stimuli and on the perception of nations.
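A minimal alternating-least-squares sketch of the underlying CANDECOMP-style three-way decomposition T[i,j,k] ≈ Σ_r A[i,r] B[j,r] C[k,r] follows; the dimensions and data are synthetic, and the symmetry tying the two stimulus matrices together is not enforced here.

```python
import numpy as np

def khatri_rao(X, Y):
    # Columnwise Kronecker product; rows index all (row-of-X, row-of-Y) pairs.
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def cp_als(T, r, iters=200, seed=0):
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, r)) for n in (I, J, K))
    for _ in range(iters):   # solve for each factor matrix in turn
        A = np.linalg.lstsq(khatri_rao(B, C),
                            T.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C),
                            np.moveaxis(T, 1, 0).reshape(J, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B),
                            np.moveaxis(T, 2, 0).reshape(K, -1).T, rcond=None)[0].T
    return A, B, C   # stimulus coordinates (A, B) and subject weights (C)

# Recover a rank-2 structure from a synthetic 8 x 8 x 5 array.
rng = np.random.default_rng(1)
X, W = rng.standard_normal((8, 2)), rng.random((5, 2))
T = np.einsum('ir,jr,kr->ijk', X, X, W)
A, B, C = cp_als(T, 2)
fit = np.einsum('ir,jr,kr->ijk', A, B, C)
print(f"relative error: {np.linalg.norm(fit - T) / np.linalg.norm(T):.2e}")
```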
In component analysis solutions, post-multiplying a component score matrix by a nonsingular matrix can be compensated for by applying its inverse to the corresponding loading matrix. To eliminate this indeterminacy under nonsingular transformations, we propose Joint Procrustes Analysis (JPA), in which the component score and loading matrices are simultaneously transformed so that the former matches a target score matrix and the latter matches a target loading matrix. The loss function of JPA is a function of the nonsingular transformation matrix and its inverse, and is difficult to minimize directly. To deal with this difficulty, we reparameterize those matrices by their singular value decomposition, which reduces the minimization to alternately solving quartic equations and performing existing multivariate procedures. This algorithm is assessed in a simulation study. We further extend JPA to cases where the targets are linear functions of unknown parameters. We also discuss how the application of JPA can be extended to different fields.
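A purely numerical sketch makes the setup concrete; it minimizes the JPA-style loss with a general-purpose optimizer rather than the authors' SVD-based reparameterization, and all matrices below are made up.

```python
import numpy as np
from scipy.optimize import minimize

def jpa_loss(t_flat, F, A, Ft, At, r):
    T = t_flat.reshape(r, r)
    # Scores transform as F @ T; loadings compensate via inv(T).T,
    # leaving the product F @ A.T unchanged.
    return (np.linalg.norm(F @ T - Ft) ** 2
            + np.linalg.norm(A @ np.linalg.inv(T).T - At) ** 2)

rng = np.random.default_rng(0)
r = 2
F, A = rng.standard_normal((50, r)), rng.standard_normal((8, r))
T_true = np.array([[1.0, 0.4], [0.0, 1.5]])       # known optimum
Ft, At = F @ T_true, A @ np.linalg.inv(T_true).T  # targets it reproduces
res = minimize(jpa_loss, np.eye(r).ravel(), args=(F, A, Ft, At, r),
               method='Nelder-Mead', options={'fatol': 1e-12, 'xatol': 1e-8})
print(res.x.reshape(r, r).round(3))   # close to T_true
```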
It is shown that Kruskal's multidimensional scaling loss function is differentiable at a local minimum. To put it differently, in multidimensional scaling solutions using Kruskal's stress, distinct points cannot coincide.
We present a semi-parametric approach to estimating item response functions (IRFs) that is useful when the true IRF does not strictly follow commonly used functional forms. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial, and reduces to the regular generalized partial credit model at the lowest polynomial order. Our approach extends Liang’s (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock–Aitkin EM algorithm, thereby facilitating the multiple-group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and to other non-parametric and semi-parametric alternatives.
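One way to guarantee monotonicity can be illustrated in a few lines: a polynomial whose derivative is the square of another polynomial is nondecreasing, so it can replace the linear predictor without violating the ordering of the category response functions. The coefficients below are arbitrary, not estimated from data.

```python
import numpy as np

p = np.polynomial.Polynomial([0.8, 0.3, 0.5])   # arbitrary p(theta)
m = (p ** 2).integ()                            # m'(theta) = p(theta)^2 >= 0
theta = np.linspace(-3, 3, 7)
print(m(theta).round(3))                        # strictly increasing values
```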
Parametric likelihood estimation is the prevailing method for fitting cognitive diagnosis models—also called diagnostic classification models (DCMs). Nonparametric concepts and methods that do not rely on a parametric statistical model have been proposed for cognitive diagnosis. These methods are particularly useful when sample sizes are small. The general nonparametric classification (GNPC) method for assigning examinees to proficiency classes can accommodate assessment data conforming to any diagnostic classification model that describes the probability of a correct item response as an increasing function of the number of required attributes mastered by an examinee (known as the “monotonicity assumption”). Hence, the GNPC method can be used with any model that can be represented as a general DCM. However, the statistical properties of the estimator of examinees’ proficiency class are currently unknown. In this article, the consistency theory of the GNPC proficiency-class estimator is developed and its statistical consistency is proven.
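To illustrate the classification rule, here is a minimal sketch that assigns each examinee to the proficiency class whose ideal response vector is nearest in squared distance. For brevity it uses conjunctive (DINA-type) ideal responses rather than the GNPC's weighted ideal responses, and the Q-matrix and response patterns are made up.

```python
import numpy as np

def classify(X, Q):
    K = Q.shape[1]
    # All 2^K attribute profiles, one row per proficiency class.
    profiles = np.array([[(a >> k) & 1 for k in range(K)]
                         for a in range(2 ** K)])
    # eta[c, j] = 1 iff profile c masters every attribute item j requires.
    eta = (profiles @ Q.T == Q.sum(axis=1)).astype(float)
    # Squared distance from each response pattern to each ideal pattern.
    d = ((X[:, None, :] - eta[None, :, :]) ** 2).sum(axis=2)
    return profiles[d.argmin(axis=1)]

Q = np.array([[1, 0], [0, 1], [1, 1]])        # 3 items, 2 attributes
X = np.array([[1, 0, 0], [1, 1, 1]])          # two response patterns
print(classify(X, Q))   # -> [[1 0], [1 1]]
```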