Implications of random error of measurement for the sensitivity of the F test of differences between means are elaborated. By considering the mathematical models appropriate to design situations involving true and fallible measures, it is shown how measurement error decreases the sensitivity of a test of significance. A method of reducing such loss of sensitivity is described and recommended for general practice.
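As a sketch of the standard argument behind this result (using classical test theory notation that is not given in the abstract itself), write each observed score as a true score plus independent error, $X = T + E$. Then

$$\sigma_X^2 = \sigma_T^2 + \sigma_E^2, \qquad \rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2},$$

so a standardized difference between group means on the fallible measure shrinks to

$$\delta_X = \frac{\mu_1 - \mu_2}{\sigma_X} = \delta_T \sqrt{\rho_{XX'}},$$

and with it the noncentrality, and hence the power, of the F test.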
For questionnaires with two answer categories, it has been proven in complete generality that if a minimal sufficient statistic exists for the individual parameter and if it is the same statistic for all values of the item parameters, then the raw score (or the number of correct answers) is the minimal sufficient statistic. It follows that the model must be of the Rasch type with logistic item characteristic curves and equal item-discriminating powers.
This paper extends these results to multiple choice questionnaires. It is shown that the minimal sufficient statistic for the individual parameter is a function of the so-called score vector. It is also shown that the so-called equidistant scoring is the only scoring of a questionnaire that allows for a real valued sufficient statistic that is independent of the item parameters, if a certain ordering property for the sufficient statistic holds.
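For reference, the dichotomous Rasch model referred to here (a standard formulation, not quoted from the paper) takes the form

$$P(X_{vi} = 1 \mid \theta_v, \beta_i) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)},$$

under which the likelihood factors so that the raw score $r_v = \sum_i x_{vi}$ is sufficient for the person parameter $\theta_v$ whatever the values of the item parameters $\beta_i$.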
We introduce a new statistical procedure for the identification of unobserved categories that vary between individuals and in which objects may span multiple categories. This procedure can be used to analyze data from a proposed sorting task in which individuals may simultaneously assign objects to multiple piles. The results of a synthetic example and a consumer psychology study involving categories of restaurant brands illustrate how the application of the proposed methodology to the new sorting task can account for a variety of categorization phenomena including multiple category memberships and for heterogeneity through individual differences in the saliency of latent category structures.
The ethical conduct of judicial officers has traditionally been seen as a matter for individual judges to determine for themselves. Today, judges are still frequently left to consider ethical dilemmas with little formal institutional support. They must rely on their own resources or on informal advice and counsel from colleagues and the head of jurisdiction. This article explores whether this arrangement continues to be appropriate. We consider the hypothesis that a number of factors, including the growing numbers and diversity of the judiciary, make it less likely that there will be common understandings of the ethical values to be employed in resolving difficult dilemmas. Thus, we further hypothesise, the traditional arrangements are likely to prove insufficient. Drawing on the findings of a survey of judicial officers across Australian jurisdictions conducted in 2016, we test these hypotheses by reference to the perceptions of Australian judicial officers as to the adequacy of the ethical support available to them. Finally, we consider the variety of supports available in comparable jurisdictions and in the legal profession, before turning to possible solutions to the question our hypotheses raise, including the introduction of ‘ethical infrastructures’ in the form of more formal arrangements that provide ethical guidance to judges. We argue that these ethical support mechanisms have the potential to enhance the quality of ethical decision-making and foster an ethical culture within the judiciary.
This article considers implications of the recent Love decision in the High Court for the debate about Indigenous constitutional recognition and a First Nations constitutional voice. Conceptually, it considers how the differing judgments reconcile the sui generis position of Indigenous peoples under Australian law with the theoretical ideal of equality—concepts which are in tension both in the judicial reasoning and in constitutional recognition debates. It also discusses the judgments’ limited findings on Indigenous sovereignty, demonstrating the extent to which this is predominantly a political question that cannot be adequately resolved by courts. Surviving First Nations sovereignty can best be recognised and peacefully reconciled with Australian state sovereignty through constitutional reform authorised by Parliament and the people. The article then discusses political ramifications. It argues that allegations of judicial activism enlivened by this case, rather than demonstrating the risks of a First Nations voice, in fact illustrate the foresight of the proposal: a First Nations voice was specifically designed to be non-justiciable and therefore intended to address such concerns. Similarly, objections that this case introduced a new, race-based distinction into the Constitution are misplaced. Such race-based distinctions already exist in the Constitution’s text and operation. The article then briefly offers high-level policy suggestions addressing two practical issues arising from Love. With respect to the three-part test of Indigenous identity, it suggests a First Nations voice should avoid the unjustly onerous burdens of proof that are perpetuated in some of the reasoning in Love. It also proposes policy incentives to encourage Indigenous non-citizens resident in Australia to seek Australian citizenship, helping to prevent threats of deportation like those faced by Love and Thoms.
Evidence is given to indicate that Lawley's formulas for the standard errors of maximum likelihood loading estimates do not produce exact asymptotic results. A small modification is derived which appears to eliminate this difficulty.
The National Security Legislation Amendment (Espionage and Foreign Interference) Act 2018 (Cth) introduced the first offences for acts of foreign interference in Australian history. Inter alia, the laws target activities sponsored by a foreign principal which seek to influence Australia’s democratic processes using coercive, deceptive and covert conduct. The Act’s offences addressing coercive and deceptive conduct by foreign actors align with behaviours that are condemned in international law. However, it is the Act’s targeting of ‘covert’ conduct which has drawn the widest criticism, and which was the subject of a High Court challenge in Zhang v Commissioner of Police [2021] HCA 16. Although the High Court was not required to determine the validity of the foreign interference offences, serious questions remain regarding the proportionality of those offences within the legislation which target covert behaviour that is neither coercive nor deceptive. Such benign covert behaviour is not condemned in international law, and its prohibition in Australia presents as an attempt by the government to remediate exploitable gaps in international law by controlling the interactions of its own citizenry with foreign actors. When the available alternatives to such measures are considered, this regulation appears excessive. Thus, a future challenge to Australia’s foreign interference laws may focus on the burden which the foreign interference offence’s ‘covert’ element places on the constitutionally entrenched implied freedom of political communication.
The COVID-19 pandemic and the ensuing mandated health protections saw courts turn to communications technology as a means to continue functioning. However, courts are unique institutions that exercise judicial power in accordance with the rule of law. Even in a pandemic, courts need to function in a manner consistent with their institutional role and their essential characteristics. This article uses the unique circumstances brought about by the pandemic to consider how courts can embrace technology while maintaining the core or essential requirements of a court. The article identifies three essential features of courts—open justice, procedural fairness and impartiality—and examines how this recent adoption of technology has maintained or challenged those essential features. This examination allows for an assessment of how the courts operated during the pandemic and also provides guidance for making design decisions about a technology-enabled future court.
International organisations emphasise that governments around the world must use the public procurement process to aid a global drive to eliminate human trafficking in their supply chains. In this significant and original contribution, the authors examine a leading procurement model, the Australian Commonwealth Procurement Rules (CPR), to determine whether the CPR model satisfies the necessary standards of Legal Certainty and Effectiveness for addressing the risk of trafficking occurring in public sector supply chains. The research generates new insights for countries seeking to tackle trafficking via public procurement systems and identifies pitfalls for countries to avoid if seeking to emulate the Australian CPR model, with reference to US and UK models where appropriate. The authors demonstrate how key elements of the CPR model fail to provide the required degree of Legal Certainty and Effectiveness to tackle trafficking. System failure is demonstrated by analysis of the CPR, showing either that key CPR provisions fail to satisfy these two key tests, or that appropriate provisions to comprehensively deal with the risk of trafficking in public sector supply chains are entirely absent. This article should serve not only as a guide to countries yet to address human rights considerations in their public procurement supply chains, but also as a blueprint for countries around the world seeking to re-evaluate whether existing provisions in their domestic procurement frameworks are fit to tackle the global scourge of trafficking in public supply chains.
The theoretical and practical importance of a double undertaking is discussed: the development of learning and transfer taxonomies with psychometric relevance and the building of psychometric classificatory systems with implications for learning and instruction. Psychometric classifications of human performances are most often based on the covariation of individual differences. The model presented justifies the expectation that the transfer from learning one task to learning another is linearly dependent on the coefficient of intercorrelation between the two tasks when the coefficient is corrected for attenuation. Two studies so far have explicitly confirmed the main deductions from this model. Contrary to the predictions, however, the regression curves yielded negative intercepts. Two empirically testable explanations are offered, one of which would be in full accordance with the model, while the other would call for a further assumption.
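The disattenuated coefficient referred to here is the classical correction for attenuation,

$$\hat{\rho}_{T_x T_y} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}},$$

where $r_{xx}$ and $r_{yy}$ are the reliabilities of the two task measures; the model predicts transfer to be a linear function of this corrected intercorrelation.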
It is well known that the classical exploratory factor analysis (EFA) of data with more observations than variables has several types of indeterminacy. We study the factor indeterminacy and show some new aspects of this problem by considering EFA as a specific data matrix decomposition. We adopt a new approach to the EFA estimation and achieve a new characterization of the factor indeterminacy problem. A new alternative model is proposed, which gives determinate factors and can be seen as a semi-sparse principal component analysis (PCA). An alternating algorithm is developed, where in each step a Procrustes problem is solved. It is demonstrated that the new model/algorithm can act as a specific sparse PCA and as a low-rank-plus-sparse matrix decomposition. Numerical examples with several large data sets illustrate the versatility of the new model, and the performance and behaviour of its algorithmic implementation.
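The abstract does not spell out the algorithm, but the Procrustes subproblem it mentions has a well-known closed-form solution via the singular value decomposition. The Python sketch below shows only that generic step; the matrices and the surrounding alternating updates are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def procrustes_rotation(A, B):
    """Return the orthogonal Q minimizing ||A - B Q||_F (closed form via SVD).

    This is the generic Procrustes step referred to in the abstract; the
    alternating algorithm that updates the other model parts around it is
    not specified here.
    """
    U, _, Vt = np.linalg.svd(B.T @ A)
    return U @ Vt

# Illustrative use: rotate a loading matrix B toward a target A.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
B = rng.standard_normal((10, 3))
Q = procrustes_rotation(A, B)
print(np.linalg.norm(A - B @ Q))  # never larger than np.linalg.norm(A - B)
```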
The slide-vector scaling model attempts to account for the asymmetry of a proximity matrix by a uniform shift in a fixed direction imposed on a symmetric Euclidean representation of the scaled objects. Although no method for fitting the slide-vector model seems available in the literature, the model can be viewed as a constrained version of the unfolding model, which does suggest one possible algorithm. The slide-vector model is generalized to handle three-way data, and two examples from market structure analysis are presented.
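In the usual formulation of the slide-vector model (a sketch of the standard parametrization, not quoted from the paper), the asymmetric proximity from object $i$ to object $j$ is modelled by a shifted Euclidean distance,

$$d_{ij} = \Big( \sum_{a=1}^{m} (x_{ia} - x_{ja} + z_a)^2 \Big)^{1/2},$$

where the common slide vector $z$ induces the asymmetry ($d_{ij} \neq d_{ji}$ in general) and setting $z = 0$ recovers the symmetric Euclidean model.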
The graphic item-counter is described and its use as a statistical device is explained. Procedures are presented for obtaining Pearson product-moment correlations by means of the graphic item-counter.
Structural models that yield circumplex inequality patterns for the elements of correlation matrices are reviewed. Particular attention is given to a stochastic process defined on the circle proposed by T. W. Anderson. It is shown that the Anderson circumplex contains the Markov Process model for a simplex as a limiting case when a parameter tends to infinity.
Anderson's model is intended for correlation matrices with positive elements. A replacement for Anderson's correlation function that permits negative correlations is suggested. It is shown that the resulting model may be reparametrized as a factor analysis model with nonlinear constraints on the factor loadings. An unrestricted factor analysis, followed by an appropriate rotation, is employed to obtain parameter estimates. These estimates may be used as initial approximations in an iterative procedure to obtain minimum discrepancy estimates.
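A common way to write a circumplex correlation structure of this kind (a standard parametrization given here for orientation, not necessarily the exact function proposed in the paper) is a finite Fourier series in the angular separation of the variables,

$$\rho_{jk} = \sum_{m=0}^{M} \beta_m \cos\big( m(\theta_j - \theta_k) \big), \qquad \beta_m \ge 0, \quad \sum_{m=0}^{M} \beta_m = 1,$$

which can be reproduced exactly by a factor model whose loadings on paired factors are $\sqrt{\beta_m}\cos(m\theta_j)$ and $\sqrt{\beta_m}\sin(m\theta_j)$, that is, a factor analysis model with nonlinear constraints on the loadings.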
The High Court applies the ‘text and structure approach’ when deriving constitutional implications. This requires implications to be drawn from the ‘text’ and ‘structure’ of the document. A particular line of criticism has been made by some scholars that frames this approach as a falsehood. According to these scholars, judges claim to be drawing implications solely from the ‘text’ and ‘structure’ but are, in fact, employing ‘external’ sources when carrying out this task. I argue that this criticism is misguided. Judges are using ‘external’ sources to help illuminate the ideas conveyed by, or contained within, the ‘text’ and ‘structure’. This means that their use of ‘external’ sources is not necessarily a circumvention of the text and structure approach but an accompaniment to it. The relevant scholars’ critique seems to be rooted in flawed conceptualisations of the Constitution’s ‘text’ and ‘structure’ and their ideational content. This work examines the problems with the relevant scholars’ critique and offers what I consider to be a more accurate explanation of the operation (and shortcomings) of the text and structure approach.
A previous mathematical study of a situation, in which the behavior of a larger group of individuals is controlled by a smaller group, is generalized for the case when the “activity” of the individuals in the group is continuously graded. The existence of two possible social configurations and of sudden transitions from one configuration to another are found in this case also.
The interrelationships between two sets of measurements made on the same subjects can be studied by canonical correlation. Originally developed by Hotelling [1936], the canonical correlation is the maximum correlation between linear functions (canonical factors) of the two sets of variables. An alternative statistic to investigate the interrelationships between two sets of variables is the redundancy measure, developed by Stewart and Love [1968]. Van Den Wollenberg [1977] has developed a method of extracting factors which maximize redundancy, as opposed to canonical correlation.
A component method is presented which maximizes user-specified convex combinations of canonical correlation and the two nonsymmetric redundancy measures presented by Stewart and Love. Monte Carlo work comparing canonical correlation analysis, redundancy analysis, and various canonical/redundancy factoring analyses on the Van Den Wollenberg data is presented. An empirical example is also provided.
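As an illustration of the two statistics being combined (not the component method proposed in the paper), the following Python sketch computes the canonical correlations and the Stewart and Love total redundancy of one variable set given the other from column-standardized data; the function name and the example data are assumptions.

```python
import numpy as np

def canonical_and_redundancy(X, Y):
    """Canonical correlations and Stewart-Love redundancy of Y given X.

    X (n x p) and Y (n x q) are assumed to be column-standardized. The
    redundancy index returned is the average proportion of variance in the
    Y variables that is reproducible from the X set.
    """
    n = X.shape[0]
    Rxx = X.T @ X / (n - 1)
    Ryy = Y.T @ Y / (n - 1)
    Rxy = X.T @ Y / (n - 1)

    def inv_sqrt(R):
        vals, vecs = np.linalg.eigh(R)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    # Canonical correlations: singular values of Rxx^{-1/2} Rxy Ryy^{-1/2}.
    K = inv_sqrt(Rxx) @ Rxy @ inv_sqrt(Ryy)
    canonical_corrs = np.linalg.svd(K, compute_uv=False)

    # Total redundancy: mean squared multiple correlation of each Y variable
    # regressed on all X variables.
    redundancy = np.trace(Rxy.T @ np.linalg.solve(Rxx, Rxy)) / Y.shape[1]
    return canonical_corrs, redundancy

# Illustrative use with random standardized data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
Y = rng.standard_normal((200, 3)) + 0.5 * X[:, :3]
X = (X - X.mean(0)) / X.std(0, ddof=1)
Y = (Y - Y.mean(0)) / Y.std(0, ddof=1)
print(canonical_and_redundancy(X, Y))
```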
This study develops Markov Chain Monte Carlo (MCMC) estimation theory for the General Condorcet Model (GCM), an item response model for dichotomous response data which does not presume the analyst knows the correct answers to the test a priori (answer key). In addition to the answer key, respondent ability, guessing bias, and difficulty parameters are estimated. With respect to data fit, the study compares the possible GCM formulations, using MCMC-based methods for model assessment and model selection. Real data applications and a simulation study show that the GCM can accurately reconstruct the answer key from a small number of respondents.
With the advent of web-based technology, online testing is becoming a mainstream mode in large-scale educational assessments. Most online tests are administered continuously in a testing window, which may pose test security problems because examinees who take the test earlier may share information with those who take the test later. Researchers have proposed various statistical indices to assess test security, and one of the most often used indices is the average test-overlap rate, which was further generalized to the item pooling index (Chang & Zhang, 2002, 2003). These indices, however, are all defined as means (that is, the expected proportion of common items among examinees), and they were originally proposed for computerized adaptive testing (CAT). Recently, multistage testing (MST) has become a popular alternative to CAT. The unique features of MST make it important to report not only the mean, but also the standard deviation (SD) of the test overlap rate, as we advocate in this paper. The SD of the test overlap rate adds important information to the test security profile, because for the same mean, a large SD reflects that certain groups of examinees share more common items than other groups. In this study, we analytically derived the lower bounds of the SD under MST, with the results under CAT as a benchmark. It is shown that when the mean overlap rate is the same between MST and CAT, the SD of test overlap tends to be larger in MST. A simulation study was conducted to provide empirical evidence. We also compared the security of MST under the single-pool versus the multiple-pool designs; both analytical and simulation studies show that the non-overlapping multiple-pool design slightly increases the security risk.
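For orientation, the quantities discussed (the mean and SD of the pairwise test overlap rate) can be computed directly from the sets of items administered to each examinee. The Python sketch below uses the usual definition, shared items divided by test length for each examinee pair; it is an illustration only, not the paper's analytical derivation, and the example data are invented.

```python
import itertools
import numpy as np

def overlap_rate_summary(administered, test_length):
    """Mean and SD of pairwise test overlap rates.

    `administered` is a list of sets, one per examinee, containing the IDs
    of the items each examinee saw; `test_length` is the fixed number of
    items per examinee. For each pair of examinees the overlap rate is the
    number of shared items divided by the test length.
    """
    rates = [
        len(a & b) / test_length
        for a, b in itertools.combinations(administered, 2)
    ]
    return np.mean(rates), np.std(rates)

# Illustrative use with three hypothetical examinees and 4-item tests.
forms = [{1, 2, 3, 4}, {3, 4, 5, 6}, {1, 2, 5, 6}]
print(overlap_rate_summary(forms, test_length=4))  # (0.5, 0.0)
```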