We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers 270 deg² of an area covered by the Dark Energy Survey, reaching a depth of 25–30 μJy/beam rms at a spatial resolution of 11–18 arcsec, resulting in a catalogue of approximately 220 000 sources, of which approximately 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here.
Frascati international research criteria for HIV-associated neurocognitive disorders (HAND) are controversial; some investigators have argued that the Frascati criteria are too liberal, resulting in a high false positive rate. Meyer et al. recommended more conservative revisions to HAND criteria, including exploration of other commonly used methods of identifying neurocognitive impairment (NCI) in HIV, such as the global deficit score (GDS). This study compares NCI classifications by the Frascati, Meyer, and GDS methods in relation to neuroimaging markers of brain integrity in HIV.
Two hundred forty-one people living with HIV (PLWH) without current substance use disorder or severe (confounding) comorbid conditions underwent comprehensive neurocognitive testing and brain structural magnetic resonance imaging and magnetic resonance spectroscopy. Participants were classified using Frascati versus Meyer criteria as concordant unimpaired [Frascati(Un)/Meyer(Un)], concordant impaired [Frascati(Imp)/Meyer(Imp)], or discordant [Frascati(Imp)/Meyer(Un)], i.e., impaired by Frascati criteria but unimpaired by Meyer criteria. To compare the GDS with the Meyer criteria, the same groupings were formed using GDS criteria in place of Frascati criteria.
When examining Frascati versus Meyer criteria, discordant Frascati(Imp)/Meyer(Un) individuals had less cortical gray matter, greater sulcal cerebrospinal fluid volume, and greater evidence of neuroinflammation (i.e., choline) than concordant Frascati(Un)/Meyer(Un) individuals. GDS versus Meyer comparisons indicated that discordant GDS(Imp)/Meyer(Un) individuals had less cortical gray matter and lower levels of energy metabolism (i.e., creatine) than concordant GDS(Un)/Meyer(Un) individuals. In both sets of analyses, the discordant group did not differ from the concordant impaired group on any neuroimaging measure.
The Meyer criteria failed to capture a substantial portion of PLWH with brain abnormalities. These findings support continued use of Frascati or GDS criteria to detect HIV-associated CNS dysfunction.
Objectives: Studies of neurocognitively elite older adults, termed SuperAgers, have identified clinical predictors and neurobiological indicators of resilience against age-related neurocognitive decline. Despite rising rates of older persons living with HIV (PLWH), SuperAging (SA) in PLWH remains undefined. We aimed to establish neuropsychological criteria for SA in PLWH and examined clinically relevant correlates of SA. Methods: 734 PLWH and 123 HIV-uninfected participants between 50 and 64 years of age underwent neuropsychological and neuromedical evaluations. SA was defined as demographically corrected (i.e., sex, race/ethnicity, education) global neurocognitive performance within normal range for 25-year-olds. Remaining participants were labeled cognitively normal (CN) or impaired (CI) based on actual age. Chi-square and analysis of variance tests examined HIV group differences on neurocognitive status and demographics. Within PLWH, neurocognitive status differences were tested on HIV disease characteristics, medical comorbidities, and everyday functioning. Multinomial logistic regression explored independent predictors of neurocognitive status. Results: Neurocognitive status rates and demographic characteristics differed between PLWH (SA=17%; CN=38%; CI=45%) and HIV-uninfected participants (SA=35%; CN=55%; CI=11%). In PLWH, neurocognitive groups were comparable on demographic and HIV disease characteristics. Younger age, higher verbal IQ, absence of diabetes, fewer depressive symptoms, and lifetime cannabis use disorder increased likelihood of SA. SA participants reported greater independence in everyday functioning, higher rates of employment, and better health-related quality of life than non-SA participants. Conclusions: Despite the combined neurological risk of aging and HIV, youthful neurocognitive performance is possible for older PLWH. SA relates to improved real-world functioning and may be better explained by cognitive reserve and maintenance of cardiometabolic and mental health than by HIV disease severity. Future research investigating biomarker and lifestyle (e.g., physical activity) correlates of SA may help identify modifiable neuroprotective factors against HIV-related neurobiological aging. (JINS, 2019, 25, 507–519)
Childhood maltreatment is one of the strongest predictors of adulthood depression, and alterations to circulating levels of inflammatory markers are one putative mechanism mediating risk or resilience.
To determine the effects of childhood maltreatment on circulating levels of 41 inflammatory markers in healthy individuals and those with a major depressive disorder (MDD) diagnosis.
We investigated the association of childhood maltreatment with levels of 41 inflammatory markers in two groups, 164 patients with MDD and 301 controls, using multiplex electrochemiluminescence methods applied to blood serum.
Childhood maltreatment was not associated with altered inflammatory markers in either group after multiple testing correction. Body mass index (BMI) exerted strong effects on interleukin-6 and C-reactive protein levels in those with MDD.
Childhood maltreatment did not exert effects on inflammatory marker levels in either the participants with MDD or the control group in our study. Our results instead highlight the more pertinent influence of BMI.
Declaration of interest
D.A.C. and H.W. work for Eli Lilly Inc. R.N. has received speaker fees from Sunovion, Janssen and Lundbeck. G.B. has received consultancy fees and funding from Eli Lilly. R.H.M.-W. has received consultancy fees or has a financial relationship with AstraZeneca, Bristol-Myers Squibb, Cyberonics, Eli Lilly, Ferrer, Janssen-Cilag, Lundbeck, MyTomorrows, Otsuka, Pfizer, Pulse, Roche, Servier, SPIMACO and Sunovion. I.M.A. has received consultancy fees or has a financial relationship with Alkermes, Lundbeck, Lundbeck/Otsuka, and Servier. S.W. has sat on advisory boards for Sunovion and Allergan and has received speaker fees from AstraZeneca. A.H.Y. has received honoraria for speaking from AstraZeneca, Lundbeck, Eli Lilly and Sunovion; honoraria for consulting from Allergan, Livanova, Lundbeck, Sunovion and Janssen; and research grant support from Janssen. A.J.C. has received honoraria for speaking from AstraZeneca, honoraria for consulting from Allergan, Livanova and Lundbeck, and research grant support from Lundbeck.
Objectives: Human immunodeficiency virus (HIV) disproportionately affects Hispanics/Latinos in the United States, yet little is known about neurocognitive impairment (NCI) in this group. We compared the rates of NCI in large well-characterized samples of HIV-infected (HIV+) Latinos and (non-Latino) Whites, and examined HIV-associated NCI among subgroups of Latinos. Methods: Participants included English-speaking HIV+ adults assessed at six U.S. medical centers (194 Latinos, 600 Whites). For the overall group, age: M=42.65 years, SD=8.93; 86% male; education: M=13.17, SD=2.73; 54% had acquired immunodeficiency syndrome. NCI was assessed with a comprehensive test battery with normative corrections for age, education, and gender. Covariates examined included HIV-disease characteristics, comorbidities, and genetic ancestry. Results: Compared with Whites, Latinos had higher rates of global NCI (54% vs. 42%) and of domain NCI in executive function, learning, recall, working memory, and processing speed. Latinos also fared worse than Whites on current and historical HIV-disease characteristics, and nadir CD4 partially mediated ethnic differences in NCI. Yet, Latinos continued to have more global NCI [odds ratio (OR)=1.59; 95% confidence interval (CI)=1.13–2.23; p<.01] after adjusting for significant covariates. Higher rates of global NCI were observed with Puerto Rican (n=60; 71%) versus Mexican (n=79; 44%) origin/descent; this disparity persisted in models adjusting for significant covariates (OR=2.40; CI=1.11–5.29; p=.03). Conclusions: HIV+ Latinos, especially those of Puerto Rican (vs. Mexican) origin/descent, had increased rates of NCI compared with Whites. Differences in rates of NCI were not completely explained by worse HIV-disease characteristics, neurocognitive comorbidities, or genetic ancestry. Future studies should explore culturally relevant psychosocial, biomedical, and genetic factors that might explain these disparities and inform the development of targeted interventions. (JINS, 2018, 24, 163–175)
Both qualitative and quantitative research routinely fall short, producing misleading causal inferences. Because these weaknesses are in part different, we are convinced that multimethod strategies are productive. Each approach can provide additional leverage that helps address shortcomings of the other. This position is quite distinct from that of Beck, who believes that the two types of analysis cannot be adjoined. We review examples of adjoining that Beck dismisses, based on what we see as his outdated view of qualitative methods. By contrast, we show that these examples demonstrate how qualitative and quantitative analysis can work together.
Concerns about nutrient loads into our waters have focused attention on poultry litter applications. Like many states with a large poultry industry, Georgia recently designed a subsidy program to facilitate the transportation of poultry litter out of vulnerable watersheds. This paper uses a transportation model to examine the necessity of a poultry litter subsidy to achieve water protection goals in Georgia. We also demonstrate the relationship between diesel and synthetic fertilizer prices and the value of poultry litter. Results suggest that a well-functioning market would be able to remove excess litter from vulnerable watersheds in the absence of a subsidy.
Process tracing is a fundamental tool of qualitative analysis. This method is often invoked by scholars who carry out within-case analysis based on qualitative data, yet frequently it is neither adequately understood nor rigorously applied. This deficit motivates this article, which offers a new framework for carrying out process tracing. The reformulation integrates discussions of process tracing and causal-process observations, gives greater attention to description as a key contribution, and emphasizes the causal sequence in which process-tracing observations can be situated. In the current period of major innovation in quantitative tools for causal inference, this reformulation is part of a wider, parallel effort to achieve greater systematization of qualitative methods. A key point here is that these methods can add inferential leverage that is often lacking in quantitative analysis. This article is accompanied by online teaching exercises, focused on four examples from American politics, two from comparative politics, three from international relations, and one from public health/epidemiology.
Abstract. Proportional-hazards models are frequently used to analyze data from randomized controlled trials. This is a mistake. Randomization does not justify the models, which are rarely informative. Simpler methods work better. This discussion is salient because the misuse of survival analysis has introduced a new hazard in epidemiology: It can lead to serious mistakes in medical treatment. Life tables, Kaplan-Meier curves, and proportional-hazards models, aka “Cox models,” all require strong assumptions, such as stationarity of mortality and independence of competing risks. Where the assumptions fail, the methods also tend to fail. Justifying those assumptions is fraught with difficulty. This is illustrated with examples: the impact of religious feelings on survival and the efficacy of hormone replacement therapy. What are the implications for statistical practice? With observational studies, the models could help disentangle causal relations if the assumptions behind the models can be justified.
In this chapter, I will discuss life tables and Kaplan-Meier estimators, which are similar to life tables. Then I turn to proportional-hazards models, aka “Cox models.” Along the way, I will look at the efficacy of screening for lung cancer, the impact of negative religious feelings on survival, and the efficacy of hormone replacement therapy.
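For readers who want to see the product-limit idea made concrete, here is a minimal sketch, not taken from the chapter, of the Kaplan-Meier calculation on made-up follow-up data; the function name kaplan_meier and the example times are illustrative assumptions only.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate of the survival function.

    times  : follow-up times (event or censoring), in any time unit
    events : 1 if the event (e.g., death) was observed, 0 if censored
    Returns the distinct event times and the estimated S(t) just after each.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)

    surv = 1.0
    out_t, out_s = [], []
    n_at_risk = len(times)
    for t in np.unique(times):
        at_this_time = times == t
        d = events[at_this_time].sum()       # observed events at time t
        if d > 0:
            surv *= 1.0 - d / n_at_risk      # product-limit step
            out_t.append(t)
            out_s.append(surv)
        n_at_risk -= at_this_time.sum()      # drop events and censorings from the risk set
    return np.array(out_t), np.array(out_s)

# Hypothetical follow-up data: times in months, 1 = died, 0 = censored.
t = [2, 3, 3, 5, 7, 8, 8, 12]
e = [1, 1, 0, 1, 0, 1, 1, 0]
times, s_hat = kaplan_meier(t, e)
for ti, si in zip(times, s_hat):
    print(f"S({ti:g}) = {si:.3f}")
```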
What are the conclusions about statistical practice? Proportional-hazards models are frequently used to analyze data from randomized controlled trials.
Abstract. Regression models have been used in the social sciences at least since 1899, when Yule published a paper on the causes of pauperism. Regression models are now used to make causal arguments in a wide variety of applications, and it is perhaps time to evaluate the results. No definitive answers can be given, but this chapter takes a rather negative view. Snow's work on cholera is presented as a success story for scientific reasoning based on nonexperimental data. Failure stories are also discussed, and comparisons may provide some insight. In particular, this chapter suggests that statistical technique can seldom be an adequate substitute for good design, relevant data, and testing predictions against reality in a variety of settings.
Regression models have been used in social sciences at least since 1899, when Yule published his paper on changes in “out-relief” as a cause of pauperism: He argued that providing income support outside the poorhouse increased the number of people on relief. At present, regression models are used to make causal arguments in a wide variety of social science applications, and it is perhaps time to evaluate the results.
A crude four-point scale may be useful:
1. Regression usually works, although it is (like anything else) imperfect and may sometimes go wrong.
2. Regression sometimes works in the hands of skillful practitioners, but it isn't suitable for routine use.
3. Regression might work, but it hasn't yet.
4. Regression can't work.
Abstract. The “Huber Sandwich Estimator” can be used to estimate the variance of the MLE when the underlying model is incorrect. If the model is nearly correct, so are the usual standard errors, and robustification is unlikely to help much. On the other hand, if the model is seriously in error, the sandwich may help on the variance side, but the parameters being estimated by the MLE are likely to be meaningless, except perhaps as descriptive statistics.
This chapter gives an informal account of the so-called “Huber Sandwich Estimator,” for which Peter Huber is not to be blamed. We discuss the algorithm and mention some of the ways in which it is applied. Although the chapter is mainly expository, the theoretical framework outlined here may have some elements of novelty. In brief, under rather stringent conditions the algorithm can be used to estimate the variance of the MLE when the underlying model is incorrect. However, the algorithm ignores bias, which may be appreciable. Thus, results are liable to be misleading.
To begin the mathematical exposition, let i index observations whose values are yi. Let θ ∈ Rp be a p × 1 parameter vector. Let y → fi(y|θ) be a positive density. If yi takes only the values 0 or 1, which is the chief case of interest here, then fi(0|θ) > 0, fi(1|θ) > 0, and fi(0|θ) + fi(1|θ) = 1. Some examples involve real- or vector-valued yi, and the notation is set up in terms of integrals rather than sums.
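To make the algorithm concrete, the following is a rough numerical sketch, not code from the chapter, of the sandwich variance A⁻¹BA⁻¹ for a logistic-regression MLE on simulated binary data; the simulated design, sample size, and variable names are assumptions introduced purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: n binary responses y_i with an intercept and one covariate.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # n x p design matrix
beta_true = np.array([-0.5, 1.0])
p_true = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, p_true)

# Fit the logit MLE by Newton-Raphson.
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    score = X.T @ (y - p)                      # gradient of the log-likelihood
    hess = -(X * (p * (1 - p))[:, None]).T @ X # Hessian of the log-likelihood
    beta -= np.linalg.solve(hess, score)

# Sandwich: A^{-1} B A^{-1}, with A the Hessian and B the sum of outer
# products of per-observation scores, both evaluated at the MLE.
p = 1.0 / (1.0 + np.exp(-X @ beta))
A = -(X * (p * (1 - p))[:, None]).T @ X
scores = X * (y - p)[:, None]                  # one row of scores per observation
B = scores.T @ scores
A_inv = np.linalg.inv(A)
V_model = -A_inv                               # model-based variance (inverse information)
V_sandwich = A_inv @ B @ A_inv                 # Huber sandwich variance

print("model-based SEs:", np.sqrt(np.diag(V_model)))
print("sandwich SEs:   ", np.sqrt(np.diag(V_sandwich)))
```

When the logit model is correct, as in this simulation, the two sets of standard errors come out nearly the same, which matches the point made in the abstract above.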
“Son, no matter how far you travel, or how smart you get, always remember this: Someday, somewhere, a guy is going to show you a nice brand-new deck of cards on which the seal is never broken, and this guy is going to offer to bet you that the jack of spades will jump out of this deck and squirt cider in your ear. But, son, do not bet him, for as sure as you do you are going to get an ear full of cider.”
Abstract. After sketching the conflict between objectivists and subjectivists on the foundations of statistics, this chapter discusses an issue facing statisticians of both schools, namely, model validation. Statistical models originate in the study of games of chance and have been successfully applied in the physical and life sciences. However, there are basic problems in applying the models to social phenomena; some of the difficulties will be pointed out. Hooke's law will be contrasted with regression models for salary discrimination, the latter being a fairly typical application in the social sciences.
What is probability?
For a contemporary mathematician, probability is easy to define, as a countably additive set function on a σ-field, with a total mass of one. This definition, perhaps cryptic for non-mathematicians, was introduced by A. N. Kolmogorov around 1930, and has been extremely convenient for mathematical work; theorems can be stated with clarity, and proved with rigor.
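For readers unfamiliar with the measure-theoretic language, the definition just quoted can be written out as follows; this is a standard statement of Kolmogorov's axioms, added here as a gloss rather than taken from the chapter.

```latex
% A probability is a countably additive set function of total mass one,
% defined on a sigma-field \mathcal{F} of subsets of a sample space \Omega.
\[
P \colon \mathcal{F} \to [0,1], \qquad P(\Omega) = 1,
\]
\[
P\Bigl(\bigcup_{k=1}^{\infty} A_k\Bigr) = \sum_{k=1}^{\infty} P(A_k)
\quad \text{whenever } A_1, A_2, \ldots \in \mathcal{F} \text{ are pairwise disjoint.}
\]
```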
Abstract. Regression adjustments are often made to experimental data to address confounders that may not be balanced by randomization. Since randomization does not justify the models, bias is likely; nor are the usual variance calculations to be trusted. Here, we evaluate regression adjustments using Neyman's non-parametric model. Previous results are generalized, and more intuitive proofs are given. A bias term is isolated, and conditions are given for unbiased estimation in finite samples.
Data from randomized controlled experiments (including clinical trials) are often analyzed using regression models and the like. The behavior of the estimates can be calibrated using the non-parametric model in Neyman (1923), where each subject has potential responses to several possible treatments. Only one response can be observed, according to the subject's assignment; the other potential responses must then remain unobserved. Covariates are measured for each subject and may be entered into the regression, perhaps with the hope of improving precision by adjusting the data to compensate for minor imbalances in the assignment groups.
As discussed in Freedman (2006b [Chapter 17], 2008a), randomization does not justify the regression model, so that bias can be expected, and the usual formulas do not give the right variances. Moreover, regression need not improve precision. Here, we extend some of those results, with proofs that are more intuitive.
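As a rough illustration of this setup, and not code from the chapter, the sketch below generates fixed potential responses under Neyman's model and compares the simple difference-in-means estimator with a regression-adjusted estimator over repeated randomizations; the covariate, the response functions, and the sample size are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Neyman's non-parametric model: each of n subjects has two fixed potential
# responses, one under treatment and one under control; only one is observed.
n = 50
z = rng.normal(size=n)                           # a fixed covariate for each subject
y_control = 2.0 + 1.5 * z + rng.normal(size=n)   # potential response under control
y_treat = y_control + 1.0 + 0.5 * z**2           # potential response under treatment
ate_true = np.mean(y_treat - y_control)          # finite-population average effect

def one_experiment():
    """Randomize half the subjects to treatment; return both estimates."""
    treated = np.zeros(n, dtype=bool)
    treated[rng.choice(n, size=n // 2, replace=False)] = True
    y_obs = np.where(treated, y_treat, y_control)

    # Simple difference of means, justified by the randomization itself.
    diff_means = y_obs[treated].mean() - y_obs[~treated].mean()

    # Regression adjustment: regress y on a treatment dummy and the covariate.
    X = np.column_stack([np.ones(n), treated.astype(float), z])
    coef, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    return diff_means, coef[1]

estimates = np.array([one_experiment() for _ in range(4000)])
print("true average effect:", round(ate_true, 3))
print("difference in means: bias %.3f, SD %.3f"
      % (estimates[:, 0].mean() - ate_true, estimates[:, 0].std()))
print("regression adjusted: bias %.3f, SD %.3f"
      % (estimates[:, 1].mean() - ate_true, estimates[:, 1].std()))
```

Under this kind of simulation the difference in means is unbiased by construction, while the adjusted estimator can show a small finite-sample bias and need not have smaller spread, which is the behavior the chapter examines analytically.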