Accurate methods for determining the duration of HIV infection at the individual level are valuable in many settings, including critical research studies and clinical practice (especially for acute infection). Since its first publication in 2003, the ‘Fiebig staging system’ has been the primary way of classifying early HIV infection into five sequential stages based on HIV test result patterns in newly diagnosed individuals. However, Fiebig stages can only be assigned to individuals who produce both a negative and a positive test result on the same day, on specific pairs of tests of varying ‘sensitivity’. Further, in the past 16 years HIV-testing technology has evolved substantially, and three of the five key assays used to define Fiebig stages are no longer widely used. To address these limitations, we developed an improved and more general framework for estimating the duration of HIV infection that translates any combination of diagnostic test results, whether obtained on a single day or on multiple days, into an estimated date of detectable infection, or EDDI. A key advantage of the EDDI method over Fiebig staging is that it generates both a point estimate and an associated credibility interval for the date of first detectable infection, for any person who has at least one positive and one negative HIV test of any kind. The tests do not have to be run on the same day; they do not have to be run during the acute phase of infection; and the method does not rely on any special pairing of tests to define ‘stages’ of infection. The size of the interval surrounding the EDDI (and therefore the precision of the estimate itself) depends largely on the length of time between negative and positive tests. The EDDI approach is also flexible, seamlessly incorporating any assay for which there is a reasonable diagnostic delay estimate. An open-source, free online tool includes a user-updatable curated database of published diagnostic delays.
HIV diagnostics have evolved tremendously since that original publication more than 15 years ago, and it is time to similarly evolve the methods used to estimate timing of infection. The EDDI method is a flexible and rigorous way to estimate the timing of HIV infection in a continuously evolving diagnostic landscape.
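The interval logic behind an EDDI can be sketched in a few lines: each negative result pushes the earliest plausible date of detectable infection later, and each positive result pulls the latest plausible date earlier. This is only a minimal sketch; the assay names and diagnostic delays below are hypothetical placeholders, not values from the curated database.

```python
from datetime import date, timedelta

# Hypothetical diagnostic delays (days from first detectable infection to a
# positive result on each assay); real values come from a curated database.
DIAGNOSTIC_DELAYS = {"rna": 0, "p24_antigen": 5, "third_gen_ab": 18}

def eddi_interval(tests):
    """tests: iterable of (test_date, assay, 'pos'|'neg').
    Returns (earliest, latest) plausible date of detectable infection."""
    earliest, latest = None, None
    for test_date, assay, result in tests:
        delay = timedelta(days=DIAGNOSTIC_DELAYS[assay])
        if result == "neg":
            # Still negative at test_date: detectability began after test_date - delay.
            bound = test_date - delay + timedelta(days=1)
            if earliest is None or bound > earliest:
                earliest = bound
        else:
            # Already positive at test_date: detectability began by test_date - delay.
            bound = test_date - delay
            if latest is None or bound < latest:
                latest = bound
    return earliest, latest

tests = [
    (date(2019, 3, 1), "third_gen_ab", "neg"),
    (date(2019, 4, 15), "rna", "pos"),
]
earliest, latest = eddi_interval(tests)
eddi_midpoint = earliest + (latest - earliest) / 2
```

The width of the (earliest, latest) interval, which drives the credibility interval around the EDDI, shrinks as the gap between the last negative and first positive test narrows.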
Research has shown that religious affiliation has a protective effect against deliberate self-harm. This is particularly pronounced in periods of increased religious significance, such as periods of worship, celebration, and fasting. However, no data exist as to whether this effect is present during the Christian period of Lent. Our hypothesis was that Lent would be associated with fewer self-harm presentations to the emergency department (ED) in a predominantly Catholic area of Ireland.
Following ethical approval, we retrospectively analysed data on presentations to the ED of University Hospital Limerick during the period of Lent and the 40 days immediately preceding it. Frequency data were compared using Pearson’s chi-squared tests in SPSS.
There was no significant difference in the overall number of people presenting to the ED with self-harm during Lent compared to the 40 days preceding it (χ2 = 0.75, df = 1, p > 0.05), and there was no difference in the methods of self-harm used. However, there was a significant increase in attendances with self-harm during Lent in the over-50s age group (χ2 = 7.76, df = 1, p = 0.005).
Based on our study, Lent was not protective against deliberate self-harm and was associated with increased presentations in the over-50s age group. Further large-scale studies are warranted to investigate this finding, as it has implications for the prevention and management of deliberate self-harm.
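As a rough illustration of the frequency comparison used here, a chi-squared goodness-of-fit test on two period counts can be computed directly; for one degree of freedom the p-value follows from the complementary error function. The counts below are hypothetical, not the study's data.

```python
import math

# Chi-squared goodness-of-fit for two period counts under a null of equal
# rates (hypothetical counts, not the Limerick ED data).
def chi_sq_two_counts(n1, n2):
    expected = (n1 + n2) / 2
    return sum((n - expected) ** 2 / expected for n in (n1, n2))

stat = chi_sq_two_counts(60, 40)           # e.g. Lent vs the preceding 40 days
p_value = math.erfc(math.sqrt(stat / 2))   # chi-squared survival function, df = 1
```

With counts of 60 and 40 the statistic is 4.0 and p ≈ 0.045; the over-50s result reported above (χ2 = 7.76) corresponds to a more lopsided split between the two periods.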
We have detected 27 new supernova remnants (SNRs) using a new data release of the GLEAM survey from the Murchison Widefield Array telescope, including the lowest surface brightness SNR ever detected, G 0.1 – 9.7. Our method uses spectral fitting to the radio continuum to derive spectral indices for 26/27 candidates, and our low-frequency observations probe a steeper spectrum population than previously discovered. None of the candidates have coincident WISE mid-IR emission, further showing that the emission is non-thermal. Using pulsar associations we derive physical properties for six candidate SNRs, finding G 0.1 – 9.7 may be younger than 10 kyr. Sixty per cent of the candidates subtend areas larger than 0.2 deg2 on the sky, compared to < 25% of previously detected SNRs. We also make the first detection of two SNRs in the Galactic longitude range 220°–240°.
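The spectral fitting mentioned above reduces, for a pure power law S ∝ ν^α between two frequencies, to a ratio of logarithms. The flux densities and frequencies below are illustrative, not measurements from the survey.

```python
import math

# Two-point radio spectral index, assuming a power law S ∝ nu**alpha
# (illustrative flux densities and frequencies, not survey measurements).
def spectral_index(s1, nu1, s2, nu2):
    return math.log(s1 / s2) / math.log(nu1 / nu2)

# A source twice as bright at 88 MHz as at 200 MHz has the steep,
# negative spectral index typical of non-thermal SNR emission.
alpha = spectral_index(2.0, 88e6, 1.0, 200e6)
```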
This work makes available a further release of the GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey, covering half of the accessible galactic plane, across 20 frequency bands sampling 72–231 MHz. Unlike previous GLEAM data releases, we used multi-scale CLEAN to better deconvolve large-scale galactic structure. For the galactic longitude ranges $345^\circ < l < 67^\circ$ and $180^\circ < l < 240^\circ$, we provide a compact source catalogue of 22 037 components selected from a 60-MHz bandwidth image centred at 200 MHz, with position accuracy better than 2 arcsec. The catalogue has a completeness of 50% at its faint flux-density limit and a reliability of 99.86%. It covers a band of galactic latitudes about the plane, with separate limits towards the galactic centre and for other regions, and is available from Vizier; images for all longitudes are made available on the GLEAM Virtual Observatory (VO) server and SkyView.
We examined the latest data release from the GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey, covering 345° < l < 60° and 180° < l < 240°, using these data together with those of the Widefield Infrared Survey Explorer to follow up candidate supernova remnants (SNRs) proposed by other sources. Of the 101 candidates proposed in the region, we are able to definitively confirm ten as SNRs, tentatively confirm two as SNRs, and reclassify five as H ii regions. A further two are detectable in our images but difficult to classify; the remaining 82 are undetectable in these data. We also investigated the 18 unclassified Multi-Array Galactic Plane Imaging Survey (MAGPIS) candidate SNRs, newly confirming three as SNRs, reclassifying two as H ii regions, and exploring the unusual spectra and morphology of two others.
Introduction: Telephone Triage Services (TTS) manage phone calls from the public regarding general medical problems and provide telephone advice. This telephone-based care can overlap with care provided by Poison Centres. Our objective was to examine the impact of a provincial 811 TTS on the IWK Regional Poison Centre (RPC). Methods: This is a retrospective descriptive study using interrupted time series methodology. We compared monthly IWK RPC call volume in the pre-811 era (January 2007–July 2009) and the post-811 era (September 2009–December 2017). We summarized the characteristics of callers who accessed the IWK RPC in terms of client age, sex, intentionality, time of day, call disposition and outcome. Caller characteristics were compared between the pre- and post-811 eras using chi-square tests for categorical variables. We used segmented regression analysis to evaluate changes in the slope of call volume between the pre- and post-811 eras. The Durbin–Watson test was used to check for serial correlation, and the Dickey–Fuller test to investigate seasonality. Results: The dataset included 82,683 calls to the IWK RPC: 27,028 pre-811 and 55,655 post-811. Overall, 55% of calls were for female clients, and the largest age group was children aged 0–5 years (37%). Most calls originated from home (47%), followed by a health care facility (23%). Most calls were managed at home (65%). Less than 3% of calls resulted in a major effect or death. The Durbin–Watson statistic was not statistically significant (p = 0.94). The Dickey–Fuller test indicated series stationarity (p = 0.001). There was no statistically significant change in call volume to the IWK RPC attributable to the introduction of 811 (p = 0.39). There was no significant variation by time of day, day of week or month, with most calls occurring in the evening. There were significantly more calls regarding intentional ingestions in the post-811 era (23% vs. 19% pre-811, p < .001).
Outcomes in the pre- and post-811 eras were as follows: minor/no effect/non-toxic/minimal, 80% vs. 78%; moderate, 7% vs. 10%; and major/death, 1.7% vs. 2.0%. Conclusion: The introduction of a TTS did not change call volumes at our RPC. The increase in the percentage of calls about intentional ingestions may reflect an increase in call acuity, as the 811 TTS likely manages calls about minor/non-toxic ingestions without consulting the RPC. Our future research will examine the nature of poison-related calls to the 811 TTS.
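The slope comparison at the heart of the segmented regression can be illustrated with ordinary least squares fitted separately to the pre- and post-intervention months; the monthly series below is synthetic, not the IWK RPC data.

```python
# OLS slope of a (month, count) series; comparing pre- vs post-intervention
# slopes is a simplified stand-in for a full segmented regression.
def ols_slope(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    return (sum((x - mx) * (y - my) for x, y in points)
            / sum((x - mx) ** 2 for x, _ in points))

# Synthetic monthly call counts: months 0-30 pre-811, months 32-131 post-811,
# both rising by 2 calls/month, i.e. no trend change at the intervention.
pre = [(m, 800 + 2 * m) for m in range(31)]
post = [(m, 860 + 2 * m) for m in range(32, 132)]
slope_change = ols_slope(post) - ols_slope(pre)  # ~0: no change in trend
```

A full segmented regression would additionally model a level shift at the intervention month; only the trend (slope) component is shown here.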
Introduction: Cardioactive steroid poisoning occurs worldwide with the use of pharmaceutical digoxin and botanical cardiac glycosides. The wholesale price of the antidote, digoxin immune fab, increased over 300% from 2010 to 2015. Our objective was to identify gaps in the existing literature with respect to the use of digoxin immune fab in cardioactive steroid toxicity in acute care settings. Methods: We used scoping study methodology, as described by Arksey and O'Malley, to assess the range and scope of empiric research and will report: 1) sources of cardioactive steroid toxicity in acute settings; 2) doses of digoxin immune fab used in treatment; and 3) intervention outcomes of acute cardioactive steroid toxicity following the administration of digoxin immune fab as first- or second-line therapy. We collaborated with a library scientist to devise search strategies for PubMed, CINAHL, EMBASE, CENTRAL and Toxnet. We sought unpublished literature through the Canadian Electronic Library, Proquest, and Scopus, and searched the reference lists of included studies. We hand-searched relevant conference proceedings and applicable guidelines. Two reviewers independently reviewed titles and abstracts using predetermined criteria. Using a structured data abstraction form, two reviewers independently extracted data. All discrepancies were resolved through consensus. Results: Our search strategy yielded 3,458 results. After screening titles and abstracts, 384 underwent full-text screening. We included 147 studies and are currently extracting data from 12 French studies and 135 English studies. To date we have extracted data from 90 case reports and case series. Conclusion: Given concerns over rising costs, our findings will shed light on the extent of the evidence for use of digoxin immune fab in acute care settings.
To evaluate the impact of a hard stop in the electronic health record (EHR) on inappropriate gastrointestinal pathogen panel testing (GIPP).
We used a quasi-experimental study to evaluate testing before and after the implementation of an EHR alert to stop inappropriate GIPP ordering.
Midwest academic medical center.
Hospitalized patients with diarrhea for whom GIPP testing was ordered, from January 2016 through March 2017 (period 1) and from April 2017 through June 2018 (period 2).
A hard stop in the EHR prevented clinicians from ordering a GIPP more than once per admission or in patients hospitalized for >72 hours.
During period 1, 1,587 GIPP tests were ordered over 212,212 patient days, at a rate of 7.48 per 1,000 patient days. In period 2, 1,165 GIPP tests were ordered over 222,343 patient days, at a rate of 5.24 per 1,000 patient days. The Poisson model estimated a 30% reduction in total GIPP ordering rates between the 2 periods (relative risk, 0.70; 95% confidence interval [CI], 0.63–0.78; P < .001). The rate of inappropriate tests ordered decreased from 21.5% to 4.9% between the 2 periods (P < .001). The total savings calculated factoring only GIPP orders that triggered the hard stop was ∼$67,000, with potential savings of $168,000 when factoring silent best-practice alert data.
A simple hard stop alert in the EHR resulted in significant reduction of inappropriate GIPP testing, which was associated with significant cost savings. Clinicians can practice diagnostic stewardship by avoiding ordering this test more than once per admission or in patients hospitalized >72 hours.
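Using the counts reported above (1,587 tests over 212,212 patient days versus 1,165 over 222,343), the unadjusted rate ratio can be checked by hand with a standard log-normal approximation; the paper's Poisson model yields a slightly different interval (0.63–0.78) than this crude check.

```python
import math

# Crude rate-ratio check from the counts reported in the abstract; the
# paper's Poisson regression gives a slightly different CI.
def rate_ratio_ci(events1, days1, events2, days2, z=1.96):
    rr = (events2 / days2) / (events1 / days1)
    se_log = math.sqrt(1 / events1 + 1 / events2)  # SE of log rate ratio
    lo = rr * math.exp(-z * se_log)
    hi = rr * math.exp(z * se_log)
    return rr, lo, hi

rr, lo, hi = rate_ratio_ci(1587, 212212, 1165, 222343)  # rr ≈ 0.70
```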
The use of targets with surface structures for laser-driven particle acceleration has the potential to significantly boost the particle and radiation energies because of enhanced laser absorption. We investigate, via experiment and particle-in-cell simulations, the impact of micron-scale surface-structured targets on the spectrum of electrons and protons accelerated by a picosecond laser pulse at relativistic intensity. Our results show that, compared with flat-surfaced targets, structures on this scale give rise to a significant enhancement in particle and radiation emission over a wide range of laser–target interaction parameters. This is due to the longer plasma scale length when using micro-structures on the target front surface. However, we do not observe an increase in the proton cutoff energy with our micro-structured targets, which we attribute to the large volume of the surface relief.
To examine the impact of multiple psychiatric disorders over the lifetime on risk of mortality in the general population.
Data came from a random community-based sample of 1397 adults in Atlantic Canada, recruited in 1992. Major depression, dysthymia, panic disorder, generalised anxiety disorder and alcohol use disorders were assessed using the Diagnostic Interview Schedule (DIS). Vital status of participants through 2011 was determined using probabilistic linkages to the Canadian Mortality Database. Cox proportional hazard models with age at study entry as the time scale were used to investigate the relationship between DIS diagnoses and mortality, adjusted for participant education, smoking and obesity at baseline.
Results suggested that mood and anxiety disorders rarely presented in isolation – the majority of participants experienced multiple psychiatric disorders over the lifetime. Elevated risk of death was found among men with both major depression and dysthymia (HR 2.56; 95% CI 1.12–5.89), depression and alcohol use disorders (HR 2.45; 95% CI 1.18–5.10) and among men and women who experienced both panic disorder and alcohol use disorders (HR 3.80; 95% CI 1.19–12.16).
The experience of multiple mental disorders over the lifetime is extremely common, and associated with increased risk of mortality, most notably among men. Clinicians should be aware of the importance of considering contemporaneous symptoms of multiple psychiatric conditions.
Global inequity in access to and availability of essential mental health services is well recognized. The mental health treatment gap is approximately 50% in all countries, with up to 90% of people in the lowest-income countries lacking access to required mental health services. Increased investment in global mental health (GMH) has spurred innovation in mental health service delivery in low- and middle-income countries (LMICs). Situational analyses in areas where mental health services and systems are poorly developed and resourced are essential when planning for research and implementation; however, little guidance is available to inform methodological approaches to conducting these types of studies. This scoping review provides an analysis of methodological approaches to situational analysis in GMH, including an assessment of the extent to which situational analyses incorporate equity in their study designs. It is intended as a resource that identifies current gaps and areas for future development in GMH. Formative research, including situational analysis, is an essential first step in conducting robust implementation research, an essential area of study in GMH that will help to promote improved availability of, access to, and reach of mental health services for people living with mental illness in LMICs. While strong leadership in this field exists, there remain significant opportunities for enhanced research representing different LMICs and regions.
Introduction: Screening for organ and tissue donation is an essential skill for emergency physicians. In 2015, 4,564 individuals were on a waiting list for organ transplant and 242 died while waiting. As Canada's donation rates are less than half those of other comparable countries, it is crucial to ensure we are identifying all potential donors. Patients who die from poisoning are a source that may not be considered for referral as often as those who die from other causes. This study aims to identify whether patients dying from poisoning represent an under-referred group and to determine which physician characteristics influence referral decisions. Methods: In this cross-sectional unidirectional survey study, physician members of the Canadian Association of Emergency Physicians were invited to participate. Participants were presented with 20 organ donation scenarios that included poisoned and non-poisoned deaths, as well as one ideal scenario for organ or tissue donation used for comparison. Participants were unaware of the objective to explore donation in the context of poisoning deaths. Following the organ donation scenarios, a range of follow-up questions and demographics were included to explore factors influencing the decision to refer or not refer for organ or tissue donation. Results were reported descriptively, and associations between physician characteristics and decisions to refer were assessed using odds ratios and 95% confidence intervals. Results: 208/2058 (10.1%) physicians participated. 25% did not refer in scenarios involving a drug overdose (n=71). Specific poisonings commonly triggering the decision not to refer included palliative care medications (n=34, 18%), acetaminophen (n=42, 22%), chemical exposure (n=48, 27%) and organophosphates (n=87, 48%).
Factors associated with an increased likelihood of referring potential donors following overdose included previous organ and tissue donation training (OR=2.6), having referred in the past (OR=4.3), available donation support (OR=3.9), greater than 10 years of service (OR=2.1), working in a large urban center (OR=3.8), holding emergency medicine certification (OR=3.6), male gender (OR=2.2), and having indicated a desire to be a donor on government identification (OR=5.8). Conclusion: Scenarios involving drug overdoses were associated with under-referral for organ and tissue donation. As poisoning is not a contraindication to referral, this represents a potential source of donors. By examining the characteristics that put clinicians at risk of under-referral, becoming aware of potential biases, improving transplant knowledge, and implementing support and training programs for the organ and tissue donation process, we have the opportunity to improve these rates and reduce morbidity and mortality for Canadians requiring organ or tissue donation.
The global spread of non-tuberculous mycobacteria (NTM) may be due to HIV/AIDS and other environmental factors. The symptoms of NTM and tuberculosis (TB) disease are indistinguishable, but their treatments differ. A lack of research on the epidemiology of NTM infections has led to underestimation of their prevalence within TB-endemic countries. This study was designed to determine the prevalence and clinical characteristics of pulmonary NTM in Bamako. We conducted a cross-sectional study that included 439 suspected cases of pulmonary TB. From 2006 to 2013, a total of 332 (76%) were confirmed to have sputum cultures positive for mycobacteria. The prevalence of NTM infection was 9.3% of our study population and 12.3% of culture-positive patients. The seroprevalence of HIV in the NTM group was 17.1%. Patients who weighed <55 kg and had TB symptoms other than cough were significantly more likely to have disease due to NTM, whereas those with TB disease were significantly more likely to have cough and to weigh more than 55 kg (OR 0.05 (CI 0.02–0.13) and OR 0.32 (CI 0.11–0.93), respectively). The NTM disease burden in Bamako was substantial, and diagnostic algorithms for pulmonary disease in TB-endemic countries should consider the impact of NTM.
The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor.
In the context of water use for agricultural production, water footprints (WFs) have become an important sustainability indicator. To understand better the water demand for beef and sheep meat produced on pasture-based systems, a WF of individual farms is required. The main objective of this study was to determine the primary contributors to freshwater consumption up to the farm gate expressed as a volumetric WF and associated impacts for the production of 1 kg of beef and 1 kg of sheep meat from a selection of pasture-based farms for 2 consecutive years, 2014 and 2015. The WF included green water, from the consumption of soil moisture due to evapotranspiration, and blue water, from the consumption of ground and surface waters. The impact of freshwater consumption on global water stress from the production of beef and sheep meat in Ireland was also computed. The average WF of the beef farms was 8391 l/kg carcass weight (CW) of which 8222 l/kg CW was green water and 169 l/kg CW was blue water; water for the production of pasture (including silage and grass) contributed 88% to the WF, concentrate production – 10% and on-farm water use – 1%. The average stress-weighted WF of beef was 91 l H2O eq/kg CW, implying that each kg of beef produced in Ireland contributed to freshwater scarcity equivalent to the consumption of 91 l of freshwater by an average world citizen. The average WF of the sheep farms was 7672 l/kg CW of which 7635 l/kg CW was green water and 37 l/kg CW was blue water; water for the production of pasture contributed 87% to the WF, concentrate production – 12% and on-farm water use – 1%. The average stress-weighted WF was 2 l H2O eq/kg CW for sheep. This study also evaluated the sustainability of recent intensification initiatives in Ireland and found that increases in productivity were supported through an increase in green water use and higher grass yields per hectare on both beef and sheep farms.
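The volumetric bookkeeping above is a simple sum of the green and blue components, reproduced here with the beef and sheep farm averages quoted in the text.

```python
# Total volumetric water footprint (l/kg carcass weight) is the sum of the
# green (soil moisture) and blue (ground/surface water) components, using
# the farm averages quoted in the text.
def total_wf(green_l_per_kg, blue_l_per_kg):
    return green_l_per_kg + blue_l_per_kg

beef_wf = total_wf(8222, 169)    # l/kg CW
sheep_wf = total_wf(7635, 37)    # l/kg CW
blue_share_beef = 169 / beef_wf  # blue water is only ~2% of the beef WF
```

The small blue-water share is why the stress-weighted footprints (91 and 2 l H2O eq/kg CW) are so much smaller than the volumetric totals: water-stress weighting applies mainly to the blue component.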
We present techniques developed to calibrate and correct Murchison Widefield Array low-frequency (72–300 MHz) radio observations for polarimetry. The extremely wide field-of-view, excellent instantaneous (u, v)-coverage and sensitivity to degree-scale structure that the Murchison Widefield Array provides enable instrumental calibration, removal of instrumental artefacts, and correction for ionospheric Faraday rotation through imaging techniques. With the demonstrated polarimetric capabilities of the Murchison Widefield Array, we discuss future directions for polarimetric science at low frequencies to answer outstanding questions relating to polarised source counts, source depolarisation, pulsar science, low-mass stars, exoplanets, the nature of the interstellar and intergalactic media, and the solar environment.
ABSTRACT. Piracy has revived since 1945 with the failure of new international legal regimes to replace the former imperial naval power. Crime at sea is technically harder than terrorism or smuggling, but similar forces and capabilities make it possible and profitable.
RÉSUMÉ. Piracy has undergone a resurgence since 1945 owing to the failure of new international legal regimes to replace the former imperial maritime powers. Crime at sea is technically more complex than terrorism or smuggling, but similar forces and capabilities make it possible and lucrative.
Piracy declined at the end of the 19th century, thanks to technology and empires, but it never went away. Steam power revolutionized the operation of both warships and merchant vessels but was too expensive for pirates to adopt. Like other colonial peoples, they had little access to modern arms. The British Empire in particular had the naval wherewithal, but also the political will and legal justification of hostes humani generis (“enemies of all mankind”), to hunt them down, including putting an end to the coastal raiding that had always been part, often the largest part, of pirate practice. Off the coast of China, a geographical expanse wracked by internal discord and foreign intervention where Western imperial power was applied with less certainty, maritime depredation continued to be troublesome.
In Asia piracy almost certainly continued during and after World War II, albeit at a low level. Amongst the coastal peoples of Southeast Asia, around the Straits of Malacca and Singapore (“the Straits”), the Sulu Sea and southern Thailand, activity that is regarded by outsiders as piracy remained widely accepted.1 In addition, groups affiliated to the nationalist Kuomintang regime, after its defeat by the Communists in 1949 and its retreat to Taiwan, attacked shipping along the southern Chinese coast as far south as the South China Sea well into the 1950s.
Six categories of piracy have been observed around the globe since 1945 (Table 1).
Records prior to 1992 are patchy and confused because at the time neither states nor international institutions saw piracy and its territorial sea equivalent, armed robbery at sea, as a problem, largely because international shipping was unaffected.
The energy spectra of protons generated by ultra-intense (10²⁰ W cm⁻²) laser interactions with a preformed plasma, with scale length measured by shadowgraphy, are presented. The effects of the preformed plasma on the proton beam temperature and the number of protons are evaluated. Two-dimensional EPOCH particle-in-cell code simulations of the proton spectra are found to be in agreement with measurements over a range of experimental parameters.
In 2011 the Incidence Assay Critical Path Working Group reviewed the current state of HIV incidence assays and helped to determine a critical path to the introduction of an HIV incidence assay. At that time the Consortium for Evaluation and Performance of HIV Incidence Assays (CEPHIA) was formed to spur progress and raise standards among assay developers, scientists and laboratories involved in HIV incidence measurement and to structure and conduct a direct independent comparative evaluation of the performance of 10 existing HIV incidence assays, to be considered singly and in combinations as recent infection test algorithms. In this paper we report on a new framework for HIV incidence assay evaluation that has emerged from this effort over the past 5 years, which includes a preliminary target product profile for an incidence assay, a consensus around key performance metrics along with analytical tools and deployment of a standardized approach for incidence assay evaluation. The specimen panels for this evaluation have been collected in large volumes, characterized using a novel approach for infection dating rules and assembled into panels designed to assess the impact of important sources of measurement error with incidence assays such as viral subtype, elite host control of viraemia and antiretroviral treatment. We present the specific rationale for several of these innovations, and discuss important resources for assay developers and researchers that have recently become available. Finally, we summarize the key remaining steps on the path to development and implementation of reliable assays for monitoring HIV incidence at a population level.
Many adults with autism spectrum disorder (ASD) remain undiagnosed. Specialist assessment clinics enable the detection of these cases, but such services are often overstretched. It has been proposed that unnecessary referrals to these services could be reduced by prioritizing individuals who score highly on the Autism-Spectrum Quotient (AQ), a self-report questionnaire measure of autistic traits. However, the ability of the AQ to predict who will go on to receive a diagnosis of ASD in adults is unclear.
We studied 476 adults, seen consecutively at a national ASD diagnostic referral service for suspected ASD. We tested AQ scores as predictors of ASD diagnosis made by expert clinicians according to International Classification of Diseases (ICD)-10 criteria, informed by the Autism Diagnostic Observation Schedule-Generic (ADOS-G) and Autism Diagnostic Interview-Revised (ADI-R) assessments.
Of the participants, 73% received a clinical diagnosis of ASD. Self-report AQ scores did not significantly predict receipt of a diagnosis. While AQ scores provided high sensitivity of 0.77 [95% confidence interval (CI) 0.72–0.82] and positive predictive value of 0.76 (95% CI 0.70–0.80), the specificity of 0.29 (95% CI 0.20–0.38) and negative predictive value of 0.36 (95% CI 0.22–0.40) were low. Thus, 64% of those who scored below the AQ cut-off were ‘false negatives’ who did in fact have ASD. Co-morbidity data revealed that generalized anxiety disorder may ‘mimic’ ASD and inflate AQ scores, leading to false positives.
The AQ's utility for screening referrals was limited in this sample. Recommendations supporting the AQ's role in the assessment of adult ASD, e.g. UK NICE guidelines, may need to be reconsidered.
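The screening metrics reported above all follow from a 2×2 confusion table. The counts below are hypothetical, chosen only to illustrate how high sensitivity and PPV can coexist with low specificity and NPV when prevalence is high, as in this sample (73% diagnosed).

```python
# Screening metrics from a 2x2 confusion table. Counts are hypothetical
# illustrations, not reconstructed from the study's data.
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # P(screen positive | ASD)
        "specificity": tn / (tn + fp),  # P(screen negative | no ASD)
        "ppv": tp / (tp + fp),          # P(ASD | screen positive)
        "npv": tn / (tn + fn),          # P(no ASD | screen negative)
    }

# High-prevalence setting: 350 of 476 referrals have ASD.
m = screening_metrics(tp=270, fp=90, fn=80, tn=36)
```

With these counts, sensitivity and PPV look respectable (~0.77 and 0.75) while specificity and NPV collapse (~0.29 and 0.31), mirroring the pattern the study reports: in a referral population where most people do have ASD, a low score rules little out.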