Access has become a keyword of the twenty-first century. However, even in the 1960s, government data collection and growing computational power facilitated new forms of statistical analysis that people thought could become new ‘intelligence’ systems. The legislative response to these threats was new data protection and information privacy regimes that included ‘data subject rights’ – mechanisms by which individuals could obtain access to information about them held by others, and rectify any inaccuracy. This type of transparency gave individuals a way to participate in the profiling regime, by attempting to ensure that the data used by profilers was accurate and relevant. Informed by the German constitutional concept of informational self-determination, limitations to profiling in data protection are premised on the idea that a person’s self-image ought to be the primary determinant of their identity. However, it is argued here that this approach loses traction as the profiling environment becomes more sophisticated.
This chapter presents theory and research that examine tasks in relation to the cognitive processes involved in L2 production in what we have called the Psycholinguistic Perspective. The chapter explores and critiques two models of task-based performance - the Limited Attention Capacity Hypothesis and the Cognition Hypothesis - which have informed a large body of research. The chapter reviews studies that investigated how task design and implementation variables impact on the complexity, accuracy, lexis and fluency of the learners’ production. The chapter also considers a key issue for TBLT, namely the relationship between task performance and L2 acquisition.
Good research design includes choosing what to measure and how to measure it. We can’t measure everything. Fortunately, clear predictions dictate the measurements we need to make to test them. This chapter provides general advice on methods, then covers the importance of the validity, accuracy, and sensitivity of the measures we use. I end with a reminder that methods must also be feasible.
Errors in data are a part of life for experimenters in science and engineering. This chapter considers the types of error that can occur during an experiment, including random and systematic errors, and methods by which uncertainties arising from such errors can be combined. Many worked examples are included in this chapter, as well as exercises for the student to complete.
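The standard approach for combining independent random uncertainties is addition in quadrature: absolute uncertainties add in quadrature for sums and differences, relative uncertainties for products and quotients. A minimal sketch of both rules (the numeric values are illustrative, not taken from the chapter's worked examples):

```python
import math

def combined_uncertainty(uncertainties):
    """Combine independent absolute uncertainties in quadrature,
    as for a sum or difference: u_total = sqrt(u1^2 + u2^2 + ...)."""
    return math.sqrt(sum(u * u for u in uncertainties))

def product_relative_uncertainty(values_with_u):
    """For a product or quotient, relative uncertainties add in
    quadrature: (uz/z)^2 = (ux/x)^2 + (uy/y)^2 + ...
    Takes (value, uncertainty) pairs and returns uz/z."""
    return math.sqrt(sum((u / v) ** 2 for v, u in values_with_u))

# Example: adding (10.0 ± 0.2) m to (5.0 ± 0.1) m
u_sum = combined_uncertainty([0.2, 0.1])                    # ≈ 0.224 m
# Example: dividing (10.0 ± 0.2) m by (5.0 ± 0.1) s
rel_u = product_relative_uncertainty([(10.0, 0.2), (5.0, 0.1)])
```

These rules hold only when the error sources are independent; correlated errors require the covariance terms as well.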
Neuroprosthetic speech devices are an emerging technology that can offer the possibility of communication to those who are unable to speak. Patients with ‘locked-in syndrome,’ aphasia, or other such pathologies can use covert speech—vividly imagining saying something without actual vocalization—to trigger neurally controlled systems capable of synthesizing the speech they would have spoken, but for their impairment.
We provide an analysis of the mechanisms and outputs involved in speech mediated by neuroprosthetic devices. This analysis provides a framework for accounting for the ethical significance of accuracy, control, and pragmatic dimensions of prosthesis-mediated speech. We first examine what it means for the output of the device to be accurate, drawing a distinction between technical accuracy on the one hand and semantic accuracy on the other. These are conceptual notions of accuracy.
Both technical and semantic accuracy of the device will be necessary (but not yet sufficient) for the user to have sufficient control over the device. Sufficient control is an ethical consideration: we place high value on being able to express ourselves when we want and how we want. Sufficient control of a neural speech prosthesis requires that a speaker can reliably use their speech apparatus as they want to, and can expect their speech to authentically represent them. We draw a distinction between two relevant features which bear on the question of whether the user has sufficient control: voluntariness of the speech and the authenticity of the speech. These can come apart: the user might involuntarily produce an authentic output (perhaps revealing private thoughts) or might voluntarily produce an inauthentic output (e.g., when the output is not semantically accurate). Finally, we consider the role of the interlocutor in interpreting the content and purpose of the communication.
These three ethical dimensions raise philosophical questions about the nature of speech, the level of control required for communicative accuracy, and the nature of ‘accuracy’ with respect to both natural and prosthesis-mediated speech.
Routine psychiatric assessments tailored to older patients are often insufficient to identify the complexity of presentation in younger patients with dementia. Significant overlap between psychiatric disorders and neurodegenerative disease means that high rates of prior incorrect psychiatric diagnosis are common. Long delays to diagnosis, misdiagnosis and lack of knowledge among professionals are key concerns. No specific practice guidelines exist for diagnosis of young-onset dementia (YOD).
The review evaluates the current evidence about best practice in diagnosis to guide thorough assessment of the complex presentations of YOD with a view to upskilling professionals in the field.
A comprehensive search of the literature adopting a scoping review methodology was conducted regarding essential elements of diagnosis in YOD, over and above those in current diagnostic criteria for disease subtypes. This methodology was chosen because research in this area is sparse and not amenable to a traditional systematic review.
The quality of evidence identified is variable with the majority provided from expert opinion and evidence is lacking on some topics. Evidence appears weighted towards diagnosis in frontotemporal dementia and its subtypes and young-onset Alzheimer's disease.
The literature demonstrates that a clinically rigorous and systematic approach is necessary in order to avoid mis- or underdiagnosis for younger people. The advent of new disease-modifying treatments requires clinicians in the field to improve their knowledge of new imaging techniques and genetics, with the goal of improving training and practice, and highlights the need for quality indicators and alignment of diagnostic procedures across clinical settings.
Radiocarbon (14C) dating is routinely used, yet occasionally, issues still arise surrounding laboratory offsets and unexpected and unexplained variability. Quality assurance and quality control have long been recognized as important in addressing the two issues of comparability (or bias, accuracy) and uncertainty or variability (or precision) of measurements both within and between laboratories (Long and Kalin 1990). The 14C community and the wider user communities have supported interlaboratory comparisons as one of several strands to ensure the quality of measurements (Scott et al. 2018). The nature of the intercomparisons has evolved as the laboratory characteristics have changed. The next intercomparison is currently being planned to take place in 2019–2020. The focus of our work in designing intercomparisons is to (1) assist laboratories by contributing to their QA/QC processes, (2) supplement and enhance our suite of reference materials that are available to laboratories, (3) provide consensus 14C values with associated (small) uncertainties for performance checking, and (4) provide estimates of laboratory offsets and error multipliers which can inform subsequent modeling and laboratory improvements.
This paper aims to provide an optimal design of geometric parameters of a special architecture of the delta parallel mechanism, in order to improve positioning accuracy, workspace size, and kinematic and dynamic performance characteristics. In the studied 3[P2(US)] robot, the radius of both fixed and moving platforms, length of the connecting rods, and installation angle of the actuators of the manipulator are chosen as the decision variables. These parameters are optimized to maximize the weighted objective function, comprising workspace volume, global dexterity, global mass, global error, and global error sensitivity indices. Optimizations are performed employing two distinct algorithms, Genetic Algorithm and Harmony Search, whose results confirm each other. The optimal design of the robot leads to maximum workspace size, high dexterity, and dynamic performance, with a minimum error of the end-effector position in its reachable workspace.
Deep neural networks have attracted considerable attention because of their state-of-the-art performance on a variety of image restoration tasks, including image completion, denoising, and segmentation. However, their record of performance is built upon extremely large datasets. In many cases (for example, electron microscopy), it is extremely labor intensive, if not impossible, to acquire tens of thousands of images for a single project. The present work shows the possibility of attaining high-accuracy image segmentation, isolating regions of interest, for small datasets of transmission electron micrographs by employing encoder-decoder neural networks and image augmentation.
The present study aimed to investigate whether the visceral adiposity index (VAI) is an effective predictor to identify unhealthy metabolic phenotype by comparing normal-weight and overweight individuals.
A population-based cross-sectional study. Data were collected by interviews, anthropometric evaluation, dietetic, clinical and laboratory tests. The area under the receiver-operating characteristic curve (AUC) and prevalence ratio (PR), obtained from Poisson regression, were used to compare the predictive capacity of the obesity indicators evaluated (VAI, BMI, waist and neck circumference, waist-to-height and waist-to-hip ratios) and their association with the unhealthy metabolic phenotype. All analyses were stratified by sex and by nutritional status.
Viçosa, Minas Gerais, Brazil.
A total of 854 Brazilian adults (20–59 years old) of both sexes.
VAI was the best predictor for unhealthy metabolic phenotype among men (AUC = 0·865) and women (AUC = 0·843) at normal weight. VAI also had the best predictive capacity among overweight women (AUC = 0·903). Among overweight men, its accuracy (AUC = 0·830) was higher than that of waist-to-hip ratio. In the adjusted regression models, VAI was the indicator most strongly associated with the unhealthy metabolic phenotype, especially among those with normal weight (PR = 6·74; 95 % CI 3·15, 14·42 for men; PR = 7·14; 95 % CI 3·79, 13·44 for women).
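The AUC values quoted above summarize how well each index separates the unhealthy from the healthy phenotype. The AUC equals the probability that a randomly chosen case scores higher than a randomly chosen non-case, so it can be computed directly from ranks. A minimal illustrative sketch with made-up scores (not study data):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs in which the positive
    case receives the higher score (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical VAI-like scores for six subjects; 1 = unhealthy phenotype
scores = [4.1, 2.7, 3.9, 1.2, 2.0, 3.5]
labels = [1,   0,   1,   0,   0,   1]
print(auc(scores, labels))  # 1.0 here: every case outscores every non-case
```

An AUC of 0·5 corresponds to a useless predictor; values above roughly 0·8, as reported for VAI, indicate strong discrimination.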
VAI has better predictive capacity in detecting unhealthy metabolic phenotype than conventional anthropometric indicators, regardless of nutritional status and sex.
To assess the extent of error present in self-reported weight data in the Women’s Health Initiative, to identify variables that may be associated with error, and to develop methods to reduce any identified error.
Prospective cohort study.
Forty clinical centres in the USA.
Women (n 75 336) participating in the Women’s Health Initiative Observational Study (WHI-OS) and women (n 6236) participating in the WHI Long Life Study (LLS) with self-reported and measured weight collected about 20 years later (2013–2014).
The correlation between self-reported and measured weights was 0·97. On average, women under-reported their weight by about 2 lb (0·91 kg). The discrepancies varied by age, race/ethnicity, education and BMI. Compared with normal-weight women, underweight women over-reported their weight by 3·86 lb (1·75 kg) and obese women under-reported their weight by 4·18 lb (1·90 kg) on average. The higher the degree of excess weight, the greater the under-reporting of weight. Adjusting self-reported weight for an individual’s age, race/ethnicity and education yielded an identical average weight to that measured.
Correlations between self-reported and measured weights in the WHI are high. Discrepancies varied by different sociodemographic characteristics, especially an individual’s BMI. Correction of self-reported weight for individual characteristics could improve the accuracy of assessment of obesity status in postmenopausal women.
The present experiment examined how the interaction between senders’ communicative competence, veracity and the medium through which judgments were made affected observers’ accuracy. Stimuli were obtained from a previous study. Observers (N = 220) judged the truthfulness of statements provided by a good truth teller, a good liar, a bad truth teller, and a bad liar presented either via an audio-only, video-only, audio-video, or transcript format. Log-linear analyses showed that the data were best explained via the saturated model, therefore indicating that all four variables interacted, G2(0) = 0, p = 1, Q2 = 1. Follow-up analyses showed that the good liar and bad liar were best evaluated via the transcript (z = 2.5) and the audio-only medium (z = 3.9), respectively. Both the good truth teller and the bad truth teller were best assessed through the audio-video medium (z = 2.1 and z = 3.4, respectively). Results indicated that all the factors interacted and played a joint role in observers’ accuracy. Difficulties and suggestions for choosing the right medium are presented.
Whole-genome sequence (WGS) data are considered optimal for genome-wide association studies and genomic prediction. However, sequencing thousands of individuals of interest is expensive. Imputation from single nucleotide polymorphism (SNP) panels to WGS data is an attractive approach to obtain highly reliable WGS data at low cost. Here, we conducted a genotype imputation study with a combined reference panel in a yellow-feather dwarf broiler population. The combined reference panel was assembled by sequencing 24 key individuals of a yellow-feather dwarf broiler population (internal reference panel) and WGS data from 311 chickens in public databases (external reference panel). Three scenarios were investigated to determine how different factors affect the accuracy of imputation from 600 K array data to WGS data, including: genotype imputation with internal, external and combined reference panels; the number of internal reference individuals in the combined reference panel; and different reference sizes and selection strategies of an external reference panel. Results showed that imputation accuracies from 600 K to WGS data were 0.834±0.012, 0.920±0.007 and 0.982±0.003 for the internal, external and combined reference panels, respectively. Increasing the reference size from 50 to 250 improved the accuracy of genotype imputation from 0.848 to 0.974 for the combined reference panel and from 0.647 to 0.917 for the external reference panel. The selection strategies for the external reference panel had no impact on the accuracy of imputation using the combined reference panel. However, if only an external reference panel with reference size >50 was used, the selection strategy of minimizing the average distance to the closest leaf had the greatest imputation accuracy compared with other methods. Generally, using a combined reference panel provided greater imputation accuracy, especially for low-frequency variants.
In conclusion, the optimal imputation strategy with a combined reference panel should comprehensively consider genetic diversity of the study population, availability and properties of external reference panels, sequencing and computing costs, and frequency of imputed variants. This work sheds light on how to design and execute genotype imputation with a combined external reference panel in a livestock population.
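Imputation accuracy of the kind reported above is commonly scored as the correlation between true and imputed allele dosages at each variant (the study does not specify its exact metric, so this is an illustrative sketch with made-up dosages for a single variant):

```python
import math

def dosage_correlation(true, imputed):
    """Pearson correlation between true genotypes (coded 0/1/2 copies
    of the alternate allele) and imputed allele dosages at one variant,
    a common per-variant measure of imputation accuracy."""
    n = len(true)
    mt, mi = sum(true) / n, sum(imputed) / n
    cov = sum((t - mt) * (i - mi) for t, i in zip(true, imputed))
    st = math.sqrt(sum((t - mt) ** 2 for t in true))
    si = math.sqrt(sum((i - mi) ** 2 for i in imputed))
    return cov / (st * si)

# Hypothetical data: 8 individuals at one variant
true_dosages    = [0, 1, 2, 1, 0, 2, 1, 0]
imputed_dosages = [0.1, 0.9, 1.8, 1.2, 0.0, 2.0, 1.1, 0.2]
print(round(dosage_correlation(true_dosages, imputed_dosages), 3))  # ≈ 0.988
```

Because the denominator involves the variance of the true genotypes, this metric is especially demanding for low-frequency variants, which is why combined reference panels help most there.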
Recently, several epistemologists have defended an attractive principle of epistemic rationality, which we shall call Ur-Prior Conditionalization. In this essay, I ask whether we can justify this principle by appealing to the epistemic goal of accuracy. I argue that any such accuracy-based argument will be in tension with Evidence Externalism, i.e., the view that an agent’s evidence may entail nontrivial propositions about the external world. This is because any such argument will crucially require the assumption that, independently of all empirical evidence, it is rational for an agent to be certain that her evidence will always include truths, and that she will always have perfect introspective access to her own evidence. This assumption is incompatible with Evidence Externalism. I go on to suggest that even if we don’t accept Evidence Externalism, the prospects for any accuracy-based justification for Ur-Prior Conditionalization are bleak.
Although serological assays have been widely used for the diagnosis of canine visceral leishmaniasis (CVL), they present different performances depending on the clinical profile of the dogs. This study evaluated the accuracy of serological tests, immunochromatographic (Dual Path Platform: DPP®) and enzyme-linked immunosorbent (ELISA EIE®), for CVL in relation to the detection of Leishmania DNA through real-time polymerase chain reaction (real-time PCR) in samples from symptomatic and asymptomatic dogs from a non-endemic area in the state of Rio Grande do Sul, Southern Brazil. Serum from 140 dogs (39 symptomatic and 101 asymptomatic) was tested by DPP and ELISA followed by real-time PCR. From a total of 140 samples evaluated, Leishmania DNA was detected by real-time PCR in 41.4% (58/140). Moreover, 67.2% of samples positive in real-time PCR were positive in both DPP and ELISA (39/58), showing moderate agreement between methods. In the symptomatic group, one sample non-reactive in both serological assays was positive in real-time PCR, whereas in the asymptomatic group, 17.8% non-reactive or undetermined samples in serological assays were positive in the molecular method. Leishmania DNA was not detected in 17.9% reactive samples by serological assays from the symptomatic group, and in 3.9% from asymptomatic dogs. Real-time PCR demonstrated greater homogeneity between symptomatic and asymptomatic groups compared with DPP and ELISA. The molecular method can help to establish the correct CVL diagnosis, particularly in asymptomatic dogs, avoiding undesirable euthanasia.
This study investigated the potential application of genomic selection under a multi-breed scheme in the Spanish autochthonous beef cattle populations using a simulation study that replicates the structure of linkage disequilibrium obtained from a sample of 25 triplets of sire/dam/offspring per population and using the BovineHD Beadchip. Purebred and combined reference sets were used for the genomic evaluation and several scenarios of different genetic architecture of the trait were investigated. The single-breed evaluations yielded the highest within-breed accuracies. Across-breed accuracies were low but positive on average, confirming the genetic connectedness between the populations. When the same genotyping effort was split across several populations, accuracies were lower than in single-breed evaluation, but showed a small advantage over small-sized purebred reference sets in the accuracies of subsequent generations. Besides, the genetic architecture of the trait did not show any relevant effect on the accuracy, with the exception of rare variants, which yielded slightly lower results and a higher loss of predictive ability over the generations.
The Ottawa Ankle Rules (OAR) are a clinical decision tool used to minimize unnecessary radiographs in ankle and foot injuries. The OAR are a reliable tool to exclude fractures in children over 5 years of age when applied by physicians. Limited data support its use by other health care workers in children. Our objective was to determine the accuracy of the OAR when applied by non-physician providers (NPPs).
Children aged 5 to 17 years presenting with an acute ankle or foot injury were enrolled. Phase 1 captured baseline data on x-ray use in 106 patients. NPPs were then educated on the usage of the OAR and completed an OAR learning module. In phase 2, NPPs applied the OAR to 184 included patients.
The sensitivity of the foot rule, as applied by NPPs, was 100% (56–100% CI) and the specificity was 17% (9–29% CI) for clinically significant fractures. The sensitivity of the ankle portion of the rule, as applied by NPPs, was 88% (47–99% CI) and the specificity was 31% (23–40% CI) for clinically significant fractures. The only clinically significant fracture missed by NPPs was detected on physician assessment. Inter-observer agreement was κ=0.24 for the ankle rule and κ=0.49 for the foot rule.
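Sensitivity and specificity follow directly from the 2×2 table of rule results against fracture status. A minimal sketch with hypothetical counts chosen to reproduce the ankle-rule percentages above (these are not the study's raw counts, which are not given in the abstract):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN): fraction of true fractures the rule flags.
    Specificity = TN/(TN+FP): fraction of non-fractures the rule clears."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 counts: 8 clinically significant fractures (7 flagged,
# 1 missed) and 100 patients without fracture (31 cleared, 69 flagged).
sens, spec = sensitivity_specificity(tp=7, fn=1, tn=31, fp=69)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # 88%, 31%
```

The trade-off visible here is the point of a rule-out tool like the OAR: sensitivity is kept near 100% (few missed fractures) at the cost of low specificity (many unnecessary radiographs still ordered).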
The sensitivity of the OAR when applied by NPPs was very good. More training and practice using the OAR would likely improve NPPs’ inter-observer reliability. Our data suggest the OAR may be a useful tool for NPPs to apply prior to physician assessment.
A single-item depression measure may not be adequate in capturing the complex entity of mental health, despite wide use of this indicator in community studies. This study evaluated the accuracy of a single-question depression measure in comparison to two composite indices: the Center for Epidemiologic Studies Depression Scale (CESD) and the Geriatric Depression Scale (GDS).
Materials and methods:
A total of 800 elderly participants ranging from 60 to 89 years of age and residing in Seoul were recruited using a multistage sampling scheme in 2015. The survey was conducted by trained interviewers with a constructed questionnaire. Reliability and validity measures such as the Kappa index, sensitivity, specificity, PPV, NPV, and AUC were used to evaluate the accuracy of the single question measure. Socio-demographic group differences in accuracy were compared by age, sex, marital status, education, employment, and financial status.
The prevalence of depression by a single-question measure was much lower than those of CESD and GDS (5.5%, 12.3%, and 12.1%, respectively). The sensitivity of the single-item measure, based on CESD and GDS, was extremely low at 30.6% and 36.1%. In the subgroup analysis, however, there was a marked educational discrepancy in all accuracy measures; in sensitivity, people with a university degree or higher showed about 2.4 times higher sensitivity than those having only a primary school education.
The results show that a single-question depression measure should be used with caution. In addition, the single-question measure could substantially underestimate depression among the risk group of older adults.
Traumatic brain injury (TBI) occurs frequently during childhood and early adulthood, and is associated with negative outcomes including increased risk of drug abuse, mental health disorders and criminal offending. Identification of previous TBI for at-risk populations in clinical settings often relies on self-report, despite little information regarding self-report accuracy. This study examines the accuracy of adult self-report of hospitalized TBI events and the factors that enhance recall.
The Christchurch Health and Development Study is a birth cohort of 1265 children born in Christchurch, New Zealand, in 1977. A history of TBI events was prospectively gathered at each follow-up (yearly intervals 0–16, 18, 21, 25 years) using parental/self-report, verified using hospital records.
At 25 years, 1003 cohort members were available, with 59/101 of all hospitalized TBI events being recalled. Recall varied depending on the age at injury and injury severity, with 10/11 of moderate/severe TBI being recalled. Logistic regression analysis indicated that a model using recorded loss of consciousness, age at injury, and injury severity, could accurately classify whether or not TBI would be reported in over 74% of cases.
This research demonstrates that, even when individuals are carefully cued, many instances of TBI will not be recalled in adulthood despite the injury having required a period of hospitalization. Therefore, screening for TBI may require a combination of self-report and review of hospital files to ensure that all cases are identified. (JINS, 2016, 22, 717–723)