This study presents an early economic evaluation to inform the translation into clinical practice of a spectroscopic liquid biopsy for the detection of brain cancer. Its two specific aims are (1) to update an existing economic model with results from a prospective study of diagnostic accuracy and (2) to explore the potential of brain tumor-type predictions to affect patient outcomes and healthcare costs.
We conducted a cost-effectiveness analysis, from a UK NHS perspective, of the use of spectroscopic liquid biopsy in primary and secondary care settings, as well as a cost–consequence analysis of the addition of tumor-type predictions. Decision tree models were constructed to represent simplified diagnostic pathways. Test diagnostic accuracy parameters were based on a prospective validation study. Four price points (GBP 50–200, EUR 57–228) for the test were considered.
In both settings, the use of liquid biopsy produced QALY gains. In primary care, at test costs below GBP 100 (EUR 114), testing was cost saving. At GBP 100 (EUR 114) per test, the ICER was GBP 13,279 (EUR 15,145), whereas at GBP 200 (EUR 228), the ICER was GBP 78,300 (EUR 89,301). In secondary care, the ICER ranged from GBP 11,360 (EUR 12,956) to GBP 43,870 (EUR 50,034) across the range of test costs.
The results demonstrate the potential for the technology to be cost-effective in both primary and secondary care settings. Additional studies of test use in routine primary care practice are needed to resolve the remaining sources of uncertainty: disease prevalence in this patient population and referral behavior.
The COVID-19 pandemic has altered numerous elements of social, political, and economic life. Mask wearing is arguably an essential component of the new normal until substantial progress is made on a vaccine. However, though evidence suggests the practice is a positive for public health and limiting the transmission of COVID-19, there is variation in attitudes toward and practices of mask wearing. Specifically, there appears to be a sex-based divide in mask wearing, with men more likely to resist wearing masks. Utilizing an original survey, we test the correlation between masculinity and mask wearing. We find that identification with norms of masculinity has a significant influence on affective responses toward mask wearing.
Raw milk cheeses are commonly consumed in France and are also a common source of foodborne outbreaks (FBOs). Both an FBO surveillance system and a laboratory-based surveillance system aim to detect Salmonella outbreaks. In early August 2018, five familial FBOs due to Salmonella spp. were reported to a regional health authority. Investigation identified common exposure to a raw goats' milk cheese, from which Salmonella spp. were also isolated, leading to an international product recall. Three weeks later, on 22 August, a national increase in Salmonella Newport ST118 was detected through laboratory surveillance. Concomitantly, isolates from the earlier familial clusters were confirmed as S. Newport ST118. Interviews with a selection of the laboratory-identified cases revealed exposure to the same cheese, including exposure to batches not included in the previous recall, leading to an expansion of the recall. The outbreak affected 153 cases, including six cases in Scotland. S. Newport was detected in the cheese and in the milk of one of the producer's goats. The two alerts generated by this outbreak highlight, respectively, the timeliness of the FBO system and the precision of the laboratory-based surveillance system. It is also a reminder of the risks associated with raw milk cheeses.
The role of cognitive complaints has recently received increasing attention in dementia research.
To investigate whether the subjective perception of cognitive deficits is related to multimorbidity in an older Italian cohort.
The study population (N = 6,825) included persons who did not receive a diagnosis of dementia (DSM-III-R criteria), were not cognitively impaired, and scored < 4 on the Global Deterioration Scale (GDS). Individuals with a GDS score of 1 do not report memory problems, and no deficits are detected during the interview. In subjects with a GDS score of 2, very mild cognitive decline is appreciable. At a GDS score of 3, deficits begin to be noted. The examining physicians diagnosed somatic disorders according to the International Classification of Diseases, 10th revision (ICD-10). Mental health was clinically assessed by the examining physicians with semi-structured questions. A multimorbidity index was created based on the number of co-occurring chronic disorders. Binary logistic regression analyses were used to estimate multi-adjusted odds ratios (aOR) and 95% confidence intervals (CI).
According to the GDS, 28.4% (N = 1,940) of participants reported some degree of perceived cognitive decline. Cognitive complaints were associated with increasing age, low education, and multimorbidity. Stroke (aOR 1.6, 95% CI 1.3–1.9), diabetes (aOR 1.4, 95% CI 1.1–1.7), depressive symptoms (aOR 2.2, 95% CI 1.8–2.7), and anxiety symptoms (aOR 1.5, 95% CI 1.3–1.8) were significantly associated with perceived cognitive decline. When performance on the MMSE was taken into account, cardiovascular (aOR 2.3, 95% CI 1.3–4.1) and respiratory diseases (aOR 1.9, 95% CI 1.0–3.6) were associated with self-perceived cognitive decline in the absence of observable cognitive deficits.
Cognitive complaints have many somatic correlates and some of them may account for the discrepancy between perceived cognitive decline and cognitive assessment.
22q11.2 deletion syndrome (22q11DS), one of the most common recurrent copy number variant disorders, is associated with dopaminergic abnormalities and increased risk for psychotic disorders.
Given the elevated prevalence of substance use and dopaminergic abnormalities in non-deleted patients with psychosis, we investigated the prevalence of substance use in 22q11DS, compared with that in non-deleted patients with psychosis and matched healthy controls.
This cross-sectional study involved 434 patients with 22q11DS, 265 non-deleted patients with psychosis and 134 healthy controls. Psychiatric diagnosis, full-scale IQ and COMT Val158Met genotype were determined in the 22q11DS group. Substance use data were collected according to the Composite International Diagnostic Interview.
The prevalence of total substance use (36.9%) and substance use disorders (1.2%), and weekly amounts of alcohol and nicotine use, in patients with 22q11DS was significantly lower than in non-deleted patients with psychosis or controls. Compared with patients with 22q11DS, healthy controls were 20 times more likely to use substances in general (P < 0.001); results were also significant for alcohol and nicotine use separately. Within the 22q11DS group, there was no relationship between the prevalence of substance use and psychosis or COMT genotype. Male patients with 22q11DS were more likely to use substances than female patients with 22q11DS.
The results suggest that patients with 22q11DS are at decreased risk for substance use and substance use disorders despite the increased risk of psychotic disorders. Further research into neurobiological and environmental factors involved in substance use in 22q11DS is necessary to elucidate the mechanisms involved.
Rising sea levels due to climate change can have severe consequences for coastal populations and ecosystems all around the world. Understanding and projecting sea-level rise is especially important for low-lying countries such as the Netherlands. It is of specific interest for vulnerable ecological and morphodynamic regions, such as the Wadden Sea UNESCO World Heritage region.
Here we provide an overview of sea-level projections for the 21st century for the Wadden Sea region and a condensed review of the scientific data, understanding and uncertainties underpinning the projections. The sea-level projections are formulated in the framework of the geological history of the Wadden Sea region and are based on the regional sea-level projections published in the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5). These IPCC AR5 projections are compared against updates derived from more recent literature and evaluated for the Wadden Sea region. The projections are further put into perspective by including interannual variability based on long-term tide-gauge records from observing stations at Den Helder and Delfzijl.
We consider three climate scenarios, following the Representative Concentration Pathways (RCPs), as defined in IPCC AR5: the RCP2.6 scenario assumes that greenhouse gas (GHG) emissions decline after 2020; the RCP4.5 scenario assumes that GHG emissions peak at 2040 and decline thereafter; and the RCP8.5 scenario represents a continued rise of GHG emissions throughout the 21st century. For RCP8.5, we also evaluate several scenarios from recent literature where the mass loss in Antarctica accelerates at rates exceeding those presented in IPCC AR5.
For the Dutch Wadden Sea, the IPCC AR5-based projected sea-level rise is 0.07 ± 0.06 m for the RCP4.5 scenario for the period 2018–2030 (uncertainties representing 5–95%), with the RCP2.6 and RCP8.5 scenarios projecting 0.01 m less and more, respectively. The projected rates of sea-level change in 2030 range from 2.6 mm a⁻¹ for the 5th percentile of the RCP2.6 scenario to 9.1 mm a⁻¹ for the 95th percentile of the RCP8.5 scenario. For the period 2018–2050, the differences between the scenarios increase, with projected changes of 0.16 ± 0.12 m for RCP2.6, 0.19 ± 0.11 m for RCP4.5, and 0.23 ± 0.12 m for RCP8.5. The accompanying rates of change range between 2.3 and 12.4 mm a⁻¹ in 2050. The differences between the scenarios amplify for the 2018–2100 period, with projected total changes of 0.41 ± 0.25 m for RCP2.6, 0.52 ± 0.27 m for RCP4.5, and 0.76 ± 0.36 m for RCP8.5. The projections for the RCP8.5 scenario are larger than the high-end projections presented in the 2008 Delta Commission Report (0.74 m for 1990–2100) when the differences in time period are considered. The sea-level change rates range from 2.2 to 18.3 mm a⁻¹ for the year 2100.
We also assess the effect of accelerated ice mass loss on the sea-level projections under the RCP8.5 scenario, as recent literature suggests that there may be a larger contribution from Antarctica than presented in IPCC AR5 (potentially exceeding 1 m in 2100). Changes in episodic extreme events, such as storm surges, and periodic (tidal) contributions on (sub-)daily timescales, have not been included in these sea-level projections. However, the potential impacts of these processes on sea-level change rates have been assessed in the report.
Despite aspirations to be a world-class national curriculum, the Australian Curriculum (AC) has been criticised as ‘manifestly deficient’ (Australian Government Department of Education and Training, 2014, p. 5) as an inclusive curriculum, failing to meet the needs of all students with disabilities (SWD) and their teachers. There is a need for research into the daily attempts of educators to navigate the tension between a ‘top-down’ system-wide curriculum and a ‘bottom-up’ regard for individual student needs, with a view to informing both policy and practice. This article is the first of two research papers in which we report the findings from a national online Research in Special Education (RISE) Australian Curriculum Survey of special educators in special schools, classes, and units regarding their experience using the AC to plan for and teach SWD. Survey results indicated (a) inconsistent use of the AC as the primary basis for developing learning objectives and designing learning experiences, (b) infrequent use of the achievement standards to support assessment and reporting, and (c) considerable supplementation of the AC from other resources when educating SWD. Overall, participants expressed a lack of confidence in translating the AC framework into a meaningful curriculum for SWD. Implications for policy, practice, and future research are discussed.
The aim of the present work was to address experimentally the possible impact of exposure to air pollution during gestation on the differentiation and function of the gonads of the offspring, using a rabbit model. Rabbits were exposed daily to diluted diesel exhaust gas or filtered air from the 3rd until the 27th day of gestation, during which time germ cells migrate into the genital ridges and divide, and fetal sex is determined. Offspring gonads were collected shortly before birth (28th day of gestation) or after puberty (7.5 months after birth). The structure of the gonads was analyzed by histological and immunohistological methods. Serum concentrations of testosterone and anti-Müllerian hormone were determined using ELISA. The morphology and endocrine function of the gonads collected at the end of the exposure period were similar in exposed and control animals of both sexes. Similarly, no differences were observed in gonads collected after puberty. Sperm was collected from the head of the epididymis in adults, and sperm motility and DNA fragmentation were measured. Among all parameters analyzed, only the sperm DNA fragmentation rate differed, increasing three-fold in exposed males. The mechanisms responsible for this modification and its physiological consequences remain to be clarified.
Programmatic learning goals serve as the foundation for an educational institution’s curriculum design and assurance of learning processes. The purpose of our study is to determine the relevance or alignment of undergraduate business school learning goals. We identify the learning goals of US undergraduate business programs accredited by the Association to Advance Collegiate Schools of Business-International (AACSB) and determine the extent to which the goals are aligned with (a) evidence-based competencies that are needed for managerial success (including the ‘Great Eight’ and the ‘hyperdimensional taxonomy’) and (b) content areas identified in AACSB’s Eligibility Procedures and Accreditation Standards for Business Accreditation. We found that learning goals conform to AACSB Standards and evidence-based managerial competencies, but goals are most closely aligned with AACSB Standards, followed by the Great Eight and then the hyperdimensional taxonomy. We discuss the implications of our findings with respect to business schools’ assurance of learning processes and provide recommendations for AACSB, business schools, the broader academy, and future research.
Since the 1990s, labor history has been presented as “in crisis”. This negative evaluation is an overstatement. It has nevertheless prodded historians, often productively, to rethink the basic orientations of working-class history. This survey article explores three recent pathways to a “new” labor history: the turn to transnational and global study; the “new” history of capitalism; and the study of slavery as unfree labor. These new approaches to labor history highlight an old dilemma: how the structured determinations of laboring life are balanced alongside dimensions of human agency in understanding the complex experience of the working-class past. It is argued that we need to consider both structure and agency in the researching and writing of labor history. If an older “new” labor history accented agency, new pathways to labor history too often seem constrained by “mind forg’d manacles” that limit understandings of workers’ past lives by emphasizing structure and determination.
The ultimate aim of any radar experiment is of course to determine information about the structures which backscatter the radio waves, and the environment in which they exist. For example, it might be of interest to study the wind speeds associated with the scatterers, or the shape of the scatterers, or to differentiate types of scatterers or reflectors. It might be of interest to determine the radar cross-section of the scatterers, or their spatial distribution over the sky. Other desired information might include the spatial and temporal variation of the scatterer velocities as a function of time and height. If the radio scatter is due to turbulence, it might be desirable to measure the intensity of the turbulence, and/or its spatial distribution. It might be of interest to determine the relative percentages of turbulent to non-turbulent scatter. The list can go on.
In the preceding chapters, we concentrated on: (i) the principles of radar (Chapters 2 to 6); (ii) signal processing procedures (Chapters 3 to 5); and (iii) the nature of the scattering mechanisms (especially Chapter 3). Now is the time to bring all this information together and look more closely at the interaction between the radar and its scattering environment. In particular, we want to determine how the radar may be used to deduce information about the scatterers themselves. This information could include all sorts of spatial scales, right down to the radar wavelength (often indirect information at such small scales), and a wide variety of temporal scales, from fractions of a second to many years.
The purpose of this chapter is therefore to discuss ways that relevant atmospheric parameters can be determined and then interpreted, in order to give new insights into the nature of the scatterers. We will re-examine some of the parameters already discussed, like spectral characteristics, and we will also introduce new ones, like the turbulence anisotropy, amplitude distributions, phase distributions, turbulence strengths, tropopause height, and so forth. Some of the approximations used in determining these parameters are also critically examined. Some consideration will be given to experimental design, and then interpretation of the results. Studies of the parameters evaluated over long periods of time can give a considerable amount of additional information, over and above that which can be determined from a few discrete observations, but discussion of this aspect will not be considered in great detail, due to lack of space.
This book is about designing, building, and using atmospheric radars. Of course the term “atmospheric radar” covers a wide and diverse set of instruments, which can be used to study a wide range of atmospheric phenomena, and we cannot cover all radar types nor all applications. However, radars used for MST (Mesosphere-Stratosphere-Troposphere) studies employ a very high percentage of the techniques used in atmospheric studies, and cover an extraordinary range of physical processes. Therefore we have chosen this field as our focus. A reader familiar with this book should not only have developed a broad comprehension of the MST region, but should be able to diversify easily to other fields of atmospheric radar work.
While the primary targets of this book are new and advanced graduate science and engineering students working with radar to study the atmosphere, we have also aimed to make it accessible and useful to a wider audience. The extensive references and diagrams should make it valuable as a general reference resource even for more experienced workers in the field. The level of difficulty in each chapter has been adapted to suit the standards of a student with a modest background in mathematics and signal processing. Some level of understanding of Fourier methods, including Fourier integrals, is desirable, although not mandatory. Nevertheless, some of the chapters are pitched at a level which could be followed even by an interested amateur. Chapter 2, for example, gives a moderately detailed history of the development of atmospheric radar, examining the development of experimental radio applications for both meteorology and world-wide communication following World War II, and would be of interest to, and easily comprehended by, an enthusiastic radar hobbyist or history buff. Yet the detail on scatter processes in Chapter 3 in regard to the refractive index of the atmosphere and ionosphere should be enough to satisfy more discerning tastes in mathematical complexity.
The layout of the chapters has been carefully developed, mixing the areas of technical detail and practical application in a way that we hope will keep the reader stimulated as we develop parallel themes of radar engineering, experimental design, application and understanding of meteorological/atmospheric physics and chemistry.
We begin with an overview of the atmosphere which can easily be comprehended by a reader with no knowledge at all of radar.
It should be clear from the foregoing chapters that the range of applications of MST and windprofiler radar is broad and challenging. Some techniques are mature, some are under development, and some, no doubt, are yet to be discovered. Measurements of wind velocities and, by extension, wave motions, wave-mean flow interactions, momentum flux deposition and turbulence, are possible. Capabilities for temperature measurements, and the possibility of humidity measurements, have been discussed. Strange echoes such as polar mesosphere summer echoes have given new insights into the plasma processes of the lower thermosphere. Studies of turbulence anisotropy are possible. We have demonstrated functional radar designs that cost from as little as $100 000 up to many millions of dollars.
We will not dwell on these many achievements, however, which should be self-evident. What is perhaps of greater interest is the future of these instruments, and this will be the main focus here.
The future harbors both pragmatic and curiosity-driven aspects. From the point of view of the former, networks of radars, providing data for incorporation into computer forecasting and now-casting models, offer the hope of better forecasts. They have been shown to have benefits in forecasting on time-scales from a few hours out to several days, especially with systems deployed in Japan, Europe, and Canada (see Chapter 12). At the time of writing (2015), the European Space Agency is about to launch a specialized satellite instrument (AEOLUS) for measurement of tropospheric winds from space by lidar, and the networks of windprofilers discussed will be crucial tools for validation of these data. However, since the satellite only measures winds at sunrise and sunset, the radars, with their continuous recording capability, will continue to provide valuable input to meteorological models for many years to come.
Accurate records of winds are of course valuable for large-scale forecasts. This can impact aircraft travel, allowing better flight planning. The ability of radars to make reliable measurements of turbulence strengths can also be of value from the perspective of aircraft passenger safety.
As we have already discussed, there are many competing factors that must be taken into account in order to optimally investigate the atmosphere through radar observations. One of the more notable examples is the Doppler dilemma. Obviously one would like to select an inter-pulse period (IPP) corresponding to a sufficiently large Nyquist velocity interval. Here sufficiently large means a velocity range that encompasses most of the anticipated radial velocities to be observed. The range of Nyquist velocities is extended by decreasing the IPP. However, decreasing the IPP also reduces the maximum unambiguous range that can be measured. Ideally one would like to maintain a large Nyquist velocity (short IPP) and large maximum unambiguous range (long IPP) – hence the dilemma. Another example involves the disparity between the desire to improve range resolution and improve radar sensitivity. Range resolution can be improved by decreasing the radar pulse width; however, this means a decrease in the amount of power that illuminates a scatterer and corresponding decrease in detectability. That is, the desire to increase the detectability of atmospheric signals by transmitting longer radar pulses is at odds with the need to improve range resolution.
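The two compromises above can be made concrete with a short calculation. The sketch below uses illustrative values for a hypothetical 50 MHz (6 m wavelength) MST radar; these numbers are assumptions for the example, not taken from the text. It shows that the product of the maximum unambiguous range and the Nyquist velocity is fixed by the speed of light and the radar wavelength alone, so improving one necessarily degrades the other.

```python
# Doppler dilemma for a pulsed radar: a minimal numerical sketch.
# Assumed example parameters: a hypothetical 50 MHz MST radar.
C = 3.0e8              # speed of light (m/s)
WAVELENGTH = C / 50e6  # radar wavelength (m): 6 m at 50 MHz

def max_unambiguous_range(ipp):
    """Maximum unambiguous range (m): an echo must return before the next pulse."""
    return C * ipp / 2.0

def nyquist_velocity(ipp):
    """Nyquist (maximum unambiguous radial) velocity (m/s); Doppler sampled at 1/ipp."""
    return WAVELENGTH / (4.0 * ipp)

def range_resolution(pulse_width):
    """Range resolution (m) set by the transmitted pulse width (s)."""
    return C * pulse_width / 2.0

ipp = 1.0e-3                        # 1 ms inter-pulse period
r_max = max_unambiguous_range(ipp)  # 150 km
v_max = nyquist_velocity(ipp)       # ~1500 m/s
# The product r_max * v_max = C * WAVELENGTH / 8 is independent of the IPP,
# so shortening the IPP raises v_max only at the expense of r_max.
```

Halving the IPP doubles the Nyquist velocity but halves the unambiguous range. Similarly, `range_resolution` expresses the second compromise: a shorter pulse sharpens the range resolution but carries less energy toward the scatterer, reducing detectability.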
In many cases, techniques have been developed that allow us to work around the compromises that arise in designing radar experiments. For example, pulse compression (discussed in Chapter 4) is used to improve range resolution without compromising the signal-to-noise ratio (SNR) (Schmidt et al., 1979). By and large, however, such techniques introduce their own undesirable side effects. In the case of pulse compression with complementary codes, the by-product is either some level of range side-lobes or a decrease in temporal resolution.
In this chapter, we discuss how multiple-receiver and multiple-frequency techniques can be used in atmospheric remote sensing as a means of improving angular and range resolution, respectively. Before proceeding, we should clarify one point of nomenclature. The term multiple-receiver will be used throughout this chapter to describe a radar system that is capable of receiving and recording atmospheric signals on two or more spatially separated antennas or groups of antennas. The myriad names associated with interferometric techniques were discussed in Chapter 2, Section 2.15.6; here, we will discuss in detail just a subset of these, but the points discussed will, to some extent, cover all the techniques.