We employ global input–output analysis to quantify amplification of exogenous disturbances in compressible boundary layer flows. Using the spatial structure of the dominant response to time-periodic inputs, we explain the origin of steady reattachment streaks in a hypersonic flow over a compression ramp. Our analysis of the laminar shock–boundary layer interaction reveals that the streaks arise from a preferential amplification of upstream counter-rotating vortical perturbations with a specific spanwise wavelength. These streaks are associated with heat-flux striations at the wall near flow reattachment and they can trigger transition to turbulence. The streak wavelength predicted by our analysis compares favourably with observations from two different hypersonic compression ramp experiments. Furthermore, our analysis of inviscid transport equations demonstrates that base-flow deceleration contributes to the amplification of streamwise velocity and that the baroclinic effects are responsible for the production of streamwise vorticity. Finally, the appearance of the temperature streaks near reattachment is triggered by the growth of streamwise velocity and streamwise vorticity perturbations as well as by the amplification of upstream temperature perturbations by the reattachment shock.
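The input–output analysis described above can be illustrated on a toy problem: for a linearized system dq/dt = Aq + Bf, the amplification of a time-periodic input at frequency ω is the largest singular value of the resolvent (iωI − A)⁻¹B, and the corresponding right singular vector is the optimal forcing shape. This is a minimal sketch with a random stable matrix standing in for the linearized compressible Navier–Stokes operator; it is not the study's solver.

```python
import numpy as np

# Toy resolvent (input-output) analysis: the gain of a time-periodic
# input at frequency omega is the leading singular value of
# (i*omega*I - A)^{-1} B.  A is an arbitrary stable matrix here.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A = A - (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # shift to stable
B = np.eye(n)

def resolvent_gain(omega):
    R = np.linalg.inv(1j * omega * np.eye(n) - A) @ B
    s = np.linalg.svd(R, compute_uv=False)
    return s[0]  # peak amplification at this frequency

# sweep frequencies; the maximiser identifies the most-amplified input
gains = [resolvent_gain(w) for w in np.linspace(0.0, 5.0, 21)]
print(len(gains))
```

In the study's setting the sweep is over spanwise wavelength as well as frequency, and the peak of the gain curve selects the streak spacing.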
Whether maternal obesity and gestational weight gain (GWG) are associated with early-childhood development in low-income, urban, minority populations, and whether effects differ by child sex, remain unknown. This study examined the impact of prepregnancy BMI and GWG on early-childhood neurodevelopment in the Columbia Center for Children’s Environmental Health Mothers and Newborns study. Maternal prepregnancy weight was obtained by self-report, and GWG was assessed from participant medical charts. At child age 3 years, the Psychomotor Development Index (PDI) and Mental Development Index (MDI) of the Bayley Scales of Infant Development were completed. Sex-stratified linear regression models assessed associations of prepregnancy BMI and pregnancy weight-gain z-scores with child PDI and MDI scores, adjusting for covariates. Of 382 women, 48.2% were normal weight before pregnancy, 24.1% overweight, 23.0% obese, and 4.7% underweight. At 3 years, mean scores on the PDI and MDI were higher among girls than boys (PDI: 102.3 vs. 97.2, P = 0.0002; MDI: 92.8 vs. 88.3, P = 0.0001). In covariate-adjusted models, maternal obesity was significantly associated with lower PDI scores in boys [b = −7.81, 95% CI: (−13.08, −2.55), P = 0.004], but not girls. Maternal BMI was not associated with MDI in girls or boys, and GWG was not associated with PDI or MDI in either sex (all P > 0.05). We found that prepregnancy obesity was associated with lower PDI scores at 3 years in boys, but not girls. The mechanisms underlying this sex-specific association remain unclear, but given the high prevalence of obesity in urban populations, further investigation is warranted.
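The sex-stratified modelling approach can be sketched on synthetic data (not the cohort's): fit an ordinary-least-squares regression of the outcome on the exposure plus covariates separately within each stratum, and read off the adjusted exposure coefficient.

```python
import numpy as np

# Sex-stratified linear regression on synthetic data: the simulated
# exposure effect exists only in the "boys" stratum, mirroring the
# kind of sex-specific association the study reports.
rng = np.random.default_rng(2)
n = 200
sex = rng.integers(0, 2, size=n)           # 0 = boys, 1 = girls
exposure = rng.normal(size=n)              # stand-in for a BMI z-score
covar = rng.normal(size=n)                 # one stand-in covariate
y = 100.0 - 3.0 * exposure * (sex == 0) + rng.normal(size=n)

coefs = {}
for s, label in [(0, "boys"), (1, "girls")]:
    m = sex == s
    X = np.column_stack([np.ones(m.sum()), exposure[m], covar[m]])
    beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
    coefs[label] = beta[1]                 # adjusted exposure coefficient
print(sorted(coefs))
```

The fitted coefficient is strongly negative in the boys' stratum and near zero in the girls', by construction of the synthetic outcome.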
A prototype X-ray fluorescence system for chemical and phase microanalysis of materials has been developed and tested. Preliminary work with this system has indicated X-ray fluorescence detection limits on the order of 40 picograms for heavier elements such as gold when using a 100 micron collimator, a 400 second counting time and a silver anode operating at 12 kW. Phase identification by X-ray diffraction can be obtained from the same spot. A proposed design for an improved system providing greater elemental sensitivities and capable of semi-automated operation has been completed.
X-ray Microfluorescence (XRMF) analysis uses a finely collimated beam of X-rays to excite fluorescent radiation in a sample (Nichols & Ryon 1986). Characteristic fluorescent radiation emanating from the small interaction volume element is acquired using an energy dispersive detector placed in close proximity to the sample. The signal from the detector is processed using a computer-based multi-channel analyzer.
XRMF imaging is accomplished by translating the sample through the small X-ray beam in a step or continuous raster mode. As the sample is translated, a pixel-by-pixel X-ray intensity image is formed for each chemical element in the sample. The resulting digitized image information for each element is stored for subsequent processing and/or display. The images, in the form of elemental maps representing identical areas, may be displayed and color coded by element and/or intensity and then overlaid for spatial correlation.
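The map-assembly step described above can be sketched as follows, using hypothetical data: each raster position yields a spectrum from the multi-channel analyzer, and summing the counts inside an element's characteristic-line window gives one pixel of that element's map. The channel window and the Poisson stand-in spectra are illustrative, not instrument values.

```python
import numpy as np

# Sketch of XRMF elemental-map assembly: a (rows x cols) raster scan
# produces one spectrum per pixel; integrating each spectrum over the
# channel window bracketing an element's line gives that element's map.
rows, cols, channels = 4, 5, 256
rng = np.random.default_rng(1)
spectra = rng.poisson(5, size=(rows, cols, channels))  # stand-in spectra

def elemental_map(spectra, window):
    lo, hi = window          # channel range bracketing the element's line
    return spectra[:, :, lo:hi].sum(axis=2)

fe_map = elemental_map(spectra, (100, 110))  # hypothetical line window
print(fe_map.shape)          # one intensity pixel per raster position
```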
The present study of parameters affecting the performance of an X-ray microfluorescence system has shown how such systems use X-ray beams with effective spot sizes of less than 100 micrometers to bridge the gap in analytical capabilities between predominantly surface microanalytical techniques, such as SEM/EDX, and bulk analytical methods, such as standard XRF analysis. The combination of XRMF spectroscopy with digital imaging allows chemical information to be obtained and mapped from surface layers as well as from layers or structures beneath the sample surface. At the same time, it provides valuable high-resolution chemical information in a readily interpreted visual form which displays the homogeneity within a given layer or structure. XRMF systems retain the advantages of minimal sample preparation, non-destructive analysis and high sensitivity inherent to XRF methods.
The use of high-intensity, 8 kW X-ray sources (a Rigaku rotating-anode generator and wide-angle goniometer for this study) provides both opportunities and challenges. With high-intensity X-ray sources, detection limits can be lowered significantly while still offering count times of practical duration. On the other hand, the availability of high-intensity X-ray sources puts greater demands on information-extraction procedures and on the mechanical precision of sample containment and support. In particular, we addressed the use of a cylindrical aluminum sample cell with a 0.010″ polycrystalline (cold-rolled) beryllium window electron-beam welded to an aluminum frame. See Figure 1. This cell permitted analysis of various air-sensitive specimens. The sample was pressed against the back of the beryllium window by a spring-loaded backing plate.
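The interplay of count time and background noted above follows from counting statistics: under the common 3σ criterion, the minimum detectable mass scales as the square root of the background counts divided by the sensitivity, so it improves as 1/√t for fixed count rates. The numbers below are illustrative, not the prototype's calibration.

```python
import math

# Back-of-envelope XRF detection-limit scaling (3-sigma criterion):
# MDL ~ 3 * sqrt(background counts) / (sensitivity * count time).
def mdl_pg(sensitivity_cps_per_pg, background_cps, t_s):
    return 3.0 * math.sqrt(background_cps * t_s) / (sensitivity_cps_per_pg * t_s)

# doubling the count time improves the limit by a factor of sqrt(2)
a = mdl_pg(1.0, 100.0, 400.0)
b = mdl_pg(1.0, 100.0, 800.0)
print(round(a / b, 3))
```

This is why a brighter source helps twice over: it raises the sensitivity term directly and allows practical count times at lower limits.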
During the summer of 2016, the Hawaii Department of Health responded to the second-largest domestic foodborne hepatitis A virus (HAV) outbreak in the post-vaccine era. The epidemiological investigation included case finding and investigation, sequencing of RNA-positive clinical specimens, product trace-back, and virologic testing and sequencing of HAV RNA from the product. Additionally, an online survey open to all Hawaii residents was conducted to estimate baseline commercial food consumption. We identified 292 confirmed HAV cases, of whom 11 (4%) were possible secondary cases. Seventy-four (25%) were hospitalised and there were two deaths. Among all cases, 94% reported eating at Oahu or Kauai Island branches of Restaurant Chain A, with 86% of those cases reporting raw scallop consumption. In contrast, a food-consumption survey conducted during the outbreak indicated that 25% of Oahu residents had patronised Restaurant Chain A in the 7 weeks before the survey. Product trace-back revealed a single distributor that supplied scallops imported from the Philippines to Restaurant Chain A. Recovery, amplification and sequence comparison of HAV recovered from scallops revealed viral sequences matching those from case-patients. Removal of product from implicated restaurants and vaccination of those potentially exposed led to the cessation of the outbreak. This outbreak further highlights the need for improved imported-food safety.
On September 5, 2017, a decision of the Grand Chamber of the European Court of Human Rights (ECHR) in Bărbulescu v. Romania (Bărbulescu) helped define the boundaries regarding employee privacy in the European workplace. The Bărbulescu decision held that a Romanian employee's legally protected right to privacy was violated when his employer monitored personal messages he sent from a company account, reversing a previous decision by the ECHR in this case that had expanded employers' rights to monitor employees.
Limitations of access have long restricted exploration and investigation of the cavities beneath ice shelves to a small number of drillholes. Studies of sea-ice underwater morphology are limited largely to scientific utilization of submarines. Remotely operated vehicles, tethered to a mother ship by umbilical cable, have been deployed to investigate tidewater-glacier and ice-shelf margins, but their range is often restricted. The development of free-flying autonomous underwater vehicles (AUVs) with ranges of tens to hundreds of kilometres enables extensive missions to take place beneath sea ice and floating ice shelves. Autosub2 is a 3600 kg, 6.7 m long AUV with a 1600 m operating depth and a range of 400 km, based on the earlier Autosub1, which had a 500 m depth limit. A single direct-drive d.c. motor and a five-bladed propeller produce speeds of 1–2 m s−1. A rear-mounted rudder and stern plane control yaw, pitch and depth. The vehicle has three sections. The front and rear sections are free-flooding, built around aluminium extrusion space-frames covered with glass-fibre-reinforced plastic panels. The central section has a set of carbon-fibre-reinforced plastic pressure vessels. Four tubes contain batteries powering the vehicle; the other three house vehicle-control systems and sensors. The rear section houses subsystems for navigation, control actuation and propulsion, and scientific sensors (e.g. a digital camera, an upward-looking 300 kHz acoustic Doppler current profiler and a 200 kHz multibeam receiver). The front section contains a forward-looking collision sensor, an emergency abort system, the homing systems, Argos satellite data and location transmitters and flashing lights for relocation, as well as science sensors (e.g. twin conductivity–temperature–depth instruments, a multibeam transmitter, a sub-bottom profiler and an AquaLab water sampler). Payload restrictions mean that only a subset of the scientific instruments is in place on any given dive.
The scientific instruments carried on Autosub are described and examples of observational data collected from each sensor in Arctic or Antarctic waters are given (e.g. of roughness at the underside of floating ice shelves and sea ice).
Among the important factors in the formation of melt water are: (1) the air and soil temperatures; (2) the presence or absence of debris on snow and ice; (3) the surface gradients of the glaciers, which determine the areas of snow and ice in the zone where melting can occur as well as the amount of insolation; and (4) the orientation of snow and ice slopes. In general, in the Southern Hemisphere, north-facing slopes receive more insolation than south-facing slopes.
The main source of the melt water is Wilson Piedmont Glacier, and the snowdrift-ice slabs are next in importance. The seasonal snowfall is not an important source, nor is the ice in the active zone. As no rain has ever been reported, all run-off is melt water.
The seasonal discharge of the Surko and Scheuren Rivers was roughly measured in 1957–58. It was found to be approximately 13 m3 s−1 d for the Surko River and approximately 19 m3 s−1 d for the Scheuren River, and it seems likely that the total seasonal discharge of all streams in the area was not far from 50 m3 s−1 d.
Development of herbicide-resistant crops has resulted in significant changes to agronomic practices, one of which is the adoption of effective, simple, low-risk crop-production systems with less dependency on tillage and lower energy requirements. Overall, the changes have had a positive environmental effect by reducing soil erosion, fuel use for tillage and the number of herbicides with groundwater advisories, as well as producing a slight reduction in the overall environmental impact quotient of herbicide use. However, herbicides exert a high selection pressure on weed populations, and the density and diversity of weed communities change over time in response to herbicides and other control practices imposed on them. Repeated and intensive use of herbicides with the same mechanism of action (MOA; the process in the plant that the herbicide detrimentally affects so that the plant succumbs, e.g. inhibition of an enzyme vital to plant growth, or the plant's inability to metabolize the herbicide before it has done damage) can rapidly select for shifts to tolerant, difficult-to-control weeds and for the evolution of herbicide-resistant weeds, especially in the absence of the concurrent use of herbicides with different mechanisms of action, or of mechanical or cultural practices, or both.
Herbicides are the foundation of weed control in commercial crop-production systems. However, herbicide-resistant (HR) weed populations are evolving rapidly as a natural response to selection pressure imposed by modern agricultural management activities. Mitigating the evolution of herbicide resistance depends on reducing selection through diversification of weed control techniques, minimizing the spread of resistance genes and genotypes via pollen or propagule dispersal, and eliminating additions of weed seed to the soil seedbank. Effective deployment of such a multifaceted approach will require shifting from the current concept of basing weed management on single-year economic thresholds.
The Dark Energy Survey is undertaking an observational programme imaging 1/4 of the southern hemisphere sky with unprecedented photometric accuracy. In the process of observing millions of faint stars and galaxies to constrain the parameters of the dark energy equation of state, the Dark Energy Survey will obtain pre-discovery images of the regions surrounding an estimated 100 gamma-ray bursts over 5 yr. Once gamma-ray bursts are detected by, e.g., the Swift satellite, the DES data will be extremely useful for follow-up observations by the transient astronomy community. We describe a recently commissioned suite of software that listens continuously for automated notices of gamma-ray burst activity, collates information from archival DES data, and disseminates relevant data products back to the community in near-real-time. Of particular importance are the opportunities that non-public DES data provide for relative photometry of the optical counterparts of gamma-ray bursts, as well as for identifying key characteristics (e.g., photometric redshifts) of potential gamma-ray burst host galaxies. We provide the functional details of the DESAlert software and its data products, and we show sample results from the application of DESAlert to numerous previously detected gamma-ray bursts, including the possible identification of several heretofore unknown gamma-ray burst hosts.
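The core collation step a service of this kind performs, cross-matching an alerted burst position against an archival catalogue within a small radius, can be sketched generically. The function names, toy catalogue and search radius below are hypothetical illustrations, not the DESAlert implementation.

```python
import math

# Hypothetical cross-match sketch: given a GRB position from an
# automated notice, list archival sources within a matching radius.
def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine formula)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h)))

catalog = [  # (id, ra, dec) -- toy stand-ins for archival sources
    ("gal_1", 53.10, -27.80),
    ("gal_2", 53.16, -27.79),
    ("star_9", 60.00, -30.00),
]

def match_grb(ra, dec, radius_deg=0.1):
    return [sid for sid, r, d in catalog
            if ang_sep_deg(ra, dec, r, d) <= radius_deg]

print(match_grb(53.12, -27.80))  # nearby catalogue entries only
```

In a real pipeline the catalogue query would run against the survey database and return photometry and photometric redshifts alongside the identifiers.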
As the pool of fundamental data available to astronomers continues to increase, the question of how best to promote the necessary cross-discipline interaction becomes increasingly important. Commission 14 has traditionally played an important role in this activity, by publishing triennial reports in the IAU Proceedings, as well as by responding to more specific requests for data. We are fortunate in having the support for these activities of some energetic Working Groups and Chairmen, whose contributions to the present report are very gratefully acknowledged. With the expansion of available data, it is appropriate that these reports increasingly take the form of references to review articles and other more specific databases. The question of whether the field of activity of the Commission should be enlarged was discussed at Patras and will be reviewed again at the Delhi meeting. One possibility is to include nuclear processes and fundamental particle physics. On the other hand, a rationale for limiting the scope of our activities might be direct applicability to astronomical observations. Astronomical theorists are usually better placed to access the fundamental data themselves. The interaction between fundamental physics and astronomy will in general take two forms. There is the essential service role of making data available in a usable form. However, we should surely aim to stimulate the other, very profitable mode, in which the two disciplines are brought together in real scientific collaborations to research the problems of astronomy.
Research in molecular spectroscopy over much of the electromagnetic spectrum has continued intensively over the past three years. It has been stimulated not only by the imperatives of fundamental research programmes in many laboratories, but also by the impact of molecular lasers on the field, and the needs of atmospheric and environmental programmes. The literature is so prolific that it is impossible even to review briefly here all that is relevant to astrophysical needs. Thus most of this report has been compiled from the contributions from individual workers and Research Centres.
Spectroscopy is the classical diagnostic tool of astrophysics. Intensities and line shapes of well-identified atomic and molecular emission and/or absorption features are used to provide information on species concentrations and degree of excitation, from which gas-kinetic, rotational, vibrational, electronic and excitation “temperatures” can be inferred when LTE conditions exist. Departures from LTE can also be determined spectroscopically. Diagnostic interpretation of spectra under optically thin conditions is fairly straightforward. However, under optically thick conditions, when the photon mean free path is very much less than the geometrical path, the emission spectrum is controlled by the absorption coefficient (Armstrong and Nicholls, 1972) (see equation 4a).
The relative contribution of demographic, lifestyle and medication factors to the association between affective disorders and cardiometabolic diseases is poorly understood.
To assess the relationship between cardiometabolic disease and features of depression and bipolar disorder within a large population sample.
Cross-sectional study of 145 991 UK Biobank participants: multivariate analyses of associations between features of depression or bipolar disorder and five cardiometabolic outcomes, adjusting for confounding factors.
There were significant associations between mood disorder features and ‘any cardiovascular disease’ (depression odds ratio (OR) = 1.15, 95% CI 1.12–1.19; bipolar OR = 1.28, 95% CI 1.14–1.43) and with hypertension (depression OR = 1.15, 95% CI 1.13–1.18; bipolar OR = 1.26, 95% CI 1.12–1.42). Individuals with features of mood disorder taking psychotropic medication were significantly more likely than controls not on psychotropics to report myocardial infarction (depression OR = 1.47, 95% CI 1.24–1.73; bipolar OR = 2.23, 95% CI 1.53–3.57) and stroke (depression OR = 2.46, 95% CI 2.10–2.80; bipolar OR = 2.31, 95% CI 1.39–3.85).
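The effect sizes above follow the standard logistic-regression convention: a coefficient b with standard error se yields an odds ratio exp(b) with 95% CI exp(b ± 1.96·se). A generic illustration (the numbers are arbitrary, not the study's data):

```python
import math

# Odds ratio and 95% confidence interval from a logistic-regression
# coefficient b and its standard error se.
def odds_ratio_ci(b, se, z=1.96):
    return math.exp(b), (math.exp(b - z * se), math.exp(b + z * se))

or_, (lo, hi) = odds_ratio_ci(0.14, 0.016)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The interval excludes 1 exactly when the coefficient differs from zero at the 5% level, which is what "significant association" means for the ORs reported here.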
Associations between features of depression or bipolar disorder and cardiovascular disease outcomes were statistically independent of demographic, lifestyle and medication confounders. Psychotropic medication may also be a risk factor for cardiometabolic disease in individuals without a clear history of mood disorder.