The X-ray energy emitted from laser-produced plasmas has been measured under various experimental conditions. Two Nd-glass lasers were used in separate experiments to focus pulsed laser light on planar targets. X-ray fluences were measured with newly developed silicon-detector calorimeters. Results for various experimental conditions are reported in terms of the efficiency with which the laser light was converted to X-ray energy by plasma production.
We report on eight years of spectropolarimetric monitoring of the WR140 binary. The broad-band linear polarization decreased systematically after the 1985 periastron passage. By 1991, it had settled to a constant value, at which it has remained through the 1993 periastron passage. In data taken after 1989, we do not detect a line effect in He II λ4686. This suggests either that the continuum and the line emission are scattered in the same region, or that any intrinsic polarization is below our detection limit. We conclude that the presently observed polarization of WR140 is consistent with interstellar foreground polarization.
Background: Intracranial mycotic aneurysms are rare vascular abnormalities. They are typically fragile and have a high tendency to bleed. Even when they are successfully secured by intervention, medical management can be challenging in the presence of other non-ruptured aneurysms and concomitant cerebral vasospasm. Methods: A 31-year-old woman was admitted with a large right-sided intracerebral hemorrhage due to a ruptured mycotic MCA aneurysm. She was also known to have severe tricuspid regurgitation from drug abuse. Other aneurysms were located intracranially and extracranially, including in the subclavian and renal arteries. Results: The MCA aneurysm was successfully clipped during decompressive craniectomy. The non-ruptured left ACA aneurysm was occluded by endovascular intervention. Because of her cardiac condition and the presence of other non-secured extracranial aneurysms, we followed the MNI protocol for treating cerebral vasospasm with milrinone infusion. The treatment was successful for over three weeks, until another micro-aneurysm ruptured, leading to severe and rapid clinical deterioration and eventually to death. Conclusions: Intracranial mycotic aneurysms remain challenging. Patients should be selected for surgical clipping versus endovascular intervention based on clinical state and radiological features. We suggest using milrinone rather than induced-hypertension therapy for post-intervention cerebral vasospasm, in order to lower the risk of rupturing non-secured aneurysms.
By applying a display ecology to the Deeper, Wider, Faster proactive, simultaneous telescope observing campaign, we have shown a dramatic reduction in the time taken to inspect DECam CCD images for potential transient candidates and to produce time-critical triggers to standby telescopes. We also show how facilitating rapid corroboration of potential candidates and the exclusion of non-candidates improves the accuracy of detection; and establish that a practical and enjoyable workspace can improve the experience of an otherwise taxing task for astronomers. We provide a critical road test of two advanced displays in a research context—a rare opportunity to demonstrate how they can be used rather than simply discuss how they might be used to accelerate discovery.
We summarize the results of the Far-Ultraviolet Spectroscopic Explorer (FUSE) program to study O VI in the Milky Way halo. Spectra of 100 extragalactic objects and two distant halo stars are analyzed to obtain measures of O VI absorption along paths through the Milky Way thick disk/halo and beyond. Strong O VI absorption over the velocity range from −100 to 100 km s^-1 reveals a widespread but highly irregular distribution of O VI, implying the existence of substantial amounts of hot gas with T ~ 3 × 10^5 K in the Milky Way thick disk/halo. The overall distribution of O VI can be described by a plane-parallel patchy absorbing layer with an average O VI mid-plane density of n_0(O VI) = 1.7 × 10^-8 cm^-3, an exponential scale height of ~2.3 kpc, and a ~0.25 dex excess of O VI in the northern Galactic polar region. Approximately 60 percent of the sky is covered by high-velocity O VI with |v_LSR| > 100 km s^-1. This high-velocity O VI traces a variety of phenomena in and near the Milky Way, including outflowing material from the Milky Way, tidal interactions with the Magellanic Clouds, accretion of gas onto the Milky Way, and warm/hot gas interactions in a highly extended (>70 kpc) Galactic corona or with hot intergalactic gas in the Local Group.
We used the Wisconsin Ultraviolet Photo-Polarimeter Experiment during the Astro-2 mission aboard the Space Shuttle Endeavour, to obtain ultraviolet spectropolarimetry of three classical novae that had recently gone into outburst. All three novae appear to have intrinsic polarization, with polarization changes across emission lines. This result indicates that, geometrically, the ejecta were quite aspherical.
We are developing an atlas of spectropolarimetric observations of 61 bright northern Be stars obtained from 1989-94 using the half-wave polarimeter (HPOL) at the 0.9 m telescope of the University of Wisconsin Pine Bluff Observatory (PBO). The data cover the wavelength range from about 3400 to 7600 Å, with a spectral resolution of about 25 Å. This atlas will contain all data (297 observations in total) obtained as part of a survey program with HPOL during the time when the detector in use was a dual Reticon array; the survey observations with HPOL continue with a new CCD detector, which extends the spectral coverage to 1.05 μm and improves the spectral resolution to about 12 Å. The CCD observations will be presented later in a second volume of the atlas.
Only a brief summary of the findings of the survey from the first 5 years of the project is presented here. A full analysis of the data will be included in a paper to be published elsewhere. The general wavelength dependence of polarization for classical Be stars can be considered on the basis of these observations, and results on polarimetric variability are available. In particular, we find that 56% (20 of 36) of the Be stars observed 3 or more times from 1989-94 show significantly variable polarization at the level of 0.1% changes (inclusion of preliminary results from the continuing CCD survey indicates that the percentage is even higher). The timescales for these changes range from as short as night-to-night to as long as several months. Several of the stars showed evidence for polarimetric “outbursts” during the time period covered by the observations.
This work experimentally examines the detachment of liquid droplets from both oleophilic and oleophobic fibres using an atomic force microscope. The droplet detachment force was found to increase with increasing fibre diameter, and forces were higher for oleophilic fibres than for oleophobic fibres. We also considered the detachment of droplets situated on the intersection of two fibres and on arrays of fibres (such as found in fibrous mats or filters), and found that the required detachment forces were higher than for similarly sized droplets on a single fibre, though not as high as theory predicts. A model was developed to predict the detachment force from single fibres, which agreed well with experimental results. It was found that the entire dataset (single and multiple fibres) could be best described by power-law relationships.
Probit-based models relating a proportional response variable to a temporal explanatory variable, assuming that the times to response are normally distributed within the population, have been used in seed biology for describing the rate of loss of viability during seed ageing and the progress of germination over time in response to environmental signals (e.g. water, temperature). These models may be expressed as generalized linear models (GLMs) with a probit (cumulative normal distribution) link function, and, using GLM fitting procedures in current statistical software, parameters of these models are efficiently estimated while taking into account the binomial error distribution of the dependent variable. The fitted parameters can then be used to calculate the ‘traditional’ model parameters, such as the hydro- or hydrothermal time constant, the mean or median response of the seeds (e.g. mean time to death, median base water potential), and the standard deviation of the normal distribution of that response. Furthermore, through consideration of the deviance and residuals, performing model evaluation and modification can lead to improved understanding of the underlying physiological/ecological processes. However, fitting a binomial GLM is not appropriate for the cumulative count data often collected from germination studies, as successive observations are not independent, and time-to-event/survival analysis should be considered instead. This review discusses well-known probit-based models, providing advice on how to collect appropriate data and fit the models to those data, and gives an overview of alternative analysis approaches to improve understanding of the underlying mechanisms of seed dormancy and germination behaviour.
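As a minimal sketch of the probit model described above, the mean and standard deviation of the normal time-to-death distribution can be estimated directly by maximum likelihood under the binomial error distribution. The ageing times, group sizes, and death counts below are invented for illustration; a real analysis would use the GLM fitting procedures (and, for cumulative counts, the time-to-event methods) discussed in this review.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical seed-ageing trial: at each storage time t (days),
# n seeds were tested for viability and k had died.
t = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
n = np.full_like(t, 50.0)
k = np.array([1.0, 2.0, 8.0, 19.0, 31.0, 42.0, 48.0, 49.0])

def neg_log_lik(params):
    """Binomial negative log-likelihood with probit link:
    P(dead by t) = Phi((t - mu) / sigma)."""
    mu, log_sigma = params
    p = norm.cdf((t - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)  # guard against log(0)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[np.median(t), 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"mean time to death ~ {mu_hat:.1f} d, sd ~ {sigma_hat:.1f} d")
```

The fitted mu corresponds to the mean time to death and sigma to the standard deviation of the normal distribution of times to death within the seed population; the same structure applies with water potential or temperature as the explanatory variable.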
The excavation of an oval crop mark close to the Abingdon causewayed enclosure showed a complex sequence of development, starting with a rectangular ditched enclosure and most probably ending with an oval barrow of a type with parallels elsewhere in lowland England. The site included the grave of two individuals associated with a polished knife, a belt slider and, most probably, a leaf-shaped arrowhead, and produced a series of radiocarbon dates extending from the Earlier to the Later Neolithic. A number of formal deposits around one end of the site are matched by similar material from the inner ditch of the causewayed enclosure, suggesting a direct link between the two monuments.
Depression is a common and important cause of morbidity and mortality worldwide. It is commonly treated with antidepressants and/or psychological therapy, but some people prefer alternative approaches such as exercise. There are a number of theoretical reasons why exercise may improve depression. This is an update of a review first published in 2009.
It might seem that the choice of treatments is not a statistical matter, being a question for the experimenter alone. However, there are very many examples of research programmes where the objectives of the programme either have been, or could have been, achieved much more efficiently by using statistical theory. In some situations, there are several intuitively appealing alternative sets of treatments, and the choice between these alternatives can be assisted by a statistical assessment of the efficiency with which the alternative sets of treatments satisfy the objectives of the experiment. In other situations, statistical methods can allow more objectives to be achieved from the same set of experimental resources. In yet other situations, statistical arguments can lead to adding experimental treatments which avoid the necessity of making possibly unjustifiable assumptions in the interpretation of results from the experimental data. The early theory of the design of experiments was less concerned with the choice of treatments than with the control of variation between units, but there is a large body of knowledge about treatment structure. Later, much of the statistical theory of experimental design was concerned with the optimal choice of treatments, and while some of this theory is very mathematical and some has little relevance to real experiments, the general principles are important and will be considered in Chapter 16.
The first point to clarify is that there are many different forms of objective for an experiment. Some experiments are concerned with the determination of optimal operating conditions for some chemical or biological process. Others are intended to make very precise comparisons between two or more methods of controlling or encouraging the growth of a biological organism. Some experiments are intended to provide information about the effects of increasing amounts of some stimulus on the output of the experimental units to which the stimulus is applied. And other experiments are planned to screen large numbers of possible contestants for a job, for example, drugs for treating cancer or athletes for a place in the Olympics. And most experiments have not one but several objectives.
The need to develop statistical theory for designing experiments stems, like the need for statistical analysis of numerical information, from the inherent variability of experimental results. In the physical sciences, this variability is frequently small and, when thinking of experiments at school in physics and chemistry, it is usual to think of ‘the correct result’ from an experiment. However, practical experience of such experiments makes it obvious that the results are, to a limited extent, variable, this variation arising as much from the complexities of the measurement procedure as from the inherent variability of experimental material. As the complexity of the experiment increases, and the differences of interest become relatively smaller, then the precision of the experiment becomes more important. An important area of experimentation within the physical sciences, where precision of results and hence the statistical design of experiments is important, is the optimisation and control of industrial chemical processes.
Whereas the physical sciences are thought of as exact, it is quite obvious that biological sciences are not. Most experiments on plants or animals use many plants or animals because it is clear that the variation between plants, or between animals, is very large. It is impossible, for example, to predict quantitatively the exact characteristics of one plant from the corresponding characteristics of another plant of the same species, age and origin.
Thus, no medical research worker would make confident claims for the efficacy of a new drug merely because a single patient responded well to the drug. In the field of market research, no newspaper would publish an opinion poll based on interviews with only two people, but would require a sample of at least 500, together with information about the method of selection of the sample.
An experiment to compare four varieties of tomato is to be run in a greenhouse at a horticultural research station. The greenhouse has 16 compartments in a 4 × 4 array. The initial proposal is to use a Latin square design, such as that shown in Figure 11.1(a), which is randomised, as in Chapter 8, by randomly permuting rows and columns, to give the square shown in Figure 11.1(b). On seeing the randomised design, the experimenter recalls that previous experiments in this greenhouse showed unusually high yields along the top-right to bottom-left diagonal and is concerned that this might bias the results in favour of variety D. She also notes that the situation is even worse in the unrandomised design.
One possible solution is to abandon the natural seeming 4 × 4 row-and-column structure and define blocks according to the distance from the top-right to bottom-left diagonal. However, this is not satisfactory either, since the experimenter has also previously observed row and column trends. Another possible solution is to restrict the randomisation more than usual to ensure that the design has all of the required properties.
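The randomisation step discussed above (permuting the rows and columns of a starting square) can be sketched as follows. The cyclic starting square and variety labels are illustrative, not the particular square of Figure 11.1(a).

```python
import random

varieties = "ABCD"

# Illustrative cyclic 4 x 4 Latin square: entry (i, j) is variety (i + j) mod 4.
base = [[varieties[(i + j) % 4] for j in range(4)] for i in range(4)]

def randomise(square):
    """Randomly permute rows and then columns, as in Chapter 8."""
    rows = random.sample(range(4), 4)
    cols = random.sample(range(4), 4)
    return [[square[r][c] for c in cols] for r in rows]

design = randomise(base)
for row in design:
    print(" ".join(row))
```

Permuting rows and columns preserves the Latin property, so every variety still appears exactly once in each row and each column; the difficulty raised above is that this randomisation does nothing to control systematic variation along the diagonals.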
Time-trend resistant run orders and designs
In industrial experiments, the experimental units are often sequential runs of the same process. The possibility of time trends needs to be allowed for, but resources are often too scarce to benefit from using very small blocks.
(a) In an experiment to investigate the effect of training on human-computer-human interactions, six subjects were randomly allocated to each of four training programmes. Subjects were then paired into 12 blocks using two replicates of an unreduced balanced incomplete block design. Each pair carried out a conversation through a computer ‘chat’ program. In addition to several response variables measured on each subject individually, each pair was given a score by an independent observer, for the success of their interaction.
We have only a single response representing each block. Can we use this information and, if so, how? If we can, do the block totals contain useful information about the effects of treatments on other responses? Does this affect how we should design the experiment? In particular, for this response, should we have used some blocks with both subjects getting the same treatment?
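The block structure in example (a) can be sketched by enumeration: an "unreduced" balanced incomplete block design for four treatments in blocks of size two takes every pair of treatments exactly once, and two replicates of it give the 12 blocks described above. The programme labels are hypothetical.

```python
from itertools import combinations

# Hypothetical labels for the four training programmes.
programmes = ["P1", "P2", "P3", "P4"]

# One replicate of the unreduced BIBD: every pair of treatments once.
one_replicate = list(combinations(programmes, 2))  # 6 blocks of size 2

# Two replicates give the 12 blocks (pairs of subjects) in the experiment.
blocks = one_replicate * 2

print(len(blocks))  # 12
```

Each programme appears in six blocks, so each of its six subjects can be assigned to a different block; note that no block pairs a programme with itself, which is exactly the design question raised above.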
(b) Eight feeds are to be compared for their effects on the growth of young chickens. The experiment will be carried out using 32 cages, arranged in four brooders, with each brooder having four tiers of two cages. Should the experiment be designed to ensure that each treatment appears once in each brooder and once in each tier, or should we consider the brooder×tier combinations as blocks of size 2 and choose a good design for this setup? Can we do both simultaneously?
Identifying multiple levels in data
In Section 7.3 we considered the analysis for general block–treatment designs. However, in that analysis only the information about treatments from comparisons within blocks was considered.
Many of the problems considered in previous chapters have occurred during statistical consultancy sessions and are appropriate to motivate this chapter. Typical examples are the peppers experiment in Chapter 7 and the rabbits experiment in Chapter 15. Three more problems occurred in the work of one of us (RM) in quick succession within three weeks in early 1984 and illustrate a wide range of practical design problems.
(a) Problem 1: the moving cups. In assaying the chemical concentration of liquid samples, a set of about 24 samples can be automatically assayed in a single batch. The samples are placed in cups which are held in position on a circular disc, and the disc rotates so that each cup in turn appears beneath the assay machinery. It is required to investigate changes in the concentration of chemicals over time, and therefore the 24 samples are assayed over a time period of between one and two hours. Two shapes of cup are available, and two sizes of sample are possible within each cup. Three chemicals, sodium (Na), chlorine (Cl) and potassium (K), are of interest, and it is proposed to have two levels (zero and some) for each chemical. The cups may be covered during the run of 24 samples until each is sampled, but the covers, if used, must be used for all 24 samples in the run. […]
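A quick enumeration shows why the treatment structure in Problem 1 is awkward: a full factorial in cup shape, sample size and the two levels of each of the three chemicals has 2^5 = 32 combinations, more than the 24 cup positions available in a single run. The factor labels below are invented for illustration.

```python
from itertools import product

shapes = ["shape1", "shape2"]  # two cup shapes (labels assumed)
sizes = ["small", "large"]     # two sample sizes within each cup (labels assumed)
levels = ["zero", "some"]      # two levels for each of Na, Cl and K

# Full factorial: shape x size x Na x Cl x K.
treatments = list(product(shapes, sizes, levels, levels, levels))
print(len(treatments))  # 32 combinations, versus 24 positions per run
```

Some fraction or subset of the 32 combinations must therefore be chosen for each run, which is precisely the kind of treatment-choice question this chapter addresses.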