The current study advanced research on the link between community violence exposure and aggression by comparing the effects of violence exposure on different functions of aggression and by testing four potential mediators of this relationship (i.e., callous–unemotional traits, consideration of others, impulse control, and anxiety). Analyses were conducted in an ethnically/racially diverse sample of 1,216 male first-time juvenile offenders (M = 15.30 years, SD = 1.29). Our results indicated that violence exposure had direct effects on both proactive and reactive aggression 18 months later. The predictive link of violence exposure to proactive aggression was no longer significant after controlling for proactive aggression at baseline and the overlap with reactive aggression. In contrast, violence exposure predicted later reactive aggression even after controlling for baseline reactive aggression and the overlap with proactive aggression. Mediation analyses of the association between violence exposure and reactive aggression indicated indirect effects through all potential mediators, but the strongest indirect effect was through impulse control. The findings help to advance knowledge of the consequences of community violence exposure for justice-involved youth.
The anabolic potential of a dietary protein is determined by its ability to elicit postprandial rises in circulating essential amino acids and insulin. Minimal data exist regarding the bioavailability and insulinotropic effects of non-animal-derived protein sources. Mycoprotein is a sustainable and rich source of non-animal-derived dietary protein. We investigated the impact of mycoprotein ingestion, in a dose–response manner, on acute postprandial hyperaminoacidaemia and hyperinsulinaemia. In all, twelve healthy young men completed five experimental trials in a randomised, single-blind, cross-over design. During each trial, volunteers consumed a test drink containing either 20 g milk protein (MLK20) or a mass-matched (not protein-matched, due to the fibre content) bolus of mycoprotein (20 g; MYC20), a protein-matched bolus of mycoprotein (40 g; MYC40), 60 g (MYC60) or 80 g (MYC80) of mycoprotein. Circulating amino acid, insulin and uric acid concentrations, and clinical chemistry profiles, were assessed in arterialised venous blood samples during a 4-h postprandial period. Mycoprotein ingestion resulted in slower but more sustained hyperinsulinaemia and hyperaminoacidaemia compared with milk when protein matched, with overall bioavailability equivalent between conditions (P>0·05). Increasing the dose of mycoprotein amplified these effects, with some evidence of a plateau at 60–80 g. Peak postprandial leucine concentrations were 201 (sem 24) (30 min), 118 (sem 10) (90 min), 150 (sem 14) (90 min), 173 (sem 23) (45 min) and 201 (sem 21) (90 min) µmol/l for MLK20, MYC20, MYC40, MYC60 and MYC80, respectively. Mycoprotein represents a bioavailable and insulinotropic dietary protein source. Consequently, mycoprotein may be a useful source of dietary protein to stimulate muscle protein synthesis rates.
Photometric observations on the UBV system have been made of a number of optically identified radio sources. The measurements are basically of two types: (1) offset photometry with the Siding Spring 40-inch reflector of objects identified as probable quasars or N galaxies, and (2) observations with the Siding Spring 24-inch reflector of radio galaxies brighter than V = 14ᵐ.0.
Observations for the Parkes 2700 MHz catalogue are carried out in two stages: (i) a relatively fast finding survey, followed by (ii) accurate measurement (at 2700 MHz) of flux densities and positions of the sources on the survey scans. For the first six parts of the catalogue, intervals of several weeks between the stages resulted from the necessity to reduce the (analogue) survey scans manually. This communication describes a computer program which records and reduces these scans, and which produces an on-line map and source listing for the survey region. Thus the measurements of the second stage may be carried out immediately after the survey; additional advantages provided by the technique are discussed later in the paper.
Background: Transitioning from medical school to residency is difficult and stressful, necessitating innovation in easing this transition. In response, a Canadian neurosurgical Rookie Camp was designed and implemented to foster acquisition of technical, cognitive and behavioural skills among incoming Canadian postgraduate year one (PGY-1) neurosurgery residents. Methods: The inaugural Rookie Camp was held in July 2012 in Halifax. The curriculum was developed based on a national needs-assessment and consisted of a pre-course manual, 7 case-based stations, 4 procedural skills stations and 2 group discussions. The content was clinically focused, used a variety of teaching methods, and addressed multiple CanMEDS competencies. Evaluation included participant and faculty surveys and a pre-course, post-course, and 3-month retention knowledge test. Results: Seventeen of 23 PGY-1 Canadian neurosurgical residents participated in the Camp. All agreed the course content was relevant for PGY-1 training and the experience prepared them for residency. All participants would recommend the course to future neurosurgical residents. A statistically significant improvement was observed in knowledge related to course content (F(2,32) = 7.572, p < 0.002). There were no significant differences between post-test and retention-test scores at three months. Conclusion: The inaugural Canadian Neurosurgery Rookie Camp for PGY-1 residents was successfully delivered, with engagement from participants, training programs, the Canadian Neurosurgical Society, and the Royal College. In addition to providing fundamental knowledge, which was shown to be retained, the course eased junior residents’ transition to residency by fostering camaraderie and socialization within the specialty.
Antarctic and Southern Ocean science is vital to understanding natural variability, the processes that govern global change and the role of humans in the Earth and climate system. The potential for new knowledge to be gained from future Antarctic science is substantial. Therefore, the international Antarctic community came together to ‘scan the horizon’ to identify the highest priority scientific questions that researchers should aspire to answer in the next two decades and beyond. Wide consultation was a fundamental principle for the development of a collective, international view of the most important future directions in Antarctic science. From the many possibilities, the horizon scan identified 80 key scientific questions through structured debate, discussion, revision and voting. Questions were clustered into seven topics: i) Antarctic atmosphere and global connections, ii) Southern Ocean and sea ice in a warming world, iii) ice sheet and sea level, iv) the dynamic Earth, v) life on the precipice, vi) near-Earth space and beyond, and vii) human presence in Antarctica. Answering the questions identified by the horizon scan will require innovative experimental designs, novel applications of technology, invention of next-generation field and laboratory approaches, and expanded observing systems and networks. Unbiased, non-contaminating procedures will be required to retrieve the requisite air, biota, sediment, rock, ice and water samples. Sustained year-round access to Antarctica and the Southern Ocean will be essential to increase winter-time measurements. Improved models are needed that represent Antarctica and the Southern Ocean in the Earth System, and provide predictions at spatial and temporal resolutions useful for decision making. A co-ordinated portfolio of cross-disciplinary science, based on new models of international collaboration, will be essential as no scientist, programme or nation can realize these aspirations alone.
The future of centimetre and metre-wave astronomy lies with the Square Kilometre Array (SKA), a telescope under development by a consortium of 17 countries that will be 50 times more sensitive than any existing radio facility. Most of the key science for the SKA will be addressed through large-area imaging of the Universe at frequencies from a few hundred MHz to a few GHz. The Australian SKA Pathfinder (ASKAP) is a technology demonstrator aimed at the mid-frequency range, and achieves instantaneous wide-area imaging through the development and deployment of phased-array feed systems on parabolic reflectors. The large field-of-view makes ASKAP an unprecedented synoptic telescope that will make substantial advances in SKA key science. ASKAP will be located at the Murchison Radio Observatory in inland Western Australia, one of the most radio-quiet locations on the Earth and one of two sites selected by the international community as a potential location for the SKA. In this paper, we outline an ambitious science program for ASKAP, examining key science such as understanding the evolution, formation and population of galaxies including our own, understanding the magnetic Universe, revealing the transient radio sky and searching for gravitational waves.
EMU is a wide-field radio continuum survey planned for the new Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The primary goal of EMU is to make a deep (rms ∼ 10 μJy/beam) radio continuum survey of the entire southern sky at 1.3 GHz, extending as far north as +30° declination, with a resolution of 10 arcsec. EMU is expected to detect and catalogue about 70 million galaxies, including typical star-forming galaxies up to z ∼ 1, powerful starbursts to even greater redshifts, and active galactic nuclei to the edge of the visible Universe. It will undoubtedly discover new classes of object. This paper defines the science goals and parameters of the survey, and describes the development of techniques necessary to maximise the science return from EMU.
The emerging commercial farmers in Namibia represent a new category of farmer that has entered the freehold farming sector since Namibia's independence in 1990. Several assessments of agricultural training needs have been carried out with these farmers but the issue of human–carnivore conflict has not yet been addressed. This study investigated one of the key components driving human–carnivore conflict, namely the attitudes of these farmers towards carnivores and how this affects the level of conflict and carnivore removal. We observed that the attitudes of these farmers are similar to farmers elsewhere. In general, farmers reported high levels of human–carnivore conflict. Many farmers perceived that they had a carnivore problem when sighting a carnivore or its tracks, even in the absence of verified carnivore depredation. Such sightings were a powerful incentive to prompt farmers to want to take action by removing carnivores, often believed to be the only way to resolve human–carnivore conflict. Nonetheless, our study showed that farmers who understood that carnivores play an ecological role had a more favourable attitude and were less likely to want all carnivores removed. We found that negative attitudes towards carnivores and loss of livestock, especially of small stock, predicted actual levels of human–carnivore conflict. Goat losses additionally predicted actual carnivore removals. We discuss the implications of our findings in relation to the activities of support structures for emerging commercial farmers in Namibia.
Lactoferrin (LTF) is a milk glycoprotein favorably associated with the immune system of dairy cows. Somatic cell count is often used as an indicator of mastitis in dairy cows, but knowledge of the milk LTF content could aid in mastitis detection. An inexpensive, rapid and robust method to predict milk LTF is required. The aim of this study was to develop an equation to quantify the LTF content in bovine milk using mid-infrared (MIR) spectrometry. LTF was quantified by enzyme-linked immunosorbent assay (ELISA), and all milk samples were analyzed by MIR. After discarding samples with a coefficient of variation between 2 ELISA measurements of more than 5% and the spectral outliers, the calibration set consisted of 2499 samples from Belgium (n = 110), Ireland (n = 1658) and Scotland (n = 731). Six statistical methods were evaluated to develop the LTF equation. The best method yielded a cross-validation coefficient of determination for LTF of 0.71 and a cross-validation standard error of 50.55 mg/l of milk. An external validation was undertaken using an additional dataset containing 274 Walloon samples. The validation coefficient of determination was 0.60. To assess the usefulness of the MIR-predicted LTF, four logistic regressions using somatic cell score (SCS) and MIR LTF were developed to predict the presence of mastitis. The dataset used to build the logistic regressions consisted of 275 mastitis records and 13 507 MIR data collected in 18 Walloon herds. The LTF and the interaction SCS × LTF effects were significant (P < 0.001 and P = 0.02, respectively). When only the predicted LTF was included in the model, the prediction of the presence of mastitis was not accurate despite a moderate correlation between SCS and LTF (r = 0.54). The specificity and the sensitivity of the models were assessed using Walloon data (i.e. internal validation) and data collected from a research herd at the University of Wisconsin–Madison (i.e. 5886 Wisconsin MIR records related to 93 mastitis events – external validation). Model specificity was better when LTF was included in the regression along with SCS than with SCS alone. Correct classification of non-mastitis records was 95.44% and 92.05% for Wisconsin and Walloon data, respectively. The same conclusion was drawn from the Hosmer–Lemeshow test. In conclusion, this study confirms the possibility of quantifying an LTF indicator from milk MIR spectra, and suggests the usefulness of this indicator, combined with SCS, for detecting the presence of mastitis. Moreover, knowledge of the milk LTF content could also help improve the nutritional quality of milk.
In the past, Stephen Senn's (2003) analogy may well have been right. Paraphrasing his words for the sake of moderate language: scientists regarded statistics as the one-night stand: the quick fix, avoiding long-term entanglement. This analogy is no longer apt. Statistical procedures now drive many if not most areas of current astrophysics and cosmology. In particular, the currently understood nature of our Universe is a product of statistical analysis of large and combined data sets. Here we briefly describe the scene in three areas dominating definition of the current model of the Universe and its history. The three areas inextricably tie together the shape and content of the Universe and the formation of structure and galaxies, leading to life as we know it. While these sketches are not reviews, we show by cross-referencing how frequently our preceding discussions play into current research in cosmology.
The galaxy universe
The story of galaxy formation since 1990 is based on two premises. Firstly, it was widely accepted that the matter content in the Universe is primarily cold and dark – CDM prevails. The recognition of dark matter was slow, despite Zwicky (1937) demonstrating its existence via the cosmic virial theorem. The measurements of rotation curves of spiral galaxies (e.g. Rubin et al., 1980) convinced us.
Peter Scheuer started this. In 1977 he walked into JVW's office in the Cavendish Lab and quietly asked for advice on what further material should be taught to the new intake of Radio Astronomy graduate students (that year including the hapless CRJ). JVW, wrestling with simple chi-square testing at the time, blurted out ‘They know nothing about practical statistics …’. Peter left thoughtfully. A day later he returned. ‘Good news! The Management Board has decided that the students are going to have a course on practical statistics.’ ‘Can I sit in?’ JVW asked innocently. ‘Better news! The Management Board has decided that you're going to teach it …’.
So, for us, began the notion of practical statistics. A subject that began with gambling is not an arcane academic pursuit, but it is certainly subtle as well. It is fitting that Peter Scheuer was involved at the beginning of this (lengthy) project; his style of science exemplified both subtlety and pragmatism. We hope that we can convey something of both. If an echo of Peter's booming laugh is sometimes heard in these pages, it is because we both learned from him that a useful answer is often much easier – and certainly much more entertaining – than you at first think.
After the initial course, the material for this book grew out of various further courses, journal articles and the abundant personal experience that results from understanding just a little of any field of knowledge that counts Gauss and Laplace amongst its originators.
Teaching is highly educational for teachers. Teaching from the first edition revealed to us how much students enjoyed Monte Carlo methods, and the ability with such methods to test and to check every derivation, test, procedure or result in the book. Thus, a change in the second edition is to introduce Monte Carlo as early as possible (Chapter 2). Teaching also revealed to us areas in which we assumed too much (and too little). We have therefore aimed for some smoothing of learning gradients where slope changes have appeared to be too sudden. Chapters 6 and 7 substantially amplify our previous treatments of Bayesian hypothesis testing/modelling, and include much more on model choice and Markov chain Monte Carlo (MCMC) analysis. Our previous chapter on 2D (sky distribution) analysis has been significantly revised. We have added a final chapter sketching the application of statistics to some current areas of astrophysics and cosmology, including galaxy formation and large-scale structure, weak gravitational lensing, and the cosmic microwave background (CMB) radiation.
We received very helpful comments from anonymous referees whom CUP consulted about our proposals for the second edition. These reviewers requested that we keep the book (a) practical and (b) concise and small – ‘backpackable’, as one of them put it. We have additional colleagues to thank, either for further discussions, for finding errata, or because we just plain missed them from our first-edition list: Matthew Colless, Jim Condon, Mike Disney, Alan Heavens, Martin Hendry, Jim Moran, Douglas Scott, Robert Smith and Malte Tewes.
Frustra fit per plura quod potest fieri per pauciora – it is futile to do with more things that which can be done with fewer.
(William of Ockham, c.1285–1349)
Nature laughs at the difficulties of integration.
(Pierre-Simon de Laplace, 1749–1827; Gordon & Sorkin, 1959)
One of the attractive features of the Bayesian method is that it offers a principled way of making choices between models. In classical statistics, we may fit a model, say by least squares, and then use the resulting χ2 statistic to decide if we should reject the model. We would do this if the deviations from the model are unlikely to have occurred by chance. However, it is not clear what to do if the deviations are likely to have occurred, and it is even less clear what to do if several models are available. For example, if a model is in fact correct, the significance level derived from a χ2 test (or, indeed, any significance test) will be uniformly distributed between zero and one (Exercise 7.1).
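This last point is easy to verify by simulation. The following is a minimal Monte Carlo sketch, not from the text: it assumes Gaussian noise of known level, data drawn from the true model (here, a constant zero signal), and the availability of numpy and scipy.

```python
# Monte Carlo check: when the model is correct, the significance level
# from a chi-squared test is uniformly distributed on [0, 1].
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_points, n_trials = 50, 5000
sigma = 1.0                                   # known Gaussian noise level
p_values = np.empty(n_trials)

for i in range(n_trials):
    # Data simulated from the true model: zero signal plus Gaussian noise.
    data = rng.normal(0.0, sigma, n_points)
    chi2 = np.sum((data / sigma) ** 2)        # chi-squared statistic
    # Significance level: chance probability of a chi2 at least this large.
    p_values[i] = stats.chi2.sf(chi2, df=n_points)

# A Kolmogorov-Smirnov test against U(0,1) confirms the uniformity.
ks_stat, ks_p = stats.kstest(p_values, "uniform")
print(f"KS statistic vs U(0,1): {ks_stat:.4f} (p = {ks_p:.3f})")
```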
The problem with model choice by χ2 (or any similar classical method) is that these methods do not answer the question we wish to ask. For a model H and data D, a significance level derived from a minimum χ2 tells us about the conditional probability, prob(D | H).
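What we would like instead is the posterior probability of the model given the data, prob(H | D). Bayes' theorem connects the two:

prob(H | D) = prob(D | H) prob(H) / prob(D),

so that for two rival models H1 and H2, the data enter only through the likelihood ratio prob(D | H1)/prob(D | H2), multiplied by the ratio of the prior probabilities.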
The stock market is an excellent economic forecaster. It has predicted six of the last three recessions.
The only function of economic forecasting is to make astrology look respectable.
(John Kenneth Galbraith)
In contrast to previous chapters, we now consider data transformation: how to transform data in order to produce improved outcomes in extracting or enhancing signal.
Many observations consist of sequential data: intensity as a function of position as a radio telescope scans across the sky, signal varying along a row of a CCD detector, single-slit spectra, or time measurements of intensity (or of any other property). What sort of issues might concern us?
(i) trend-finding; can we predict the future behaviour of data?
(ii) baseline detection and/or assessment, so that signal on this baseline can be analysed;
(iii) signal detection: identification, for example, of a spectral line or source in sequential data where the noise may be comparable in magnitude to the signal;
(iv) filtering to improve signal-to-noise ratio (see the sketch following this list);
(v) quantifying the noise;
(vi) period-finding; searching the data for periodicities;
(vii) correlation of time series to find correlated signal between antenna pairs or to find spectral lines;
(viii) modelling; many astronomical systems give us our data convolved with some more or less known instrumental function, and we need to take this into account to get back to the true data.
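As an illustration of item (iv), here is a minimal sketch of boxcar (running-mean) filtering applied to a synthetic scan. All data and parameter values are hypothetical, and only numpy is assumed; the weak ‘spectral line’ is injected so that the noise is comparable in magnitude to the signal.

```python
# Boxcar filtering: white noise is suppressed by ~sqrt(width) while a
# feature broader than the filter is largely preserved.
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(512)
line = 0.5 * np.exp(-0.5 * ((x - 256) / 12.0) ** 2)   # weak, broad 'line'
scan = line + rng.normal(0.0, 1.0, x.size)            # noise ~ signal

width = 9                                             # boxcar width, samples
kernel = np.ones(width) / width
smoothed = np.convolve(scan, kernel, mode="same")

off = np.r_[0:200, 312:512]                  # off-line region, noise only
sig_smoothed = np.convolve(line, kernel, mode="same")  # filtered noiseless line

print(f"noise rms : raw {scan[off].std():.3f} -> smoothed {smoothed[off].std():.3f}")
print(f"peak SNR  : raw {line.max() / scan[off].std():.2f} -> "
      f"smoothed {sig_smoothed.max() / smoothed[off].std():.2f}")
```

The filter buys roughly a factor of three in signal-to-noise here (the square root of the boxcar width), at the cost of slightly broadening the line; a feature narrower than the filter would be attenuated as well as broadened.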
Statistics, the most important science in the whole world: for upon it depends the practical application of every other science and of every art.
If your experiment needs statistics, you ought to have done a better experiment.
Science is about decision. Building instruments, collecting data, reducing data, compiling catalogues, classifying, doing theory – all of these are tools, techniques or aspects which are necessary. But we are not doing science unless we are deciding something; only decision counts. Is this hypothesis or theory correct? If not, why not? Are these data self-consistent or consistent with other data? Adequate to answer the question posed? What further experiments do they suggest?
We decide by comparing. We compare by describing properties of an object or sample, because lists of numbers or images do not present us with immediate results enabling us to decide anything. Is the faint smudge on an image a star or a galaxy? We characterize its shape, crudely perhaps, by a property, say the full-width half-maximum, the FWHM, which we compare with the FWHM of the point-spread function. We have represented a data set, the image of the object, by a statistic, and in so doing we reach a decision.
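A toy version of this decision, as a sketch only (hypothetical one-dimensional profiles, an arbitrary 1.5× threshold, numpy assumed):

```python
# Reduce an image profile to a single statistic, its FWHM, and compare it
# with the FWHM of the point-spread function to decide: star or galaxy?
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a 1-D profile (crude: the width of
    the region lying above half the peak)."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return dx * (above[-1] - above[0])

x = np.linspace(-10, 10, 201)
psf    = np.exp(-0.5 * (x / 1.5) ** 2)   # point-spread function
smudge = np.exp(-0.5 * (x / 2.8) ** 2)   # the faint smudge in question

dx = x[1] - x[0]
is_galaxy = fwhm(smudge, dx) > 1.5 * fwhm(psf, dx)   # arbitrary threshold
print("galaxy" if is_galaxy else "star")  # a decision from one statistic
```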
Statistics are there for decision and because we know a background against which to take a decision.
In embarking on statistics we are entering a vast area, enormously developed for the Gaussian distribution in particular. This is classical territory; historically, statistics were developed because the approach now called Bayesian had fallen out of favour. Hence, direct probabilistic inferences were superseded by the indirect and conceptually different route, going through statistics and intimately linked to hypothesis testing. The use of statistics is not particularly easy. The alternatives to Bayes' methods are subtle and not very obvious; they are also associated with some fairly formidable mathematical machinery. We will avoid this, presenting only results and showing the use of statistics, while trying to make clear the conceptual foundations.
Statistics are designed to summarize, reduce or describe data. The formal definition of a statistic is that it is some function of the data alone. For a set of data X1, X2, …, some examples of statistics might be the average, the maximum value or the average of the cosines. Statistics are therefore combinations of finite amounts of data. In the following discussion, and indeed throughout, we try to distinguish particular fixed values of the data, and functions of the data alone, by upper case (except for Greek letters). Possible values, being variables, we will denote in the usual algebraic spirit by lower case.
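In code the definition is almost trivial; each of the examples just mentioned is some function of the data alone (a sketch assuming numpy, with X an arbitrary made-up data set):

```python
# A statistic is any function of the data alone.
import numpy as np

X = np.array([2.1, -0.3, 1.7, 0.9, 3.2])   # a fixed, particular data set
statistics = {
    "average": X.mean(),
    "maximum": X.max(),
    "average of cosines": np.cos(X).mean(),
}
print(statistics)
```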
(interchange between Peter Scheuer and his then student, CRJ)
(The) premise that statistical significance is the only reliable indication of causation is flawed.
(US Supreme Court, Matrixx Initiatives, Inc. vs. Siracusano, 22 March 2011)
It is often the case that we need to do sample comparison: we have someone else's data to compare with ours; or someone else's model to compare with our data; or even our data to compare with our model. We need to make the comparison and to decide something. We are doing hypothesis testing – are our data consistent with a model, with somebody else's data? In searching for correlations as we were in Chapter 4, we were hypothesis testing; in the model-fitting of Chapter 6 we are involved in data modelling and parameter estimation.
A frequentist point of view might be to consider the entire science of statistical inference as hypothesis testing followed by parameter estimation. However, if experiments were properly designed, the Bayesian approach would be right: it answers the sample-comparison questions we wished to pose in the first place, namely what is the probability, given the data, that a particular model is right? Or: what is the probability, given two sets of data, that they agree? The two-stage process should be unnecessary at best.
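To make the second of those questions concrete, here is a minimal numerical sketch of a Bayesian comparison of two data sets through the ratio of marginal likelihoods. Everything in it is assumed for illustration: Gaussian data with known noise, a flat prior on the mean over a broad range, and numpy/scipy available.

```python
# Do two data sets agree?  Compare H1 (one common mean) with H2 (two
# separate means) via the Bayes factor, prob(D|H1)/prob(D|H2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sigma = 1.0                                   # known noise level
d1 = rng.normal(0.0, sigma, 20)               # our data
d2 = rng.normal(0.3, sigma, 20)               # somebody else's data

mu = np.linspace(-5.0, 5.0, 2001)             # grid over the unknown mean
prior = 1.0 / (mu[-1] - mu[0])                # flat prior density
dmu = mu[1] - mu[0]

def marginal(data):
    """prob(D | H): the likelihood averaged over the prior on the mean."""
    loglike = np.array([stats.norm.logpdf(data, m, sigma).sum() for m in mu])
    return np.sum(np.exp(loglike)) * prior * dmu

evidence_common   = marginal(np.concatenate([d1, d2]))     # H1
evidence_separate = marginal(d1) * marginal(d2)            # H2
print(f"Bayes factor (common/separate): {evidence_common / evidence_separate:.2f}")
```

A Bayes factor well above one says the data sets are consistent with a common mean; well below one, that they disagree — a direct answer to the question posed, with no intermediate significance test.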