To update current estimates of non–device-associated pneumonia (ND pneumonia) rates and their frequency relative to ventilator-associated pneumonia (VAP), and to identify risk factors for ND pneumonia.
Academic teaching hospital.
All adult hospitalizations between 2013 and 2017 were included. Pneumonia cases (device associated and non–device associated) were captured through comprehensive, hospital-wide active surveillance using CDC definitions and methodology.
From 2013 to 2017, there were 163,386 hospitalizations (97,485 unique patients) and 771 pneumonia cases (520 ND pneumonia and 191 VAP). The rate of ND pneumonia remained stable, with 4.15 and 4.54 ND pneumonia cases per 10,000 hospitalization days in 2013 and 2017, respectively (P = .65). In 2017, 74% of pneumonia cases were ND pneumonia. Male sex and increasing age were both associated with increased risk of ND pneumonia. Additionally, patients with chronic bronchitis or emphysema (hazard ratio [HR], 2.07; 95% confidence interval [CI], 1.40–3.06), congestive heart failure (HR, 1.48; 95% CI, 1.07–2.05), or paralysis (HR, 1.72; 95% CI, 1.09–2.73) were at increased risk, as were those who were immunosuppressed (HR, 1.54; 95% CI, 1.18–2.00) or in the ICU (HR, 1.49; 95% CI, 1.06–2.09). We did not detect a change in ND pneumonia risk with use of chlorhexidine mouthwash, total parenteral nutrition, any of the medications of interest, or prior ventilation.
The incidence rate of ND pneumonia did not change from 2013 to 2017, and 3 of 4 nosocomial pneumonia cases were non–device associated. Hospital infection prevention programs should consider expanding the scope of surveillance to include non-ventilated patients. Future research should continue to look for modifiable risk factors and should assess potential prevention strategies.
To detect modest associations of dietary intake with disease risk, observational studies need to be large and control for moderate measurement errors. The reproducibility of dietary intakes of macronutrients, food groups and dietary patterns (vegetarian and Mediterranean) was assessed in adults in the UK Biobank study on up to five occasions using a web-based 24-h dietary assessment (n 211 050), and using short FFQ recorded at baseline (n 502 655) and after 4 years (n 20 346). When the means of two 24-h assessments were used, the intra-class correlation coefficients (ICC) for macronutrients varied from 0·63 for alcohol to 0·36 for polyunsaturated fat. The ICC for food groups also varied from 0·68 for fruit to 0·18 for fish. The ICC for the FFQ varied from 0·66 for meat and fruit to 0·48 for bread and cereals. The reproducibility was higher for vegetarian status (κ > 0·80) than for the Mediterranean dietary pattern (ICC = 0·45). Overall, the reproducibility of pairs of 24-h dietary assessments and single FFQ used in the UK Biobank were comparable with results of previous prospective studies using conventional methods. Analyses of diet–disease relationships need to correct for both measurement error and within-person variability in dietary intake in order to reliably assess any such associations with disease in the UK Biobank.
To update current estimates of non–device-associated urinary tract infection (ND-UTI) rates and their frequency relative to catheter-associated UTIs (CA-UTIs) and to identify risk factors for ND-UTIs.
Academic teaching hospital.
All adult hospitalizations between 2013 and 2017 were included. UTIs (device and non-device associated) were captured through comprehensive, hospital-wide active surveillance using Centers for Disease Control and Prevention case definitions and methodology.
From 2013 to 2017 there were 163,386 hospitalizations (97,485 unique patients) and 1,273 UTIs (715 ND-UTIs and 558 CA-UTIs). The rate of ND-UTIs remained stable, decreasing slightly from 6.14 to 5.57 ND-UTIs per 10,000 hospitalization days during the study period (P = .15). However, the proportion of UTIs that were non–device related increased from 52% to 72% (P < .0001). Female sex (hazard ratio [HR], 1.94; 95% confidence interval [CI], 1.50–2.50) and increasing age were associated with increased ND-UTI risk. Additionally, the following conditions were associated with increased risk: peptic ulcer disease (HR, 2.25; 95% CI, 1.04–4.86), immunosuppression (HR, 1.48; 95% CI, 1.15–1.91), trauma admissions (HR, 1.36; 95% CI, 1.02–1.81), total parenteral nutrition (HR, 1.99; 95% CI, 1.35–2.94) and opioid use (HR, 1.62; 95% CI, 1.10–2.32). Urinary retention (HR, 1.41; 95% CI, 0.96–2.07), suprapubic catheterization (HR, 2.28; 95% CI, 0.88–5.91), and nephrostomy tubes (HR, 2.02; 95% CI, 0.83–4.93) may also increase risk, but estimates were imprecise.
Greater than 70% of UTIs are now non–device associated. Current targeted surveillance practices should be reconsidered in light of this changing landscape. We identified several modifiable risk factors for ND-UTIs, and future research should explore the impact of prevention strategies that target these factors.
The aim of this study was to explore the experiences of radiotherapy students on clinical placement, specifically focussing on the provision of well-being support from clinical supervisors.
Materials and methods:
Twenty-five students from the University of the West of England and City University of London completed an online evaluation survey relating to their experiences of placement, involving Likert scales and open-ended questions.
The quantitative results were generally positive; however, the qualitative findings were mixed. Three themes emerged: (1) provision of information and advice; (2) an open, inclusive and supportive working environment; and (3) a lack of communication, understanding, and consistency.
Students’ experiences on placement differed greatly and appeared to relate to their specific interactions with different members of staff. It is suggested that additional training around providing well-being support to students may be of benefit to clinical supervisors.
To determine the burden of skin and soft tissue infections (SSTI), the nature of antimicrobial prescribing and factors contributing to inappropriate prescribing for SSTIs in Australian aged care facilities, SSTI and antimicrobial prescribing data were collected via a standardised national survey. The proportion of residents prescribed ⩾1 antimicrobial for a presumed SSTI and the proportion whose infections met McGeer et al. surveillance definitions were determined. Antimicrobial choice was compared to national prescribing guidelines and prescription duration analysed using a negative binomial mixed-effects regression model. Of 12 319 surveyed residents, 452 (3.7%) were prescribed an antimicrobial for an SSTI and 29% of these residents had confirmed infection. Topical clotrimazole was most frequently prescribed, often for unspecified indications. Where an indication was documented, antimicrobial choice was generally aligned with recommendations. Duration of prescribing (in days) was associated with use of an agent for prophylaxis (rate ratio (RR) 1.63, 95% confidence interval (CI) 1.08–2.52), PRN orders (RR 2.10, 95% CI 1.42–3.11) and prescription of a topical agent (RR 1.47, 95% CI 1.08–2.02), while documentation of a review or stop date was associated with reduced duration of prescribing (RR 0.33, 95% CI 0.25–0.43). Antimicrobial prescribing for SSTI is frequent in aged care facilities in Australia. Methods to enhance appropriate prescribing, including clinician documentation, are required.
Despite many interventions aiming to reduce excessive gestational weight gain (GWG), their impact on infant anthropometric outcomes remains unclear. The aim of this review was to evaluate offspring anthropometric outcomes in studies designed to reduce GWG. A systematic search of seven international databases, one clinical trial registry and three Chinese databases was conducted without date limits. Studies were categorised by intervention type: diet, physical activity (PA), lifestyle (diet + PA), other, gestational diabetes mellitus (GDM) (diet, PA, lifestyle, metformin and other). Meta-analyses were reported as weighted mean difference (WMD) for birthweight and birth length, and risk ratio (RR) for small for gestational age (SGA), large for gestational age (LGA), macrosomia and low birth weight (LBW). Collectively, interventions reduced birthweight, risk of macrosomia and LGA by 71 g (WMD: −70.67, 95% CI −101.90 to −39.43, P<0.001), 16% (RR: 0.84, 95% CI 0.73–0.98, P=0.026) and 19% (RR: 0.81, 95% CI 0.69–0.96, P=0.015), respectively. Diet interventions decreased birthweight and LGA by 99 g (WMD −98.80, 95% CI −178.85 to −18.76, P=0.016) and 65% (RR: 0.35, 95% CI 0.17–0.72, P=0.004). PA interventions reduced the risk of macrosomia by 51% (RR: 0.49, 95% CI 0.26–0.92, P=0.036). In women with GDM, diet and lifestyle interventions reduced birthweight by 211 and 296 g, respectively (WMD: −210.93, 95% CI −374.77 to −46.71, P=0.012 and WMD: −295.93, 95% CI −501.76 to −90.10, P=0.005, respectively). Interventions designed to reduce excessive GWG lead to a small reduction in infant birthweight and risk of macrosomia and LGA, without influencing the risk of adverse outcomes including LBW and SGA.
Palaeoecology has been prominent in studies of environmental change during the Holocene epoch in Scotland. These studies have been dominated by palynology (pollen, spore and related bio- and litho-stratigraphic analyses) as a key approach to multi- and inter-disciplinary investigations of topics such as vegetation, climate and landscape change. This paper highlights some key dimensions of the pollen- and vegetation-based archive, with a focus upon woodland dynamics, blanket peat, human impacts, biodiversity and conservation. Following a brief discussion of chronological, climatic, faunal and landscape contexts, the migration, survival and nature of the woodland cover through time is assessed, emphasising its time-transgressiveness and altitudinal variation. While agriculture led to the demise of woodland in lowland areas of the south and east, the spread of blanket peat was especially a phenomenon of the north and west, including the Western and Northern Isles. Almost a quarter of Scotland is covered by blanket peat and the cause(s) of its spread continue(s) to evoke recourse to climatic, topographic, pedogenic, hydrological, biotic or anthropogenic influences, while we remain insufficiently knowledgeable about the timing of the formation processes. Humans have been implicated in vegetational change throughout the Holocene, with prehistoric woodland removal, woodland management, agricultural impacts arising from arable and pastoral activities, potential heathland development and afforestation. The viability of many current vegetation communities remains a concern, in that Scottish data show reductions in plant diversity over the last 400 years, which recent conservation efforts have yet to reverse. Palaeoecological evidence can be used to test whether conservation baselines and restoration targets are appropriate to longer-term ecosystem variability and can help identify when modern conditions have no past analogues.
A diverse millipede (diplopod) fauna has been recovered from the earliest Carboniferous (Tournaisian) Ballagan Formation of the Scottish Borders, discovered by the late Stan Wood. The material is generally fragmentary; however, six different taxa are present based on seven specimens. Only one displays enough characters for formal description and is named Woodesmus sheari Ross, Edgecombe & Clark gen. & sp. nov. The absence of paranota justifies the erection of Woodesmidae fam. nov. within the Archipolypoda. The diverse fauna supports the theory that an apparent lack of terrestrial animal fossils from ‘Romer's Gap’ was due to a lack of collecting and suitable deposits, rather than to low oxygen levels as previously suggested.
Footprints in Time: The Longitudinal Study of Indigenous Children (LSIC) is a national study of 1759 Australian Aboriginal and Torres Strait Islander children living across urban, regional and remote areas of Australia. The study is in its 11th wave of annual data collection, having collected extensive data on topics including birth and early life influences, parental health and well-being, identity, cultural engagement, language use, housing, racism, school engagement and academic achievement, and social and emotional well-being. The current paper reviews a selection of major findings from Footprints in Time relating to the developmental origins of health and disease for Australian Aboriginal and Torres Strait Islander peoples. Opportunities for new researchers to conduct further research utilizing the LSIC data set are also presented.
Water wave overwash of a step by small-steepness, regular incident waves is analysed using a computational fluid dynamics (CFD) model and a mathematical model, in two spatial dimensions. The CFD model is based on the two-phase, incompressible Navier–Stokes equations, and the mathematical model is based on the coupled potential-flow and nonlinear shallow-water theories. The CFD model is shown to predict vortices, breaking and overturning in the region where overwash is generated, and that the overwash develops into fast-travelling bores. The mathematical model is shown to predict bore heights and velocities that agree with the CFD model, despite neglecting the complicated dynamics where the overwash is generated. Evidence is provided to explain the agreement in terms of the underlying agreement of mass and energy fluxes.
Gravitational interactions allow one to investigate the nature of matter in the universe independent of the properties that make it luminous. Much as studies of the dynamics of galaxies and clusters of galaxies have indicated the presence of dark matter, gravitational lensing provides an independent probe of the large scale distribution of dark matter in the universe.
Worldwide, 350 million people suffer from major depression, with the majority of cases occurring in low- and middle-income countries. We examined the patterns, correlates and care-seeking behaviour of adults suffering from major depressive episode (MDE) in China.
A nationwide study recruited 512 891 adults aged 30–79 years from 10 provinces across China during 2004–2008. The 12-month prevalence of MDE was assessed by the Modified Composite International Diagnostic Interview-short form. Logistic regression yielded adjusted odds ratios (ORs) of MDE associated with socio-economic, lifestyle and health-related factors and major stressful life events.
Overall, 0.7% of participants had MDE and a further 2.4% had major depressive symptoms. Stressful life events were strongly associated with MDE [adjusted OR 14.7, 95% confidence interval (CI) 13.7–15.7], with a dose–response relationship with the number of such events experienced. Family conflict had the highest OR for MDE (18.9, 95% CI 16.8–21.2) among the 10 stressful life events. The risk of MDE was also positively associated with rural residency (OR 1.5, 95% CI 1.4–1.7), low income (OR 2.3, 95% CI 2.1–2.4), living alone (OR 2.6, 95% CI 2.3–3.0), smoking (OR 1.4, 95% CI 1.3–1.6) and certain other mental disorders (e.g. anxiety, phobia). Similar, albeit weaker, associations were observed with depressive symptoms. Among those with MDE, about 15% sought medical help or took psychiatric medication, 15% reported having suicidal ideation and 6% reported attempting suicide.
Among Chinese adults, the patterns and correlates of MDE were generally consistent with those observed in the West. The low rates of seeking professional help and treatment highlight the great gap in mental health services in China.
You work for an airline in the operations department. You are asked to reach into the historical company and industry datasets and estimate the expected loading for every flight in the book for the coming year. This ties directly into projected revenue. In the back of your mind, you know that the company's expenses need to be offset by revenue. It doesn't help to be overly optimistic, because projections that are not met carry the risk of future disappointment.
What should you do? Passenger boardings are cyclical, and there is always the risk of a downturn, which could impact the profitability of the entire company. The price of oil and jet fuel is important. Mergers happen every year, changing the competitive landscape and making some routes more efficient.
Time series analysis can certainly help. Just like confidence intervals in statistics (see also the Appendix), there is a band of uncertainty around any of the projections. The expected airline passenger loading is a random variable. Passenger boarding is one of several arenas we will explore with the help of tools in R.
Examining Time Series
We begin with a quick survey of the types of time series we will model, using the quantmod and PerformanceAnalytics packages. First, we define the vector of symbols to be downloaded. The symbols GSPC, VIX, and TNX refer to the S&P 500 index, the CBOE volatility index, and the ten-year Treasury yield, respectively. Use the getSymbols() function to download the time series for the symbols in sym.vec between January 3, 2005 and September 16, 2015. If the quantmod and PerformanceAnalytics packages are not yet installed, they can be installed with the commands install.packages("quantmod", dependencies=TRUE) and install.packages("PerformanceAnalytics", dependencies=TRUE).
> sym.vec <- c("^GSPC", "^VIX")
> getSymbols(sym.vec, from = "2005-01-03", to = "2015-09-16")
[1] "GSPC" "VIX"
The first plot is of the S&P 500, shown in Figure 6.1. We see the peak of the market in early 2007 and then the steep decline as the housing crisis hit. We see the market bottom out in mid-2009 and begin a large rally that lasts through mid-2015. We see minor corrections in 2011 and 2012 due to uncertainty surrounding government debt in the euro zone, and a significant sell-off in mid-2015 due to uncertainty surrounding China's economy.
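A plot like Figure 6.1 can be reproduced with quantmod's charting function. This is a minimal sketch: it assumes the getSymbols() call above has already run and created the xts object GSPC in the workspace, and the theme and title arguments are our own choices.

```r
# Plot the downloaded S&P 500 series (assumes getSymbols("^GSPC", ...)
# has run, creating the xts object GSPC in the workspace)
library(quantmod)
chartSeries(GSPC, theme = "white", name = "S&P 500")
```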
When traveling across the agricultural American Midwest, one can hear AM radio stations broadcast the futures prices of corn, soybeans, wheat, and other commodities every weekday at various times of the day. Iowa and Illinois lead the nation in corn production. Kansas leads the nation in wheat production. Listening to these farm reports is entertaining. Included in the broadcasts are very detailed weather reports. Weather is critical to many producers’ livelihoods. After a few weeks of listening to these broadcasts, we learn that nobody truly knows for sure whether the agricultural market prices will go up or down on a trading day. Hedging the production chain price risk, especially for the farmers, who seasonally grow the crops, is achieved with futures and option securities. Producers very often want to lock in a price for delivering corn at the end of the season, or want to be compensated for a significant reduction in the agricultural price in order to guarantee recovering their fixed costs over the upcoming days and months.
From Chapter 14, we have gained more familiarity with the random walk processes assumed by option models. We continue the option theme from the prior chapter and examine a very popular model for pricing European options. The most famous and widely accepted option valuation model is the Black–Scholes model of 1973 (Black and Scholes, 1973). It revolutionized the pricing and trading of options, which prior to this were priced in rather arbitrary ways. Black and Scholes relied upon stochastic calculus, invented by Itô to address the need for a calculus for random variables as functions over time, like our stock market prices (Itô, 1951). Scholes and Merton were awarded the 1997 Nobel Prize in Economic Sciences for this work; Black had died in 1995. We discuss the Black–Scholes model here in order to complete our tour of financial analytics. We will try to make minimal use of stochastic calculus.
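As a preview, the Black–Scholes price of a European call can be computed in a few lines of R. This is a sketch of the standard closed-form formula; the function name and the example inputs are ours, not from any package.

```r
# Black–Scholes price of a European call option (standard closed form).
# S = spot price, K = strike, r = risk-free rate, sigma = volatility,
# T = time to expiry in years. pnorm() is the standard normal CDF.
black_scholes_call <- function(S, K, r, sigma, T) {
  d1 <- (log(S / K) + (r + sigma^2 / 2) * T) / (sigma * sqrt(T))
  d2 <- d1 - sigma * sqrt(T)
  S * pnorm(d1) - K * exp(-r * T) * pnorm(d2)
}

# An at-the-money one-year call with 25% volatility and a 2% rate
black_scholes_call(S = 100, K = 100, r = 0.02, sigma = 0.25, T = 1)
```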
We saved this more mathematical material for the end of the book because it involves the more complex type of security: options. As mentioned in Chapter 14, options are derivative securities.
In 1994 the Channel Tunnel opened between England and France, allowing high-speed Eurostar trains to whisk passengers from the continent to the United Kingdom and back on a grand scale. What an amazing engineering feat it was for the time (beyond many people's earlier imaginations), yet we take it for granted today. Also in 1994, Grumman Aerospace Corporation, the chief contractor on the Apollo Lunar Module, was acquired by Northrop Corporation to form the new aerospace giant, Northrop Grumman. It was the prime contractor of the newly deployed advanced-technology B-2 stealth bomber. On a much more mundane and personal scale, also in 1994, in a townhouse just outside the City of Chicago, I was performing a tedious daily exercise: looking up daily closing prices each evening in a stack of Investor's Business Daily newspapers for the two stock investments that were about to be purchased. This was not only to find out their running rate of return but also to find out their historical volatility relative to other stocks before entering into the positions. Doing this manual calculation was slow and tedious. The World Wide Web had been introduced to the public in the form of the Mosaic browser the year before. It was not long before Yahoo! was posting stock quotes and historical price charts, as well as technical indicators on the charts, available on demand for free in just a few seconds via the new web browsers.
The advent of spreadsheet software took analysts to a new level of analytical thinking. No longer were live, human-operated calculations limited to a single dimension. Each row or column could represent a time dimension, a production category, a business scenario. And the automated dependency feature made revisions quite easy. Now spreadsheets can be used as a prototype for a more sophisticated and permanent analytical product: the large-scale analytical computer program.
With modern programming languages like R and Python®, a skilled analyst can now design their analytic logic with significantly less effort than before, using resources such as Yahoo! or other free services for historical quotes. It has been said that Python's terse syntax allows for programs with the same functionality as their Java equivalents, yet four times smaller, and we suspect that R is similar. A small financial laboratory can be built on a laptop costing less than $200 in a matter of weeks, simulating multiple market variables as required.
Statistics is a mathematical science concerned with collecting and organizing data and with conducting experiments involving random variables that represent the data. These random variables can represent natural or simulated events. The amazing attribute of statistics is its ability to explain the organization of the data we observe. In this chapter we will cover some basic formulas that lay a foundation for the subsequent analytics framework.
A discussion of statistics is necessary for any treatment of financial analytics. To discuss investments in financial instruments from a quantitative perspective, a certain amount of preliminary background is needed; it provides a higher level of accuracy, and we will not be able to do our job well without it. This background will be conveniently stated in terms of formulas, beginning here and continuing throughout the book. Formulas provide crisp specifications for the computer instructions in R.
We begin with probability with discrete outcomes. After completing the first three sections, the reader is invited to visit the Appendix for a review of the many potential probability distributions and statistical analysis concepts that are used in analytics.
In probability we are concerned with the likelihood of events. An event A is defined such that ∅ ⊆ A ⊆ S, where ∅ is the null space or empty set and S is the sample space, the set of all possible outcomes. We also require that the probability of event A occurring satisfy 0 ≤ P(A) ≤ 1, where the probability of the null space is P(∅) = 0 and the probability of the sample space is P(S) = 1. We define the complement Ac as the set satisfying (1) A ∪ Ac = S and (2) A ∩ Ac = ∅. While they might seem superfluous, these two conditions simply make mathematically rigorous the requirements that (1) an event must either happen or not happen, and (2) an event may not both happen and not happen.
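These definitions can be illustrated in R with a small discrete sample space. The coin-toss example below is a toy of our own construction, not part of the formal development.

```r
# A discrete sample space for a single fair coin toss
S <- c("H", "T")               # sample space: all possible outcomes
p <- c(H = 0.5, T = 0.5)       # probability of each outcome; sums to P(S) = 1

A  <- "H"                      # event A: the toss lands heads
Ac <- setdiff(S, A)            # complement of A, here "T"

p_A  <- sum(p[A])              # P(A)  = 0.5
p_Ac <- sum(p[Ac])             # P(Ac) = 0.5
p_A + p_Ac                     # P(A) + P(Ac) = P(S) = 1
```

Note that A ∪ Ac recovers S and A ∩ Ac is empty, exactly the two conditions defining the complement.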
The data mining and machine learning literature is now awash with scenarios about predicting baseball players' salaries from their prior year's numbers of hits and walks, and about predicting product sales from prices, customer income, and level of advertising. These are amazing and noteworthy stories. They inspire data scientists to continue their cause. The classic examples feature a large two-dimensional array of cases as rows, with the independent variables, also known as stimulus variables, and the predictable variables, also known as response variables, as columns. If we are predicting an athlete's salary, that figure is hand-tuned by the people who negotiate contracts. Better athletic production yields better salaries as athletes are constantly compared to one another. The salary is a figure updated at most once per year. And only a handful of people are involved in setting the athlete's salary or the price of a consumer item. So these represent ideal cases in predictability.
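A prediction of this classic kind can be sketched in R with lm() on simulated data. The data-generating process and coefficients below are invented purely for illustration; they are not drawn from any real salary data set.

```r
# Simulated illustration of the classic salary-prediction setup:
# response (salary) as a function of stimulus variables (hits, walks).
set.seed(1)
n      <- 100
hits   <- rpois(n, lambda = 120)                  # prior-year hits
walks  <- rpois(n, lambda = 40)                   # prior-year walks
salary <- 50 + 3.2 * hits + 1.5 * walks +         # invented linear rule
          rnorm(n, sd = 25)                       # plus noise, in $1,000s

fit <- lm(salary ~ hits + walks)                  # fit the linear model
summary(fit)$coefficients                         # recovered coefficients
```

With noise this small relative to the signal, the fitted coefficients land close to the invented ones, which is precisely the "ideal case in predictability" described above.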
Unfortunately, prediction in the case of financial analytics never turns out to be as accurate as in these sports and marketing areas. There is just more random noise in the financial markets, with prices that are updated every second of every trading day. Thousands of participants are involved. Every security is affected by many other securities. For example, oil prices are affected not only by oil supply and demand and by the volume of trades at each incremental oil price level, but also by interest rates and various foreign exchange rates. Nevertheless, one can try these financial predictions using the same techniques in order to experiment and observe what can be predicted.
The process of attempting prediction yields at least two benefits. On the one hand, it may be possible to predict attributes from combinations of other attributes: in this case, response variables from stimulus variables. If that is the case, we can stand alongside the successes in other areas of data science. On the other hand, prediction may not be possible, or even all that useful; however, the collection exercise, getting all the data into rows and columns of the array, provides observations that can be made in an unsupervised learning sense. And discoveries can be made by applying thresholds or by sorting and filtering the attributes to find maximally performing securities, as we will observe in Chapter 7.