Business history and theory reflect a tension between public and private conceptions of the corporation. This is embodied in the famous Berle-Dodd debate, which provides the basis for contemporary clashes between “different visions of corporatism,” such as the conflict between shareholder primacy and stakeholder-centered versions of the corporation. This chapter examines a number of recent developments suggesting that the pendulum, which swung so clearly in favour of a private conception of the corporation from the 1980s onwards, is in the process of changing direction. The chapter provides two central insights. The first is that there is not one problem, but multiple problems in corporate law, and that different problems may come to the forefront at different times. The second insight is that corporate governance techniques (such as performance-based pay), which are designed to ameliorate one problem in corporate law, such as corporate performance, can at the same time exacerbate other problems involving the social impact of corporations.
Background: Tracheal aspirate bacterial cultures are routinely collected in mechanically ventilated children for the evaluation of ventilator-associated infections (VAIs). However, frequent bacterial colonization of endotracheal and tracheostomy tubes contributes to the marginal performance characteristics of the test for diagnosing VAI. Published literature characterizing drivers of culture collection and the predictive value of positive cultures is limited. Methods: This single-center, retrospective cohort study included children admitted to the pediatric intensive care unit who were receiving mechanical ventilation for at least 48 hours and had 1 or more semiquantitative tracheal aspirate cultures collected between September 1, 2019, and August 31, 2020. Indications for culture collection were determined through medical record review and included fever, hypothermia, tracheal secretion changes, radiographic pneumonia, increased oxygen requirement, and/or increased positive end-expiratory pressure (PEEP). A positive culture was defined as moderate or heavy growth of a noncommensal bacterial organism. A purulent Gram stain was defined as detection of moderate or many white blood cells. Diagnosis of VAI was based on treating-clinician documentation and was ascertained through medical record review. Logistic regression accounting for clustering by patient was performed to estimate the association between indications for culture collection and (1) culture positivity, (2) purulent Gram stain, and (3) diagnosis of VAI. Results: In total, 625 tracheal aspirate cultures were performed in 261 unique patients. Common indications for culture collection included isolated fever or hypothermia (n = 124, 20%), fever with an increase in oxygen requirement or PEEP (n = 71, 11%), isolated increase in oxygen requirement or PEEP (n = 67, 11%), or isolated secretion change (n = 54, 9%) (Figure 1). Overall, 230 cultures (37%) were positive and 218 (35%) Gram stains were purulent. 
There were no associations between culture indications and a positive culture. The presence of isolated fever was negatively associated with a purulent Gram stain (odds ratio [OR], 0.49; 95% CI, 0.30–0.81; P = .005); otherwise, there were no associations between indication and purulent Gram stain. Finally, in a multivariable model, the odds of VAI diagnosis increased with both the number of indications for culture collection and purulent Gram stain, but not with positive culture (Figure 2). Conclusions: The number and type of clinical signs were not associated with tracheal aspirate culture positivity or purulence on Gram stain, but they were associated with a clinical diagnosis of VAI. These findings suggest that positive tracheal aspirate cultures may not aid clinicians in the diagnosis of VAI, and they highlight the opportunity for improved diagnostic stewardship.
Antisaccade tasks can be used to index cognitive control processes, such as attention, behavioral inhibition, working memory, and goal maintenance, in people with brain disorders. Though diagnoses of schizophrenia (SZ), schizoaffective disorder (SAD), and bipolar I disorder with psychosis (BDP) are typically considered to be distinct entities, previous work shows patterns of cognitive deficits differing in degree, rather than in kind, across these syndromes.
Large samples of individuals with psychotic disorders were recruited through the Bipolar-Schizophrenia Network on Intermediate Phenotypes 2 (B-SNIP2) study. Anti- and pro-saccade task performances were evaluated in 189 people with SZ, 185 people with SAD, 96 people with BDP, and 279 healthy comparison participants. Logistic functions were fitted to each group's antisaccade speed-performance tradeoff patterns.
Psychosis groups had higher antisaccade error rates than the healthy group, with SZ and SAD participants committing 2 times as many errors, and BDP participants committing 1.5 times as many errors. Latencies on correctly performed antisaccade trials in SZ and SAD were longer than in healthy participants, although error trial latencies were preserved. Parameters of speed-performance tradeoff functions indicated that compared to the healthy group, SZ and SAD groups had optimal performance characterized by more errors, as well as less benefit from prolonged response latencies. Prosaccade metrics did not differ between groups.
With basic prosaccade mechanisms intact, the higher speed-performance tradeoff cost for antisaccade performance in psychosis cases indicates a deficit that is specific to the higher-order cognitive aspects of saccade generation.
Clinical trials are a fundamental tool in evaluating the safety and efficacy of new drugs, medical devices, and health system interventions. Clinical trial visits generally involve eligibility assessment, enrollment, intervention administration, data collection, and follow-up, with many of these steps performed during face-to-face visits between participants and the investigative team. Social distancing, which emerged as one of the mainstay strategies for reducing the spread of SARS-CoV-2, has presented a challenge to the traditional model of clinical trial conduct, causing many research teams to halt all in-person contacts except for life-saving research. Nonetheless, clinical research has continued during the pandemic because study teams adapted quickly, turning to virtual visits and other similar methods to complete critical research activities. The purpose of this special communication is to document this rapid transition to virtual methodologies at Clinical and Translational Science Awards hubs and highlight important considerations for future development. Looking beyond the pandemic, we envision that a hybrid approach, which implements remote activities when feasible but also maintains in-person activities as necessary, will be adopted more widely for clinical trials. There will always be a need for in-person aspects of clinical research, but future study designs will need to incorporate remote capabilities.
In Chapters 18 and 19, we introduced a statistical formalization of causal effects using potential outcomes, focusing on the estimation of average causal effects and interactions using data from controlled experiments. In practice, logistic, ethical, or financial constraints can make it difficult or impossible to externally assign treatments, and simple estimates of the treatment effect based on differences or regressions can be biased when selection into treatment and control group is not random. To estimate effects when there is imbalance and lack of overlap between treatment and control groups, you should include as regression predictors all the confounders that explain this selection. The present chapter discusses methods for causal inference in the presence of systematic pre-treatment differences between treatment and control groups. A key difficulty is that there can be many pre-treatment variables with mismatch, hence the need for adjustment on many variables.
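The adjustment strategy described above can be illustrated with a minimal simulation. This is a schematic Python/NumPy sketch (the book itself works in R and Stan); the data-generating values, variable names, and the logistic selection rule are invented for illustration. Because treatment assignment depends on the confounder `x`, the raw difference in means is biased, while a regression that includes `x` as a predictor recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                       # pre-treatment confounder
p = 1 / (1 + np.exp(-2 * x))                 # selection into treatment depends on x
z = rng.binomial(1, p)                       # non-random treatment assignment
y = 1.0 * z + 2.0 * x + rng.normal(size=n)   # true treatment effect is 1.0

# Naive estimate: simple difference in means, ignoring the confounder
naive = y[z == 1].mean() - y[z == 0].mean()

# Adjusted estimate: include the confounder as a regression predictor
X = np.column_stack([np.ones(n), z, x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
adjusted = beta[1]
```

Here `naive` is badly biased upward (treated units tend to have higher `x`, which also raises `y`), while `adjusted` is close to the true effect of 1.0. Real observational studies typically require adjusting for many such variables, which is exactly the difficulty the chapter takes up.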
Most textbooks on regression focus on theory and the simplest of examples. Real statistical problems, however, are complex and subtle. This is not a book about the theory of regression. It is about using regression to solve real problems of comparison, estimation, prediction, and causal inference. Unlike other books, it focuses on practical issues such as sample size and missing data and a wide range of goals and techniques. It jumps right in to methods and computer code you can use immediately. Real examples and real stories from the authors' experience demonstrate what regression can do and where its limitations lie, with practical advice for understanding assumptions and implementing methods for experiments and observational studies. The book makes a smooth transition to logistic regression and generalized linear models. The emphasis is on computation in R and Stan rather than derivations, with code available online. Graphics and presentation aid understanding of the models and model fitting.
Statistical inference can be formulated as a set of operations on data that yield estimates and uncertainty statements about predictions and parameters of some underlying process or population. From a mathematical standpoint, these probabilistic uncertainty statements are derived based on some assumed probability model for observed data. In this chapter, we sketch the basics of probability modeling, estimation, bias and variance, and the interpretation of statistical inferences and statistical errors in applied work. We introduce the theme of uncertainty in statistical inference and discuss how it is a mistake to use hypothesis tests or statistical significance to claim certainty from noisy data.
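The bias and variance ideas mentioned above can be made concrete with a small simulation. This is an illustrative Python/NumPy sketch (not from the book, which uses R and Stan); the parameter values are arbitrary. Repeatedly drawing samples and computing the sample mean shows that the estimator is unbiased and that its spread matches the theoretical standard error σ/√n:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 5.0, 2.0, 100

# Sampling distribution of the sample mean over 5000 replications
estimates = np.array([rng.normal(mu, sigma, n).mean() for _ in range(5000)])

bias = estimates.mean() - mu        # ~0: the sample mean is unbiased
empirical_se = estimates.std()      # should be close to sigma / sqrt(n) = 0.2
theoretical_se = sigma / np.sqrt(n)
```

Simulations like this make "uncertainty statements" tangible: the standard error is not an abstraction but the literal standard deviation of the estimate across hypothetical replications of the study.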
This introductory chapter lays out the key challenges of statistical inference in general and regression modeling in particular. We present a series of applied examples to show how complex and subtle regression can be, and why a book-length treatment is needed, not just on the mathematics of regression modeling but on how to apply and understand these methods.
This chapter is a departure from the rest of the book, which focuses on data analysis: building, fitting, understanding, and evaluating models fit to existing data. In the present chapter, we consider the design of studies, in particular asking the question of what sample size is required to estimate a quantity of interest to some desired precision. We focus on the paradigmatic inferential tasks of estimating population averages, proportions, and comparisons in sample surveys, or estimating treatment effects in experiments and observational studies. However, the general principles apply for other inferential goals such as prediction and data reduction. We present the relevant algebra and formulas for sample size decisions and demonstrate them with a range of examples, but we also criticize the standard design framework of "statistical power," which, when applied naively, yields unrealistic expectations of success and can lead to the design of ineffective, noisy studies. As we frame it, the goal of design is not to attain statistical significance with some high probability, but rather to have a sense–before and after data have been collected–about what can realistically be learned from statistical analysis of an empirical study.
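One of the standard sample size formulas alluded to above is the one for estimating a proportion to a desired precision: n = z² p(1−p)/m², where m is the desired half-width of a 95% interval and p = 0.5 is the conservative worst case. The following is a minimal Python sketch of this textbook formula (the function name and defaults are my own, not the book's):

```python
import math

def sample_size_for_proportion(margin, p=0.5, z=1.96):
    """Sample size so a ~95% interval for a proportion has half-width <= margin.

    p = 0.5 is the conservative (worst-case) choice, since p*(1-p) is
    maximized there.
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# e.g., to estimate a proportion to within +/- 3 percentage points:
n_needed = sample_size_for_proportion(0.03)   # about 1068 respondents
```

Note that this formula answers only the precision question; it says nothing about whether the resulting estimate will be practically informative, which is the deeper design issue the chapter raises against naive power calculations.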
So far, we have been interpreting regressions predictively: given the values of several inputs, the fitted model allows us to predict y, typically considering the n data points as a simple random sample from a hypothetical infinite “superpopulation” or probability distribution. Then we can make comparisons across different combinations of values for these inputs. This section of the book considers causal inference, which concerns what would happen to an outcome y as a result of a treatment, intervention, or exposure. This chapter introduces the notation and ideas of causal inference in the context of randomized experiments, which allow clean inference for average causal effects and serve as a starting point for understanding the tools and challenges of causal estimation.
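The "clean inference" that randomization buys can be shown in a few lines. This is a schematic Python/NumPy sketch with invented parameter values (the book's own examples are in R): because treatment is assigned at random, the simple difference in means is an unbiased estimate of the average causal effect, and its standard error follows from the two group variances:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
z = rng.binomial(1, 0.5, n)          # randomized treatment assignment
y = 0.5 * z + rng.normal(size=n)     # true average causal effect is 0.5

y1, y0 = y[z == 1], y[z == 0]
ate_hat = y1.mean() - y0.mean()      # unbiased under randomization
se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
```

Contrast this with the observational setting: here no adjustment is needed, because randomization guarantees that the treatment indicator is independent of all pre-treatment variables, observed or not.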
With logistic as with linear regression, fitting is only part of the story. In this chapter we develop more advanced graphics to visualize data and fitted logistic regressions with one or more predictors. We discuss the challenges of interpreting coefficients in the presence of interactions and the use of linear transformations to aid understanding. We show how to make probabilistic predictions and how to average these predictions to obtain summaries–average predictive comparisons–that can be more interpretable than logistic regression coefficients. We discuss the evaluation of fitted models using binned residual plots and predictive errors, and we present all these tools in the context of a worked example. The chapter concludes with a discussion of the use of Bayesian inference and prior distributions to resolve a challenge of inference that arises with sparse discrete data, which again we illustrate with an applied example.
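The average predictive comparison mentioned above can be sketched in a few lines. This is an illustrative Python/NumPy example, not the book's R code; the "fitted" coefficients and the predictor are hypothetical stand-ins. For a binary input, the average predictive comparison is the predicted probability difference from flipping that input, averaged over the observed values of the other predictors:

```python
import numpy as np

def invlogit(u):
    return 1 / (1 + np.exp(-u))

rng = np.random.default_rng(3)
x = rng.normal(size=1000)        # hypothetical continuous predictor

# Hypothetical fitted logistic coefficients: intercept, binary input z, predictor x
a, b_z, b_x = -0.5, 1.0, 0.8

# Average predictive comparison for z: average, over the data, of the
# difference in predicted probability when z flips from 0 to 1
apc = np.mean(invlogit(a + b_z + b_x * x) - invlogit(a + b_x * x))
```

Unlike the raw coefficient `b_z` (which lives on the logit scale), `apc` is directly interpretable as an average change in probability, which is why such summaries can be more useful than the coefficients themselves.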
Going forward, there are various ways in which we find it useful in applied work to push against the boundaries of linear regression and generalized linear models. Consider this concluding chapter as an introduction to various methods that we plan to discuss in more detail in the sequel to this book.
It is not always best to fit a regression using data in their raw form. In this chapter we start by discussing linear transformations for standardizing predictors and outcomes in a regression, which connects to “regression to the mean,” earlier discussed in Chapter 6, and how it relates to linear transformations and correlation. We then discuss logarithmic and other transformations in the context of a series of examples in which input and outcome variables are transformed and combined in various ways in order to get more understandable models and better predictions. This leads us to more general thoughts about building and comparing regression models in an applied context, which we develop in the context of an additional example.
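A small sketch can show why logarithmic transformations yield more understandable models for multiplicative data. This is a schematic Python/NumPy example with invented parameters (the book works in R): when y is generated multiplicatively as a power of x, regressing log y on log x turns the relationship into a straight line, and the slope is directly interpretable as an elasticity:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.uniform(1, 100, n)
# Multiplicative data: y = 2 * x^0.6 * noise, which is linear on the log scale
y = 2 * x**0.6 * np.exp(rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), np.log(x)])
b = np.linalg.lstsq(X, np.log(y), rcond=None)[0]
# b[1] estimates the exponent: a 1% change in x predicts ~0.6% change in y
```

Fitting y on x directly would force a straight line onto a curved relationship; the log-log model both fits better and gives coefficients with a natural percentage interpretation.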
Simple methods from introductory mathematics and statistics have three important roles in regression modeling. First, linear algebra and simple probability distributions are the building blocks for elaborate models. Second, it is useful to understand the basic ideas of inference separately from the details of particular classes of model. Third, it is often useful in practice to construct quick estimates and comparisons for small parts of a problem–before fitting an elaborate model, or in understanding the output from such a model. This chapter provides a quick review of some of these basic ideas.
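The third role above, quick estimates for small parts of a problem, is worth a concrete example. This is a minimal Python sketch of textbook formulas (function names and the survey numbers are my own, for illustration): the standard error of a proportion, and of a difference between two independent proportions, can be computed by hand before any model is fit:

```python
import math

def se_proportion(p, n):
    """Standard error of a sample proportion."""
    return math.sqrt(p * (1 - p) / n)

def se_difference(p1, n1, p2, n2):
    """Quick standard error for a difference of independent proportions."""
    return math.sqrt(se_proportion(p1, n1)**2 + se_proportion(p2, n2)**2)

# e.g., 50% vs 40% support in two independent surveys of 500 respondents each:
diff = 0.50 - 0.40
se = se_difference(0.50, 500, 0.40, 500)   # roughly 0.03, so the gap is ~3 se
```

A back-of-the-envelope calculation like this often settles whether an observed comparison is worth modeling at all, which is exactly the kind of quick check the chapter recommends before fitting anything elaborate.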