Hamilcar and Hannibal Barca embody a colossal father-son military legacy. Yet their family – the so-called ‘Barcid’ dynasty – has a murky history. Modern scholars have presumed that Hamilcar, the first notable historical figure to bear the name Barkas, received it as a ‘nickname’ meaning ‘lightning’. The rationale is that the name derives from the Phoenician word brq and is thus the equivalent of the Greek epithet Keraunos. There is, however, no evidence supporting this in the classical sources, to which we owe our knowledge of these events exclusively. Furthermore, the name Barca was passed on to Hamilcar’s sons, something suggestive of an inherited family surname. This article submits an alternative to the widely endorsed ‘lightning’ theory. This new perspective explores the possibility that the Barcid dynasty had roots in the city of Barce in Cyrenaica and was a relatively new addition to the Carthaginian aristocracy in the third century BC. Using textual evidence from Polybius, Diodorus and others, this fresh take clarifies other aspects of the Barcid dynasty’s tumultuous history, such as their animosity towards the Carthaginian Council of Elders and their departure to Spain in the 220s.
Acute ischemic stroke may affect women and men differently. We aimed to evaluate sex differences in outcomes of endovascular treatment (EVT) for ischemic stroke due to large vessel occlusion in a population-based study in Alberta, Canada.
Methods and Results:
Over a 3-year period (April 2015–March 2018), 576 patients met the inclusion criteria of our study and constituted the EVT group of our analysis. The medical treatment group of the ESCAPE trial had 150 patients, for a total sample size of 726. We captured outcomes in clinical routine using administrative data and a linked database methodology. The primary outcome of our study was home-time: the number of days that the patient was back at their premorbid living situation, without an increase in the level of care, within 90 days of the index stroke event. In adjusted analysis, EVT was associated with an increase in 90-day home-time of 6.08 days on average (95% CI −2.74 to 14.89, p = 0.177) in women compared with 11.20 days on average (95% CI 1.94 to 20.46, p = 0.018) in men. Further analysis revealed that the association between EVT and 90-day home-time in women was confounded by age and onset-to-treatment time.
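The home-time outcome defined above is easy to compute once residence intervals are known. The following is a minimal sketch with hypothetical dates and a made-up helper name (the study itself derived home-time from linked administrative databases):

```python
from datetime import date

# Hypothetical sketch of the home-time outcome: days spent back in the
# premorbid living situation within 90 days of the index stroke. The study
# derived this from linked administrative data; the helper below is ours.
def home_time_90(index_date, home_intervals):
    """home_intervals: (start, end) date pairs spent at the premorbid residence."""
    start_ord = index_date.toordinal()
    window_end = start_ord + 90
    total = 0
    for start, end in home_intervals:
        s = max(start.toordinal(), start_ord)
        e = min(end.toordinal(), window_end)
        total += max(0, e - s)
    return min(total, 90)          # cap at the 90-day window

# Discharged home two weeks after the stroke, remains home past day 90:
print(home_time_90(date(2017, 1, 1), [(date(2017, 1, 15), date(2017, 6, 1))]))
```

A patient discharged on day 14 who stays home through day 90 accrues 76 days of home-time.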
In this province-wide, population-based study of EVT for large vessel occlusion, we found a nominally smaller but statistically nonsignificant 90-day home-time gain in women compared with men, a difference only partially explained by confounding.
In Chapters 18 and 19, we introduced a statistical formalization of causal effects using potential outcomes, focusing on the estimation of average causal effects and interactions using data from controlled experiments. In practice, logistic, ethical, or financial constraints can make it difficult or impossible to externally assign treatments, and simple estimates of the treatment effect based on differences or regressions can be biased when selection into the treatment and control groups is not random. To estimate effects when there is imbalance and lack of overlap between treatment and control groups, you should include as regression predictors all the confounders that explain this selection. The present chapter discusses methods for causal inference in the presence of systematic pre-treatment differences between treatment and control groups. A key difficulty is that there can be many pre-treatment variables with mismatch, hence the need for adjustment on many variables.
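The adjustment idea described here can be illustrated with a small simulation (all numbers hypothetical): when a confounder drives both selection into treatment and the outcome, the naive difference in means is biased, while comparing within strata of the confounder and averaging recovers the true effect:

```python
import random

random.seed(0)

# Hypothetical simulation: a binary confounder x raises both the chance of
# treatment z and the outcome y, so the naive comparison is biased.
n = 10000
data = []
for _ in range(n):
    x = 1 if random.random() < 0.5 else 0
    p_treat = 0.8 if x else 0.2                      # selection depends on x
    z = 1 if random.random() < p_treat else 0
    y = 2.0 * z + 3.0 * x + random.gauss(0, 1)       # true treatment effect = 2
    data.append((x, z, y))

def mean(v):
    return sum(v) / len(v)

# Naive difference in means ignores the confounder.
naive = mean([y for x, z, y in data if z]) - mean([y for x, z, y in data if not z])

# Adjusted: compare within strata of x, then average over the distribution of x.
adjusted = 0.0
for xv in (0, 1):
    treated = [y for x, z, y in data if x == xv and z]
    control = [y for x, z, y in data if x == xv and not z]
    weight = sum(1 for x, _, _ in data if x == xv) / n
    adjusted += weight * (mean(treated) - mean(control))

print(round(naive, 2), round(adjusted, 2))
```

With these settings the naive estimate overstates the true effect of 2 by roughly 1.8, while the stratified estimate lands near 2; regression adjustment on the confounder plays the same role when there are many pre-treatment variables.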
Most textbooks on regression focus on theory and the simplest of examples. Real statistical problems, however, are complex and subtle. This is not a book about the theory of regression; it is about using regression to solve real problems of comparison, estimation, prediction, and causal inference. Unlike other books, it focuses on practical issues such as sample size and missing data and on a wide range of goals and techniques. It jumps right into methods and computer code you can use immediately. Real examples and real stories from the authors' experience demonstrate what regression can do and its limitations, with practical advice for understanding assumptions and implementing methods for experiments and observational studies. The book makes a smooth transition to logistic regression and generalized linear models. The emphasis is on computation in R and Stan rather than derivations, with code available online. Graphics and presentation aid understanding of the models and model fitting.
Statistical inference can be formulated as a set of operations on data that yield estimates and uncertainty statements about predictions and parameters of some underlying process or population. From a mathematical standpoint, these probabilistic uncertainty statements are derived based on some assumed probability model for observed data. In this chapter, we sketch the basics of probability modeling, estimation, bias and variance, and the interpretation of statistical inferences and statistical errors in applied work. We introduce the theme of uncertainty in statistical inference and discuss how it is a mistake to use hypothesis tests or statistical significance to attribute certainty from noisy data.
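The notions of bias and variance mentioned here can be made concrete with a standard simulation (our example, not taken from the chapter): drawing many small samples from a population with known variance shows that the plug-in variance estimator dividing by n is biased downward, while dividing by n − 1 is not:

```python
import random

random.seed(1)

# Standard illustration of estimator bias: the population is Normal(0, sd = 2),
# so the true variance is 4. Compare two estimators over many repeated samples.
true_var = 4.0
n, reps = 5, 20000
sum_biased = sum_unbiased = 0.0
for _ in range(reps):
    sample = [random.gauss(0, 2) for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    sum_biased += ss / n            # plug-in estimator: expectation (n-1)/n * 4
    sum_unbiased += ss / (n - 1)    # sample variance: expectation 4

avg_biased = sum_biased / reps
avg_unbiased = sum_unbiased / reps
print(round(avg_biased, 2), round(avg_unbiased, 2))
```

Averaged over 20 000 replications, the 1/n estimator centers near (n−1)/n × 4 = 3.2 rather than 4, which is exactly what "bias" means here: a systematic error that does not vanish with repeated sampling.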
This introductory chapter lays out the key challenges of statistical inference in general and regression modeling in particular. We present a series of applied examples to show how complex and subtle regression can be, and why a book-length treatment is needed, not just on the mathematics of regression modeling but on how to apply and understand these methods.
This chapter is a departure from the rest of the book, which focuses on data analysis: building, fitting, understanding, and evaluating models fit to existing data. In the present chapter, we consider the design of studies, in particular asking what sample size is required to estimate a quantity of interest to some desired precision. We focus on the paradigmatic inferential tasks of estimating population averages, proportions, and comparisons in sample surveys, or estimating treatment effects in experiments and observational studies. However, the general principles apply to other inferential goals such as prediction and data reduction. We present the relevant algebra and formulas for sample size decisions, demonstrating with a range of examples, but we also criticize the standard design framework of “statistical power,” which, when applied naively, yields unrealistic expectations of success and can lead to the design of ineffective, noisy studies. As we frame it, the goal of design is not to attain statistical significance with some high probability, but rather to have a sense–before and after data have been collected–of what can realistically be learned from statistical analysis of an empirical study.
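The standard-error algebra behind such sample size decisions can be sketched as follows (these are the usual survey-sampling formulas; the function names are our own, not the chapter's):

```python
import math

# Sample size from a target standard error (assumed standard formulas).

def n_for_proportion_se(target_se, p=0.5):
    """Smallest n with se = sqrt(p(1-p)/n) <= target_se.

    p = 0.5 is the conservative worst case for a proportion."""
    return math.ceil(p * (1 - p) / target_se ** 2)

def n_per_group_for_comparison(target_se_diff, sd):
    """n per group so the se of a difference in means, sd*sqrt(2/n),
    is at most target_se_diff (equal group sizes, common sd assumed)."""
    return math.ceil(2 * sd ** 2 / target_se_diff ** 2)

print(n_for_proportion_se(0.05))                  # the classic n = 100
print(n_per_group_for_comparison(0.1, sd=1.0))
```

The first call recovers the familiar rule that a survey of 100 respondents estimates a proportion to within a standard error of about 5 percentage points; comparisons are noisier, which is why the second formula carries the factor of 2.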
So far, we have been interpreting regressions predictively: given the values of several inputs, the fitted model allows us to predict y, typically considering the n data points as a simple random sample from a hypothetical infinite “superpopulation” or probability distribution. Then we can make comparisons across different combinations of values for these inputs. This section of the book considers causal inference, which concerns what would happen to an outcome y as a result of a treatment, intervention, or exposure. This chapter introduces the notation and ideas of causal inference in the context of randomized experiments, which allow clean inference for average causal effects and serve as a starting point for understanding the tools and challenges of causal estimation.
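A tiny simulation (hypothetical values) shows why randomization permits clean inference: with random assignment, the difference in means is an unbiased estimate of the average causal effect, and its standard error combines the within-group sampling variances:

```python
import math
import random

random.seed(2)

# Hypothetical randomized experiment: 2000 units, half assigned to treatment
# completely at random, constant true effect of 1.5.
n = 2000
z = [1] * (n // 2) + [0] * (n // 2)
random.shuffle(z)                                    # random assignment
y = [5.0 + 1.5 * zi + random.gauss(0, 2) for zi in z]

treat = [yi for yi, zi in zip(y, z) if zi]
control = [yi for yi, zi in zip(y, z) if not zi]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

# Difference in means estimates the average causal effect; its standard
# error combines the sampling variances of the two group means.
ate_hat = mean(treat) - mean(control)
se = math.sqrt(var(treat) / len(treat) + var(control) / len(control))
print(round(ate_hat, 2), round(se, 2))
```

Because assignment is random, no adjustment for pre-treatment variables is needed for unbiasedness; the contrast with the observational setting of the previous example is the point of this part of the book.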
With logistic as with linear regression, fitting is only part of the story. In this chapter we develop more advanced graphics to visualize data and fitted logistic regressions with one or more predictors. We discuss the challenges of interpreting coefficients in the presence of interactions and the use of linear transformations to aid understanding. We show how to make probabilistic predictions and how to average these predictions to obtain summaries–average predictive comparisons–that can be more interpretable than logistic regression coefficients. We discuss the evaluation of fitted models using binned residual plots and predictive errors, and we present all these tools in the context of a worked example. The chapter concludes with a discussion of the use of Bayesian inference and prior distributions to resolve a challenge of inference that arises with sparse discrete data, which again we illustrate with an applied example.
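As a rough sketch of an average predictive comparison (the book works in R and Stan; this is a stdlib-only Python toy on simulated data): fit a one-predictor logistic regression by gradient ascent, then average the change in predicted probability for a one-unit increase in the predictor over the observed data:

```python
import math
import random

random.seed(3)

def inv_logit(u):
    return 1.0 / (1.0 + math.exp(-u))

# Simulated data (hypothetical): Pr(y = 1) = inv_logit(-1 + 0.8 x).
xs = [random.gauss(0, 1.5) for _ in range(500)]
ys = [1 if random.random() < inv_logit(-1 + 0.8 * x) else 0 for x in xs]

# Fit intercept a and slope b by gradient ascent on the log likelihood.
a = b = 0.0
lr = 0.1
for _ in range(1500):
    ps = [inv_logit(a + b * x) for x in xs]
    ga = sum(y - p for y, p in zip(ys, ps)) / len(xs)
    gb = sum((y - p) * x for x, y, p in zip(xs, ys, ps)) / len(xs)
    a, b = a + lr * ga, b + lr * gb

# Average predictive comparison: mean change in Pr(y = 1) for a one-unit
# increase in x, averaged over the observed distribution of x.
apc = sum(inv_logit(a + b * (x + 1)) - inv_logit(a + b * x) for x in xs) / len(xs)
print(round(a, 2), round(b, 2), round(apc, 3))
```

Unlike the raw coefficient b, which lives on the logit scale, the average predictive comparison is on the probability scale, which is what makes it the more interpretable summary the chapter describes.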
Going forward, there are various ways in which we find it useful in applied work to push against the boundaries of linear regression and generalized linear models. Consider this concluding chapter as an introduction to various methods that we plan to discuss in more detail in the sequel to this book.
It is not always best to fit a regression using data in their raw form. In this chapter we start by discussing linear transformations for standardizing predictors and outcomes in a regression; this connects to “regression to the mean,” discussed earlier in Chapter 6, and to how that phenomenon relates to linear transformations and correlation. We then discuss logarithmic and other transformations in the context of a series of examples in which input and outcome variables are transformed and combined in various ways in order to get more understandable models and better predictions. This leads us to more general thoughts about building and comparing regression models in an applied context, which we develop in the context of an additional example.
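Two of the transformations discussed here can be sketched with hypothetical data: log-transforming a positively skewed outcome, and standardizing a predictor so that its slope is per standard deviation rather than per raw unit:

```python
import math
import statistics

# Hypothetical data: earnings (positively skewed) and heights in inches.
earnings = [12000, 25000, 31000, 47000, 52000, 80000, 150000]
heights = [61, 64, 66, 68, 70, 72, 75]

# Log transformation turns multiplicative comparisons into additive ones.
log_earnings = [math.log(e) for e in earnings]

# Standardizing a predictor: subtract the mean, divide by the sd.
mean_h = statistics.mean(heights)
sd_h = statistics.stdev(heights)
z_height = [(h - mean_h) / sd_h for h in heights]

def slope(x, y):
    """Least-squares slope of y on x."""
    mx, my = statistics.mean(x), statistics.mean(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x)

b_raw = slope(heights, log_earnings)   # change in log earnings per inch
b_std = slope(z_height, log_earnings)  # change per standard deviation of height
# Same fit, different scale: b_std = b_raw * sd_h.
print(round(b_raw, 3), round(b_std, 3))
```

The two slopes describe the same fitted line; standardization only changes the units of the comparison, which is why it helps when predictors on very different scales appear in one model.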
Simple methods from introductory mathematics and statistics have three important roles in regression modeling. First, linear algebra and simple probability distributions are the building blocks for elaborate models. Second, it is useful to understand the basic ideas of inference separately from the details of particular classes of model. Third, it is often useful in practice to construct quick estimates and comparisons for small parts of a problem–before fitting an elaborate model, or in understanding the output from such a model. This chapter provides a quick review of some of these basic ideas.
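The kind of quick estimate the chapter reviews can be done in a few lines (numbers hypothetical): a comparison of two group means, with the standard error of the difference combining the two groups' standard errors as sqrt(se1^2 + se2^2):

```python
import math
import statistics

# Quick back-of-the-envelope comparison of two small groups (made-up data).
group_a = [4.1, 5.3, 3.8, 6.0, 5.2, 4.7, 5.5, 4.9]
group_b = [3.2, 4.0, 3.6, 4.4, 3.1, 3.9, 4.2, 3.5]

def se_mean(v):
    """Standard error of a sample mean: sd / sqrt(n)."""
    return statistics.stdev(v) / math.sqrt(len(v))

diff = statistics.mean(group_a) - statistics.mean(group_b)
se_diff = math.sqrt(se_mean(group_a) ** 2 + se_mean(group_b) ** 2)
ci = (diff - 2 * se_diff, diff + 2 * se_diff)   # rough 95% interval: +/- 2 se
print(round(diff, 2), round(se_diff, 2), [round(c, 2) for c in ci])
```

Estimates like these are useful before fitting an elaborate model, and afterwards as a sanity check on its output.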