Cognitive impairment is a core feature of psychotic disorders, but the profile of impairment across adulthood, particularly in African-American populations, remains unclear.
Using cross-sectional data from a case–control study of African-American adults with affective (n = 59) and nonaffective (n = 68) psychotic disorders, we examined cognitive functioning between early and middle adulthood (ages 20–60) on measures of general cognitive ability, language, abstract reasoning, processing speed, executive function, verbal memory, and working memory.
Both affective and nonaffective psychosis patients showed substantial and widespread cognitive impairments. However, comparison of cognitive functioning between controls and psychosis groups throughout early (ages 20–40) and middle (ages 40–60) adulthood also revealed age-associated group differences. During early adulthood, the nonaffective psychosis group showed increasing impairments with age on measures of general cognitive ability and executive function, while the affective psychosis group showed increasing impairment on a measure of language ability. Impairments on other cognitive measures remained mostly stable, although decreasing impairments on measures of processing speed, memory, and working memory were also observed.
These findings suggest similarities, but also differences in the profile of cognitive dysfunction in adults with affective and nonaffective psychotic disorders. Both affective and nonaffective patients showed substantial and relatively stable impairments across adulthood. The nonaffective group also showed increasing impairments with age in general and executive functions, and the affective group showed an increasing impairment in verbal functions, possibly suggesting different underlying etiopathogenic mechanisms.
Making replication studies widely conducted and published requires new incentives. Academic awards can provide such incentives by highlighting the best and most important replications. The Organization for Human Brain Mapping (OHBM) has led such efforts by recently introducing the OHBM Replication Award. Other communities can adopt this approach to promote replications and reduce career cost for researchers performing them.
Hill (Twin Research and Human Genetics, Vol. 21, 2018, 84–88) presented a critique of our recently published paper in Cell Reports entitled ‘Large-Scale Cognitive GWAS Meta-Analysis Reveals Tissue-Specific Neural Expression and Potential Nootropic Drug Targets’ (Lam et al., Cell Reports, Vol. 21, 2017, 2597–2613). Specifically, Hill offered several interrelated comments suggesting potential problems with our use of a new analytic method called Multi-Trait Analysis of GWAS (MTAG) (Turley et al., Nature Genetics, Vol. 50, 2018, 229–237). In this brief article, we respond to each of these concerns. Using empirical data, we conclude that our MTAG results do not suffer from ‘inflation in the FDR [false discovery rate]’, as suggested by Hill (Twin Research and Human Genetics, Vol. 21, 2018, 84–88), and are not ‘more relevant to the genetic contributions to education than they are to the genetic contributions to intelligence’.
Although meta-analytic neuroimaging studies demonstrate a relative lack of specificity in the brain, this evidence may be the result of limits inherent to these types of studies. From this perspective, we review recent findings that suggest that brain function is most appropriately categorized according to the computational capacity of each brain system, rather than the specific task states that elicit its activity.
An opportunity cost model of effort requires flexible integration of valuation and self-control systems. Reciprocal connections between these networks and brainstem neuromodulatory systems are likely to provide the signals that affect subsequent persistence or failure when faced with effort challenges. The interaction of these systems should be taken into account to strengthen a normative neural model of effort.
The goal of an fMRI data analysis is to analyze each voxel's time series to see whether the BOLD signal changes in response to some manipulation. For example, if a stimulus was repeatedly presented to a subject in a blocked fashion, following the trend shown in the red line in the top panel of Figure 5.1, we would search for voxel time series that match this pattern, such as the BOLD signal shown in blue. The tool used to fit and detect this variation is the general linear model (GLM), where the BOLD time series plays the role of the dependent variable, and the independent variables in the model reflect the expected BOLD stimulus timecourses. Observe, though, that the square wave predictor in red doesn't follow the BOLD data very well, due to the sluggish response of the physiology. This leads to one major focus of this chapter: using our understanding of the BOLD response to create GLM predictors that will model the BOLD signal as accurately as possible. The other focus is modeling and accounting for BOLD noise and other sources of variation in fMRI time series.
Throughout this chapter the models being discussed will refer to modeling the BOLD signal in a single voxel in the brain. Such a voxel-by-voxel approach is known as a mass univariate data analysis, in contrast to a multivariate approach (see Chapters 8 and 9 for uses of multivariate models).
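The predictor construction described above can be sketched in a few lines: a stimulus boxcar is convolved with a hemodynamic response function so that the resulting predictor rises and falls sluggishly, like the measured BOLD signal. This is a minimal illustration rather than any package's implementation; the double-gamma HRF parameters below (peak near 6 s, undershoot near 16 s, 1:6 undershoot ratio) are common defaults assumed here for demonstration.

```python
import numpy as np
from math import gamma

def hrf(t, peak=6.0, under=16.0, ratio=1 / 6.0):
    # Double-gamma HRF: a positive gamma density for the peak minus a
    # scaled gamma density for the post-stimulus undershoot.
    g = lambda t, a: t ** (a - 1) * np.exp(-t) / gamma(a)
    return g(t, peak) - ratio * g(t, under)

t = np.arange(0, 32)              # 32 s of HRF, sampled at 1 s
stimulus = np.zeros(200)          # a 200 s experiment
stimulus[20:40] = 1               # one 20 s stimulus block
stimulus[100:120] = 1             # a second block

# The GLM predictor: the stimulus boxcar convolved with the HRF,
# trimmed back to the length of the time series.
predictor = np.convolve(stimulus, hrf(t))[: len(stimulus)]
```

Unlike the raw boxcar, this predictor lags the stimulus onset and decays gradually after each block ends, which is why it fits measured BOLD data far better.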
The amount of computation that is performed, and data that are produced, in the process of fMRI research can be quite astounding. For a laboratory with multiple researchers, it becomes critical to ensure that a common scheme is used to organize the data; for example, when a student leaves a laboratory, the PI may still need to determine which data were used for a particular analysis reported in a paper in order to perform additional analyses. In this appendix, we discuss some practices that help researchers meet the computational needs of fMRI research and keep the data deluge under control, particularly as they move toward developing a research group or laboratory with multiple researchers performing data analysis.
Computing for fMRI analysis
The power of today's computers means that almost all of the data analysis methods discussed in this book can be performed on a standard desktop machine. Given this, one model for organization of a laboratory is what we might call “just a bunch of workstations” (JBOW). Under this model, each member of the research group has his or her own workstation on which to perform analyses. This model has the benefit of requiring little in the way of specialized hardware, system administration, or user training. Thus, one can get started very quickly with analysis.
One of the oldest debates in the history of neuroscience centers on the localization of function in the brain; that is, whether specific mental functions are localized to specific brain regions or instead rely more diffusely upon the entire brain (Finger, 1994). The concept of localization first arose from work by Franz Gall and the phrenologists, who attempted to localize mental functions to specific brain regions based on the shape of the skull. Although Gall was an outstanding neuroscientist (Zola-Morgan, 1995), he was wrong in his assumption about how the skull relates to the brain, and phrenology was in the end taken over by charlatans. In the early twentieth century, researchers such as Karl Lashley argued against localization of function, on the basis of research showing that cortical lesions in rats had relatively global effects on behavior. However, across the twentieth century the pendulum shifted toward a localizationist view, such that most neuroscientists now agree that there is at least some degree of localization of mental function. At the same time, the function of each of these regions must be integrated in order to achieve coherent mental function and behavior. These concepts have been referred to as functional specialization and functional integration, respectively (Friston, 1994).
Today, nearly all neuroimaging studies are centered on functional localization. However, there is increasing recognition that neuroimaging research must take functional integration seriously to fully explain brain function (Friston, 2005; McIntosh, 2000).
The goal of this book is to provide the reader with a solid background in the techniques used for processing and analysis of functional magnetic resonance imaging (fMRI) data.
A brief overview of fMRI
Since its development in the early 1990s, fMRI has taken the scientific world by storm. This growth is easy to see from the plot of the number of papers that mention the technique in the PubMed database of biomedical literature, shown in Figure 1.1. Back in 1996 it was possible to sit down and read the entirety of the fMRI literature in a week, whereas now it is barely feasible to read all of the fMRI papers that were published in the previous week! The reason for this explosion in interest is that fMRI provides an unprecedented ability to safely and noninvasively image brain activity with very good spatial resolution and relatively good temporal resolution compared to previous methods such as positron emission tomography (PET).
Blood flow and neuronal activity
The most common method of fMRI takes advantage of the fact that when neurons in the brain become active, the amount of blood flowing through that area is increased. This phenomenon has been known for more than 100 years, though the mechanisms that cause it remain only partly understood. What is particularly interesting is that the amount of blood that is sent to the area is more than is needed to replenish the oxygen that is used by the activity of the cells.
In some cases fMRI data are collected from an individual with the goal of understanding that single person; for example, when fMRI is used to plan surgery to remove a tumor. However, in most cases, we wish to generalize across individuals to make claims about brain function that apply to our species more broadly. This requires that data be integrated across individuals; however, individual brains are highly variable in their size and shape, which requires that they first be transformed so that they are aligned with one another. The process of spatially transforming data into a common space for analysis is known as intersubject registration or spatial normalization.
In this chapter we will assume some familiarity with neuroanatomy; for those without experience in this domain, we discuss a number of useful atlases in Section 10.2. Portions of this chapter were adapted from Devlin & Poldrack (2007).
At a gross level, the human brain shows remarkable consistency in its overall structure across individuals, although it can vary widely in its size and shape. With the exception of those suffering genetic disorders of brain development, every human has a brain that has two hemispheres joined by a corpus callosum whose shape diverges relatively little across individuals. A set of major sulcal landmarks (such as the central sulcus, sylvian fissure, and cingulate sulcus) are present in virtually every individual, as are a very consistent set of deep brain structures such as the basal ganglia.
The dimensionality of fMRI data is so large that, in order to understand the data, it is necessary to use visualization tools that make it easier to see the larger patterns in the data. Parts of this chapter are adapted from Devlin & Poldrack (2007) and Poldrack (2007).
Visualizing activation data
It is most useful to visualize fMRI data using a tool that provides viewing in all three canonical orientations at once (see Figure 10.1), a capability available in all of the major analysis packages.
Because we wish to view the activation data overlaid on brain anatomy, it is necessary to choose an anatomical image to serve as an underlay. This anatomical image should be as faithful as possible to the functional image being overlaid. When viewing an individual participant's activation, the most accurate representation is obtained by overlaying the statistical maps onto that individual's own anatomical scan coregistered to the functional data. When viewing activation from a group analysis, the underlay should reflect the anatomical variability in the group as well as the smoothing that has been applied to the fMRI data. Overlaying the activation on an anatomical image from a single subject implies a degree of anatomical precision that is not actually present in the functional data. Instead, the activation should be visualized on an average structural image from the group coregistered to the functional data, preferably after applying the same amount of spatial smoothing as was applied to the functional data.
The goal of statistical inference is to make decisions based on our data, while accounting for uncertainty due to noise in the data. From a broad perspective, statistical inference on fMRI data is no different from traditional data analysis on, say, a response time dataset. Inference for fMRI is challenging, however, because of the massive nature of the datasets and their spatial form. Thus, we need to define precisely what are the features of the images that we want to make inference on, and we have to account for the multiplicity in searching over the brain for an effect.
We begin with a brief review of traditional univariate statistical inference and then discuss the different features in images we can make inference on and finally cover the very important issue of multiple testing.
Basics of statistical inference
We will first briefly review the concepts of classical hypothesis testing, which is the main approach used for statistical inference in fMRI analysis. A null hypothesis H0 is an assertion about a parameter, some feature of the population from which we're sampling. H0 is the default case, typically that of "no effect," and the alternative hypothesis H1 corresponds to the scientific hypothesis of interest. A test statistic T is a function of the data that summarizes the evidence against the null hypothesis. We write T for the yet-to-be-observed (random-valued) test statistic, and t for a particular observed value of T.
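These definitions can be made concrete with a small simulation. The sketch below computes a one-sample t statistic and then estimates a p-value by sign-flipping, a nonparametric way to build the null distribution of T; the simulated effect size and the symmetry-about-zero assumption are choices made here for illustration, not part of the text above.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.5, scale=1.0, size=30)   # simulated data with a true effect

def t_stat(x):
    # One-sample t statistic summarizing evidence against H0: mean = 0.
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

t_obs = t_stat(data)   # the observed value t of the statistic T

# Under H0 (and assuming symmetry about zero), flipping the sign of each
# observation leaves the distribution unchanged, so random sign flips
# generate draws from the null distribution of T.
null = np.array([t_stat(data * rng.choice([-1, 1], size=len(data)))
                 for _ in range(5000)])

# Two-sided p-value: how often a null T is at least as extreme as t_obs.
p_value = (np.abs(null) >= abs(t_obs)).mean()
```

The distinction between the random-valued T (here, the `null` draws) and the single observed t (`t_obs`) is exactly the T-versus-t notation introduced above.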
Many of the operations that are performed on fMRI data involve transforming images. In this chapter, we provide an overview of the basic image processing operations that are important for many different aspects of fMRI data analysis.
What is an image?
At its most basic, a digital image is a matrix of numbers that correspond to spatial locations. When we view an image, we do so by representing the numbers in the image in terms of gray values (as is common for anatomical MRI images such as in Figure 2.1) or color values (as is common for statistical parametric maps). We generally refer to each element in the image as a “voxel,” which is the three-dimensional analog to a pixel. When we “process” an image, we are generally performing some kind of mathematical operation on the matrix. For example, an operation that makes the image brighter (i.e., whiter) corresponds to increasing the values in the matrix.
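The image-as-matrix idea can be shown directly. In this toy sketch (the values are arbitrary), "brightening" really is nothing more than adding to the matrix, with a clip so values stay within the valid gray-value range.

```python
import numpy as np

# A toy 4x4 "image": each entry is a gray value (0 = black, 255 = white).
img = np.array([[  0,  50, 100, 150],
                [ 50, 100, 150, 200],
                [100, 150, 200, 250],
                [150, 200, 250, 255]], dtype=np.int32)

# Brightening the image corresponds to increasing the matrix values,
# clipped so they remain in the 0-255 gray-value range.
brighter = np.clip(img + 40, 0, 255)
```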
In a computer, images are represented as binary data, which means that the representation takes the form of ones and zeros, rather than being represented in a more familiar form such as numbers in plain text or in a spreadsheet. Larger numbers are represented by combining these ones and zeros; a more detailed description of this process is presented in Box 2.1.
Numeric formats. The most important implication of numeric representation is that information can be lost if the representation is not appropriate.
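A small example makes the risk concrete. Storing fine-grained statistical values in an integer format (the values below are invented for illustration) discards everything after the decimal point, whereas a floating-point format preserves the differences.

```python
import numpy as np

# Three statistical values that differ only after the decimal point.
stats = np.array([2.31, 2.35, 2.38])

# An integer representation truncates the fractional part:
# all three values collapse to the same number, and the
# information distinguishing them is permanently lost.
as_int = stats.astype(np.int16)

# A floating-point representation keeps the distinctions.
as_float = stats.astype(np.float32)
```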
Functional magnetic resonance imaging (fMRI) has, in less than two decades, become the most commonly used method for the study of human brain function. FMRI is a technique that uses magnetic resonance imaging to measure brain activity by measuring changes in the local oxygenation of blood, which in turn reflects the amount of local brain activity. The analysis of fMRI data is exceedingly complex, requiring the use of sophisticated techniques from signal and image processing and statistics in order to go from the raw data to the finished product, which is generally a statistical map showing which brain regions responded to some particular manipulation of mental or perceptual functions. There are now several software packages available for the processing and analysis of fMRI data, several of which are freely available.
The purpose of this book is to provide researchers with a sophisticated understanding of all of the techniques necessary for processing and analysis of fMRI data. The content is organized roughly in line with the standard flow of data processing operations, or processing stream, used in fMRI data analysis. After starting with a general introduction to fMRI, the chapters walk through all the steps that one takes in analyzing an fMRI dataset. We begin with an overview of basic image processing methods, providing an introduction to the kinds of data that are used in fMRI and how they can be transformed and filtered.
Just as music recorded in a studio requires mixing and editing before being played on the radio, MRI data from the scanner require a number of preprocessing operations in order to prepare the data for analysis. Some of these operations are meant to detect and repair potential artifacts in the data that may be caused either by the MRI scanner itself or by the person being scanned. Others are meant to prepare the data for later processing stages; for example, we may wish to spatially blur the data to help ensure that the assumptions of later statistical operations are not violated. This chapter provides an overview of the preprocessing operations that are applied to fMRI data prior to the analyses discussed in later chapters. The preprocessing of anatomical data will be discussed in Chapter 4.
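The spatial blurring mentioned above is typically a Gaussian smooth, specified by its full width at half maximum (FWHM). The sketch below shows the idea on a 1D intensity profile; the 6 mm FWHM and 3 mm voxel size are assumed values for illustration, and a real 3D smooth applies the same kernel separably along each axis.

```python
import numpy as np

# Convert the FWHM (the unit fMRI packages use) to the Gaussian sigma,
# in voxel units: FWHM = 2*sqrt(2*ln 2) * sigma.
fwhm_mm, voxel_mm = 6.0, 3.0
sigma = (fwhm_mm / voxel_mm) / (2 * np.sqrt(2 * np.log(2)))

# Build a normalized 1D Gaussian kernel.
x = np.arange(-10, 11)
kernel = np.exp(-x**2 / (2 * sigma**2))
kernel /= kernel.sum()

# Smooth a noisy 1D intensity profile with it: convolution averages
# each point with its neighbors, suppressing high-frequency noise.
rng = np.random.default_rng(1)
profile = np.zeros(100)
profile[40:60] = 1.0                      # a block of "signal"
noisy = profile + rng.normal(0, 0.3, 100)
smoothed = np.convolve(noisy, kernel, mode="same")
```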
In many places, the discussion in this chapter assumes basic knowledge of the mechanics of MRI data acquisition. Readers without a background in MRI physics should consult a textbook on MRI imaging techniques, such as Buxton (2002).
An overview of fMRI preprocessing
Preprocessing of fMRI data varies substantially between different software packages and different laboratories, but there is a standard set of methods to choose from. Figure 3.1 provides an overview of the various operations and the usual order in which they are performed. However, note that none of these preprocessing steps is absolutely necessary in all cases, although we believe that quality control measures are mandatory.
The statistical methods discussed in the book so far have had the common feature of trying to best characterize the dataset at hand. For example, when we apply the general linear model to a dataset, we use methods that determine the model parameters that best describe that dataset (where “best” means “with the lowest mean squared difference between the observed and fitted data points”). The field known variously as machine learning, statistical learning, or pattern recognition takes a different approach to modeling data. Instead of finding the model parameters that best characterize the observed data, machine learning methods attempt to find the model parameters that allow the most accurate prediction for new observations. The fact that these are not always the same is one of the most fundamental intuitions that underlies this approach.
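The gap between best-fitting and best-predicting can be demonstrated in a few lines. In this sketch (simulated data, with an arbitrary choice of polynomial degrees), a highly flexible model describes the training data at least as well as the correct model, yet that better fit does not translate into better prediction on held-out data.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
y = 2 * x + rng.normal(0, 0.3, size=40)      # true relationship is linear

# Split into interleaved training and held-out test sets.
train, test = np.arange(0, 40, 2), np.arange(1, 40, 2)

def train_test_mse(degree):
    # Fit a polynomial on the training data only...
    coef = np.polyfit(x[train], y[train], degree)
    # ...then measure mean squared error on both sets.
    err = lambda idx: np.mean((np.polyval(coef, x[idx]) - y[idx]) ** 2)
    return err(train), err(test)

lin_train, lin_test = train_test_mse(1)      # the correct (linear) model
big_train, big_test = train_test_mse(12)     # an overly flexible model
# The flexible model fits the training data at least as well as the
# linear one, but typically predicts the held-out data worse: the
# best-characterizing model and the best-predicting model differ.
```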
The field of machine learning is enormous and continually growing, and we can only skim the surface of these methods in this chapter. At points we will assume that the reader has some familiarity with the basic concepts of machine learning. For readers who want to learn more, there are several good textbooks on machine learning methods, including Alpaydin (2004), Bishop (2006), Duda et al. (2001), and Hastie et al. (2001).
The general linear model is an important tool in many fMRI data analyses. As the name "general" suggests, this model can be used for many different types of analyses, including correlations, one-sample t-tests, two-sample t-tests, analysis of variance (ANOVA), and analysis of covariance (ANCOVA). This appendix is a review of the GLM and covers parameter estimation, hypothesis testing, and model setup for these various types of analyses.
Some knowledge of matrix algebra is assumed in this section, and for a more detailed explanation of the GLM, it is recommended to read Neter et al. (1996).
Estimating GLM parameters
The GLM relates a single continuous dependent, or response, variable to one or more continuous or categorical independent variables, or predictors. The simplest case is simple linear regression, which contains a single independent variable: for example, modeling the relationship between the dependent variable of mental processing speed and the independent variable of age (Figure A.1). The goal is to create a model that fits the data well; since the relationship between age and processing speed appears to be linear, the model is Y = β0 + β1X1, where Y is a vector of length T containing the processing speeds for T subjects, β0 describes where the line crosses the y axis, β1 is the slope of the line, and X1 is a vector of length T containing the ages of the subjects.
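The least-squares estimate of the parameters in this simple regression can be computed directly from the design matrix. The sketch below uses hypothetical age and processing-speed values (invented for illustration) and solves the normal equations X'Xβ = X'Y for β.

```python
import numpy as np

# Hypothetical data: processing speed (Y) declining with age (X1).
age   = np.array([22, 30, 38, 45, 53, 61, 70], dtype=float)
speed = np.array([98, 94, 90, 85, 82, 77, 71], dtype=float)

# Design matrix: a column of ones (for the intercept beta0)
# alongside the ages (for the slope beta1).
X = np.column_stack([np.ones_like(age), age])

# Least-squares estimate: solve the normal equations X'X beta = X'Y,
# equivalent to beta-hat = (X'X)^(-1) X'Y.
beta = np.linalg.solve(X.T @ X, X.T @ speed)
intercept, slope = beta
```

The fitted line Y = β0 + β1X1 has a negative slope here, reflecting the decline of processing speed with age.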