OBJECTIVES/SPECIFIC AIMS: (1) Develop a concept map of ideas from diverse stakeholders on how best to improve training programs. (2) Assess the degree of consensus among stakeholders regarding importance and feasibility. (3) Identify which ideas are both important and feasible, to inform policy and curricular interventions.
METHODS/STUDY POPULATION: Concept mapping is a four-step approach to data gathering and analysis. (1) Stakeholders [pediatricians (peds), mental health professionals (MHPs), trainees, parents] were recruited to brainstorm ideas in response to the prompt: “To prepare future pediatricians for their role in caring for children and adolescents with mental and behavioral health conditions, residency training needs to...”. (2) Content analysis was used to edit and synthesize ideas. (3) A subgroup of stakeholders sorted the ideas into groups and rated them for importance and feasibility. (4) A larger group of anonymous participants rated the ideas for importance and feasibility. Multidimensional scaling and hierarchical cluster analysis grouped ideas into clusters. Average importance and feasibility were calculated for each cluster and compared statistically within each cluster and between subgroups. Bivariate plots were created to show the relative importance and feasibility of each idea. The “Go-Zone” is the region where statements are both important and feasible and can drive action planning.
RESULTS/ANTICIPATED RESULTS: Content analysis reduced 497 ideas to 99, which were sorted by 40 stakeholders and grouped into 7 clusters: Modalities, Prioritization of MH, Systems-Based, Self-Awareness/Relationship Building, Clinical Assessment, Treatment, and Diagnosis Specific Skills. In total, 216 participants rated statements for importance and 209 for feasibility: 17% MHPs, 82% peds, 55% trainees. There was little correlation between importance and feasibility within each cluster. Compared with peds, MHPs rated Modalities and Prioritization of MH higher in importance, rated Prioritization of MH as more feasible, but rated Treatment as less feasible. Trainees rated 5 of 7 clusters higher in importance, and all clusters more feasible, than established practitioners.
DISCUSSION/SIGNIFICANCE OF IMPACT: Statements deemed feasible and important should drive policy changes and curricular development. Innovation is needed to make important ideas more feasible. Differences between importance and feasibility within each cluster and between stakeholders need to be addressed to help training programs evolve.
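The “Go-Zone” described above can be sketched as a simple classification rule: a statement falls in the Go-Zone when both its mean importance rating and its mean feasibility rating exceed the grand means. A minimal illustration (the statement labels and ratings below are invented for the example, not taken from the study):

```python
# Go-Zone classification: a statement is in the Go-Zone when its average
# importance AND average feasibility both exceed the means across statements.
# Statements and ratings are hypothetical, for illustration only.
ratings = {
    "integrate MH rotations":   (4.6, 3.9),
    "fund MHP co-teaching":     (4.1, 2.5),
    "add screening checklists": (3.2, 4.4),
    "reform national policy":   (4.8, 1.8),
}  # statement -> (mean importance, mean feasibility)

def go_zone(ratings):
    imp_mean = sum(i for i, f in ratings.values()) / len(ratings)
    fea_mean = sum(f for i, f in ratings.values()) / len(ratings)
    return sorted(s for s, (i, f) in ratings.items()
                  if i > imp_mean and f > fea_mean)

print(go_zone(ratings))  # statements that can drive action planning
```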
The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor.
In mathematics you don’t understand things. You just get used to them.
(John von Neumann, 1903–57)
The majority of practical choice study applications do not progress beyond the simple multinomial logit (MNL) model discussed in previous chapters. The ease of computation, and the wide availability of software packages capable of estimating the MNL model, suggest that this trend will continue. The ease with which the MNL model may be estimated, however, comes at a price in the form of the assumption of Independent and Identically Distributed (IID) error components. While the IID assumption and the behaviorally comparable assumption of Independence of Irrelevant Alternatives (IIA) allow for ease of computation (as well as providing a closed-form solution), as with any assumption, violations both can and do occur. When violations occur, the cross-substitution effects (or correlations) observed between pairs of alternatives are no longer equal given the presence or absence of other alternatives within the complete list of available alternatives in the model (Louviere et al. 2000).
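The closed-form MNL probabilities, and the IIA property that comes with them, can be illustrated in a few lines. The sketch below (with invented utilities for three travel modes) shows that the ratio of probabilities between any two alternatives is unaffected by removing a third, which is exactly the assumption that cross-substitution violations break:

```python
import math

def mnl_probs(utilities):
    """Closed-form MNL choice probabilities: P_i = exp(V_i) / sum_j exp(V_j)."""
    expV = {alt: math.exp(v) for alt, v in utilities.items()}
    total = sum(expV.values())
    return {alt: e / total for alt, e in expV.items()}

# Hypothetical systematic utilities for three travel modes.
V = {"car": 1.2, "bus": 0.4, "train": 0.7}
p3 = mnl_probs(V)
p2 = mnl_probs({m: V[m] for m in ("car", "bus")})  # drop "train"

# IIA: the car/bus probability ratio is unchanged by removing train,
# and equals exp(V_car - V_bus) regardless of the choice set.
ratio3 = p3["car"] / p3["bus"]
ratio2 = p2["car"] / p2["bus"]
```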
The nested logit (NL) model represents a partial relaxation of the IID and IIA assumptions of the MNL model. As discussed in Chapter 4, this relaxation occurs in the variance components of the model, together with some correlation within sub-sets of alternatives, and while more advanced models such as the mixed multinomial logit (see Chapter 15) relax the IID assumption more fully, the NL model represents an excellent advance for the analyst in studies of choice. As with the MNL model, the NL model is relatively straightforward to estimate and offers the added benefit of a closed-form solution. More advanced models relax the IID assumption in terms of the covariances; however, all have open-form solutions and as such require complex calculations to identify changes in the choice probabilities through varying levels of attributes (see Louviere et al. (2000) and Train (2003, 2009), as well as the following chapters in this book). In this chapter, we show how to use Nlogit to estimate NL models and to interpret the output, especially the output that is additional to what is obtained when estimating an MNL model. As with previous chapters, we have been very specific in terms of our explanation of the command syntax as well as the output generated.
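The partial relaxation the NL model provides can be sketched with the familiar two-level formula: the probability of an alternative is the probability of its branch times the probability of the alternative within the branch, linked through the inclusive value. The nesting structure, utilities, and scale parameters below are invented for illustration only:

```python
import math

def nl_probs(nests):
    """Two-level nested logit. nests: {branch: (lambda, {alt: V})}.
    P(alt) = P(branch) * P(alt | branch), where the branch probability uses
    the inclusive value IV_b = ln sum_{j in b} exp(V_j / lambda_b)."""
    iv = {b: math.log(sum(math.exp(v / lam) for v in alts.values()))
          for b, (lam, alts) in nests.items()}
    denom = sum(math.exp(lam * iv[b]) for b, (lam, _) in nests.items())
    probs = {}
    for b, (lam, alts) in nests.items():
        p_branch = math.exp(lam * iv[b]) / denom
        within = sum(math.exp(v / lam) for v in alts.values())
        for alt, v in alts.items():
            probs[alt] = p_branch * math.exp(v / lam) / within
    return probs

# Hypothetical structure: public transit modes nested together, car alone.
# lambda = 0.6 allows correlation between bus and train unobserved effects.
nests = {"transit": (0.6, {"bus": 0.4, "train": 0.7}),
         "car":     (1.0, {"car": 1.2})}
p = nl_probs(nests)
```

When every scale parameter equals 1, the structure collapses back to the MNL model, which is one way to test whether the nesting is warranted.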
The secret of greatness is simple: do better work than any other man in your field – and keep on doing it.
(Wilfred A. Peterson)
The choice modeler has available a number of econometric models. Traditionally, the more common models applied to choice data are the multinomial logit (MNL) and nested logit (NL) models. Increasingly, however, choice modelers are estimating the mixed logit (ML) or random parameters logit model. In Chapter 4, we outlined the theory behind this class of models. In this chapter we estimate a range of ML models using Nlogit, including recent developments in scaled mixed logit (or generalized mixed logit). As with Chapters 11 and 13 (MNL model) and Chapter 14 (NL model), we explain in detail the commands necessary to estimate ML models as well as the interpretation of the output generated by Nlogit. The theory behind the ML model is presented in Chapter 4; however, we anticipate that after reading this chapter you will have a better understanding of the model, at least from an empirical standpoint.
The mixed logit model basic commands
The ML model syntax commands build on the commands of the MNL model discussed in Chapter 11. We begin with the basic ML syntax command, building upon this in later sections as we add to the complexity of the ML model.
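Unlike the MNL and NL models, ML probabilities have no closed form: they are integrals over the random parameter distribution and are approximated by simulation. Before turning to the Nlogit syntax, a minimal sketch of that simulation may help; the cost attribute, coefficient distribution, and number of draws are all invented for illustration (estimation software typically uses quasi-random Halton draws rather than the pseudo-random draws used here):

```python
import math, random

def mixed_logit_probs(costs, mu, sigma, n_draws=5000, seed=42):
    """Simulated mixed logit probabilities with one random coefficient:
    beta_r ~ N(mu, sigma). MNL probabilities are computed for each draw
    and averaged over the R draws. costs: {alt: cost attribute level}.
    All values are purely illustrative."""
    rng = random.Random(seed)
    acc = {alt: 0.0 for alt in costs}
    for _ in range(n_draws):
        beta = rng.gauss(mu, sigma)                       # one draw of beta
        expV = {alt: math.exp(beta * c) for alt, c in costs.items()}
        total = sum(expV.values())
        for alt in costs:
            acc[alt] += expV[alt] / total / n_draws       # running average
    return acc

costs = {"car": 5.0, "bus": 2.0, "train": 3.0}  # hypothetical cost levels
p = mixed_logit_probs(costs, mu=-0.3, sigma=0.1)
```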
This chapter was co-authored with Waiyan Leong and Andrew Collins.
Any economic decision or judgment has an associated, often subconscious, psychological process prodding it along, in ways that make the “neoclassical ambition of avoiding [this] necessity … unrealizable” (Simon 1978, 507). The translation of this fundamental statement on human behavior has become associated with the identification of the heuristics that individuals use to simplify preference construction and hence make choices, or to make the representation of what matters relevant, regardless of the degree of complexity as perceived by the decision maker and/or the analyst. Despite the recognition in behavioral research, as long ago as the 1950s (see Svenson 1998), that cognitive processes have a key role in preference revelation, and despite the reminders throughout the literature (see McFadden 2001b; Yoon and Simonson 2008) about rule-driven behavior, we still see relatively little of the decision processing literature incorporated into discrete choice modeling, which is increasingly becoming the mainstream empirical context for preference measurement and willingness to pay (WTP) derivatives.
There is an extensive literature focussing on these matters that might broadly be described as heuristics and biases, and which is crystallized in the notion of process, in contrast to outcome.
I’m all in favor of keeping dangerous weapons out of the hands of fools. Let’s start with typewriters.
(Frank Lloyd Wright, 1867–1959)
Almost without exception, everything human beings undertake involves a choice (consciously or subconsciously), including the choice not to choose. Some choices are the result of habit while others are fresh decisions made with great care, based on whatever information is available at the time from past experiences and/or current inquiry.
Over the last forty years, there has been a steadily growing interest in the development and application of quantitative statistical methods to study choices made by individuals (and, to a lesser extent, groups of individuals or organizations). With an emphasis on both understanding how choices are made and on forecasting future choice responses, a healthy literature has evolved. Reference works by Louviere et al. (2000) and Train (2003, 2009) synthesize the contributions. However, while these sources represent the state of the art (and practice), they are technically advanced and often a challenge for beginners and practitioners alike.
To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of.
(Ronald A. Fisher, 1890–1962)
As seen in Chapter 2, individual preferences, subject to any constraints faced by those operating in a market, will give rise to choices. These choices in the aggregate sum to represent the total demand for various goods and services within that market. Rather than attempt to model demand based on aggregate level data, discrete choice models seek to model demand using disaggregate level data. Note that this does not necessarily mean that different discrete choice models are estimated for each individual, although some researchers do attempt such feats (e.g., Louviere et al. 2008). Rather, models dealing with aggregate level demand data typically work with variables where each data point represents the amount of some good or service sold at a specific point in time, whereas discrete choice models are typically applied to data where each data point represents an individual choice situation, and the sum of the choices combines to produce information about overall demand.
Importantly, to be able to refer to “demand” we have to allow for the presence of a “no choice,” since some goods and services are not consumed by an individual. Throughout this chapter and the rest of the book, we will refer to both choice and demand, and treat them as interchangeable words. In doing so, we also recognize the broader context within which discrete choice models can be used, that often distinguishes between discrete choice models and a complete system of demand models, the latter at a more aggregate economy wide level, in contrast to discrete choice models that are most commonly applied at a sectoral level (e.g., transport or health). Truong and Hensher (2012), among others, develop the theoretical linkages between discrete choice models and continuous choice models, where discrete choice models focus on the structure of tastes or preferences at the individual level, while continuous demand models can be used to describe the interactions between these preferences at the industry or sectoral level, extendable to an entire economy.
This chapter will discuss some issues in statistical inference in the analysis of choice models. We are concerned with two kinds of computations, hypothesis tests and variance estimation. To illustrate the analyses, we will work through an example based on a revealed preference (RP) data set. In this chapter, we present syntax and output generated using Nlogit to demonstrate the concepts covered. The syntax and output are, for the more familiar reader, largely self-explanatory; however, for the less familiar reader, we refer you to Chapter 11, which you may wish to read before going further. The multinomial logit model for the study is shown in the following Nlogit setup, which gives the utility functions for four travel modes: bus, train, busway and car, respectively:
The attributes are act = access time, invc = in vehicle cost, invt2 = in vehicle time, egt = egress time, trnf = transfer wait time, tc = toll cost, pc = parking cost, and invt = in vehicle time for car. Where a particular example uses a method given in more detail in later chapters, we will provide a cross-reference.
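Of the two kinds of computations mentioned above, the hypothesis test most often used with choice models is the likelihood ratio test for nested specifications. A minimal sketch, with invented log-likelihood values (e.g., comparing the full model above against a restricted model with one attribute excluded):

```python
# Likelihood ratio test for nested choice models:
#   LR = -2 * (LL_restricted - LL_unrestricted),
# distributed chi-square with df equal to the number of restrictions.
# The log-likelihood values below are hypothetical.
def lr_statistic(ll_restricted, ll_unrestricted):
    return -2.0 * (ll_restricted - ll_unrestricted)

LL_R = -204.7   # restricted model, e.g., in-vehicle cost excluded (invented)
LL_U = -198.3   # unrestricted (full) model (invented)
lr = lr_statistic(LL_R, LL_U)

CHI2_CRIT_1DF_05 = 3.841  # 5% critical value, chi-square, 1 degree of freedom
reject = lr > CHI2_CRIT_1DF_05  # reject the restriction if LR exceeds it
```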
The literature on household economics has made substantial progress in the study of group decision making, beginning with the initial theoretical contributions (Becker 1993; Browning and Chiappori 1998; Lampietti 1999; Chiuri 2000; Vermeulen 2002), and subsequent empirical applications in various fields, such as marketing (Arora and Allenby 1999; Adamowicz et al. 2005), transport (Brewer and Hensher 2000; Hensher et al. 2007), and environmental economics (Quiggin 1998; Smith and Houtven 1998; Bateman and Munroe 2005; Dosman and Adamowicz 2006). Recent studies, for example, provide evidence of substantial differences in taste intensities between domestic partners, and make an attempt at reconciling them with observed joint choices using power functions (Dosman and Adamowicz 2006; Beharry et al. 2009). The evidence collected so far indicates that, for some categories of decisions, the conventional practice of selecting one member of the couple as representative of the tastes of the entire household may be biased when compared with the preference estimates underlying joint deliberation by the same couple.
Despite the existence of an extensive literature on group decision making, synthesized in Dellaert et al. (1998) and Vermeulen (2002), there has been a limited focus on ways in which multiple agents have been recognized in the formalization of discrete choice models. This literature can broadly be divided into two streams: (i) a focus on the game playing between agents in a sequential choice process that involves initial preferences (with or without knowledge of the agent’s choice), followed by a process of feedback, review, and revision or maintenance of the initial preference. This approach endogenizes the preferences of other decision makers in the ultimate group decision. We call this interactive agency choice experiments (IACE), as developed initially by Hensher and detailed in Brewer and Hensher (2000) and Rose and Hensher (2004). (ii) studies that develop ways of establishing the influence and power of each agent in the joint choice outcome, which may or may not use an IACE framework. Puckett and Hensher (2006) review this literature, which is primarily in marketing and household economics and has, for example, been extended and implemented in the study of freight distribution chains by Hensher et al. (2008), to the study of partnering between bus operators and the regulator by Hensher and Knowles (2007) and, most recently, to the household purchase of alternative fueled vehicles by Hensher et al. (2011) and Beck et al. (2012).
As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.
(Albert Einstein, 1879–1955)
This chapter was co-authored with Michiel Bliemer and Andrew Collins.
This chapter might be regarded as a diversion from the main theme of discrete choice models and estimation; however, the popularity of stated choice (SC) data developed within a formal framework known as the “design of choice experiments” is sufficient reason to include one chapter on the topic, a topic growing in such interest that it justifies an entire book-length treatment. In considering the focus of this chapter (in contrast to the chapter in the first edition), we have decided to focus on three themes. The first is a broad synthesis of what is essentially experimental design in the context of data needs for choice analysis (essentially material edited from the first edition). The second is an overview in reasonable chronological order of the main developments in the literature on experimental design, drawing on the contribution of Rose and Bliemer (2014), providing an informative journey on the evolution of approaches that are used to varying degrees in the design and implementation of choice experiments. With the historical record in place, we then focus on a number of topics which we believe need to be given a more detailed treatment, which includes sample size issues, best–worst designs, and pivot designs. We draw on the key contributions in Rose and Bliemer (2012, 2013); Rose (2014); and Rose et al. (2008). We use Ngene (Choice Metrics 2012), a comprehensive tool that complements Nlogit5 and which has the capability to design the wide range of choice experiments discussed in this chapter, and to provide syntax for use in a few of the designs. We refer the reader to the Ngene manual for more details (www.choice-metrics.com/documentation.html).
A growing number of empirical studies involves the assessment of influences on a choice among ordered discrete alternatives. Ordered logit and probit models are well known, including extensions to accommodate random parameters (RP) and heteroskedasticity in unobserved variance (see, e.g., Bhat and Pulugurtha 1998; Greene 2007). The ordered choice model allows for non-linear effects of any variable on the probabilities associated with each ordered level (see, e.g., Eluru et al. 2008). However, the traditional ordered choice model is potentially limited, behaviorally, in that it holds the threshold values to be fixed. This can lead to inconsistent (i.e., incorrect) estimates of the effects of variables. Extending the ordered choice random parameter model to account for threshold random heterogeneity, as well as underlying systematic sources of explanation for unobserved heterogeneity, is a logical extension in line with the growing interest in choice analysis in establishing additional candidate sources of observed and unobserved taste heterogeneity.
A substantive application is used here to illustrate the behavioral gains from generalizing the ordered choice model to accommodate random thresholds in the presence of RP. It focusses on the role that a specific attribute processing strategy (preserving each attribute or ignoring it) plays when choosing among unlabeled attribute packages of alternative tolled and non-tolled routes for the commuting trip in a stated choice experiment (see Hensher 2001a, 2004, 2008). The ordering represents the number of attributes attended to from the full set. Despite a growing number of studies focussing on these issues (see, e.g., Cantillo et al. 2006; Hensher 2006; Swait 2001; Campbell et al. 2008), the entire domain of every attribute is treated as relevant to some degree, and included in the utility expressions for every individual. While acknowledging the extensive study of non-linearity in attribute specification, which permits varying marginal (dis)utility over an attribute’s range, including accounting for asymmetric preferences under conditions of gain and loss (see Hess et al. 2008), this is not the same as establishing ex ante the extent to which a specific attribute might be totally excluded from consideration for all manner of reasons, including the influence of the design of a choice experiment when stated choice data is being used.
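The role the thresholds play in the standard ordered model can be made concrete with its cell probabilities: P(y = j) is the logistic CDF evaluated between adjacent thresholds, shifted by the index x'β. The index value and thresholds below are invented; the sketch shows the fixed-threshold baseline that the random-threshold extension generalizes:

```python
import math

def ordered_logit_probs(xb, thresholds):
    """Ordered logit cell probabilities:
    P(y = j) = L(mu_j - x'b) - L(mu_{j-1} - x'b), with L the logistic CDF
    and mu_0 = -inf, mu_J = +inf. Thresholds must be strictly increasing.
    In the standard model the mu_j are fixed; the extension discussed in
    the text lets them vary randomly across individuals."""
    L = lambda z: 1.0 / (1.0 + math.exp(-z))
    cuts = [-math.inf] + list(thresholds) + [math.inf]
    cdf = [0.0 if c == -math.inf else 1.0 if c == math.inf else L(c - xb)
           for c in cuts]
    return [cdf[j + 1] - cdf[j] for j in range(len(cuts) - 1)]

# Hypothetical index value and thresholds for a 4-level ordered outcome
# (e.g., number of attributes attended to, grouped into 4 ordered bands).
p = ordered_logit_probs(xb=0.5, thresholds=[-1.0, 0.2, 1.5])
```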
In Chapter 11 we presented the standard output generated by Nlogit for the multinomial logit (MNL) choice model. By the addition of supplementary commands to the basic command syntax, the analyst is able to generate further output to aid in an understanding of choice. We present some of these additional commands now. As before, we demonstrate how the command syntax should appear and detail line by line how to interpret the output. The revealed preference (RP) data in the North West travel choice data set is used to illustrate the set of commands and outputs.
The entire command set up and model output is given up front to make it easy for the reader to see at a glance the commands that are used in this chapter. The command set up has two choice models; the first is the MNL model estimated to obtain the standard set of parameter estimates as well as useful additional outputs such as elasticities, partial (or marginal) effects, and prediction success; the second MNL model uses the parameter estimates from the first model to undertake “what if” analysis using ;simulation and ;scenario, which involves selecting the relevant alternatives and attributes you want to change to predict the absolute and relative change in the choice shares. Arc elasticities can be inferred from the scenario analysis, since it provides before and after choice shares associated with before and after attribute levels.
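The arc elasticity inferred from the scenario output is just the midpoint formula applied to the before and after values. A minimal sketch, with invented shares and fare levels:

```python
def arc_elasticity(share_before, share_after, attr_before, attr_after):
    """Arc elasticity from before/after choice shares and attribute levels,
    using the midpoint formula: (dQ / Q_mid) / (dX / X_mid).
    All input values in the example are hypothetical."""
    dq = share_after - share_before
    dx = attr_after - attr_before
    q_mid = (share_after + share_before) / 2.0
    x_mid = (attr_after + attr_before) / 2.0
    return (dq / q_mid) / (dx / x_mid)

# e.g., bus fare rises from 2.00 to 2.50 and the bus share falls 0.30 -> 0.27
e = arc_elasticity(0.30, 0.27, 2.00, 2.50)  # negative: demand falls with fare
```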
As soon as questions of will or decision or reason or choice of action arise, human science is at a loss.
(Noam Chomsky, 1928–)
Individuals are born traders. They consciously or subconsciously make decisions by comparing alternatives and selecting an action that we call a choice outcome. As simple as the observed outcome may be to the decision maker (i.e., the chooser), the analyst who is trying to explain this choice outcome through some captured data will never have available all the information required to be able to explain the choice outcome fully. This challenge becomes even more demanding as we study the population of individuals, since differences between individuals abound.
If the world of individuals could be represented by one person, then life for the analyst would be greatly simplified, because whatever choice response we elicit from that one person could be expanded to the population as a whole to get the overall number of individuals choosing a specific alternative. Unfortunately there is a huge amount of variability in the reasoning underlying decisions made by a population of individuals. This variability, often referred to as heterogeneity, is in the main not observed by the analyst. The challenge is to find ways of observing and hence measuring this variability, maximizing the amount of measured variability (or observed heterogeneity) and minimizing the amount of unmeasured variability (or unobserved heterogeneity). The main task of the choice analyst is to capture such information through data collection, and to recognize that any information not captured in the data (be it known but not measured, or simply unknown) is still relevant to an individual’s choice, and must somehow be included in the effort to explain choice behavior.
An economist is an expert who will know tomorrow why the things he predicted yesterday didn’t happen today.
(Laurence J. Peter, 1919–90)
In this chapter we demonstrate, through the use of a labeled mode choice data set (summarized in Appendix 11A to this chapter), how to model choice data by means of Nlogit. In writing this chapter we have been very specific. We demonstrate line by line the commands necessary to estimate a model in Nlogit. We do likewise with the output, describing in detail what each line of output means in practical terms. Knowing that “one must learn to walk before one runs,” we begin with estimation of the most basic of choice models, the multinomial logit (MNL). We devote Chapter 12 to additional output that may be obtained for the basic MNL model and later chapters (especially Chapters 21–22) to more advanced models.
Modeling choice in Nlogit: the MNL command
The basic commands necessary for the estimation of choice models in Nlogit are as follows:
;lhs = choice, cset, altij
;choices = <names of alternatives>
U(alternative 1 name) = <utility function 1>/
U(alternative 2 name) = <utility function 2>/
U(alternative i name) = <utility function i>$
We will use this command syntax with the labeled mode choice data described in Chapter 10, shown here as:
While other command structures are possible (e.g., using RHS and RH2 instead of specifying the utility functions – we do not describe these here and refer the interested reader to Nlogit’s help references), the above format provides the analyst with the greatest flexibility in specifying choice models. It is for this reason that we use this command format over the other formats available.
In teaching courses on discrete choice modeling, we have increasingly observed that many participants struggle with the look of choice data. Courses and texts on econometrics often provide the reader with an already formatted data set (as does this book) yet fail to mention how (and why) the data were formatted in the manner they were. This leaves the user to work out the whys and the hows of data formatting by themselves (albeit with the help of lists such as the Limdep List: see http://limdep.itls.usyd.edu.au). The alternative is for the non-expert to turn to user manuals; however, such manuals are often written by experts for experts. We have found that even specialists in experimental design or econometrics have problems in setting up their data for choice modeling.
We now focus on how to format choice data for estimation purposes. We concentrate on data formatting for the program Nlogit from Econometric Software. While other programs capable of modeling choice data exist in the market, we choose to concentrate on Nlogit because this is the program that the authors are most familiar with (indeed Greene and Hensher are the developers of Nlogit). Nlogit also offers all of the discrete choice models that are used by practitioners and researchers. The release of Nlogit5.0 in August 2012 came with a comprehensive set of (four) online manuals (and no hard copy manuals). The discussion here complements the manuals. All the features of Nlogit that are used here are available in version 5.0 (dated September 2012).
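The essence of the formatting task is moving from “wide” data (one row per choice situation) to the “long” layout estimation expects: one row per alternative, carrying the choice-set size (cset), the alternative index (altij), and a 0/1 choice indicator. A minimal sketch of that reshape, with invented column and alternative names (real data sets will have many more attributes per alternative):

```python
# Reshape "wide" choice data (one row per choice situation) into the "long"
# format used for choice-model estimation: one row per alternative.
# Column names, alternative names, and values are invented for illustration.
ALTS = ["bus", "train", "car"]

def wide_to_long(rows):
    long_rows = []
    for r in rows:
        for j, alt in enumerate(ALTS, start=1):
            long_rows.append({
                "id": r["id"],                         # respondent/situation id
                "altij": j,                            # alternative index
                "cset": len(ALTS),                     # choice-set size
                "choice": 1 if r["chosen"] == alt else 0,
                "cost": r[f"cost_{alt}"],              # alternative's attribute
            })
    return long_rows

wide = [{"id": 1, "chosen": "bus",
         "cost_bus": 2.0, "cost_train": 3.0, "cost_car": 5.0}]
long_data = wide_to_long(wide)
```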