
Darwin’s bureaucrat

Reassessing the microfoundations of bureaucracy scholarship

Published online by Cambridge University Press:  06 December 2019

Kevin B. Smith (University of Nebraska–Lincoln)
Jayme L. N. Renfro (University of Northern Iowa)

Abstract

The study of bureaucratic behavior—focusing on control, decision-making, and institutional arrangements—has historically leaned heavily on theories of rational choice and bounded rationality. Notably absent from this research, however, is attention to the growing literature on the biological, and especially evolutionary, bases of human behavior. This article addresses this gap by closely examining the extant economic and psychological frameworks—which we refer to as “Adam Smith’s bureaucrat” and “Herbert Simon’s bureaucrat”—for their shortcomings in terms of explanatory and predictive theory, and by positing a different framework, which we call “Charles Darwin’s bureaucrat.” This model incorporates new insights from an expanding multidisciplinary research framework and has the potential to address some of the long-noted weaknesses of classic theories of bureaucratic behavior.

Type: Article
Copyright: © Association for Politics and the Life Sciences 2019

Biopolitics is the use of life science methods in the study of human political behavior. Who we are and how we act are influenced by both biological and environmental factors—nature and nurture. Those who study the life sciences would largely see this as obvious, and although some social science fields (namely, psychology and economics) have begun to swing this way, political science and the policy/administration sciences have only recently become open to this possibility. Consequently, biopolitics, and more specifically biology and public administration, have historically been behind the curve in terms of research, suffering from poor reception and support as traditionalists continue to push environmental determinism.

More than 40 years ago, there were signs that political and policy research might be starting to lean toward integrating life science methodology. Luther Gulick, a prominent public administration scholar, published a paper imploring “the establishment of a systematic watch for a new thinking and involving advance in fundamental science relating to human behavior.”Footnote 1 Lamenting the tendency of policy and public administration researchers to shy away from developments in individual behavioral research, he pointed out the direct relevance of these developments to their fields. Rational actor theories, altruism, decision-making, and leadership are all central concerns of political science broadly, and of policy and public administration studies specifically, he argued, and so it is imperative for these fields to be informed by, if not embrace, the methods and theories provided by the life sciences.

The late 1970s and early 1980s saw the publication of a handful of works seemingly inspired by Gulick’s call to action. Robert Presthus and Lynton Caldwell both published research looking at the biobehavioral aspects of bureaucratic organization and policymaking,Footnote 2, Footnote 3 and Gulick himself published a collection of essays on biology and bureaucracy in 1984.Footnote 4 Since then, with the notable exception of Nancy Meyer-Emerick’s article covering the concepts of biopolitics and their potential application to public administration,Footnote 5 there has been a long gap in the application and discussion of biopolitics in the policy/public administration literature. There has been some recent movement, however: new journals in behavioral public administration and behavioral public policy have sprung up in the past couple of years, along with a handful of promising and insightful articles in other journals.

This is not to say that biopolitics was a dead field during that gap; indeed, there has been a resurgence of interest and advances in the application of biology to politics. These works, however, have been housed largely within a citizen-behavior framework, even when the themes are clearly relevant to policy and administration. For example, BouchardFootnote 6 and Hetherington and WeilerFootnote 7 looked at the concept of authoritarianism through an evolutionary lens, linking views on submission to authority and rule conformity with long-developed survival instincts. McDermott et al.Footnote 8 examined the relationship between genetics and aggression, finding that carriers of a low-activity variant of the monoamine oxidase A gene display increased aggression in provocative situations. Anderson and SummersFootnote 9 provided a framework for leadership emergence and effectiveness. Research on empathy indicates that humans are predisposed toward prosocial cooperative behavior rather than self-serving rational behavior.Footnote 10

Leadership, cooperation, empathy—these are all areas with which the policy and public administration fields have concerned themselves, yet these lines of research have largely remained separate. Further, most of the work (though certainly not all—see Nicholson-Crotty, Nicholson-Crotty, and WebeckFootnote 11 and Christensen and WrightFootnote 12 for two recent examples) has focused on the public’s attitudes surrounding policy and bureaucracy. The present article endeavors to demonstrate how we might take the existing momentum forward by tackling theories of bureaucratic behavior.

Bureaucrats occupy a unique position in a democratic polity. They are policymakers who engage in politics “of the first order” and help determine the will of the state.Footnote 13, Footnote 14 Yet they are not directly accountable to voters and hold an information advantage over their elected counterparts.Footnote 15 The result is a paradox: “How does one square a permanent [and powerful] … civil service—which neither the people by the vote nor their representatives by their appointments can readily replace—with the principle of government ‘by the people’?”Footnote 16

This paradox helps explain why the bureaucracy literature is heavily oriented toward issues of control and accountability and the institutional arrangements that best facilitate such aims.Footnote 17, Footnote 18, Footnote 19 Critical to this undertaking is a comprehensive and broadly accepted theory of bureaucratic behavior, a framework of human decision-making that accounts for the unique political position of bureaucrats. Currently, no such framework exists. There are several contenders, but none with general empirical confirmation of its descriptive and predictive powers. Why?

We argue that mainstream models of bureaucratic behavior rest on incomplete and/or demonstrably false assumptions about human nature. All behavioral frameworks ultimately rest on such assumptions, and the validity of these assumptions drives the descriptive, predictive, and explanatory powers of the model.Footnote 20, Footnote 21 Theories of bureaucratic behavior are anchored by classically or boundedly rational perspectives of human nature. We argue that these provide a weak foundation for general behavioral models, especially in terms of generating effective prescriptive responses to issues of democratic control. Better theories must start with a more complete understanding of how humans vested with discretionary authority and domain-specific expertise make choices. Toward that end, this article argues for expanding the disciplinary boundaries of bureaucracy scholarship in new directions.

Politics, administration, and the policymaking bureaucrat

Models of bureaucratic behavior must account for two fundamental characteristics of civil servants: (1) bureaucrats routinely exercise discretionary authority, that is, they make choices and take actions that, in practice, constitute authoritative expressions of the will of the state; and (2) bureaucrats routinely know things that other people do not—they have an information advantage. Classic worksFootnote 22, Footnote 23, Footnote 24 argue that even the lowliest public servants routinely exercise discretionary authority, and information asymmetry is broadly recognized.Footnote 25, Footnote 26 Given this, a critical question is, How will bureaucrats use their discretionary power and information advantages to make decisions? How a theory answers that question is the foundation for understanding how (whether) democracy can control bureaucracy and for formulating prescriptive institutional reform.

There have been numerous responses to this foundational question. Corralling these theories into a parsimonious set of foils to test a competing theoretical proposition inevitably risks trivializing or ignoring parts of a large literature. A comprehensive, if not exhaustive, approach is to divide models of bureaucratic behavior depending on whether they are based on economic or psychological perspectives of human nature. This division creates two distinct ideal types of bureaucrat, which we term “Adam Smith’s bureaucrat” and “Herbert Simon’s bureaucrat.” We propose a third archetype, which we call “Charles Darwin’s bureaucrat,” whose nature is anchored in biology and evolutionary theory.

Adam Smith’s bureaucrat

Adam Smith’s bureaucrat is the progeny of classical microeconomic theory and an important part of theories of bureaucratic behavior. The genesis for Adam Smith’s bureaucrat is the rational choice paradigm, exemplified by seminal works by Downs,Footnote 27 Olson,Footnote 28 and Buchanan and Tullock.Footnote 29 All represent influential theories of political behavior based on minimalistic assumptions about human nature. These works were the vanguard of the rational choice movement, which by the turn of the twenty-first century, for good or ill, occupied the theoretical center of political science.Footnote 20

Rational choice takes an individualistic view of human nature, and its variants share a core of critical assumptions about humans as decision makers: (1) they have stable, transitive preferences; (2) they can calculate the cost of foregone opportunities; (3) they can compare future and present benefits; and (4) they have a utility function that integrates all aspects of their lives.Footnote 30 Given these characteristics, a comprehensively rational actor will generate a set of expected payoffs for each possible outcome in a choice situation, rank order them, and choose the one that best maximizes utility.Footnote 31 Or, as Buchanan and TullockFootnote 32 succinctly put it, people know their preferences and will seek to satisfy them by choosing “more” over “less.”
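
The logic of comprehensive rationality can be made concrete with a minimal sketch. The code below is purely illustrative; the option names, payoffs, and probabilities are invented for the example and are not drawn from the article. It simply encodes the decision rule just described: score every option by its expected utility, rank the scores, and choose the maximum.

```python
# Illustrative sketch of a comprehensively rational chooser.
# The option names, payoffs, and probabilities are hypothetical,
# invented for this example; they are not drawn from the article.

def expected_utility(payoffs, probabilities):
    """Probability-weighted sum of the payoffs for a single option."""
    return sum(p * u for p, u in zip(probabilities, payoffs))

def rational_choice(options):
    """Score every option, rank the scores, and return the utility maximizer."""
    scored = {name: expected_utility(payoffs, probs)
              for name, (payoffs, probs) in options.items()}
    ranking = sorted(scored, key=scored.get, reverse=True)
    return ranking[0], scored

# Each option maps to (possible payoffs, probabilities of those payoffs).
options = {
    "expand_program": ([10, -2], [0.6, 0.4]),   # expected utility 5.2
    "status_quo":     ([4, 4],   [0.5, 0.5]),   # expected utility 4.0
    "cut_program":    ([8, -6],  [0.3, 0.7]),   # expected utility -1.8
}

best, scores = rational_choice(options)
print(best)  # "expand_program": the actor always takes the highest-scoring option
```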

These assumptions are operationalized into models of behavior through a deductive methodological framework: create a decision-making situation and deduce how the individual rational actor behaves in that situation. The logical deduction of individually rational behavior, often in the form of sophisticated mathematical manipulations, is given particular emphasis (to the point that some question whether empirical confirmation is slightedFootnote 33). This approach has been used extensively to build models of bureaucratic behavior—models that have produced a clear and concise answer to what bureaucrats do with inside information and discretionary authority: further their own self-interest. Classic studies, notably by Tullock,Footnote 15 Downs,Footnote 34 and Niskanen,Footnote 35 employed informal and formal versions of the rational choice paradigm, but all supported this general conclusion.

This conclusion, however, is critically dependent on the underlying assumptions about human nature, which go beyond the axiomatic assumptions of the rational choice paradigm just sketched. An initial problem for applying rational choice models to bureaucratic behavior was how to handle the key question of bureaucratic preferences. Rational choice makes no formal requirement about the form that preferences take. Preferences are taken as “given” or considered “primitive”; the assumption is simply that actors have preferences and self-interestedly seek to satisfy them. To explain and predict behavior, rational choice models require a secondary assumption about preferences. In economics, this commonly takes the form of economic self-interest. Yet short of outright corruption, bureaucrats do not typically make decisions that affect their short-term economic gains. Something had to substitute for economic self-interest to drive models of bureaucratic behavior.

Beyond the general assumption of self-interest, there has never been a clear consensus on what that something is. Tullock argued that bureaucratic self-interest is represented by career advancement and that this goal determines preference orderings in decision situations. For example, in Tullock’s framework, a bureaucrat is rational to highlight information reflecting positively on their job performance and to suppress information doing the opposite.Footnote 15 This resulted in a distorted set of expectations about the capabilities and performance of bureaucracies as a whole.

Downs took a more pluralistic view, arguing that preferences could be shaped by a variety of individual goals: specific programmatic desires, job security, or even the public interest. Still, Downs came to a similar conclusion as Tullock: rational bureaucrats have strong incentives to distort information and to use their discretionary authority to shirk goals that do not serve their own interests.Footnote 36 Niskanen also recognized that bureaucrats may be motivated by a variety of goals, but he argued that even a well-meaning bureaucrat is unlikely to advance the public interest much.Footnote 37 Niskanen argued that administrative man’s equivalent to economic man’s self-interest is budget maximization.

The picture of the decision maker emerging from such studies is the archetypical Adam Smith’s bureaucrat: someone who puts their discretionary authority and information advantage toward selfish ends. The end result of an agency full of Adam Smith’s bureaucrats is inefficiency and dysfunction and an organization that serves its own needs rather than those of democracy. This helps explain the bureaucracy literature’s keen interest in issues of control, its focus on the incentives for individual decision-making, and the popularity of prescriptive reforms arguing that self-interested actors can produce a better collective good in more market-like institutional arrangements.Footnote 17, Footnote 34, Footnote 35, Footnote 38, Footnote 39, Footnote 40, Footnote 41, Footnote 42, Footnote 43

Adam Smith’s bureaucrat provides a fruitful basis for study and for formulating prescriptive institutional reforms, but it has a major drawback as a foundational element for a theory of bureaucratic behavior: there is not much empirical evidence that such a bureaucrat exists. Rational choice challengers argue that its central assumptions about human nature are wrong and that its behavioral predictions have been repeatedly falsified.Footnote 44 The bureaucratic arena is no different. Employees of public agencies are routinely observed acting in ways that contradict the behavioral expectations of rational choice. Budgets are minimized as well as maximized, whistle-blowers destroy their careers publicizing rather than distorting information, and some bureaucrats have actively tried to put their bureaus out of business on the grounds that doing so is in the public interest.Footnote 45, Footnote 46, Footnote 47 Bureaucrats generally seem to be more interested in public-spirited working than the self-interested shirking that the Adam Smith archetype predicts.Footnote 12, Footnote 17, Footnote 21, Footnote 44, Footnote 48

Principal-agent theory is perhaps the most commonly used framework for applying rational choice models to bureaucratic decision-making, and it exemplifies the general and particular problems of rational choice explanations of bureaucratic behavior. Theoretically, principal-agent theory, like most game theoretic frameworks, rests in part on the common knowledge assumption, which holds that individuals act rationally and expect others to do the same. Prescriptively, principal-agent theory consistently concludes that agency performance is heavily dependent on setting up the right incentive structures for bureaucratic decision-making. Yet the common knowledge assumption is empirically contradicted by an extensive literature in experimental economics, and an equally extensive record shows that performance and incentives are not strongly correlated.Footnote 49, Footnote 50 Agents routinely act “irrationally” by the precepts of principal-agent theory—and as a result, organizations work considerably better than rational choice frameworks predict.Footnote 51 In short, both the descriptive and predictive power of Adam Smith’s bureaucrat are weak.

Herbert Simon’s bureaucrat

Psychology, rather than economics, can also provide the microfoundations of bureaucratic behavior. Observing actual bureaucrats engaged in a budgeting process, Herbert SimonFootnote 52, Footnote 53 saw little to suggest that rational utility maximization drives their behavior. Bureaucrats are creatures of habit, have problems making trade-offs, and their emotions and values drive decisions as much as any calculation of marginal utility gains. The id and the superego are as important as the self-interested ego. In short, bureaucrats fall far short of the assumptions underpinning comprehensive rationality. Simon concluded that the rational choice conception of human nature was so far removed from reality that trying to use utility maximization frameworks to explain decision-making was “hopeless.”Footnote 54 Simon saw people as goal oriented and intendedly rational, but he believed that their ability to behave rationally is limited (or bounded) by the cognitive and emotional makeup of the human mind.

Little suggested to Simon that people make choices by calculating and rank ordering payoffs across all known or expected outcomes. Rather than processing information in parallel to create a grand utility function, Simon’s more inductive approach to decision-making led to the conclusion that humans examine options sequentially against a preset level of satisfaction. Classical rational choice requires calculating utility for all potential outcomes; bounded rationality reduces this to a simple binary choice. If an option clears an individual’s aspiration level, it is chosen; if not, another option is sought. People satisfice, in other words, rather than maximize.
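
A minimal sketch highlights the contrast. The options, utilities, and aspiration level below are hypothetical, invented for illustration; the point is only that the satisficer stops at the first “good enough” option while the maximizer must evaluate them all.

```python
# Minimal contrast between a maximizer and a satisficer.
# The options, utilities, and aspiration level are hypothetical,
# invented for this example.

def maximize(options):
    """Comprehensive rationality: evaluate every option, pick the best."""
    return max(options, key=options.get)

def satisfice(options, aspiration):
    """Bounded rationality: examine options one at a time and stop at the
    first one that clears the aspiration level."""
    for name, utility in options.items():
        if utility >= aspiration:
            return name
    return None  # nothing is "good enough"; the search would continue elsewhere

options = {"option_a": 3, "option_b": 6, "option_c": 9}
print(maximize(options))                 # option_c: requires scoring all three
print(satisfice(options, aspiration=5))  # option_b: good enough, search stops there
```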

Some put Simon’s model under the traditional rational choice umbrella, but others strongly resist Simon’s framework being reduced to “second rate maximization.”Footnote 55, Footnote 56 Clear lines of demarcation separate comprehensive and boundedly rational models of behavior. It cannot be overemphasized: Simon rejected the notion that human decision-making can be understood by assuming that people have utility functions that create a consistent ordering of alternatives.Footnote 57 In contrast to comprehensive rationality, boundedly rational beings are not particularly good at assessing probabilities and risk; not only do they have difficulty calculating the probabilities of alternate outcomes, but their cognitive and emotional limitations also prevent them from inferring the exact nature of the problems they face and from generating responses to those problems. Even preferences are not “primitive” properties of the individual but often properties of the environment.Footnote 58 A boundedly rational being, in other words, may not even know their preferences until clued in by some environmental factor in a given situation.

These differences create an alternative model of bureaucratic behavior that reorients the study of bureaucratic decision-making. The focus shifts from internal individual utility calculations to the characteristics of the environment, from the outcomes of decision-making to the process of decision-making. Bounded rationality argues that behavior is driven in large part by response to a task environment. The goals people pursue, and the options they consider and choose to achieve those goals, are not products of “immutable first principles” but products of “time and place that can only be ascertained by empirical inquiry.”Footnote 59

The archetype emerging from Simon’s model of choice is a very different character than Adam Smith’s bureaucrat. Herbert Simon’s bureaucrat adapts to the environment but imperfectly reads and translates environmental signals. These processing limits result in choices that are not always maximally adaptive and very different from those made by economic man. For one thing, Herbert Simon’s bureaucrat may not act self-interestedly. Rather than maximizing budgets or career opportunities, shirking the expectations of principals, and distorting information, this bureaucrat will attempt to adapt to whatever problems and challenges the task environment presents. This does not rule out self-interested behavior, but such behavior is not automatically predicted by the theoretical model.

Viewing bureaucracy through the bounded rationality lens makes control more a situational than a universal imperative. If the organizational culture of the bureaucracy stresses values of public service and self-sacrifice, Herbert Simon’s bureaucrat may become emotionally attached to these values, which will become important behavioral motivations and act as restraints on bureaucratic self-interest (military and paramilitary bureaucracies such as fire and police departments might be good examples of this). A number of classic works look at bureaucracy through the prism of Herbert Simon’s bureaucrat, and they are less works of formal theoretical abstraction in the rational choice tradition and much more grounded in empirical observation.Footnote 23, Footnote 24, Footnote 30, Footnote 53

Such studies make a convincing case for the empirical reality of Herbert Simon’s bureaucrat. Bureaucrats really do seem to be driven by values, habit, rules, and the challenges presented by the day-to-day task environment. Herbert Simon’s bureaucrat, in short, has provided the core of realistic descriptive frameworks of bureaucrats and bureaucracy for a wide range of scholarship.

The central problem with Herbert Simon’s bureaucrat is that, whatever its environment-specific descriptive power, its predictive power is weak. Simon and more contemporary advocates of bounded rationality are unapologetic behavioralists.Footnote 30 They argue that decisions cannot be accurately described or explained unless the behavior of the decision makers and their environments are directly observed; making point predictions about behavior is not something bounded rationality claims to do.

This lack of predictive pretensions, however, makes frameworks based on Simon’s archetype hard to test systematically. Behavioral models of choice have little to say about what humans granted discretionary authority and inside information will do with those advantages in a general sense—there is a wide array of values, habits, heuristics, and environmental influences that can guide decision-making, but little in the way of universal behavioral predispositions. Values and heuristics come from things such as organizational culture and individual socialization. These vary from individual to individual and react differently to given task environments.

This perspective of human nature sets clear limits on the explanatory jurisdiction of the resulting models. For example, Jones calls for a research agenda that focuses on examining the difference between observed behavior and rational behavior. He argues that observed behavior (B) can be decomposed into two mutually exclusive and exhaustive categories: comprehensive rational goal attainment (G) and limited rationality (L). This leads to the fundamental equation of behavior in fixed task environments: B = G + L (see JonesFootnote 21, Footnote 30, Footnote 60). Measuring the difference between B and G thus measures how cognitive limitations prevent comprehensively rational behavior. This is an agenda with considerable potential to shed light on why observed behavior is frequently different from comprehensively rational behavior, but it provides no predictive traction on outcomes, nor does it illuminate any universal behavioral predispositions.
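
Written out, the decomposition and the residual it implies (our rearrangement of the terms above, using only the B, G, and L just defined) are:

```latex
B = G + L \qquad\Longrightarrow\qquad L = B - G
```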

Herbert Simon’s bureaucrat ultimately has no systematic answer to the question, What will humans do with power and information? Bureaucrats may use their power and information advantages for their own profit, or they may employ them in genuine pursuit of the public interest. What differentiates self-serving from public-spirited behavior is a potentially long list of exogenous environmental influences that can be observed and understood case by case but not necessarily in any real general, comprehensive sense.

Charles Darwin’s bureaucrat

Adam Smith’s and Herbert Simon’s bureaucrats unavoidably carry the advantages and disadvantages of rational choice or bounded rationality. These can be briefly summarized: rational choice provides a universal basis for deducing point predictions of behavior, but it often fails descriptively; bounded rationality excels descriptively, but it makes little pretense about making point predictions of behavior. Rational choice is based on an arguably false conception of human nature; bounded rationality rests on a realistic but limited conception of human nature that sees the environment as the primary explanation of behavior.Footnote 45, Footnote 61, Footnote 62, Footnote 63, Footnote 64, Footnote 65, Footnote 66

The failures of rational choice and bounded rationality have a common cause: neither is a fully developed theory of cognition or preference formation.Footnote 62, Footnote 67 Rational choice simply takes preferences as given; it says nothing about where they come from or why people have the preferences they do. This is an inheritance from economics, which has no theory of “tastes.”Footnote 68 Faced with behaviors contradicting those deduced from first principles, rational choice frequently beats a hasty retreat into behavioralism to cover the gaps between theoretical expectations and empirical observation. A classic example is Riker and Ordeshook’sFootnote 69 modification of the Downsian expected utility model of voting to include a social psychological variable, a fixed benefit representing the utility gained from the act of voting itself. In essence, this makes the rational choice model of voting comport with observed behavior by saying that people have lots of different reasons for voting. This is surely true, but it is also theoretically vacuous. Rational choice frameworks of bureaucratic behavior have made similar modifications, and they have been similarly criticized for accounting for an observed behavior without really explaining it.Footnote 49

Bounded rationality does little better. Simon consistently argued that behavior is more a product of external environmental influences than the internal psychological environment of humans. Bounded rationality assumes people have preferences and goals and try to be rational, but they have cognitive and emotional limitations that prevent comprehensively rational behavior. It does not explain why people try to be rational, where preferences or goals might come from except in terms of the task environment, or why humans have the cognitive and emotional limitations they do.

This has led some critics to assign bounded rationality the same central flaw as rational choice: rationality in both variants boils down to a generic claim that people have preferences which are manifested in a purposive response to environmental stimuli.Footnote 62, Footnote 70 The general lack of understanding (or even interest) about the inner workings of the human mind means all frameworks rooted in rational or behavioral notions of choice are reduced to “incoherent environmentalism.”Footnote 71 The result is models with weak predictive and/or descriptive powers that struggle even to produce a falsifiable hypothesis. Behavior is assumed to be rational, or at least intendedly rational, and as long as behavior is demonstrably not random, then these core assumptions can be left unchallenged. Models of comprehensive and bounded rationality both assume that rational attainment of preferences is the central motivation of behavior, but they show little curiosity about where these behavioral dispositions come from (outside of an external task environment) and entertain no real alternative explanation of behavior.

Such criticisms probably go too far. Scholars working within the bounded rationality framework, for example, have invested considerable effort into trying to understand how individual goals are identified and prioritized. In doing so, they have done more than simply catalog aspects of the task environment that correlate with behavior; they have adopted, applied, and refined a number of frameworks imported from cognitive psychology that are anchored in attempts to better understand the inner workings of the human mind. Cognitive biases and attentional limits, for example, clearly shape goal prioritization and decision-making and have been explicitly incorporated into the bounded rationality framework. Jones, for instance, explicitly argues that the key assumptions of organizational behavioral theory should be anchored in a cognitively and emotionally based understanding of individual-level decision-making.Footnote 21 In other words, to understand institutional-level outcomes, you first need to grasp basic facts about humans, for example, their limited attention spans, how that limitation shapes goal-driven behavior, and how that micro-level behavior manifests itself at the macro level. Similarly, OstromFootnote 42 argued that a better understanding of collective action must incorporate what we know about the empirical reality of human behavior, and that means recognizing that humans can be “better than rational.” People avoid the social dilemmas of purely rational behavior (e.g., the tragedy of the commons) because, at least under certain circumstances, they are willing to engage in trust and reciprocity—behaviors motivated by aspects of human psychology that seem innate.

Contrary to critics of rational and behavioral models, this suggests that Herbert Simon’s bureaucrat does not have quite the same black box problem as Adam Smith’s bureaucrat. Still, Herbert Simon’s bureaucrat remains a considerable distance from having a fully realized notion of how preferences are generated and remains heavily focused on the task environment. What will people do given power and an information advantage? Lacking a systematic and empirically verifiable understanding of how humans generate preferences and make choices, it is almost impossible to answer this question. There is considerable skepticism among bounded rationality proponents that such an understanding is possible: “To make predictions we would need to study the formation of the reasons people use for the decisions they make. This is equivalent to exploring preference formation and doing it inductively because there are no a priori reasons (on the part of the investigator) for assuming any particular set of reasons (on the part of the subject).”Footnote 72

Yet an increasing body of interdisciplinary research suggests that researchers do have a basis for a priori expectations about the motivations that drive particular choices.Footnote 68, Footnote 70, Footnote 73, Footnote 74, Footnote 75 Common to this literature is its evolutionary perspective. In brief, the core claim is that evolution acts on behavioral as well as physical traits. By selecting some behavioral predispositions over others, evolution will exert a powerful force in shaping the motivators for behavior. There was a burst of research based on such claims by political scientists in the first decade of the twenty-first century.Footnote 8, Footnote 62, Footnote 76, Footnote 77, Footnote 78

To the best of our knowledge, there has been little attempt to extend similar frameworks to the study of bureaucracy. Yet bureaucracy scholars may benefit enormously from this approach. There is no obvious reason not to apply evolutionary models to bureaucratic behavior, and such a path has already been at least partially laid by bounded rationality cross-fertilizing with cognitive psychology and parallel fields like behavioral economics.Footnote 79, Footnote 80, Footnote 81, Footnote 82, Footnote 83, Footnote 84, Footnote 85, Footnote 86, Footnote 87 Can a new theoretical archetype, Charles Darwin’s bureaucrat, provide a fuller understanding of what humans do given discretionary authority and information advantages?

An evolutionary approach to understanding behavioral motivation

Evolutionary models suggest that much of human behavior is driven by universal predispositions. These are products of a selection process favoring traits that conferred fitness advantages to individuals living in the dominant social environment of human history—hunter-gatherer groups.Footnote 70, Footnote 73, Footnote 75 Maximally adaptive behaviors under such circumstances have universal implications for a variety of decision-making situations, implications that can take the form of empirically testable hypotheses.Footnote 68, Footnote 75, Footnote 78, Footnote 88

This evolutionary framework can be readily adapted to explain why people sometimes act self-interestedly, and at other times do not, and also to make systematic behavioral predictions. It does so by identifying universal behavioral predispositions and explaining how humans trade off the contradictory goals that these predispositions generate. For much of human history, reproductive fitness—even basic survival—was tied to being a member of a viable group. Because of the clear adaptive advantages, humans are thus endowed with a strong, innate tendency to be group oriented.Footnote 62, Footnote 68, Footnote 70, Footnote 75, Footnote 76, Footnote 88 Yet there are also fitness advantages to be had from acting selfishly rather than putting the group first. This means individuals were likely subject to conflicting selection pressures: for selfish behavior, but also for other-regarding behavior.Footnote 62, Footnote 88 The big question in terms of constructing behavioral models, then, becomes: How do humans trade off the conflicting goals that arise from such predispositions?

Alford and Hibbing’s theory of wary cooperation provides one answer.Footnote 62 This theory rejects the notion that we have fixed, transitive preferences for all choice situations. We navigate much of what we encounter using a basic set of evolved behavioral predispositions (Alford and Hibbing provide a specific list of such predispositions) that are highly sensitive to a specific feature of our environment: other human beings. Importantly, and in distinct contrast to bounded rationality, this is a social rather than an individualistic understanding of human behavior. It views human behavior as innately centered on other people, though not necessarily concerned with their welfare. What humans have is an intense desire to keep up appearances—to be seen as good group members.Footnote 75, Footnote 89

Besides offering a clear fitness advantage from an evolutionary basis (good group members are more likely to survive and reproduce), such a base behavioral motivation also provides a simple and ready mechanism for making trade-offs between selfish and group-oriented goals. The basic behavioral rule of thumb is to take the action that looks like the “right thing to do” to others. If you can appear fair and act self-interestedly, then do so. On the other hand, if forced to choose between self-interest and appearing fair, opt for appearing fair. This fits with bounded rationality’s notion of heuristics to overcome limited processing capability, but the explanation is rooted as much in the internal as the external environment, and it is rooted there in a way that makes behavioral predictions possible. This also can fit with a rational choice understanding of human behavior in which “appear fair” is the “primitive” preference that transitively orders choice options. Yet, in contradiction to rational choice orthodoxy, the evolutionary framework of the wary cooperator explains where the preference comes from, why it is there, and what feature of the environment prioritizes choice options; the preference is not simply a “given” chosen to comport with observed behavior but a logical and empirically testable hypothesis derived from the underlying theory. Critically, this is achieved by assuming that intentionality in decision-making is inherently social rather than individualistic—a clear difference from the individualistic tradition of rational choice.
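
Stated as a decision rule, the trade-off just described can be sketched as follows. The function, the payoff values, and the "observable" flag are our own illustrative simplifications, not constructs taken from the wary cooperation literature.

```python
# Sketch of the wary-cooperator trade-off rule described above.
# The function, payoff values, and "observable" flag are our own
# illustrative simplifications, not constructs from the cited literature.

def wary_cooperator_choice(selfish_payoff, fair_payoff, observable):
    """Pursue self-interest only when it cannot be seen as self-interest;
    otherwise protect the reputation for fairness."""
    if not observable:
        # No reputational cost: take the larger material payoff.
        return "act selfishly" if selfish_payoff > fair_payoff else "act fairly"
    # The choice is visible to others: appear fair, even at a material cost.
    return "act fairly"

print(wary_cooperator_choice(10, 5, observable=False))  # act selfishly
print(wary_cooperator_choice(10, 5, observable=True))   # act fairly
```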

There already exists an extensive empirical record supporting such an argument about base human behavioral predispositions.Footnote 61, Footnote 64, Footnote 65, Footnote 66 This record, though, is typically interpreted from a rational choice perspective—that it is rational to act cooperatively, either on the expectation that the short-term loss of cooperation will yield a long-term gain or as a means to avoid punishment.Footnote 90, Footnote 91, Footnote 92 This approach allows a rational choice interpretation of widespread cooperation, but it undermines its predictive abilities. Now selfish and cooperative acts are rational, which explains all behavior, which is to say no behavior. If all behavior is rational, the already weak null in empirical tests of rational models atrophies to nothing.

The evolutionary hypothesis proposed here, however, not only predicts cooperative behavior and selfish behavior, it specifies the conditions under which one is favored over the other. Self-interest is pursued only when there is little chance of it being seen as such; humans will not routinely put self-interest above reputation. This claim is the key theoretical building block for an alternative to Adam Smith’s and Herbert Simon’s bureaucrats. In contrast to Adam Smith’s bureaucrat, Darwin’s bureaucrat is not going to be a self-interested maximizer predestined to create the dilemmas of control and accountability wrapped up in the moral hazard at the heart of principal-agent theory. Like Herbert Simon’s bureaucrat, Darwin’s bureaucrat will be sensitive to environmental context, but in a predictable way. Darwin’s bureaucrat is less a creature of the task environment than of the environment of evolutionary adaptation and the fairly consistent social (as opposed to purely individualistic) goals it hardwired into human psychology.

The core microfoundational question for bureaucratic theory is, To what ends will humans use inside information and discretionary authority? The evolutionary framework sketched above provides a basic response: use information and discretionary authority to serve self-interested ends, but only if there is no perceived reputational cost to doing so. If the choice is between conforming to group norms or serving self-interest, choose the former even if it imposes a steep individual cost. The key question for Darwin’s bureaucrat is neither “What’s in it for me?” nor “What’s good enough for me?” but “How will this look to others?” Darwin’s bureaucrat will be happy to take advantage of others and act self-interestedly but will have an innate aversion to being perceived as self-interested.

In economic games involving information asymmetry, people behave exactly like this.Footnote 61, Footnote 93, Footnote 94 Give someone (a “decider”) a set of tokens representing something of real value and ask them to divide it however they wish with a partner (a “recipient”), and they will not act purely rationally. They go for something approximating an even split. They continue to divide the tokens this way even when given “insider information,” that is, when the decider—and only the decider—is told that the tokens are worth twice as much to one player or the other. In this situation, what appears to be an even split to the recipient actually means the decider is playing someone for a sucker. That could be the recipient, if the tokens are worth twice as much to the decider. But deciders also play themselves for suckers by going for an even split when the tokens are worth twice as much to the recipient—they are effectively deciding to get half as much of the resource as the person they are playing with. This makes little sense in the framework of classical rational choice, but it looks very much like Darwin’s bureaucrat in action. The key motivational factor here seems to be not some sense of individualistic gain, but something social—a reputation for fairness. That preference for fairness over individual gain is almost certainly an endowment of our evolutionary inheritance, not just a product of a particular task environment. At a minimum, there is compelling evidence that fairness preferences are genetically influenced—that is, they are an evolved psychological motivator of human behavior.Footnote 95, Footnote 96
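
A stripped-down version of this asymmetric-value split makes the point concrete. The token count and the value multipliers below are invented for the example; the comparison simply shows the material payoff the decider gives up by insisting on a visibly even split.

```python
# Toy version of the asymmetric-value token split described above.
# The token count and value multipliers are invented for the example.

TOKENS = 10

def material_payoffs(decider_share, decider_multiplier, recipient_multiplier):
    """Value each player actually receives from a given split of the tokens."""
    recipient_share = TOKENS - decider_share
    return decider_share * decider_multiplier, recipient_share * recipient_multiplier

# Private information: only the decider knows the tokens are worth
# twice as much to the recipient.
print(material_payoffs(5, 1, 2))  # (5, 10): the "even" split nets the decider half as much
print(material_payoffs(9, 1, 2))  # (9, 2): a payoff-maximizing decider would keep almost everything

# Experimental deciders nonetheless keep choosing the visibly even split,
# consistent with prioritizing a reputation for fairness over material gain.
```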

This is all consistent with the explanatory core of Darwin’s bureaucrat: that administrative behavior will be motivated by a set of at least partially innate predispositions that are inherently social in nature. It not only explains why behavior can sometimes be self-interested as Adam Smith’s bureaucrat would predict, but also why it sometimes appears altruistic. It not only identifies variation in behavior attributable to the differences in the task environment as Herbert Simon’s bureaucrat would predict, it identifies a consistent underlying goal preference that explains that variation, and offers specific predictions. That underlying goal prioritization is explained more by its functional evolutionary purpose than the task environment per se. The explicit evolutionary and social behavioral basis of Darwin’s bureaucrat, in short, seems to provide a (relatively) clear theoretical break from both Adam Smith’s and Herbert Simon’s bureaucrats.

Discussion

It is possible to argue that Darwin’s bureaucrat is just a subcategory of Adam Smith’s bureaucrat: the “primitive” preference is to appear fair, and this provides the stable ordering of preferences that serves as a basis for self-interested behavior. It is equally possible to argue that Darwin’s bureaucrat is a subcategory of Herbert Simon’s bureaucrat, with the same preference functioning as a heuristic that simplifies decision-making into a basic satisficing framework. We would suggest, though, that such an interpretation looks through the wrong end of the theoretical lens. It is more likely that Adam Smith’s and Herbert Simon’s bureaucrats are subcategories of Darwin’s bureaucrat. At a minimum, the latter can account for behavioral explanations that arise from either (but not both) of the former.

Darwin’s bureaucrat is clearly not comprehensively rational. The brain of Darwin’s bureaucrat is modular rather than an all-purpose calculating machine; Darwin’s bureaucrat has no need to calculate expected utilities across a wide range of outcomes. Darwin’s bureaucrat is best described as other regarding rather than classically self-interested, his behavior ranging from selfish to altruistic with the variation being driven by sensitivity to other people. For Darwin’s bureaucrat the interests of others are a primary behavioral motivation—intentionality in decision-making, the very core of goal-seeking behavior, is social rather than individualistic. These represent a real separation from rational choice theory. Perhaps Adam Smith’s bureaucrat can be reformulated as an other-regarding, satisficing being whose intentionality is driven in no small measure by the interests of others. The result, however, is to make the underlying conception of human nature virtually unrecognizable as rational choice.

Darwin’s bureaucrat differs from Simon’s bureaucrat in that the internal environment is critically important; it is the understanding of that internal environment that provides the predictive power of the model. Darwin’s bureaucrat is not a blank slate shaped by the environment but comes fully equipped with a flexible set of tools to navigate and successfully interact with that environment. Those tools can be identified and used to fashion predictive as well as descriptive hypotheses. Yes, the task environment is important to Darwin’s bureaucrat, but he or she reacts in predictable ways to environmental stimuli. Herbert Simon’s bureaucrat can be refashioned into Darwin’s bureaucrat, with the evolved predispositions serving as the heuristics or “prepackaged solutions” underlying satisficing behavior. Doing this, however, provides an immediate description of the “inner environment” that bounded rationality has relegated to secondary importance since its conception by Simon 50 years ago. Opening up that black box gives Herbert Simon’s bureaucrat predictive power and shifts focus back to predicting decisional outcomes rather than tracking decisional processes—in other words, toward the very things that advocates of bounded rationality take some pains to suggest that Herbert Simon’s bureaucrat cannot do.

A final case for Darwin’s bureaucrat as a distinct perspective on human nature, and one of considerable potential value to bureaucracy scholars, is his implicit (and increasingly explicit) role in a vast range of bureaucracy scholarship. For example, the descriptive anomalies that confound predictive theories of bureaucratic behavior (such as overcooperation and “irrational” levels of effort) can be readily accommodated by Darwin’s bureaucrat.Footnote 49 Some of the discipline’s best bureaucracy scholars already acknowledge the potential for evolutionary biology and psychology to make contributions, though there is a debate over the extent of this contribution. JonesFootnote 30 recognizes that evolutionary psychology offers an alternative approach to decision-making but argues that bounded rationality is better served by rejecting a domain-centered view of the human brain. There are calls for notions of reciprocityFootnote 48, Footnote 49 and trustFootnote 97, Footnote 98 anchored in evolutionary theory to serve as alternate research agendas to traditional principal-agent theory. Darwin’s bureaucrat has a distinct and important role to play in this debate. If nothing else, Darwin’s bureaucrat clarifies what the debate is about—an argument over fundamental human nature—helps identify the core theoretical contenders with a claim to resolving that debate, and illuminates a path to systematically separating and testing those claims.

Darwin’s bureaucrat also suggests an alternative view on prescriptive as well as theoretical approaches to bureaucracy. Microeconomic perspectives orient bureaucratic reform toward getting power out of the hands of bureaucracies with “monopolies” over public goods and services, and/or toward altering the behavioral incentives embedded in the institutional environment so that a rational bureaucratic agent will take the action desired by a democratic principal.Footnote 35, Footnote 42, Footnote 43, Footnote 49 If Darwin’s bureaucrat is a reasonably accurate archetype, this immediately suggests some potential problems with these sorts of reforms. For example, decentralizing power by shifting the production of goods and services to competitive service providers is as likely to exacerbate as to solve the problem of democratic control. Private service providers still exercise discretionary authority, and they too can come to enjoy an information advantage. Yet as private or quasi-private organizations, they can conceal their information advantages more easily than public bureaucrats. Competition alone does not resolve these issues. Private companies, for example, are not subject to the same open meetings or records laws as public agencies. For Darwin’s bureaucrat, this difference is crucial: give decision makers increased insulation from the attention of others, and their decisions will shift from the altruistic end of the behavioral continuum to the selfish end. Some have argued that precisely such conditions are widespread in contracting-out arrangements.Footnote 99

In short, Darwin’s bureaucrat suggests that democracy cannot control any third-party implementer simply by addressing who makes the decisions. Information, and others’ likely perceptions of decisions, drive behavior. This suggests that any successful reform agenda will be built not on assumptions of self-interest or theoretically vacuous assertions about mental limitations, but on a recognition of how remarkably sensitive humans are to the perceptions of others.

References

Notes

1. Gulick, L., “Democracy and administration face the future,” Public Administration Review, 1977, 37(6): 706711, at p. 709.CrossRefGoogle Scholar

2. Presthus, R., The Organizational Society: An Analysis and a Theory (New York: St. Martin’s Press, 1978).Google Scholar

3. Caldwell, L. K., “Biology and bureaucracy: The coming confrontation,” Public Administration Review, 1980, 40(1): 112.CrossRefGoogle Scholar

4. Gulick, L., “Introduction,” in Biology and Bureaucracy: Public Administration and Public Policy from the Perspective of Evolutionary, Genetic, and Neurobiological Theory, White, E. and Losco, J., eds. (Lanham, MD: University Press of America, 1986), pp. xiixvi.Google Scholar

5. Meyer-Emerick, N., “Public administration and the life sciences: Revisiting biopolitics,” Administration & Society, 2007, 38(6): 689708.CrossRefGoogle Scholar

6. Bouchard, T. J. Jr., “Authoritarianism, religiousness and conservatism: Is ‘obedience to authority’ the explanation for their clustering, universality and evolution?,” in The Biological Evolution of Religious Mind and Behaviour, Voland, E. and Schiefenhövel, W., eds. (Berlin: Springer, 2009), pp. 165180.CrossRefGoogle Scholar

7. Heatherington, M. J. and Weiler, J. D., Authoritarianism and Polarization in American Politics (New York: Cambridge University Press, 2009).CrossRefGoogle Scholar

8. McDermott, R., Tingly, D., Cowden, J., Frazzetto, G., and Johnson, D., “Monoamine oxidase A gene predicts behavioral aggression following provocation,” Proceedings of the National Academy of Sciences, 2009, 106(7): 21182123.CrossRefGoogle ScholarPubMed

9. Anderson, W. D. and Summers, C., “Neuroendocrine mechanisms, stress coping strategies and social dominance: Comparative lessons about leadership potential,” Annals of the American Academy of Political and Social Science, 2007, 614: 102130.CrossRefGoogle Scholar

10. Bowles, S. and Gintis, H., “The origins of human cooperation,” in Genetic and Cultural Evolution of Cooperation, Hammerstein, P., ed. (Cambridge, MA: MIT Press, 2003), pp. 429444.Google Scholar

11. Nicholson-Crotty, S., Nicholson-Crotty, J., and Webeck, S., “Are public managers more risk averse? Framing effects and status quo bias across the sectors,” Journal of Behavioral Public Administration, 2019, 2(1), https://doi.org/10.30636/jbpa.21.35.CrossRefGoogle Scholar

12. Christiansen, R. and Wright, B., “Public service motivation and ethical behavior: Evidence from three experiments,” Journal of Behavioral Public Administration, 2018, 1(1), https://doi.org/10.30636/jbpa.11.18.Google Scholar

13. Meier, K. J., Politics and the Bureaucracy: Policymaking in the Fourth Branch of Government (Pacific Grove, CA: Brooks/Cole, 1993).Google Scholar

14. Frederickson, H. G. and Smith, K. B., The Public Administration Theory Primer (Boulder, CO: Westview Press, 2003).Google Scholar

15. Tullock, G., The Politics of Bureaucracy (Washington, DC: PublicAffairs Press, 1965).Google Scholar

16. Mosher, F. C., Democracy and the Public Service (New York: Oxford University Press, 1982), p. 7.Google Scholar

17. Brehm, J. and Gates, S., Working, Shirking, and Sabotage: Bureaucratic Response to a Democratic Public (Ann Arbor: University of Michigan Press, 1997).CrossRefGoogle Scholar

18. Carpenter, D., “Adaptive signal processing, hierarchy, and budgetary control in federal regulation,” American Political Science Review, 1996, 90(2): 283302.CrossRefGoogle Scholar

19. Balla, S. J., “Administrative procedures and political control of the bureaucracy,” American Political Science Review, 1998, 92(3): 663667.CrossRefGoogle Scholar

20. Lichbach, M. I., Is Rational Choice Theory All of Social Science? (Ann Arbor: University of Michigan Press, 2003).CrossRefGoogle Scholar

21. Jones, B. D., “Bounded rationality and political science: Lessons from public administration and public policy,” Journal of Public Administration Research and Theory, 2003, 13(4): 395-412.CrossRefGoogle Scholar

22. Waldo, D., The Administrative State (New York: Ronald Press Company, 1948).Google Scholar

23. Lipsky, M., Street-Level Bureaucracy: Dilemmas of the Individual in Public Service (New York: Russell Sage Foundation, 1980).Google Scholar

24. Wilson, J. Q., Bureaucracy: What Government Agencies Do and Why They Do It (New York: Basic Books, 1989).Google Scholar

25. Bendor, J., Taylor, S., and Van Gaalen, R., “Bureaucratic expertise v. legislative authority,” American Political Science Review, 1985, 79(4): 10411060.CrossRefGoogle Scholar

26. Bendor, J., Taylor, S., and Van Gaalen, R., “Politicians, bureaucrats, and asymmetric information,” American Journal of Political Science, 1987, 31(4): 796828.CrossRefGoogle Scholar

27. Downs, A., An Economic Theory of Democracy (New York: Harper, 1957).Google Scholar

28. Olson, M., The Logic of Collective Action (Cambridge, MA: Harvard University Press, 1965).Google Scholar

29. Buchanan, J. and Tullock, G., The Calculus of Consent (Ann Arbor: University of Michigan Press, 1962).CrossRefGoogle Scholar

30. Jones, B. D., Politics and the Architecture of Choice: Bounded Rationality and Governance (Chicago: University of Chicago Press, 2001).Google Scholar

31. Simon, H. A., “A behavioral model of rational choice,” Quarterly Journal of Economics, 1955, 69(1): 99118.CrossRefGoogle Scholar

32. Buchanan and Tullock, p. 18.Google Scholar

33. Lichbach, p. 38.Google Scholar

34. Downs, A., Inside Bureaucracy (Boston: Little, Brown, 1967).CrossRefGoogle Scholar

35. Niskanen, W. A. Jr., Bureaucracy and Representative Government (Chicago: Aldine, Atherton, 1971).Google Scholar

36. Downs, 1967, p. 77.Google Scholar

37. Niskanen, 1971, p. 39.Google Scholar

38. McCubbins, M., Noll, R., and Weingast, B., “Administrative procedures as instrument of political control,” Law, Economics, and Organization, 1987, 3(2): 243277.Google Scholar

39. Carpenter, D., “Adaptive signal processing, hierarchy, and budgetary control in federal regulation,” American Political Science Review, 1996, 90(2): 283302.CrossRefGoogle Scholar

40. Balla, S. J. and Wright, J. R., “Interest groups, advisory committees, and congressional oversight,” American Journal of Political Science, 2001, 45(4): 799812.CrossRefGoogle Scholar

41. Tiebout, C. M., “A pure theory of local expenditures,” Journal of Political Economy, 1956, 64(5): 416424.CrossRefGoogle Scholar

42. Ostrom, V., The Intellectual Crisis in Public Administration (Tuscaloosa: University of Alabama Press, 1973).Google Scholar

43. Jensen, U. T. and Vestergaard, C. F., “Public service motivation and public service behaviors: Testing the moderating effects of tenure,” Journal of Public Administration Research and Theory, 2017, 27(1): 5267.CrossRefGoogle Scholar

44. Green, D. P. and Shapiro, I., Pathologies of Rational Choice Theory (New Haven, CT: Yale University Press, 1994).Google Scholar

45. Blais, A. and Dion, S., “Are bureaucrats budget maximizers?,” in The Budget-Maximizing Bureaucrat: Appraisals and Evidence, Blais, A. and Dion, S., eds. (Pittsburgh, PA: University of Pittsburgh Press), pp. 355361.Google Scholar

46. Campbell, C. and Nauls, D., “The limits of the budget-maximizing theory: Some evidence from official’s views of their roles and careers,” in The Budget-Maximizing Bureaucrat: Appraisals and Evidence, Blais, A. and Dion, S., eds. (Pittsburgh, PA: University of Pittsburgh Press), pp. 85118.Google Scholar

47. Meier, p. 228.Google Scholar

48. Miller, G. J. and Whitford, A. B., “Trust and incentives in principal-agent negotiations,” Journal of Theoretical Politics, 2002, 14(2): 231267.CrossRefGoogle Scholar

49. Miller, G. J., “The political evolution of principal-agent models,” Annual Review of Political Science, 2005, 8: 203225.CrossRefGoogle Scholar

50. Waterman, R. W. and Meier, K. J., “Principal-agent models: An expansion?,” Journal of Public Administration Research and Theory, 1998, 8(2): 173202.CrossRefGoogle Scholar

51. Miller, p. 217.Google Scholar

52. Simon, H. A., Administrative Behavior (New York: Free Press, 1947).Google Scholar

53. Simon, H. A., “The potlatch between political science and economics,” in Competition and Cooperation: Conversations with Nobelists about Economics and Political Science, Alt, J., Levi, M., and Ostrom, E., eds. (New York: Russell Sage Foundation, 1999), pp. 112–119.

54. Simon, H. A., “Nobel Laureate Simon ‘looks back’: A low-frequency mode,” Public Administration Quarterly, 1988, 12(3): 275–300, at p. 286.

55. Lupia, A., McCubbins, M. D., and Popkin, S., “Beyond rationality: Reason and the study of politics,” in Elements of Reason: Cognition, Choice, and the Bounds of Rationality, Lupia, A., McCubbins, M. D., and Popkin, S., eds. (Cambridge: Cambridge University Press, 2000), pp. 1–20.

56. Jones, 2003, p. 399.

57. Simon, H. A., “Human nature in politics: The dialogue of psychology with political science,” American Political Science Review, 1985, 79(2): 293–304, at p. 296.

58. Simon, 1955, pp. 100–101.

59. Simon, 1988, p. 301.

60. Jones, B. D., “Bounded rationality,” Annual Review of Political Science, 1999, 2: 297–321.

61. Güth, W. and Tietz, R., “Ultimatum bargaining behavior: A survey and comparison of experimental results,” Journal of Economic Psychology, 1990, 11(3): 417–449.

62. Alford, J. R. and Hibbing, J. R., “The origin of politics: An evolutionary theory of political behavior,” Perspectives on Politics, 2004, 2(4): 707–723.

63. Tversky, A. and Thaler, R. H., “Anomalies: Preference reversals,” Journal of Economic Perspectives, 1990, 4(2): 201–211.

64. Nowak, M. A., Page, K. M., and Sigmund, K., “Fairness versus reason in the ultimatum game,” Science, 2000, 289: 1773–1775.

65. Kahn, L. M. and Murnighan, J. K., “A general experiment on bargaining in demand games with outside options,” American Economic Review, 1993, 83(5): 1260–1280.

66. Fehr, E. and Gächter, S., “Cooperation and punishment in public goods experiments,” American Economic Review, 2000, 90(4): 980–994.

67. Lichbach, p. 38.

68. Rubin, P. H., Darwinian Politics: The Evolutionary Origins of Freedom (New Brunswick, NJ: Rutgers University Press, 2002), p. 16.

69. Riker, W. H. and Ordeshook, P. C., “A theory of the calculus of voting,” American Political Science Review, 1968, 62(1): 25–44.

70. Tooby, J. and Cosmides, L., “The psychological foundations of culture,” in The Adapted Mind: Evolutionary Psychology and the Generation of Culture, Barkow, J., Cosmides, L., and Tooby, J., eds. (New York: Oxford University Press, 1992), pp. 19–136.

71. Tooby and Cosmides, p. 37.

72. Jones, 2003, p. 402.

73. Somit, A. and Peterson, S. A., Darwinism, Dominance, and Democracy (Westport, CT: Praeger, 1997).

74. Masters, R., Beyond Relativism: Science and Human Values (Hanover, NH: University Press of New England, 1993).

75. Massey, D. S., “A brief history of human society: The origin and role of emotion in social life,” American Sociological Review, 2002, 67(1): 1–29.

76. Hibbing, J. R. and Alford, J. R., “Accepting authoritative decisions: Humans as wary cooperators,” American Journal of Political Science, 2004, 48(1): 62–76.

77. Alford, J. R., Funk, C. L., and Hibbing, J. R., “Are political orientations genetically transmitted?,” American Political Science Review, 2005, 99(2): 153–167.

78. Orbell, J., Morikawa, T., Hartwig, J., Hanley, J., and Allen, N., “Machiavellian intelligence as a basis for the evolution of cooperative dispositions,” American Political Science Review, 2004, 98(1): 1–15.

79. Cairney, P. and Weible, C., “The new policy sciences: Combining the cognitive science of choice, multiple theories of context, and basic and applied analysis,” Policy Sciences, 2017, 50(4): 619–627.

80. John, P., “Theories of policy change and variation reconsidered: A prospectus for the political economy of public policy,” Policy Sciences, 2018, 51(1): 1–16.

81. Jolls, C., Sunstein, C. R., and Thaler, R. H., “A behavioral approach to law and economics,” Stanford Law Review, 1998, 50(5): 1471–1550.

82. Jones, B., “Behavioral rationality as a foundation for public policy studies,” Cognitive Systems Research, 2017, 43: 63–75.

83. Jones, B. D. and Thomas, H. F., “The cognitive underpinnings of policy process studies: Introduction to a special issue of Cognitive Systems Research,” Cognitive Systems Research, 2017, 45: 48–51.

84. Kahneman, D., Lovallo, D., and Sibony, O., “A structured approach to strategic decisions,” MIT Sloan Management Review, 2019, 60(3): 67–73.

85. Kahneman, D. and Tversky, A., “Prospect theory: An analysis of decision under risk,” Econometrica, 1979, 47(2): 263–292.

86. Kasdan, D. O., “Toward a theory of behavioral public administration,” International Review of Administrative Sciences, published online December 31, 2018, https://doi.org/10.1177/0020852318801506.

87. Thaler, R. H., Misbehaving: The Making of Behavioral Economics (New York: W. W. Norton, 2015).

88. Hatemi, P. K., Smith, K., Alford, J. R., Martin, N. G., and Hibbing, J. R., “The genetic and environmental foundations of political, psychological, social, and economic behaviors: A panel study of twins and families,” Twin Research and Human Genetics, 2015, 18(3): 243–255.

89. Elster, J., “Rationality and the emotions,” Economic Journal, 1996, 106(438): 1386–1397.

90. Axelrod, R., The Evolution of Cooperation (New York: Basic Books, 1984).

91. Kagel, J. H., Kim, C., and Moser, D., “Fairness in ultimatum games with asymmetric information and asymmetric payoffs,” Games and Economic Behavior, 1996, 13(1): 100–110.

92. Ridley, M., The Origins of Virtue (New York: Penguin, 1996).

93. Smith, K., “Representational altruism: The wary cooperator as authoritative decision maker,” American Journal of Political Science, 2006, 50(4): 1013–1022.

94. Güth, W., Huck, S., and Ockenfels, P., “Two-level ultimatum bargaining with incomplete information: An experimental study,” Economic Journal, 1996, 106(436): 593–604.

95. Wallace, B., Cesarini, D., Lichtenstein, P., and Johannesson, M., “Heritability of ultimatum game responder behavior,” Proceedings of the National Academy of Sciences, 2007, 104: 15631–15634.

96. Cesarini, D., Dawes, C. T., Johannesson, M., Lichtenstein, P., and Wallace, B., “Genetic variation in preferences for giving and risk taking,” Quarterly Journal of Economics, 2009, 124(2): 809–842.

97. Koeszegi, S. T., “Take the risk and trust? The strategic role of trust in negotiations,” in Negotiated Risks: International Talks on Hazardous Issues, Sjöstedt, G. and Avenhaus, R., eds. (Berlin: Springer, 2009), pp. 65–85.

98. Mislin, A., Williams, L. V., and Shaughnessy, B. A., “Motivating trust: Can mood and incentives increase interpersonal trust?,” Journal of Behavioral and Experimental Economics, 2015, 58: 11–19.

99. Sclar, E. D., You Don’t Always Get What You Pay For: The Economics of Privatization (Ithaca, NY: Cornell University Press, 2001).