Scholars of early Christian literature acknowledge that oral traditions lie behind the New Testament gospels. While the concept of orality is widely accepted, it has not resulted in a corresponding effort to understand the reception of the gospels within their oral milieu. In this book, Kelly Iverson reconsiders the experiential context in which early Christian literature was received and interpreted. He argues that reading and performance are distinguishable media events, and, significantly, that they produce distinctive interpretive experiences for readers and audiences alike. Iverson marshals an array of methodological perspectives demonstrating how performance generates a unique experiential context that shapes and informs the interpretive process. Iverson's study explores the dynamic oral environment in which ancient audiences experienced the gospel stories. He shows why an understanding of oral performance has important implications for the study of the NT, as well as for several issues that are largely unquestioned by biblical scholars.
Like meaning, interpretation is a slippery notion. This chapter discusses three different accounts of interpretation. First, I discuss the allegoresis account, according to which some texts have a deeper meaning hidden below the text's surface meaning. I then explore what is required if an allegorical interpretation is to be justified, and I argue that two kinds of interpretative knowledge must be distinguished: (1) knowing through interpretation that what a text (or its author) says is p, and (2) knowing through interpretation that what the text (or its author) says, viz. p, is true. Second, the traditional account of interpretation is discussed, according to which interpretation consists in clearing up textual obscurities – this is called the difficulty account of interpretation. A number of possible obscurities are identified, and I show what is required to clear them up in a justified way. Finally, the modernist view of interpretation is discussed, according to which reading inevitably involves interpretation. The most natural development of this view is that all reading involves disambiguation, and that to disambiguate is to interpret. Here, too, I discuss what is required for justified disambiguations.
The introduction explains the aims of the book, its timeliness, and its relevance, and it specifies a number of commitments that I work with: a true-belief view of knowledge, a realist conception of truth, justification as truth aiming, and the notion that writing is acting. It is explained that the book’s scope is wide in that it treats the epistemology of reading texts across literary genres, while it is at the same time exclusively focused on the interpretation of texts.
The objects of reading (words, sentences, texts) are objects that have meaning. Meaning is a slippery notion, but we cannot do without it. This chapter distinguishes a number of different notions of meaning: word meaning, sentence meaning, author's meaning, indicative meaning, effect meaning, and value meaning. First, I suggest that knowledge of (these kinds of) meaning cannot be obtained through the natural sciences. Next, a general account of interpretation is offered according to which a statement is an interpretation of a text (or a part thereof) provided it is an attempt to specify the meaning(s) of the text (or its parts). It is furthermore argued that interpretative statements can be true and justified and, hence, that there are interpretative facts of the matter.
Reading and textual interpretation are ordinary human activities, performed inside as well as outside academia, but precisely how they function as unique sources of knowledge is not well understood. In this book, René van Woudenberg explores the nature of reading and how it is distinct from perception and (attending to) testimony, which are two widely acknowledged knowledge sources. After distinguishing seven accounts of interpretation, van Woudenberg discusses the question of whether all reading inevitably involves interpretation, and shows that although reading and interpretation often go together, they are distinct activities. He goes on to argue that both reading and interpretation can be paths to realistically conceived truth, and explains the conditions under which we are justified in believing that they do indeed lead us to the truth. Along the way, he offers clear and novel analyses of reading, meaning, interpretation, and interpretative knowledge.
The first brief essay argues that although the body of Richard Wright's works, his legacy, retains a special relevance in multiple discourses about human and civil rights, systemic racism, and transnational efforts to address global social injustice, it is dangerous, both in traditional scholarship and in non-academic commentary, to incarcerate his works within a single ideology or within the framework of current ideologies that give special credibility to #BlackLivesMatter. Indeed, such a framing of legacies may lead critics to be complicit in producing confusion and error which serve to undermine the primal strength of Wright's legacy: the exceptional value of asking questions that provoke pragmatic responses rather than seemingly definitive answers. We ought not abandon the dialectics of the concrete and the dialogic of the imaginary as we argue for the value of Wright's legacy in temporal evolutions. The essay draws special attention to the vexed efforts of Christopher J. Lebron to explain the origins of #BlackLivesMatter. Dealing adequately with Wright's legacy involves a willingness to engage the problematic of what Roseann Liu and Savannah Shange propose is thick solidarity, the willingness to respect the layering of "interpersonal empathy with historical analysis, political acumen, and a willingness to be led by those most directly impacted" as the subjects and the objects of history. The second essay tentatively concludes that we should continue to examine and re-examine Wright's legacy inside of and outside of #BlackLivesMatter. Richard Wright’s class-conscious writings of the 1930s–1940s make use of Marxism to help us think through the relationship between the individual and the social totality; race and class; “black lives” and “all lives.” In what sense is Bigger Thomas a “native son”? How do the hands in “I Have Seen Black Hands” relate to the laboring hands of the rest of the working class?
Wright’s revolutionary communist standpoint, mediating between the particular and the general, what is and what can be, requires us to think dialectically past the limitations of a reformist politics of redistribution and inclusion.
An ethics of reading holds that reading is itself an act of ethical significance. When reading is responsible to the meaning of a text, it shows the author a respect the author deserves. Developing this ethics of reading places some of the key questions of philosophical hermeneutics – that is, of the theory of interpretation – in a new and illuminating setting. Just as importantly, it shows how reading in its ethical dimension gives powerful expression to the essence of ethical thinking itself.
In the decades before the Civil War, Americans appealed to the nation's sacred religious and legal texts - the Bible and the Constitution - to address the slavery crisis. The ensuing political debates over slavery deepened interpreters' emphasis on historical readings of the sacred texts, and in turn, these readings began to highlight the unbridgeable historical distances that separated nineteenth-century Americans from biblical and founding pasts. While many Americans continued to adhere to a belief in the Bible's timeless teachings and the Constitution's enduring principles, some antislavery readers, including Theodore Parker, Frederick Douglass, and Abraham Lincoln, used historical distance to reinterpret and use the sacred texts as antislavery documents. By using the debate over American slavery as a case study, Jordan T. Watkins traces the development of American historical consciousness in antebellum America, showing how a growing emphasis on historical readings of the Bible and the Constitution gave rise to a sense of historical distance.
In this book, Charles Larmore develops an account of morality, freedom, and reason that rejects the naturalistic metaphysics shaping much of modern thought. Reason, Larmore argues, is responsiveness to reasons, and reasons themselves are essentially normative in character, consisting in the way that physical and psychological facts - facts about the world of nature - count in favor of possibilities of thought and action that we can take up. Moral judgments are true or false in virtue of the moral reasons there are. We need therefore a more comprehensive metaphysics that recognizes a normative dimension to reality as well. Though taking its point of departure in the analysis of moral judgment, this book branches widely into related topics such as freedom and the causal order of the world, textual interpretation, the nature of the self, self-knowledge, and the concept of duties to ourselves.
This chapter analyses the two pillars of the Unified Approach and the Global Anti-Base Erosion Proposals in the light of alternative policy choices that were available to the OECD. These major alternatives include destination-based cash-flow taxation, residual profit allocation by income, formulary apportionment, and expanding the concept of permanent establishment. In each case the policy is explained and its advantages and disadvantages are discussed. Each policy is then analysed to see what it has contributed to the 2020s compromise and what further contribution it might make to international tax reform in the future. What emerges from this analysis is that key elements of the reform owe much to the destination basis of taxation present in the various alternative reform options and selectively adopted in particular by the Unified Approach in Pillar One.
In this chapter, the method of ‘frame-determination’ for IIA expropriation clauses is applied and three limits of the actus reus condition of typical IIA expropriation clauses are identified. (1) On the macro-structural level, concerning the interaction of IIA clauses with the rest of international law, facile references to customary international law are shown to be problematic: the term used in IIAs does not refer to a customary norm of certain validity and great specificity. (2) On the micro-structural level, the necessity of treating direct and indirect expropriation as fully equivalent is structurally inherent in typical IIAs. (3) All legality conditions are equal and cannot be doubled in the actus reus of indirect expropriation. The structure of typical IIA clauses does not support the majority of arguments based on ‘police powers’ or on a ‘right to regulate’.
This claim implies that the moral underpinnings of property do not eliminate the constitutive role of law or of a law-like set of social norms. By instantiating property rights, law goes beyond protecting people from actual or potential threats that others may pose to their bodily integrity. Property law proactively empowers people, expanding their ability to act and interact in the world.
The jurisprudential tradition that created the original methods that were in effect at the time of the Constitution provides the foundation for an interpretive approach for applying the Constitution’s fixed text to changing circumstances. Across the centuries, even commentators with strong preferences for following the lawmaker’s original meaning have recognized that there are legitimate times for judges to adapt an old law to fit new circumstances. In light of that history, this chapter describes a principled approach to adapting laws to changing circumstances that has its foundation in Edward Coke and William Blackstone, and was developed over the centuries in the UK courts.
Paralleling the summing problem associated with identifying a single intention of a multimember lawmaking body, the semantic summing problem appears when there are competing potential meanings for constitutional words or phrases. This chapter addresses the question of whether the new digital tools used in corpus linguistics searches have the potential to offer a “Big Data” solution to the problem. By examining the nature of the digital collections being searched, as well as the data analysis tools being employed, this chapter shows that corpus linguistics will not solve the semantic summing problem, and may well exacerbate it.
Previous chapters have focused primarily on factual issues bearing on theories of constitutional interpretation. This chapter turns toward perceptions as it explores how both elite and popular opinion influence the justices’ perspectives on interpretive issues. These perception issues relate generally to the Court’s need for what Richard Fallon has called “sociological legitimacy,” along with the individual justices’ views of their “fidelity to role,” as described by Lawrence Lessig. The specific issues addressed are aspects of what is sometimes considered “conventional wisdom,” and they turn out not to be true. The first is the notion that any interpretive approach based on the Framers’ understandings is so far out of step with contemporary thinking in the international community of judges and scholars that it represents little more than a peculiarly American form of “ancestor worship”; the second is the belief that calling on the Framers’ understandings is principally a tool for advancing conservative social and political views.
Nearly all of our current debates over constitutional interpretation have happened before, including those involving complex insights from linguistics, philosophy, and history that feel very modern to us. This book, while not intended to be a complete account of judicial decision making, has focused on what it has meant to interpret a legally authoritative text for many generations, and has shown how that traditional definition of interpretation maps onto the creation and interpretation of the US Constitution. It argues that constitutional theory needs to pay considerably more attention to the one constant theme through the various cycles of interpretive methods over the centuries: a search for the will of the lawmaker.
This paper considers the history and nature of the ‘modern rule of releases’, concerning compromises made to settle or preclude litigation. The rule holds that only matters the parties contemplated and intended to release will in fact be released, even if the compromise has been drafted in the most general terms. The rule is thus engaged when the releasor executes a general release but does not appreciate the existence of some of the claims that the words used purport to release. This paper shows how the rule rests on a confusion of different conceptual bases and lines of authority, created by accidentally muddling them together. It argues that, despite this, the rule successfully straddles both bases, functions well conceptually, and serves a vital role.
To determine the “will of the legislator,” William Blackstone pointed to “signs” of those intentions, the first of which is the words understood in their usual sense. This chapter will show the degree to which the words, even in context, have the potential to leave many important constitutional issues unresolved, hence the need for other evidence of the will of the lawmaker. In particular, this chapter will show that the “summing problem,” which has most often been associated with the difficulty of determining a single intention of the Framers, is matched by its semantic equivalent: the fact that the evidence of objective public meaning can lead to multiple potential meanings. To describe the problem, the chapter analyzes two clauses that have generated a great deal of litigation and interpretive controversy – the tax clauses and the Establishment Clause. In each case, there are multiple equally strong candidates for the objective public meanings of the words.
Identifying the will of the lawmaker has long been the central interpretive inquiry in American jurisprudence, an approach this nation inherited from a very lengthy set of legal predecessors. A great deal of commentary throughout Western legal history has been devoted to the questions of what constitutes the will of the lawmaker, and where interpreters should find evidence of that will, but there has been impressive agreement on the question of whether interpreters should do so. This chapter will address both what and where, but, first, there is a question that is peculiar to the American constitutional setting: who is the lawmaker? This chapter argues that the primary lawmaker is the Framers, and, only secondarily, the ratifiers. Based on work by Richard Ekins and others, it shows that there actually can be an intention of the constitutional lawmakers that is recoverable by interpreters. It also shows that the records of the constitutional debates and drafting can potentially provide essential information for interpreters seeking to determine what policy choice was made by the adoption of the constitutional language – that is, the ends and means represented by the text.