Ecosystem modeling, a pillar of the systems ecology paradigm (SEP), addresses questions such as: How much carbon and nitrogen are cycled within ecological sites, landscapes, or indeed the earth system? How are human activities modifying these flows? Models, when coupled with field and laboratory studies, represent the essence of the SEP in that they embody accumulated knowledge and generate hypotheses to test understanding of ecosystem processes and behavior. Initially, ecosystem models were used primarily to improve our understanding of how the biophysical aspects of ecosystems operate. Current ecosystem models, however, are widely used to make predictions about how large-scale phenomena such as climate change and management practices affect ecosystem dynamics, and to assess the potential effects of these changes on economic activity and policy making. In sum, ecosystem models embedded in the SEP remain our best mechanism for integrating diverse types of knowledge about how the earth system functions and for making quantitative predictions that can be confronted with observations of reality. The modeling efforts discussed include the Century ecosystem model, the DayCent ecosystem model, the Grassland Ecosystem Model (ELM), food web models, the Savanna model, agent-based and coupled systems modeling, and Bayesian modeling.
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
The rocky shores of the north-east Atlantic have long been studied. Our focus is the coastline from Gibraltar to Norway, plus the Azores and Iceland. Phylogeographic processes shape biogeographic patterns of biodiversity. Long-term and broadscale studies have shown the responses of biota to past climate fluctuations and to more recent anthropogenic climate change. Inter- and intra-specific species interactions along sharp local environmental gradients shape distributions and community structure, and hence ecosystem functioning. Shifts from domination by fucoids in shelter to barnacles/mussels in exposure are mediated by grazing by patellid limpets. Further south, fucoids become increasingly rare, with species disappearing or restricted to estuarine refuges, a pattern caused by greater desiccation and grazing pressure. Mesoscale processes influence bottom-up nutrient forcing and larval supply, thereby affecting species abundance and distribution, and can be proximate factors setting range edges (e.g., the English Channel, the Iberian Peninsula). Impacts of invasive non-native species are reviewed. Knowledge gaps, such as work on rockpools and host–parasite dynamics, are also outlined.
Larry Alexander's basic worry about adding thresholds to deontology is that to do so is hopelessly ad hoc and arbitrary – these are alien additions made by a theory desperate to avoid otherwise devastating counterexamples, but additions having no resonance with the theory that they are saving. Whether this is so depends on how one conceives of a more basic issue for deontology, namely, how its obligations win out over consequentialist considerations in more everyday situations, situations that are below whatever threshold one posits to exist. The guiding thesis is that if one rightly conceives of how deontology wins out over consequentialism below the threshold, one will have less difficulty in smoothly conceptualizing how deontology loses out to consequentialism in situations above the threshold. In each of these two scenarios, the most stringent obligation prevails, where "stringency" needs to be cashed out (both for deontological and for consequentialist obligations) in non-question-begging terms.
Despite extensive research on organizational virtue, our understanding of the factors that promote virtue within organizations remains limited. Drawing on upper echelon theory, we examine the relationship between five top management team (TMT) characteristics and organizational virtue orientation (OVO)—the integrated set of values and beliefs that support the ethical traits and virtuous behaviors of an organization. Specifically, we utilize prospectuses of initial public offering (IPO) firms and post-IPO 10-K filings to explore how TMT composition with respect to member age, tenure, education, functional background, and gender influences OVO. Additionally, we examine the moderating effects of organizational size, arguing that the more expansive structures and processes associated with larger organizations diminish the main relationships. Our findings, using two sources of data, are consistent but somewhat mixed in their support for our hypotheses. Overall, TMT characteristics do appear to influence OVO, but in more complex and counterintuitive ways than initially expected.
Animal models of early postnatal mother–infant interactions have highlighted the importance of tactile contact for biobehavioral outcomes via the modification of DNA methylation (DNAm). The role of normative variation in contact in early human development has yet to be explored. In an effort to translate the animal work on tactile contact to humans, we applied a naturalistic daily diary strategy to assess the link between maternal contact with infants and epigenetic signatures in children 4–5 years later, with respect to multiple levels of child-level factors, including genetic variation and infant distress. We first investigated DNAm at four candidate genes: the glucocorticoid receptor gene, nuclear receptor subfamily 3, group C, member 1 (NR3C1), μ-opioid receptor M1 (OPRM1) and oxytocin receptor (OXTR; related to the neurobiology of social bonds), and brain-derived neurotrophic factor (BDNF; involved in postnatal plasticity). Although no candidate gene DNAm sites significantly associated with early postnatal contact, when we next examined DNAm across the genome, differentially methylated regions were identified between high and low contact groups. Using a different application of epigenomic information, we also quantified epigenetic age, and report that for infants who received low contact from caregivers, greater infant distress was associated with younger epigenetic age. These results suggested that early postnatal contact has lasting associations with child biology.
In the southeastern United States, growers often double-crop soft red winter wheat with peanut. In some areas, tobacco is also grown as a rotational crop. Pyrasulfotole is a residual POST-applied herbicide used in winter wheat, but information about its effects on rotational crops is limited. Winter wheat planted in autumn 2014 was treated at Feekes stage 1 or 2 with pyrasulfotole at 300 or 600 g ai ha⁻¹. Wheat was terminated with glyphosate at Feekes stage 3 to 4. Peanut was planted via strip tillage, while tobacco was transplanted into prepared beds after minimal soil disturbance. Peanut exhibited no differences in stand establishment, growth, or yield, and tobacco stand, growth, and biomass yields did not differ from the nontreated control at any pyrasulfotole rate or treatment timing.
This essay undertakes two tasks: first, to describe the differing mens rea requirements for accomplice liability of both Anglo-American common law and the American Law Institute's Model Penal Code; and second, to recommend how the mens rea requirements of both of these two sources of criminal law in America should be amended so as to satisfy the goals of clarity and consistency and so as to more closely conform the criminal law to the requirements of moral blameworthiness. Three "pure models" of the mens rea requirements for complicity are distinguished, based on the three theories of liability conventionally distinguished in the general part of Anglo-American criminal law. One of these, the vicarious responsibility model, is put aside initially because of both its descriptive inaccuracy and its normative undesirability. The analysis proceeds using the other two models: that of the mens rea requirements for principal liability for completed crimes, and that of the mens rea requirements for attempt liability. Both the common law and the Model Penal Code are seen as complicated admixtures of these two models, the common law being too narrow in the scope of its threatened liability and the Model Penal Code being both too broad and too opaque in its demands for accomplice liability. The normative recommendation of the paper is to adopt the model for the mens rea of complicity that treats it as a form of principal liability, recognizing that the overbreadth of liability resulting from adoption of that model would have to be redressed by adopting a "shopkeeper's privilege" as an affirmative defense separate from any mens rea requirement.
The article uses the recent U.S. Supreme Court decision in the same-sex marriage case Obergefell v. Hodges as the springboard for a general enquiry into the nature and existence of a constitutional right to liberty under the American Constitution. The discussion is divided into two main parts. The first examines the meaning and the justifiability of there being a moral right to liberty as a matter of political philosophy. Two such rights are distinguished and defended: first, a right not to be coerced by the state when the state is motivated by improper reasons (prominent among which are paternalistic reasons); and second, a right not to be coerced by the state when there are insufficient justifying reasons for the state to do so, irrespective of how such state coercion may be motivated. Neither right is regarded as “absolute,” and so it is morally permissible for the state to override such rights in certain circumstances. The second part of the article examines the distinct and additional considerations that must be taken into account when these two moral rights to liberty are fashioned into corresponding legal rights under American constitutional law. Both such rights survive the transformation, but each becomes altered somewhat in its content. This legal transformation includes recognition of the nonabsolute nature of moral rights, such recognition taking the form of some doctrine of “compelling state interests.” The discussion in these two main parts of the article is prefaced with a defense of the article's use of political philosophy to inform constitutional law, a defense motivated by Chief Justice Roberts's denunciation of such an approach to constitutional law in his opinion in Obergefell.
Subjective Selves, Moral Agents, and Legal Subjects
Both law and the moral/political philosophy on which it is built presuppose certain views in psychology. These are fundamental views about who we are as persons, as moral agents, and as legal subjects. Much of our political philosophy and many of our legal institutions depend on these views being true of us; indeed, much that we value in ourselves seems indefensible without these views being true. Yet the rise of cognitive science in general, and neuroscience in particular, is commonly taken to undermine these views. We thus need to assess whether this is true, either now, given the present state of neuroscience, or in the future, given what foreseeably may be developed by that science. The aim of this paper is to lay the groundwork for such an assessment by isolating as clearly as possible both what in our legal/political institutions is challenged by neuroscience, and what in neuroscience is doing the challenging. In particular, I shall seek to clarify the different challenges that arise from work in neuroscience, for only when such challenges are distinguished from one another can one begin to assess whether they succeed.
I shall begin by spelling out more completely the legal, moral, and psychological suppositions about persons that seem to be challenged by recent advances in the brain sciences. Then in the next section I shall lay out the challenges to this view presented by current neuroscience.
Yaffe's handling of two general questions is assessed in this review. The first question is why mere attempts (as opposed to successful wrongdoing) should be made punishable in a well-conceived criminal code. The second question is how attempt liability should be conceived in such a code. As to the first question, Yaffe's nonsubstantive mode of answering it (in terms of his “transfer principle”) is contrasted with answers based on more substantive desert-bases; Yaffe's own more substantive kind of answer (in terms of a desert-base of “faulty modes of reason recognition and response”) is examined in light of the implication that the traditional requirements of trying and intending are mere proxies for faulty reason recognition and response as the basis for blaming and punishing attempts. As to the second question, Yaffe's analysis of trying in terms of a “guiding commitment view” is examined in some detail. Canvassed here are the subquestions of: (1) whether appropriate assessability by certain norms of rationality can give the nature of intending and thus of trying; (2) whether the externally de dicto intent requirements of criminal law statutes can easily be interpreted and applied by standard, extensional methods; (3) whether the distinctions between various kinds of elements within the content of intentions can be justified ontologically or only practically; (4) whether actors necessarily intend results of their actions that are “very close” to other results that clearly are intended, and if not, whether such actors are nonetheless just as blameworthy for those results as if they did intend them; and (5) whether intention with respect to circumstance-elements differs (in either its psychology or its moral effect) from belief with respect to such circumstance-elements.