Objective:
Over-consumption of added sugar is a significant public health nutrition issue, and competing interests, values and beliefs among stakeholders mean they hold disparate views about which policy actions are preferable to address it. This study aimed to critically analyse Australia’s current and proposed policy actions to reduce added sugar consumption.
Design:
Semi-structured interviews using purposive, snowball sampling, and policy mapping. Policy actions were classified using two frameworks: NOURISHING (e.g. behaviour change communication, food environment and food system) and the Orders of Change (first order: technical adjustments; second order: reforming the system; third order: transforming the system).
Setting:
Australia.
Participants:
Twenty-two stakeholders from the food industry, food regulation, government, public health groups and academia.
Results:
All proposed and existing policy actions targeted the food environment or behaviour change; most were assessed as first-order changes and were reductionist (nutrient-specific) in nature. Influences on policy actions included industry power, stakeholder fragmentation, government ideology/political will and public pressure. Few stakeholders considered the potential risks of policy actions, particularly of non-nutritive sweetener substitution or the opportunity costs for other policies.
Conclusions:
Most of Australia’s policy actions to reduce added sugar consumption are reductionist. Preferencing nutrient-specific, first-order policy actions could reflect the influence of vested interests, a historically dominant reductionist orientation to nutrition science and policy, and the perceived difficulty of pursuing second- or third-order changes. Pursuing only first-order policy actions could lead to ‘regrettable’ substitutions and create an opportunity cost for more comprehensive policy aimed at adjusting the broader food system.
Chapter 8 presents the main positions in economics and the social sciences regarding the agent/structure problem and explores some contributions that artificial economics can make. It first presents and discusses the individualist/reductionist, structuralist/holistic, and intermediate positions on the agent/structure problem. It then presents simple artificial-economics examples of the generation of endogenous preferences, of agents' behavioral changes derived from their economic interaction, and of the demographic effects of introducing a market institution into an artificial economy.
The Geophysical Fluid Dynamics Laboratory (GFDL) is a pioneering institution in the field of climate modeling. Its founding director, Joseph Smagorinsky, was a member of the Princeton Meteorology Group. He hired a Japanese scientist, Syukuro Manabe, who formulated a one-dimensional model of climate, known as the radiative–convective model, that was able to calculate the amplifying climate feedback due to water vapor. This model provided one of the first reliable estimates of global warming. Manabe worked with other scientists to build three-dimensional climate models, including the first model that coupled an atmospheric model to an ocean model. The concepts of reductionism and emergentism, which provide the philosophical context for these scientific developments, are introduced.
Debates on dualism continue to plague psychiatry. I suggest that these debates are based on false dichotomies. According to metaphysical physicalism, reality is ultimately physical. Although this view excludes the idea of entities distinct from physical reality, it does not compel us to favour neural over psychological interventions. According to methodological dualism, both physical and mental interventions on the world can be deemed effective, and both perspectives can therefore be thought to be equally ‘real’.
Our practices of pursuing the truth and engaging in ethical or existential commitments, analyzed from a pragmatist perspective in the previous chapters, are inherently normative. This chapter considers the transcendental question concerning the very possibility of normativity - that is, the possibility of our engaging in the normative practices we do engage in, including practices of truth-seeking presupposing individual ethical sincerity - from the point of view of a pragmatist transcendental philosophy (as developed in the earlier chapters). It is suggested that such a transcendental question about normativity belongs to philosophical anthropology, as it examines the most basic aspects of the human condition. It is argued that no contingent and naturalizable matters of fact, such as psychological acts of recognition, can adequately ground the possibility of normativity in the transcendental sense. A pragmatist commitment to sincerity thus also entails a commitment to irreducible (but not therefore mystical or supernatural) normativity. A pragmatic and transcendental form of humanism emerges as the only way of making sense of normativity in our lives and practices.
The chapter discusses the impact of materialistic and reductionist perspectives on peace analysis and examines how they affect the understanding of human nature.
In the play As You Like It, context influences people to change from bad to good. This is different from the classic psychology experiments on the power of context, which typically demonstrate the power of context to move normal individuals to harm others (moving from good to bad). The change of context in As You Like It involves people moving from the royal court to the wild forest. The nature of “correct” behavior is determined by context. The new normative system prevailing in the forest leads to shifts in the evaluation of emotions, so that what is shunned at court (e.g., melancholy) becomes highly valued by some. Also, the materialism of court is abandoned in favor of a simpler life. This leads two brothers, the usurping duke and Oliver, to abandon power and material riches in favor of their brothers. The change of context is also associated with a blurring of gender roles in some cases.
Aristotle identifies perception as central to all animals, enabling them to fulfill their ends. His biological works clarify his hylomorphic account of perception as a key activity of the soul by providing detailed overviews of types of perception and perceptual organs. Like other bodily organs, these have complex structures composed of physical components, often in layers, all ultimately involving the four basic elements. I defend a compromise position on scholarly controversies about whether Aristotle can successfully provide a physicalist account of perception. Briefly, the answer is “yes and no.” His biological works, along with “chemical” works, do give physical accounts of perceptible features like colors and tastes, as well as of the organs (and parts) capable of registering them. However, because of his teleological views about nature, such accounts must be “top-down” and are never purely reductive or translatable into structural accounts like those of the atomists. Finally, we must remember that perception is crucial to the behavioral success of the animal as a whole within its environment. Perceptual “experience” in our modern sense does not occur in any one organ but rather in the body as a whole, and more centrally in the heart and blood vessels.
This chapter provides an overview of the creativity-based DOC (Diverge Organize Converge) Process. The DOC Process is an important sub-process necessary to reduce the design stages of The Innovation Pyramid to practice. It will be applied multiple times, in a variety of different ways, during the innovation design stages, and it can also be applied to a multitude of other, non-innovation challenges. The DOC Process has three distinct components: divergent thinking, organizing and convergent thinking; it is the combination of all three that makes the method powerful. While most are familiar with divergent thinking, particularly brainstorming new solution ideas, the DOC Process is as useful for identifying root-cause problems as it is for ideating new solutions. The Diverge step expands from the original solution idea or problem situation. The second step of the process is the often ignored Organize step. Once the Organize step is completed, the process moves on to the Converge step, which narrows the organized concepts down to the “best.” Criteria for choosing “the best” problem or solution must be conscious and public: unless agreed-upon criteria are defined, each of us will apply our own measures, ensuring a chaotic outcome.
Divergent thinking is the first of three essential steps in completing the non-linear DOC Process. Like all DOC Process steps, divergent thinking applies to both the problem and solution spaces. It simply requires different techniques when seeking to identify a root-cause problem versus attempting to come up with a new impactful problem-solving approach.
Identifying root-cause issues begins with broadening our purview to ensure we are not overlooking the disease by focusing too closely on its symptoms. Divergence occurs as we zoom in from this broad purview: identifying factors or causes of the zoomed-out General Problem, then factors of those factors, then causes of the sub-factors, and so on, until the root-cause level is reached.
Divergent thinking for solutions occurs on several perspective levels: feature, function and system. In addition to ideating new and improved features of existing solutions, substitute solutions should be considered. Potential substitutes perform the same function as current solutions, but do so in a very different way. New technology is often the enabler of substitute products. System level thinking diverges solution ideas by strengthening or streamlining system connections or linkages at the broadest purview.
In this chapter we look at two important topics in social epistemology: testimony and disagreement. What is necessary for testimony to be a source of justification or knowledge? Can testimony be a basic source of justification? We also consider three views concerning the appropriate stance in the face of disagreement with one's epistemic peers: the Equal Weight View, the Steadfast View, and the Total Evidence View.
In “Reductionist vs. Neo-Wittgensteinian Semantics,” Rorty does two things: he proposes a distinction between reductionist and neo-Wittgensteinian semantics, and he suggests we see Robert Brandom’s philosophy as contributing to the latter. By “reductionist semantics,” Rorty means a semantics that aims at purifying language by finding equivalent and more perspicuous expressions for expressions we currently use. “Neo-Wittgensteinian semantics” names a semantics for which such a program makes no sense, because it assumes any expression has a perfectly respectable meaning merely by virtue of having a use. It is “Wittgensteinian” because the view of meaning as use originates in Philosophical Investigations, and “neo” because it relies on how that view was later developed by the likes of Quine, Sellars, Davidson, and Brandom. Having explained why Brandom should be so classified, Rorty then proceeds to discuss Brandom’s most valuable and original contribution to this type of semantics: Brandom’s account, in Making It Explicit, of the constraints on the use of marks and noises that make it possible for us to be said to be reasoning rather than simply sounding off in habitual, accepted ways.
In “Reductionism,” Rorty takes up the question “Can we abandon reductive analysis as a method of philosophical discovery and still keep the intellectual gains which have accrued from its employment as a method of deciding what questions to discuss?” Rorty uses the notion of reductionism both to present a synoptic vision of the history of Western philosophy and to put forward an original metaphilosophical position. After presenting the twentieth-century program of reductive linguistic analysis as a mature form of the seventeenth century’s “reductionist conception” of the goal of inquiry, he examines J. O. Urmson’s arguments, ultimately finding that Urmson falls short of applying reductive analysis to the technical vocabularies of philosophers. Even though Rorty agrees with Urmson that most reductive analyses, judged by their own standards, are unsuccessful, Rorty nevertheless thinks a basis for distinguishing useful from useless analyses is possible. We also see here Rorty’s early interest in eliminability, which shortly thereafter becomes the basis for a distinctive contribution.
Chapter 1 begins by invoking an intuitive distinction between the generation of knowledge and the transmission of knowledge. Very roughly, generation concerns coming to know “for oneself,” as when one reasons to a conclusion on the basis of good evidence. Transmission concerns coming to know “from someone else,” as when one is told by someone else who knows. Section 1.1 argues that some but not all testimony is at the service of knowledge transmission, with the result that some but not all testimonial knowledge is transmitted knowledge. Section 1.2 redraws some familiar categories in the epistemology of testimony so as to better characterize our target and related phenomena, better frame our questions, and better see the possible answers. Finally, a central thesis of the book is introduced and discussed: that knowledge transmission is irreducible to knowledge generation, and for that reason requires its own theoretical treatment. More specifically, it is argued that an adequate account of transmission must go beyond the usual theoretical resources of traditional epistemology – that is, beyond those resources that the tradition uses to theorize knowledge generation.
Chapter 8 considers the widespread epistemic dependence that characterizes “big science,” and uses the information economy framework to dispel the worry that such dependence is inconsistent with the standards for scientific knowledge. This leads to a new argument against reductionism in the epistemology of testimony. First, reductionism is shown to be untenable for scientific knowledge. Second, if reductionism must be rejected for scientific knowledge, then it should be rejected more generally. This second idea can be vindicated in two ways. First, anti-reductionism about scientific knowledge entails anti-reductionism about knowledge in general, since anti-reductionism is best understood as the thesis that some transmitted knowledge cannot be reduced to generated knowledge. Second, if anti-reductionism is required for scientific knowledge, then reductionism for non-scientific knowledge is unmotivated. The most elegant position is anti-reductionism about knowledge transmission in general.
Chapter 2 introduces an “information economy” framework for approaching the epistemology of testimony. It is argued that, in a well-designed epistemic community, the norms governing information acquisition and information distribution will be different. This is because the dominant concern of information acquisition is quality control, whereas the dominant concern of information distribution is to provide access. The central idea, then, is to understand knowledge generation in terms of the norms governing information acquisition and to understand knowledge transmission in terms of the norms governing information distribution. The reason for adopting this approach is its explanatory power. In particular, the framework (a) explains a range of cases in the testimony literature; (b) provides a principled understanding of the transmission–generation distinction; and (c) explains the truth behind various and conflicting positions in the epistemology of testimony. Moreover, the framework nicely integrates with other plausible positions in epistemology, the philosophy of language, action theory, social science, and cognitive science.
How do we transmit or distribute knowledge, as distinct from generating or producing it? In this book John Greco examines the interpersonal relations and social structures which enable and inhibit the sharing of knowledge within and across epistemic communities. Drawing on resources from moral theory, the philosophy of language, action theory and the cognitive sciences, he considers the role of interpersonal trust in transmitting knowledge, and argues that sharing knowledge involves a kind of shared agency similar to giving a gift or passing a ball. He also explains why transmitting knowledge is easy in some social contexts, such as those involving friendship or caregiving, but impossible in contexts characterized by suspicion and competition rather than by trust and cooperation. His book explores phenomena that have been undertheorized by traditional epistemology, and throws new light on existing problems in social epistemology and the epistemology of testimony.
Reductionism is a widely endorsed methodology among biologists, a metaphysical theory advanced to vindicate the biologist's methodology, and an epistemic thesis those opposed to reductionism have been eager to refute. While the methodology has gone from strength to strength in its history of achievements, the metaphysical thesis grounding it remained controversial despite its significant changes over the last 75 years of the philosophy of science. Meanwhile, antireductionism about biology, and especially Darwinian natural selection, became orthodoxy in philosophy of mind, philosophy of science, and philosophy of biology. This Element expounds the debate about reductionism in biology, from the work of the post-positivists to the end of the century debates about supervenience, multiple realizability, and explanatory exclusion. It shows how the more widely accepted 21st century doctrine of 'mechanism' - reductionism with a human face - inherits both the strengths and the challenges of the view it has largely supplanted.
The chapter “Should Psychiatry Be Precise?” challenges the Precision Medicine Initiative (PMI) and the Research Domain Criteria (RDoC) initiative that have been advanced by the National Institutes of Health. The chapter includes valuable points about the challenges of applying reduction across levels of analysis, the value of nosological revision, and the potential pitfalls of using big data. But the overall argument constructed in the chapter is a straw man that does not accurately reflect either the intentions of those who designed these initiatives or the understanding and aims of scientists now engaged in research under their aegis. A more constructive approach might focus on specifying tractable questions about causality in mental health research, and on aspects of subjective experience that are currently under-represented in biological psychiatry.