Harmonization is an essential instrument of international risk governance. It is the process through which disparities among national regulatory standards are ironed out, producing uniform outcomes that all participants in a regime can accept and that facilitate the free exchange of regulated goods in commerce. Contrary to conventional belief, however, harmonization requires not only technical but also political cooperation, since standards themselves are not direct mirrors of reality but are co-produced responses to technoscientific and political uncertainty. Attempts to harmonize standards across national borders therefore pit alternative political cultures and their systems of public reasoning against one another. Put differently, harmonization calls into question the underlying models of subsidiarity that provide the foundation for robust international regimes. This paper examines three models of epistemic subsidiarity – coexistence, cosmopolitanism, and constitutionalism – and discusses each scheme's capacity to protect a nation's fundamental political commitments while advancing the goals of international risk governance.
For the better part of a century, two almost axiomatic beliefs guided democratic societies in their attempts to incorporate science into public policy. The first is that good scientific knowledge is the best possible foundation for public decisions across ever-widening policy domains. The second is that the best way to secure scientific inputs of high quality is to allow scientists the freedom to monitor and criticise each other's contributions, through procedures conventionally grouped under the heading of peer review. In this way, science comes closest to an ideal described as ‘speaking truth to power’. Scientists, in this view, should independently establish the facts of the matter as well as they can be established; politicians can then decide how to act upon those facts, taking other social values into consideration. We can think of this as the linearity-autonomy model of science for policy. In it, scientific fact-finding is seen as standing apart from and prior to politics, as decisions move in linear fashion from facts to values. Science is entitled to establish the quality and integrity of its findings on its own terms before political judgements come into play. Deviation from this ideal threatens to convert science into an instrument of politics. With loss of autonomy, science, it is thought, cannot deliver objective information about the functioning of nature or society.
Without science and its muscular twin technology, contemporary societies would be reduced to chaos. We would lose much of our ability to read, write, communicate, travel, grow crops, raise animals, cook food or find clean water to sustain our lives. Commercial transactions would stop; financial institutions would be crippled; emergency services would be incapacitated; and hospitals would no longer be able to provide essential treatment. In that devastated, dying world, law and order would break down, and violence would flourish. Not insignificantly, we would lose the capacity to track and prosecute lawbreakers and criminals. Today, even law enforcement has become a ‘high-tech’ business, and DNA profiling, the subject of this book, is the most highly valued recent addition to the toolkit of the forensic sciences. For law enforcement agencies, it is hard to imagine life before or without it.
Technology's benefits for social order are obvious, ubiquitous and unquestionable. Yet, since long before the scientific revolution, human beings have looked upon the unchecked thirst for knowledge and its applications as dangerous. Humanity's Faustian bargain with science set us on a path of discovering more and more about the way the world works and accomplishing more impressive feats with the results of that knowledge. But around the bends of the brightly lit corridors of enlightenment lurked unintended consequences that threatened to usurp our humanity and even annihilate us physically. Advances in the life sciences and technologies have proved particularly alarming because they destabilize the worth of life itself.
“Relying on Science, Romney Files Death Penalty Bill.” With that headline, a press release on April 28, 2005 announced that Massachusetts Governor Mitt Romney was seeking to reintroduce by legislation the death penalty that the state's Supreme Judicial Court had ruled unconstitutional in 1984. The remainder of the text left little doubt that science was a major basis for the governor's action. The press release quoted Romney as saying that the bill provided a “gold standard for the death penalty in the modern scientific age.” Positing a symmetry that will be questioned below, Romney also declared, “Just as science can free the innocent, it can also identify the guilty.” The bill itself deferred to science by calling for corroborating scientific evidence, multiple layers of review, and a novel “no doubt” standard of proof. By raising the required standard of evidence and by restricting the class of capital crimes, the proposed law sought to correct the defects of other death penalty statutes.
Do human societies learn? If so, how do they do it, and if not, why not? The American activist singer and songwriter Pete Seeger took up the first question in the 1950s (Seeger 1955) in a song whose concluding lines circled hauntingly back to its opening and whose refrain – ‘When will they ever learn?’ – gave anti-war protest in the 1960s a musical voice. Seeger's answer was, apparently, ‘never’. Like many a pessimist before and since, Seeger saw human beings as essentially fallible creatures, doomed to repeat history's mistakes. But modern societies cannot afford to stop with that unregenerative answer. The consequences of error in tightly coupled, high-tech worlds could be too dire (Perrow 1984). If we do not learn, then it behoves us to ask the next-order questions. Why do we not? Could we do better?
For social analysts, part of the challenge is to decide where to look for answers. At what level of analysis should such questions be investigated? Who, to begin with, learns? Is it individuals or collectives, and if the latter, then how are knowledge and experience communicated both by and within groups whose membership remains indeterminate or changes over time? Organizational sociologists from Max Weber onwards have provided many insights into why collectives think alike.
The development of a multinational regulatory framework for biotechnology during the past twenty years provides an unparalleled opportunity to study the processes by which technological advances overcome public resistance and are incorporated into a receptive social context. Through the vehicle of regulation, states provide assurance that the risks of new technologies can be contained within manageable bounds. Procedures are devised to limit uncertainty, channel the flow of future public resistance, and define the permissible modalities of dissent. Regulation, in these respects, becomes integral to the shaping of technology. A regulated technology encompasses more than simply the ‘knowledge of how to fulfill certain human purposes in a specifiable and reproducible way.’ Regulation transmutes such instrumental knowledge into a cultural resource; it is a kind of social contract that specifies the terms under which state and society agree to accept the costs, risks and benefits of a given technological enterprise.
The passage of biotechnology from moratorium to market in just twenty years exemplifies this process of social accommodation. During this period, biotechnology moved from a research programme that aroused misgivings even among its most ardent advocates to a flourishing industry promising revolutionary benefits in return for negligible and easily controlled risks. The transformation occurred almost simultaneously and with remarkable speed throughout Europe and North America. To facilitate the commercialization of biotechnology, the United States, the European Community, and several of its member states adopted laws and regulations to control not only laboratory research with genetically engineered organisms but also their purposeful release into the environment.
The American public firmly believes both in the rule of law and in the progressive and beneficial effects of science. Public opinion polls show that legal and scientific institutions continue to command wide respect, even in a period of diminished trust. According to a recent survey conducted by the Office of Technology Assessment (OTA), the Supreme Court and the scientific community each received higher confidence ratings than political institutions, the media, education, and even organized religion. Yet the impacts of science and the law can at times be profoundly antithetical, for the former acts as a force for social transformation, while the latter seeks to maintain the stability and continuity of societal institutions and norms of conduct. Vannevar Bush's famous metaphor “the endless frontier” captured the sense of limitless aspiration associated with science and technology through much of this century.