
PUBLIC TRUST AND BIOTECH INNOVATION: A THEORY OF TRUSTWORTHY REGULATION OF (SCARY!) TECHNOLOGY

Published online by Cambridge University Press:  15 June 2022

Clark Wolf*
Affiliation:
Departments of Philosophy and Political Science, Iowa State University, USA

Abstract

Regulatory agencies aim to protect the public by moderating risks associated with innovation, but a good regulatory regime should also promote justified public trust. After introducing the USDA 2020 SECURE Rule for regulation of biotech innovation as a case study, this essay develops a theory of justified public trust in regulation. On the theory advanced here, to be trustworthy, a regulatory regime must (1) fairly and effectively manage risk, must be (2) “science based” in the relevant sense, and must in addition be (3) truthful, (4) transparent, and (5) responsive to public input. Evaluated with these norms, the USDA SECURE Rule is shown to be deeply flawed, since it fails appropriately to manage risk, and similarly fails to satisfy other normative requirements for justified trust. The argument identifies ways in which the SECURE Rule itself might be improved, but more broadly provides a normative framework for the evaluation of trustworthy regulatory policy-making.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives license (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Social Philosophy & Policy Foundation. Printed in the USA

I. Risk, Innovation, and Public Trust

In 1990, researchers conducted a poll asking whether people trusted government scientists to evaluate the safety of a proposed Nuclear Waste Storage Repository at Yucca Mountain. They found that only 29 percent of those polled believed that the federal government would be honest in its effort to research the safety of the site, while 68 percent believed that the government scientists would probably cook the books. Additionally, 52 percent of those surveyed said they thought the facility would be built whether the site was found to be safe or not. 1 Commenting on these responses, Rebecca Bratspies writes that they reveal “a lack of trust in the objectivity and intellectual honesty of the decision makers, and suggest a clear perception that the research process was an attempt to drum up public support for an already crafted agenda, rather than a genuine attempt at dialogue and shared agenda building.” 2 If regulatory agencies are to serve their assigned functions, they need to be entrusted appropriately to manage risks, to respect and protect rights, and to promote the public good. But trust cannot be taken for granted, and mistrust is often justified. Perhaps the polled citizens were right to doubt their government: What reasons would they have to believe that the risk analysis would be done impartially?

Some technologies apparently inspire intense, almost instinctive mistrust. Just as the 1990 poll showed mistrust of nuclear energy, there is similar public mistrust of biotechnology and of the agencies charged to regulate it. Surveys regularly show that the public fears biotech innovation and that many people do not believe that regulatory agencies will effectively protect their interests. 3 Since new technologies like biotech promise both the hope of benefits and the possibility of risks, how (if at all) should they be regulated? We can interpret this question to address the authority to regulate, the morality of regulation, or the strategic rationality of alternative regulatory regimes. Questions about the justification of public action, in this case, the regulation of technology, are often posed in an idealized mode that makes them distant from concrete choices that those who are charged to frame regulatory strategies must make. Such excessive idealization can undermine the practical value of philosophical approaches. To avoid such problems, this essay will focus on a specific regulatory policy, the 2020 USDA SECURE Rule for the regulation of new biotech crop varieties, using this case to develop a theory of trustworthy regulatory policy. 4 The goal is at the same time normative, practical, and explanatory: by coming better to understand our institutions and the values they serve, we may understand why they were structured in the way they are, and we may come to see how they can be improved.

I will argue that the policy implemented by the United States Department of Agriculture (USDA) during the summer of 2020, (henceforth “the SECURE Rule”) is seriously flawed. I will evaluate this rule by reference to norms that should, as I will argue, find expression in trustworthy regulatory strategies. But the theory of trustworthy regulation employed to reach this judgment is quite general. The norms employed apply not only to the USDA, but to any of the other regulatory agencies, and to the different regulatory regimes created by their various administrative rules. One might hope that a critical analysis like this one might motivate improvements in regulatory policy. Indeed, in other areas of practical ethics, notably in biomedical ethics, philosophical analysis has led to the development of better justified and informed policies concerning the consent of research subjects, the treatment of patients, use and overuse of drugs, and even the individuation of diagnostic categories. Is it unreasonable to hope that work in agricultural bioethics might similarly be put into practice, and used to improve regulation of biotech innovation?

I begin in Section II with a discussion of the different strategies adopted for the regulation of gene edited foods and crops in the United States and the European Union. Focusing primarily on the USDA SECURE Rule, and on a recent ruling by the Court of Justice of the European Union, I argue that the very different approaches of the United States and the EU reflect the different ways that they prioritize underlying values: while the U.S. policy primarily aims to promote innovation by minimizing regulatory hurdles, the EU policy emphasizes the management and perhaps even the minimization of human and environmental risks. In Section III, I describe a set of five necessary conditions that should, as I argue, be met by a regulatory agency hoping to earn the trust of stakeholders. Then, in Section IV, I evaluate the USDA SECURE Rule in light of these criteria, arguing that the rule fails in a variety of different ways. Most importantly, it fails appropriately to manage risk, and cannot, therefore, promote justified public trust. Finally, Section V provides a concise statement of this conclusion, and briefly addresses the concern that the ideal for public regulation described in this essay is inappropriately utopian.

II. Public Trust and the Regulation of Biotech Innovation in the United States and the European Union

Why do people mistrust biotechnology? 5 Sometimes public skepticism is attributed to the slap-dash methods used to generate the first genetic alterations, and to the way in which the technology was introduced to the public. Early in the biotech era, genetic modification was a random, laborious, and expensive process. Radiation was used to increase the rate of random mutations, in the hope that some might turn out to be interesting or beneficial. Later, gene-guns were used to inject DNA into cells, in the hope that some of the injected material might continue to function as a beneficial mutation. Subsequent transgenic techniques were more controlled, allowing genetic sequences from one organism to be spliced into the genes of another. Even then, the results of biotech genetic transformations were difficult to predict. The newly induced traits were often surprising, even to the researchers who had induced them.

New gene editing technologies, especially those using CRISPR-Cas9, make it possible to effect genetic transformations by altering the “spelling” of a genome without introducing genetic material from an external source. 6 CRISPR is quicker, easier, cheaper, and more precise than earlier technologies employed for genetic transformations. 7 Its use has dramatically increased the rate of biotech innovation in a variety of research contexts. The potential benefits of this new technology are striking: it could be used to develop flood- and drought-tolerant crops, to address nutritional deficiencies, and to make agriculture more sustainable and environmentally appropriate. Skeptics urge that it could also be co-opted to promote private profits with few associated public benefits. In either case, the use of this new technology must still address significant public skepticism and fear. Actual risks may be much smaller than the public believes them to be. But new technologies that are perceived to be risky may be avoided even if actual risk is low, and even when their adoption would be significantly beneficial. Mistrust sometimes has significant costs. 8

With the advent of CRISPR, the United States and the European Union both moved to enact new rules to cover regulation of gene edited foods and crops. In March 2018, U.S. Secretary of Agriculture Sonny Perdue announced that the USDA would not pursue additional regulation of plants “that could have been developed through traditional breeding techniques.” 9 The announcement was part of a push for “regulatory relief,” designed to encourage innovation. The details of the USDA policy were eventually published in the Federal Register in May 2020 as the USDA’s new Biotech Rule, “The SECURE Rule,” concerning USDA regulation of agricultural biotechnology. 10 The SECURE Rule specifies a new oversight policy that, in its first stage, permits scientists and corporations to determine for themselves the extent to which their new crop varieties should undergo regulatory review. Secretary Perdue made it clear that the new policy would apply to plants developed using “innovative new breeding techniques,” including genome editing using CRISPR. 11 He emphasized the value of new breeding techniques that can “introduce new plant traits more quickly and precisely, potentially saving years or even decades in bringing needed new varieties to farmers.” The techniques in question include genetic deletions, base pair substitutions, complete null segregants, 12 and gene insertions from compatible plant relatives. Since these technologies are new and rapidly proliferating, one might think it would be appropriate to adopt a presumption to subject them to additional scrutiny. But the SECURE Rule notes that “there is no evidence that use of recombinant deoxyribonucleic acid (DNA) or genome editing techniques necessarily and in and of itself introduces plant pest risk, irrespective of the technique employed.” 13 The Rule specifies that there is no reason specifically to regulate varieties produced by gene editing because they do not introduce any new and regulable risk. 14 As we will see, there are reasons to call this claim into question.

The SECURE Rule imposes much lighter regulatory oversight than the regime it replaces. At the first stage, it entirely exempts from regulation products with a single-sequence genetic deletion, a single base-pair substitution, any modification that adds DNA sourced from within the plant’s own gene pool and not from a more distantly related species, or organisms that are descended from a modified plant but do not retain the modifications of the parent plant. 15 Plants that are modified such that the plant-trait mechanism of action is the same as another plant for which the USDA’s Animal and Plant Health Inspection Service (APHIS) has already conducted a regulatory status review are similarly free from regulatory oversight. Developers with plant products that meet one of these criteria may self-determine that they are free from regulation, or may notify the USDA, which then has thirty days to decide whether regulated development trials are needed. If not, experimental trials can proceed without additional oversight. A second level of regulatory oversight is applied to new varieties produced through multiple sequential genetic changes, or which do not otherwise qualify for exemption at the first stage. 16 For such varieties, developers may request a Regulatory Status Review (RSR) in which the USDA determines whether the plant has any plausible plant-pest risk. At the third and most stringent regulatory level, plants that are not exempt at the first levels must petition the USDA for nonregulated status. If the petition is accepted, then the plant escapes regulation; but if not, a permit is required. Only those plants that receive permits are subject to regulations, designed to prevent organisms from escaping field trials, and to ensure that the modified organisms will not become a plant pest. While earlier regulatory regimes subjected almost all genetically engineered (GE) plants to regulation, representatives from the USDA-APHIS expect that the new rule will exempt most of them. APHIS literature predicts that under the new rule, only “about 1% of [genetically engineered] plants might not qualify for an exemption or deregulation after an initial review.” 17
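
The tiered structure just described can be summarized schematically. The sketch below is only an illustrative paraphrase of the rule as characterized above: the field names are my own, the criteria are simplified, and it glosses over details such as the petition for nonregulated status at the third level. It is not the regulatory text itself.

```python
# Illustrative paraphrase of the SECURE Rule's tiers as described in the text;
# field names and the three-way classification are simplifications, not the
# regulatory text.

from dataclasses import dataclass

@dataclass
class PlantSubmission:
    single_sequence_deletion: bool        # single targeted genetic deletion
    single_base_pair_substitution: bool   # one base-pair change
    dna_from_own_gene_pool_only: bool     # added DNA only from compatible relatives
    null_segregant: bool                  # descendant that lost the parent's modification
    mechanism_already_reviewed: bool      # same plant-trait mechanism of action as a
                                          # variety APHIS has already reviewed
    plausible_plant_pest_risk: bool       # outcome of a Regulatory Status Review (RSR)

def secure_tier(p: PlantSubmission) -> str:
    # First stage: developers may self-determine that they are exempt
    # (or notify USDA, which has thirty days to respond).
    if (p.single_sequence_deletion or p.single_base_pair_substitution
            or p.dna_from_own_gene_pool_only or p.null_segregant
            or p.mechanism_already_reviewed):
        return "exempt: no regulatory oversight"
    # Second stage: a Regulatory Status Review asks whether there is any
    # plausible plant-pest risk.
    if not p.plausible_plant_pest_risk:
        return "RSR: not regulated"
    # Third stage: a permit is required, with conditions on field trials.
    return "permit required: regulated"
```

Even in this rough form, the sketch makes visible a feature of the rule that will matter below: the first-stage trigger is keyed to the kind and number of genetic changes, not to any estimate of the risk posed by the resulting plant in its trial environment.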

The diminished level of regulatory oversight implied by the SECURE Rule pleased some, but dismayed and confused others. 18 Plant breeders and seed companies were relieved to hear that they face lighter regulatory burdens. Others argued that new breeding techniques should be treated with caution. Still others regard gene edited products as a new and potentially dangerous technology. Survey data regularly indicate that both U.S. and EU consumers have a significant desire for regulation of biotechnology, and it has been assumed that they will be similarly wary of crop varieties that have been CRISPR-edited. 19

The EU settled on a very different regulatory strategy. Four months after Secretary Perdue’s initial announcement, in July 2018, the Court of Justice of the European Union (CJEU) issued a press release specifying the regulatory status of “organisms obtained by mutagenesis.” “Mutagenesis” refers to any process that changes an organism’s genetic makeup by mutation, a category that includes both transgenic and gene-edited organisms. According to the CJEU, all such organisms “are GMOs and are, in principle, subject to the obligations laid down by the GMO directive.” 20 The Court’s press release followed legal action by Confédération Paysanne, a French agricultural organization. Confédération Paysanne brought this action, joined by eight other associations, arguing that new mutagenesis techniques are significantly different from those employed prior to the adoption of the EU’s GMO directive. 21 For the past twenty years, genetically modified organisms have been identified, defined, and regulated under European law through the 2001 GMO Directive. 22 The new judgment clarifies that gene edited plant varieties will be included as GMOs and regulated as such under that directive.

Contrasting approaches to regulation in the United States and the EU create very different regulatory environments. While the USDA announcement emphasized the similarity between existing crops and crops produced by gene editing, the EU ruling states that plants produced by gene editing may introduce striking new risks. While the USDA statement notes no additional risks associated with plants produced using innovative breeding techniques, the CJEU notes Confédération Paysanne’s view that “the use of herbicide-resistant seed varieties carries a risk of significant harm to the environment and to human and animal health, in the same way as GMOs obtained by transgenesis.” 23 Like the USDA guideline describing alterations that “could otherwise have been developed using traditional techniques,” the CJEU’s exclusion of alterations that “do not occur naturally” is both vague and ambiguous. Alternative interpretations will need to be distinguished and addressed by the courts or the legislature before the implications of this ruling will be entirely clear. Genetics and organismal biology are swiftly advancing areas of scientific inquiry, but they have not provided, and may not be expected to provide, a clear and final view about which kinds of alteration can and which cannot occur naturally.

In other respects, however, the differences between the EU and the U.S. regulatory strategies are striking. The USDA emphasizes the value of plant innovation, and seeks to get out of the way of science and industry by minimizing regulatory hurdles. The European Parliament and the CJEU both emphasize the possibility of human and environmental risk. Both strategies have advantages and weaknesses: while the European model seems to impose a heavy regulatory burden on a technology that has relatively low risk, the U.S. model, as I will argue, is ineffective and haphazard in the way it manages risk. This undermines justified public trust in biotech innovation and could slow acceptance of the regulated technology.

Do the EU and U.S. strategies for regulation of biotech innovation simply reflect different but, perhaps, equally justifiable methods for balancing these twin objectives, to promote innovation while protecting against human and environmental risks? I will argue that they do not. The SECURE Rule neither reflects a science-based regulatory strategy nor effectively measures and manages possible risks; and the regulatory regime the rule describes neither deserves nor is likely to inspire the public trust that would be necessary and appropriate for the effective promotion of biotech innovation. Space does not permit an analysis of the changing state of biotech regulation in the EU, but my critical analysis of the USDA rule should not be taken as an endorsement of the EU regulatory strategy. The EU regulatory regime has substantially blocked adoption of biotech innovation in Europe and has slowed the development of biotech innovation worldwide. European import restrictions have had unfortunate global implications, since they provide a disincentive for farmers in poor countries to adopt technologies that are, in some cases, urgently needed. Thus, while this essay focuses critical attention on regulation in the United States, this should not be taken as advocacy for the differently problematic regime adopted in the EU.

III. Risk Management and Public Trust

Trust is a morally ambiguous commodity: it may be wrongly bestowed and fraudulently sought. To earn trust, one must be trustworthy, but to gain trust it’s only necessary to seem trustworthy. Trust in persons is different from trust in technology or trust in institutions. In the case of biotech innovation, an agency seeking to earn public trust may be working against a deep-seated psychological propensity: status-quo bias renders us naturally reluctant to accept what is new or different. 24 This propensity may be quite reasonable and appropriate in many environments. Novelty—divergence from the status quo—can come with new and unexpected or unpredictable risks, so perhaps we should expect this bias to arise independently in other species as well. But while status quo bias may protect us from the dangers of the new, it also renders us reluctant to accept and to use new technologies that might be a benefit. Precipitous acceptance of novelty may sometimes be risky, but reluctance to accept novelty can present similar risks.

Status quo bias may be one source of public mistrust of new technologies, and this propensity for mistrust must be taken into account by agencies like the USDA that seek to gain, and (one hopes) also to merit the trust of the public. Regulatory institutions should not simply seek to generate public trust in valuable technological innovations. They should seek to earn public trust by verifiably and transparently protecting public rights and interests. I propose here a set of five conditions that should be met if a regulatory strategy like the USDA SECURE Rule is to merit public trust. The implicit account of trustworthy regulatory policy is general, and should apply, mutatis mutandis, to other regulatory rules as well.

(1) Effective and Fair Management of Risks and Benefits. 25 The raison d’être of regulatory agencies is to protect the public from harm while minimizing interference with commerce and innovation: too much regulation will stifle innovation, but too little will inadequately protect the public. This means, in many cases, careful and ethically informed use of risk-cost-benefit analysis to evaluate and minimize risks (subject to constraints) when they are manageable, and to prohibit the deployment of technologies that have unmanageable risks. But the effective and just management of risk does not simply require that the expected benefits outweigh the expected costs: outcomes that are cost-effective in this sense may still involve unfair distribution of risk and benefit, as, for example, when the benefits accrue exclusively to the powerful and wealthy while the risks are carried by communities that are powerless or poor. It also matters whether risks are involuntarily imposed or voluntarily undertaken by those who bear them—without express consent, it is not permissible to subject people to excessive or significant risks even when overall benefits outweigh overall costs. 26 And even when new technologies are reasonably expected to have benefits that outweigh their costs, regulatory agencies must ask whether the consequent imposition of risks and costs would violate the rights or compromise the liberty of those who bear them. Risk management decisions must therefore be made within the bounds of constraints, including requirements of fairness, autonomy, and respect for rights.
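
This structure can be made explicit with a schematic formalization. What follows is my own illustrative rendering, not a formula drawn from USDA practice; the sets F, R, and C are placeholders for the fairness, rights, and consent constraints just described.

```latex
% Schematic only: one way to render "risk-cost-benefit analysis subject to
% constraints." The symbols are illustrative placeholders, not agency practice.
\[
  \max_{a \in A} \; \sum_{i} p_i(a)\, v\bigl(o_i(a)\bigr)
  \qquad \text{subject to} \qquad a \in F \cap R \cap C ,
\]
% where A is the set of available regulatory options, o_i(a) the possible
% outcomes of option a, p_i(a) their probabilities, and v a valuation of
% outcomes; F collects the options whose distribution of risks and benefits
% is fair, R those that respect rights and liberty, and C those whose risk
% impositions are voluntarily undertaken or otherwise permissible.
```

The point of the formulation is simply that the maximization is not the whole account: an option that maximizes expected net benefit but falls outside F, R, or C is not an acceptable regulatory choice.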

(2) Science-Based Regulatory Strategies. Regulatory agencies usually claim to use “science based” risk assessment tools, instead of relying on intuitions or fears. Indeed, the USDA touts the SECURE Rule as a science-based regulatory strategy. While trustworthy regulatory strategies must appropriately use the best available scientific data, and while appropriate formal models should be used to analyze the level of risk, it is a mistake to conclude that science is therefore the “base” of the strategy. At several junctures, there are ineradicably subjective or nonscientific values that must be incorporated into this process. For example, in order to measure the degree of risk, analysts must assign a value, or a value range, to represent the badness (or goodness) of alternative outcomes. And while formal tools may roughly quantify risks, the judgment that risks are unacceptably high (or acceptably low) involves value judgments; these may be justified and well reasoned (or unjustified and badly reasoned), but they are not “scientific” in any strict sense. Nor will standard scientific methods provide a basis for judging whether risks imposed are unjust or unfair, or whether they are unreasonable or excessive. Ideals of justice, fairness, reasonableness, and harm are not essentially scientific standards. But the ideal that policy should be “science based” cannot mean that such standards are ignored or omitted. Policies that fail to meet these important normative standards would be untrustworthy in the extreme.
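
One way to mark where these nonscientific judgments enter is with a schematic risk measure. Again, this is my own illustration, not a formula the USDA uses.

```latex
% Illustrative only: a schematic risk measure for a hazard h with possible
% outcomes o_1, ..., o_k occurring with probabilities p_1, ..., p_k.
\[
  R(h) \;=\; \sum_{i=1}^{k} p_i \, d(o_i),
  \qquad
  h \text{ is acceptable} \;\Longleftrightarrow\; R(h) \le \tau .
\]
% Empirical inquiry can inform the probabilities p_i, but the disvalue
% function d (how bad each outcome would be) and the acceptability
% threshold tau are normative choices that no measurement settles.
```

Both d and τ must be chosen before the formal apparatus yields any verdict, which is why the verdict cannot be “scientific” all the way down.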

To understand the proper sense in which policies should be “science based,” it might help to look at a policy that fails this test. Consider the regulatory regime that was recently replaced by the new SECURE Rule: before implementation of the new rule, the trigger for USDA regulation for many engineered organisms involved the method by which they had been transformed. In many cases, Agrobacterium tumefaciens, a soil bacterium, was used to transfer segments of DNA into the plant genome. Agrobacterium is itself a plant pest, since it can cause crown gall disease in several species, by transferring some of its own DNA into the DNA of host-plant cells. This ability makes it useful, since Agrobacterium can be persuaded to transport desired DNA sequences into host plants to effect a transgenic transformation. The resultant genetically modified organism may have no lingering vestige of Agrobacterium DNA, and plants that have been modified using Agrobacterium are not at higher risk of becoming plant pests, as compared to plants that have been modified by other means. Since the use of Agrobacterium is not automatically associated with plant-pest risk, the regulatory trigger employed under the previous regime did not track the relevant risk. Nonetheless, USDA regulation under this earlier regime was touted as “science based” because the regulatory process used data acquired through scientific investigation, and because it employed formal risk analysis methods. But since this so-called “science based” regulatory regime did not track risk and did not assign higher degrees of regulatory oversight to cases where actual plant-pest risk was higher, it therefore did not appropriately manage risk. It failed, that is, at the most fundamental norm we should apply to regulatory rulemaking.

When regulatory policies are said to be “science based,” this is usually intended as a contrast with methods of policy choice that are clearly inappropriate. It would be wrong to adopt policies that replaced proper risk analysis with fear, regulating to protect people from what they are afraid of regardless of the actual risk. Clearly, regulatory strategies should be informed by the best available scientific data and evidence and should appropriately use formal techniques to evaluate risks. They must not, however, use the façade of science based risk analysis to exclude crucial considerations of fairness, justice, harm, or reasonableness of risk. And measures designed to manage risk must certainly track actual risk levels, and must gauge the degree of regulatory oversight to the level of actual risk. Regulatory regimes that fail to do these things cannot be called “science based” in any meaningful sense.

(3) Truthfulness. Obviously, agencies that lie to their stakeholders do not merit trust. But the obligation of truthfulness goes beyond the minimal obligation to avoid intentional and knowing communication of falsehoods. Truthfulness requires the use of language that effectively communicates the true or best information to stakeholders, without obfuscation and without the use of unnecessarily confusing terminology.

(4) Transparency. Without transparency, truthfulness cannot merit trust. Transparent decisions should, to the extent possible, be reviewable. It should be evident that they have been well made and based on good, publicly justifiable reasons. 27 In the ideal case, transparent decision-making processes foster trust because they can be understood and analyzed by stakeholders or stakeholder representatives. Just as public institutions in general should be publicly justifiable—that is, justifiable to constituents and stakeholders—regulatory institutions and their rulemaking processes should be publicly justifiable to those who are affected by administrative rules and decisions. If regulation is otherwise well constructed, transparency will increase public trust, since transparency facilitates public understanding of regulatory protections. By contrast, if regulations are not well constructed, increased understanding will decrease trust. This might be the paradigm test of trustworthiness: when regulation is trustworthy, then transparency provides understanding; and understanding in turn results in increased trust.

Will transparency have this effect in practice? Sometimes the reasons behind regulatory decisions are complex, more readily justifiable to experts than to the public at large. Reasons justifiable to experts may sometimes be opaque to the non-expert public. Sometimes there may be disagreement among experts about which kinds of reasons can be publicly justified. In practice, there may be cases where transparency generates mistrust, not because policies or the reasons behind their implementation are bad, but because they are easily misunderstood. Even then, however, the effort to make the regulatory process transparent will serve the goal of public justification. For obvious reasons, opaque decision processes undermine trust, and policies that undermine transparency will be less trustworthy.

(5) Responsiveness. A responsive agency must provide opportunities for stakeholders to express concerns and objections and must not treat public comment as a perfunctory performative exercise. Responsiveness is necessary for a variety of different reasons, but not least among these is the fact that diverse public input should appropriately inform risk-cost-benefit calculations by helping analysts to understand what is at stake and what weight to place on the values that may be at risk in regulatory decision-making. Public responsiveness is primary, but where biotech innovation is the target of USDA regulation, stakeholders include plant breeders and developers as well as members of the general public. Regulatory agencies must be responsive to stakeholder concerns about both overregulation and underregulation in the management of risks. However, responsiveness introduces the possibility of error: if regulatory agencies regulate perceived risks instead of actual risks, they abdicate their most fundamental obligation. And if they are more responsive to industry than to the public, this may be taken to indicate that the agency has been captured by the industries it is supposed to regulate.

Responsive agencies need to act appropriately to take into account public concerns, but it will not always be appropriate simply to act on public concerns, to regulate what people fear instead of what poses a real danger. To see why this might be so, consider a study conducted by Paul Slovic. 28 Slovic plotted prospective hazards—events that involved risk and possible regulation to mitigate that risk—on two axes: the vertical axis measured the degree to which a risk was “unknown,” and the horizontal axis measured people’s sense of dread associated with the risk. For example, Slovic identified risks associated with cadmium and trichloroethylene that were unknown; people had not heard of these hazards. Risks associated with nuclear weapons and nerve gas were known and were associated with a high degree of dread. Slovic’s survey data showed that people had a higher degree of concern and a greater desire for regulatory intervention to mitigate risks when those risks were in the unknown/dread quadrant, and lesser desire for regulation of risks that were in the known/not-dread quadrant. This effect appeared to be independent of the actual degree of risk associated with the hazards included in the study. Thus, subjects had a relatively low level of concern and low desire for regulation of risks associated with swimming pools, which were known/not-dreaded. By contrast, they had a surprisingly high level of concern and desire for regulation of risks associated with satellite crashes, which were in the unknown/dreaded quadrant. Actual risks associated with swimming pools are significant, while risks associated with satellite crashes are infinitesimal. Some simple regulations governing pool construction and management are effective at protecting people from harm and death: for example, pools can be constructed with easy step access so that children who fall in can get out without help. Pool covers can be made strong enough to support the weight of a child, so that people don’t fall through. These regulations are especially important to protect children from harm. A regulatory agency that responded to fears instead of hazards would have recommended excessive regulatory expenditures to protect people from satellite crashes and inadequate efforts to regulate pool safety.

Just as regulatory agencies can be inappropriately responsive to public fears, they can also be inappropriately responsive to the industries they are supposed to regulate. One common perception is that the USDA and other regulatory agencies are subject to capture by their own regulated industries, or by administrators who come from those industries. A captured agency cannot be trusted because it will systematically reflect the interests of industry rather than the interests of the public, in contexts where those interests are opposed. Regulatory capture—even the perception of regulatory capture—reasonably undermines public trust that regulatory agencies will effectively manage risks. The inference may go both ways: failure appropriately to manage risk is sometimes taken as evidence that an agency has been subject to capture. It is safe to say that in the case of the USDA, there has often been a problematic public perception that the agency has been captured, and that it therefore reflects the interests of industry and not the interests of the public. This has been a significant source of public mistrust.

IV. USDA’s SECURE Rule and the Regulation of Biotech Innovation

In the United States, management of biotech innovation is orchestrated under the Coordinated Framework for Regulation of Biotechnology, implemented in 1986. 29 The Coordinated Framework divides different tasks—different focus areas—among the various regulatory agencies, including the FDA, EPA, and USDA. The USDA’s authority to regulate biotechnology is limited, under this framework and under its legislative mandate, to a rather narrow focus on plant-pest risk. This leaves other agencies to evaluate broader risks to environmental and human health. The SECURE Rule constitutes the latest attempt to develop a regulatory regime that is focused on “science-based” risk assessment, and which is appropriately responsive to other public stakeholder interests.

A. Veracity, transparency, and responsiveness

How does the SECURE Rule fare when evaluated using norms of veracity, transparency, and responsiveness? While I will not allege that the USDA has been dishonest in its development and promulgation of the new rule, there are good reasons to question whether the new regulatory regime described by SECURE is appropriately transparent and responsive to stakeholder concerns. 30

Transparency requires that decision-making should be reviewable by stakeholders. Under the SECURE Rule, all initial regulatory decisions will be made by plant breeders themselves. Even USDA regulators will have no oversight authority with respect to plants that involve a single base-pair alteration, or plant innovations that involve existing plant-trait action mechanisms. SECURE allows developers simply to decide that they are exempt. If the activities in question were not associated with relevant risks, this might be appropriate. But as we will see, the SECURE Rule does not effectively track risk. It seems unlikely that public stakeholder trust would increase as stakeholders come to realize that plant breeders can mostly exempt themselves and their products from regulation in the first stage.

In a similar manner, the SECURE Rule provides only a low level of USDA responsiveness to expressed stakeholder concerns. As Greg Jaffe has pointed out, by exempting most products from regulation in the first stage, SECURE precludes public response ab initio. 31 Section III above defended the value of responsiveness, but also noted that there are inappropriate forms of responsiveness. In the case of biotech innovation, it would be inappropriate for the USDA to regulate on the basis of public fears that cannot be substantiated—to do so would risk overregulation that would infringe the rights of plant breeders to deploy innovative products even when they are demonstrably safe. If anything, the SECURE Rule moves to the opposite extreme: it is likely that the SECURE Rule will release developers from regulatory oversight in the vast majority of cases. There is concern that this constitutes excessive protection of the interests of industry and plant breeders, at the expense of the public.

However, most experts judge that the actual risk levels associated with plant biotech innovation are low. Might one respond that the USDA strategy minimizes regulation at this early stage because regulatory oversight is simply unnecessary to govern such minimal levels of risk? There are three responses to this argument, which will be elaborated in more detail in what follows: First, while risks associated with most innovative biotech products may be low, they cannot be known to be low in the absence of regulatory oversight. A single base-pair alteration may sometimes result in a significant increase in the relevant risk, but the SECURE Rule would not trigger regulatory oversight even if one did. Second, even where overall risk levels are low, the level of regulatory oversight should still be indexed to the level of risk. Third and finally, increasing rates of innovation can result in increased risk even when each individual innovative event is associated with risk levels that are very low.

B. Science-based regulation and the management of risk

In the discussion of “science based” regulation in Section III, I argued that the regulatory regime recently replaced by the SECURE Rule was not properly science based, in part because that rule indexed the level of regulatory oversight to the use (or not) of known plant pests like Agrobacterium in the development process. Since this regulatory trigger is not associated with higher degrees of risk, the former rule failed properly to track risk. The new SECURE Rule does a little better: instead of focusing on whether a plant pest was used in the development of a genetically modified organism, the new rule focuses on properties of the organism itself. Since the relevant risk is primarily a function of phenotype, not genotype, and since risk is not in any direct way associated with the use of Agrobacterium (or other plant-pest organisms) in the development process, this is a change in the right direction. But for several important reasons, the new rule still fails appropriately to manage the relevant risks.

The U.S. Plant Protection Act defines a “plant pest” as follows:

The term “Plant Pest” means any living stage of any of the following that can directly or indirectly injure, cause damage to, or cause disease in any plant or plant product: (A) a protozoan, (B) a nonhuman animal, (C) a parasitic plant, (D) a bacterium, (E) a fungus, (F) a virus or viroid, (G) an infectious agent or other pathogen, (H) any article similar to or allied with any of the articles specified in the preceding paragraph. 32

While USDA risk management is limited to risks that lie in the domain specified by this definition, the definition itself is fairly broad. The problem with the SECURE Rule is that there are predictable cases where significant plant-pest risk will not trigger regulatory oversight under the new rule. First, under the new rule many plants are simply exempted from all regulatory oversight from the start. Transformations involving a single sequence deletion, substitution, or addition from the plant’s gene pool are simply exempt. Developers need not check with the USDA if their engineered or edited organism falls into one of these categories; they can simply decide for themselves that they are exempt from regulation. Second, SECURE exempts from regulation plants that have the same plant-trait mechanism of action as another organism the USDA already regulates. If a new organism employs the same underlying biological process to achieve a desired function, then once again developers can decide for themselves that their product is not regulated by USDA.

But single-sequence deletions/substitutions/additions can sometimes involve dramatic changes in phenotype, and multiple-sequence genetic alterations may sometimes involve no discernible phenotype changes at all. 33 Plant-pest risk is associated with phenotype, not with the number of alterations employed in the development process. The new rule would seem to incorporate the same problem that plagued the previous regulatory regime: the trigger used to identify which genetically altered crops are liable for regulatory oversight is not appropriately indexed to the level of actual risk. Noting this problem, Greg Jaffe writes “While many, if not most, plants with a single deletion may not present any plant pest risks, if one does, shouldn’t USDA regulate it?” 34 Like the previous regulatory regime, the SECURE Rule fails at the most fundamental norm we should apply to regulatory rules.

A second argument leads to the same conclusion: A science-based regulatory policy would classify organisms as regulable (or not) depending on the likelihood that they present an actual risk. It would therefore be triggered by the phenotype of the regulated organism, preferably in a way that is context-sensitive, since the same phenotype might present risk in some environments but not in others. For example, experimental trials of cotton variants would present far less risk if trials (presumably indoor trials) were held in Minnesota, where any escaped individuals would be unlikely to survive. Cold-weather brassica variants would present less risk if trials were held in a hot, arid location like southern Arizona. In general, the risk posed by experimental trials of new varieties will be a function of both the phenotype and the environment in which the trial takes place. A science-based approach would index increasing levels of regulatory oversight to events with higher risk. But the USDA SECURE Rule entirely fails to do this at the first stage of the regulatory process.
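
In contrast with a trigger keyed to the kind of edit, the following is a minimal sketch (my own, and in no way USDA policy) of what a risk-indexed trigger might look like; the model, thresholds, and field names are purely illustrative placeholders.

```python
# Illustrative sketch of a trigger indexed to estimated risk rather than to
# the type or number of genetic edits. The toy model and thresholds are
# placeholders, not a proposed regulatory standard.

def establishment_risk(phenotype: dict, environment: dict) -> float:
    """Toy estimate of the chance that an escaped plant persists and causes
    plant-pest harm at a given trial site."""
    # If the plant cannot tolerate the site's winter minimum, escapes are
    # very unlikely to persist.
    if phenotype["min_temp_tolerated_c"] > environment["winter_min_temp_c"]:
        return 1e-6
    # Otherwise, a crude spread score (placeholder arithmetic).
    return min(1.0, phenotype["seed_dispersal_score"] * environment["wild_relative_density"])

def oversight_level(phenotype: dict, environment: dict) -> str:
    risk = establishment_risk(phenotype, environment)
    if risk < 1e-4:
        return "notification only"
    if risk < 1e-2:
        return "confined trial conditions"
    return "permit with containment and monitoring"

# The cotton-in-Minnesota example from the text: a cold-intolerant variant
# trialed where it cannot overwinter scores as very low risk.
cotton = {"min_temp_tolerated_c": 5.0, "seed_dispersal_score": 0.3}
minnesota = {"winter_min_temp_c": -30.0, "wild_relative_density": 0.0}
print(oversight_level(cotton, minnesota))  # -> notification only
```

The point is not the toy numbers but the shape of the trigger: oversight scales with an estimate of risk that depends on both phenotype and trial environment.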

C. Comprehensive risk management and the rate of innovation

The USDA is not charged to monitor overall risks of human and environmental harm posed by biotech innovation. It is institutionally required to focus on plant-pest risk. But the goal of the Coordinated Framework for the Regulation of Biotechnology is comprehensive risk management. The Coordinated Framework distributes to different agencies the management of different varieties of risk. Those who designed this regulatory framework apparently assumed that such piecemeal regulation could provide systematic oversight. This assumption fails to take into account the significance of an innovation like CRISPR, which does not merely provide an alternative method for developing biotech innovations but also dramatically changes the rate of innovation. By making genetic editing cheaper, easier, and quicker, the use of CRISPR has resulted in the development of many new varieties in recent years. As the number of innovative biotech products that might be eligible for regulatory oversight increases, the potential burden for agencies working under the Coordinated Framework would also be expected to increase. As we have seen, the SECURE Rule renders many biotech innovations exempt from regulation ab initio, and is in no way responsive to changes in overall risk that result from increased rates of product development. This may be a good way to reduce the workload at the USDA, but it is not an effective way to manage aggregate risk.

How significant are the risks involved? Most experts reasonably assume that the risk associated with individual genetically engineered plants is quite low. There are many reasons given for this belief: First, most biotech innovations, one might argue, involve incremental changes that are unlikely to cause significant changes in the environment or to have significant human health impacts. But second, while it may be possible to produce genetically engineered plants that would have devastating environmental effects if introduced into our native ecosystems, few people would have a motive to develop such a product: plant breeders would be liable for environmental and human damage, so they have a strong motive to avoid producing a product that would cause such damage. This second reason might be called “self-regulation through legal liability.” Finally, existing biotech crops—those that have been in use for the past decades, since the first biotech crop was introduced in 1983—have proven to be quite safe. No significant environmental or human harm can be traced to the use of existing biotech crop innovations.

There are, of course, reasons to question each of these arguments: Incremental changes can sometimes have dramatic effects on human or environmental health. “Risk management by lawsuit” is unsatisfactory, since lawsuits can only take place after harm has already been caused. And legal action is less likely to be successful if plaintiffs cannot show that the harms they suffer were specifically caused by the actions or the product of the defendant. In relevantly similar cases in environmental law, such legal actions have often failed, even where it is plausible to believe that the plaintiff’s harms were caused by the defendant’s action. Finally, relatively few genetically engineered traits are widely in use, at present, so the safety of extant varieties might not justify confidence that future varieties will be similarly safe. Most current GE traits involve herbicide resistance (e.g., glyphosate tolerant soybeans and canola), or pest resistance (e.g., Bt corn and cotton). 35 These traits are well tested and may reasonably be expected to be safe. But CRISPR may change all of this: some innovations (e.g., gene drives) could have wide-reaching effects, and it is difficult to judge in advance and impossible to judge a priori what risks might be presented by products that could be developed using these new technologies. 36 As innovative plant breeding techniques are applied more widely, there is reason for concern that some innovations may impose risks quite unlike those of current varieties.

The widespread use of CRISPR has already changed the rate of biotech innovation. Products under development or already becoming available include non-browning apples and mushrooms, low nicotine tobacco, fragrant moss for home use, nutrient fortified bananas, and a wide variety of other new products and traits. As the rate of innovation changes, there is little reason to project that future innovations are likely to be safe merely because the past innovations that are already in use have been safe. Even if the level of risk associated with each new product is very low, the overall probability that some truly dangerous or risky product will escape regulatory oversight or will be introduced with inadequate oversight will increase. Overall risk is an increasing function of the number of risk-bearing events, and the number of risk-bearing events has increased with the rate of innovation. But the USDA’s new SECURE Rule is in no way responsive to this very significant change. It is not, therefore, an effective tool for the management of risks associated with biotech crop innovation in an era when the rate of technological change is increasing rapidly. The Coordinated Framework itself is ill suited to address this cause of increasing overall risk.
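
The aggregation point can be stated more precisely. The numbers below are purely illustrative placeholders, not estimates of actual biotech risk.

```latex
% Illustrative only: p and n are placeholders, not estimates of actual risk.
% If each of n independent development events carries a small probability p
% of yielding a harmful product that escapes oversight, then
\[
  P(\text{at least one harmful product}) \;=\; 1 - (1 - p)^{n} \;\approx\; np
  \quad \text{for small } p,
\]
% which grows with n even while p is held fixed. With p = 10^{-3}, for
% instance, n = 50 gives roughly a 5 percent chance of at least one harmful
% product, while n = 5000 gives roughly 99 percent.
```

A regulatory regime that holds per-event oversight fixed while the number of development events rises by orders of magnitude is, in effect, allowing aggregate risk to rise unexamined.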

The USDA emphasizes that gene editing techniques do not introduce any new regulable risk, and that there is no reason to expect that products produced using CRISPR or other gene editing tools will be more risky than products produced using other methods for genetic transformation. This may well be true: at least, I have given here no reason to believe that products of gene-editing are in any way riskier than other genetically modified foods and crops. It seems quite reasonable to suppose, as the USDA does, that the risk associated with each product is likely to be acceptably low. But this is consistent with the possibility that overall risk is increasing with the rate of innovation, as the annual number of minimally risky development events increases. Regulatory rules that appropriately scale regulatory oversight to respond to risk must recognize and accommodate this change in overall risk. Neither the SECURE Rule nor other provisions in the Coordinated Framework do this.

By pointing these problems out, I do not mean to imply that the risks associated with biotech crop innovation—even aggregate risks—are high. My own presumption is that even the aggregate risk associated with biotech innovation may be quite acceptably low. By contrast, the risk we would run if we were to forgo the use of plant biotechnology may be quite high. The level of risk associated with individual crop varieties will of course be much lower than aggregate risk levels. But even under the assumption that the level of risk is low, it is inappropriate to implement regulatory rules that give the appearance of risk management but fail to link the level of regulation with the level of risk. This is not real risk management; it is illusory risk management. Such a ruse is especially inappropriate in an innovation sector where the level of public concern—the level of perceived risk—is relatively high. To make regulation trustworthy is not to replace regulation of actual risks with regulation of perceived risk, but to require implementation of actual risk management instead of settling for an illusion. As noted earlier, a paradigm of untrustworthy regulation is the case where regulation fails appropriately to manage risk. In such a case, increased understanding of the policy will lead to decreasing trust.

It is worth emphasizing that “better regulation” does not mean “more regulation.” The argument given here does not suggest that the level of regulatory oversight provided by the SECURE Rule is too low, or that we need stricter or more comprehensive regulation to provide for effective and trustworthy management of plant-pest risk. An argument for that claim would need to provide evidence that the risk level is higher than the present level of regulation can manage, and I have advanced no such argument here. Under the SECURE Rule, relatively few biotech crop varieties will trigger regulatory oversight, and one could argue that this is an acceptable outcome, or that it would be if otherwise trustworthy regulatory mechanisms were in place. As noted earlier, it is important to avoid both overregulation and underregulation. But if the regulatory trigger is unrelated, or inappropriately related, to the actual level of risk, the result will be misregulation that both inappropriately regulates low-risk products and inappropriately omits to regulate those that are associated with higher risk. Trustworthy regulatory rules would appropriately respond to the level of risk involved in biotech innovation. I have argued that the SECURE Rule fails to do this.

To accommodate the objections discussed here, it would be necessary to change the structure of both the Coordinated Framework and the SECURE Rule. Modification of the SECURE Rule itself would be a good start and would be much easier, since it would not require interagency negotiation. But perhaps there is another measure that would mitigate, though not fully address, the problem. Premarket testing of new products might appropriately respond to public fears and concerns about biotech innovation, while at the same time protecting plant breeders’ interest in demonstrating that their innovations are safe. A trustworthy USDA-implemented regime of premarket safety testing would effectively serve both interests, even if participation were voluntary. Plant breeders who believe that their products are safe would benefit from the opportunity to gather evidence demonstrating their safety. And consumers worried that biotech products may be risky would benefit if the USDA, or other agencies, could provide evidence that products are safe. One might worry that mandatory premarket testing would constitute overregulation and would impose excessive demands on the agency. As the rate of biotech innovation increases, it may become infeasible to impose testing on all innovative plant products, and such blanket testing would in any case be unnecessary, since for most products the premarket risk is extremely low. However, it is in the interest of developers to demonstrate that their products are safe, in order to promote public trust. A voluntary premarket testing program could be mutually beneficial, since it could appropriately respond to skeptical concerns while facilitating public acceptance. 37 If the USDA hopes to promote public trust in itself as a regulatory agency, and in the products it regulates, voluntary premarket testing seems uniquely suited to serve this interest.

V. Utopian Critique and Public Trust

I conclude with a practical recommendation for the reform of the USDA SECURE Rule: Proper USDA regulation of plant-pest risk would involve serious investigation into the properties of plant pests and would almost certainly focus on phenotype and on environmental factors that make it more likely that a given plant phenotype will be a pest in a given environment, not on the number of base pairs involved in the transformation. But there is a broader recommendation for the regulation of biotechnology, or of innovation in any technology sector: Properly graduated regulation of broader risks associated with biotechnology would require a more systematic and integrated regulatory regime than the present Coordinated Framework for Regulation of Biotechnology can provide, so that levels of regulatory oversight could be indexed to different levels of risk, sensitive to changes in risk levels due to changing rates of innovation. This standard should apply to regulatory regimes covering other areas of innovative technology. To deserve public trust, such regulatory regimes would also need to be responsive to reasonable public input, and transparent, truthful, and fair in operation. Is it utopian to think that an actual regulatory regime could be sensitive and responsive in this way? And where objective risk levels are reasonably believed to be low, would the implementation of such a regulatory regime cost more than it would be worth? If public mistrust reduces public use of valuable innovation, the cost of untrustworthy regulation may be relatively high. Still, the ideals described in this essay are ideals, and they might be more (or less) perfectly instantiated in various different regulatory regimes. In the real world, perhaps no such regime will be perfect. Indeed, the reason this essay has focused on a particular regulatory policy was, in part, to avoid unreasonable idealism. The value of ideals is that they can be used to evaluate and to improve the status quo, not to posit some unachievable Platonic ideal of perfection. In this spirit, it seems clear that the SECURE Rule and the Coordinated Framework could both be dramatically improved, and that improvements would render them more deserving of public trust.

One goal of this essay has been to evaluate the SECURE Rule itself. But the second, and much broader goal has been to describe a set of requirements that should be satisfied if regulatory rules and decisions are to merit the trust of the people they are intended to protect, and those they are intended to regulate or constrain. Trustworthy policies may not always garner public trust. But it is always a moral mistake for regulatory agencies to work to gain public trust instead of working to deserve it.

Footnotes

*

Departments of Philosophy and Political Science, Iowa State University, jwcwolf@iastate.edu. Competing Interests: Clark Wolf’s research is supported in part by grants from the National Science Foundation and the United States Department of Agriculture. Thanks to Carmen Bain, Christopher Cummings, Michael Dahlstrom, Clark Ford, Leifer Leiferson, Sonja Lindberg, David Peters, Theresa Selfa, Kan Wang, John Wolf, and Jeff Wolt, and graduate students in Sustainable Agriculture 610 for tremendously valuable discussion of topics covered in this essay. Thanks to David Schmidtz and an anonymous reviewer for extensive comments on an earlier draft. Thanks also to the other contributors to this volume, whose comments led to quite substantial changes and revisions. Work on this project was supported in part by the USDA National Institute of Food and Agriculture’s (NIFA) Social Implications of Emerging Technologies program, grant no: 2018–67023-27679, and by the NSF program sponsoring research on Innovations at the Nexus of Food, Energy, and Water Systems (INFEWS) grant no: 17–1605. Friends and colleagues who commented on this essay still have substantive disagreements with some of the arguments it includes. The author is therefore solely responsible for all errors and faults that remain.

References

1 See Rebecca M. Bratspies, “Regulatory Trust,” Arizona Law Review 51 (2009): 575–631, and Roger E. Kasperson, Dominic Golding, and Seth Tuler, “Social Distrust as a Factor in Siting Hazardous Facilities and Communicating Risks,” Journal of Social Issues 48, no. 4 (1992): 161–87.

2 Rebecca M. Bratspies, “Regulatory Trust,” 625.

3 Lee Rainie, Scott Keeter, and Andrew Perrin, “Trust and Distrust in America,” Pew Research Center: U.S. Politics and Policy (July 22, 2019), https://www.pewresearch.org/politics/2019/07/22/trust-and-distrust-in-america/. Accessed October 2020.

4 “SECURE” is an acronym. The full title of the new regulation is the “Sustainable, Ecological, Consistent, Uniform, Responsible, Efficient (SECURE) Rule.” Information on this rule, direct from USDA-APHIS, is available at https://www.aphis.usda.gov/aphis/ourfocus/biotechnology/biotech-rule-revision/secure-rule/secure-about/340_2017_perdue_biotechreg. Accessed October 2020.

5 Paul Thompson, Food Biotechnology in Ethical Perspective (Cham, Switzerland: Springer, 2020) provides a careful scholarly treatment of many different sources of resistance to biotech foods. I do not mean inappropriately to simplify the problem by supposing it is merely a matter of perceived risk. Dane Scott’s Food, Genetic Engineering, and Philosophy of Technology (Cham, Switzerland: Springer, 2018) treats a series of philosophical reservations about food biotechnology, as does Gary Comstock’s Vexing Nature (Boston: Kluwer, 2000). A 2002 report by the Committee on Environmental Impacts Associated with Commercialization of Transgenic Plants (Washington, DC: National Academies Press, 2002) analyzed an earlier generation of agricultural biotech innovations.

6 An organism is transgenic if DNA from another species has been spliced into its genome. It is cisgenic if DNA from a different variety within the same species has been spliced into its genome. In gene-edited organisms, alterations are instead the result of direct genetic manipulation, without importation of DNA from a different variety or species.

7 I do not mean to overstate the precision of existing technology. The authors of the Consensus Study Report on Heritable Human Genome Editing (Washington, DC: National Academies Press, 2020) emphasize that CRISPR is not sufficiently precise to permit its use in heritable human genome editing. In its current state, the technology does not allow researchers to guarantee that editing will not adventitiously introduce DNA, or to predict, with sufficient reliability, the phenotypic impact of edits. Researchers developing genome-edited hornless cattle, for example, were found to have accidentally inserted genes that expressed antibiotic resistance. See Antonio Regalado, “Gene-Edited Cattle have a Major Screwup in their DNA,” MIT Technology Review (August 29, 2019). https://www.technologyreview.com/2019/08/29/65364/recombinetics-gene-edited-hornless-cattle-major-dna-screwup/. Accessed October 2020.

8 Paul Slovic argues that biotechnology is perceived by consumers to be an “unknown” risk associated with feelings of “dread,” and that this accounts for public fear and skepticism. See his “The Perception of Risk,” Science 236 (1987): 280–85, and his later book on the same topic, The Perception of Risk (London: Earthscan Publications, 2004).

9 United States Department of Agriculture (USDA), “Statement on Plant Breeding Innovation,” (2018). https://www.usda.gov/media/press-releases/2018/03/28/secretary-perdue-issues-usda-statement-plant-breeding-innovation. Accessed October 2020.

10 United States Department of Agriculture (USDA), “SECURE Rule: Final Rule on the Movement of Certain Genetically Engineered Organisms,” Federal Register 85, no. 96 (May 18, 2020): 29790–838. https://www.aphis.usda.gov/brs/fedregister/BRS_2020518.pdf. Accessed October 2020.

11 USDA, “Statement on Plant Breeding Innovation.”

12 A null segregant is a descendant of a genetically engineered plant that does not retain the change induced in the parent plant.

13 USDA, “Statement on Plant Breeding Innovation.”

14 USDA SECURE Rule.

15 USDA SECURE Rule, p. 29791. See also Greg Jaffe, “USDA’s New Biotech Rule Explained,” Center for Science in the Public Interest (June 2, 2020). https://www.cspinet.org/news/biotech-blog-usda%E2%80%99s-new-biotech-rule-explained-20200602.

16 Kan Wang has suggested to me that the rule appears to imply that plants developed through sequential single-base-pair alterations might escape regulation altogether. Since the developer could self-identify as exempt from regulation at each stage of development, the rule leaves a loophole whereby developers might satisfy the letter, but not the spirit, of the regulation by making each base-pair alteration separately.

17 Erik Stokstad, “United States Relaxes Rules for Biotech Crops,” Science (May 18, 2020). Emphasis added. doi:10.1126/science.abc8305.

18 See, for example, Maywa Montenegro, “How a New Biotech Rule will Foster Distrust with the Public and Impede Progress in Science,” The Conversation (June 1, 2020) https://theconversation.com/how-a-new-biotech-rule-will-foster-distrust-with-the-public-and-impede-progress-in-science-139547, and Steve Davies and Philip Brasher, “USDA Eases Biotech Regulations to Exempt Some Crops,” AgriPulse (May 14, 2020), https://www.agri-pulse.com/articles/13694-usda-announces-regulatory-exemptions-for-ge-plants.

19 C. Funk and L. Rainie, “Public and Scientists’ Views on Science and Society,” Pew Research Center Report (January 29, 2015), https://www.pewresearch.org/science/2015/01/29/public-and-scientists-views-on-science-and-society/; C. Funk, B. Kennedy, and M. Heffron, “Public Perspectives on Food Risks,” Pew Research Center Report (November 19, 2018), https://www.pewresearch.org/science/2018/11/19/public-perspectives-on-food-risks/.

20 Court of Justice of the European Union (CJEU), “Press Release no. 111/18: Organisms Obtained by Mutagenesis are GMOs and Are, in Principle, Subject to the Obligations Laid Down by the GMO Directive,” (July 25, 2018) https://curia.europa.eu/jcms/upload/docs/application/pdf/2018-07/cp180111en.pdf

21 Confédération paysanne and Others v. Premier ministre and Ministre de l’agriculture, de l’agroalimentaire et de la forêt, Request for a preliminary ruling from the Conseil d’État, Judgment of the Court (Grand Chamber) of July 25, 2018. http://curia.europa.eu/juris/liste.jsf?language=en&td=ALL&num=C-528/16.

22 European Parliament, “Directive 2001/18/EC of the European Parliament and Council of 12 March 2001, on the deliberate release into the environment of genetically modified organisms, and repealing Council Directive 90/220/EEC,” https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:02001L0018-20190726&from=EN.

23 Confédération paysanne and Others v. Premier ministre and Ministre de l’agriculture, de l’agroalimentaire et de la forêt (2018).

24 Daniel Kahneman and Amos Tversky, “The Psychology of Preferences,” Scientific American 246 (1982); and J. M. Nebel, “Status Quo Bias, Rationality, and Conservatism about Value,” Ethics 125, no. 2 (2015): 449–76.

25 See Gregory Conko, Drew L. Kershen, Henry Miller, and Wayne A. Parrott, “A Risk-Based Approach to the Regulation of Genetically Engineered Organisms,” Nature Biotechnology 34, no. 5 (2016): 493–503. The present project supports the authors’ goal of promoting a risk-based approach to biotech regulation, but both extends and qualifies that recommendation.

26 Most standard risk-analysis texts do not incorporate considerations of distributive justice, voluntariness, or other norms that must be satisfied if risk-cost-benefit analysis (RCBA) exercises are to be appropriately structured. See, for example, David Vose, Risk Analysis (New York: John Wiley and Sons, 2000). Since risk analysts are often not trained to take such considerations into account, RCBA decision-making may often be deeply flawed. But as David Schmidtz points out, these faults are not intrinsic to RCBA, and with appropriate full-cost accounting, various different considerations, including considerations of ethics and justice, could be taken into account. David Schmidtz, “A Place for Cost-Benefit Analysis,” Noûs 35 (2001): 148–71.

27 See Daniel E. Walters and Jennifer Nash, “Public Engagement and Transparency in Regulation: A Field Guide to Regulatory Excellence,” research paper prepared for the Penn Program on Regulation’s Best-in-Class Regulator Initiative (June 2015). https://www.law.upenn.edu/live/files/4709-nashwalters-ppr-researchpaper062015.pdf. Accessed October 2020. Walters and Nash recommend a much richer menu of norms for the evaluation of administrative rulemaking, including neutrality, procedural fairness, and incorporation of diverse viewpoints.

28 Paul Slovic, “The Perception of Risk,” Science 236 (1987): 280–85, and his later book on the same topic, The Perception of Risk (London: Earthscan Publications, 2004).

29 USDA-APHIS, Coordinated Framework for Regulation of Biotechnology (June 26, 1986) https://www.aphis.usda.gov/brs/fedregister/coordinated_framework.pdf.

30 While my discussion of transparency and responsiveness here is brief, these issues are also treated by Jennifer Kuzma in a recent presentation. See her “Unpacking and Evaluating Regulatory Policy Pathways for Gene-Edited Agricultural Products,” presented as part of a conference on Gene Editing in Agriculture and Food: Social Concerns, Public Engagement and Governance (October 20, 2020, Iowa State University). Professor Kuzma’s presentation is available at https://geneeditedfoods.soc.iastate.edu/conference/

31 Jaffe, “USDA’s New Biotech Rule Explained.”

32 Plant Protection Act, Public Law 106-224, 114 Stat. 438 (June 20, 2000). Available from USDA-APHIS at https://www.aphis.usda.gov/plant_health/plant_pest_info/weeds/downloads/PPAText.pdf. Accessed October 2020.

33 For example, if a base-pair insertion or deletion shifts the codon reading frame, it is likely to have large-scale phenotypic consequences.
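
To see why a single added or deleted base can matter so much, here is a minimal, purely illustrative sketch (the DNA sequences are hypothetical and chosen only for the example): inserting one base shifts the reading frame, so every downstream three-base codon, and hence the encoded protein, changes.

```python
# Minimal illustration of a frameshift mutation: inserting a single base
# shifts the reading frame, so every downstream three-base codon changes.
# (Purely illustrative; the sequences here are hypothetical.)

def codons(seq):
    """Split a DNA sequence into consecutive three-base codons."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original = "ATGGCTTGGGAACTGGCA"
mutated = original[:4] + "C" + original[4:]  # single-base insertion after position 4

print(codons(original))  # ['ATG', 'GCT', 'TGG', 'GAA', 'CTG', 'GCA']
print(codons(mutated))   # ['ATG', 'GCC', 'TTG', 'GGA', 'ACT', 'GGC']
```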

34 Jaffe, “USDA’s New Biotech Rule Explained.”

35 See Jennifer Kuzma, “Regulating Gene-Edited Crops,” Issues in Science and Technology (Fall 2018): 80–85.

36 In A Crack in Creation (New York: Houghton Mifflin Harcourt, 2017), Jennifer Doudna and Samuel Sternberg consider possible risky or ethically problematic applications of CRISPR.

37 One might consider the case in which producers and consumers both have an interest in impartial premarket testing but have different beliefs about what that testing will show: skeptical consumers believe that testing will reveal risks, while producers believe that testing will verify that their product is safe. In that case, all parties have an interest in promoting such testing. Support for a premarket testing regime would be, in that context, a Nash equilibrium strategy, rationally choiceworthy for all concerned. On the other hand, if the public mistrusts the USDA itself, believing that the agency has been captured by the industry it regulates, then USDA testing will itself be subject to mistrust. Transparency and responsiveness might help to allay such problems, but might not be sufficient to resolve them.
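
The equilibrium claim can be made concrete with a stylized two-player game. The payoff numbers below are hypothetical, chosen only to encode the beliefs just described (producers expect testing to vindicate their product; skeptical consumers expect it to reveal risks); under those assumptions, supporting a testing regime is each party’s best response to whatever the other does.

```python
# Stylized 2x2 game illustrating the Nash-equilibrium claim in footnote 37.
# Payoff numbers are hypothetical, chosen only so that each party prefers
# "support" (back an impartial premarket testing regime) regardless of what
# the other party does, given the beliefs described in the text.

from itertools import product

# payoffs[(producer_choice, consumer_choice)] = (producer_payoff, consumer_payoff)
payoffs = {
    ("support", "support"): (3, 3),
    ("support", "oppose"):  (2, 1),
    ("oppose",  "support"): (1, 2),
    ("oppose",  "oppose"):  (0, 0),
}

def is_nash(profile):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    p, c = profile
    best_p = all(payoffs[(p, c)][0] >= payoffs[(alt, c)][0] for alt in ("support", "oppose"))
    best_c = all(payoffs[(p, c)][1] >= payoffs[(p, alt)][1] for alt in ("support", "oppose"))
    return best_p and best_c

equilibria = [prof for prof in product(("support", "oppose"), repeat=2) if is_nash(prof)]
print(equilibria)  # [('support', 'support')]
```

With these assumed payoffs, mutual support for testing is the unique equilibrium; the footnote’s further worry is that this alignment breaks down if the testing agency itself is mistrusted.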