For almost one hundred years in the United States and elsewhere, researchers, regulatory agencies, and the public have argued about what constitutes the ethical use of human subjects in biomedical experiments. Yet despite its continuing importance, much of this discussion has been forgotten. Few histories reach back further than the Nuremberg Code of 1947. In neglecting the lively early debate, truncated accounts limit our understanding of the development of research ethics in the United States.
According to Susan Lederer, during the first three decades of the twentieth century, and thus well before the horrors of Nazi Germany, biomedical researchers began to design large-scale clinical experiments for the first time. The resulting need for human subjects generated an intense public discussion that appeared in newspapers, medical journals, and even popular fiction. Many Americans did not like the idea of being used as research guinea pigs. Angry exposés of abuse appeared in pamphlets with lurid titles like “Foundlings Cheaper Than Animals.” As early as 1900, one legislator introduced to the United States Senate a bill for the regulation of scientific experiments upon human beings in the District of Columbia, albeit to no effect. In 1916, a number of concerned doctors proposed that the American Medical Association adopt a formal code of ethics.
Although no formal regulatory policy emerged, the intensity of the public outcry pushed biomedical researchers toward a form of limited self-regulation. Some began to ask clinical patients for permission to involve them in an experiment. A few tried out new serums on themselves and their families before using them in clinical trials. There was occasional discussion of the need to avoid excessive risk. In *Subjected to Science*, a pathbreaking study of this early period, Lederer argues that, contrary to popular wisdom, “ethical guidelines influenced the conduct of research with both human and animal subjects” well before World War II.