How Talking Became Human Subjects Research: The Federal Regulation of the Social Sciences, 1965–1991

Published online by Cambridge University Press:  01 January 2009

Zachary M. Schrag*
George Mason University


Copyright © Donald Critchlow and Cambridge University Press 2009

Since the late 1990s, institutional review boards, or IRBs, have become increasingly assertive in their claims that they have the moral and legal authority to control the work of researchers in the humanities and social sciences. These boards often demand that university researchers complete ethical training courses, submit their proposed research for prior approval, and modify their research strategies to the boards’ satisfaction, before they observe, survey, or interview people. Scholars who fail to obey risk denial of funding, degrees, or promotion.

Not all of these conditions are required by the federal government, and IRBs may claim powers independent of federal regulations. But they invariably point to these regulations as a key source of their authority. In particular, they draw on Title 45, section 46 of the Code of Federal Regulations (abbreviated as 45 CFR 46), known as the “Common Rule” because it has been adopted by seventeen federal agencies that sponsor research. The present Common Rule, dating from 1991, is the third version of 45 CFR 46, which was originally codified in 1974 and revised in 1981. The regulations contain several references to medical matters, such as the “subjects’ disorder or condition” and “alternative procedures or courses of treatment,” suggesting that they were written with medical experimentation in mind. Yet the definitions of “human subject” and “research” seem to cover a great deal of nonmedical, nonexperimental research.Footnote 1 This raises the question of whether the regulations should govern work in such fields as anthropology, history, political science, and sociology.

Scholars trying to answer this question sometimes base their arguments on a vague understanding of history. Some, especially supporters of IRB review, believe that regulations were developed in response not only to infamous scandals in medical experimentation, but also in reaction to specific problems in social science research. For example, the CITI Program, a training program widely used by IRBs, begins with the statement that “development of the regulations to protect human subjects was driven by scandals in both biomedical and social/behavioral research.”Footnote 2 This statement is perhaps based on Robert S. Broadhead’s unsubstantiated 1984 claim that “IRBs were given [their] responsibilities because of a history of widespread ethical violations in both biomedical and social science research.”Footnote 3 Laura Stark downplays the significance of specific scandals, but she still argues that “from the outset, human subjects protections were intended to regulate social and behavioral researchers.”Footnote 4 Other scholars, dismayed by the awkward application of current regulations to nonbiomedical research, deduce that regulators must have included social science research by mistake.Footnote 5

This article draws on previously untapped manuscript materials in the National Archives that show that regulators did indeed think about the social sciences, just not very hard.Footnote 6 Officials in the Department of Health, Education, and Welfare (DHEW) and its successor, the Department of Health and Human Services (HHS), raised sincere concerns about dangers to participants in social science research, especially the unwarranted invasion of privacy as a result of poorly planned survey and observational research. They also understood the objections raised by social scientists, debated them within the department, and sought ways to limit unnecessarily cumbersome review. Thus, the regulation of social research was not mere oversight.

The application of the regulations to the social sciences, however, was far less careful than was the development of guidelines for biomedical research. In the 1970s, medical experimentation became a subject of national debate, with lengthy hearings in Congress, a new federal law covering “biomedical and behavioral research,” and additional hearings before and deliberations by specially constituted commissions. In contrast, regulators, employed by the Public Health Service and, for the most part, trained in biomedical science, spent little time investigating actual practices of social scientists or talking with them, relying instead on generalities and hypothetical cases. They failed to define the problem they were trying to solve, then insisted on a protective measure borrowed from biomedical research without investigating alternatives. Compared to medical experimentation, the social sciences were the Rosencrantz and Guildenstern of human subjects regulation. Peripheral to the main action, they stumbled onstage and off, neglected or despised by the main characters, and arrived at a bad end.


Institutional review boards were created in response to the explosion of medical research that followed World War II. In 1944, Congress passed the Public Health Service Act, greatly expanding the National Institutes of Health (NIH) and their parent, the Public Health Service (PHS). Between 1947 and 1957, the NIH’s research grant program grew from $4 million to more than $100 million, and the total NIH budget grew from $8 million in 1947 to more than $1 billion by 1966. The same legislation authorized the NIH to open its Clinical Center, a hospital built specifically to provide its researchers with people, some of them not even sick, on whom to experiment.Footnote 7

But medical experimentation carried risks—physical and ethical. In the late 1950s, a researcher fed hepatitis viruses to children admitted to a school for the mentally disabled. Though he obtained their parents’ consent, and reasoned that the children were bound to contract hepatitis anyway, later critics accused him of treating the children as guinea pigs.Footnote 8 In 1964, New York newspapers reported that the previous year, a highly respected cancer researcher, using NIH funds, had injected cancer cells into twenty-two patients of the Jewish Chronic Disease Hospital in Brooklyn. Having satisfied himself that the procedure was perfectly safe, and that the word “cancer” would unnecessarily trouble the patients, he neither explained the experiment nor sought patient consent.Footnote 9 In 1966, Harvard medical professor Henry Beecher published an influential article in the New England Journal of Medicine cataloguing these and twenty other “examples of unethical or questionably ethical studies.”Footnote 10 Beyond such episodes, NIH director James Shannon was troubled by a more general sense that medical research was shifting from a process of observation to one of experimentation involving potentially dangerous medication and surgery.Footnote 11 In early 1964, well before Beecher’s article appeared, the NIH had appointed an internal study group to investigate the ethics of clinical research.Footnote 12

Searching for a system of safeguards, the group looked to the NIH’s own Clinical Center. Since its opening in 1953, the center had required that risky studies there be approved by an NIH medical review committee. It also required the written consent of participating patients, who were considered “member[s] of the research team.”Footnote 13 In 1965, the National Advisory Health Council, the NIH’s advisory board, recommended that a comparable system be applied to the NIH’s extramural grants program as well, and in February 1966, Surgeon General William Stewart announced that recipients of Public Health Service research grants could receive money “only if the judgment of the investigator is subject to prior review by his institutional associates to assure an independent determination of the protection of the rights and welfare of the individual or individuals involved.”Footnote 14 This “prior review” requirement was the first federal requirement for IRBs outside the government itself. The initial pronouncement clearly focused on medical research—one of the tasks of the reviewers was to determine the “potential medical benefits of the investigation.”Footnote 15

But the NIH also sponsored some social science research through its “behavioral sciences” section that offered grants in psychology and psychiatry, as well as supporting some work in anthropology and sociology, which were termed “social sciences.”Footnote 16 Would these fields be covered, too? As Dael Wolfle, a National Advisory Health Council member who co-authored the 1965 recommendation, later noted, “It was most assuredly not our intent that the regulation we recommended … be extended to research based upon survey, questionnaire, or record materials. This type of research does not involve the kinds of harm that may sometimes result from biomedical studies or other research that actually intrudes upon the subjects involved.”Footnote 17 But Shannon, director of the NIH, had his own ideas. As he later recalled, “It’s not the scientist who puts a needle in the bloodstream who causes the trouble. It’s the behavioral scientist who probes into the sex life of an insecure person who really raises hell.”Footnote 18

Psychological probes of sex lives did in fact raise hell at the congressional level, though they did so in a somewhat oblique way. In June 1965, Congressman Cornelius Gallagher held hearings to investigate what he called “a number of invasion-of-privacy matters,” including “psychological testing of Federal employees and job applicants, electronic eavesdropping, mail covers, trash snooping, peepholes in Government buildings, the farm census questionnaire, and whether confidentiality is properly guarded in income-tax returns and Federal investigative and employment files.”Footnote 19 Gallagher was particularly concerned by the use of psychological tests, such as the Minnesota Multiphasic Personality Inventory, on federal employees and job applicants.Footnote 20 But in its wide-ranging investigation, Gallagher’s subcommittee asked the Office of Statistical Standards about all the questionnaires and forms used by the federal government, and a representative of that office mentioned some mental health research sponsored by the PHS. The witness assured the subcommittee that “questions of a personal or intimate nature often are involved, but participation is entirely voluntary, and it has been our view that the issue of invasion of privacy does not arise.”Footnote 21 Nevertheless, at the end of the investigation, Gallagher and three other congressmen asked that whenever the PHS sponsored personality tests, inventories, or questionnaires, it make sure that “protection of individual privacy is a matter of paramount concern” and that participation was voluntary.Footnote 22 The PHS responded respectfully, assuring the congressman that its “policy is one of endorsing, as guidelines in the conduct of research, the principle that participation in research projects involving personality tests, inventories and questionnaires is voluntary and, in those cases involving students below the college level, that the rights and responsibilities of the parents must be respected.”Footnote 23

This response did not commit the PHS to imposing review on social science work in general, and in June 1966 the NIH sought additional perspectives from anthropologists, psychologists, sociologists, and other scholars, many of them members of the NIH’s own Study Section in the Behavioral Sciences. The assembled social scientists acknowledged such potential dangers as psychological harms and the invasion of privacy. But, as sociologist Gresham Sykes reported, even participants who wanted clearer ethical standards had “serious reservations” about the PHS’s new policy. Sykes explained their reservations in terms that would echo for decades:

There are the dangers that some institutions may be over-zealous to insure the strictest possible interpretation, that review committees might represent such a variety of intellectual fields that they would be unwieldy and incapable of reasonable judgment in specialized areas, and that faculty factions might subvert the purpose of review in the jealous pursuit of particular interests. There is also the danger that an institutional review committee might become a mere rubber stamp, giving the appearance of a solution, rather than the substance, for a serious problem of growing complexity which requires continuing discussion. Effective responsibility cannot be equated with a signature on a piece of paper.Footnote 24

Similar concerns were voiced in August at the annual meeting of the Society for the Study of Social Problems. While the society endorsed the “spirit” of Stewart’s statement, it doubted that local review committees could competently and fairly evaluate the ethics of proposed research. Like the participants at the NIH conference, the society’s members feared that committees would be too sensitive to “political and personal considerations” and insensitive to “important differences in the problems, the data, and the methods of the different disciplines.” It called on Stewart to consider alternatives to local IRBs, such as national review panels composed of experts.Footnote 25 Queried around the same time by the NIH, anthropologist Margaret Mead found the whole idea absurd. “Anthropological research does not have subjects,” she wrote. “We work with informants in an atmosphere of trust and mutual respect.”Footnote 26 The American Sociological Association complained that “the administrative apparatus required appears far too weighty and rigid for rational use in the large majority of cases in the behavioral sciences.” It also warned that “a local committee may be incompetent or biased, and may threaten the initiation and freedom of research in some cases.”Footnote 27

Ignoring these arguments, on December 12, 1966, the Public Health Service announced explicitly that “all investigations that involve human subjects, including investigations in the behavioral and social sciences” would have to undergo the same vetting as medical experiments. The announcement did claim that only the most risky projects would require “thorough scrutiny.” In contrast, “a major class of procedures in the social and behavioral sciences does no more than observe or elicit information about the subject’s status, by means of administration of tests, inventories, questionnaires, or surveys of personality or background. … Such procedures may in many instances not require the fully informed consent of the subject or even his knowledgeable participation.”Footnote 28 Surgeon General Stewart later wrote that this statement addressed the sociologists’ concerns, but he offered no evidence that any sociologist agreed. He also pledged that “should we learn that a grantee is stopping research unlikely to injure human subjects, we would express to the grantee our concerns and clarify the intent of the relevant policy.”Footnote 29

Over the next few years, some IRBs—likely a small number—began reviewing nonmedical research, leading to some confusion.Footnote 30 By 1968, a Public Health Service memorandum noted that “we have had questions from medical school committees questioning whether there was any need to review projects in psychology, from psychologically oriented committees, questioning the need to review anthropological studies, and from everyone else questioning the need to review demonstration projects.”Footnote 31 The response was that such projects did need to be reviewed, but that the policy needed to be clarified “to avoid the necessity of obtaining assurances from the YMCA, the PTA, and the Traveler’s Aid Society.”Footnote 32

Despite such concerns, in April 1971 the Department of Health, Education, and Welfare (DHEW), the PHS’s parent, applied the IRB requirement to all department-sponsored research.Footnote 33 At a meeting that month, when a review-committee member brought up the question of “behavioral and political science research,” department representatives insisted that “questionnaire procedures are definitely subject to the Department’s policy.”Footnote 34 In December 1971, the department reiterated that position in The Institutional Guide to DHEW Policy on Protection of Human Subjects, known by its cover as the “Yellow Book,” again citing “discomfort, harassment [and] invasion of privacy” as possible consequences of such research.Footnote 35

Thus, despite broad opposition from social scientists, the Public Health Service established IRB review as a requirement for a broad class of nonbiomedical research. Then, thanks to the PHS’s position within DHEW, that requirement spread to the entire department. Yet this expansion of IRB review of social and behavioral research raised little controversy. In March 1972, the American Sociological Association’s executive director commented on the 1971 policy: “We haven’t had much flak about it so far. But we do have some concern that this could become a political football, shoving aside scholarly concerns.”Footnote 36 Truly problematic rules would require the intervention of Congress. And a scandal horrific enough to attract Congress’s attention required physicians.


The second step in the spread of IRBs was a medical scandal that helped discredit all research. In July 1972, reporter Jean Heller broke the news of the Tuskegee Syphilis Study, in which the Public Health Service had observed the effects of syphilis on 399 African American men for forty years without offering them treatment.Footnote 37 The next year, Congress debated several bills to rein in medical research.

Congress saw the problem as specifically medical. Senator Hubert Humphrey (D-Minn.) proposed a national board to “review all planned medical experiments that involve human beings which are funded in whole or in part with Federal funds.”Footnote 38 Senator Jacob Javits (R-N.Y.) told the Senate that he was concerned about “psychosurgery, organ transplants, genetic manipulations, sterilization procedures for criminal behavior, brain-washing, mind control and mind expanding techniques, and, yes, even the very concept of birth and death itself.”Footnote 39 Senate hearings in February and March 1973 likewise emphasized abuses of medicine and therapy. One concern was the delivery of health services, which had failed so miserably in the Tuskegee study and in cases where government funds had supported sterilization of dubious propriety. The other was what the Senate health subcommittee called “a wide variety of abuses in the field of human experimentation.”Footnote 40 These included the introduction of new drugs and medical devices, as well as surgical procedures.

The only nonmedical research the Senate investigated was behavioral experimentation, such as B. F. Skinner’s “research into the modification of behavior by the use of positive and negative rewards and conditioning.”Footnote 41 The subcommittee was particularly concerned about such research taking place in prisons and mental hospitals, where it might be disguised as treatment. Yet the Senate’s bill proposed to regulate a much broader category: “behavioral research.” As one observer noted, “On the basis of testimony … citing abuses primarily in clinical research, psychology as a whole has found itself covered by the proposed legislation.”Footnote 42 Indeed, as time would prove, the Senate’s concern about behavior modification would lead to the regulation not just of psychology, but of all manner of behavioral and social science.

The publicity over Tuskegee and Congress’s concern prompted DHEW officials to think about replacing their departmental guidelines with formal regulations. Although they believed that the 1971 Yellow Book provided all the necessary protection, they realized that outsiders in Congress and the general public would want more than a pamphlet. In August 1972, just weeks after the Tuskegee disclosures, they began circulating memos about the need for the department to publish formal regulations in the Federal Register if it did not want to have them imposed directly by Congress.Footnote 43 As Congress continued its work, Charles R. McCarthy, then a recently arrived staffer at NIH, warned that some kind of legislation was coming. If DHEW did not come up with something impressive, Congress might establish a separate agency—outside the department—to control human experimentation.Footnote 44

As they discussed what kinds of regulations might mollify Congress, DHEW officials failed to define the scope of the discussion—whether they were debating the regulation of just medical research, or medical and behavioral research, or medical, behavioral, and social research. For the most part, their discussions assumed that the focus of any legislation and regulations would be biomedical research. This medical focus was indicated by both the recipients—medical officials within DHEW—and the title of the September 1972 memo calling for new regulations: “Biomedical Research and the Need for a Public Policy.” Likewise, the group established in January 1973 to study the problem was named the Study Group for Review of Policies on Protection of Human Subjects in Biomedical Research, with membership drawn from health agencies within the department.Footnote 45 At other times, psychiatry and psychology crept in.Footnote 46

And, on occasion, social science made an appearance as well. In September 1973, one NIH memo objected to one of the bills on the grounds that its rigid insistence on the subjects’ informed consent “places unacceptable constraints on behavioral and social science research.”Footnote 47 Donald Chalkley, the head of the NIH’s Institutional Relations Branch, complained that “by imposing on all behavioral and biomedical research and service, the rigors of a system intended to deal with the unique problems of high-risk medical experimentation, [the bill] would unduly hamper low-risk research in psychology, sociology, and education, and unnecessarily dilute the attention given to potentially serious issues in clinical investigation.”Footnote 48

Despite the confusion about the applicability of the proposals to nonbiomedical research, on October 9, 1973, DHEW announced its proposed regulations. At the core of the proposal was a reiteration of the IRB requirement that had been in place since 1971: “No activity involving any human subjects at risk supported by a DHEW grant or contract shall be undertaken unless the organization has reviewed and approved such activity and submitted to DHEW a certification of such review and approval.” The review would have to determine that “the rights and welfare of the subjects involved are adequately protected, that the risks to an individual are outweighed by the potential benefits to him or by the importance of the knowledge to be gained, and that informed consent is to be obtained by methods that are adequate and appropriate.”Footnote 49

The proposed regulations did not define “human subjects,” but they did suggest that the policy applied to “subjects at risk,” defined as “any individual who may be exposed to the possibility of injury, including physical, psychological, or social injury, as a consequence of participation as a subject in any research, development, or related activity which departs from the application of those established and accepted methods necessary to meet his needs.” This definition made no distinction among biomedical, behavioral, and social research. Two hundred comments arrived, some suggesting “limiting the policy to physical risks only [or the] differentiation of biomedical risks from behavioral risks.”Footnote 50 Within DHEW, officials voiced similar concerns. In May 1974, the Office of the Assistant Secretary for Planning and Evaluation suggested eliminating social science research from the regulations.Footnote 51 A memo prepared just after the regulations were finalized noted that some agencies within DHEW were fretting “about the extent to which these regulations would inhibit social science research.”Footnote 52

The department had little chance to respond to such critiques, since it feared Congress would strip DHEW of its powers entirely, entrusting human subjects protections to a new, independent federal agency.Footnote 53 McCarthy later conceded that “because of the time pressure imposed by Senator [Edward] Kennedy, the customary DHEW clearance points for the issuance of regulations were either bypassed or given extremely brief deadlines. The result was a set of flawed regulations.”Footnote 54 DHEW official Richard Tropp later complained, “It was understood within the department—and alluded to in the regulation’s preamble—that further negotiation would follow, to produce a consensus regulation crafted so as to be appropriate for all types of research. The urgency of the issue to the secretary’s office waned, however, and that has never happened.”Footnote 55

The final regulations were promulgated on May 30, 1974, and became 45 CFR 46. They succeeded in their main mission: persuading Congress to let the department keep control of its own research. Just weeks after the publication of the new regulations, Congress passed the National Research Act, which gave the secretary of DHEW the authority to establish regulations for IRBs, the very regulations that DHEW had just promulgated.Footnote 56

The two initiatives—one legislative, the other executive—were largely complementary. But the law diverged from the DHEW proposals in two ways. First, it required institutions receiving DHEW grants to have IRBs in place to review human subjects research, without specifying whether that included research not funded by DHEW. In contrast, the DHEW regulations applied only to grants and contracts from that department.Footnote 57 Second, the law passed by Congress limited its scope to “biomedical and behavioral research,” while DHEW’s regulations applied to all department-funded “research, development, and related activities in which human subjects are involved,” which could be construed to include social research as well.Footnote 58

Only the most careful observers noticed this latter discrepancy. Ronald Lamont-Havers, who had chaired the DHEW Study Group on the Protection of Human Subjects, had earlier distinguished four categories of human subjects research: biomedical, behavioral and psychological, societal, and educational and training.Footnote 59 Attuned to the differences among these areas, he noticed Congress’s limit and assumed that it applied to the regulations as well. In September 1974, he noted that the policy “is presently applicable only to biomedical and behavioral research” and argued that “the inclusion of ‘social’ … and ‘social sciences’” in two sections of a draft manual “is in error since the policy should deal only with biomedical and behavioral research.”Footnote 60 But technical amendments published in March 1975 failed to align the regulations with the law.Footnote 61

Embarrassed by the Tuskegee scandal and fearful of heightened congressional oversight or the establishment of a rival agency, DHEW had rushed the regulations into print. It had ignored warnings from within the department and outside commentators that the regulations might prove inappropriate for the social sciences, and it had diverged from the concerns of Congress and the law itself.


The third step in the development of the regulations came in response to a body established outside the department. Along with requiring DHEW to issue regulations, the National Research Act established a temporary National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research to explore the issues raised in the 1973 hearings. Originally authorized for only two years, the commission was extended twice, completing its work in September 1978.

The commission devoted little attention to nonbiomedical issues. In 1976, it commissioned a survey of current IRB practices that lumped together everything from psychological experiments to anthropological fieldwork as “behavioral research.”Footnote 62 The survey also sought information about harms from research but found little in the nonbiomedical categories. Of all 2,039 projects surveyed, only three reported a breach of confidentiality that had harmed or embarrassed a subject.Footnote 63 And of the 729 behavioral projects surveyed, only four reported “harmful effects.”Footnote 64

The commission did hear complaints from social scientists that universities, taking their cue from DHEW, had begun imposing IRB review on anthropology and sociology while ignoring the professional ethics of those fields. Anthropologist Murray Wax warned that “as some universities have applied and interpreted ‘human subjects protection’ they are stifling basic and traditional forms of ethnographic and anthropological field researches which had put no one at significant risks, except perhaps the fieldworker himself (or herself).”Footnote 65 At hearings held in 1977, all the sociologists who testified expressed reservations or total opposition to IRB review, while a representative of the American Anthropological Association called for IRBs to be limited to clinical or biomedical research.Footnote 66 The commission’s own staff sociologist noted that some agencies legitimately resisted the DHEW regulations “because they believe that the regulations are not appropriate to the research that they conduct. The Commission has not looked at that research. We are saying that these regulations should apply, but we have not looked in any detail at the educational research. There are a number of things that agencies do that could easily be construed to be research involving human subjects that we have not even thought about.”Footnote 67

Despite this inattention, the commission recommended defining human subjects research in a way that would encompass a great deal of social science work. In response to the National Research Act’s requirement that the commission consider “the boundaries between biomedical or behavioral research involving human subjects and the accepted and routine practice of medicine,” the commission defined “research” as “a formal investigation designed to develop or contribute to generalizable knowledge.”Footnote 68 In place of the applicability of the DHEW regulations to “subjects at risk,” the commission recommended that a human subject should be any “person about whom an investigator (professional or student) conducting scientific research obtains (1) data through intervention or interaction with the person, or (2) identifiable private information.” Taken together, these definitions expanded human subjects research immeasurably. Moreover, the commission recommended that any institution that received any federal funds for health or health research would have to institute IRB review for all human subjects research at that institution, even research not federally funded. In practice, this meant that every university researcher in the nation who asked questions of living persons would have to submit his or her studies for IRB approval.

The one big concession the commission made to the social sciences was to allow “expedited review” by a single IRB member, rather than a full board vote, for some categories of research, including surveys and interviews when “the subjects are normal volunteers and that the data will be gathered anonymously or that confidentiality will be protected by procedures appropriate to the sensitivity of the data.”Footnote 69 The choice of expedited review was a leap into the unknown. Approval (or rejection) of a project by anything less than a full IRB was forbidden by the 1974 regulations, and the commission’s IRB survey noted that in all the institutions it studied, “individual reviewers were never reported to make decisions for the committee regarding a proposal’s acceptability.”Footnote 70 Nevertheless, based on this provision, assistant staff director Barbara Mishkin asserted that “I think we have built in enough flexibility in these IRB recommendations to accommodate any social research, any social science research, any research in education, and so forth.”Footnote 71

As the commission completed its work, DHEW officials began thinking about how to translate its recommendations into new regulations. The task fell to Charles McCarthy, now the director of the Office for Protection from Research Risks (OPRR), which had been established in late 1974.Footnote 72 Knowing full well the rush in which the 1974 regulations were drafted, he had earlier argued that “when the Commission’s Report on IRB’s has been published and comments have been received, a general revision of the entire 45 CFR 46 should be undertaken.”Footnote 73 For most of a year, a committee with representatives of various agencies within the department worked on the task, coming to agreement on a general outline. The new regulations adopted the commission’s definitions of research and human subjects as well as the requirement for adequate confidentiality.Footnote 74

Unlike the commissioners, however, some DHEW officials questioned whether regulations designed for medical experimentation should really be applied to researchers who just watched and talked to other adults, leading to a debate in the spring and early summer of 1979. The department’s Office of the General Counsel argued for the narrowest applicability and the broadest exceptions. It was opposed by the Public Health Service, which included the Food and Drug Administration (FDA), the Alcohol, Drug Abuse, and Mental Health Administration (ADAMHA), and the National Institutes of Health, which in turn included McCarthy’s OPRR. The debate centered on three questions.

The first question was the degree of risk presented by survey and observational research. Deputy general counsel Peter B. Hamilton found that “most surveys are innocuous and to require IRBs to look at all such research in order to find the survey that is truly harmful, would be an unwarranted burden on IRBs that would likely distract them from concentrating on more risky research that needs their attention.” Hence, he suggested that rather than offer expedited review, the regulations should fully exclude all anonymous surveys and all survey research that did “not deal with sensitive topics, such as sexual behavior, drug or alcohol abuse, illegal conduct, or family planning.” More generally, he sought “to remove from IRB review categories of research that only through ‘worst case’ analysis present any risk to subjects, and are in almost all instances a waste of time for IRBs to review.”Footnote 75

In contrast, health officials believed that surveys and observation threatened serious harm in the form of invasion of privacy. In March 1979, ADAMHA administrator Gerald Klerman warned of the potentially “lifelong stigmatizing of individuals as a result of inappropriate or unauthorized disclosure of information about childhood behavior problems or mental illness” and noted that ADAMHA peer review groups had found inadequate protections of confidentiality. Thus, he argued that while survey and observational research might merit expedited review, they should not be excluded entirely.Footnote 76 By June, the NIH had agreed, claiming that “unethical invasions of privacy can and have occurred,” and that “inadvertent or compulsory disclosure of information collected in such research can have serious consequences for subjects’ future employability, family relationships or financial credit.” It also suggested that “some surveys can cause psychological distress for subjects.”Footnote 77 In making these claims, neither Klerman nor the NIH memo presented any specific examples of harmful or unethical research projects. Nor did they feel they had to. The NIH position hinted that regulation would be required even if the general counsel should show that all interview and observational research was innocuous.Footnote 78

The second question was whether IRB review was the right tool to protect against the risks of survey and observational research. The Office of General Counsel suggested that “less burdensome requirements might need to be imposed on survey research to provide some assurance against breach of confidentiality,” but that “the procedures in Part 46, even as we propose they be revised, are inappropriate for this purpose.”Footnote 79 The health agencies, in contrast, asserted that no alternative could “provide all of the vital protections which the IRB review process encompasses, including the review of ethical acceptability of the research, adequacy of the informed consent procedures, and procedures for insuring confidentiality of data.”Footnote 80 It is not clear why the health officials believed that IRBs were effective at these tasks. Just as they had presented no examples of harmful projects, they presented no examples of effective IRB intervention.

Finally, the two sides split over how closely the regulations should follow the specific language of the National Research Act. For example, because the law required only that institutions receiving funds establish “a board … to review biomedical and behavioral research,” Hamilton suggested requiring only review, not review and approval. And institutions receiving department funds would be required to maintain IRBs only for “biomedical or behavioral research involving human subjects” (as specified by the law) not other categories. As Hamilton noted, “if Congress had wished … to cover all human subjects research, rather than just biomedical and behavioral, it could have done so.”Footnote 81

But the health agencies were reluctant to cede power. The NIH director complained that the general counsel, rather than the PHS, had even been allowed to draft what he termed “health related regulations.”Footnote 82 And while recognizing that the general counsel’s version would best “fulfill the literal requirements of the Act,” he preferred “a reasonable interpretation of the Act” that would extend IRB review to projects not funded by the department and that was not limited to biomedical and behavioral research.Footnote 83

Despite the debate, both sides agreed that the bulk of social research should be excluded from the requirement of IRB review. Even ADAMHA’s Klerman made clear that he was primarily concerned about protecting “subjects of biomedical and behavioral research in institutions funded by PHS,” and “would be willing to go along with exemptions for research funded by the Office of Education.”Footnote 84 To this end, his office proposed that the regulations exclude “product and marketing research, historical research, journalistic research, studies on organizations, public opinion polls and management evaluations where the potential for invasion of privacy is absent or minimal,” a position adopted by the health agencies as a whole.Footnote 85 No one in the department advocated IRB review for surveys, interviews, and observations not directly concerned with health or criminal matters. What was at stake, therefore, was the precise wording of the exemptions for such projects, and the applicability of the regulations to surveys, interviews, and observations concerning physical and mental health.

On August 14, 1979, the department published draft regulations that would apply to all human subjects research—funded or not—“conducted at or supported by any institution receiving funds from the Department for the conduct of research involving human subjects,” which in practice meant every research university, or at least every one with a medical school. But it offered two alternative sets of exemptions. “Alternative A” was the general counsel’s version, excluding from review projects not dealing with “sensitive topics” and all surveys or observations when subjects could not be identified. “Alternative B,” reflecting the view of Klerman and the health agencies, offered the exemption for product or marketing research, journalistic research, historical research, and the like.Footnote 86

After months of debate, everyone could agree that the National Commission had exceeded its congressional mandate when it proposed IRB review for every interaction between a researcher and another person. The question on the table was how far to extend, and how best to phrase, the necessary exemptions. As it announced this question, the department noted, “These are ‘proposed’ regulations and public comment on them is encouraged.”Footnote 87 That public comment was not long in coming.


The fourth step in the development of the regulations was the only one in which social scientists had any real voice. In the early 1970s, as universities began requiring IRB review of projects in anthropology and sociology, researchers in those fields began complaining, though the National Commission had largely ignored their complaints. With the publication of the draft regulations in August 1979, new critics emerged, most prominently Ithiel de Sola Pool, a professor of political science at MIT. Pool became involved in IRB issues in 1975, when the MIT board told a colleague he could not interview Boston antibusing activists who were breaking the law, on the grounds that his interviews might be used against the activists. Pool was outraged that his university would block research on so important a topic, and that it would deploy its power against a part-time assistant professor. This was enough to set him bitterly against both the 1974 regulations and the new proposals, which he considered an attack on free speech and the concept of a university.Footnote 88

Pool was joined by Edward L. “Pat” Pattullo, the director of the Center for the Behavioral Sciences at Harvard. Pattullo chaired Harvard’s IRB and believed that IRB review of nonmedical research was appropriate when subjects were being deceived or otherwise unable to protect their own interests.Footnote 89 He acknowledged that talking could hurt. “The fact that a considerable number of social studies have resulted in subjects experiencing boredom, humiliation, self-doubt, and outrage I do not question,” he argued. “Further, it would be surprising if there were not others in which breaches of confidentiality, especially, have led to more dire consequences—though I am not aware of any such cases. Nevertheless … the possible harm that inheres in most social research is of a kind that we decided long ago we must risk as the necessary price for a free society.”Footnote 90 He compared social scientists to investigative journalists, evoking the triumph of the press’s coverage of Watergate.

Based on this acceptance of harm, Pattullo offered a formula that, he believed, could distinguish between the kinds of research that should proceed without review and the more dramatic interventions of the sort deployed by some social psychologists: “There should be no requirement for prior review of research utilizing legally competent subjects if that research involves neither deceit, nor intrusion upon the subject’s person, nor denial or withholding of accustomed or necessary resources.”Footnote 91 By November 1979, twelve scholarly and education associations had endorsed a proposal to insert Pattullo’s disclaimer into the regulations themselves.Footnote 92

At first, the Department of Health, Education, and Welfare (renamed the Department of Health and Human Services, or HHS, in May 1980) dismissed the complaints. McCarthy wrote that “all of these objections cited by Dr. Pool had been considered and rejected by the National Commission for the Protection of Human Subjects, and each had been considered and rejected by the Department prior to publication of the [draft regulations].”Footnote 93 But as critics commented on the draft regulations and published their complaints, the department began paying attention.Footnote 94 An unsigned memo of May 1980 noted that about 85 percent of those who formally commented on the two lists of exceptions preferred alternative A, reflecting the general counsel’s views of 1979 and widely perceived to “provide broader exemptions than alternative B.”Footnote 95

In July 1980, the new President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, the successor to the National Commission, held hearings specifically on social and behavioral research, giving Pool and other critics their most official forum yet. Following the hearings, the chairman informed the department: “We believe that efforts to protect human subjects are ultimately disserved by the extension of regulatory procedures to ever broader areas. In a word, the full panoply of prior review ought not to apply to activities in which there is no discernable risk to human subjects.” He called for exemption from review of “research involving questionnaires, interviews, or standard educational or psychological tests, in which the agreement of subjects to participate is already an implicit or explicit part of a research process which itself will involve little or no risk.”Footnote 96

But in that last clause, the commission implicitly rejected Pattullo’s argument that even risky conversations with consenting adults should proceed without review. Instead, the commission recommended exemptions for

research involving solely interview or survey procedures if (a) results are recorded in such a manner that subjects cannot reasonably be identified directly or through identifiers linked to the subjects, or (b) the research does not deal with information which, if confidentiality were breached, could place the subjects at risk of criminal prosecution, civil liability, loss of employment, or other serious adverse consequences, except in settings in which subjects may feel coerced to participate.

It also called for exemptions for surveys and interviews on any topic “if the respondents are elected or appointed public officials or persons running for public office” and to “survey activities involving solely product and marketing research, journalistic research, historical research, studies of organizations, public opinion polls, or management processes,” provided “the research presents no risk of harming subjects or of invading their privacy.” And it insisted that the regulations apply “only to research with human subjects that is conducted or supported by HHS.”Footnote 97 Taken together, these provisions exempted many more kinds of research than did either alternative A or B of the 1979 draft regulations.

Then, on November 4, 1980, Ronald Reagan won the presidency, and, McCarthy later recalled, “Everybody knew that this was not a time to try to propose a new regulation.”Footnote 98 Indeed, Secretary of HHS Patricia Roberts Harris decided to promulgate no new regulations during the transition to the new administration.Footnote 99 To get around this obstacle, McCarthy began promoting the proposed new regulations as a reduction of regulation, particularly in nonmedical fields. To make this argument, he had to distort the effects of both the 1974 regulations and their proposed replacements. First, he painted the status quo in dire terms, claiming that “current regulations extend protections to all research involving human subjects in institutions that receive HHS support” and “all behavioral and social science research involving humans is currently regulated by HHS.”Footnote 100 In a memo prepared for a superior, he suggested that the alternative to issuing new regulations was “to continue to extend coverage of all behavioral and social science research involving human subjects even if the research is essentially risk free.”Footnote 101

This was not true. Although many universities were requiring review of all research, within DHEW and HHS the regulations had only been applied to research sponsored by the Public Health Service, not other elements of the department.Footnote 102 Moreover, the department’s own general counsel had only months before assured readers of the New York Times that “the current policy applies only to research involving human subjects which is conducted or supported by HEW,” not unfunded research.Footnote 103 Although the department did believe that the law required institutions accepting any PHS funds to review all research, regardless of funding, new regulations would be neither necessary nor sufficient to reverse that interpretation. Having exaggerated the existing extent of regulation, McCarthy then claimed that the new rules were more lenient, stating that the “proposed new rules would exempt risk-free behavioral and social science research resulting in deregulation of about 80% of research.”Footnote 104 Since the record contains no hint of the complex investigation that would have been needed to determine such a figure, it was almost certainly an invention by McCarthy.Footnote 105 (Passing McCarthy’s arguments up to Secretary Harris, the director of NIH promised that the exemptions would cover “all risk-free behavioral and social science research,” though he noted that the definition of “risk-free” was debatable.)Footnote 106

McCarthy later admitted that his arguments had been deceptive. As he explained in 2004, he told the Reagan transition team that the new regulations were less stringent than the old ones. “Of course, they weren’t, but they looked like they were because we wrote some exceptions.” He pulled a similar ruse with the lame-duck secretary, packaging the new rules as “Diminished Regulations for the Protection of Human Subjects” while trusting “nobody down there in the last weeks of the Harris administration getting ready to leave office would actually read it. So they didn’t know what all that was about, but they could read the title.”Footnote 107

The general counsel’s office was still proposing to exempt almost all survey and interview research. To fight off this challenge, McCarthy wrote a memo for the surgeon general that revealed just how little justification he could muster for regulating the social sciences at all. McCarthy had to reach all the way back to the 1965 letter from Congressman Gallagher and two other members of Congress to Surgeon General Luther Terry, fretting about “the use of personality tests, inventories and questionnaires,” especially when “given to young people in our schools.” Yet these were not the types of research Pattullo and others had sought to exempt. McCarthy claimed that the “paucity of documented abuses in this kind of research is due to the success of our policy rather than the absence of need for protections,” thus conflating correlation with causation. Finally, McCarthy warned that exempting social science “would place the Department’s regulations out of conformity with the Nuremberg Code” of 1947, even though that code also required research “based on the results of animal experimentation,” and was clearly designed only for medical experiments.Footnote 108 These were strained arguments—far too strained to be published in the Federal Register—but they seem to have worked, for the regulations did not exempt all interview research. On January 13, 1981, in the final week of the Carter presidency, Secretary Harris signed the final regulations, which were then promulgated on January 26, six days into Ronald Reagan’s presidency.

The announcement accompanying the new regulations restricted mandatory review to HHS-funded research. Since 1974, the department had argued that the National Research Act required any institution receiving Public Health Service funds to review all human subjects research at that institution, regardless of the funding of the individual project. Now, however, the department announced the opposite: “The HHS General Counsel has advised that there is no clear statutory mandate in the National Research Act to support a requirement for IRB review of other than Public Health Service–funded research.”

Moreover, the regulations themselves offered what the department termed “broad exemptions of categories of research which normally present little or no risk of harm to subjects.” These included research on classroom techniques, educational tests where subjects could not be identified, and reading publicly available records. And it exempted all

research involving survey or interview procedures, except where all of the following conditions exist: (i) Responses are recorded in such a manner that the human subjects can be identified, directly or through identifiers linked to the subjects, (ii) the subject’s responses, if they became known outside the research, could reasonably place the subject at risk of criminal or civil liability or be damaging to the subject’s financial standing or employability, and (iii) the research deals with sensitive aspects of the subject’s own behavior, such as illegal conduct, drug use, sexual behavior, or use of alcohol. All research involving survey or interview procedures is exempt, without exception, when the respondents are elected or appointed public officials or candidates for public office.

(Observational research was exempted in an almost identical provision that, mysteriously, did not give blanket permission to observe public officials in public.)

The announcement boasted that taken together, these exemptions would “exclude most social science research projects from the jurisdiction of the regulations” as well as “nearly all library-based political, literary and historical research, as well as purely observational research in most public contexts, such as behavior on the streets or in crowds.”Footnote 109 Indeed, since the regulations covered only research that asked about the “sensitive aspects” and was funded directly by HHS, it probably was true that only a handful of projects each year would automatically require review.

Many critics were mollified. Richard Tropp, who had complained of the department’s failure to consult its own experts in drafting the 1974 regulations, found that the 1981 revision “address[ed] the most critical informed consent issues, and [made] sweeping reductions in the applicability of the regulation to riskless social science research.”Footnote 110 Richard Louttit of the National Science Foundation claimed that “with minor exceptions, basic research in the following fields is exempt from IRB review: anthropology, economics, education, linguistics, political science, sociology, and much of psychology.”Footnote 111 The American Sociological Association told its members, “take heart—most sociological research is now exempt from human subjects regulations.”Footnote 112 And the New York Times reported that the rules appeared “generally to satisfy the complaints of social scientists and historians who had expressed fears that the regulations would unfairly hamper their work.”Footnote 113 Pattullo thought that the regulations “still require[d] some protections that are both unnecessary and unwise,” but he was delighted that universities would be free to ignore them for research not directly funded by the government.Footnote 114 Believing that McCarthy had been holding back more zealous officials within HHS, he wrote McCarthy that the new “rules are sensible, practical, comprehensible and likely to achieve the objective which prompted them,” and thanked McCarthy for his “patience, good humor, and quiet common sense.”Footnote 115

But Ithiel de Sola Pool was less sanguine. He was grateful, he reported, for the exemptions, but he still found IRB review inappropriate for some forms of scholarship not covered by the exemptions, such as reading personal correspondence with the permission of the owner, or observing private behavior, again with permission. And while he acknowledged that technically the regulations applied only to HHS grantees, and that few social scientists fit that category, he was concerned that universities would still apply the same policies to all research, regardless of funding. He called on scholars to demand that when their universities submitted the required “assurances” to DHEW, those assurances include a version of Pattullo’s maxim. He warned his readers that “our fight is not quite over.”Footnote 116


The final phase of regulation justified Pool’s pessimism. While he and Pattullo had won key concessions in 1981, they had lost their best chance for a wholesale exclusion of social research—as advocated by the office of general counsel—or a simple formula like Pattullo’s. The incomplete list of specific exemptions they had achieved soon proved vulnerable to interagency committees working quietly and with almost no public comment. The committees’ work involved no research scandals or New York Times headlines, but it extended IRB review over most of the research that had been exempted in 1981.

The first stage of the backsliding concerned research not directly funded by the federal government. In December 1980, McCarthy opposed exempting all interview research on the grounds that “risks undertaken by individuals at their own initiative are not comparable to risks undertaken at the behest of investigators using Department funds.”Footnote 117 In other words, it was fine for a reporter to ask intrusive questions, but a government grantee had to be held to a separate standard. Yet only three months later, McCarthy announced that his office would distribute model assurances that encouraged universities to impose federal standards even on unfunded research, gutting one of the components of the compromise.Footnote 118 By 1985, Pattullo estimated that “most” universities had signed such assurances.Footnote 119

The second stage affected the regulations themselves. Ironing out the differences in human subjects policy among various federal agencies had been an official goal since at least 1974, when representatives from several agencies constituted the Interdepartmental Study Group for the Development of a Uniform Federal Policy on the Protection of Human Research Subjects.Footnote 120 But in the haste to get the 1974 and 1981 regulations promulgated in some form, officials had postponed the hard work of getting a dozen or more agencies to agree on common language. With the 1981 regulations in place, they were ready to try again. A committee, chaired by McCarthy, was appointed in October 1983. Unlike the National Commission or the President’s Commission, the interagency committee was composed strictly of federal employees. This meant that it issued no reports and held no hearings or open meetings that might have alerted social scientists about its actions, nor were its records preserved.Footnote 121

A great deal of the committee’s work was simply getting a common set of regulations—based on the 1981 HHS code—through the approval process of so many agencies. To the extent that there was serious debate, much of it seems to have concerned parochial requests by participating departments.Footnote 122 Revising the basic exemptions was not one of the committee’s main tasks, except to clean up the grammar to make the exceptions easier to understand.Footnote 123 But because the compromise of 1981 relied not on Pattullo’s straightforward formula but on the admittedly “convoluted” list that McCarthy had crafted, it proved vulnerable to even small changes.Footnote 124 When the proposed Model Policy was published in the Federal Register in June 1986, it eliminated the 1981 regulations’ provision that survey and interview research be exempted from IRB review unless it “deals with sensitive aspects of the subject’s own behavior, such as illegal conduct, drug use, sexual behavior, or use of alcohol.”Footnote 125 The announcement offered no explanation for this change. A mere five years after McCarthy had boasted of the “deregulation of about 80% of research,” the committee he chaired was proposing to bring back under regulation some unknown quantity of that research, without stating any reason for doing so.

Yet social scientists stayed quiet. While the committee kept in touch with biomedical researchers, it made no special effort to contact social scientists, trusting them, it seems, to read the Federal Register.Footnote 126 Unlike the 1979 proposals, which had called for a wholesale restructuring of 45 CFR 46, this proposed change affected just a single subparagraph of the regulations, making its significance harder to grasp. And perhaps most important, Ithiel de Sola Pool had died in 1984, and no one emerged to take his place.

The six comments that the committee later chose to paraphrase in the Federal Register called for more regulation, not less. The suggestion that stuck was that “the language be broadened to show that harming an individual’s reputation in the community was a risk as well as financial standing and employability.” Despite the objections of the representative of the Agency for International Development, McCarthy’s committee agreed, eliminating the exemption for survey, interview, and observation research if “(i) Information obtained is recorded in such a manner that human subjects can be identified, directly or through identifiers linked to the subjects; and (ii) any disclosure of the human subjects’ responses outside the research could reasonably place the subjects at risk of criminal or civil liability or be damaging to the subjects’ financial standing, employability, or reputation.”Footnote 127 This was enough to wake up some critics. Several called for much broader exemptions, with one arguing that the exemptions were “written primarily for medical and health research and should not apply to involvement of human subjects for general business interviews or surveys,” others arguing for an exemption for business research, and another asking for exemptions for all minimal-risk research. Another specifically complained that “reputation is a subjective term that is difficult to define operationally” and “suggested that the wording be changed to limit exceptions to specific risks of ‘professional and sociological damage.’”

The interagency committee noted all of these complaints in its Federal Register announcement of the new regulations, and replied simply that it “believes that the exemptions are sufficiently clear so that all types of research, not just biomedical or health research, may be reviewed using the specified criteria. In addition, the Committee has indicated that the exemptions … provide for the exemption of certain research including much of the research used by business (e.g., survey research) in which there is little or no risk.”Footnote 128 And that was that. On June 18, 1991, the new regulations went into effect. With the deletion of twenty-eight words about sensitive aspects, and the addition of two words—“or reputation”—the regulators had retaken much of the ground they had ceded ten years earlier.

A final decision rendered nearly meaningless even the modified exemptions. When the Office of General Counsel had first proposed the exemptions in March 1979, it had made explicit that “the regulations, as proposed, do not require any independent approval of a researcher’s conclusion that his or her research is within one of the exceptions. In taking this approach for purposes of discussion, we have weighed the possibility of abuse against the administrative burdens that would be involved in introducing another element of review which might not be very different from the review that the exceptions are intended to eliminate.”Footnote 129 But this interpretation did not make it into the Federal Register, leaving the regulations themselves ambiguous. As a result, in the early 1980s, some IRB chairs began telling researchers that they could not declare their own research exempt, but had to submit proposals for IRB review just to determine an exemption. In 1983, Richard Louttit of the National Science Foundation, who had helped draft the 1981 regulations, declared such policies “contradictory” and advised that, as far as grant proposals went, a principal investigator and institutional official were sufficient judges of what was or was not exempt.Footnote 130

Within HHS, officials took a somewhat more open position, leaving it up to individual institutions to decide who was wise enough to interpret the exemptions. In 1988, the assistant secretary for health told a correspondent, “In some institutions, the official who signs the form HHS-596 indicates on the form whether or not a project is considered exempt from required IRB review; in others, the Principal Investigator makes that determination; and in others, every project must be submitted to the IRB chairperson for a determination.” Any of these choices, he suggested, complied with the regulations.Footnote 131

But in 1995, the department reversed this stance. In a letter of official guidance sent to IRBs around the country, OPRR now advised that “investigators should not have the authority to make an independent determination that research involving human subjects is exempt and should be cautioned to check with the IRB or other designated authorities concerning the status of proposed research or changes in ongoing research.”Footnote 132 Although OPRR staff based the letter on their sense that some institutions were misreading the exemptions, the office presented no evidence that this was common, nor did it investigate the effect of the change on social science research in general.Footnote 133 In other words, with no investigation or public comment, OPRR effectively expanded IRB jurisdiction over the categories from which it had been so carefully excluded in 1981.

The result was a resurgence of IRB activity. Scholars who had been trained during the lull in regulations were astonished to learn that their work would now be subject to review. IRBs and regulators now claimed jurisdiction over oral history, journalism, and folklore—fields that had barely been discussed in the 1970s and 1980s.Footnote 134 In 1994, anthropologist Carolyn Fluehr-Lobban recalled that “the modus vivendi for the 1970s and much of the 1980s was that most of behavioral science research fell into the … low-risk category and was therefore exempt from federal regulation.” But she had heard enough complaints from anthropologists to suggest that the modus vivendi was over, and that “even small-scale, individual research projects are subject to institutional review.”Footnote 135 In a 1996 essay nostalgic for the methods used by University of Chicago sociologists in the decade and a half after World War II, Alan Sica feared that replicating their work would be made impossible by the “demand for legalistic informed consent documents to be ‘served’ on every subject scrutinized or spoken with by the romantic sociologist” and the “hawkeyed overseers of informed consent.”Footnote 136 Forgotten, it seems, was the 1981 promise that the exemptions would “exclude most social science research projects from the jurisdiction of the regulations.”

By 2006, a subcommittee of the American Association of University Professors complained of the regulations whose 1981 incarnation had been hailed as a grand compromise. The subcommittee lamented that the rules “give no weight at all to the academic freedom of researchers or to what the nation may lose when research is delayed, tampered with, or blocked by heavy-handed IRBs,” and it suggested revising the regulations to make clear that “research on autonomous adults whose methodology consists entirely in collecting data by surveys, conducting interviews, or observing behavior in public places, be exempt from the requirement of IRB review—straightforwardly exempt, with no provisos, and no requirement of IRB approval of the exemption.”Footnote 137 The goal was one Pool and Pattullo thought they had achieved a quarter-century earlier.


Why did the federal government require universities to impose IRB review on research in the humanities and social sciences?

It is easier to list the factors that did not shape the regulations. First, the regulations were not based on clear instructions from Congress. During the hearings that led up to the National Research Act of 1974, and the official reports that accompanied it, Congress showed no interest in regulating sociology or anthropology, much less linguistics, folklore, oral history, or journalism. The act itself specified its applicability only to biomedical and behavioral research, and the department’s own office of general counsel objected to extending the regulations beyond these categories. But the Public Health Service had followed its own path since 1965, and it did not afterward let Congress sway it from asserting jurisdiction over social science research. Whether the regulators exceeded their legal authority is a question best left to legal scholars—and perhaps the courts. But it is certain that they interpreted the statute in ways that would have surprised its authors.

Second, the application of the regulations to these fields was not based on national scandals like the Tuskegee Syphilis Study or behavior-control experiments in prisons. Some social science research was controversial in the 1960s and 1970s and remains controversial today. In 1983, McCarthy publicly claimed that he had been concerned about famous scandals involving jury bugging and sociologist Laud Humphreys’s deception of men who sought sex in public restrooms.Footnote 138 But the memos McCarthy wrote while helping to craft the regulations did not even mention these events. Moreover, jury bugging had been outlawed, and Pattullo’s formula allowed for IRB review of deceptive research, so those scandals cannot explain HHS’s refusal to adopt Pattullo’s proposal.Footnote 139 Nor did the relatively lax 1981 regulations result in a flood of unethical research—at least not one that was cited as justification for the restrictions imposed in 1991 and afterward.

Third, the regulations were not based on the study of the rights and responsibilities of social scientists. In particular, the regulations were not based on empirical evidence that IRB review protected participants in social science research. Such review was just emerging in the 1970s, and information about the review process was scant. The one survey of IRB review of sociology, conducted in 1978, found that IRB meddling was far more common than harm to subjects, and “government actions serve primarily to erode the integrity of research and the autonomy of those who do it.”Footnote 140 Expedited review, which today’s IRB advocates point to as a triumph of accommodation, was never even tested before it was added to the regulations. All of these omissions are best expressed in McCarthy’s December 1980 memo, which cited no congressional interest, scandal, or empirical research since Cornelius Gallagher’s letter of 1965.

What, then, did drive the regulation of social science research? One must acknowledge a sincere desire to protect the privacy of participants in research. Gallagher’s concern for privacy helped set in motion the regulation of the social sciences, and Gerald Klerman’s fear for research participants kept some interview and survey research subject to IRB review. But one also should note that Gallagher only called for policies (not formal regulations) governing conversations involving defined power relationships, such as employer-employee and principal-student. And even Klerman envisioned IRB review only for “research in the health and mental health fields,” and he advocated exemption from review for “product and marketing research, journalistic research, studies on organizations, historical research, public opinion polls and management evaluations where the potential for invasion of privacy is absent or minimal.”Footnote 141 He and other health officials objected to Pattullo’s blanket exemption because they wanted to keep review for a small set of studies, not because they advocated review for routine social science research using interviews, surveys, and observation.

The less attractive factor was simple bureaucratic turf-grabbing, and the breadth of the regulations reflects the reluctance of any bureaucracy to renounce authority. Throughout the process of drafting the regulations, the most powerful, determined players came from the Public Health Service, and they designed rules primarily for health research while resisting challenges from their parent department, Congress, the White House, and outside critics. This turf fight led to the rush in 1974 to publish regulations before Congress could pass a law, and the rush in 1981 to avoid giving the incoming Reagan administration the chance to reexamine the issue with a fresh eye. Had the department allowed more time for debate, it might have drafted the regulations more carefully. After 1981, haste was less of an issue, but bureaucratic imperatives were still crucial. In the interagency process, the Agency for International Development and the National Science Foundation, each of them sponsors of social science research, did get a seat at the table. But OPRR, part of a health agency, overruled AID’s concerns and later interpreted the exemptions in a way the NSF’s representative found absurd.

The result was a set of human subjects regulations that were written by health officials to serve the needs of health research. The announcement of the 1974 regulations came with the promise that “policies are also under consideration which will be particularly concerned … with the subject of social science research,” but neither DHEW nor HHS ever developed a policy particularly concerned with the social sciences.Footnote 142 At best, they offered a few adaptations or exceptions to the regulations developed for medical experimentation, but a basketball court with some holes in the floor is not a golf course. From the first version of the policies, the social sciences were included, but only in the marginal position they still occupy today.



1. Office of Science and Technology Policy et al., “Federal Policy for the Protection of Human Subjects: Notices and Rules,” Federal Register 56 (18 June 1991): 28002–28032.

2. Cohen, Jeffrey M., Bankert, Elizabeth, and Cooper, Jeffrey A., “History and Ethics,” CITI Course in the Protection of Human Research Subjects, (30 October 2006).

3. Broadhead, Robert S., “Human Rights and Human Subjects: Ethics and Strategies in Social Science Research,” Sociological Inquiry 54 (April 1984): 107. Broadhead cites two articles about controversial research, neither of which shows that such controversy led to IRBs’ powers.

4. Stark, Laura, “Victims in Our Own Minds? IRBs in Myth and Practice,” Law & Society Review 41 (December 2007): 779.

5. Wax, Murray L., “Human Rights and Human Subjects: Strategies in Social Science Research,” Sociological Inquiry 55 (October 1985): 423; Lederman, Rena, “The Perils of Working at Home: IRB ‘Mission Creep’ as Context and Content for an Ethnography of Disciplinary Knowledges,” American Ethnologist 33 (November 2006): 486; Kerr, Robert L., “Unconstitutional Review Board? Considering a First Amendment Challenge to IRB Regulation of Journalistic Research Methods,” Communication Law & Policy 11 (2006): 412; Gunsalus, C. K. et al., “The Illinois White Paper—Improving the System for Protecting Human Subjects: Counteracting IRB ‘Mission Creep,’” University of Illinois Law & Economics Research Paper No. LE06-016, 2006, (25 May 2007), 4.

6. Stark, Laura Jeanine Morris, “Morality in Science: How Research Is Evaluated in the Age of Human Subjects Regulation” (Ph.D. diss., Princeton University, 2006), uses National Archives record group 443, but she reads different documents to answer different questions from those posed here.

7. Office of NIH History, “A Short History of the National Institutes of Health,” (3 March 2008); Jonsen, Albert R., The Birth of Bioethics (New York, 1998), 142.

8. Rothman, David J. and Rothman, Sheila M., The Willowbrook Wars: Bringing the Mentally Disabled into the Community (1984; reprint, New Brunswick, N.J., 2005), 260–66; Sullivan, Walter, “Project on Hepatitis Research Is Now Praised by State Critic,” New York Times, 24 March 1971.

9. Langer, Elinor, “Human Experimentation: Cancer Studies at Sloan-Kettering Stir Public Debate on Medical Ethics,” Science 143 (7 February 1964): 552.

10. Beecher, Henry K., “Ethics and Clinical Research,” New England Journal of Medicine 274 (16 June 1966): 1355.

11. James A. Shannon, interview by Mark Frankel, New York City, 13 May 1971. Recording in author’s possession.

12. House Committee on Government Operations, The Use of Social Research in Federal Domestic Programs: Part IV—Current Issues in the Administration of Federal Social Research, 90th Cong., 1st sess., 1967, 217.

13. Curran, William J., “Governmental Regulation of the Use of Human Subjects in Medical Research: The Approach of Two Federal Agencies,” Daedalus 98 (Spring 1969): 575.

14. Public Health Service, PPO #129: Clinical Investigations Using Human Subjects, 8 February 1966, Res 3-1. Human Subjects Policy & Regulations 1965–67, National Institutes of Health, Central Files of the Office of the Director, NIH, 1960–1982, Record Group 443, National Archives, College Park, Md. (hereafter RG 443).

15. Haggarty, James A., “Applications Involving Research on Human Subjects,” 28 February 1966, Res 3-1. Human Subjects Policy & Regulations 1965–67, RG 443.

16. See NIH Study Committee, Biomedical Science and Its Administration (Washington, D.C., 1965), 130–31. Psychology seems to have dominated. In Fiscal Year 1972, for example, the NIH awarded fifty-eight research training grants in psychology, thirteen in sociology, and six in anthropology. Senate Committee on Labor and Human Resources, National Research Service Award Act, S. Report 93-381, 93d Cong., 1st sess., 1973, 10.

17. Dael Wolfle to Patricia Harris, 11 December 1980, box 25, Human Subjects Corresp. 19811/2, Ithiel de Sola Pool Papers, MC 440, Institute Archives and Special Collections, MIT Libraries, Cambridge, Massachusetts.

18. Shannon, interview by Frankel.

19. House Committee on Government Operations, Special Inquiry on Invasion of Privacy, 89th Cong., 1st sess., 1965, 5.

20. Gallagher, Cornelius E., “Why House Hearings on Invasion of Privacy,” American Psychologist, November 1965, reprinted in House Committee on Government Operations, Special Inquiry on Invasion of Privacy, 397–99.

21. House Committee on Government Operations, Special Inquiry on Invasion of Privacy, 295.

22. Gallagher to Luther L. Terry, Surgeon General, 13 September 1965, Res 3-1. Human Subjects Policy & Regulations 1965–67, RG 443.

23. Philip R. Lee to Gallagher, 22 November 1965, Res 3-1. Human Subjects Policy & Regulations 1965–67, RG 443.

24. Sykes, Gresham M., “Feeling Our Way: A Report on a Conference on Ethical Issues in the Social Sciences,” American Behavioral Scientist 10 (June 1967): 11.

25. “Minutes,” Social Problems 14 (Winter 1967): 347.

26. Mead, Margaret, “Research with Human Beings: A Model Derived from Anthropological Field Practice,” Daedalus 98 (Spring 1969): 361.

27. American Sociological Association, Resolution No. 9, 1 September 1966, in House Committee on Government Operations, The Use of Social Research, Part IV, 256–57.

28. Public Health Service, “Investigations Involving Human Subjects, Including Clinical Research: Requirements for Review to Insure the Rights and Welfare of Individuals: Clarification,” 12 December 1966, Res 3-1. Human Subjects Policy & Regulations 1965–67, RG 443.

29. House Committee on Government Operations, The Use of Social Research, Part IV, 220.

30. Although there was no comprehensive study of IRBs before 1976, anecdotal evidence suggests that universities began applying IRB requirements to nonmedical research in a large way only after 1971. See Bernard Barber, “Some Perspectives on the Role of Assessment of Risk/Benefit Criteria in the Determination of the Appropriateness of Research Involving Human Subjects,” December 1975, p. 3, box 5, meeting #15(A), tabs 14–17, National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research, Collection, National Reference Center for Bioethics Literature, Kennedy Institute of Ethics, Georgetown University (hereafter NCPHS-GU).

31. Confrey, Eugene A., “Status of action to revise policy statement on the use of human subjects in PHS-sponsored activities, and to improve the implementation of these policies,” 18 June 1968, Res 3-1. Human Subjects Policy & Regulations 1968–72, RG 443.

32. Hatchett, Stephen P., “Status Report of Experience with PPO #129,” 31 May 1968, Res 3-1. Human Subjects Policy & Regulations 1968–72, RG 443.

33. “Origins of the DHEW Policy on Protection of Human Subjects,” in Senate Committee on Labor and Human Resources, National Advisory Commission on Health Science and Society, 92d Cong., 1st sess., 1971, 1–3.

34. Conner, Mark E., “Summary Minutes: DHEW Policy on Protection of Human Subjects, Meeting at Massachusetts General Hospital, April 30, 1971,” Res 3-1. Human Subjects Policy & Regulations 1968–72, RG 443.

35. Department of Health, Education, and Welfare, The Institutional Guide to DHEW Policy on Protection of Human Subjects (Washington, D.C., 1971), 2.

36. Welsh, James, “Protecting Research Subjects: A New HEW Policy,” Educational Researcher, March 1972, 12.

37. Jones, James H., Bad Blood: The Tuskegee Syphilis Experiment (1981; reprint, New York, 1993), 1.

38. Senate, A Bill to Establish Within the Executive Branch an Independent Board to Establish Guidelines for Experiments Involving Human Beings, S. 934, 93d Cong., 1st sess., 1973.

39. Javits, Jacob K., remarks, Congressional Record, 15 February 1973.

40. Senate Committee on Labor and Human Resources, National Research Service Award Act, S. Report 93-381, 93d Cong., 1st sess., 1973, 23.

41. Senate Committee on Labor and Human Resources, National Research Service Award Act, 15.

42. Sharland Trotter, “Strict Regulations Proposed for Human Experimentation,” APA Monitor 5 (February 1974): 8.

43. Frankel, Mark, “Public Policymaking for Biomedical Research: The Case of Human Experimentation” (Ph.D. diss., George Washington University, 1976), 187 and 298; Richard B. Stephenson to Robert P. Akers, 4 October 1972, Res 3-1-A Study Group Review of Policies 1973–75, RG 443.

44. Charles R. McCarthy to Study Group for Review of Policies on Protection of Human Subjects in Biomedical Research, 3 May 1973, Res 3-1. Human Subjects Policy & Regulations 1973–82, RG 443.

45. Frankel, “Public Policymaking for Biomedical Research,” 187, 202, 298.

46. Thomas J. Kennedy Jr. to Acting Director, NIH, 23 March 1973, Res 3-1. Human Subjects Policy & Regulations 1973–82, RG 443.

47. Kennedy to Assistant Secretary for Health, 7 September 1973, Res 3-1. Human Subjects Policy & Regulations 1973–82, RG 443.

48. D. T. Chalkley, draft letter to the editor of the Christian Century, 5 April 1974, Res 3-4. National Commission for the Protection of Human Subjects, Folder #1, 1974–75, RG 443.

49. DHEW, “Protection of Human Subjects: Proposed Policy,” Federal Register 38 (9 October 1973): 27882.

50. DHEW, “Protection of Human Subjects,” Federal Register 39 (30 May 1974): 18914, 18917.

51. Ronald Lamont-Havers to Robert S. Stone, 13 May 1974, Res 3-1-b-1. Interdept. Work. Group Uniform Fed. Pol., RG 443.

52. Thomas S. McFee to secretary, 5 June 1974, Res 3-1-B. Proposed Policy Protection Human Subjects 1974, RG 443.

53. McCarthy to Study Group for Review of Policies on Protection of Human Subjects in Biomedical Research, 3 May 1973, Res 3-1. Human Subjects Policy & Regulations 1973–82, RG 443; David F. Kefauver, cited by Frankel, “Public Policymaking for Biomedical Research,” 275 and 320 n. 406.

54. McCarthy, Charles R., “Reflections on the Organizational Locus of the Office for Protection from Research Risks,” in National Bioethics Advisory Commission, Ethical and Policy Issues in Research Involving Human Participants (Washington, D.C., 2001), (4 November 2006), H-8.

55. Tropp, Richard A., “A Regulatory Perspective on Social Science Research,” in Beauchamp, Tom L. et al., eds., Ethical Issues in Social Science Research (Baltimore, 1982), 39.

56. National Research Act, PL 93-348, 88 Stat 342.

57. “Mechanisms for Applying Ethical Principles to the Conduct of Research Involving Human Subjects: The Institutional Review Board,” preliminary draft, 5 November 1976, p. 7, box 11, meeting #24, tabs 5–6, NCPHS-GU.

58. National Research Act, sec. 202.

59. Lamont-Havers, “Discussion Paper re DHEW Administration of Ethical Issues Relating to Human Research Subjects,” draft, 3 April 1974, Res 3-1-b-1. Interdept. Work. Group Uniform Fed. Pol., RG 443.

60. Lamont-Havers to Deputy Director, NIH, 5 September 1974, Res 3-1-B. Proposed Policy Protection Human Subjects 1974, RG 443.

61. Department of Health, Education, and Welfare, “Protection of Human Subjects: Technical Amendments,” Federal Register 40 (13 March 1975): 11854–58.

62. Gray, Bradford H., Cooke, Robert A., and Tannenbaum, Arnold S., “Research Involving Human Subjects,” Science, new series, 201 (22 September 1978): 1094–101.

63. Survey Research Center, “Research Involving Human Subjects,” 2 October 1976, box 11, meeting #23, tab 3(a), NCPHS-GU.

64. Pattullo, E. L., “Modesty Is the Best Policy: The Federal Role in Social Research,” in Beauchamp et al., eds., Ethical Issues in Social Science Research, 382. The same could be said for studies conducted by the federal government itself. In 1979, officials within the Department of Health, Education, and Welfare found that most surveys were conducted without IRB review, “and there is no evidence of any adverse consequences.” “Applicability to Social and Educational Research,” attached to Peter Hamilton to Mary Berry et al., 27 March 1979, Res 3-1-B. Proposed Policy Protections Human Subjects 1978–79, RG 443.

65. Murray Wax to Bradford Gray, 23 April 1976, box 8, meeting #18, tab 14, NCPHS-GU.

66. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (NCPHS), Appendix to Report and Recommendations: Institutional Review Boards (Washington, D.C., 1978); NCPHS, Transcript of the Public Hearings, April 5, April 15, and May 3, 1977 on Institutional Review Boards (Bethesda, and Springfield, Va., 1977), 692.

67. NCPHS, Transcript, Meeting #39, February 1978 (Bethesda, 1978), 44.

68. In recent years, federal officials have suggested that some scholarship, including most oral history, does not require IRB review because it is not generalizable. Since the term “generalizable” was introduced primarily to distinguish research from the practice of medicine, this recent usage has scant relation to the intent of the commission. See NCPHS, Transcript, Meeting #15, 13–16 February 1976 (Bethesda, 1976), 317–18; James D. Shelton, “How to Interpret the Federal Policy for the Protection of Human Subjects or ‘Common Rule’ (Part A),” IRB: Ethics and Human Research 21 (November–December 1999): 6; Bruce Craig, “Oral History Excluded from IRB Review,” Perspectives, December 2003.

69. NCPHS, “Institutional Review Board; Report and Recommendations,” Federal Register 41 (30 November 1978): 56175, 56182.

70. NCPHS, Transcript, Meeting #23, October 1976 (Bethesda, 1976), 2-52–2-57; Survey Research Center, “Research Involving Human Subjects,” 2 October 1976, box 11, meeting #23, tab 3(a), NCPHS-GU.

71. NCPHS, Transcript, Meeting #39, February 1978, 44.

72. Charles R. McCarthy, interview by Patricia C. El-Hinnawy, Oral History of the Belmont Report and the NCPHS, 22 July 2004, (30 December 2006).

73. McCarthy and Donna Spiegler to Joel M. Mangel and Richard A. Tropp, 2 August 1978, Res 3-1. Human Subjects Policy & Regulations 1973–82, RG 443.

74. Hamilton to Mary Berry et al., 27 March 1979, Res 3-1-B. Proposed Policy Protections Human Subjects 1978–79, RG 443.

75. “Applicability to Social and Educational Research.”

76. Gerald L. Klerman to Assistant Secretary for Health and Surgeon General, 30 March 1979, Res 3-1-B. Proposed Policy Protections Human Subjects 1978–79, RG 443.

77. Julius Richmond to Acting General Counsel, draft, 12 June 1979, Res 3-1-B. Proposed Policy Protections Human Subjects 1978–79, RG 443.

78. Richmond to Acting General Counsel, draft, 12 June 1979.

79. “Applicability to Social and Educational Research.”

80. Richmond to Acting General Counsel, draft, 12 June 1979.

81. Hamilton to Berry et al., 27 March 1979.

82. Donald F. Frederickson, Director, NIH, to Assistant Secretary for Health and Surgeon General, 18 April 1979, Res 3-1-B. Proposed Policy Protections Human Subjects 1978–79, RG 443.

83. Richmond to Deputy General Counsel, draft memorandum, attached to Donald F. Frederickson, Director, NIH, to Assistant Secretary for Health and Surgeon General, 18 April 1979, Res 3-1-B. Proposed Policy Protections Human Subjects 1978–79, RG 443.

84. Klerman to Dick Beattie, Rick Cotton, and Peter Hamilton, 11 June 1979, Res 3-1-B. Proposed Policy Protections Human Subjects 1978–79, RG 443.

85. Klerman to Assistant Secretary for Health and Surgeon General, 30 March 1979; Hamilton to the Secretary, draft memo, 4 June 1979, Res 3-1-B. Proposed Policy Protections Human Subjects 1978–79, RG 443.

86. “Proposed Regulations Amending Basic HEW Policy for Protection of Human Research Subjects,” Federal Register 44 (14 August 1979): 47693.

87. “Proposed Regulations Amending Basic HEW Policy,” 47688.

88. Pool to F. William Dommel, 8 November 1979, box 24, Human Subjects Mailings 2/4, Pool papers; Pool, “Censoring Research,” Society, November–December 1980, 40; Pool, “Prior Restraint,” New York Times, 16 December 1979; Pool, “Protecting Human Subjects of Research: An Analysis on Proposed Amendments to HEW Policy,” PS 12 (Autumn 1979): 452.

89. E. L. Pattullo, e-mail to the author, 14 August 2007.

90. Levine, Robert J. et al., panelists, “The Political, Legal, and Moral Limits to Institutional Review Board (IRB) Oversight of Behavioral and Social Science Research,” in Knudson, Paula, ed., PRIM&R Through the Years: Three Decades of Protecting Human Subjects, 1974–2005 (Boston, 2006), 38–40.

91. Pattullo, Edward L., “The Political, Legal, and Moral Limits to Institutional Review Board (IRB) Oversight of Behavioral and Social Science Research,” in Knudson, ed., PRIM&R Through the Years, 40.

92. American Association of University Professors, “Regulations Governing Research on Human Subjects,” Academe (December 1981): 363. The full text is printed as J. W. Peltason, “Comment on the Proposed Regulations from Higher Education and Professional Social Science Associations,” IRB: Ethics and Human Research 2 (February 1980): 10.

93. Director, OPRR, to Assistant Secretary for Health and Surgeon General, 15 October 1979, Res 3-1-B. Proposed Policy Protections Human Subjects 1979–80, RG 443.

94. For a range of critics, see Society (November–December 1980).

95. “Issues Related to HHS Human Subject Research,” c. 20 May 1980, Res 3-1-B. Proposed Policy Protections Human Subjects 1979–80, RG 443.

96. Morris Abram to Patricia Roberts Harris, 18 September 1980, Res 3-4. President’s Commission for Study of Ethical Problems in Medicine & Res, Folder #4, 1978–80, RG 443.

97. Abram to Harris, 18 September 1980.

98. McCarthy, interview by El-Hinnawy.

99. Federickson to the Secretary, 21 November 1980, Res 3-1-B. Proposed Policy Protections Human Subjects 1979–80, RG 443.

100. McCarthy to Director, NIH, 20 November 1980, Res 3-1-B. Proposed Policy Protections Human Subjects 1979–80, RG 443.

101. Richmond to the Secretary, draft, 25 November 1980, Res 3-4. President’s Commission for Study of Ethical Problems in Medicine & Res. 1981—Folder #1, RG 443.

102. “Applicability to Social and Educational Research”; Tropp, Richard A., “What Problems Are Raised When the Current DHEW Regulation on Protection of Human Subjects Is Applied to Social Science Research?” in NCPHS, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research, Appendix, Volume II (Washington, D.C., 1978), 18–1.

103. Bernstein, Joan Z., “The Human Research Subjects H.E.W. Wants to Protect,” New York Times, 24 January 1980; Ithiel de Sola Pool, “Censoring Research,” Society (November–December 1980): 39.

104. McCarthy to Director, NIH, 20 November 1980, Res 3-1-B. Proposed Policy Protections Human Subjects 1979–80, RG 443.

105. Alexander Capron of the President’s Commission later ridiculed a similar figure, which estimated exclusion of 50–80 percent of social research. Capron, Alexander, “IRBs: The Good News and the Bad News,” in Knudson, ed., PRIM&R Through the Years, 72.

106. Federickson to the Secretary, 21 November 1980, Res 3-1-B. Proposed Policy Protections Human Subjects 1979–80, RG 443.

107. McCarthy, interview by El-Hinnawy.

108. Richmond to the Secretary, 8 January 1981, Res 3-4. President’s Commission for Study of Ethical Problems in Medicine & Res. 1981—Folder #1, RG 443. The memo indicates that it was drafted by McCarthy on 29 December 1980 and revised on 30 December.

109. Department of Health and Human Services, “Final Regulations Amending Basic HHS Policy for the Protection of Human Research Subjects,” Federal Register 46 (26 January 1981): 8369, 8373, 8386.

110. Tropp, “A Regulatory Perspective on Social Science Research,” 398.

111. Louttit, Richard T., “Government Regulations: Do They Facilitate or Hinder Social and Behavioral Research?” in Sieber, Joan E., ed., NIH Readings on the Protection of Human Subjects in Behavioral and Social Science Research: Conference Proceedings and Background Papers (Frederick, Md., 1984), 180.

112. Huber, Bettina, “New Human Subjects Policies Announced; Exemptions Outlined,” American Sociological Association Footnotes, November 1981, 1.

113. Reinhold, Robert, “New Rules for Human Research Appear to Answer Critics’ Fear,” New York Times, 22 January 1981.

114. Pattullo, E. L., “How General an Assurance?” IRB: Ethics and Human Research 3 (May 1981): 8.

115. E. L. Pattullo, e-mail to the author, 14 August 2007; Pattullo to McCarthy, 30 January 1981, Res 3-1. Human Subjects Policy & Regulations 1973–82, RG 443.

116. Pool to Members of the Committee of Concern About Human Subjects Regulations and Other Interested Parties, 30 January 1981, Res 3-1. Human Subjects Policy & Regulations 1973–82, RG 443.

117. Richmond to the Secretary, 8 January 1981

118. Pattullo, “How General an Assurance?” 8–9.

119. Pattullo, “Governmental Regulation of the Investigation of Human Subjects in Social Research,” Minerva 23 (1985): 529.

120. Lamont-Havers to Robert S. Stone, 13 May 1974, Res 3-1-b-1. Interdept. Work. Group Uniform Fed. Pol., RG 443.

121. Joan P. Porter, interview by author, Washington, D.C., 2 August 2007. In 2007, my Freedom of Information Act request for records of the Common Rule was met with the reply that “no responsive records” exist.

122. “Concurrences of Departments and Agencies Including Proposed Departures from Model Policy,” 3 May 1985, RES 3-1-D Proposed Mondel [sic] Federal Policy Protection of Human Subjects, National Institutes of Health, OD Central Files, Office of the Director, NIH.

123. Porter, interview.

124. McCarthy, interview by El-Hinnawy.

125. Office of Science and Technology Policy, “Proposed Model Federal Policy for Protection of Human Subjects,” Federal Register 51 (3 June 1986): 20206.

126. Porter, interview.

127. Ibid.; “Federal Policy for the Protection of Human Subjects: Notice and Proposed Rules,” Federal Register 53 (10 November 1988): 45663, 45672.

128. Office of Science and Technology Policy et al., “Federal Policy for the Protection of Human Subjects: Notices and Rules,” Federal Register 56 (18 June 1991): 28007.

129. “Applicability to Social and Educational Research.”

130. Louttit, “Government Regulations,” in Sieber, ed., NIH Readings, 179.

131. Robert E. Windom to Charlotte Kitler, 13 September 1988, RES-6-01. Human Subjects, OD Central Files, Office of the Director, NIH.

132. Office for Protection from Research Risks, “Exempt Research and Research That May Undergo Expedited Review,” OPRR Reports 95-02 (5 May 1995), (22 August 2007).

133. Joan P. Porter, e-mail to the author, 23 August 2007.

134. Shea, Christopher, “Don’t Talk to the Humans: The Crackdown on Social Science Research,” Lingua Franca, September 2000; Cohen, Patricia, “As Ethics Panels Expand Grip, No Field Is Off Limits,” New York Times, 28 February 2007.

135. Fluehr-Lobban, Carolyn, “Informed Consent in Anthropological Research: We Are Not Exempt,” Human Organization 53 (Spring 1994): 4, 5, 9 n. 7.

136. Sica, Alan, “Sociology as a Worldview,” American Journal of Sociology 102 (July 1996): 254.

137. American Association of University Professors, “Research on Human Subjects: Academic Freedom and the Institutional Review Board” (2006), (2 September 2008).

138. McCarthy, Charles R., “Introduction: The IRB and Social and Behavioral Research,” in Sieber, ed., NIH Readings, 8–9.

139. Humphreys himself denied that he had deceived his interview subjects, but others more plausibly described the encounters as “blatant deception,” and even Humphreys conceded having “misrepresented” himself to his campus police in order to match license plates to names. The key point is that McCarthy and his colleagues did not study the effect that various wordings would have on that or other particular studies. Humphreys, Laud, Tearoom Trade: Impersonal Sex in Public Places, enlarged ed. (Chicago, 1975), 171, 199–203, 217.

140. Seiler, Lauren H., and Murtha, James M., “Federal Regulation of Social Research,” Freedom at Issue (November–December 1979): 30.

141. Klerman to Beattie, Cotton, and Hamilton, 11 June 1979; Klerman to Assistant Secretary for Health and Surgeon General, 30 March 1979.

142. DHEW, “Protection of Human Subjects,” Federal Register 39 (30 May 1974): 18914.