Even without written codes, ethical standards for human research existed before World War II
- Written by Alison Bateman-House, The Conversation
In his history of the Tuskegee syphilis study (formally known as the US Public Health Service Study of Untreated Syphilis in the Negro Male), historian James H Jones wrote:
[T]here was no system of normative ethics on human experimentation during the 1930s that compelled medical researchers to temper their scientific curiosity with respect for the patient’s rights.
The American Medical Association’s code of ethics did not address research on humans until 1946. The Nuremberg Code, often considered the foundational document of research ethics, dates from the 1947 verdict in the Doctors Trial – the military tribunal that tried German physicians for their participation in war crimes.
Thus, there were no explicit, written codes of ethics for nontherapeutic human research for American civilian researchers prior to World War II (this was not the case for American military researchers). But the lack of a written code or guidelines for civilian researchers does not mean that ethical standards for nontherapeutic research did not exist.
In fact, there were norms regarding what could be done in the context of research on humans. Throughout the 19th century, researchers voiced their reactions to experiments they deemed outrageous or unethical in oral addresses and in articles and commentaries in medical journals. Together, these sources created a de facto professional consensus on the ethical standards for research.
Human research ethics in the 19th century – risk and consent
According to these unwritten but nevertheless real ethical standards, experimentation on humans ought to be preceded by experimentation on animals. It was also acceptable (and even laudable) for a researcher to experiment on himself or his family before turning to other research subjects. Self-experimentation could help justify nontherapeutic experimentation on others, but it was not the only factor in making a nontherapeutic experiment ethically permissible. There were two other factors to be considered: the avoidance of research-related injury and consent from the subject.
The first item in the Nuremberg Code says that the voluntary consent of the subject is essential. Yet, in the United States prior to World War II, norms concerning when consent should be sought were tied to the possibility of research-related harm.
In most cases, individual consent was expected for nontherapeutic research, but certain types of non-harmful, non-invasive research could be conducted without it. For example, getting consent from hospitalized patients before using them as subjects to test new medical equipment wasn’t considered necessary, so long as the testing was non-invasive and not seen as potentially harmful.
Some investigators used people not considered capable of consenting – such as small children and the inmates of mental asylums – as research subjects for nontherapeutic experiments. Researchers were able to conduct experiments on such subjects with the consent of, say, a school or asylum administrator, but they were expected to be able to justify the risks they inflicted on their subjects. Elevated risks typically were not considered acceptable.
Experiments with healthy, mobile subjects, by contrast, required consent as a matter of course, since such subjects could simply walk out if displeased. Paying subjects was also an accepted practice at the time, even though it was understood that subjects might submit to experiments for pay that they otherwise wouldn’t consent to.
Even when a research subject was capable of providing valid consent, this was not viewed as carte blanche to cause severe or irreparable harm or to cause terminal disease or death. If a subject granted an investigator permission to do absolutely anything to him or her in the name of research, that subject would have been deemed troubled or suicidal. No investigator would have been permitted to take advantage of such an offer without severe reprimand from his peers – likely being scorned by his colleagues and finding himself with no opportunities for publishing his future research.
Early disagreement in human research ethics
An 1821 report of two prize-winning experiments performed by Boston-based Dr Enoch Hale opened with comments on the ethics of his research. Hale wanted to disprove the theory that there was a direct passage of fluid between the stomach and the kidneys, so he experimented on himself. He justified this decision, stating:
Experiments on our own race can never be performed to any considerable extent. If they are hazardous in their nature, they of course are never to be attempted, even if subjects could be found who would be willing to undergo them. And when they are not so, none but professional men can estimate the degree of inconvenience or risk to which they may be subjected by submitting to them. To obviate these difficulties, I have in the first dissertation made myself the subject of my experiments.
In Hale’s view, hazardous experiments on even willing subjects were verboten, and people outside of the medical research community could not effectively judge which research activities would be hazardous. The best course of action was for the researcher to use himself as a research subject.
Over 40 years later, highly influential French physiologist Claude Bernard presented a different view on using human subjects for research, asking, “Have we a right to perform experiments and vivisections on man?”
Bernard seemingly had no qualms about involuntary research or research that the subject undertook without fully understanding what was happening as long as it wasn’t likely to harm the subject. He focused on the issue of harm, writing:
Christian morals forbid only one thing, doing ill to one’s neighbor. So, among the experiments that might be tried on man, those that can only harm are forbidden, those that are innocent are permissible, and those that may do good are obligatory.
While both Hale and Bernard held that you could not harm the subjects of nontherapeutic research, they differed in their opinions of what we call informed consent today.
To Hale, informed consent was crucial – the only permissible research subjects were those who truly understood what they were agreeing to do. But to Bernard, informed consent was inconsequential. For example, Bernard famously thought it acceptable to feed a woman condemned to die worm larvae and then check to see if they had developed into worms during the post-mortem exam. Since the woman would not be physically harmed by ingesting the larvae, Bernard considered the experiment morally acceptable. He was not concerned with whether the woman would have been willing to knowingly eat the larvae or if unknowingly eating the larvae was an offense to her rights or dignity.
Controversy over consent and harm
The debate over consent reached another milestone in 1897. That year, Dr Giuseppe Sanarelli announced that he had discovered the cause of yellow fever, a much-feared and often deadly disease. While initially viewed as a triumph, drawing to a close the long search for the cause of yellow fever, his research was soon subject to criticism.
Sanarelli had injected five people with inactivated, filtered preparations of the microbe Bacillus icteroides, which he claimed caused them to develop yellow fever.
But Sanarelli had not obtained permission from his subjects. Of the five subjects who became ill, some also underwent biopsies of the liver and kidneys so that Sanarelli could ascertain what was happening in these organs.
Consent – or lack thereof in Sanarelli’s case – became a point of controversy. Renowned physician William Osler declared that:
“[t]o deliberately inject a poison of known high degree of virulency into a human being, unless you obtain that man’s sanction…is criminal.”
Likewise, there were concerns expressed about the risks to the five subjects – risks not only from what Sanarelli believed was yellow fever but also from organ biopsies.
Osler was not alone in condemning the research. Albert Leffingwell, a trained physician who had stopped practicing medicine to advocate for the reform of vivisection (operations on live animals), learned of Sanarelli’s research in a newspaper. The article on the experiment said that “unscientific persons” might be “disposed to criticize” the research.
In response, Leffingwell fired off a letter to the editor asking, “Must condemnation of such deeds be relegated to the despised class of ‘unscientific persons?’” He followed this up with a paper for the American Humane Association’s 1897 convention. Leffingwell stressed the “helpless condition of these victims of scientific research,” stating that:
[a]pparently the victims were newly arrived emigrants from Europe, detained at a quarantine station…doubtless belonging to what an American writer has distinguished as “the lowest orders of the people.”
He went on, saying:
Whether men, women or children, it was necessary that they should be ignorant, so that they should not be able to connect their future agonies with the kind old man who had simply pricked them with a needle; they must be so poor and friendless that no one would care to interest the authorities in their behalf; and they must be absolutely in the experimenter’s power.
While commentators could – and did – disagree about whether such groups as prisoners, soldiers and paid volunteers could give truly voluntary consent, all agreed that consent was necessary in the case of hazardous nontherapeutic research. Forgoing consent for such a hazardous experiment, as Sanarelli did, was a major ethical lapse.
An ongoing evolution
So were there research ethics standards before World War II? Certainly, though it was quite rare for them to be explicitly taught or publicly articulated. While there was no written code of ethics, there were both norms and procedures for reprimanding those who broke them. What was considered ethical was occasionally debated, and which elements were seen as most essential to the ethical conduct of research changed over time. But that doesn’t mean we can excuse pre-World War II ethical lapses on the grounds that there were no rules or norms when the research was conducted. Rather, we should understand our place in the centuries-old history of thought about what should and should not be done to humans in the name of research.
This is the third part of The Conversation’s series On Human Experiments. Please click on the links below to read other articles:
Part One: Human experiments – the good, the bad, and the ugly
Part Two: How national security gave birth to bioethics
Alison Bateman-House does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.