In 1916, a 19-year-old British student, Austin Bradford Hill, who hoped to become a physician, joined the Royal Naval Air Service. He was posted to the Greek Islands to support an attack on the Dardanelles.
Within a year, he developed pulmonary tuberculosis and was sent home to die. An artificial pneumothorax saved his life, but a career as a physician was now out of the question.
Hill’s First RCT
Instead, Sir Austin Bradford Hill decided to study economics and became, according to Sir Richard Doll, M.D., the greatest medical statistician of the 20th century, despite the fact that he held no degree in either medicine or statistics. In 1992, Doll wrote that Hill had more influence on the past 50 years of medical science than many winners of the Nobel Prize for medicine.
What was Hill’s great contribution? He is widely recognized as the father of the Randomized Clinical Trial (RCT) and as a champion of the process by which researchers and patients were ‘blinded’ to the exact treatment being received.
According to an August 11, 2016 New England Journal of Medicine (NEJM) article, “History of Clinical Trials: The Emergence of the Randomized, Controlled Trial,” by Laura Bothwell, Ph.D., and Scott Podolsky, M.D., the first modern RCT is credited to Hill and the British Medical Research Council (MRC) in 1948 for an evaluation of streptomycin for the treatment of tuberculosis.
In that trial, efforts were made to conceal from the assessor whether streptomycin was being administered to a given patient and, when practicable, to conceal that same information from the patients as well. Ethical considerations played a major part.
The Council faced a very tough dilemma. Was it ethical to withhold a drug that had been effective in animal experiments and had shown encouraging clinical results in the few published reports? To make matters even more difficult, there was only a small amount of the drug in Britain, and no further supplies were forthcoming.
The Council agreed to use the limited supplies to treat patients with two conditions that had proven to be fatal: miliary tuberculosis and tuberculous meningitis. There wasn’t, however, enough streptomycin left over to treat all of the people desperately ill with other types of tuberculosis.
Faced with this Solomon-like dilemma, the Council decided that the most ethical approach, serving the greater good, would be to seize the opportunity to design a strictly controlled trial and thereby speedily and effectively reveal the value of the treatment. The question of whether it was ethically justifiable to withhold the drug from any patient was, therefore, answered with an unhesitating “Yes,” wrote Doll.
History of Comparative Therapies
Comparing therapies to determine effectiveness has been around since the Book of Daniel, where nutritional comparisons were made. But not until the Age of Enlightenment in the mid-1700s did the idea of comparative and randomized trials take root.
Peter Armitage writes that randomization had been proposed, in a limited way, over 300 years ago by van Helmont, a medicinal chemist, who challenged the academics of his day to compare the efficacy of his treatment with theirs.
Van Helmont proposed taking a few hundred “poor People” with fever, pleurisy, and other ailments out of hospitals, camps, and elsewhere, dividing them into halves by lot, and then comparing outcomes: “let us see how many funerals both of us shall have.” Apparently, no one took him up on his offer.
Controlled Trials
Controlled trials emerged with growing frequency during the mid-1700s. In 1753, Scottish surgeon James Lind published A Treatise of the Scurvy, reporting a controlled trial that successfully demonstrated that a diet with citrus fruit was effective against scurvy in sailors at sea, “thereby providing a touchstone for subsequent generations of researchers who gradually embraced comparative trial methods,” wrote Bothwell and Podolsky.
That was a good year for science: it was also the year the Swede Carl Linnaeus published Species Plantarum, marking the start of the scientific classification of plants.
Bothwell and Podolsky write that loosely controlled trials began to show up in the 18th and 19th centuries, “often conducted by skeptics to test the utility of unorthodox remedies ranging from mesmerism to homeopathy.”
Alternate-Allocation
Then came the most recent methodologic ancestor of RCTs: the “alternate-allocation” trials of the late 19th century.
Conventionally dated to Johannes Fibiger’s 1898 study of diphtheria antitoxin in 484 patients in Copenhagen, alternate-allocation entailed treating every other patient (or, in Fibiger’s case, patients seen every other day) with a particular experimental remedy, withholding it from the others, and then comparing outcomes. “But Fibiger’s was only the most famous use of a technique that increasingly appeared in the medical literature from the 1890s onward, one that could (though only occasionally did) involve patient or researcher blinding, use of placebos for control groups, and statistical analysis of results,” wrote Bothwell and Podolsky.
As early as 1899, a Dr. Williams described applying a glycerin–hydrogen peroxide solution “to the skin of every alternate patient” to treat desquamation owing to scarlet fever. Medical journals published numerous primary reports of alternate-allocation studies for the next 50 years.
Bothwell and Podolsky note that, with the discovery of germs, major shifts in the social and scientific structure of medicine in the late 19th and early 20th centuries created new opportunities and demands for more rigorous clinical research methods.
The Payers’ Influence
In the first part of the 20th century hospitals were expanding and new biologic and vaccine industries were emerging. Chemists developed novel therapeutic compounds, and, according to Bothwell and Podolsky, “an unregulated subeconomy of fraudulent replicas of new agents flourished.” All these factors, write the authors, motivated clinical investigators to pursue more sophisticated approaches for evaluating experimental therapies.
And payers wanted to know what they should pay for.
The major influenza outbreak of the early 20th century became a very significant problem for payers. Met Life Insurance Company lost over $20 million—a monumental sum in those days—to unproven treatments. The underwriters at Met Life needed a way to separate the therapeutic wheat from the chaff.
So this major insurance payer began funding alternate-allocation trials at multiple major hospitals in the northeastern U.S.
In 1931, James Burns Amberson and colleagues published a study in which a coin flip randomly determined which of two seemingly equally divided groups of patients would receive sanocrysin for the treatment of tuberculosis.
Bothwell and Podolsky write that the number of alternate-allocation studies, however, was itself dwarfed by the number of articles promoting therapies on the basis of other forms of evidence, from laboratory and physiological justifications to case reports. “Many producers of new treatments lacked economic, regulatory, or social incentives to rigorously evaluate their products in controlled trials, and many researchers simply continued relying on standard methods that were widely accepted by scientists and society.”
In 1935, an editorialist cited one trial’s protocol, which called for administering the serum only in alternate cases, but researchers hesitated: “The main difficulty encountered was the inability of our special investigators to withhold this promising agent from any stricken child…. Our sentiment overruled our reason.”
When Doll qualified in medicine in 1937, new treatments were almost always introduced on the grounds that, in the hands of professor A or of a consultant at one of the leading teaching hospitals, the results in a small series of patients (seldom more than 50) had been superior to those recorded by professor B (or some other consultant) or by the same investigator previously. Under these conditions, variability of outcome, chance, and unconscious (let alone conscious) bias in the selection of patients brought about apparently important differences in the results obtained; consequently, there were many competing new treatments.
Selection Bias
The alternate-allocation school began to be challenged over the obvious problem of selection bias.
Hill, of the 1948 British Medical Research Council evaluation, was concerned enough about researchers’ capacity to figure out (and hence cheat) allocation schemes that, to address selection bias, he replaced alternate allocation with strict, concealed randomization of patients to treatment or control groups.
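The difference between the two allocation schemes can be illustrated with a short sketch (a minimal, hypothetical illustration in Python, not a reconstruction of any historical protocol; the function names are invented for this example):

```python
import random

def alternate_allocation(patients):
    """Assign every other patient to treatment, as in Fibiger-era trials.
    A clinician who knows the sequence can predict the next assignment,
    opening the door to selection bias."""
    return ["treatment" if i % 2 == 0 else "control"
            for i, _ in enumerate(patients)]

def concealed_randomization(patients, seed=None):
    """Assign each patient by an independent coin flip, in the spirit of
    Hill's approach: no assignment can be predicted from earlier ones."""
    rng = random.Random(seed)
    return [rng.choice(["treatment", "control"]) for _ in patients]

patients = [f"patient_{i}" for i in range(8)]
print(alternate_allocation(patients))        # fully predictable pattern
print(concealed_randomization(patients, 42)) # unpredictable sequence
```

The point of the contrast is that the first scheme is deterministic, so anyone who knows a patient’s position in the sequence knows the assignment, while the second conceals each assignment until it is made.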
“The blinding of researchers to patients’ assignments, if at all possible, soon accompanied concealed random allocation in the emerging definition of the ideal study, in which bias was to be eliminated,” wrote Bothwell and Podolsky.
The British led the way, and U.S. investigators soon followed the RCT model to evaluate the many new pharmaceuticals being developed.
Concealed allocation of patients began. But that wasn’t enough: assessments by researchers could also be biased. So came the double-blind method.
Efficacy and Reimbursement
To be considered clinically effective and eligible for payment by insurers and the government, a manufacturer must establish that its device is safe and effective at a statistically significant level.
In 1962, U.S. Congress passed the Kefauver–Harris Amendments to the Food, Drug, and Cosmetic Act, as the RCT had become the obvious methodology by which the Food and Drug Administration (FDA) could require pharmaceutical manufacturers to demonstrate therapeutic safety and efficacy before drug approval.
By 1970, the FDA required that drug producers submit RCT results with new drug applications. The same requirement for devices followed soon after. (CMS and the payers determine reimbursement; the FDA determines whether a product can be commercialized in the first place.)
The Randomness of Hill
So the chance tuberculosis infection of a young British soldier led the way to our modern version of RCTs.
But, says Doll, even without Hill, randomization would have come about sooner or later, “perhaps introduced by [David] Rutstein, [M.D.] in the U.S. Rutstein collaborated with Hill in the design of an Anglo-American trial of adrenocorticotrophic hormone, cortisone, and aspirin in the treatment of acute rheumatic fever.”
“Randomization would have been adopted much more slowly, however, without Bradford Hill’s understanding of medical susceptibility and medical ethics and without his concern for simplicity of design and clarity of presentation.”