How to use medical evidence IV: uncontrolled trials
Here is another post about how to evaluate the validity of a medical claim. My last one dealt with case reports as medical evidence. This discussion is about the next rung on the ladder of reliability of medical evidence — the uncontrolled trial.
Researchers can do more with a series of cases than simply describe what the patients are like; they can manipulate the situation in various ways. For example, if a doctor looks at her series of patients and becomes convinced that a particular therapy will work for the disease, she can give the therapy to the next patient, or series of patients, who come her way with the problem and see what happens. This would constitute one version of an uncontrolled trial, and it is probably the oldest kind of treatment research doctors have used. Venerable as the technique is, it is easy to see how an experiment like this could yield misleading results.
First, the patient group is subject to the same selection bias as the case series — the assortment of people with the problem who come to see the doctor is unlikely to represent a random sample of all people with the disease. Next, the only way the doctor can decide whether the treatment might be helping is to compare what happens in the patients who get the new therapy with what happened in the patients she saw in the past who did not. Such so-called historical controls are the weakest sort of control group. They are subject to the same kind of selection bias as the experimental group (those who get the treatment). Worse, since they saw the doctor at an earlier time, they may not even be representative of the patients with the disease who are seeing her now. Finally, a doctor who believes that a particular treatment will work (which is, after all, why she is doing the trial in the first place) is hardly the best person to decide impartially whether it does. All of us want our theories to be correct, so her evaluation is bound to be slanted — it is only human nature at work.
Uncontrolled trials like this are particularly susceptible to what logicians call the fallacy of post hoc, ergo propter hoc, Latin for “after this, therefore because of this.” Anyone who has watched late-night cable television has seen countless examples of this logical trap, in the form of personal testimonials from people who had this or that problem, took the pill or bought the product, and saw the problem go away. The fallacy, of course, is that the two events may be entirely unrelated: the fact that I drink coffee every morning before the sun comes up does not mean my coffee makes the sun rise.
Trials like this are also highly prone to the placebo effect, the trick the human mind plays on us when we believe so strongly that a particular treatment is working that we will the improvement to happen. The power of wishful thinking is astonishing. Even more astonishing is that, in some situations, the “useless” placebo, a sugar pill or its equivalent, actually does improve the situation, if only slightly (in 15-30% of patients by most estimates). So oftentimes people get a little better no matter what the therapy. The only way we can be sure an improvement comes from the therapy itself, and not from the placebo effect, is to blind both the patient and the observer to which patients got the treatment and which got the placebo.