A newly minted physician, one who has just graduated from medical school, is not yet ready (or licensed) to practice medicine. The next phase in medical training is called residency — a 3 to 5 year span during which the new doctor receives teaching and supervision and is increasingly allowed to function independently in his or her chosen specialty.
Since 2003 residents have been limited to working 80 hours per week, averaged over 4 weeks, with no individual stretch longer than 16 hours. The rationale for this time restriction is reasonable and difficult to argue with: it dates back to the famous Libby Zion case. She died, and analysis of her case implicated resident fatigue and lack of supervision as significant contributors to her death. The tragedy focused attention on the ways that overworked, overtired, and poorly supervised residents can harm patients. We don’t want that. But we still don’t know the right balance between the patient care service residents provide and their education.
There is no doubt that for many years residents put in too many long hours — well over 100 per week was common. I did that when I trained in the late 1970s. The first year, long called the internship year, was the most brutal: in my case at least 120 hours per week, often more. We got every third Sunday afternoon off — if it was quiet. Subsequent years were less onerous, but they still ran to 100 hours or more.
There is also no doubt that medicine is a career you learn by doing, so sitting in a lecture hall until you see your first patient as a physician is not the way to train doctors, although that was how it was done a century ago, in the era before the Flexner Report. Have we found the right balance among the competing claims of resident education, practical on-the-job experience, and patient safety? A recent review article from the Annals of Surgery, “A systematic review of the affects [sic] of resident duty hour restrictions in surgery: impact on resident wellness, training, and patient outcomes,” gives us some useful information about the question. It is particularly useful because surgery residents need more than abstract cognitive skills; they also need quick, decisive judgment and physical dexterity, which they can only get through practice. The American College of Surgeons has been particularly concerned about the effect reduced duty hours have on resident skills.
The review linked above is what is called a meta-analysis, a technique in which many smaller studies are pooled to yield a larger data set, in this case a total of 135 separate articles. The results were disconcerting for advocates of duty hour restrictions. First, there was no improvement in patient safety; in fact, some studies suggested worse patient outcomes. Resident formal education may well have suffered: in 48% of the studies resident performance on standardized tests of their fund of knowledge declined, 41% reported no change, and, importantly, only 4% reported improved performance. Resident well-being is difficult to measure, but there are survey tools that assess burnout; 57% of the studies that examined this showed improved resident wellness and 43% showed no change. So the bottom line is that, for surgical residents, duty hour restrictions were associated with better rested residents doing no better, and often worse, on assessments of their knowledge base. Patient safety, a key goal of the new rules, did not improve and may actually have gotten worse.
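For readers unfamiliar with the idea of pooling, here is a minimal sketch in Python, using entirely made-up numbers, of why combining several small studies gives a steadier estimate than any one of them alone. A real meta-analysis weights studies by size and quality rather than simply adding up counts, and the review in question pooled published articles rather than raw patient data; this only illustrates the basic idea.

```python
# Hypothetical numbers for illustration only; not data from the review.
studies = [(4, 50), (7, 80), (5, 60)]  # (complications, patients) per study

# Individual estimates bounce around because each sample is small.
for events, n in studies:
    print(f"single study: {events}/{n} = {events / n:.1%}")

# Pooling the raw counts yields one larger data set and a steadier estimate.
total_events = sum(e for e, _ in studies)
total_patients = sum(n for _, n in studies)
print(f"pooled: {total_events}/{total_patients} = {total_events / total_patients:.1%}")
```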
What are we to make of this? I can understand how resident test performance suffered: it suggests to me that most learning takes place at the patient bedside or in the operating room. In this analysis the additional free time for independent study didn’t help, either because residents didn’t use it or because it is less effective than hands-on learning. But what about patient safety? Why did that actually get worse in more than a few of the studies?
One reason may be the problem of hand-offs of care. When residents’ duty shifts are shorter, they must hand off care of their patients to someone else more often. It’s well known that these are potentially risky times, since the resident assuming care probably doesn’t know the patient as well. Under the old system, I often would admit a sick patient and continue caring for that patient for 24-36 hours. When that happens you really get to know the details of your patient’s illness. From an educational perspective, you also see the natural history of an illness as it evolves. Finally, you develop a closer relationship with patients and their families than happens if residents are continually coming and going.
For myself, I am conflicted over how well we are doing at training residents under the new rules. I don’t want to be like an old fart sitting on the porch and yelling at the neighborhood kids to get off my lawn. The old days were not necessarily the Good Old Days. The system I trained under was brutal to residents and sometimes dangerous to patients. But it also crammed an immense amount of practical experience into the available time. Today’s residents are denied that experience, and it shows. I am occasionally astonished to encounter senior residents who have seen only a couple of cases of serious ailments that are not at all uncommon.
What can we do? Some medical educators think that new advances in computer simulations and the like will substitute for lack of encounters with the real thing. Procedural specialties like surgery are particularly interested in simulations. We use them in pediatric critical care as well, and they help.
The bottom line is that the duration of residency has not changed in half a century or more, yet we are demanding that residents know more and more. Then we shorten their effective training time with duty hour restrictions; for some specialties it’s the equivalent of lopping a year off the residency. From what I have seen in my young colleagues, the practical result is that the first year or two of independent practice amounts to finishing the residency and acquiring the needed experience. Perhaps we should be honest about that and structure the first couple of post-residency years of being a “real doctor” to include mentorship from an experienced physician. As things stand, I think a fair number of finishing residents aren’t quite — almost, but not quite — ready to have the training wheels taken off their bikes.

We have been training physicians the same way for a century, ever since the famous Flexner Report of 1910. That report was commissioned by the Carnegie Foundation in an attempt to improve medical education. Up until then many medical schools were simply terrible. Many were proprietary schools, owned by doctors and run for profit rather than education. Many doctors met their first actual patient after they graduated.
During the decade following the Flexner Report these proprietary schools either closed or merged with universities, becoming those institutions’ medical schools. Within a fairly short time the model of medical school as a four year course divided into two preclinical years (studying basic medical science) and two clinical years (learning to treat patients) became the standard. We’ve been doing it that way ever since.
There have long been calls to change this. Various schemes have shortened the usual eight year process of four years of college followed by four years of medical school, usually by shortening the college part. A recent op-ed in the New England Journal of Medicine renews the call for shortening the process, this time by making medical school three years instead of four. A counterpoint essay follows, arguing to keep medical school at four years.
What do I think? I think the arguments for shortening medical school are beside the point. Two of the main reasons the advocates give are to reduce student debt and to lengthen the useful practice careers of doctors (by one year!). The latter, they write, would ease the doctor shortage. But really, if the problem is student debt, there are many direct ways to address that. Likewise, if one thinks we need more doctors, then train more.
I think we should keep medical school at four years. There is already far more to learn than can be learned in that period, so shortening things would only make it worse. There is also the maturation factor; to function as a doctor you need to learn how to think like one and act like one. That takes time — I’m still learning at age 61. Lopping a crucial year off the process is not the answer.
Students wishing to go to medical school — premedical students — have gone through pretty much the same process for nearly a century. The requirements vary somewhat according to the whims of particular medical schools, but in general a person wishing to go to medical school needs a four year college degree, during which he or she has completed two years of chemistry (including organic chemistry), a year of physics, a year of mathematics, and a year of biology. They also have to take a standardized test, the Medical College Admission Test (MCAT), which is intended to introduce some measure of comparability between what students from various colleges and universities know in common. (There are a few exceptions to this pathway: a few institutions have programs that combine undergraduate teaching and medical school training, producing a physician in six or seven years instead of eight.)
Like premedical students, medical students have had pretty much the same training program for a century. Our current way of doing things dates back to the shakeup caused by the Flexner Report in 1910. This was a detailed survey of all American medical schools, and it showed how bad many of them were. As a result, many closed, and many others either merged with stronger schools or upgraded their teaching to what leaders of the time considered to be the gold standard: two years of basic science training followed by two years of practical, hands-on experience seeing patients. Although we have tinkered around the edges since then, little has fundamentally changed.
[Photo: how things looked back then]
However, the world has changed. For one thing, medical science has exploded, yet we still train physicians for the same eight years we always have. Nobody wants to extend the duration of medical training; after all, with a typical residency added on after medical school it takes eleven years to produce a physician, often longer.
[Photo: now things look more like this — more diverse students, computers everywhere]
The heavily science-influenced training also raises another issue: GPA in science courses and MCAT scores predict well what a student’s academic performance will be in medical school (which is a bit tautological in itself), but they do not predict how good a physician he or she will be. Several medical schools have grappled with this by using what they term “holistic review” of prospective medical students, including objective measures (well, as objective as can be) of things like curiosity, interpersonal skills, empathy, and capacity for growth. An interesting recent article from Boston University School of Medicine assesses how successful these efforts have been.
The first, and to me unsurprising, result is that using this broader evaluation tool resulted in medical school classes that are more diverse in age, experience, race, and cultural background. Yet the average entering GPA and MCAT scores did not change, and student academic performance was just as high. Faculty at Boston University did note a couple of changes, though, including this key one:
The general sense of the faculty, particularly those who teach our small-group problem seminars, is that the students are more collegial, more supportive of one another, more engaged in the curriculum, and more open to new ideas and to perspectives different from their own. Some of these observations are subjective and difficult to quantify, but there is a striking, and uncoached, consensus among the experienced faculty members.
What about premedical training? What about the stereotypical hyper-competitive, obnoxious, fanatical premed, determined to get into medical school at any cost? Should we do something to change that culture? Or is it the best way to develop our future doctors? My own view is that these aspects of premedical training drive away many good students; they could be fine physicians, just lousy premeds, so they never even consider the career. Another essay in the same issue of the New England Journal of Medicine describes Mount Sinai School of Medicine’s experience with that issue.
The Mount Sinai program, called the Humanities and Medicine Program, provisionally admits students to medical school while they are still in college, even though they are not premeds; they study whatever they want. They don’t take the MCAT. They get the science background they need for medical training through a series of special summer boot camps. The most interesting thing to me is that for twenty-five years Mount Sinai has admitted half its class through the traditional pathway and half through the Humanities and Medicine Program, and the academic performance, traditionally measured, of both groups in medical school has been the same. Here is what they have to say about its goals:
By eliminating MCAT use, outdated requirements, and “premed syndrome,” we aim to select students on the basis of a more holistic review of their accomplishments, seeking those who risk taking academically challenging courses; are more self-directed than traditional medical students; pursue more scientifically, clinically, and socially relevant courses; and pursue independent scholarship.
For myself, I applaud these efforts. Twenty years ago I spent four years on the admissions committee of Mayo Medical School, a highly selective medical school. I was discouraged by how often committee members paid lip service to wanting more diverse, humanistic (whatever that means) students. But when it came time to vote on candidates, MCAT scores and GPA trumped all.
I also have a personal bias. I took the bare minimum of science courses in college, choosing a double major in history and religion. Even back then (1973) I was admitted to several medical schools. I never felt underprepared; you learn what you need to know when you need to know it. And I think that undergraduate experience made me a better doctor.
So I’d like to see more of this.
A couple of conversations I’ve had with patients’ families over the past month have made me realize that many folks don’t know how our system produces a pediatrician, a radiologist, or a surgeon. And a lot of what people know is wrong. Physicians are so immersed in what we do that we forget that the process is a pretty arcane one. Just what are the mechanics of how doctors are trained? Understanding your physician’s educational journey should help you understand what makes him or her tick. As it turns out, a lot of standard physician behavior makes more sense when you know where we came from. This post concerns some important history about that.
Most physicians in the nineteenth century received their medical educations in what were called proprietary medical schools. These were schools started as a business enterprise, often, but not necessarily, by doctors. Anyone could start one, since there were no standards of any sort. The success of the school was not a matter of how good the school was, since that quality was then impossible to define anyway, but of how good those who ran it were at attracting paying students.
There were dozens of proprietary medical schools across America. Chicago alone, for example, had fourteen of them at the beginning of the twentieth century. Since these schools were the private property of their owners, who were usually physicians, the teaching curriculum varied enormously between schools. Virtually all the teachers were practicing physicians who taught part-time. Although being taught by actual practitioners is a good thing, at least for clinical subjects, the academic pedigrees and skills of these teachers varied as widely as the schools — some were excellent, some were terrible, and the majority were somewhere in between.
Whatever the merits of the teachers, students of these schools usually saw and treated their first patient after they had graduated because the teaching at these schools consisted nearly exclusively of lectures. Although they might see a demonstration now and then of something practical, in general students sat all day in a room listening to someone tell them about disease rather than showing it to them in actual sick people. There were no laboratories. Indeed, there was no need for them because medicine was taught exclusively as a theoretical construct, and some of its theories dated back to Roman times. It lacked much scientific basis because the necessary science was itself largely unknown at the time.
As the nineteenth century progressed, many of the proprietary schools became affiliated with universities; often several would join to form the medical school of a new state university. The medical school of the University of Minnesota, for example, was established in 1888 when three proprietary schools in Minneapolis merged, with a fourth joining the union some years later. These associations gave medical students some access to aspects of new scientific knowledge, but overall the American medical schools at the beginning of the twentieth century were a hodgepodge of wildly varying quality.
Medical schools were not regulated in any way because medicine itself was largely unregulated. It was not even agreed upon what the practice of medicine actually was; there prevailed at the time among physicians several occasionally overlapping but generally distinct views of what the real causes of disease were. All these views shared a basic fallacy — they regarded a symptom, such as fever, as a disease in itself. Thus they believed relieving the symptom was equivalent to curing the disease.
The fundamental problem was that all these warring medical factions had no idea what really caused most diseases; for example, bacteria were only just being discovered and their role in disease was still largely unknown, although this was rapidly changing. Human physiology — how the body works — was only beginning to be investigated. To America’s sick patients, none of this made much difference, because virtually none of the medical therapies available at the time did much good, and many of the treatments, such as large doses of mercury, were actually highly toxic.
There were then bitter arguments and rivalries among physicians for other reasons besides their warring theories of disease causation. In that era before experimental science, no one viewpoint could definitively prove another wrong. The chief reason for the rancor, however, was that there were more physicians than there was demand for their services. At a time when few people even went to the doctor, the number of physicians practicing primary care (which is what they all did back then) relative to the population was three times what it is today. Competition was tough, so tough that the majority of physicians did not even support themselves through the practice of medicine alone; they had some other occupation as well — quite a difference from today.
In sum, medicine a century ago consisted of an excess of physicians, many of them badly trained, who jealously squabbled with each other as each tried to gain an advantage. Two things changed that medical world into the one we know today: the explosion of scientific knowledge, which finally gave us some insight into how diseases actually behaved in the body, and a revolution in medical education, a revolution wrought by what is known as the Flexner Report.
In 1910 the Carnegie Foundation commissioned Abraham Flexner to visit all 155 medical schools in America (for comparison, there are only 125 today). What he found appalled him; only a few passed muster, principally the Johns Hopkins Medical School, which had been established on the model then prevailing in Germany. That model stressed rigorous training in the new biological sciences with hands-on laboratory experience for all medical students, followed by supervised bedside experience caring for actual sick people.
Flexner’s report changed the face of medical education profoundly; eighty-nine of the medical schools he visited closed over the next twenty years, and those remaining structured their curricula into what we have today—a combination of preclinical training in the relevant sciences followed by practical, patient-oriented instruction in clinical medicine. This standard has stood the test of time, meaning the way I was taught in 1974 was essentially unchanged from how my father was taught in 1942.
The advance of medical science had largely stopped the feuding between kinds of doctors; allopathic, homeopathic, and osteopathic schools adopted essentially the same curriculum. (Although the original homeopathic schools, such as Hahnemann in Philadelphia, joined the emerging medical mainstream, homeopathic practice similar to Samuel Hahnemann’s original theories continues to be taught at a number of places.) Osteopathy maintains its own identity. It continues to maintain its own schools, of which there are twenty-three in the United States, and to grant its own degree—the Doctor of Osteopathy (DO), rather than the Doctor of Medicine (MD). In virtually all respects, however, and most importantly in the view of state licensing boards, the skills, rights, and privileges of holders of the two degrees are equivalent.