
“Cowboy” doctors mostly increase costs and risks without benefiting patients

May 12, 2015  |  General

Some months back I read an interesting interview with Jonathan Skinner, a researcher who works with the group at the renowned Dartmouth Atlas of Health Care. More than anyone else I can think of, the people at the Dartmouth Atlas have studied and tried both to understand and to explain the amazing variations we see in how medicine is practiced in various parts of the country. It turns out that specific conditions are treated in quite different ways depending upon where you live. Atul Gawande documented a detailed example of the phenomenon in an excellent New Yorker article here. A major determinant appears to be local physician culture, how we doctors “do things here.” The disturbing observation is that patient outcomes aren’t much different, just cost. Of course it’s more than cost. Doing more things to patients also increases risk, and adding risk without benefit is not what we want to be doing.

Skinner is interested in something else, a phenomenon he calls “cowboy doctors.” By this he means physicians who are individual outliers, who go against the grain by substituting their own judgments for those of the majority of their peers. In theory such lone-wolf practitioners could go either way. They could do less than the norm, but almost invariably they do more — more tests, more treatments, more procedures. Not only may such physicians put their patients at higher risk, but they also add to medical costs. I have met physicians like that and have usually found them to be defiant in their nonconformity. A few revel in it. They maintain they are doing it for the good of their patients, but there is more than a little of that old physician ego involved. There is also the subtext of something many physicians feel these days, especially old codgers like me who have been practicing for 35 years: the tension between older notions of medicine as an art, a craft, and newer evidence-based, team-driven practice. Skinner describes it this way:

It’s the individual craftsman versus the member of a team. And you could say, ‘Well, but these are the pioneers.’ But they’re less likely to be board-certified; there’s no evidence that what they’re doing is leading to better outcomes. So we conclude that this is a characteristic of a profession that’s torn between the artisan, the single Marcus Welby who knows everything, versus the idea of doctors who adapt to clinical evidence and who may drop procedures that have been shown not to be effective.

Leaving aside outcomes and moving on to costs, Skinner and his colleagues were quite surprised to discover how much these self-styled cowboys and cowgirls were adding to the nation’s medical bills. They found that such physicians accounted for around 17% of the variability in regional healthcare costs, which in dollar terms comes to roughly half a trillion dollars. That is an astounding number.

So what we are looking at here is a dichotomous explanation for the huge regional variations in medical costs. On the one hand we have physicians who conform to the local culture and go along with the herd, even if the herd does things in a much more expensive way that confers no additional benefit on patients. On the other hand we have self-styled mavericks who scorn the herd and believe they have special insight into what is best, even when all the research shows they are wrong.

I think what is coming from all this cost and outcome research is that best-practice, evidence-based medicine (when we have it; for many diseases we don’t) will increasingly be enforced by the people who pay the bills and by professional organizations. Yes, some will bemoan this as the loss of physician autonomy and the reduction of medical practice to cookbooks and protocols. I sympathize with that viewpoint a little, especially since I am the son and grandson of physicians whose practice experience goes back to 1903. But really, there are many things we used to do that we now know are useless or even harmful. An old professor of mine had a favorite saying for overeager residents: “Don’t just do something — stand there!”

For those who would like to dive into the data and see the actual research paper from the National Bureau of Economic Research describing all this, you can read it here.


The promises and pitfalls of healthcare quality performance measures

December 29, 2014  |  General

The quality-measurement enterprise in U.S. health care is troubled. Physicians, hospitals, and health plans view measurement as burdensome, expensive, inaccurate, and indifferent to the complexity of care delivery. Patients and their caregivers believe that performance reporting misses what matters most to them and fails to deliver the information they need to make good decisions.

Thus begins a recent editorial in the New England Journal of Medicine. It was accompanied by another entitled “Getting More Performance from Performance Measurement.” Together they represent the rumblings of discontent with the state of current efforts to measure the quality of the healthcare patients are getting.

Everyone wants high-quality healthcare. It’s obvious in the abstract. But how do we know what that is? It’s well known that healthcare delivery varies widely across the country. This was shown many years ago by the Dartmouth Atlas of Healthcare, which documented astonishing differences in how medicine was practiced, and of course therefore how much it cost, even between places right next door to each other. These variations persist today. Why? The diseases and disorders being treated don’t vary like that. It turns out, unsurprisingly, that local medical culture and traditions play a huge role. When a new physician comes to the area, he or she tends to fall in line with how things are done there. The obvious goal here should be to deliver the best and most effective healthcare — not skimping on useful care but not overdoing things and adding risk to the patient in the bargain. How can we do that? These days everybody is trying to figure that out, and our current efforts, as discussed in the above articles, aren’t doing as well as they could.

A key distinction to understand is the difference between process measures and outcome measures. A process measure keeps track of a particular activity that we know or assume will lead to better outcomes. A good example is washing our hands: documenting that we did so is a process measure, while the resulting decrease in hospital-acquired infections is an outcome measure. Likewise, marking the surgical site before surgery is a process measure; eliminating wrong-site surgeries is an outcome measure. Unfortunately, things very quickly get more complicated than these simple examples. One chronic complaint from physicians is that we are held responsible for outcomes we have no power to influence. Another is that, as with medical credentialing (I wrote about that swamp here), there are a host of players involved in performance measures, and many have their own metrics that differ from one another’s. From the second essay:

The current measurement paradigm, however, does not live up to its potential. Many observers fear that a proliferation of measures is leading to measurement fatigue without commensurate results. An analysis of 48 state and regional measure sets found that they included more than 500 different measures, only 20% of which were used by more than one program. Similarly, a study of 29 private health plans identified approximately 550 distinct measures, which overlapped little with the measures used by public programs.

A mess like that is a prescription for cynicism among hospitals and physicians — and for failure. We need a much smaller, much more manageable set of measurements that everybody agrees are real indicators of good medical care. I think this means, among other things, that we can’t have every payer concocting its own scheme. That is asking for chaos.

We have had some successes in linking a process measure to an outcome measure. A good example is planned delivery of infants who were almost, but not quite, at term. Sometimes there is a good medical reason for doing this, but in the past it was often done for the convenience of the doctor or the parents. Sometimes that meant a baby was delivered too early and had to spend time in a neonatal intensive care unit. As a result of closely monitoring early deliveries and making sure they were really medically necessary, the rate of early delivery has fallen to a quarter of what it was several years ago. That’s real progress, and it came from performance-improvement projects. The author is optimistic:

The science and practice of performance measurement have advanced substantially in the past decade, and increased transparency regarding results means that we know more quickly what works and what doesn’t. Furthermore, all stakeholder groups are now invested in getting more performance out of measurement, which should ultimately drive the care improvement that patients need and deserve.

Maybe. I know this is all inevitable and good for patients in the long run. But I think we will have many more growing pains — false leads, useless measurements — before we get there.