The perils of patient satisfaction scores
Nearly all physicians are now subject to patient satisfaction ratings. In my case, and for many thousands of my colleagues across the country, it happens via the survey tool sold to healthcare facilities by the Press Ganey Company. There are also many online sites that rate physicians. The idea is a good one: physicians should get feedback from patients about how good a job patients perceive the doctors are doing. If nothing else, how are we to change our behavior if we don’t find out where our problems are? The surveys don’t measure medical competence, but they could be a good metric of another aspect of how good we are as physicians. As currently used, though, patient satisfaction surveys are riddled with problems. They don’t measure what they’re supposed to measure, and they can easily drive physician behavior the wrong way.
I’ve read the Press Ganey survey forms, and the questions they ask are all very reasonable. I’d like to see the results if all the parents of my patients filled one out. But that’s the problem. It is a fundamental principle of statistics that the sample (those who fill out the survey) must be representative of the entire group (all the patients) before you can draw conclusions about that group. This doesn’t happen. Although the forms are sent out to a random sample of patients, a very nonrandom subset of them are returned. Perhaps only the patients who are happy, or only those who are unhappy, send them back; that is in fact likely. For the analysis to have any validity at all, the patients who do not return the forms must also be randomly distributed among all those sent forms. But a valid survey, one in which efforts are made to get a very high return rate through such things as follow-up calls or contacts, is much more expensive to do.
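The distortion this non-response bias produces is easy to see in a toy simulation. The numbers below are entirely hypothetical, chosen only to illustrate the mechanism: satisfaction is uniform across the population, but unhappy patients are assumed to be more likely to mail the form back.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 patients with true satisfaction
# scores on a 1-5 scale, distributed uniformly (true mean = 3.0).
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(10_000)]

# Assumed (made-up) response probabilities: the very unhappy are far
# more likely to return the survey than everyone else.
RESPONSE_PROB = {1: 0.40, 2: 0.25, 3: 0.05, 4: 0.05, 5: 0.15}

respondents = [s for s in population if random.random() < RESPONSE_PROB[s]]

true_mean = sum(population) / len(population)
survey_mean = sum(respondents) / len(respondents)

print(f"true mean satisfaction:   {true_mean:.2f}")
print(f"survey mean satisfaction: {survey_mean:.2f}")
print(f"response rate: {len(respondents) / len(population):.0%}")
```

With these assumptions the survey mean comes out well below the true mean, even though the forms went to everyone: the score reflects who bothered to respond, not how the practice actually performed.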
There is another problem: patient satisfaction and good medical care do not entirely overlap. It is certainly true that an experienced and skilled physician can, and should, deliver bad news in a way that leaves the patient feeling understood and accepting. But not infrequently doctors have to tell a patient that what the patient wants is not good medical care. This might range from something as simple as declining to prescribe antibiotics for a viral illness, even though the patient wants them, to refusing narcotics to a drug-seeking patient in the emergency department. Both of these scenarios are common, and so is the result: a dissatisfied patient. A truly random sample of patients would solve this issue too, since the dissatisfied antibiotic seekers and drug seekers would be washed out by all the others. But as it stands, the angry ones fill out the forms while many others toss them in the trash.
This is not a trivial issue. Recent research has strongly suggested that the most satisfied patients often don’t get the best care; they are more likely to be admitted to the hospital (an often dangerous place), and they may even have a higher death rate. The best doctors can easily have the worst patient satisfaction scores.
I don’t want you to think I am against holding physicians accountable for what we do — I’m not. Patient satisfaction is a key component of how to do that. But we must have better tools, especially since we are now tying a doctor’s income to the satisfaction score. What we do now can easily result in statistical nonsense. Any scientist will tell you that bad data are worse than no data.
For what it’s worth, I looked for my own scores on several of the big physician rating sites. Good news! I got 4 stars (excellent)! The number of reviews I could find, out of the thousands of patients I’ve seen over 35 years of practice? One — a single review. Maybe it just means I’m not very memorable. But thanks anyway to whoever the reviewer was. Still, one out of many thousands doesn’t seem to be a very representative sample.