Medical research is a conversation between the new and the old. What I mean by that is that the findings of new studies need to be weighed against previous ones, because more often than not the reason for doing the research in the first place is to answer questions or test theories raised by earlier work. Understanding the historical context of a research finding is vital: if you don’t know how a new finding compares with older ones, you won’t understand why the research matters. Unfortunately, journalists are less and less inclined to give readers that context when they write about the newest and shiniest medical research. The result is misleading headlines and public confusion.
Journalists also like conflict, so they tend to frame their stories around the idea that a new medical finding contradicts a previous one, even when it really doesn’t. And journalists want to write about the unusual and the unexpected. As the saying goes, dog bites man isn’t news; man bites dog is hot stuff.
The result of this Babel of medical journalism is that many people just ignore it all, assuming (not unreasonably) that next year another study will come along contradicting whatever exciting finding this year brings. Fat is bad! No, fat is not so bad! Coffee is bad! Except when it isn’t! And so on.
A recent issue of the New England Journal of Medicine has an excellent editorial by Susan Dentzer, a respected health journalist and the editor of Health Affairs, about the pitfalls of all this and how things might be improved. She describes the problem this way: “Journalists sometimes feel the need to play carnival barkers, hyping a story to draw attention to it. This leads them to frame a story as new or different — depicting study results as counterintuitive or a break from the past — if they want it to be featured prominently or even accepted by an editor at all.”
Her solution is pretty simple: journalists need to supply readers with the context, the shades of grey, that are part of interpreting the results of any research study. I don’t really expect that to happen much; the demands of the 24/7 news cycle are too overwhelming. Readers, though, can learn to read more critically, which is one reason I’ve been posting on this blog about how to judge the validity of research data.
Gary Schwitzer, a professor at the University of Minnesota School of Journalism, keeps an excellent blog about these issues. I check it frequently (he has useful things to say about the Dentzer piece), and you can also find a link to it in my blogroll on the right.