Posted by jay on June 24, 2002, at 2:35:20
I found a really great column from the Toronto Star newspaper on why we should be far more cautious about being influenced by a drug study. As the author puts it, there are many, many technical aspects to drug studies that can only be found in the deep technical language of the study itself. Beyond that, we'd better prep for graduate and doctoral courses in med-study statistics. Also, the 'abstracts' we often quote on here can be very misleading. (Note his reference to the almost 'religious' acceptance of so-called 'alternative' therapies that usually have nil science to back them up. "5 out of 15 rats prefer..."...c'mon!..heh) Anyhow... here is the story, from www.thestar.com:
Digest medical stories with a grain of salt
By Jay Ingram
Science writers like me should spend more of our time cautioning readers not to take us seriously. I don't mean that you should be concerned that we're putting you on (although that may happen from time to time), but rather that we should not be seen as "authorities" or writers "of record." When you read this column, you're reading what I choose to write, no more, no less.
You should also be wary of where I'm getting my information because before it trickles down to me, it has been massaged by those doing the work and writing about it.
As a nice example, researchers at the University of California at Davis have found that articles in the top medical journals tend to spin their statistics. Jim Nuovo and Joy Melnikow found that, most of the time, evaluations of new medicines quoted only the most positive statistic, something called "relative-risk reduction." This is the percentage difference in outcomes between a group receiving the medication and a comparison group given a placebo.
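To make "relative-risk reduction" concrete, here is a minimal sketch in Python, using invented trial numbers (nothing below comes from Nuovo and Melnikow's data; the rates are purely illustrative):

    # Hypothetical trial: 2 in 100 placebo patients suffer a heart
    # attack, versus 1 in 100 patients on the new drug.
    placebo_rate = 2 / 100
    treated_rate = 1 / 100

    # Relative-risk reduction: the drop in risk expressed as a
    # fraction of the placebo group's risk. This is the flattering number.
    rrr = (placebo_rate - treated_rate) / placebo_rate
    print(f"Relative-risk reduction: {rrr:.0%}")  # prints "50%"

A "50 per cent risk reduction" sounds dramatic, even though in these made-up numbers only one extra patient in a hundred was actually helped.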
Nuovo and Melnikow pointed out that a different statistic, the "number needed to treat" (NNT), is just as important. The problem is that it's rarely quoted.
The NNT is an estimate of the number of patients who would have to be treated with, say, a new drug before the medication actually prevented one serious outcome, like death, heart attack or stroke.
So an NNT of five would mean that, on average, five patients have to take the drug for one of them to be spared a serious outcome; the other four get no benefit from the treatment. This selection of statistics is perhaps more of a problem for your doctor than for you, but it could easily influence how you're treated.
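Continuing with the same invented numbers from the sketch above, the NNT is simply the reciprocal of the absolute risk reduction (the raw difference in event rates), and it tells a much less flattering story about the very same trial:

    # Same hypothetical rates as in the sketch above.
    placebo_rate = 2 / 100
    treated_rate = 1 / 100

    # Absolute risk reduction: the raw difference in event rates.
    arr = placebo_rate - treated_rate  # 0.01

    # Number needed to treat: on average, this many patients must be
    # treated for one serious outcome to be prevented.
    nnt = 1 / arr
    print(f"Number needed to treat: {nnt:.0f}")  # prints "100"

So the same data that supports a "50 per cent relative-risk reduction" headline also means a hundred patients must take the drug for one of them to benefit.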
If that weren't enough, the June 5 edition of the Journal of the American Medical Association reported that news releases from medical journals routinely exaggerate the importance of studies while at the same time ignoring their limitations. This report came from researchers Steven Woloshin and Lisa Schwartz. (Of course, I came across this report in, what else, a news release from the Journal.)
Among many criticisms of the standards established for news releases, one in particular stood out in my mind: Of all the studies that received industry funding, fewer than one in four actually acknowledged the receipt of that funding.
Woloshin and Schwartz also had concerns about the number of media reports coming out of scientific meetings, rather than the peer-reviewed journals. They make the point that experimental results presented at meetings are often in a preliminary form, many fail in the long term to live up to their promise, and doctors and patients alike could be seriously misled by them.
So, there is spin everywhere, from the way the medical reports themselves are written to the way they're described by the journals trying to get air time or print coverage. Pointing out these flaws in medical and scientific reporting is one thing, but suggesting remedies is more difficult.
No one should be surprised that news releases from medical journals represent a scaled-down, highlighted version of the paper that actually appears in the journal. Few reporters, even fewer editors and almost no readers would be interested in (or capable of understanding) such details.
To suggest, as Woloshin and Schwartz did, that both reporters and scientists emphasize the preliminary nature of the findings is unrealistic. For one thing, such a caveat presented at the end of a report would usually be ignored by the audience. And if the information is so uncertain, maybe it shouldn't be reported at all. After all, if an experimental Alzheimer's drug has been shown only tentatively to be effective in mice, do any of us who have Alzheimer's in the family really need to know that?
I'm sure none of this is new and I'm equally sure not much will come out of these revelations.
I've come to believe that most of the scientific/medical information that washes over us every day is ignored, except for those pieces that somehow seem to be relevant to our lives, right here and right now. And when you consider the amount of hype and propaganda about alternative remedies that is alive and well on the Internet, these flaws in the reporting of medical research seem somehow less disturbing.
Of course, you should really get the original papers and read them yourself.