Posted by Jost on July 18, 2006, at 19:41:23
In reply to Re: couldn't have said it better myself, posted by SLS on July 18, 2006, at 6:34:58
I'm late to this discussion, so maybe this has already been covered, but I have a few questions about the numbers in the STAR*D study.
(My math is extremely rusty, so I could be wrong on my numbers-- so I guess I'm offering this with question marks.)
The numbers seem to be as follows:
original number considered for study: 4041
accepted into study: 2876
The first treatment was citalopram (celexa).
Level I
remission 30% (eg approximately 950)
responders 10-15% (eg 287 to whatever 15% is) (these people were allowed to go into Level II because they didn't have "remissions" or become "symptom-free")
Ineffective, or too many side effects to complete: 70% (I infer that this number includes the responders, so there were 50-55% with no response and 10-15% with some response)
There were therefore 1914 people eligible to enter Level II.
Note: those who had remissions were to be followed for another 12 months, to evaluate the continuation of their remission or other outcomes. I didn't notice any mention of them in the later discussion; however, some among them may be presumed to have been having a placebo effect and might need another therapy later, if the study is continuing.
Of the 1914 eligible people, 1439 chose to continue into Level II
This means that almost 500 people dropped out at this point, leaving only 75% of the non-remitters in the study.
These people were given the choice to switch or augment citalopram with another AD. (People opted not to be randomized at this level of the study.)
51% (727) switched to another AD
39% (565) augmented
10% (149) chose to switch or augment with CBT, and were excluded from the final results.
So about 68% of the eligible people, or 1290 people, participated in Level II.
Of these 68%, or 1290 people, there were remissions in:
25% of the switchers, or 188 people, and
30% of the augmenters, or 181 people
This means that of the second 1914 eligible, or 1439 participants, 369 had remissions. What 25% and 30% of 68% is, I haven't calculated, but I guess it's between 17-23% (I could be wrong on that...)
Altogether, of the 2876 people who entered Level I, 1319 people (or about 46%) had remissions.
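For anyone who wants to check the arithmetic, here is a minimal sketch that recomputes the cumulative remission figure from the counts and percentages quoted above (the inputs are the poster's approximations, not the published STAR*D tables, so small discrepancies with the figures in this post are expected):

# Rough check of the STAR*D figures quoted above (poster's approximations).
entered_level1 = 2876
level1_remission_rate = 0.30          # "remission 30%"

switchers, augmenters = 727, 565      # Level II switch / augment arms
switch_remission_rate = 0.25          # "25% of the switchers"
augment_remission_rate = 0.30         # "30% of the augmenters"

level1_remitters = level1_remission_rate * entered_level1
level2_remitters = (switch_remission_rate * switchers
                    + augment_remission_rate * augmenters)

cumulative = level1_remitters + level2_remitters
print(f"Level I remitters:  ~{level1_remitters:.0f}")
print(f"Level II remitters: ~{level2_remitters:.0f}")
print(f"Cumulative remission after two levels: "
      f"~{cumulative:.0f} of {entered_level1} ({cumulative / entered_level1:.0%})")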
On one hand, remission is a higher standard than response, so this could be taken as a positive sign. On the other hand, it means that 64% had some sort of not-entirely-satisfying outcome.
The study, as reported online on the NIMH page, doesn't further divide these into those who had or didn't have a response less than remission, or further detail the quality of the less-than-remission responses. It also doesn't report on the progress of those who had remissions, and what, if any, further and different treatment they might have needed.
Maybe my math is wrong, because it seems to differ from the statistics that SLS quotes, which are more optimistic. If so, I'm interested in how one should analyze the results.
I should also note, however, that there must be some sense among scientists that treatments do work in a way that they take seriously. I say this because there are ethical concerns beginning to be reported about using placebos in studies. This ethical concern is about the use of placebos in patients for whom there is a proven treatment--for whom it is not considered ethically acceptable to give no treatment in a study of potentially helpful drugs.
I personally have had good results with certain ADs, so I'm not in any way debunking their value. I also believe that there are other ADs, which may work better, or other combinations that may work better, than those in the study.
I just wonder about exact outcome of the study, just on the simplest level, in the numbers. (Again, I apologize in advance, if I got this wrong.)
thanks, Jost
Posted by linkadge on July 18, 2006, at 19:59:43
In reply to Re: couldn't have said it better myself, posted by SLS on July 17, 2006, at 22:01:03
"Currently marketed antidepressants work significantly better than our most recent clinical trials indicate"
In your opinion. My opinion is that they work significantly worse, seeing as that trial may just be the first one in 10 that shows the drug performed better than placebo. Drug companies do not have to disclose failed drug trials.
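As an aside, the point about undisclosed failed trials can be illustrated with a toy simulation: if only trials that cross a significance threshold get reported, the reported effect overstates the true one. This is only a sketch with invented numbers (the true effect, trial size, and trial count are arbitrary assumptions, not estimates from any real antidepressant data):

import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2      # assumed small true drug-placebo difference (SD units)
n_per_arm = 50         # assumed patients per arm
n_trials = 1000        # assumed number of trials conducted

all_effects, published = [], []
for _ in range(n_trials):
    drug = rng.normal(true_effect, 1.0, n_per_arm)
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    diff = drug.mean() - placebo.mean()
    se = np.sqrt(drug.var(ddof=1) / n_per_arm + placebo.var(ddof=1) / n_per_arm)
    all_effects.append(diff)
    if diff / se > 1.96:          # only "significant" trials get reported
        published.append(diff)

print(f"true effect:             {true_effect:.2f}")
print(f"mean effect, all trials: {np.mean(all_effects):.2f}")
print(f"mean effect, published:  {np.mean(published):.2f}")
print(f"fraction published:      {len(published) / n_trials:.2f}")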
Even if the placebo response were to increase slightly, there are still a whole host of miserable drug responses. It's not unheard of to get large trials showing only 35% of people respond to the active drug.
There is an argument too that trial data from 3 decades ago isn't too reliable; this is evidenced by the fact that some of the initial results for the TCAs have been hard to replicate.
I would still argue that antidepressants may produce impressive initial results owing to their ability to quickly reduce REM sleep.
That's why you get all those accounts of people saying, "the world looked so much brighter"... that's an effect of REM sleep reduction. It usually subsides as the cholinergic system attempts to regain sensitivity.
Linkadge
Posted by linkadge on July 18, 2006, at 20:06:15
In reply to Re: couldn't have said it better myself, posted by SLS on July 17, 2006, at 23:01:41
There are also doctors who believe that the drugs work no better than placebo, and that most people recover due to time.
The head doctor I saw at Grand River Hospital in Waterloo, Ontario, Canada, Dr. Powers, told me that most of the drug effect is placebo.
I don't know if 85% of patients get better as a result of medications. Sometimes patients can try combinations of medications for years. Depression, too, often gets better on its own. Sometimes people like to attribute their recovery to a drug, but it may not be an appropriate association.
I don't think there is any concrete data to suggest that 85% of depression sufferers get better with meds and combinations.
Linkadge
Posted by linkadge on July 18, 2006, at 20:07:37
In reply to Re: couldn't have said it better myself, posted by Klavot on July 18, 2006, at 1:56:43
Well, either zoloft did better than placebo or it didn't.
I think that the primary measures are the ones most likely to be indicative of drug effect.
Linkadge
Posted by linkadge on July 18, 2006, at 20:13:32
In reply to Re: couldn't have said it better myself, posted by SLS on July 18, 2006, at 6:34:58
"Nor does it help the 15% to disable the institutions dedicated to discovering ways to treat it."
I'm not telling people not to try drugs. I wouldn't want to jinx anyone's response. Though, if somebody jinxed my response, I would argue that it wasn't really a response.
I remember I was doing fairly well on clomipramine until somebody told me that it was genotoxic, and that the cardiac conduction effects could cause heart problems down the road.
Guess I was never really responding to it in the first place.
My therapist tried the ol' "well, everything causes cancer these days." Didn't work.
Linkadge
Posted by linkadge on July 18, 2006, at 20:32:10
In reply to Re: couldn't have said it better myself, posted by Jost on July 18, 2006, at 19:41:23
> I should also note, however, that there must be some sense among scientists that treatments do work in a way that they take seriously. I say this because there are ethical concerns beginning to be reported about using placebos in studies. This ethical concern is about the use of placebos in patients for whom there is a proven treatment--for whom it is not considered ethically acceptable to give no treatment in a study of potentially helpful drugs.
I would argue that the so-called "ethical concerns" are due to the fact that certain researchers want the placebos out of their trials so that people don't end up seeing how well the placebo often performs.
When placebo and active drug are often so neck-and-neck, I see no ethical concerns in using placebo. If you're just as likely to get better on placebo, it's not ethically wrong to use one in your trial.
If your math is right in that 64% of people had inadequate results from antidepressants despite the best possible care, then clearly this is confirming what I know.
We here on Psycho-Babble are *not* the only ones who don't do so great on these drugs.
Just because you don't have or know how to use the internet, or don't have the ambition to go online, doesn't mean that your treatment works fine. There are plenty of people who could probably confirm similarly modest results but for whatever reason haven't connected. It really depends on what you "want" to see.
Linkadge
Posted by linkadge on July 18, 2006, at 20:36:54
In reply to Re: couldn't have said it better myself, posted by linkadge on July 18, 2006, at 20:32:10
This summer the result of a study by two psychologists was released in the American Psychological Association's e-journal. This study obtained and reviewed 47 research test studies submitted to the FDA for approval of the most recent antidepressants (Prozac, Paxil, Zoloft, etc). The psychologists found that in judging the effect of a treatment, in terms of how well it does over and above a placebo (sugar pill), the effects of the antidepressants were deemed "clinically negligible".
A second study, done by a Seattle psychiatrist, reviewed 96 clinical trials and discovered that in 76% of the cases reviewed the response to antidepressant drugs was duplicated by the placebo. In other words, 76% of the time the sugar pill created the same "benefits" that the drug did.
http://www.biohealthinfo.com/html/resources/articles/article_archive/antidepressants.html
Linkadge
Posted by cecilia on July 18, 2006, at 21:52:38
In reply to Re: couldn't have said it better myself, posted by SLS on July 18, 2006, at 6:34:58
> > Perhaps, but that doesn't really help when you're in the other 15%.
>
> Nor does it help the 15% to disable the institutions dedicated to discovering ways to treat it.
>
>
> - Scott
I certainly would never tell anyone not to try AD's. Certainly if 85% eventually find something that helps that's pretty good. But the remaining 15% is still a huge number of people. A pdoc who sees 100 patients a week will have 15 of them not respond to anything. (Probably a lot more than that, since many people get their 1st and 2nd AD's tried from their GPs and only go to a pdoc if those don't work.) Pdocs certainly don't tell people those odds. You see public service TV commercials "Depression is treatable". They don't say "Depression is treatable unless you're in the unlucky 15%." It makes me angry. Cecilia
Posted by SLS on July 19, 2006, at 5:31:08
In reply to Re: couldn't have said it better myself, posted by cecilia on July 18, 2006, at 21:52:38
> > > Perhaps, but that doesn't really help when you're in the other 15%.
> >
> > Nor does it help the 15% to disable the institutions dedicated to discovering ways to treat it.
> >
> >
> > - Scott
>
> I certainly would never tell anyone not to try AD's. Certainly if 85% eventually find something that helps that's pretty good. But the remaining 15% is still a huge number of people. A pdoc who sees 100 patients a week will have 15 of them not respond to anything. (Probably a lot more than that, since many people get their 1st and 2nd AD's tried from their GPs and only go to a pdoc if those don't work.) Pdocs certainly don't tell people those odds. You see public service TV commercials "Depression is treatable". They don't say "Depression is treatable unless you're in the unlucky 15%." It makes me angry.
Me too. Me too.
Which tricyclic did you combine with Parnate?
- Scott
Posted by SLS on July 19, 2006, at 5:57:36
In reply to Re: couldn't have said it better myself, posted by linkadge on July 18, 2006, at 19:59:43
> There is an argument too that trial data from 3 decades ago isn't too reliable; this is evidenced by the fact that some of the initial results for the TCAs have been hard to replicate.
Whose argument would that be?
They already have been.
I thought we already covered this.
All you have to do is thoroughly evaluate the prospects and allow only those with true MDD to participate in studies by including only the more severe cases as scored on standardized depression rating scales - just as was done 20-30 years ago.
One more...
J Psychiatr Res. 2005 Mar;39(2):145-50.
Severity of depressive symptoms and response to antidepressants and placebo in antidepressant trials.
Khan A, Brodhead AE, Kolts RL, Brown WA.
Northwest Clinical Research Center, Bellevue, WA, USA. akhan@nwcrc.net
Although increased pre-treatment severity of depressive symptoms is thought to suggest better outcome with tricyclic antidepressants, it is unclear if such a pattern exists among those depressed patients treated with newer antidepressants. If such a pattern with newer antidepressants were observed, it would have implications for the design and conduct of future antidepressant trials. We reviewed the data from 329 depressed adult patients that were part of 15 multi-center, randomized, double blind, placebo-controlled antidepressant clinical trials at our center. Based on patients' pre-treatment scores on the 17-item Hamilton Depression Rating Scale (HAM-D), patients were sub-grouped to one of four severity of depression groups: low moderate, high moderate, moderately severe, and severe. The effect size was 0.51 in the low moderate group, 0.54 in the high moderate group, 0.77 in the moderately severe group and 1.09 in the severe group. An analysis of variance revealed a statistically significant interaction between treatment and severity of depressive symptoms. A correlational analysis revealed that in the group of depressed patients assigned to antidepressants, higher levels of pre-treatment depressive symptoms were significantly associated with greater changes in response to antidepressant treatment. Although a similar pattern was seen among the depressed patients assigned to placebo, it did not reach statistical significance. The results of this study suggest that antidepressant-placebo differences may be larger among those depressed outpatients with higher pre-treatment HAM-D scores compared to those depressed outpatients with lower pre-treatment scores. These findings may help in the design of future antidepressant clinical trials.
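For readers unfamiliar with the term, the effect sizes quoted in this abstract are standardized mean differences. A minimal sketch of how such a figure is computed from two groups' HAM-D change scores follows; the numbers are invented for illustration and are not from the Khan et al. data:

import numpy as np

# Hypothetical HAM-D improvement scores (invented numbers, for illustration only)
drug_change = np.array([12.0, 9.0, 15.0, 8.0, 11.0, 14.0, 10.0, 13.0])
placebo_change = np.array([7.0, 5.0, 9.0, 6.0, 8.0, 4.0, 10.0, 6.0])

# Cohen's d: difference in means divided by the pooled standard deviation
n1, n2 = len(drug_change), len(placebo_change)
pooled_sd = np.sqrt(((n1 - 1) * drug_change.var(ddof=1) +
                     (n2 - 1) * placebo_change.var(ddof=1)) / (n1 + n2 - 2))
d = (drug_change.mean() - placebo_change.mean()) / pooled_sd
print(f"effect size (Cohen's d) = {d:.2f}")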
- Scott
Posted by linkadge on July 19, 2006, at 19:33:57
In reply to Re: couldn't have said it better myself, posted by SLS on July 19, 2006, at 5:57:36
>They already have been.
I just don't understand how you can't envision all of the clinical trials that show nothing at all that never see the light of day.
There is the argument too (surely you are not ignorant of it) that the use of active placebos often produces a significant narrowing of the gap between antidepressant and placebo success rates.
It has been well documented that patients' beliefs can often predict response to antidepressants.
http://www.docguide.com/news/content.nsf/news/8525697700573E1885256CFC00521C5A
When taken in the context of active drug vs. placebo, it is not unreasonable to suggest that patients know when they are getting an active drug. When the patient comes to believe that they are receiving the active drug, that's when these beliefs can play a significant role in the outcome of the trial. This is why active placebos are so fiercely opposed by drug companies.
The following excerpts were taken from:
http://bjp.rcpsych.org/cgi/content/full/180/3/193
In addition, Greenberg et al found that the effects of older antidepressants compared with placebo were less than the effects of newer ones in a meta-analysis of three-arm trials comparing a new antidepressant and an old one with placebo (Greenberg et al, 1992). They suggested that this was due to reduced expectations of the performance of the older antidepressants.
Moncrieff et al (1998) found lower effect sizes in trials using active placebos
Quitkin et al also criticised the findings from active placebo-controlled trials on the basis that drug improvement rates were lower than expected.
The Hamilton Rating Scale for Depression (Hamilton, 1960) has been criticised because it contains a large number of items relating to sleep and anxiety, which is likely to favour any active drug with sedative properties. (I.e., the highly sedating antihistamine-derivative TCAs)
Selective reporting of outcomes is a potential problem.
Failure to perform ‘intention to treat’ analysis has been shown to inflate apparent treatment effects in antidepressant trials (Bollini et al, 1999).
****************************
Publication bias is also a concern (Antonuccio et al, 1999; Moncrieff, 2001) and a recent meta-analysis SHOWED THAT SPONSORSHIP WAS THE STRONGEST PREDICTOR OF OUTCOME IN COMPARATIVE TRIALS WITH SELECTIVE SEROTONIN REUPTAKE INHIBITORS (Freemantle et al, 2000).
*****************************
It is agreed that there is great heterogeneity among such trials of antidepressants, with a substantial proportion finding no difference between drug and placebo.
Morris & Beck (1974) found that around a third of trials of tricyclic antidepressants were negative.
Rogers & Clay (1975) FOUND THAT 64% OF TRIALS OF IMIPRAMINE WERE NEGATIVE, and McNair (1974)
The Medical Research Council trial found no difference between imipramine and placebo on the main categorical outcome and negligible differences in individual symptoms (Medical Research Council, 1965).
Although it was reported that imipramine was superior to placebo at 3 weeks, there was no difference at the end of the 5-week treatment period or at the end of follow-up. In addition, the main report on efficacy excluded the 159 Black patients who were said to have shown a poorer response to imipramine (Raskin et al, 1970). (I.e., rife with bias)
Recent studies are almost all conducted with out-patients and most are sponsored by the pharmaceutical industry. It is possible that there is a greater potential for placebo effects and therefore for amplified placebo effects in people with milder disorders. However, so few studies test the integrity of the double-masking that it is difficult to know to what extent this is the case (Even et al, 2000).
Many substances not conventionally classified as antidepressants have been found to be superior to placebo or to have equivalent efficacy to antidepressants in trials of treatment of depression. The list includes various neuroleptics (Robertson & Trimble, 1982), barbiturates (Blashki et al, 1971), benzodiazepines (Imlah, 1985), buspirone (Robinson et al, 1990), some stimulants (Rickels et al, 1970) and more recently Hypericum extract (Philipp et al, 1999). These observations might imply that depression is susceptible to a variety of non-disease-specific pharmacological actions such as sedation or psychostimulation, as well as the effects of suggestion.
I honestly don't see how you can conclude in such a cut-and-dried manner that antidepressants are unequivocally effective.
Linkadge
Posted by SLS on July 19, 2006, at 19:47:48
In reply to Re: couldn't have said it better myself, posted by linkadge on July 19, 2006, at 19:33:57
> I honestly don't see how you can conclude in such a cut-and-dried manner that antidepressants are unequivocally effective.
It's pretty easy once you get the hang of it.
:-)
- Scott
Posted by linkadge on July 20, 2006, at 16:32:23
In reply to Re: couldn't have said it better myself, posted by SLS on July 19, 2006, at 19:47:48
>It's pretty easy once you get the hang of it.
To each his own. :)
Linkadge
Posted by Karen44 on July 20, 2006, at 21:56:22
In reply to Re: couldn't have said it better myself, posted by Jost on July 18, 2006, at 19:41:23
In my opinion, the study is pseudo science--psychiatry's attempt to pretend they know how to do research and engage in real scientific study. What a crock!!! It is a manipulation of the data.
Karen
Posted by SLS on July 20, 2006, at 22:37:37
In reply to Re: couldn't have said it better myself, posted by Karen44 on July 20, 2006, at 21:56:22
> In my opinion, the study is pseudo science--psychiatry's attempt to pretend they know how to do research and engage in real scientific study. What a crock!!! It is a manipulation of the data.
What did you find in the STAR*D results that led you to your conclusion?
Where did you find the report of results and statistical evaluation? It wasn't part of the summary press release I cited.
- Scott
Posted by SLS on July 20, 2006, at 22:49:40
In reply to Re: couldn't have said it better myself, posted by Karen44 on July 20, 2006, at 21:56:22
> > In my opinion, the study is pseudo science--psychiatry's attempt to pretend they know how to do research and engage in real scientific study. What a crock!!! It is a manipulation of the data.
>
> What did you find in the STAR*D results that led you to your conclusion?
>
> Where did you find the report of results and statistical evaluation? It wasn't part of the summary press release I cited.
I apologize.
I think there were enough numbers given to form an impression. I'm sure you'll be able to show me where they were manipulated.
- Scott
Posted by SLS on July 20, 2006, at 23:39:45
In reply to Re: couldn't have said it better myself » Karen44, posted by SLS on July 20, 2006, at 22:37:37
> What a crock!!! It is a manipulation of the data.
If they were going to manipulate the data, you would think they would go for something higher than 20% for Step 3.
- Scott
Posted by Kon on July 21, 2006, at 18:59:59
In reply to Re: couldn't have said it better myself, posted by SLS on July 19, 2006, at 5:57:36
>The results of this study suggest that antidepressant-placebo differences may be larger among those depressed outpatients with higher pre-treatment HAM-D scores compared to those depressed outpatients with lower pre-treatment scores.
The problem is that other studies/reviews suggest otherwise. For instance Kirsch et al write:
"In response to the point about severity of depression, we reiterate that the NICE data did not demonstrate a gradient. The Ansgt et al meta-analysis cited also did not show convincing evidence of a gradient and other research we cited suggests that antidepressants are not very effective in more severe conditions. We did acknowledge that some research had shown such a gradient and we thank those authors who drew our attention to the fact that the same group have reproduced this finding. We still feel the evidence so far is inconclusive. Professor Taylor (rapid responses) cites further evidence against a relationship of efficacy and severity for tricyclic antidepressants as well as SSRIs.
For this debate (full free articles) see:
Posted by SLS on July 21, 2006, at 19:57:40
In reply to Re: couldn't have said it better myself, posted by Kon on July 21, 2006, at 18:59:59
Hi Kon.
Thanks for the link and references! I wish I could read it all.
I have not yet been terribly impressed with the authors' use of the works of others. It doesn't seem that they review them thoroughly enough. They also like to take snippets out of context, as they have done with the works of Quitkin et al.
I found it interesting that Joanna Moncrieff cited in her editorial published here:
http://bjp.rcpsych.org/cgi/content/full/180/3/193
the following paper:
The British Journal of Psychiatry 127: 599-603 (1975)
© 1975 The Royal College of Psychiatrists
A statistical review of controlled trials of imipramine and placebo in the treatment of depressive illnesses
SC Rogers and PM Clay
A method of reviewing a series of clinical trials by extracting the basic data in the form of 2 x 2 tables and analysing these by Fisher's two-tailed Exact Test is described, and illustrated by published imipramine-placebo trials. The results suggest that the benefit of this drug in patients with endogenous depression who have not become institutionalized is indisputable, and that further drug-placebo trials in this condition are not justified. Two of the three trials of imipramine in neurotic depression gave results showing significant improvement. Possible explanations of the apparent failure of this drug in groups of patients with undifferentiated depression are discussed.
I really didn't understand why she included this as a reference. It appears to support the thesis that imipramine, a tricyclic antidepressant, is indisputably effective in endogenous depression, a true MDD variant, such that the ethics of further placebo controlled trials in this patient population are questioned. Perhaps I'm missing something.
- Scott
Posted by SLS on July 22, 2006, at 5:32:47
In reply to Re: couldn't have said it better myself, posted by SLS on July 21, 2006, at 19:57:40
It is my recommendation to anyone who reads this work of Joanna Moncrieff and Irving Kirsch regarding the issue of the effectiveness of antidepressants to also read the literature they cite in their papers. This is easily done by clicking on the hyperlinks they conveniently provide. It is important to keep in mind the theses being discussed when doing so, so as not to confuse those of the various authors. I also recommend that one keep in mind the differences between proposed hypotheticals and actual collected data.
- Scott
Posted by Karen44 on July 22, 2006, at 8:09:26
In reply to Re: couldn't have said it better myself, posted by SLS on July 22, 2006, at 5:32:47
Sorry, I sent a message back, but I guess I did something wrong as it did not go through, and right now I am feeling too depressed to answer with anything intelligent. Maybe later.
Karen
Posted by linkadge on July 22, 2006, at 16:14:22
In reply to Re: couldn't have said it better myself, posted by Karen44 on July 22, 2006, at 8:09:26
Two people can receive the same data and come to entirely different conclusions.
Linkadge
Posted by ttee on July 27, 2006, at 6:36:59
In reply to Re: couldn't have said it better myself, posted by linkadge on July 22, 2006, at 16:14:22
> Two people can receive the same data and come to entirely different conclusions.
>
> Linkadge
TTEE:
Not when the two people are both getting paid big bucks from a drug/medical device company. They tend to all agree on giving their thumbs up on the product they are getting paid to push.
- TTEE
Kaiser Daily Health Policy Report
Tuesday, July 25, 2006
Opinion
Pharmaceutical Companies Often Hire 'Star' Researchers To Sell Products Despite Questionable Evidence, Commentary States
Pharmaceutical companies often hire researchers for "star power, an A-list cast with names that themselves sell a product and pull other doctors along, even when the evidence for a treatment is not strong," New York Times reporter Benedict Carey writes in a commentary. For example, according to Carey, the journal Neuropsychopharmacology last week published an article on a new depression treatment -- a $15,000 chest implant that FDA approved in 2005 despite concerns about the effectiveness of the device -- and failed to disclose the financial ties of the authors to the manufacturer, Cyberonics, as well as the names of some of the authors and other consultants hired by the company. Those unnamed individuals "are precisely the sorts of experts the field relies on to help evaluate highly disputed data," Carey writes, adding that, although the "device begged for some more public analysis," the only researchers with the ability to evaluate the effectiveness of the implant "were ... on the company's payroll." According to Carey, "One of the supposed strengths of American science is that it is decentralized and diverse. ... But when many or most of the leading figures are playing for the same team -- an all-star team -- that lineup itself may carry the day, regardless of the science" (Carey, New York Times, 7/25).
Posted by SLS on July 27, 2006, at 9:48:32
In reply to Re: couldn't have said it better myself, posted by ttee on July 27, 2006, at 6:36:59
> > Two people can receive the same data and come to entirely different conclusions.
That's why we have the science of statistics.
> Not when the two people are both getting paid big bucks from a drug/medical device company.
Statistics is supposed to help standardize the presentation of data. How we interpret the significance of this presentation is subjective and becomes opinion.
As for VNS, I don't think there has been much global motivation to fund studies to investigate it. That leaves Cyberonics. What do you do when no one wants to investigate a possibly effective treatment? That's quite a quandary. Fortunately, there has been some interest within academic circles. There have been studies conducted at Medical University of South Carolina, Harvard, and Columbia Presbyterian. You can review their research on Medline. The NIMH has funded some work, but not much. We need much more money allocated to this division of the NIH to investigate mental illness. They are given only 1 billion dollars for all mental illnesses combined.
The results for VNS are weak. At most, only about 30 percent of the patients treated respond after a 12-week acute-phase clinical trial. One recent 10-week trial showed only a 17 percent response rate. This was still higher than sham, though. However, interestingly and significantly, with continued treatment, more respond after a year. One must take into consideration the desperate treatment resistance of this patient population.
"Those unnamed individuals "are precisely the sorts of experts the field relies on to help evaluate highly disputed data," Carey writes, adding that, although the "device begged for some more public analysis," the only researchers with the ability to evaluate the effectiveness of the implant "were ... on the company's payroll.""
How can we be sure the unnamed individuals were on the company's payroll? Where's the data?
I'd like to see the link to this thing.
- Scott
Posted by SLS on July 27, 2006, at 10:18:54
In reply to Re: couldn't have said it better myself, posted by SLS on July 27, 2006, at 9:48:32
> I'd like to see the link to this thing.
I think it was posted along another thread.
Thanks.
- Scott