
Re: debate » Betula

Posted by Larry Hoover on April 19, 2008, at 13:33:57

In reply to OOOOOOOOOOOOooooooooo, posted by Betula on April 19, 2008, at 3:41:34

> Hello all!
>
> It's great we have an open debate about these sorts of things.

I'll restrict my commentary to debate.

> Here is the link to the webpage of the lead author of that article:
>
> http://psy.hull.ac.uk/Staff/i.kirsch/
>
> So. I'm going to make some points.
>
> 1) One paper doesn't prove a theory - it takes quite a few for something to become universally accepted.

Then why did you conclude that your proposition was "FACT", and on the basis of but one publication?

Science doesn't prove things. Scientific proof is a fallacy. Science *disproves* things. Science is advanced on falsification. What is left is either consistent or inconsistent with hypotheses or theories.

> 2) The author is a PROFESSOR at a large civic university in the UK. (note it's harder to become a professor in the UK - sub-categories such as 'assistant professor' etc. don't exist as they do in other countries.)

We are not debating Kirsch, but the quality of his arguments.

> I doubt any of us here are actual professors.

And how do you come to believe that? You know nothing of the qualifications possessed by members of this anonymous group.

> He has quite a distinguished publication record - again, something I'm sure none of us have.

Again, how do you know? And how does prior publication come to bear on an analysis of this one?

> 3) The paper got published in a reputable, peer reviewed journal.

That's debatable, on its face. We're not talking about JAMA, BMJ, Lancet, Science, Nature et al. It got published, period.

> That means that it was reviewed by other academics working in precisely the same field as him. They must have thought it acceptable for publication, otherwise, it would be sent to the trash.

You have an entirely naive belief in what transpires during peer review. Primarily, it is a search for errors, or blatant inconsistencies among the hypothesis, method, data, results, and conclusions. It is not an assessment of the "value" of the work.

> And it took a year from submission to acceptance, so that would imply it had a couple of revisions at least.

Not necessarily. And one year is very typical.

> So the peer reviewers would have been doing their job properly.

That is a gross fallacy. Here is what a couple of experts on peer review have stated (from wiki):

"Drummond Rennie, deputy editor of Journal of the American Medical Association is an organizer of the International Congress on Peer Review and Biomedical Publication, which has been held every four years since 1986. He remarks, 'There seems to be no study too fragmented, no hypothesis too trivial, no literature too biased or too egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.'"

"Richard Horton, editor of the British medical journal The Lancet, has said that, 'The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability not the validity of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.'"

Note the statement that peer review does not assess validity. Peer review is merely the beginning of the process of scrutiny. Validity is determined by the world at large, not an editorial board or peer review committee. You will note that my critique depended heavily on issues of validity, which Kirsch et al failed to even discuss.

> 4) The journal the paper appeared in is PLoS Medicine. Journals do not want to lose money.

Relevance? It certainly has no bearing on scientific validation, in any case.

> The reputation of a journal (and also of the authors for that matter) quickly goes down the drain if they publish something that is trash.

Indeed. As I earlier stated, it is perhaps relevant to consider that the paper was *not* published by BMJ, JAMA, Lancet, NEJM, Science, Nature or other top tier journals.

> Other academics quickly see through fudged results etc.

No, they do not. Fudged data cannot be assessed by any review mechanism. Failure to replicate is usually the beginning of such challenges. It certainly cannot be determined by peer review.

> This means that journals select the very best papers they can. Therefore, I highly doubt that this paper is 'flawed' in any way.

I'll address flaws (other than those I found) momentarily.

> 5) Therefore I personally believe that this paper would not have been published if it were faulty/flawed/trash in any way. IT WAS PEER REVIEWED for heaven's sake!

The idea that peer review should be the limit of critical review is absurd. What would you do with two papers, both peer reviewed, with absolutely contradictory and mutually exclusive results? It is incumbent on readers to engage critical thinking processes, which did *not* occur with this paper. Its conclusions were swallowed whole by the lay press (and some scientists, too), without forethought.

> 6) I personally do not believe dismissals of the paper coming from people in general, unless they are a) the authors of the paper or

You expect the authors to diss themselves?!?

> b) suitably qualified academics working in the field, i.e. the 'peers'. It's as simple as that.

Well, let's see what his peers have said, shall we? From the reviews appended to the original article, and BMJ:

"In conclusion, the paper of Kirsch and his colleagues presents nothing that was not previously known, but it does introduce empirically unsupported conclusions and erroneous interpretation that are potentially misleading."

Oh, I said the same things in my critique.

"Among other things, these applications have revealed that the misuse of ordinal scaled data can produce erroneous data and drive inaccurate conclusions. Consequently, concerns must be raised over the accuracy of the results of the meta-regression performed by Kirsch et al, given they have undertaken sophisticated mathematical operations on data which do not support such activities. Moreover, it is worth noting that even the calculation of a mean, a standard deviation, and a change score are invalid on ordinal data, given that these all assume equal interval scaling."

Translation: The statistical methods applied during the meta-analysis (of the ordinal Hamilton Depression Scale scores) are not meaningful. Ergo, any conclusions therefrom suffer from the same limitation.
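
To make the ordinal-scale point concrete, here's a rough Python sketch with made-up numbers (not Kirsch's data, and not actual HRSD items). It shows how a mean computed on ordinal codes can hide a real difference when the spacing between categories isn't equal:

# Hypothetical illustration: why means on ordinal codes can mislead.
# An ordinal scale assigns codes 1, 2, 3, ... but nothing guarantees that
# the "distance" from 1 to 2 equals the distance from 2 to 3.

# Ordinal severity ratings (codes 1-4) for two hypothetical groups.
group_a = [2, 2, 2, 4]
group_b = [3, 3, 3, 1]

# Treating the codes as interval data, the groups look identical on average.
mean_a_codes = sum(group_a) / len(group_a)   # 2.5
mean_b_codes = sum(group_b) / len(group_b)   # 2.5

# Now suppose the underlying severity the codes stand for is unequally spaced:
# code 1 -> 0, code 2 -> 1, code 3 -> 2, code 4 -> 10 (a big jump at the top).
latent = {1: 0, 2: 1, 3: 2, 4: 10}
mean_a_latent = sum(latent[x] for x in group_a) / len(group_a)   # 3.25
mean_b_latent = sum(latent[x] for x in group_b) / len(group_b)   # 1.5

print(mean_a_codes, mean_b_codes)      # 2.5 2.5   -- "no difference"
print(mean_a_latent, mean_b_latent)    # 3.25 1.5  -- a clear difference

Same codes, same means, very different underlying severities. That is the sense in which means, standard deviations, and change scores on ordinal data assume equal-interval scaling.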

"In each case the null hypothesis that the Kirsch et al estimator is unbiased has been tested and overwhelmingly rejected."

Re-analysis of Kirsch's methods demonstrates that his methodology negatively biased the outcomes.
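
For anyone not used to the statistical sense of "biased": an estimator is biased when, averaged over many repeated samples, it systematically misses the true value. A rough Python sketch of the concept with simulated data (a textbook example, nothing to do with Kirsch's particular estimator):

# Hypothetical illustration of estimator bias: dividing by n instead of
# n - 1 when estimating a variance systematically undershoots the truth.
import random

random.seed(0)
true_variance = 9.0          # samples drawn from Normal(0, sd=3)
n = 5                        # small samples make the bias obvious
biased_sum, unbiased_sum, trials = 0.0, 0.0, 20000

for _ in range(trials):
    sample = [random.gauss(0, 3) for _ in range(n)]
    mean = sum(sample) / n
    ss = sum((x - mean) ** 2 for x in sample)
    biased_sum += ss / n          # systematically too small
    unbiased_sum += ss / (n - 1)  # centred on the true value

print(f"true variance:      {true_variance}")
print(f"biased estimator:   {biased_sum / trials:.2f}")    # ~7.2
print(f"unbiased estimator: {unbiased_sum / trials:.2f}")  # ~9.0

That is the kind of claim the quoted re-analysis is making about Kirsch's pooled estimate: tested against the null hypothesis of no bias, it was found to miss systematically, and in the downward direction.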

And, even if one accepts the premise that these data are analyzable via this methodology, a recalculation under more rigorous procedures provides this outcome:

"If the weighted mean difference is used (an equally, or more valid approach given that all studies utilised the same outcome measure, namely the HRSD) effect sizes expressed in HRSD scores are larger than reported in this study (2.8 vs 1.8), and paroxetine and venlafaxine reach the NICE criteria for 'clinical significance' (HRSD change > 3)."

As an aside, I had estimated the effect size plotted in Table 4 at about d=3, so I feel validated that my common-sense critical-thinking test of Kirsch's stats is supported mathematically.
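
For anyone who hasn't met the term, a weighted mean difference simply pools each study's raw drug-minus-placebo difference in HRSD change, weighting each study by how much it should count (inverse variance in a real meta-analysis; plain sample size in this toy version). A rough Python sketch with invented study values, not the numbers from Kirsch's data set:

# Hypothetical illustration: pooling a weighted mean difference of
# drug-vs-placebo HRSD change scores and checking it against the NICE
# 3-point "clinical significance" threshold mentioned in the review above.

# Each tuple: (difference in mean HRSD change, drug minus placebo; sample size)
studies = [
    (3.5, 120),
    (2.1,  80),
    (4.0, 200),
    (1.8,  60),
]

# Weight each study's difference by its sample size. (A real meta-analysis
# would use inverse-variance weights, which need each study's variance.)
total_n = sum(n for _, n in studies)
wmd = sum(diff * n for diff, n in studies) / total_n

NICE_HRSD_THRESHOLD = 3.0

print(f"Weighted mean difference: {wmd:.2f} HRSD points")   # ~3.25
print("Meets NICE criterion:", wmd > NICE_HRSD_THRESHOLD)   # True

The reviewer's point is simply that when the pooling is done this way, some drugs clear the 3-point bar that Kirsch's approach said they missed.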

> 7) Of course, science evolves and develops and new things come to light, but at this time, I think that this paper is valid and the conclusions should be accepted into the bigger scheme of things, including papers that show 'the drugs work'.

The paper is not valid. That was the point of my critique, and of other reviewers.

Appending your other post here:

> And oh, I meant to say, why is it that people cannot accept the conclusion of the paper?

It fails when subjected to critical thinking. I do not form conclusions about a paper until I have done so. As the paper fails on multiple fronts, its conclusions are irrelevant. I form my own conclusions.

> Does it challenge your world view in such a way that you simply have to deny the findings of it? Well, it would appear so.

My world view is totally unknown to you. In debate, it does not enter into the exposition in any way. I deny the findings because they are methodologically unsound. I deny the conclusions because they are not supported by the data. And I deny the external validity of the paper because it is not representative of the body of evidence available to me.

> Of course things are never black and white. This paper might be a complete and utter anomaly (like I said before, one paper doesn't really 'prove' anything), but that doesn't mean we can't consider it, reflect on it, and see that the authors may have some very good points.

I did not see any "very good points". If only he had similarly criticized psychotherapy, which has an apparent effect size against placebo of only 0.149, *way* below the NICE criterion.

J Consult Clin Psychol. 2003 Dec;71(6):973-9.
Establishing specificity in psychotherapy: a meta-analysis of structural equivalence of placebo controls. Baskin TW, Tierney SC, Minami T, Wampold BE.
Department of Counseling Psychology, University of Wisconsin-Madison, 53706, USA.

Placebo treatments in psychotherapy cannot adequately control for all common factors, which thereby attenuates their effects vis-a-vis active treatments. In this study, the authors used meta-analytic procedures to test one possible factor contributing to the attenuation of effects: structural inequalities between placebo and active treatments. Structural aspects of the placebo included number and duration of sessions, training of therapist, format of therapy, and restriction of topics. Results indicate that comparisons between active treatments and structurally inequivalent placebos produced larger effects than comparisons between active treatments and structurally equivalent placebos; moreover, the latter comparison produced negligible effects, indicating that active treatments were not demonstrably superior to well-designed placebos.
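
To put a number like 0.149 in perspective: Cohen's d (the standardized effect size at issue here) is just the difference between two group means divided by a pooled standard deviation. A rough Python sketch with invented group statistics, not Baskin et al's data:

# Hypothetical illustration: computing Cohen's d, the standardized effect size.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    # Standardized mean difference using the pooled standard deviation.
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Invented numbers: therapy group improves 8.2 points on average, a
# structurally equivalent placebo group improves 7.0, both with SD = 8.
d = cohens_d(mean1=8.2, sd1=8.0, n1=100, mean2=7.0, sd2=8.0, n2=100)
print(f"Cohen's d = {d:.2f}")   # 0.15 -- a small effect by any convention

An effect that small means the treatment and placebo score distributions overlap almost completely.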


> We shouldn't readily dismiss it because it challenges our beliefs.

It has nothing to do with my beliefs. I didn't get that far.

If his evidence were sound (in my perhaps not-so-humble scientific opinion), I would say so.

> It makes me sad to see that people are so very narrow minded.

<Spock eyebrow>

> Good day to everyone, and I'm leaving now.

Why? I thought you welcomed open debate.

> I do not want to inhabit a playground for people <snip>.

I cannot grasp the basis for this remark.

Lar


 
