Antidepressant trial’s upended results show need for sharing all data
By Jon Jureidini and Melissa Raven

20 Sep 2015

In 2001, a “landmark” study published in the prestigious Journal of the American Academy of Child and Adolescent Psychiatry purported to show the safety and effectiveness of using a common antidepressant to treat adolescents. But soon after its publication, both researchers and journalists raised questions about the research. And in an article we published today in the BMJ, we’ve shown that the original published findings were biased and misleading.

Known as Study 329, the randomised controlled trial compared paroxetine (Paxil, Seroxat, Aropax, among other brand names) with a placebo and an older antidepressant (imipramine) for treatment of adolescent depression. It was funded by SmithKline Beecham – now GlaxoSmithKline (GSK) – the manufacturer of paroxetine.

The research has been repeatedly criticised, and there have been numerous calls for it to be retracted. Our study, Restoring Study 329, was conducted under an initiative called restoring invisible and abandoned trials (RIAT), which encourages abandoned or misreported studies to be published or formally corrected so that doctors and patients have complete and accurate information to make treatment decisions.

Fundamental problems

To re-analyse the evidence of effectiveness and safety of paroxetine, we used documents posted online by GSK, including the clinical study report, which was submitted to the US Food and Drug Administration (FDA) to gain approval for paroxetine to be prescribed to adolescents. We also had access to other publicly available documents and individual participant data, as well as other documents provided by GSK.

The clinical study report had significant problems; although it reported more adverse events than the original article, it omitted many problems our re-analysis found in the case report forms for individual patients.

We found that paroxetine was no more effective than a placebo, which is the opposite of the claim in the original paper. We also found significant increases in harms with both paroxetine and imipramine. Compared with the placebo group, the paroxetine group had more than twice as many severe adverse events, and four times as many psychiatric adverse events, including suicidal behaviours and self-harm. And the imipramine group had significantly more heart problems.

Our re-analysis has implications beyond Study 329; it has repercussions for all of evidence-based medicine.

First, we identified ten strategies used by researchers in this clinical trial to minimise apparent harms. These included inconsistent classification of adverse events and ignoring their severity. Several of these strategies can be readily identified in reports of other trials. They influence the apparent safety of drugs, and can be used to present particular drugs in a favourable or unfavourable light. Many of these strategies may also be used to bias reporting of non-drug treatments.


Second, and more importantly, our findings show that influential peer-reviewed research published in leading medical journals can be seriously misleading. They also show that it's not possible to adequately scrutinise trial outcomes, particularly in relation to harms, simply on the basis of what's written in the body of clinical study reports, which can contain important errors.

We know that selective reporting is common in the psychiatric literature. And there are clearly no grounds to believe that such misrepresentation is restricted to psychiatric studies.

Lessons for all

It's clear to us now that access to full individual patient-level data, backed up by case report forms and the pre-specified protocols, is required to judge the validity of published reports of clinical trials. Only that degree of detail allows independent researchers to check how harms are recorded and reported, and whether researchers involved in clinical trials have accurately reported outcomes.

Our re-analysis was demanding because we were breaking new ground, and it was extremely time-consuming because the data GSK made available to us were in a form that made our work highly inefficient. But we have now established a methodology for this kind of work, and it's clear that if data are provided in a user-friendly format, researchers will be able to carry out similar re-analyses reasonably inexpensively.

If other trials are found to contain similar errors – whether intentional or inadvertent – it might be time to change the requirements for submissions to drug regulators (such as Australia’s Therapeutic Goods Administration, the US FDA, and the European Medicines Agency), who are responsible for evaluating the safety and efficacy of prescribed drugs.

Indeed, if other re-analyses reach the sort of conclusions we did, it should become clear to editors of medical journals that trial results should not be published unless all the data are available for independent scrutiny both before and after publication. Peer reviewers also need to become far more critical of the manuscripts they review.

Undoubtedly, there would be resistance to such changes. But such scrutiny is warranted for drugs that are likely to be prescribed to millions of patients, given the potential for adverse outcomes and limited benefits.

This article originally appeared on The Conversation.

Jon Jureidini is Research Leader, Critical and Ethical Mental Health research group, Robinson Research Institute at University of Adelaide.

Melissa Raven is a postdoctoral research fellow at University of Adelaide.
