BMJ Publishing Mandate
“The BMJ’s mission is to lead the debate on health, and to engage, inform, and stimulate doctors, researchers and other health professionals in ways that will improve outcomes for patients. We aim to help doctors to make better decisions.
To achieve these aims we publish original research articles, review and educational articles, news, letters, investigative journalism, and articles commenting on the clinical, scientific, social, political, and economic factors affecting health. We are delighted to consider articles for publication from doctors and others, and from anywhere in the world.
We can publish only about 7% of the 7000-8000 articles we receive each year, but we aim to give quick and authoritative decisions. For all types of articles the average time from submission to first decision is two to three weeks and from acceptance to publication eight to 10 weeks. These times are usually shorter for original research articles…”
7 September 2014 — Original Submission of Restoring Study 329 is sent to the BMJ (R1).
3 November 2014 — Elizabeth Loder, MD, sends a letter by email to Dr Jureidini and the RIAT team. She states:
“We recognise the value of this paper but we have not yet reached a final decision on it. We believe the paper needs extensive revision and clarifications in response to a number of matters identified by the peer reviewers and editors… We hope very much that you will be willing and able to revise your paper as explained below in the report.”
The BMJ team deciding on publication comprises: Elizabeth Loder, MD (chair); Angela Wade (statistician); Jose Merino; Wim Weber; Tiago Villanueva; and Emma Parish.
The academic reviewers were:
- Florian Naudet, PhD, Rennes University;
- Peter Doshi, Associate Editor, BMJ;
- Hilde PA van der Aa, Clinical Researcher, VU University Medical Centre;
- Sarah Hetrick, Senior Research Fellow, Orygen Youth Health Research Centre, Centre for Youth Mental Health, University of Melbourne;
- Ernest Berry, self-employed Safety & Security Consultant; and
- David Henry, University of Toronto.
3 November 2014 — The BMJ review committee sends 135 questions and comments (with some duplication and overlap) to the 329 team.
There were many questions about bias and conflict of interest on the part of the 329 team, which was striking given that the original study was ghostwritten by a writer hired by GSK, and that its listed authors had clear conflicts of interest.
The only authorship question asked of the 329 team was their level of certainty about whether the original study 329 was ghostwritten. They responded that this had been unequivocally established.
The comments included the following three:
- Please present a true ITT analysis (in other words, analyze all subjects in the groups to which they were randomised, regardless of whether they received the study drug or not). Our statistician suggests that you consider having several columns in your results table. The first would present an ITT analysis using LOCF, the second using imputation and correcting for strata (12 centres). The third column could show the per protocol or complete case analysis using LOCF and the fourth the per protocol or complete case analysis using imputation. This would allow readers to judge for themselves the effects, if any, of using more modern methods of analysis, while still showing the originally intended efficacy analysis.
- We were disappointed that you did not examine the CRFs for all subjects. This seems a serious problem. It is, we understand, a major undertaking to review all of these documents, but seems necessary to set the record straight. After all, the trial itself was a major effort on the part of the original investigators.
- I think that the change of coding between Lack of Efficacy and Adverse Event is difficult and could be misleading. Many times, discontinuation occurs for both lack of efficacy and adverse events, since one can easily consider that adverse events like dry mouth can be more acceptable in the case of treatment efficacy. This point could be addressed in the discussion, and I’m not sure that an a posteriori interpretation of the CRF can give perfect information about the individual patient experience (even if it is much better than aggregated data, of course…). Moreover, I also think that a lack of efficacy can be considered for patients even if they are responders on the HDRS. Patients are not just a score on a scale. The authors’ a posteriori proposal for recoding this can thus be erroneous. On behalf of the committee, Dr Loder also commented on the same point: “We agree with reviewers that coding of adverse events needs to be redone by people who are independent of your group.”
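The statistician’s first request above turns on the distinction between LOCF and imputation-based handling of missing endpoint data. As a rough illustration only (the visit scores below are hypothetical, not Study 329 data), LOCF simply carries a dropout’s last observed score forward to the study endpoint:

```python
# Minimal sketch of "last observation carried forward" (LOCF), the
# endpoint method named in the reviewers' request.  The patient record
# below is a hypothetical illustration.

def locf_endpoint(visits):
    """Return the last non-missing score in a visit series (None = missed visit)."""
    last = None
    for score in visits:
        if score is not None:
            last = score
    return last

# A hypothetical patient who dropped out after the week-3 visit:
weekly_hamd = [22, 18, 15, None, None]  # depression scores; None = missed visit
print(locf_endpoint(weekly_hamd))  # 15: the week-3 score stands in for the endpoint
```

Multiple imputation, by contrast, would model plausible values for the missing visits rather than freezing the last one, which is why the reviewers asked for both shown side by side.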
2 December 2014 — Dr Jureidini writes to Dr Loder at BMJ. The letter includes the following:
“Thank you for the exhaustive editorial and peer review of our important paper. It is the better for it. We have completed our response to reviewers and revised the manuscript accordingly…
There remain three points of potential disagreement: the use of “more modern techniques” to analyse efficacy outcomes; the desirability of completing the analysis of CRFs; and whether we took adequate steps to ensure independence in analysing adverse events.
“More modern techniques” to analyse efficacy outcomes – When it comes to efficacy, the point behind trials and statistical testing is to thoroughly test a manufacturer’s claims because the well-being of vulnerable people is at stake. Trials are designed to weed out bogus claims.
The contract under which GSK provided us with access to data specified that we were to follow the SKB protocol. This seemed appropriate. For a trial to be considered misleading, it should be by the standards that the initial researchers worked under, not “by modern standards”. And we wished to avoid real or perceived bias in the way that we analysed the data…Using the protocol methodology, we could find no hint of efficacy. It is not our place to adopt ever more sophisticated methods to find hints of efficacy… if more “modern” methods of data imputation could have in any way retrieved this study, one imagines GSK would have done so… we ended up deciding that the choice of analytic technique was a potential source of bias (our own bias), and that in the efficacy analysis, we should wherever possible stick to the methodology prescribed by the original SKB a priori protocol.
Completing the analysis of CRFs – …we think it better that you do not require us to complete an analysis of the CRFs. When this Rewrite began, we had no expectation that we would get access to the CRFs… GSK were initially resistant to making the CRFs available. We did negotiate access to Appendix H (77,000 pages of CRFs, compared to approximately 5,500 pages in appendices A-G combined). But…the conditions under which GSK granted access were so restrictive that GSK’s expectation may have been that we would be doing something similar to what FDA do, no more than dip into the occasional record to confirm that, for instance, individual patients existed… As events have transpired, our RIAT article has come to be about more than restoring Study 329. Once GSK granted access to the CRFs, the article has something very important to say about data access. It is unclear whether such access will ever be repeated; there is no commitment on the part of GSK to do this again for other groups.
Apparently “completing” the evaluation of AEs directly from the CRFs risks a number of things:
- giving the impression that the dataset was complete when there were at least 1000 pages missing…
- giving the impression that using GSK’s periscope is a reasonable approach to data access. In fact, neither we nor our readers really do have access to the CRFs in a meaningful way …
- distracting from what may be a major contribution to scientific debate – that there is no such thing as complete analysis, and conclusions from a trial are provisional and subject to improvement by others having equal access to the data.
Analysing adverse events – We believe it is important that the adverse event profile of both paroxetine and ultra-high dose imipramine, never before or since subject to valid testing in children, is provisional. We want to invite others, including GSK, to engage with this study.
- As noted in our submitted paper, the original protocol for Study 329 makes no mention of how AEs from this trial would be coded… blind coding is irrelevant. The blinding that counts is whether the clinician was blind to the drug the child was on when s/he deemed that child to be having an AE and used the clinical descriptors that now appear on the records.
After that blind act, GSK coded these events. They may not have coded them blind. We used a much better coding system and coded blind. We did so because we anticipated the lack of understanding of readers who were not familiar with coding – we did not do so because the paper was methodologically stronger as a result of coding blind…
Across all codings, we would expect disinterested coders to rate our efforts more highly than GSK’s. We have certainly done better than GSK did originally in this study, where there are some very clear breaches of good coding practice… But the key point is this. If GSK engage in such an exercise, they will demonstrate the benefits of data access. Once there is data access, there is nothing to be gained by investigators (in this case us) being biased. We have a huge incentive to be genuine…
We urge you… to publish the reviews as they stand and to wait and see if GSK (who are after all in the best position to carry out all kinds of analyses) respond by adopting the analyses proposed by the reviewers, and if so, what the outcomes are and what the scientific community would make of anything GSK offered in this area.
In conclusion, we offer you a heavily thought out attempt that may provide a basis for setting a first set of standards for future RIAT efforts. In the case of an already published article, RIAT is not intended to be a conduit for criticism and bickering, but rather a serious and thorough analysis of the results of a study in a manner that aims at opening up rather than closing down debate.
3 March 2015 — Dr Loder writes to Dr Jureidini offering provisional acceptance:
“I believe we are getting close to a version we will all find acceptable. At this point we are offering provisional acceptance provided you satisfactorily address the remaining points raised by the reviewers.
On other matters some changes are necessary.
Deadline: Because we are trying to facilitate timely publication of manuscripts submitted to BMJ, your revised manuscript should be submitted by one month from today’s date. If it is not possible for you to submit your revision by this date, we may have to consider your paper as a new submission.”
Attached to this letter is a second round of comments.
17 March 2015 — Dr Jureidini replies to Dr Loder on behalf of the 329 team:
“Thank you for your second round of reviews and your offer of provisional acceptance.
It has been fascinating seeing how the paper has been read by the reviewers. While our goal with the responses has always been to address in full the reviewers’ points, there has been a lot of learning about the process of authorship on the way…
Prompted by your reviewers, we have reworked the title and abstract to better reflect our view that our paper is as much about authorship and the authority of published conclusions as it is about the specifics of Study 329.
There is an important point related to blinding on which there appears to have been some confusion, hopefully now clarified. Dr Henry appears concerned that non-blind coding of Serious AEs might have affected the findings. As per our previous letter, we are of the view that the original allocation needs to be blind – not the coding. The SAEs were coded blind. There were 6 “extra” non-serious events described within the narratives that were left uncoded or were coded and never transcribed. It was not possible to be blind to these, because allocation status was written into the narratives.
At least one of the missing events was a failure to transcribe ‘Withdrawal Syndrome’. GSK had coded ‘Withdrawal Syndrome’ and ‘Migraine’ for one patient but only copied over ‘Migraine’ to Appendix D. Something similar may have happened for the other events – but this is less clear.
For those who think blind coding is important, we have had two MedDRA trained coders review a set of redacted SAEs. Both coders pulled out the additional 6 events that GSK had either left uncoded or not transcribed. We can supply the redacted SAEs to BMJ and will make them available online with the manuscript.
We hope that our responses, covered in more detail in our ‘response to reviewers’ document, will allow BMJ to proceed towards publication. If accepted, it would be great to get an indication from you for likely publication date.”
20 March 2015 — The 329 team submits its revised manuscript (R2) and responses to the second round of reviewer comments. Wherever possible, the team agreed with the reviewers and made the suggested changes and revisions.
22 March 2015 — BMJ acknowledges receipt of the revised submission. The Editorial Office sends a note to Professor Jureidini that it is “presently being given full consideration for publication in BMJ.”
23 March 2015 — Vivien Chen, BMJ Technical Editor (Research), writes to Jon Jureidini explaining that completing a checklist, including some amendments, will be necessary to “prepare the article” for the BMJ “post-acceptance process.”
26 March 2015 — Jon Jureidini replies to Vivien Chen, providing the figures requested, revised Tables 2 & 3, Appendix 1, and the revised manuscript (R3).
15 April 2015 — Having heard nothing since 26 March, Jon Jureidini emails Elizabeth Loder to ask about the delay.
15 April 2015 — Elizabeth Loder responds that the revision is to be discussed at a manuscript meeting a week hence (Thursday 23 April). Why BMJ has again sent the team’s latest responses to the reviewers is not clear.
1 May 2015 — Having heard nothing since 15 April, Jon Jureidini emails Elizabeth Loder to find out what is happening. He asks: “Is there some particular part of our paper that accounts for the delay? Something we could help with?”
4 May 2015 — Elizabeth Loder emails Jon Jureidini apologizing for the delay and explaining that dealing with the first “RIAT” paper is a learning experience. She explains that further changes are recommended and notes: “We hope very much that you will be willing to make the changes that we recommend.” She provides detailed instructions on how to submit the revised paper.
The main change requested is described as follows:
“The second point you make is about reporting and coding of AE. I am afraid we continue to find this less convincing, particularly the recoding of some of the AEs, especially given that you may be perceived to have a bias due to involvement in litigation. The sort of analysis you do was not specified in the original study and goes beyond what would have been done at the time of the trial. In fact, you use a classification scheme that was not in use when the study was done. It was also unclear why you do not do any statistical tests on the AEs. This is the least convincing part of the paper and no one felt it was fair. This really detracts from the main point of the paper which was the reanalysis of the efficacy findings, showing that the original claim of superiority rested on post-hoc outcomes. We continue to feel very uneasy about this because of the fact that you did not examine all case report forms. This is beyond your control, but it does reduce our confidence in the findings and is a major limitation. One editor commented that the emphasis on AEs seems like “the tail wagging the dog.”
We believe that you need to either present the AEs as they were originally coded and make fewer claims about them, or else ask completely independent investigators to code the AEs, report inter-rater agreement, and so on. It would only make sense to recode AEs, however, if you were also going to apply new methods to the efficacy data.”
8 May 2015 — Jon Jureidini writes to Elizabeth Loder noting that, because the paper had been provisionally accepted on 3 March, it is unusual that two months later new revisions are being requested, along with suggested revisions that the RIAT team believed had already been addressed.
Regarding her comment that “you may be perceived to have a bias due to involvement in litigation”, he points out that:
“Involvement in litigation does have a potential for bias, as we have acknowledged, but it does not disqualify us from analyzing data.”
Regarding her comment that the emphasis on adverse events is like “the tail wagging the dog”, he responds:
“Far from our emphasis on adverse events being like “the tail wagging the dog”, we think it is ground-breaking work that needs to be in the foreground, along with the fact that the paper is also a study in authorship and the effects of authorship on access to data. We are therefore unwilling to weaken our analysis.”
He directs her attention to the rationale on this point previously submitted to the reviewers. He agrees to make several other changes.
8 May 2015 — Elizabeth Loder acknowledges Jon Jureidini’s letter, noting: “Thanks for getting back to me so quickly. I’ll discuss this with the editorial team and let you know our thoughts.”
21 May 2015 — Elizabeth Loder writes to Jon Jureidini:
“…I’ve had a long discussion with Peter Doshi about how to move forward with this paper. It is our intention at this point to make a decision in-house (without additional outside peer review) if we can agree on a few points. We will send the final version of the paper for review by our legal team…”
She acknowledges the imputation analysis that she had insisted on previously, but continues to express concern regarding the coding of adverse events:
“…it would work well to present both the ADECS and MedDRA adverse event data in the body of the paper, and acknowledge in the discussion the different interpretations that result from using the two systems… , how was blinding to drug assignment achieved… Can you also provide references or other information that will convince us that [your] process of coding is reliable, unbiased and reproducible?… The example you give about coding the scratch and emotional lability as suicidal ideation made everyone who read it worry quite a bit about the level of subjectivity that might be involved here — and I hope you are not offended if I say we felt that left everyone open to criticism given that you have acted as expert witnesses in court cases that presumably focused on AEs and harms…”
She concludes by noting that she looks forward to a revised version of the paper “If these recommendations are acceptable to you”.
21 May 2015 — Jon Jureidini responds:
“our adverse event reporting is integral to our analysis of Study 329, and …we are committed to publishing the MedDRA coding and our analysis of it roughly as it currently stands…We are unwilling to use the original AE coding that scores suicidality as ’emotional lability’ or to hire an external rating team for AE coding, and we lack the resources to invest thousands of hours to examine all of the CRFs.
We understand your concerns about perceived COI in relation to David Healy’s potential expert witness status, but we remind you that this issue has been present and public throughout the process…and the adverse event source data will be available for anyone to examine.”
23 May 2015 — David Healy prepares a written analysis of the BMJ comments and process for the RIAT team. It includes the following comments:
“BMJ ask that we specify what was done to make the coding reliable, unbiased and reproducible… There is not a single other article about a clinical trial in the published literature that specifies these steps.”
“BMJ have a primary concern – shielding the journal from a legal action…Along with this concern, their repeated insistence on adhering to items of the protocol (while at the same time blithely introducing imputation which has no place in the protocol) demonstrate the real trap… Conceding on imputation [is] one of those Janus things – two faces – good to be seen to co-operate, bad to have given them the impression that if they push we will buckle. GSK [and others] believe that a rigid adherence to analysis per protocol is the answer to all of life’s problems… This is exactly what Andrew Witty’s and industry’s proposals for Data Access hope for… Data Access on the wrong terms would leave us all in a worse bind than now…. companies [will be able to] design protocols in such a manner that the evidence from the trial can never come to light as BMJ are demonstrating here.
The solution…is to make the data available…because a commitment to the data and its possible meanings is primary even though this means that bias is revealed and may be dissected in the process.”
10 June 2015 — The BMJ Editorial Office sends Jon Jureidini an email acknowledging receipt of the third version of the paper (R3), noting that it “has been successfully submitted online and is presently being given full consideration for publication in BMJ.”
15 June 2015 — Elizabeth Loder writes to Jon Jureidini with a statement of provisional acceptance pending further revisions:
“we would like to publish it in the BMJ as long as you are willing and able to revise it as we suggest in the report below from the manuscript meeting: we are provisionally offering acceptance but will make the final decision when we see the revised version.”
She advises that the provisional acceptance is conditional upon BMJ receiving the revised version within one month.
16 June 2015 — The BMJ Editorial Office sends Jon Jureidini an email acknowledging receipt of the fourth version of the paper (R4).
28 June 2015 — Elizabeth Loder writes to Jon Jureidini advising that the paper is under legal review and that this is likely to result in a delay.
6 July 2015 — Jon Jureidini writes to BMJ Editor Fiona Godlee saying:
“I understand that you have now become directly involved in editorial management of our paper, and given how drawn out and difficult the process has been, I am taking the liberty of writing directly to you with a copy to Dr Elizabeth Loder…
We understand…that you want to check some of our data transcription and analysis prior to publication… It seems to us that each time we satisfy a requirement, a new one emerges, and we have lost confidence that this “last step” will be the end of the matter.
In fact, it is not clear what BMJ hopes to achieve by further checking our work…”
6 July 2015 — Fiona Godlee replies the same day by email to Jon Jureidini:
“it is our view that publication of the paper still carries risks… we have narrowed them down to one aspect of the study: the categorisation of the adverse events, and more specifically the self-harm and suicidal ideation. It is on this categorisation that we want to have independent checks…
We believe that this checking could be done quite quickly…we remain committed to your paper and to the spirit of the RIAT enterprise…”
7 July 2015 — Jon Jureidini emails Fiona Godlee:
“Thank you for replying so promptly to yesterday’s letter. However your response did not address some of our major concerns. How do you intend to resolve the inevitable differences of judgment that will arise in the checking process? What will be the mechanism for adjudicating whether any difference is problematic?… If you do have a mechanism… we need from you a clear time line to implement that mechanism, not just reassurance that it ‘could be done quite quickly’.
Finally, how can we be assured that if this issue is resolved, BMJ will not find another issue that delays a clear final decision about acceptance or rejection?”
20 July 2015 — Elizabeth Loder emails Jon Jureidini:
“I am writing to let you know that we have identified a person to independently evaluate some of the outcomes. I am hopeful we will be able to issue a final decision soon.”
29 July 2015 — Elizabeth Loder sends a letter to Jon Jureidini requesting a number of changes suggested by the independent reviewer. Once again, she notes that the RIAT team has one month to make the revisions and resubmit the paper.
29 July 2015 — Jon Jureidini provides the requested changes with a summary, submits a fifth version online (R5), and notes that the two main requested changes had already been addressed in version 4 (R4).
Elizabeth Loder had suggested:
“If you agree with the points our reviewer picked up on, we would like you to explain her assistance in the methods section of the paper.”
Jon Jureidini responded that:
“We do not think the checker’s contribution was as great as that of some other reviewers, for example Dr Doshi. We will be happy for all reviewers to be acknowledged, and we propose that you make an editorial note about the extraordinary number and involvement of reviewers in this paper.”
1 August 2015 — Elizabeth Loder emails Jon Jureidini adding one additional matter that she forgot to include in the changes suggested on 29 July. This was:
“On page 142 of 149 you say your analyses support the idea of drug dependence and withdrawal effects. I don’t see convincing evidence of that in the paper. In any case, that also goes beyond the objectives of the original study or this reanalysis and should be removed.”
2 August 2015 — Jon Jureidini submits a sixth version of the paper (R6) with amendments to two figures and a table heading.
3 August 2015 — The BMJ Editorial Office confirms receipt of R6.
3 August 2015 — Elizabeth Loder writes to Jon Jureidini that “Restoring Study 329: efficacy and harms of paroxetine and imipramine in treatment of major depression in adolescence” has been accepted for publication. The letter explains the process in some detail.
3 August 2015 — Elizabeth Loder emails Jon Jureidini requesting the upload of some documents.
12 August 2015 — Jon Jureidini emails Vivien Chen with a concern that if the article is published online in the first week of September, neither he nor Joanna Le Noury will have time to approve the proofs.
12 August 2015 — Vivien Chen responds that the RIAT article has been moved to the 12 September issue.
28 August 2015 — Elizabeth Loder emails Jon Jureidini to suggest that the publication date be moved to 26 September, to avoid coinciding with important Jewish holidays.
Jon Jureidini vetoes this suggestion.
September 2015 — The RIAT team submits the PICO for the final article (abstract and summary AE table below):
Study question: What does reanalysis of SmithKline Beecham’s Study 329 (a multicentre double blind, placebo controlled study of paroxetine and imipramine in adolescents with unipolar major depression) show about the need for access to clinical trial data sources?
Summary answer: Access to the full individual patient level dataset, backed up by the case report forms (CRFs) and the a priori protocol, is required to judge the validity of published reports of clinical trials. Reanalysis based on these documents showed that, contrary to the original trial report, efficacy was not established for either paroxetine or imipramine, which both increased harms.
What is known and what this paper adds: In the absence of access to primary data, misleading conclusions in publications of trials can seem definitive. This paper makes it clear that it is not possible to adequately scrutinise trial outcomes simply on the basis of what is reported in the body of clinical study reports (CSRs), which can contain important errors.
This has important implications for clinical practice, research, regulation of trials, and licensing of drugs.
Table 1 | Adverse events (AEs) for paroxetine and placebo groups in Study 329 according to the clinical study report (CSR), the paper by Keller and colleagues (ADECS coded), and the RIAT reanalysis (MedDRA coded)
| | Paroxetine (n=93) | | | Placebo (n=87) | | | AE ratio |
| | CSR | ADECS | MedDRA | CSR | ADECS | MedDRA | |
| AEs in taper phase* | 45 | — | 47 | 10 | — | 10 | 2.2 |
| Severe AEs in taper phase* | 13 | — | 13 | 1 | — | 1 | 6.2 |
| Suicidal and self injurious patients | | | | | | | |
*Paroxetine n=19, placebo n=9
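The AE ratios in Table 1 are consistent with comparing per-patient event rates among taper-phase patients only (paroxetine n=19, placebo n=9, per the footnote), using the MedDRA-coded counts. This is a hedged reconstruction of the arithmetic, not a definition stated in the paper:

```python
# Sketch of one plausible reading of the "AE ratio" column: the ratio of
# per-patient event rates in the taper phase, paroxetine vs placebo.
# The denominators (19 and 9) come from the table footnote.

def ae_ratio(events_drug, n_drug, events_placebo, n_placebo):
    """Ratio of per-patient adverse-event rates, drug vs placebo."""
    return (events_drug / n_drug) / (events_placebo / n_placebo)

print(round(ae_ratio(47, 19, 10, 9), 1))  # 2.2  (all taper-phase AEs, MedDRA counts)
print(round(ae_ratio(13, 19, 1, 9), 1))   # 6.2  (severe taper-phase AEs)
```

Under this reading both rows reproduce the published ratios, but the definition should be checked against the full paper before relying on it.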
16 September 2015 — Just over a year has passed since the original submission was sent to BMJ, far longer than the two-to-three-week first decision and eight-to-10-week acceptance-to-publication times cited in BMJ’s publishing mandate. The paper is published by the BMJ at 11:30 pm London time.