Figure 1 demonstrates the cumulative impact of reporting and citation biases. Of 105 antidepressant trials, 53 (50%) were considered positive by the FDA and 52 (50%) were considered negative or questionable (Fig. 1a). While all but one of the positive trials (98%) were published, only 25 (48%) of the negative trials were published. Hence, 77 trials were published, of which 25 (32%) were negative (Fig. 1b).

Ten negative trials, however, became 'positive' in the published literature, through the omission of unfavorable outcomes or the switching of primary and secondary outcomes (Fig. 1c). Without access to the FDA reviews, it would not have been possible to conclude that these trials, when analyzed according to protocol, were not positive. Among the remaining 15 (19%) negative trials, five were published with spin in the abstract (i.e. concluding that the treatment was effective). For instance, one article reported non-significant results for the primary outcome (p = 0.10), yet concluded that the trial 'demonstrates an antidepressant effect for fluoxetine that is significantly more marked than the effect produced by placebo' (Rickels et al., 1986). Five additional articles contained mild spin (e.g. suggesting the treatment was at least numerically better than placebo). One article lacked an abstract, but its discussion section concluded that there was a 'trend for efficacy'. Hence, only four (5%) of the 77 published trials unambiguously reported that the treatment was not more effective than placebo in that particular trial (Fig. 1d).

Compounding the problem, positive trials were cited nearly three times as frequently as negative trials (92 v. 32 citations in Web of Science, January 2016, p < 0.001; see online Supplementary material for further details) (Fig. 1e). Among negative trials, those with (mild) spin in the abstract received an average of 36 citations, while those with a clearly negative abstract received 25 citations.
While this might suggest a synergistic effect between spin and citation biases, where negatively presented negative studies receive especially few citations (de Vries et al., 2016), this difference was not statistically significant (p = 0.50), likely due to the small sample size. Altogether, these results show that the effects of different biases accumulate to hide non-significant results from view.
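The cumulative funnel described above can be recomputed directly from the counts reported in the text. The following is an illustrative sketch (all counts are taken from the passage; the variable names and structure are ours, not the authors'):

```python
# Recompute the reporting-bias "funnel" from the counts given in the text.
total_trials = 105
positive = 53                       # FDA-positive trials
negative = 52                       # FDA-negative or questionable trials

published_positive = positive - 1   # all but one positive trial was published
published_negative = 25             # 48% of negative trials were published
published = published_positive + published_negative           # 77 published

spun_to_positive = 10               # negative trials reported as 'positive'
remaining_negative = published_negative - spun_to_positive    # 15 remain
spin_in_abstract = 5                # concluded the treatment was effective
mild_spin = 5                       # e.g. 'numerically better than placebo'
no_abstract = 1                     # 'trend for efficacy' in discussion
clearly_negative = (remaining_negative - spin_in_abstract
                    - mild_spin - no_abstract)                # 4 trials

print(f"{published} published, "
      f"{round(100 * published_negative / published)}% negative")
print(f"{clearly_negative} unambiguously negative reports "
      f"({round(100 * clearly_negative / published)}% of published)")
```

Running this reproduces the figures in the text: 77 published trials of which 32% were negative, and only 4 (5%) unambiguously negative reports.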