Many sources of studies (throughout the world) should be explored:
- Bibliographic databases (Medline, Embase, etc.)
- Publicly available clinical trial registries, such as clinicaltrials.gov
- Databanks of pharmaceutical firms (e.g. clinical trial results)
- Conference proceedings
- Personal contacts
- Unpublished reports
As discussed earlier in this course, beware of publication bias. Studies in which the intervention is not found to be effective, or not as effective as other treatments, may never be submitted for publication (this is referred to as the "file-drawer problem"), while studies with 'significant results' are more likely to make it into a journal. Recent initiatives in online journals, such as PLoS Medicine, and databases of trial results may encourage increased publication of results from scientifically valid studies, regardless of outcome. Even so, realize that an overview based only on published studies can be biased toward an overall positive effect.
Construction of a "funnel plot" is one method for evaluating whether or not publication bias has occurred.
Suppose there are some relevant studies with small sample sizes. If nearly all of them report a positive finding (p < 0.05), this may itself be evidence of publication bias, for the following reason: small studies have low power, so it is harder to obtain significant results with small sample sizes. Thus, there should be some negative results (p > 0.05) among the small studies.
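The power argument above can be made concrete with a small calculation. This sketch computes the approximate power of a two-sided z-test (known variance, upper tail only — both simplifying assumptions, and the effect size of 0.2 is purely illustrative) for a range of sample sizes:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(effect_size, n, z_crit=1.96):
    # Approximate power of a two-sided z-test at alpha = 0.05 for a true
    # standardized effect `effect_size` and sample size n. We count only
    # upper-tail rejections, a reasonable simplification for a positive effect.
    return 1.0 - normal_cdf(z_crit - effect_size * math.sqrt(n))

for n in (20, 50, 200, 800):
    # Power climbs from roughly 0.14 at n = 20 to near 1.0 at n = 800.
    print(n, round(approx_power(0.2, n), 2))
```

With power around 0.14 for the small studies, only about one small study in seven should come out significant; if almost all of them do, something other than chance (such as selective publication) is at work.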
A "funnel plot" can be constructed to investigate this issue: plot sample size (vertical axis) against the p-value or the magnitude of the effect (horizontal axis).
If the p-values for some of the small studies are relatively large, the scatterplot has a "funnel" shape, which is what we expect in the absence of publication bias.
If, instead, none of the p-values for the small studies are large, the scatterplot has a "band" shape, which raises the suspicion that some degree of publication bias exists.
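The two scenarios above can be simulated. This is a minimal pure-Python sketch (no plotting library); the number of studies, the sample-size range, the true effect of 0.2, and the all-or-nothing rule that only significant results get published are all illustrative assumptions:

```python
import math
import random

random.seed(42)  # reproducible simulated "literature"

def simulate_study(n, true_effect=0.2):
    # One hypothetical study: n observations drawn from N(true_effect, 1).
    # Returns the estimated effect and its standard error.
    xs = [random.gauss(true_effect, 1.0) for _ in range(n)]
    est = sum(xs) / n
    se = 1.0 / math.sqrt(n)  # sigma = 1 assumed known, for simplicity
    return est, se

# A literature of 300 studies with a wide range of sample sizes.
sizes = [random.randrange(10, 400) for _ in range(300)]
all_studies = [simulate_study(n) for n in sizes]

# Extreme selective publication: only "significant" results (z > 1.96) appear.
published = [(est, se) for est, se in all_studies if est / se > 1.96]

# Funnel-plot coordinates: effect estimate (horizontal axis) versus
# precision 1/se, a stand-in for sample size (vertical axis).
funnel_points = [(est, 1.0 / se) for est, se in published]

# The published small (low-precision) studies all cleared the significance
# bar, so their estimates are inflated and the funnel's lower-left is empty.
mean_all = sum(e for e, _ in all_studies) / len(all_studies)
mean_pub = sum(e for e, _ in published) / len(published)
print(f"published {len(published)} of {len(all_studies)} studies")
print(f"mean effect, all studies:    {mean_all:.2f}")
print(f"mean effect, published only: {mean_pub:.2f}")
```

Plotting `funnel_points` for the full literature gives the symmetric "funnel"; plotting them for `published` alone gives the truncated "band", and the published-only mean effect overstates the true effect, which is exactly the overall positive bias warned about above.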