Materials and Methods

Search Criteria

Using the h5-index from Google Scholar Metrics, we selected the six highest-ranking oncology journals based on index scores. We searched PubMed using the following search string: ((((((“Journal of clinical oncology : official journal of the American Society of Clinical Oncology”[Journal] OR “Nature reviews. Cancer”[Journal]) OR “Cancer research”[Journal]) OR “The Lancet. Oncology”[Journal]) OR “Clinical cancer research : an official journal of the American Association for Cancer Research”[Journal]) OR “Cancer cell”[Journal]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND (((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) OR systematic review[Title/Abstract]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND ((“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]). This search strategy was adapted from a previously established method with high sensitivity for identifying systematic reviews and meta-analyses (Montori 2005). Searches were conducted on May 18 and May 26, 2015.
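
Queries of this kind can be run programmatically against the NCBI E-utilities. The following is a minimal sketch, not the tooling used in this study, assuming the Biopython package is installed; the email address is a placeholder required by NCBI, and the search string is abbreviated to two journals for readability:

\begin{verbatim}
from Bio import Entrez

# NCBI requires a contact email for E-utilities requests (placeholder).
Entrez.email = "your.name@example.org"

# Abbreviated form of the full search string given above.
query = (
    '("The Lancet. Oncology"[Journal] OR "Cancer cell"[Journal])'
    ' AND ("2007/01/01"[PDAT] : "2015/12/31"[PDAT])'
    ' AND "humans"[MeSH Terms]'
    ' AND (meta-analysis[Title/Abstract]'
    ' OR meta-analysis[Publication Type]'
    ' OR systematic review[Title/Abstract])'
)

# esearch returns matching PubMed IDs; retmax caps the number returned.
handle = Entrez.esearch(db="pubmed", term=query, retmax=1000)
record = Entrez.read(handle)
handle.close()

print(record["Count"])        # total number of matching records
print(record["IdList"][:5])   # first five PubMed IDs
\end{verbatim}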

Review selection and data extraction

We used Covidence (covidence.org) to initially screen articles based on title and abstract. To qualify as a systematic review, a study had to summarize evidence across multiple studies and provide information on the search strategy, such as search terms, databases, or inclusion/exclusion criteria. Meta-analyses were classified as quantitative syntheses of results across multiple studies \cite{25194857}. Two screeners independently reviewed the title and abstract of each citation and decided whether it was appropriate for inclusion. The screeners then met to revisit the citations in conflict and reach a final consensus. Following title and abstract screening, full-text versions of included articles were obtained via EndNote. Full-text screening was then completed, and conflicts were resolved by group consensus, excluding additional articles that did not meet the inclusion criteria.
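
To illustrate the dual-screening step, the following is a minimal sketch (with hypothetical decision data, not this study's records) of how conflicting title/abstract decisions between two independent screeners can be flagged for the consensus meeting:

\begin{verbatim}
# Hypothetical include/exclude decisions keyed by PubMed ID.
screener_a = {"101": "include", "102": "exclude", "103": "include"}
screener_b = {"101": "include", "102": "include", "103": "include"}

# Citations on which the two screeners disagree go to consensus discussion.
conflicts = [pmid for pmid in screener_a
             if screener_a[pmid] != screener_b[pmid]]

print(conflicts)  # ["102"] in this toy example
\end{verbatim}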

Coding Abstracts

To ensure the accuracy of the coding process, an abstraction manual was developed and piloted prior to training coders. A training session was conducted to familiarize coders with the process, and subsets of studies from the screening process were jointly coded as a group. After training, each coder was given three new articles to code independently. These data were analyzed for inter-rater agreement by calculating Cohen's kappa. As inter-rater agreement was acceptable ($\kappa = 0.65$; agreement = 75\%), each coder was assigned an equal number of articles for data abstraction. Elements from the abstraction manual are presented in Table 1. After the initial coding process, validation checks were conducted such that each coded element was verified by a second coder, and a meeting was held to discuss disagreements and settle them by consensus. Figure 1 details the study selection process. Data from the final sample of 182 articles were analyzed using Stata 13.1 and are publicly available on Figshare ().
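
For reference, Cohen's kappa corrects raw percent agreement for the agreement expected by chance:
\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]
where $p_o$ is the observed proportion of agreement and $p_e$ is the proportion of agreement expected by chance given each coder's marginal rating frequencies. As a consistency check on the values reported above, $p_o = 0.75$ and $\kappa = 0.65$ together imply $p_e \approx 0.29$, since $(0.75 - 0.29)/(1 - 0.29) \approx 0.65$.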