Using the h5-index from Google Scholar Metrics, we selected the six oncology journals with the highest index scores. We searched PubMed using the following search string: ((((((“Journal of clinical oncology : official journal of the American Society of Clinical Oncology”[Journal] OR “Nature reviews. Cancer”[Journal]) OR “Cancer research”[Journal]) OR “The Lancet. Oncology”[Journal]) OR “Clinical cancer research : an official journal of the American Association for Cancer Research”[Journal]) OR “Cancer cell”[Journal]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND (((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) OR systematic review[Title/Abstract]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND ((“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]). This search strategy was adapted from a previously established method that is sensitive for identifying systematic reviews and meta-analyses (Montori 2005). Searches were conducted on May 18 and May 26, 2015 (an illustrative sketch for re-running this query programmatically is provided below).

\subsection{\textbf{Screening and Data Extraction}}

We used Covidence (covidence.org) to screen articles initially by title and abstract. To qualify as a systematic review, an article had to summarize evidence across multiple studies and report its search strategy, such as search terms, databases, or inclusion/exclusion criteria \cite{babineau2014product}. Meta-analyses were defined as quantitative syntheses of results across multiple studies (Onishi 2014). Two screeners independently reviewed the title and abstract of each citation and decided on its suitability for inclusion according to these definitions. The screeners then met to revisit citations with conflicting decisions and reach a final consensus.

Following screening, full-text versions of the included articles were obtained via EndNote. To standardize the coding process, an abstraction manual was developed and pilot tested, and a training session was conducted to familiarize coders with abstracting the data elements. A subset of studies was jointly coded. After this training exercise, each coder was provided with three new articles to code independently. Each coder was then assigned an equal subset of articles for data abstraction. We coded the following elements: a) whether methodological quality or risk of bias was assessed and, if so, the tool used; b) whether the authors developed a customized measure; c) whether methodological quality was scored and, if so, the scale used; d) whether the authors identified high risk of bias or low-quality studies; e) whether high risk of bias or low-quality studies were included in the estimation of summary effects; f) how risk of bias or quality appraisal information was presented in the article; and g) whether follow-up analyses were conducted to explore the effects of bias on study outcomes (such as subgroup analysis, sensitivity analysis, or meta-regression).
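For illustration only, the PubMed query reported above could be re-run programmatically. The sketch below is a minimal example, not part of the study protocol, and it rests on several assumptions not stated in the text: it uses Biopython's \texttt{Entrez} wrapper for the NCBI E-utilities, a placeholder contact e-mail address, and an abbreviated version of the search string (two of the six journals).

\begin{verbatim}
# Illustrative sketch only; not part of the study protocol.
# Assumes Biopython is installed (pip install biopython).
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a contact

# Abbreviated form of the search string reported above: the full string
# lists all six journals and repeats the date and species limits.
query = (
    '("The Lancet. Oncology"[Journal] OR "Cancer cell"[Journal]) '
    'AND (meta-analysis[Title/Abstract] OR meta-analysis[Publication Type] '
    'OR systematic review[Title/Abstract]) '
    'AND ("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) '
    'AND "humans"[MeSH Terms]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=10000)
record = Entrez.read(handle)
handle.close()

print(record["Count"])        # number of matching citations
print(record["IdList"][:5])   # first few PMIDs for export and screening
\end{verbatim}

\subsection{\textbf{Data Analysis}}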