\subsection{Search Strategy}

We conducted a PubMed search using the following search string: (((((("Journal of clinical oncology : official journal of the American Society of Clinical Oncology"[Journal] OR "Nature reviews. Cancer"[Journal]) OR "Cancer research"[Journal]) OR "The Lancet. Oncology"[Journal]) OR "Clinical cancer research : an official journal of the American Association for Cancer Research"[Journal]) OR "Cancer cell"[Journal]) AND ("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]) AND (((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) OR systematic review[Title/Abstract]) AND ("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]) AND (("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]). Our method for identifying meta-analyses and systematic reviews was based on the Montori method, which has been used with other databases \cite{Montori_2005}. Three journals accounted for most of the retrieved oncology articles: \textit{Clinical Cancer Research}, \textit{Journal of Clinical Oncology}, and \textit{The Lancet Oncology}. We excluded articles that were individual patient data meta-analyses, narrative reviews, editorials, scoping reviews, case reports, data collection studies, or meta-analyses in which the analysis was not the primary focus of the study (Figure~\ref{fig:FIGURE_1}). The searches, run on 18 May 2015 and 26 May 2015, were limited to English-language articles.

\subsection{Article Selection and Data Extraction}

We used Covidence (covidence.org) for the initial screening process, in which articles were screened on the basis of title and abstract. The focus of this study was the methodological quality of systematic reviews and meta-analyses published in oncology journals. Full-text versions of articles were obtained online or through interlibrary loan, imported via EndNote, and stored in PDF format. A coding key for abstracting quality measures was developed and pilot tested on a convenience sample of fifty articles (Figure~\ref{fig:FIGURE_2}). Articles were stored on Google Drive, along with the coding sheets and documents related to the study, such as team assignments, article coding assignments, and abstraction manuals.

Individual coders completed a training session in which three articles from various journal types were coded as a group. Each coding pair was then instructed to code three articles independently, and inter-rater agreement on this convenience sample was analyzed using the kappa statistic (defined below). During the training session, the team also reached consensus on, and modified, the abstraction manuals to settle disagreements between the two coders in a pair. Each coder was assigned a distinct range of articles to code. After the first round of coding, the coders within each pair switched articles and validated the first-round codes. After this second round of coding, a final code was created in which remaining disagreements were resolved by consensus.

The systematic reviews and meta-analyses selected for our study were required to be of similar methodological standing and to meet the strict criteria used in Cochrane systematic reviews. We excluded meta-analyses that were not the primary aim of the study, narrative reviews rather than systematic reviews, reviews of reviews, case reports, data collection studies, case-control studies, and individual patient data meta-analyses.
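For reference, the kappa statistic used above for inter-rater agreement, here taken in its standard unweighted two-rater (Cohen) form, is
\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]
where $p_o$ is the observed proportion of agreement between the two coders and $p_e$ is the proportion of agreement expected by chance. A value of $\kappa = 1$ indicates perfect agreement, while $\kappa = 0$ indicates agreement no better than chance.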
Data extraction for quality assessment captured: whether quality or risk of bias was assessed; which tools were used to measure risk of bias or quality; whether the authors used their own individual methods to assess quality; whether quality or risk of bias was graded; and what scale was used to grade it. We also recorded whether studies at high risk of bias or of low quality were found, whether such studies were included in the reviews, and how risk of bias or quality was presented in the article. Where high risk of bias or low quality studies were found and included, we assessed whether subgroup analysis, meta-regression, or sensitivity analyses were used to address study quality.
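To illustrate the kind of sensitivity analysis referred to above, the following is a minimal sketch in Python of re-pooling a fixed-effect inverse-variance meta-analysis after excluding studies judged to be at high risk of bias. The effect sizes, standard errors, and risk-of-bias flags are hypothetical and are not drawn from our data; the source does not specify how the included reviews implemented their sensitivity analyses.

\begin{verbatim}
# Sketch: sensitivity analysis that re-pools a meta-analysis after
# excluding studies judged to be at high risk of bias.
# All numbers below are hypothetical illustrations.
import math

# (effect size, standard error, high risk of bias?)
studies = [
    (0.42, 0.10, False),
    (0.35, 0.15, False),
    (0.80, 0.20, True),   # flagged as high risk of bias
    (0.30, 0.12, False),
]

def pool(studies):
    """Fixed-effect inverse-variance pooled estimate and its SE."""
    weights = [1.0 / se ** 2 for _, se, _ in studies]
    pooled = sum(w * y for w, (y, _, _) in zip(weights, studies)) / sum(weights)
    return pooled, 1.0 / math.sqrt(sum(weights))

full = pool(studies)
restricted = pool([s for s in studies if not s[2]])
print("All studies:      estimate=%.3f, SE=%.3f" % full)
print("Low risk of bias: estimate=%.3f, SE=%.3f" % restricted)
\end{verbatim}

In a sketch of this kind, a material shift in the pooled estimate after exclusion would suggest that the low quality studies are influencing the result.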