\subsection{Article Selection and Data Extraction}

We used Covidence (Covidence.org) for the initial screening process, in which articles were screened by title and abstract. The focus of this study was the methodological quality of systematic reviews and meta-analyses published in oncology journals. Full-text versions of articles were imported via EndNote and stored in PDF format, retrieved from the Internet and through interlibrary loan. A coding key for quality-measure abstraction was developed and pilot tested on a convenience sample of fifty articles (Figure \ref{fig:2}). Articles were stored on Google Drive, along with the coding sheets and documents related to our study, such as team assignments, article coding assignments, and abstraction manuals.

A training session was held for individual coders in which three articles from various journal types were coded as a group. Each assigned coding pair was then instructed to code three articles independently, and the data from that convenience sample were analyzed for inter-rater agreement using the kappa statistic. During the training session, the team also reached consensus and revised the abstraction manuals to resolve disagreements between the two coders. Each coder was assigned a range of articles to code. After the first round of coding, coders within each pair switched and validated the first-round codes. After this second round of coding, an additional final code was created in which remaining disagreements were resolved by consensus.

The systematic reviews and meta-analyses chosen for our study were required to be of similar methodological standing and to meet the strict criteria of Cochrane systematic reviews.
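For reference, the inter-rater agreement mentioned above can be quantified with Cohen's kappa for two raters (the standard two-coder formulation; the original text specifies only ``the kappa statistic''):
\begin{equation}
\kappa = \frac{p_o - p_e}{1 - p_e},
\end{equation}
where $p_o$ is the observed proportion of agreement between the two coders and $p_e$ is the proportion of agreement expected by chance. A $\kappa$ of 1 indicates perfect agreement and a $\kappa$ of 0 indicates agreement no better than chance.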
Our exclusion criteria included meta-analyses that were not the primary aim of the study, narrative reviews rather than systematic reviews, reviews of reviews, case reports, studies that collected primary data, case-control studies, and individual patient data meta-analyses.

Data extraction for quality assessment captured whether quality or risk of bias was assessed, the tools used to measure risk of bias or quality, whether authors used their own methods to assess quality, whether quality or risk of bias was graded, and which scale was used for grading. We also recorded whether high risk of bias or low-quality studies were found, whether such studies were included, and how risk of bias or quality was presented in the article. Where high risk of bias or low-quality studies were found and included, we assessed whether subgroup analysis, meta-regression, or sensitivity analysis was used to address quality of reporting.

\subsection{Data Analysis}

We performed a descriptive analysis of the frequency and percentage of quality assessment tool use. We tabulated the frequency of quality assessment tools used, the types of tools, the types of scales used, how the quality information was presented, and the methods used to address risk of bias or low quality. We also examined how often high risk of bias or low-quality studies were included in the data set of articles and, when such studies were included, whether they were addressed using subgroup analysis, meta-regression, or sensitivity analysis (Figure 2). Statistical analyses were performed with Stata version 13.1 software (StataCorp, College Station, Texas, USA).

\section{Results}