To standardize the coding process, an abstraction manual was developed and pilot tested. Once the manual was finalized, a training session was conducted to familiarize coders with abstracting the data elements, and a subset of studies was jointly coded as a group. After the training exercise, each coder was given three new articles to code independently. Each coder was then assigned an equal subset of articles for data abstraction. We coded the following elements: a) whether methodological quality or risk of bias was assessed and, if so, the tool used; b) whether the authors developed a customized measure; c) whether methodological quality was scored and, if so, the scale used; d) whether the authors found high risk of bias or low-quality studies; e) whether high risk of bias or low-quality studies were included in the estimation of summary effects; f) how risk of bias or quality appraisal information was presented in the article; and g) whether follow-up analyses (such as subgroup analysis, sensitivity analysis, or meta-regression) were conducted to explore the effects of bias on study outcomes.

\subsection{\textbf{Data Analysis}}

We performed a descriptive analysis of the frequency and percentage of use of quality assessment tools. We tabulated the frequency of the quality assessment tools used, the types of tools, the types of scales used, how the quality information was presented, and the types of methods used to address risk of bias or low quality. In assessing the types of tools used to measure quality, we created additional categories to account for variations in approach. We coded an appraisal as ``author's custom measure'' if the authors described their own approach to evaluating study quality. When the authors used a quality assessment method adapted from another study, we coded it as ``adapted criteria''. Some studies indicated (either in the abstract or in the methods section) that methodological quality was assessed but provided no specific detail beyond this generic statement; these were coded as ``unspecified''. Statistical analyses were performed with Stata version 13.1 (StataCorp, College Station, Texas, USA).
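As a minimal illustration of the tabulations described above, the sketch below shows how such one-way frequency tables can be produced in Stata; the variable names (\texttt{tool}, \texttt{scale}, \texttt{presentation}, \texttt{bias\_method}) are hypothetical placeholders for the coded elements, not the actual variables in the study dataset.

\begin{verbatim}
* Minimal sketch of the descriptive tabulations (hypothetical
* variable names). tabulate reports the frequency and percent of
* each category; the sort option orders categories by descending
* frequency.
tabulate tool, sort          // quality assessment tool used
tabulate scale, sort         // type of scale used for scoring
tabulate presentation, sort  // how quality information was presented
tabulate bias_method, sort   // method used to address risk of bias
\end{verbatim}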
\section{Results}

The PubMed search returned 337 articles from four journals. After screening of titles and abstracts, 79 articles were excluded because they were not SRs or meta-analyses. An additional 76 articles were excluded after full-text screening. Two articles could not be retrieved despite multiple attempts. A total of 182 articles were included in this study (Figure 1).