
The use of systematic reviews and meta-analyses has become increasingly important in evidence-based medicine as clinicians seek reliable information on treatments and care guidelines in their medical practice \cite{14764293}. Since systematic reviews synthesize evidence from multiple studies, clinicians are able to better understand the individual trials comprising the review as well as the efficacy of the therapy summarized across all available, relevant evidence. One essential feature that lends confidence to the findings of a review is an appraisal of the methodology of the studies comprising it. In cases where systematic reviewers have concluded that primary studies are of high methodological quality or have low potential for biased outcomes, clinicians can have more confidence in the study findings. For example, Yang et al. evaluated the toxicity and efficacy of chemotherapy plus cetuximab relative to chemotherapy alone in patients with advanced non-small cell lung cancer. The systematic review comprised four trials. A risk of bias assessment of these trials was conducted, and the authors concluded that risk of bias was low for overall survival and one-year survival rates but high for all other outcomes due to a lack of blinding. Hence, the reviewers concluded that chemotherapy plus cetuximab was better than chemotherapy alone for improving overall survival; the risk of bias assessment played an important role in the interpretation of the summary effect.

Many scales have been designed in response to concerns regarding methodological quality among primary studies; however, recent evidence indicates that scales may not be the best way to appraise studies.
Rather, certain design features should be reviewed to provide a clearer picture of bias in trials \cite{lohr1999assessing}. The \textit{Cochrane Handbook for Systematic Reviews of Interventions} is continually updated to improve the assessment of methodological quality in clinical studies and advocates appraising the risk of bias of all primary studies included for review \cite{higgins2011cochrane}. Major reporting guidelines for systematic reviews have been published and suggest some form of quality appraisal. The first guideline, published in 1996 and referred to as the Quality of Reporting of Meta-analyses (QUOROM), advocated the use of methodological quality measures as a tool for appraisal. More recently, however, QUOROM's successor, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), called for evaluating the risk of bias of primary studies. This recommendation, consistent with the Cochrane Collaboration, accounts for criticisms of the quality scales, including that certain components of these tools often have no known role in contributing to the validity of findings, such as whether investigators reported oversight by an institutional review board. Inclusion of such items can artificially inflate the overall quality score of a particular study. Despite clear progress in this area, there are still significant differences in quality assessment practices between systematic reviews \cite{higgins2008cochrane}. In fact, little is known about the application of methodological quality or risk of bias measures in clinical specialties like oncology. To address this issue, we conducted a study of the oncology literature to assess how often quality and risk of bias assessments were used in oncology systematic reviews, determine the prevalence of the approaches reported by the authors, and examine the ways that such evaluations are incorporated into the reviews.
\section{Methods}

\subsection{Search criteria and eligibility}

Using the h5-index from Google Scholar Metrics, we selected the six oncology journals with the highest index scores. We searched PubMed using the following search string: ((((((“Journal of clinical oncology : official journal of the American Society of Clinical Oncology”[Journal] OR “Nature reviews. Cancer”[Journal]) OR “Cancer research”[Journal]) OR “The Lancet. Oncology”[Journal]) OR “Clinical cancer research : an official journal of the American Association for Cancer Research”[Journal]) OR “Cancer cell”[Journal]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND (((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) OR systematic review[Title/Abstract]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND ((“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]). This search strategy was adapted from a previously established method that is sensitive to identifying systematic reviews and meta-analyses (Montori 2005). Searches were conducted on May 18 and May 26, 2015.

\subsection{Screening and data extraction}

We used Covidence (covidence.org) to initially screen articles based on title and abstract. To qualify as a systematic review, articles had to summarize evidence across multiple studies and provide information on the search strategy, such as search terms, databases, or inclusion/exclusion criteria. Meta-analyses were classified as quantitative syntheses of results across multiple studies (Onishi 2014). Two screeners independently reviewed the titles and abstracts of each citation and made a decision regarding its suitability for inclusion based on the definitions described above. Next, the screeners met to revisit the citations in conflict and arrive at a final consensus.
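The structure of the Boolean query above (a nested OR block of journal names, intersected with date, species, and study-type filters) can be sketched programmatically. The snippet below is purely illustrative, assembled by us to show how such a string is built; the helper names are our own, and no code was published with the original search.

```python
# Sketch: assembling a PubMed Boolean query of the shape used in the text.
# Journal names and filters mirror the published search string; the helper
# functions (or_group, build_query) are illustrative, not from the study.

JOURNALS = [
    "Journal of clinical oncology : official journal of the "
    "American Society of Clinical Oncology",
    "Nature reviews. Cancer",
    "Cancer research",
    "The Lancet. Oncology",
    "Clinical cancer research : an official journal of the "
    "American Association for Cancer Research",
    "Cancer cell",
]

DATE_FILTER = '("2007/01/01"[PDAT] : "2015/12/31"[PDAT])'
HUMANS_FILTER = '"humans"[MeSH Terms]'


def or_group(clauses):
    """Combine clauses with OR, left-nesting parentheses as in the source query."""
    query = clauses[0]
    for clause in clauses[1:]:
        query = f"({query} OR {clause})"
    return query


def build_query():
    # OR block restricting to the six selected oncology journals.
    journal_block = or_group([f'"{j}"[Journal]' for j in JOURNALS])
    # OR block identifying systematic reviews and meta-analyses.
    review_block = or_group([
        "meta-analysis[Title/Abstract]",
        "meta-analysis[Publication Type]",
        "systematic review[Title/Abstract]",
    ])
    # Each block is ANDed with the date range and humans filter,
    # reproducing the (redundant) trailing filter of the original string.
    return (f"({journal_block} AND {DATE_FILTER} AND {HUMANS_FILTER}) "
            f"AND ({review_block} AND {DATE_FILTER} AND {HUMANS_FILTER}) "
            f"AND ({DATE_FILTER} AND {HUMANS_FILTER})")


print(build_query())
```

Note that the date and humans filters are repeated for each clause group, exactly as in the published string; PubMed would return the same set with a single trailing filter.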
Following the screening process, full-text versions of included articles were obtained via EndNote. To standardize the coding process, an abstraction manual was developed and pilot tested. After completing this process, a training session was conducted to familiarize coders with abstracting the data elements. A subset of studies was jointly coded. After the training exercise, each coder was provided with three new articles to code independently. Each coder was then assigned an equal subset of articles for data abstraction. We coded the following elements: a) whether methodological quality or risk of bias was assessed and, if so, the tool used; b) whether authors developed a customized measure; c) whether methodological quality was scored and, if so, what scale was used; d) whether authors identified high risk of bias or low-quality studies; e) whether high risk of bias or low-quality studies were included in the estimation of summary effects; f) how risk of bias or quality appraisal information was presented in the article; and g) whether follow-up analyses were conducted to explore the effects of bias on study outcomes (such as subgroup analysis, sensitivity analysis, or meta-regression).

\subsection{Data analysis}

We performed a descriptive analysis of the frequency and percent use of quality assessment tools, types of tools, types of scales used, how the quality information was presented, and types of methods used to deal with risk of bias or low quality. In assessing the types of tools used to measure quality, we created additional categories to account for variations in approach. We coded an appraisal as "author's custom measure" if authors described their own approach to evaluating study quality. In situations where the author used a quality assessment method adapted from another study, we coded this as "adapted criteria."
Some studies indicated (either in the abstract or in the methods section) that methodological quality was assessed but provided no specific detail beyond this generic statement. These were coded as "unspecified." Statistical analyses were performed with Stata version 13.1 (StataCorp, College Station, Texas, USA).

\section{Results}

The PubMed search resulted in 337 articles from four journals. After screening titles and abstracts, 79 were excluded because they were not systematic reviews or meta-analyses. An additional 76 articles were excluded after full-text screening. Two articles could not be retrieved after multiple attempts. A total of 182 articles were included in this study (Figure 1). Methodological quality or risk of bias assessment was conducted in 42\% (77/182) of systematic reviews. Of the 77 articles in which an assessment of methodological quality or risk of bias was identified, 51.95\% (40/77) found either low methodological quality or high risk of bias in primary studies comprising systematic reviews. Studies with an unclear risk of bias or unknown methodological quality were reported in 41.56\% (32/77) of reviews; five cases (6.49\%) reported no issues with study quality or risk of bias. The most common approaches to evaluating risk of bias or methodological quality were those designed by the authors (23.4\%, 18/77). The Cochrane Risk of Bias Tool was the most commonly reported standardized measure used by systematic reviewers (14.3\%, 11/77), followed by the Newcastle--Ottawa scale (10.4\%, 8/77), the Jadad scale (10.4\%, 8/77), QUADAS-2 (5.19\%, 4/77), and QUADAS (3.9\%, 3/77). Measures adapted from previous work were reported by 13\% (10/77). Other measures used only once are reported in Table 1 and represented 10.4\% (8/77) of the approaches used. There were 25 studies with low quality or high risk of bias that were included (78\%, 35/45). From included studies, subgroup analysis was conducted in 13\% (11/77). Meta-regression was used to address bias and quality problems in 9\% of the 45 articles that assessed quality. Sensitivity analysis was used to address bias and quality reporting issues in 18\% of studies analyzed. We examined the scales by which reviewers scored or categorized studies. This information was reported in 56 systematic reviews. For risk of bias assessments, the high/medium/low format was used most commonly (20\%, 11/56), followed by high/low/unclear (14\%, 8/56). Methodological quality was most commonly assessed using a 0--5 point scale (16.07\%, 9/56), followed by Good/Fair/Poor (7.14\%, 4/56) and a 1--9 point scale (5.36\%, 3/56). Methodological quality information was articulated largely in narrative format (44\%, 34/77) or not at all (44\%, 34/77).
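As a quick arithmetic check, the headline proportions in the Results can be recomputed from the reported counts. The snippet below uses only counts stated in the text; the labels and rounding convention are ours.

```python
# Recompute the reported Results proportions from their raw counts
# (counts taken from the text; percentage formatting is ours).

counts = {
    "assessed quality or risk of bias": (77, 182),   # reported as 42%
    "found low quality / high risk of bias": (40, 77),  # reported as 51.95%
    "unclear risk / unknown quality": (32, 77),      # reported as 41.56%
    "no issues reported": (5, 77),                   # reported as 6.49%
}

for label, (num, denom) in counts.items():
    pct = 100 * num / denom
    print(f"{label}: {num}/{denom} = {pct:.2f}%")
```

Each recomputed value matches the percentage reported in the text to the stated precision.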
Additional forms of presentation included combinations of figures and narratives (5\%, 4/77). The combination of table and narrative was also used more than single formats of presentation (3\%, 2/77). Single formats of presentation, either as a table or a figure, were used more than the combination of all three forms of presentation (3\%, 2/77). The combination of tables, figures, and narrative was used in 1\% of assessed articles.

\section{Discussion/Conclusion}

This study provides a comprehensive and recent assessment of methodological quality and risk of bias assessment in oncology journals. Our main findings indicate that reporting of quality assessment in systematic reviews and meta-analyses in major oncology journals is moderate to low, with actual assessment of methodological quality present in only 42\% of studies. This is low in comparison with similar studies assessing the frequency of risk of bias evaluations. Hopewell et al., for example, found that 80\% of non-Cochrane reviews reported methods for evaluating methodological quality or risk of bias. The inclusion of studies with high risk of bias or low quality in deriving summary effects was also an issue, with 76\% of studies in our sample including such studies in calculating results; however, this is comparable with the proportion of trials with high risk of bias included in previous studies. Hopewell et al. reported that 75\% of reviews in their study contained one or more trials with a high risk of bias \cite{hopewell2013incorporation}. It should be noted that Hopewell et al. used the Cochrane Database of Systematic Reviews, which is known for its stringent adherence to Cochrane guidelines, of which risk of bias evaluations are a routine part. This may contribute to differences between our findings and theirs.
Despite the presence of high risk of bias or low-quality studies, most review authors did not conduct further analyses to explore the influence of bias on study outcomes. Perhaps more interesting was the number of systematic reviews that reported an assessment of study quality but left the reader to wonder what had become of those evaluations: a significant number included such studies without further mention of the quality of evidence. Narrative presentation was the most common means of conveying quality assessment information; however, a table format would give readers easier access to quality or risk of bias information. We advocate a more structured approach, such as tables published in the review, to display such information. Future research should continue to investigate these evaluation practices in systematic reviews in other clinical specialties. While the majority of research has been confined largely to Cochrane systematic reviews, as well as those published in high-profile medical journals, there is a need to understand whether systematic reviewers in clinical specialties conduct these assessments and, if so, what assessments they use.