ABSTRACT BACKGROUND: Systematic reviews compare data across multiple studies to address a research question. One important assessment to perform in systematic reviews is a test of statistical heterogeneity. Statistical heterogeneity measures variance between studies, and significant heterogeneity can render a meta-analysis unusable. One new method for presenting heterogeneity graphically to compare study design features and population characteristics is evidence-based mapping (Althuis 2014). The benefit of graphically mapping out heterogeneity is that it allows researchers to properly deal with heterogeneity during meta-analysis. METHODS: A PubMed search of six oncology journals was conducted looking for systematic reviews and meta-analyses. This search strategy was adapted from a previously established search method (Montori 2005). Covidence.org was used to screen manuscripts based on title and abstract. Two coders then independently evaluated the manuscripts for 10 different elements. The evidence-based mapping method for heterogeneity was applied to a manuscript chosen by the author. Stata 13.1 was used to analyze the coders' data, which was then searched for trends in heterogeneity use. RESULTS: The initial PubMed search yielded 337 manuscripts from 6 different journals. Post-screen/coding exclusions left 182 manuscripts across 4 journals for analysis. Of these papers, 50% used varying combinations of heterogeneity tests, and of those only 8% had too much heterogeneity to complete the meta-analysis. Of the studies which measured heterogeneity, 25% utilized a random-effects model, 4% utilized a fixed-effects model, and 21% used both. The results from the evidence-based mapping show variance in average patient age, reporting of length of sedation use, study facility, and sedation mode. CONCLUSION: It is the impression of this study that quantitative and qualitative heterogeneity measurement tools are underutilized in the four oncology journals evaluated. 
These assessments should be applied in meta-analyses to reduce the risk of spurious findings being integrated into medical practice. This tool will help determine whether a meta-analysis can be performed before time is invested in conducting it. This is preferable to performing a quantitative measurement of heterogeneity after the fact to indicate whether or not the study analysis is trustworthy. KEYWORDS: Heterogeneity; Meta-analysis; Systematic Review; Evidence-based mapping; Oncology; Palliative care
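The quantities named in the abstract — heterogeneity tests and fixed- versus random-effects models — can be illustrated concretely. The thesis itself used Stata 13.1; the Python below is only a minimal, hypothetical sketch of Cochran's Q, the I² statistic, and inverse-variance pooling under fixed-effect and DerSimonian-Laird random-effects models, using made-up effect sizes rather than any data from this study.

```python
import math

def meta_analysis(effects, ses):
    """Fixed- and random-effects pooling with Cochran's Q and I² (illustrative sketch).

    effects: per-study effect estimates (e.g. log odds ratios); ses: their standard errors.
    """
    w = [1 / se**2 for se in ses]                       # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect estimate
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))
    k = len(effects)
    # I²: percentage of total variability attributable to between-study heterogeneity
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird estimate of between-study variance tau²
    tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    # Random-effects weights add tau² to each study's sampling variance
    w_re = [1 / (se**2 + tau2) for se in ses]
    random_eff = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return fixed, random_eff, q, i2, tau2

# Hypothetical log-odds-ratio effects and standard errors from four studies
fixed, rand, q, i2, tau2 = meta_analysis([0.2, 0.5, -0.1, 0.4],
                                         [0.1, 0.2, 0.15, 0.25])
```

When I² is substantial and tau² is nonzero, the random-effects estimate down-weights the most precise studies relative to fixed-effect pooling — the distinction tracked in the 25%/4%/21% model-use figures above.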
INTRODUCTION In order for systematic reviews to make accurate inferences concerning clinical therapy, the primary studies that constitute the review must provide valid results. The Cochrane Handbook for Systematic Reviews states that assessment of validity is an “essential component” of a review that “should influence the analysis, interpretation, and conclusions of the review” (p. 188). The internal validity of a review’s primary studies must be considered to ensure that bias has not compromised the results, leading to inaccurate estimates of summary effect sizes. In ophthalmology, there is a need for closer examination of the validity of primary studies comprising a review. As an illustrative example, Chakrabarti et al. (2012) discussed emerging ophthalmic treatments for proliferative (PDR) and nonproliferative diabetic retinopathy (NPDR), noting that anti-vascular endothelial growth factor (VEGF) agents consistently received recognition as a possible alternative treatment for diabetic retinopathy. Treatment guidelines from the Scottish Intercollegiate Guidelines Network and the American Academy of Ophthalmology consider anti-VEGF treatment as merely _useful as an adjunct_ to laser for treatment of PDR; however, the Malaysian guidelines indicate that these same agents are _to be considered in combination_ with intraocular steroids and vitrectomy. Most extensively, the National Health and Medical Research Council guidelines _recommend the addition_ of anti-VEGF to laser therapy prior to vitrectomy. The evidence base informing these guidelines comprises trials of questionable quality. Martinez-Zapata et al. (2014) conducted a systematic review of this anti-VEGF treatment for diabetic retinopathy, which included 18 randomized controlled trials (RCTs). Of these trials, seven were at high risk of bias while the rest were unclear in one or more domains. 
The authors concluded, “there is very low or low quality evidence from RCTs for the efficacy and safety of anti-VEGF agents when used to treat PDR over and above current standard treatments.” Thus, low-quality evidence provides less confidence regarding the efficacy of treatment, makes guidelines advocating use suspect, and impairs the clinician's ability to make sound judgments regarding treatment. Over the years, researchers have conceived many methods in an attempt to evaluate the validity or methodological quality of primary studies. Initially, checklists and scales were developed to evaluate whether particular aspects of experimental design, such as randomization, blinding, or allocation concealment, were incorporated into the study. These approaches have been criticized for falsely elevating quality scores. Many of these scales and checklists include items that have no bearing on the validity of study findings, such as whether investigators used informed consent or whether ethical approval was obtained. Furthermore, with the proliferation of quality appraisal scales, it was found that the choice of scale could alter the results of systematic reviews due to weighting differences of scale components. Two such scales, the Jadad scale (also called the Oxford Scoring System) and the Downs and Black checklist, were among the popular alternatives. Quality of Reporting of Meta-analyses (QUOROM), the dominant reporting guideline at that time, called for the evaluation of the methodological quality of the primary studies in systematic reviews. This recommendation was short-lived, as the Cochrane Collaboration began to advocate for a new approach to assess the validity of primary studies. This new method assessed the risk of bias in six particular design domains of primary studies, with each domain receiving a rating of either low, unclear, or high risk of bias. 
Following suit, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, the updated reporting guideline, now calls for the evaluation of bias in all systematic reviews. A previous review examining primary studies from multiple fields of medicine revealed that the failure to incorporate an assessment of methodological quality can result in the implementation of interventions founded on misleading evidence. Yet, questions remain regarding the assessment of quality and risk of bias in clinical specialties. Therefore, we examined ophthalmology systematic reviews to determine the degree to which methodological quality and risk of bias assessments were conducted. We also evaluated the particular method used in the evaluation, the quality components comprising these assessments, and how systematic reviewers integrated primary studies with low quality or high risk of bias into their results.
Background: Publication bias (PB) can cause an exaggerated estimate of summary effects in systematic reviews (SRs). The extent of PB assessment by SRs within oncology journals remains to be determined. Methods: This study examined SRs from high-impact-factor oncology journals published between 2007 and 2015, identified using a PubMed search. Articles were sorted and coded for PB. An additional assessment of PB in unevaluated SRs was performed using Egger’s regression and the trim-and-fill method. Findings: Of 182 included SRs, 52 performed a PB assessment. The most common form of assessment was a funnel plot supplemented by Egger’s regression or Begg’s test (44%, 23/52). PB was a routine finding in these SRs (19%, 10/52). SRs that stated they followed a reporting guideline frequently failed to do so with regard to assessing PB. The magnitude of effect sizes generally decreased when we conducted our independent assessments of PB among SRs in our sample that did not evaluate for it. Interpretation: Our study shows that PB assessments are underutilized by SRs in clinical oncology. Additionally, the methodological validity of SRs can be increased by adhering to reporting guidelines and by searching grey literature and clinical trials registries. Funding: No external source of funding.
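Egger's regression, one of the two techniques used for the independent PB assessments described above, tests for funnel-plot asymmetry by regressing each study's standardized effect (effect/SE) on its precision (1/SE); an intercept far from zero suggests small-study effects consistent with publication bias. The sketch below is a hypothetical, minimal Python re-implementation of that core computation with invented data — it is not the code or data used in the study.

```python
import math

def egger_test(effects, ses):
    """Egger's regression asymmetry test (illustrative sketch).

    Regresses standardized effects (effect/SE) on precision (1/SE).
    Returns the intercept and its t-statistic; compare |t| to t(n-2).
    """
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1 / s for s in ses]                    # precision
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual variance and the standard error of the intercept
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))
    return intercept, intercept / se_int

# Hypothetical data in which smaller studies (larger SEs) show larger effects,
# the classic funnel-plot-asymmetry pattern
b0, t = egger_test([0.8, 0.6, 0.5, 0.3, 0.25], [0.4, 0.3, 0.2, 0.1, 0.05])
```

A clearly positive intercept, as in this contrived example, is the signature of small-study effects; trim-and-fill then goes a step further by imputing the "missing" studies and recomputing the pooled estimate.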
Abstract Purpose: The purpose of this study was to evaluate the quality of reporting in the abstracts of oncology systematic reviews using PRISMA guidelines for abstract writing. Methods: Oncology systematic reviews and meta-analyses from four journals - The Lancet Oncology, Clinical Cancer Research, Cancer Research, and Journal of Clinical Oncology - were selected using a PubMed search. The resulting 337 abstracts were sorted for eligibility and 182 were coded based on a standardized abstraction manual constructed from the PRISMA criteria. Eligible systematic reviews were coded independently and later verified by a second coder, with disagreements handled by consensus. One hundred eighty-two abstracts comprised the final sample. Results: The number of included studies, information regarding main outcomes, and general interpretation of results were described in the majority of abstracts. In contrast, risk of bias or methodological quality appraisals, the strengths and limitations of evidence, funding sources, and registration information were rarely reported. By journal, the most notable difference was a higher percentage of funding sources reported in Lancet Oncology. No detectable upward trend was observed on mean abstract scores after publication of the PRISMA extension for abstracts. Conclusion: Overall, the reporting of essential information in oncology systematic review and meta-analysis abstracts is suboptimal and could be greatly improved. Keywords: Review, Systematic; Meta-Analysis; Cancer; Medical Oncology; Abstracting as Topic; Funding
ABSTRACT AND KEY WORDS AIM: This study aimed to evaluate the reporting and utilization of methodological quality measures in addressing low quality and risk of bias in major oncology journals. METHODS: We performed a search of systematic reviews from high-impact-factor journals in oncology from 2007 to 2015 through PubMed. Covidence was used to screen articles based on the title and abstract. The methodological quality and reporting of risk of bias were evaluated over three rounds of coding by two independent reviewers using the same checklist. Differences in assessment were resolved through group consensus. RESULTS: Quality assessment was examined in 182 articles after exclusions. Quality or risk of bias was assessed in 48% of articles. The most common tools were authors’ custom measures (23%), other tools (14%), and the Cochrane Risk of Bias Tool (13%). Low-quality or high-risk-of-bias studies were detected in 40 studies. Subgroup analysis was conducted in 14%, meta-regression in 10%, and sensitivity analysis in 21%. Low quality or risk of bias was not reported in 32 studies. Quality measures were articulated in narrative format (44%), not at all (44%), or in a combination of tables and figures (12%). CONCLUSIONS: Quality and risk of bias were assessed in only half of systematic reviews; moreover, when addressed, the methods of assessment were more commonly devised by the authors rather than drawn from recommended guidelines. This analysis provides further evidence of inconsistent quality-measure reporting for clinical findings in oncology manuscripts. Differences between bias assessment and quality reporting could misrepresent intervention results in oncology journals. 
KEYWORDS: meta-analysis; oncology; quality; risk of bias; systematic review INTRODUCTION The use of systematic reviews and meta-analyses has become increasingly important in evidence-based medicine as clinicians seek reliable information on treatments and care guidelines for their medical practice. Since systematic reviews synthesize evidence from multiple studies, clinicians are able to better understand the individual trials comprising the review as well as the efficacy of the therapy summarized across all available, relevant evidence. One essential feature that lends confidence to the findings of a review is an appraisal of the methodology of the studies comprising it. In cases where systematic reviewers have concluded that primary studies are of high methodological quality or have low potential for biased outcomes, clinicians can have more confidence in the study findings. For example, Yang et al. evaluated the toxicity and efficacy of chemotherapy plus cetuximab in relation to chemotherapy alone in patients with advanced non-small cell lung cancer. The systematic review comprised four trials. A risk of bias assessment of these trials was conducted, and the authors concluded that risk of bias was low for overall survival and one-year survival rates but high for all other outcomes due to a lack of blinding. Hence, the reviewers concluded that chemotherapy plus cetuximab was better than chemotherapy alone for improving overall survival; the risk of bias assessment played an important role in the interpretation of the summary effect. Many scales have been designed in response to concerns regarding methodological quality among primary studies; however, recent evidence indicates scales may not be the best way to appraise studies. 
Rather, certain design features should be reviewed to provide a clearer picture of bias in trials. The _Cochrane Handbook for Systematic Reviews of Interventions_ is continually updated to improve the assessment of methodological quality in clinical studies and advocates for appraising the risk of bias of all primary studies included for review. Major reporting guidelines for systematic reviews have been published and suggest some form of quality appraisal. The first guideline, published in 1996, was referred to as the Quality of Reporting of Meta-Analyses (QUOROM) and advocated use of a methodological quality measures tool for appraisal. More recently, however, QUOROM’s successor, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), called for evaluating the risk of bias of primary studies. This recommendation, consistent with the Cochrane Collaboration, accounts for criticisms of the quality scales, including that certain components of these tools often have no known role in contributing to the validity of findings, such as whether investigators reported oversight by an institutional review board. Inclusion of such items can artificially inflate the overall quality score of a particular study. Despite a clear move toward progress in this area, there are still significant differences in quality assessment practices between systematic reviews. In fact, little is known about the application of methodological quality or risk of bias measures in clinical specialties like oncology. To address this issue, we conducted a study of the oncology literature to assess how often quality and risk of bias assessments were used in oncology systematic reviews, determine the prevalence of the approaches reported by the authors, and examine the ways that such evaluations were incorporated into the reviews. 
METHODS SEARCH CRITERIA AND ELIGIBILITY Using the h5-Index from Google Scholar Metrics, we selected the six oncology journals with the highest index scores. We searched PubMed using the following search string: ((((((“Journal of clinical oncology : official journal of the American Society of Clinical Oncology”[Journal] OR “Nature reviews. Cancer”[Journal]) OR “Cancer research”[Journal]) OR “The Lancet. Oncology”[Journal]) OR “Clinical cancer research : an official journal of the American Association for Cancer Research”[Journal]) OR “Cancer cell”[Journal]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND (((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) OR systematic review[Title/Abstract]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND ((“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]). This search strategy was adapted from a previously established method that is sensitive to identifying systematic reviews and meta-analyses (Montori 2005). Searches were conducted on May 18 and May 26, 2015. SCREENING AND DATA EXTRACTION We used Covidence (covidence.org) to initially screen articles based on title and abstract. To qualify as a systematic review, articles had to summarize evidence across multiple studies and provide information on the search strategy, such as search terms, databases, or inclusion/exclusion criteria. Meta-analyses were classified as quantitative syntheses of results across multiple studies (Onishi 2014). Two screeners independently reviewed the titles and abstracts of each citation and made a decision regarding its suitability for inclusion based on the definitions previously described. Next, the screeners held a meeting to revisit the citations in conflict and arrive at a final consensus. Following the screening process, full-text versions of included articles were obtained via EndNote. 
To standardize the coding process, an abstraction manual was developed and pilot tested. After completing this process, a training session was conducted to familiarize coders with abstracting the data elements. A subset of studies was jointly coded. After the training exercise, each coder was provided with three new articles to code independently. Each coder was next assigned an equal subset of articles for data abstraction. We coded the following elements: a) whether methodological quality or risk of bias was assessed, and if so the tool used; b) whether authors developed a customized measure; c) whether methodological quality was scored, and if so, what scale was used; d) whether authors identified high risk of bias or low-quality studies; e) whether high risk of bias or low-quality studies were included in the estimation of summary effects; f) how risk of bias or quality appraisal information was presented in the article; and g) whether follow-up analyses were conducted to explore the effects of bias on study outcomes (such as subgroup analysis, sensitivity analysis, or meta-regression). DATA ANALYSIS We performed a descriptive analysis of the frequency and percent use of quality assessment tools used, types of tools, types of scales used, how the quality information was presented, and types of methods used to deal with risk of bias or low quality. In assessing the types of tools used to measure quality, we created some additional categories to account for the variations in approaches. We coded an appraisal as “author’s custom measure” if authors described their own approach to evaluating study quality. In situations where the author used a quality assessment method adapted from another study, we coded this as “adapted criteria.” Some studies indicated (either in the abstract or from the methods section) that methodological quality was assessed, but there was no specific detail beyond this generic statement. 
These were coded as “unspecified.” Statistical analyses were performed with Stata version 13.1 software (StataCorp, College Station, Texas, USA). RESULTS The PubMed search resulted in 337 articles from six journals. After screening titles and abstracts, 79 were excluded because they were not systematic reviews or meta-analyses. An additional 76 articles were excluded after full-text screening. Two articles could not be retrieved after multiple attempts. A total of 182 articles were included in this study (Figure 1). Methodological quality or risk of bias assessment was conducted in 42% (77/182) of systematic reviews. Of the 77 articles where assessment of methodological quality or risk of bias was identified, 51.95% (40/77) found either low methodological quality or high risk of bias in primary studies comprising systematic reviews. Studies with an unclear risk of bias or unknown methodological quality were reported in 41.56% (32/77) of reviews; five cases (6.49%) reported no issues with study quality or risk of bias. The most common approaches to evaluating risk of bias or methodological quality were those designed by the authors (23.4%, 18/77). The Cochrane Risk of Bias Tool was the most commonly reported standardized measure used by systematic reviewers (14.3%, 11/77), followed by the Newcastle–Ottawa scale (10.4%, 8/77), the Jadad scale (10.4%, 8/77), QUADAS-2 (5.19%, 4/77), and QUADAS (3.9%, 3/77). Measures adapted from previous work were reported by 13% (10/77). Other measures used only once are reported in Table 1 and represented 10.4% (8/77) of the approaches used. Studies with low quality or high risk of bias were included in the estimation of summary effects in 78% (35/45) of the reviews that identified such studies. Among included studies, subgroup analysis was conducted in 13% (11/77). Meta-regression was used to address bias and quality problems in 9% of the 45 articles that assessed quality. Sensitivity analysis was used to address bias and quality reporting issues in 18% of studies analyzed. 
We examined the scales by which reviewers scored or categorized studies. This information was reported in 56 systematic reviews. For risk of bias assessments, the high/medium/low format was used most commonly (20%, 11/56), followed by high/low/unclear (14%, 8/56). Methodological quality was most commonly assessed using a 0–5 point scale (16.07%, 9/56), followed by good/fair/poor (7.14%, 4/56) and a 1–9 point scale (5.36%, 3/56). Methodological quality information was articulated largely in narrative format (44%, 34/77) or not at all (44%, 34/77). Additional forms of presentation included combinations of figures and narrative (5%, 4/77) and of table and narrative (3%, 2/77). Single formats of presentation, either a table or a figure alone, were used in 3% (2/77), and the combination of tables, figures, and narrative was used in 1% of assessed articles. DISCUSSION/CONCLUSION This study provides a comprehensive and recent assessment of methodological quality and risk of bias assessment in oncology journals. Our main findings indicate that reporting of quality assessment in systematic reviews and meta-analyses in major oncology journals is moderate to low, with actual assessment of methodological quality present in only 48% of studies. This is low in comparison with similar studies assessing the frequency of risk of bias evaluations. Hopewell et al., for example, found that 80% of non-Cochrane reviews reported methods for evaluating methodological quality or risk of bias. The inclusion of studies with high risk of bias or low quality in deriving summary effects was also an issue, with 76% of studies in our sample including such studies in calculating results; however, this is comparable with the proportion of trials with high risk of bias included in previous studies. Hopewell et al. 
reported that 75% of reviews in their study contained one or more trials with a high risk of bias. It should be noted that Hopewell et al. used the Cochrane Database of Systematic Reviews, which is known for its stringent adherence to Cochrane guidelines, of which risk of bias evaluations are a routine part. This may contribute to differences between our findings and theirs. Despite the presence of high risk of bias or low quality studies, most review authors did not conduct a further analysis to explore the influence of bias on study outcomes. Perhaps more interesting was the number of systematic reviews that reported an assessment of study quality but left the reader to wonder what had become of those evaluations. A significant number included such studies without further mention of the quality of evidence. Narrative styles were the most common means of presenting quality assessment information; however, the use of a table format would provide readers with easier access to quality or risk of bias information. We advocate for a more structured approach, such as tables published in the review, to display such information. Future research should continue to investigate these evaluation practices in systematic reviews in other clinical specialties. While the majority of research has been confined largely to Cochrane systematic reviews, as well as those published in high-profile medical journals, there is a need to understand whether systematic reviewers in clinical specialties conduct these assessments and, if so, what assessments they use. Yang ZY, Liu L, Mao C, Wu XY, Huang YF, Hu XF, Tang JL. Chemotherapy with cetuximab versus chemotherapy alone for chemotherapy-naive advanced non-small cell lung cancer.
ABSTRACT Objectives: We evaluated the use of clinical trials registries in published obstetrics and gynecological systematic reviews and meta-analyses. Methods: A review of publications between January 1, 2007, and December 31, 2015, from six obstetrical and gynecological journals (_Obstetrics & Gynecology, Obstetrical & Gynecological Survey, Human Reproduction Update, Gynecologic Oncology, British Journal of Obstetrics and Gynaecology, and American Journal of Obstetrics & Gynecology_) was completed to identify eligible systematic reviews. All systematic reviews included after exclusions were independently reviewed to determine if clinical trials registries had been included as part of the search process. Studies that reported using a trials registry were further examined to determine whether trial data was included in the analysis. Results: Our initial search resulted in 292 articles, which was narrowed to 256 after exclusions. Of the 256 systematic reviews meeting our selection criteria, 47 utilized a clinical trials registry. Eleven of the 47 systematic reviews found unpublished data, and added the unpublished trial data into their results. Conclusion: A majority of systematic reviews in clinical obstetrics and gynecology journals do not conduct searches of clinical trials registries or do not make use of data obtained from these searches.