
In ophthalmology, there is a need for closer examination of the validity of the primary studies comprising a review. As an illustrative example, Chakrabarti et al. (2012) discussed emerging ophthalmic treatments for proliferative (PDR) and nonproliferative diabetic retinopathy (NPDR), noting that anti-vascular endothelial growth factor (VEGF) agents consistently received recognition as a possible alternative treatment for diabetic retinopathy. Treatment guidelines from the Scottish Intercollegiate Guidelines Network and the American Academy of Ophthalmology consider anti-VEGF treatment merely \textit{useful as an adjunct} to laser for treatment of PDR; the Malaysian guidelines, however, indicate that these same agents are \textit{to be considered in combination} with intraocular steroids and vitrectomy. Most extensively, the National Health and Medical Research Council guidelines \textit{recommend the addition} of anti-VEGF to laser therapy prior to vitrectomy. The evidence base informing these guidelines comprises trials of questionable quality. Eldaly et al. (2014) conducted a systematic review of anti-VEGF treatment for diabetic retinopathy that included 18 randomized controlled trials (RCTs). Of these trials, seven were at high risk of bias, while the rest were unclear in one or more domains. The authors concluded that ``there is very low or low quality evidence from RCTs for the efficacy and safety of anti-VEGF agents when used to treat PDR over and above current standard treatments''. Thus, low quality evidence provides less confidence regarding the efficacy of treatment, casts doubt on guidelines advocating its use, and impairs clinicians' ability to make sound judgements regarding treatment. Validity and quality have been heavily researched areas, particularly in recent years. Validity has been described as the ability of an instrument to measure what it is intended to measure (Moher 1995).
Over the years, researchers have conceived many methods in an attempt to evaluate the validity or methodological quality of primary studies. Initially, checklists and scales were developed to evaluate whether particular aspects of experimental design, such as randomization, blinding, or allocation concealment, were incorporated into the study. These approaches have been criticized for falsely elevating quality scores: many of these scales and checklists include items that have no bearing on the validity of study findings, such as whether investigators used informed consent or whether ethical approval was obtained (Moher 1995). Furthermore, with the proliferation of quality appraisal scales, it was found that the choice of scale could alter the results of systematic reviews due to disparate weighting of scale components (Jüni 1999). Because of this, two main tools emerged as the most accurate: the Jadad scale (Jadad 1996) and the Downs and Black checklist (Downs 1998). The QUOROM statement, the predominant reporting guideline at the time, necessitated the evaluation of the methodological quality of the primary studies in systematic reviews. The Cochrane Collaboration decided there was a need for a new tool, and in 2008 it developed the Cochrane Risk of Bias Tool. Its development was based on a combination of empirical and theoretical considerations, leading to a focus on risk of bias rather than study ``quality'' and a division of assessments into six bias domains (Stern 2013). Risk of bias is then ranked as low, high, or unclear, with no score calculation.
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), which provides the updated reporting guidelines, now calls for the evaluation of bias in all systematic reviews (Moher 2009). As previous studies have shown, quality and risk of bias assessments are often conducted in high-quality systematic reviews. Yet many questions remain about how systematic reviews in clinical specialties appraise quality and risk of bias. We specifically investigated ophthalmology journals to examine the degree to which quality and risk of bias assessments are conducted. We then explored which method was used in their evaluation, what quality components made up these assessments, and how each systematic review integrated primary studies with low quality and high risk of bias into its results.