The broad influence that systematic reviews of randomized controlled trials have on clinical decision-making proves how crucial these types of studies are for patient treatment. Health care providers use the information gained from systematic reviews to attempt to give their patients the best treatment possible. Nonetheless, when the original studies that constitute these systematic reviews have foundational errors in study design, they can be put at high risk of bias, which, as a result, decreases the overall benefit gained from systematic reviews \cite{25481532}.

In our examination of ophthalmology journals, only 47.80\% (87/182) of reviews reported the assessment of MQ/ROB. We found that the Cochrane Risk of Bias Tool was used most commonly for the evaluation of MQ/ROB at 20.69\% (18/87), followed by Jadad (19.54\%; 17/87). The Delphi List, the measure described by Sanderson et al., and QUADAS were each used 5 times (5.75\%), accounting for a total of 17.25\% of MQ/ROB measures. Downs and Black, QUADAS-2, CASP for RCTs, and the Newcastle-Ottawa scale were each used least often by the reviews in our study.

Many authors in our study used custom measures to evaluate MQ/ROB. These custom measures were either a combination of previously published tools and the author’s own components or solely the author’s choice of components. Many of these custom measures did not evaluate important aspects of study quality. Allocation concealment, which is widely agreed to be a very important component of MQ/ROB (Lundh 2008), was present in only 50.0\% (7/14) of custom measures. Blinding, specifically double blinding of both the patient and the assessor, was assessed in only 42.86\% (6/14) of reviews using a custom-created measure. Response rate/withdrawal (64.29\%; 9/14), blinding of the assessor alone (57.14\%; 8/14), valid and objective outcome measures (57.14\%; 8/14), and inclusion/exclusion criteria (57.14\%; 8/14) were the only components evaluated in >50.0\% of the reviews using custom measures.
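For clarity, the custom-measure percentages above are simple proportions of the 14 reviews that used a custom-created measure; for example:

\[
\frac{7}{14} = 50.0\% \ \mbox{(allocation concealment)}, \qquad
\frac{6}{14} \approx 42.86\% \ \mbox{(double blinding)}, \qquad
\frac{9}{14} \approx 64.29\% \ \mbox{(response rate/withdrawal)}.
\]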
In addition, there were seven studies (Tanna 2010, Diener-West 1992, Markowitz 1992, Ding 2008, Rogers 2010, McGimpsey 2009, and Dueker 2007) that used clinical information, in addition to MQ/ROB measures, that could falsely inflate the quality score of the review.

Out of all 182 reviews included in our study, only 10.44\% (19/182) explicitly stated that studies with low MQ/high ROB were excluded from their review, while 36.81\% (67/182) included articles with low MQ/high ROB. Nearly as many reviews were unclear about the inclusion of low MQ/high ROB articles (11.54\%; 21/182) as explicitly excluded them. Of the 67 reviews that included articles with low MQ/high ROB, only 31.34\% (21/67) performed a subgroup analysis, 23.88\% (16/67) included a meta-regression analysis, and 46.27\% (31/67) performed a sensitivity analysis. These analyses usually account for the low MQ/high ROB studies included in these reviews, but each was left out of more than half of the reviews that included low MQ/high ROB studies.

Moreover, there is extensive disagreement over which MQ/ROB tool or checklist is the most effective, if effective at all, at giving the most accurate portrayal of a study’s quality (Zeng 2015; Jüni 1999). To assess the effectiveness of each scale and tool, we examined each component to evaluate which provides the most extensive evaluation of MQ/ROB. The Downs and Black Checklist was the most extensive, assessing 62.79\% (27/43) of listed measures. The CASP Checklist for RCTs and the Newcastle-Ottawa scale followed, both assessing 34.88\% (15/43) of measures. QUADAS-2 (32.56\%; 14/43) was slightly more thorough than QUADAS (30.23\%; 13/43) at assessing MQ/ROB. We found that Jadad (27.91\%; 12/43), the Delphi List (18.60\%; 8/43), and the Cochrane Risk of Bias Tool (16.28\%; 7/43) were the three least extensive of the tools used.
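To make the extensiveness figures concrete, each tool’s coverage is the fraction of the 43 listed measures that it assesses; for example:

\[
\frac{27}{43} \approx 62.79\% \ \mbox{(Downs and Black)}, \qquad
\frac{7}{43} \approx 16.28\% \ \mbox{(Cochrane Risk of Bias Tool)}.
\]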
In conclusion, our study suggests that most authors in ophthalmology rarely assess MQ/ROB in their systematic reviews. Moreover, when MQ/ROB is evaluated, the majority of the tools and custom measures used do not provide as extensive an assessment as other commonly used tools. We recommend that systematic reviewers adopt the Downs and Black scale for the evaluation of MQ/ROB because of the extensive range of components it assesses. Future research could examine the relationship between reviewers’ choice of MQ/ROB tools and the extent to which those tools evaluate MQ/ROB.