\section{Discussion}

Our main findings were that comprehensive reporting of quality measures in systematic reviews and meta-analyses in major oncology journals was moderate to low: an actual assessment of methodological quality was present in 91 of the 182 articles (50 percent), and studies with high risk of bias or low quality were included in 35 of 46 articles (76 percent). Moreover, the inclusion of studies with high risk of bias or low quality was not the only problem; in these articles, no further analysis was conducted to address the additional bias.

The completeness of quality-measure reporting in oncology journals appears lower than in related fields such as orthodontics, where the corresponding figure under PRISMA was 64.1 percent, compared with the 50 percent rate of quality assessment found in oncology journals \cite{tunis2013association}. Addressing high risk of bias or low quality does not appear to be a focus of quality assessment in oncology journals. The variety of quality assessment scales in use also reflects inconsistent reporting and makes comparison among similar studies problematic \cite{Balk_2002}. Another point of interest was that, despite the presence of studies with high risk of bias or low quality in the data set, most reviews did not conduct further analysis to address the increased bias; it is possible that, given the varied criteria for assessing quality, many authors lack a clear awareness of which tools to use \cite{chalmers1983bias}. Grading with quality scales is also problematic because the types of scales used are not consistent across papers \cite{J_ni_1999}.

Our study had certain limitations but also strengths in evaluating the quality of reporting. The analysis was conducted over a short period of less than three months; to compensate for this, we increased the number of coding cycles to make the analysis more thorough despite the time constraint \cite{Devereaux_2002}. In addition, the articles pooled from our search were not distributed evenly across journals, so the results may reflect one particular journal more than the others. Our coding procedure did, however, cover articles published over several years, so that the reporting trend was not that of primarily one year but of several years. The assessment of methodological quality in meta-analyses and systematic reviews