
Studies with low quality or high risk of bias were included in 35 of the 46 articles that assessed quality (76.09 percent, n=35/46) \ref{fig:FIGURE_14}. Among the included studies, subgroup analysis was conducted in 17.58 percent (n=16/91) \ref{fig:FIGURE_15}. Meta-regression was used to address bias and quality problems in 8.79 percent of the 46 articles that assessed quality \ref{fig:FIGURE_16}. Sensitivity analysis was used to address bias and quality reporting issues in 17.58 percent of the studies analyzed \ref{fig:FIGURE_17}. Quality measures were articulated largely in narrative format (47.25 percent, n=43/91) or not at all (39.56 percent, n=36/91). Additional forms of presentation included the combination of figure and narrative (4.40 percent, n=4/91) and the combination of table and narrative (3.30 percent, n=3/91), each used more often than any single non-narrative format \ref{fig:FIGURE_18}.

\section{Discussion/Conclusion}

Our main findings were that comprehensive reporting of quality measures in systematic reviews and meta-analyses in major oncology journals was moderate to low: actual assessment of methodological quality was present in 91 of the 182 articles (50 percent), and studies with high risk of bias or low quality were included in 35 of the 46 articles that assessed quality (76 percent). Moreover, the inclusion of such studies was not the only problem; in most of these articles, no further analysis was conducted to address the additional bias. The completeness of quality measure reporting in oncology journals appears lower than in related fields such as orthodontics, where PRISMA compliance was 64.1 percent, compared with the 50 percent quality assessment rate found in oncology journals \cite{tunis2013association}. High risk of bias and low study quality appear not to be the focus of quality assessment in oncology journals.
The variety of quality assessment scales also indicates a problem with consistent reporting and makes comparison among similar studies problematic \cite{Balk_2002}. Another point of interest was that, despite studies with high risk of bias or low quality being included in the data set, most oncology journals did not conduct further analysis to address the increased bias; it is possible that, owing to the varied criteria for assessing quality, most studies lack a clear awareness of which types of tools to use \cite{chalmers1983bias}. Grading of scales also proves to be a problem because of the lack of consistent types of scales within papers \cite{J_ni_1999}.

Our study faced certain limitations but also maintained strengths in evaluating quality of reporting. The analysis was conducted over a short period of less than three months; to compensate for that deficiency, we increased the number of coding cycles to make our analysis more thorough despite the time constraint \cite{Devereaux_2002}. In addition, the articles pooled from our search were not distributed equally across journals, so the results may reflect particular journals more heavily than others. Our coding procedure, however, covered articles published over several years, so the reporting trend reflects multiple years rather than a single one.

Narrative styles of presenting information were the most common means of describing quality measures. This result makes sense when considering that the scales and methods of assessing quality were devised by the authors themselves, or drawn from other authors' independent descriptions of quality measures, rather than following a standardized format for grading or measuring quality \cite{Kamal_2014}. A narrative method allows the author to be more descriptive than a figure or table as a form of presentation. The use of narratives for describing quality measures is thus a consequence of the wide variety of quality assessment tools and scales in use.
The sporadic use of quality measures has detrimental effects on the validity of findings published in oncology journals. Inconsistent reporting of methodological quality assessment tools and scales in meta-analyses and systematic reviews leads to misinterpretation of clinical trial information and thereby negatively impacts patients.

In conclusion, quality assessment in major oncology journals has room for improvement, in particular with regard to the varied number of individual quality assessment tools used across studies, which detracts from the ability to compare data. Additionally, scales for grading high risk of bias or low quality need to be more uniform to allow comparison across studies. Where studies with high risk of bias are included, additional analysis is important to counter bias in the results.

\section{Acknowledgements}

We are grateful for assistance with planning the study design and coding format from Matt Vassar, PhD, of Oklahoma State University Center for Health Sciences, and to Chelsea Koller, Jonathan Holmes, and David Herrmann for assistance in compiling and coding data from our set of articles.