258 articles were identified as meeting the initial search criteria. After exclusion of ineligible articles, quality assessment was conducted on 182 articles. Quality and risk of bias were assessed in 91 (50 percent) of these articles. The most common tools used were those adapted from other sources (27.47 percent, n=25/91), tools the authors devised independently (20.88 percent, n=19/91), and unspecified tools (13.19 percent, n=12/91). In assessing risk of bias, a high/medium/low scale was used most often (18.97 percent, n=11/58), followed by high/medium/unclear (13.79 percent, n=8/58); quality was assessed through author-created scales (29.31 percent, n=17/58) and the Jadad scale (15.52 percent, n=9/58). Low quality or high risk of bias studies were found in 46 studies, and 76.09 percent of these (n=35/46) included such studies. Among included studies, subgroup analysis was conducted in 17.58 percent, meta-regression in 8.79 percent, and sensitivity analysis in 17.58 percent. In 37 studies, it could not be determined whether low quality or high risk of bias studies were present. Quality measures were articulated largely in narrative format (47.25 percent, n=43/91) or not at all (39.56 percent, n=36/91).  \subsection{\textbf{Conclusions:}}  Quality and risk of bias were assessed in half of the systematic reviews and meta-analyses coded; however, methods of assessment were determined by the authors independently rather than following well-known reporting guidelines such as CONSORT. Independently developed quality scales were created and used rather than well-known instruments such as the Jadad scale and the Newcastle-Ottawa Scale. Studies with high risk of bias or low quality were included in most of these reviews, yet subgroup analyses, meta-regression, and sensitivity analysis were not used to address their inclusion. This analysis provides further evidence that the lack of consistency in reporting quality measures extends to clinical findings in the field of oncology.
Differences in the assessment of bias and in quality reporting could negatively impact the clinical application of the findings, treatments, and procedures presented in major oncology journals. \subsection{\textbf{Keywords:}}  bias; meta-analysis; oncology; quality; systematic review  \section{Introduction} 

The first major guideline for reporting in systematic reviews and meta-analyses, the Quality of Reporting of Meta-analyses (QUOROM) statement, came out in 1996 and has since been succeeded by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement \cite{Moher_2011}. After publication of the QUOROM statement, there was an improvement in how items from the checklist were reported, and the quality of reporting improved in the critical care literature \cite{Delaney_2007}. Despite this clear progress in the reporting of quality measures, significant differences in design and grading remain between studies \cite{higgins2008cochrane}. According to The Cochrane Collaboration, criteria for the evaluation of bias in studies should be domain-based, and further analysis is needed if high risk of bias or low quality studies are included \cite{higgins2008cochrane}. Reporting of quality measures and methodological quality assessment is crucial for clinicians to have the best information for patient care.   The aim of our study was to assess how quality and risk of bias are evaluated in a sample of oncology systematic reviews and meta-analyses, how quality and risk of bias are graded, whether high risk of bias or low quality studies tend to be included, and, if so, what methods of analysis are used to address their inclusion. In addition, we assessed how this information is reported within the studies analyzed.   \section{Methods}  \subsection{Search Criteria and Eligibility}   We conducted a PubMed search using the following search string: 

Our overall dataset included 337 studies at the identification stage (Figure 1). From that initial dataset, 79 articles were excluded during screening because they were neither meta-analyses nor systematic reviews. The remaining 258 articles were assessed against our eligibility criteria. An additional 76 studies were removed after individual and group consensus was reached on the reasons for their removal; these were genetic studies, individual patient data meta-analyses, genomic studies, histological studies, and a letter to an editor. Our final dataset contained 182 articles.   Within this dataset, quality or risk of bias assessment was conducted in 91 articles (50 percent) (Figure \ref{fig:FIGURE_3}). The most common tools used were those adapted from other sources, such as other authors (27.47 percent, n=25/91) (Figure \ref{fig:FIGURE_6}). The next most common were tools that the authors devised independently (20.88 percent, n=19/91) and tools that were unspecified (13.19 percent, n=12/91) (Figure \ref{fig:FIGURE_7}). QUOROM was the fourth most commonly used tool in the oncology journals (12.09 percent, n=11/91) (Figure \ref{fig:FIGURE_7}). In assessing risk of bias, a high/medium/low scale was used most commonly (18.97 percent, n=11/58), followed by high/medium/unclear (13.79 percent, n=8/58); quality was assessed through author-created scales (29.31 percent, n=17/58) and the Jadad scale (15.52 percent, n=9/58) (Figure \ref{fig:FIGURE_8}). Low quality or high risk of bias studies were found in 46 of the 91 studies that assessed quality (Figure \ref{fig:FIGURE_10}). In 37 studies, it could not be determined whether low quality or high risk of bias studies were isolated (Figure \ref{fig:FIGURE_11}).   
Of the 46 studies in which low quality or high risk of bias studies were found, 35 (76.09 percent, n=35/46) included those studies (Figure \ref{fig:FIGURE_14}). Among included studies, subgroup analysis was conducted in 17.58 percent (n=16/91) (Figure \ref{fig:FIGURE_15}). Meta-regression was used to address bias and quality problems in 8.79 percent of the 46 articles that assessed quality (Figure \ref{fig:FIGURE_16}). Sensitivity analysis was used to address bias and quality reporting issues in 17.58 percent of the studies analyzed (Figure \ref{fig:FIGURE_17}). Quality measures were articulated largely in narrative format (47.25 percent, n=43/91) or not at all (39.56 percent, n=36/91). Additional forms of presentation included combinations of figures and narratives (4.40 percent, n=4/91) (Figure \ref{fig:FIGURE_18}); the combination of table and narrative was also used (3.30 percent, n=3/91) (Figure \ref{fig:FIGURE_18}).  \section{Discussion}  Our main findings were that comprehensive reporting of quality measures in systematic reviews and meta-analyses in major oncology journals was moderate to low: methodological quality was actually assessed in only 91 of the 182 articles (50 percent), and studies with high risk of bias or low quality were included in 35 of the 46 (76 percent) studies that found them. Moreover, the inclusion of studies with high risk of bias or low quality was not the only problem; among the articles that included such studies, further analysis was not conducted to address the additional bias. The completeness of quality measure reporting in oncology journals appears to be lower than in related fields such as orthodontics, where PRISMA-based assessment yielded 64.1 percent compared with the 50 percent quality assessment found in oncology journals \cite{tunis2013association}. Addressing high risk of bias or low quality studies appears not to be a focus of quality assessment in oncology journals.   
The variety of quality assessment scales also indicates a problem with consistent reporting.  Our study faced certain limitations but also had strengths in evaluating quality of reporting. The analysis was conducted over a short period (less than three months); to compensate for that deficiency, we increased the number of coding cycles to make our analysis more thorough despite the time constraint. In addition, the articles pooled from our search were not distributed equally across journals, so the results may reflect particular journals more than others. Our coding procedure, however, covered articles from several years, so that the reporting trend observed was not primarily that of a single year.  \section{Conclusion}