We conducted a PubMed search using the following search string: (((((("Journal of clinical oncology : official journal of the American Society of Clinical Oncology"[Journal] OR "Nature reviews. Cancer"[Journal]) OR "Cancer research"[Journal]) OR "The Lancet. Oncology"[Journal]) OR "Clinical cancer research : an official journal of the American Association for Cancer Research"[Journal]) OR "Cancer cell"[Journal]) AND ("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]) AND (((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) OR systematic review[Title/Abstract]) AND ("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]) AND (("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]). Our method for identifying meta-analyses and systematic reviews was based on the Montori method, which has been used in other databases \cite{Montori_2005}. Most of the oncology articles were drawn from three journals: \textit{Clinical Cancer Research}, \textit{Journal of Clinical Oncology}, and \textit{The Lancet Oncology}. We excluded articles if they were individual patient data meta-analyses, narrative reviews, editorials, scoping reviews, case reports, data collection studies, or meta-analyses where the analysis was not the primary focus of the study (Figure \ref{fig:FIGURE_1}). The search was limited to English language articles and was conducted on 18 May 2015 and 26 May 2015.

\subsection{Article Selection and Data Extraction}

We used Covidence (Covidence.org) for the initial screening, in which articles were screened on the basis of title and abstract. Our focus in this study was the methodological quality of systematic reviews and meta-analyses in oncology journals. Full-text versions of articles were imported via EndNote and stored in PDF format, obtained from the Internet and through inter-library loan. A coding key for quality measure abstraction was developed and pilot tested on a convenience sample of fifty articles (Figure \ref{fig:FIGURE_2}). The articles were stored on Google Drive, along with the coding sheets and documents related to the study, such as team assignments, article coding assignments, and abstraction manuals.

A training session was held for the individual coders in which three articles from various journal types were coded as a group. Each coding pair was then instructed to code three articles independently, and the data from that convenience sample were analyzed for inter-rater agreement using the kappa statistic. During the training session, the team also reached consensus and modified the abstraction manuals to settle disagreements between the coding of the two coders. Each coder was assigned their own range of articles to code. After the first round of coding, coders within each pair switched and validated the codes from the first round. After this second round of coding, an additional final code was created in which disagreements were resolved by consensus.

The systematic reviews and meta-analyses chosen for our study were required to be of similar methodological standing and to meet the strict criteria used in Cochrane systematic reviews. Our exclusions included meta-analyses that were not the primary aim of the study, narrative reviews rather than systematic reviews, reviews of reviews, case reports, data collection studies, case-control studies, and individual patient data meta-analyses.
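For readers who wish to re-run the search programmatically, a minimal sketch using Biopython's Entrez E-utilities wrapper is given below. This is illustrative only: the search itself was performed through the PubMed web interface, the query shown is a condensed but logically equivalent form of the full string above, and the email address is a placeholder.

\begin{verbatim}
# Illustrative sketch only; the actual search was run via the PubMed
# web interface. Requires Biopython (pip install biopython).
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder, required by NCBI

query = (
    '("Journal of clinical oncology : official journal of the American '
    'Society of Clinical Oncology"[Journal] OR "Nature reviews. Cancer"[Journal] '
    'OR "Cancer research"[Journal] OR "The Lancet. Oncology"[Journal] '
    'OR "Clinical cancer research : an official journal of the American '
    'Association for Cancer Research"[Journal] OR "Cancer cell"[Journal]) '
    'AND (meta-analysis[Title/Abstract] OR meta-analysis[Publication Type] '
    'OR systematic review[Title/Abstract]) '
    'AND ("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # total number of matching records
print(record["IdList"])  # PubMed IDs retrieved for screening
\end{verbatim}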
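Likewise, the inter-rater agreement check described above corresponds to computing Cohen's kappa on the paired coder decisions; the sketch below uses scikit-learn, and the decision lists are invented purely for illustration.

\begin{verbatim}
# Minimal sketch of the kappa-based inter-rater agreement check.
# The decision lists below are invented example data.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["include", "exclude", "include", "include", "exclude"]
coder_2 = ["include", "exclude", "include", "exclude", "exclude"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa: {kappa:.2f}")
\end{verbatim}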
Data extraction for quality assessment recorded whether quality or risk of bias was assessed, which tools were used to measure risk of bias or quality, whether the authors used their own methods to assess quality, whether quality or risk of bias was graded, and what scale was used to grade quality or risk of bias. We also recorded whether high risk of bias or low quality studies were found, whether such studies were included, and how risk of bias or quality was presented in the article. Where high risk of bias or low quality studies were found and included, we assessed whether subgroup analysis, meta-regression, or sensitivity analyses were used to address the quality of reporting.
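To make the abstraction items above concrete, a hypothetical per-article record is sketched below; the field names are illustrative and do not reproduce the actual coding key used in the study.

\begin{verbatim}
# Hypothetical structure for one article's quality abstraction record;
# field names are illustrative, not the study's actual coding key.
from dataclasses import dataclass
from typing import Optional

@dataclass
class QualityAbstraction:
    pmid: str
    quality_or_rob_assessed: bool        # was quality/risk of bias assessed?
    assessment_tool: Optional[str]       # e.g. "QUOROM", "Jadad", "adapted"
    authors_own_method: bool             # did authors use their own method?
    graded: bool                         # was quality/risk of bias graded?
    grading_scale: Optional[str]         # e.g. "high/medium/low"
    low_quality_found: bool              # low quality / high RoB studies found?
    low_quality_included: bool           # ...and included in the analysis?
    presentation: Optional[str]          # "narrative", "table", "figure", ...
    subgroup_analysis: bool
    meta_regression: bool
    sensitivity_analysis: bool
\end{verbatim}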

\section{Results}

Our overall dataset included 337 studies identified during the initial search (Figure 1). Of these, 79 articles were excluded during screening because they were neither meta-analyses nor systematic reviews, leaving 258 articles to be assessed against our eligibility criteria. An additional 76 studies were removed after individual and group consensus was reached on the reasons for their removal; these were genetic studies, individual patient data meta-analyses, genomic studies, histological studies, and a letter to an editor. Our final dataset contained 182 articles.

Within this dataset, quality or risk of bias assessment was conducted in 91 articles (50\%) (Figure \ref{fig:FIGURE_3}). The most commonly used tools were those adapted from other sources, such as other authors' tools (24\%, n=25/91) (Figure \ref{fig:FIGURE_3}). The next most frequent were tools that the authors devised and applied independently (21\%, n=19/91) and tools that were unspecified (13\%, n=12/91) (Figure \ref{fig:FIGURE_3}). QUOROM was the fourth most frequently used tool in the oncology journals (12\%, n=11/91) (Figure \ref{fig:FIGURE_3}).

Low quality or high risk of bias studies were isolated (Figure \ref{fig:FIGURE_4}). There were 35 studies in which low quality or high risk of bias studies were found and included (76\%, n=35/46) (Figure \ref{fig:FIGURE_4}). Among the included studies, subgroup analysis was conducted in 17\% (n=16/91) (Figure \ref{fig:FIGURE_4}). Meta-regression was used to address bias and quality problems in 9\% of the articles that assessed quality (Figure \ref{fig:FIGURE_4}). Sensitivity analysis was used to address bias and quality reporting issues in 18\% of the studies analyzed (Figure \ref{fig:FIGURE_4}).

In assessing risk of bias, a high/medium/low scale was used most commonly (19\%, n=11/58), followed by high/medium/unclear (14\%, n=8/58), while quality was assessed through author-created scales (29\%, n=17/58) and the Jadad scale (16\%, n=9/58) (Figure \ref{fig:FIGURE_5}). Low quality or high risk of bias studies were found in 46 of the 91 studies that assessed quality (Figure \ref{fig:FIGURE_5}); in 37 studies this could not be determined. Quality measures were articulated largely in narrative format (47\%, n=43/91) or not at all (40\%, n=36/91) (Figure \ref{fig:FIGURE_6}). Additional forms of presentation included combinations of figure and narrative (4\%, n=4/91) (Figure \ref{fig:FIGURE_7}). The combination of table and narrative was also used more often than some single formats of presentation (3\%, n=3/91) (Figure \ref{fig:FIGURE_7}).

\section{Discussion/Conclusion}

Our main findings were that comprehensive reporting of quality measures in systematic reviews and meta-analyses in major oncology journals was moderate to low: methodological quality was actually assessed in 91 of the 182 articles (50\%), and studies with high risk of bias or low quality were included in 35 of the 46 reviews (76\%) in which such studies were found. Moreover, the inclusion of studies with high risk of bias or low quality was not the only problem; in most of the included articles, no further analysis was conducted to address the additional bias.
The completeness of quality measure reporting in oncology journals appears to be lower than in related fields such as orthodontics, where PRISMA results were 64\%, compared with the 50\% rate of quality assessment found in the oncology journals \cite{tunis2013association}. The handling of high risk of bias or low quality studies does not appear to be a focus of quality assessment in oncology journals. The variety of quality assessment scales also points to a problem of inconsistent reporting and makes comparison among similar studies difficult \cite{Balk_2002}.

Another point of interest was that, despite the presence of high risk of bias or low quality studies in the included data, most articles did not conduct further analysis to address the increased bias; it is possible that, because of the varied criteria for assessing quality, most studies lack a clear awareness of which types of tools to use \cite{chalmers1983bias}. Grading of scales also proves to be a problem because of the lack of consistent scale types across papers \cite{J_ni_1999}.

Our study faced certain limitations but also maintained strengths in evaluating the quality of reporting. The analysis was conducted over a short period of less than three months; to compensate, we increased the number of coding rounds to make the analysis more thorough despite the time constraint \cite{Devereaux_2002}. In addition, the articles pooled from our search were not distributed equally in number across journals, so the results may reflect one particular journal more than the others. Our coding procedure, however, covered articles published over several years, so the reporting trend reflects several years rather than a single year.

Narrative styles of presentation were the most common means of describing quality measures. This result makes sense considering that the scales and methods of assessing quality were devised by the authors themselves, or drew on other authors' independent descriptions of quality measures, rather than following a standardized format for grading or measuring quality \cite{Kamal_2014}. A narrative allows the author to be more descriptive than a figure or table. The use of narratives for describing quality measures is thus a consequence of using a wide variety of quality assessment tools and scales. The sporadic use of quality measures has detrimental effects on the validity of findings reported in oncology journals, and inconsistent reporting of quality assessment tools and scales leads to misinterpretation of clinical trial information, thereby negatively impacting the patient.

In conclusion, quality assessment in major oncology journals has room for improvement, particularly with regard to the large number of individual quality assessment tools used across studies, which detracts from the ability to compare data. Additionally, the scales for grading high risk of bias or low quality need to be more uniform to allow comparison across studies. Where studies with high risk of bias are included, additional analysis is important to counter bias in the results.