\subsection{Search results}

Our PubMed search yielded 337 articles, which was reduced to 258 after screening of titles and abstracts. Full-text screening excluded genetic studies, individual patient meta-analyses, primary studies, genomic studies, narrative studies, histologic studies, and letters to the editor. Additionally, two articles could not be retrieved. This left us with 182 systematic reviews from four journals (Figure 1). The greatest share of included abstracts originated from the \textit{Journal of Clinical Oncology}, which contributed 102 articles (58\%). \textit{The Lancet Oncology} was second with 56 abstracts (32\%), \textit{Clinical Cancer Research} third with 18 abstracts (10\%), and \textit{Cancer Research} last with a single abstract (0\%) (Figure 2b). The majority of these abstracts were structured (83\%) (Figure 2a).

\subsection{Abstracts' methods section}

Across all journals, reporting of the PRISMA extension components varied greatly (Figure 1). The majority of systematic reviews or meta-analyses identified themselves as such in the title. Additionally, the number of included studies, information regarding main outcomes, and a general interpretation of results were described in the majority of abstracts in our analysis. Other components were reported less frequently. Information related to the search, such as the databases and dates searched, was reported in only about half of abstracts. Risk of bias or methodological quality appraisals were seldom mentioned, and the strengths and limitations of the evidence were rarely reported. Funding sources and registration information were seldom, if ever, reported in these abstracts.

The abstracts of the included systematic reviews lacked important information about search and exclusion criteria. Over half of the abstracts (54\%) did not state the publication status of the articles from which data were drawn. Only about half (49\%) listed the databases the investigators searched, and only about half (47\%) followed the PRISMA guideline to list the dates of the searches (Figure 2a).

\subsection{Methodological quality assessment}

More than 81\% of the abstracts did not mention any quality or risk of bias assessment of the included articles (Figure 2c), even though 50\% of the articles assessed study quality or risk of bias in the full text. Of the abstracts that did report an assessment, 25 (14\%) mentioned quality assessment and 8 (4\%) mentioned risk of bias; only one abstract (1\%) mentioned both.

\subsection{Abstracts' results section}

Although the majority of abstracts reported results in some form, statistical detail was lacking. Ninety-eight abstracts (54\%) reported neither effect estimates nor confidence intervals for data comparison (Figure 2d). Most abstracts that did include statistics reported both effect size and confidence intervals (43\%), while a small number (3\%) reported effect size alone. Effect sizes and confidence intervals are not the only way to summarize results; abstracts were also counted if they reported percentages, costs, or time spans. Even so, 113 abstracts (62\%) did not report any of these measures for result comparisons (Figure 2a).
\subsection{Abstracts' conclusion section}

Strengths and/or limitations of the review process, such as those related to data collection or analysis, were not reported in 138 abstracts (76\%). Most abstracts (96\%) included an interpretation of their results; however, only 102 abstracts (56\%) accompanied that interpretation with an implication. Lastly, funding information was scarce, appearing in only 25 abstracts (14\%) (Figure 2a). No abstract included a registration number.

\subsection{Comparing journals}

Categories of abstract reporting for each journal are outlined in Figure 3. The major differences between journals concerned the listing of funding information and the structuring of abstracts: \textit{The Lancet Oncology} had a much higher rate of reporting funding information but a much lower rate of structured abstracts.

\subsection{Mean summary score}

Finally, a mean summary score was calculated for each journal (Figure 4). This score took into account the 17 coded items, with 1 point assigned for each item reported. The mean scores were 8.1, 8.6, and 7.9 for \textit{The Lancet Oncology}, the \textit{Journal of Clinical Oncology}, and \textit{Clinical Cancer Research}, respectively. \textit{Cancer Research} was not included because it contributed only one systematic review.
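To make the scoring explicit, the calculation can be written as follows. This is a sketch under the assumption that each of the 17 coded items is scored dichotomously (1 if reported, 0 otherwise) and averaged across a journal's abstracts; the symbols $x_{ik}$, $n_j$, and $S_j$ are notation introduced here for illustration rather than taken from the original coding sheet.

\[
S_j = \frac{1}{n_j}\sum_{i=1}^{n_j}\sum_{k=1}^{17} x_{ik},
\qquad
x_{ik} =
\begin{cases}
1 & \text{if abstract } i \text{ from journal } j \text{ reports item } k,\\
0 & \text{otherwise,}
\end{cases}
\]
where $n_j$ is the number of abstracts from journal $j$ and $S_j$ is that journal's mean summary score (maximum 17).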