\subsection{Search results}

Our PubMed search resulted in 337 articles, which was reduced to 258 after screening of titles and abstracts. Full-text screening excluded genetic studies, individual patient meta-analyses, primary studies, genomic studies, narrative studies, histologic studies, and letters to the editor. Additionally, two articles could not be retrieved. This left 182 systematic reviews from four journals (Figure 1). The largest proportion of included abstracts originated from the \textit{Journal of Clinical Oncology}, which contributed 102 articles (58\%). \textit{The Lancet Oncology} was second with 57 abstracts (32\%), and \textit{Clinical Cancer Research} was third with 18 abstracts (10\%) (Figure 2b). The majority of these abstracts were structured (83\%) (Figure 2a).

\subsection{Abstracts' methods section}

The abstracts of the included systematic reviews lacked important information about search and exclusion criteria. Over half of the abstracts (54\%) did not include the publication status of the articles used for their data. Only about half (49\%) of the abstracts listed the databases the investigators searched (Figure 2k), and only about half (47\%) followed PRISMA's guideline to list the dates of the searches (Figure 2a).

\subsection{Methodological quality assessment}

\subsection{Abstracts' results section}

Although the majority of abstracts reported results in some form, statistical analysis was lacking. Ninety-eight abstracts (54\%) did not report either effect sizes or confidence intervals for data comparison (Figure 2d). Of the abstracts that did include statistics, most reported both an effect size and a confidence interval (43\%); a small number (3\%) reported only an effect size. Effect sizes and confidence intervals are not the only way to present results, so abstracts were also tallied when they reported percentages, costs, or time spans. Even so, 113 abstracts (62\%) did not report any of these units for comparison of results (Figure 2a).

\subsection{Abstracts' conclusion section}

Strengths and/or limitations of the data collection or analysis were not reported in 138 abstracts (76\%). Although 96\% of abstracts included an interpretation of their results, only 102 abstracts (56\%) included an implication with that interpretation. Lastly, reporting of funding information was scarce, as only 25 abstracts (14\%) included it (Figure 2a). No abstracts included a registration number.

\subsection{Comparing journals}

Categories of abstract reporting for each journal are outlined in Figure 3. The major differences between journals occurred in the listing of funding information and the structuring of abstracts. \textit{The Lancet Oncology} had a much higher rate of providing information about funding but a much lower rate of structured abstracts.

\subsection{Mean summary score}

A final value calculated for each journal was a mean summary score (Figure 4). This score took into account the 17 coded items, assigning a value of 1 point for each item. The mean scores were 8.1, 8.6, and 7.9 for \textit{The Lancet Oncology}, the \textit{Journal of Clinical Oncology}, and \textit{Clinical Cancer Research}, respectively. \textit{Cancer Research} was not included because it contained only one systematic review.
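The mean summary score admits a simple formal statement. The following is a minimal sketch of that calculation based on the description above; the symbols $x_{ijk}$, $S_{ij}$, $\bar{S}_j$, and $n_j$ are notation introduced here for illustration rather than taken from the original analysis:

\[
S_{ij} = \sum_{k=1}^{17} x_{ijk}, \qquad \bar{S}_j = \frac{1}{n_j} \sum_{i=1}^{n_j} S_{ij},
\]

where $x_{ijk} = 1$ if abstract $i$ in journal $j$ reports coded item $k$ (and $0$ otherwise), $S_{ij}$ is the summary score of abstract $i$, and $n_j$ is the number of abstracts from journal $j$.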