Most of the abstracts were structured (83\%) (Figure 2b). Once data were extracted, more than 81\% of the abstracts did not mention any quality or risk of bias assessments (Figure 2c), even though 50\% of the articles assessed study quality or risk of bias in their methods sections (Sarah). Over half of the abstracts (54\%) did not report the publication status of the studies from which they drew their data (Figure 2d). Reporting of funding information was scarce, as only 25 abstracts (14\%) included it (Figure 2e).

The conclusions of the abstracts almost always included an interpretation of the data (96\%) (Figure 2f); however, only 102 abstracts (56\%) paired those interpretations with an implication (Figure 2g). Effect sizes and confidence intervals are not the only ways to report results; abstracts could also report percentages, costs, or time spans. One hundred thirteen abstracts (62\%) did not report any of these sorts of units for result comparisons (Figure 2h). Strengths and/or limitations of the data collection or analysis were not reported in 138 abstracts (76\%) (Figure 2i).

Only about half of the abstracts (47\%) followed PRISMA’s guideline to list the dates of the searches (Figure 2j), and only about half (49\%) of the oncology abstracts listed the databases searched (Figure 2k). As for the presentation of data, 98 abstracts (54\%) reported neither effect sizes nor confidence intervals for data comparison, 78 abstracts (43\%) included both, and 6 abstracts (3\%) reported only an effect size (Figure 2l).