\section{Discussion}

\subsection{Abstract Structure}
According to our results, only 83 percent of oncology systematic review and meta-analysis abstracts are structured. In a similar analysis of systematic review abstracts in oral implantology journals, Kiriakou (2013) found that 97.9 percent of abstracts were structured. It stands to reason that more conscientious authors would write structured abstracts, because such abstracts contain more information, are easier to read and to search, and are welcomed by readers (Hartley 2014). Structured abstracts are favored not only by readers but also by publishers, as ``over 50 percent of the papers with more detailed abstracts\ldots [are] subsequently published'' (von Hardenburg 2013). Structured abstracts may also foster collaboration in the scientific and medical communities and keep valuable research from going unpublished or unrecognized.

\subsection{Databases}
In our review of the oncology literature, only 49 percent of abstracts stated which databases were searched. In a similar study published in \textit{Systematic Reviews}, 60 percent of systematic reviews from various areas of medicine reported their databases in the abstract (Beller 2013). Oncology publications therefore appear to report database searches in abstracts less often than other areas of medicine: in every year since 2007 except two, fewer than 60 percent of oncology systematic review abstracts reported the databases searched. Although oncology abstracts show an upward trend in database reporting of 2.55 percentage points per year, oncology systematic reviews still have far to go before reaching an acceptable reporting rate.

\subsection{Search Date}
Only 47 percent of the oncology systematic review abstracts included a search date. A similar 2013 study by Beller et al., which analyzed systematic reviews from all areas of medicine, found that 90 percent of abstracts included a search date.
This suggests that oncology journals lag in compliance with the PRISMA guidelines. Despite a positive growth rate of 2.39 percentage points per year, the percentage of oncology systematic review abstracts including a search date has remained consistently below 70 percent.

\subsection{Publication Information}
Fewer than 46 percent of abstracts reported publication statuses in the criteria for included studies. This suggests that, more than half of the time, reporting publication information for included studies is treated as unimportant in abstract writing.

\subsection{Risk of Bias/Quality Assessment}
Eighty-one percent of the abstracts of oncology systematic reviews did not report risk of bias or quality assessment. We know from a related analysis by our group (Sarah, in preparation) that exactly half of the 182 meta-analyses and systematic reviews we analyzed assessed risk of bias or quality within the article itself. This means that 31 percent of the articles we analyzed performed an assessment but did not report it in their abstracts. Thus, quality assessment and risk of bias are under-reported in oncology systematic reviews and meta-analyses. Compared with systematic review articles in many other areas of medicine, which reported risk-of-bias results only 32 percent of the time (Willis\cite{Willis_2011}), systematic reviews in oncology are ahead, since 50 percent report bias and 69 percent report that bias in their abstracts. ``While almost all recent diagnostic accuracy reviews evaluate the quality of included studies, very few consider results of quality assessment when drawing conclusions.
The practice of reporting systematic reviews of test accuracy should improve if readers not only want to be informed about the limitations in the available evidence, but also on the associated implications for the performance of the evaluated tests.''\cite{Ochodo_2014}

\subsection{Any Results}
Statistical evaluation of results, expressed as effect ratios or confidence intervals, was poorly reported in oncology abstracts: only 46 percent of abstracts reported either. Among the three oncology journals compared, \textit{The Lancet Oncology} reported effect ratios and confidence intervals least often, while the \textit{Journal of Clinical Oncology} had the highest percentage of risk ratios and/or confidence intervals reported. Again, oncology articles appear to lag in adhering to the PRISMA guidelines\cite{23668725}. Willis also found that only 32 percent of full articles reported the results of a risk-of-bias assessment; their abstracts, then, could have reported no more than that (Faggion 2014).

\subsection{Units}
Only 38 percent of our abstracts reported their results in units that a layperson might understand. This is one area in which authors and journals could work to significantly improve PRISMA compliance.

\subsection{Interpretations and Implications}
A large percentage of the oncology abstracts included in this study contained an interpretation of the results (96 percent), which is consistent with the findings of Kiriakou (2013), in which 93.6 percent of dental abstracts had conclusions. However, we divided our conclusion evaluations into two parts, interpretation and implications, and found that the implications of results were under-reported, at 56 percent. \textit{The Lancet Oncology} performed worst on implications. We believe this severely reduces the value of an abstract, because implications are how readers translate results into real-world applications.
\subsection{Strengths and Limitations}
Only 24 percent of oncology systematic review and meta-analysis abstracts adequately reported strengths or limitations. In dental meta-analyses and systematic reviews, only 35.5 percent of abstracts reported limitations\cite{25420433}. While reporting limitations is a problem in both fields, oncology still lags behind other medical specialties in adhering to the PRISMA guidelines.

\subsection{Funding and Registration}
Funding was frequently under-reported in oncology abstracts, with 86 percent of abstracts not reporting the source of their funding. \textit{Clinical Cancer Research} did not report funding in any of its abstracts, while \textit{The Lancet Oncology} had the highest percentage of funding information, at 33 percent (20 abstracts). None of the abstracts reported a registration number, which is consistent with Kiriakou's finding that zero percent of dental systematic review and meta-analysis abstracts reported registration numbers either. Oncology does not appear to lag behind other areas of medicine in reporting registration numbers, because hardly anyone reports them. This is a major aspect of abstract writing that must change if journals and authors want their abstracts to be PRISMA compliant.

\subsection{Possible Sources of Error}
We are confident that there were no errors in coding whether abstracts were structured or unstructured, included databases or search dates, or mentioned study quality or risk of bias, because this coding was objective and straightforward. However, there were several disagreements between researchers in the more subjective coding, such as over the definitions of participants, interventions, comparators, and outcomes in the objective statements of articles. By coding as a team, we reached decisions both parties could agree on.

Coding for inclusion criteria was difficult because it was more subjective.
Even though the two researchers agreed on definitions by the time results were tabulated, the even split for comparators and publication statuses could reflect either authors not following PRISMA guidelines or differing opinions between the two researchers in this study. Either way, authors should take care in the future to clearly define these criteria and include them in their abstracts.

Coding for units was similarly subjective. Another pair of researchers performing the same analysis might tabulate a slightly higher or lower number of abstracts as including units, owing to differing opinions about the definition of units. Here too, authors should clearly define these criteria and include them in their abstracts.

Interpretation of results was relatively easy to code compared with the implications of a result. The authors initially struggled to define what counted as the implications of a study, and likewise what counted as its strengths and limitations. In both cases, however, the authors reached a consensus, so the coding should be consistent, though a few accidental inconsistencies may remain.

\subsection{Future Extensions of the Current Work}
Other researchers within our program are currently assessing the quality of abstracts of systematic reviews and meta-analyses in obstetrics, pediatric oncology, and anesthesiology journals. We hope to analyze abstract quality in other areas of medicine as well.