Quality of systematic review and meta-analysis abstracts in oncology journals


Purpose: The purpose of this study was to evaluate the quality of reporting in the abstracts of oncology systematic reviews against the PRISMA guidelines for abstract reporting.

Methods: Oncology systematic reviews and meta-analyses from four journals (The Lancet Oncology, Clinical Cancer Research, Cancer Research, and Journal of Clinical Oncology) were identified using a PubMed search. The resulting 337 abstracts were screened for eligibility, and 182 were coded using a standardized abstraction manual constructed from the PRISMA criteria. Eligible systematic reviews were coded independently and later verified by a second coder, with disagreements resolved by consensus; these 182 abstracts comprised the final sample.

Results: The number of included studies, information regarding main outcomes, and a general interpretation of results were described in the majority of abstracts. In contrast, risk of bias or methodological quality appraisals, the strengths and limitations of the evidence, funding sources, and registration information were rarely reported. By journal, the most notable difference was the higher percentage of abstracts reporting funding sources in The Lancet Oncology. No detectable upward trend in mean abstract scores was observed after publication of the PRISMA extension for abstracts.

Conclusion: Overall, the reporting of essential information in oncology systematic review and meta-analysis abstracts is suboptimal and could be greatly improved.

Keywords: Review, Systematic; Meta-Analysis; Cancer; Medical Oncology; Abstracting as Topic; Funding


Scanning journal abstracts allows clinicians to quickly determine the relevance of a particular article to their clinical practice (Fleming 2012). The abstract should be written clearly and in sufficient detail for clinicians to decide whether to read on if the article is in hand or to download an electronic version for further reading (Hopewell 2008). A recent study found that users of biomedical literature who searched PubMed predominantly viewed only the abstract after reviewing the titles returned by their searches; these abstract views were more than twice as frequent as full-text views (Islamaj 2009). Despite the importance of abstracts in conveying essential information to users of research, however, clear and comprehensive reporting of core study aspects remains an issue. To address concerns about the quality and clarity of abstract reporting in clinical trials, the Consolidated Standards of Reporting Trials (CONSORT) group developed a minimum set of essential information for inclusion in an abstract (Hopewell 2008). Since the CONSORT abstract extension was published in 2008, some improvement in abstract reporting has been noted, although deficiencies remain (Can 2011).

More recently, systematic reviews have played a growing role in decision making for clinical practice. While they give users of the biomedical literature access to a higher quality of evidence, systematic reviews are still hampered by problems in the quality of abstract reporting (Beller 2013). This prompted the release of an extension to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement detailing a checklist of essential items to include in a systematic review abstract.

Since the PRISMA extension for abstracts was published in 2013, little formal evaluation of guideline adherence in medical journals has been conducted; to date, only Kiriakou et al.'s investigation of systematic review abstracts in oral implantology has addressed the question (Kiriakou 2014). We therefore analyzed the extent to which systematic review authors reported this information in abstracts from a sample of leading oncology journals. Specifically, we examined how well authors published in these journals adhered to the PRISMA extension guidelines for abstracts and whether adherence had changed since the release of the extension.

Materials and Methods

Search Criteria

Using the h5-index from Google Scholar Metrics, we selected the six highest-ranking oncology journals based on index scores. We searched PubMed using the following search string: ((((((“Journal of clinical oncology : official journal of the American Society of Clinical Oncology”[Journal] OR “Nature reviews. Cancer”[Journal]) OR “Cancer research”[Journal]) OR “The Lancet. Oncology”[Journal]) OR “Clinical cancer research : an official journal of the American Association for Cancer Research”[Journal]) OR “Cancer cell”[Journal]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND (((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) OR systematic review[Title/Abstract]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND ((“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]). This search strategy was adapted from a previously established method that is sensitive for identifying systematic reviews and meta-analyses (Montori 2005). Searches were conducted on May 18 and May 26, 2015.
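For readers who wish to rerun a comparable query programmatically, the sketch below uses Biopython's Entrez utilities to submit a PubMed search; this is purely illustrative (our searches were run through the PubMed interface), and the query shown is abbreviated to two journals for readability, so the full search string above should be substituted.

```python
from Bio import Entrez  # Biopython

Entrez.email = "your.name@example.org"  # NCBI requires a contact email address

# Abbreviated version of the search string reported above (illustrative only).
query = (
    '("The Lancet. Oncology"[Journal] OR "Cancer research"[Journal]) '
    'AND (meta-analysis[Title/Abstract] OR meta-analysis[Publication Type] '
    'OR systematic review[Title/Abstract]) '
    'AND ("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # total number of matching citations
print(record["IdList"])  # PMIDs that can be exported for title/abstract screening
```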

Review selection and data extraction

We used Covidence to screen articles initially based on the title and abstract. To qualify as a systematic review, a study had to summarize evidence across multiple studies and provide information on the search strategy, such as search terms, databases, or inclusion/exclusion criteria. Meta-analyses were classified as quantitative syntheses of results across multiple studies (Onishi 2014). Two screeners independently reviewed the title and abstract of each citation and made a decision regarding its appropriateness for inclusion. The screeners then met to revisit citations in conflict and reach a final consensus. Following the screening process, full-text versions of included articles were obtained via EndNote. Full-text screening was then completed, with conflicts resolved by group consensus, and additional articles that did not meet the inclusion criteria were excluded.
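Covidence flags screening conflicts between reviewers automatically. Purely as an illustration of the dual-screening logic (the file layout, column names, and data below are hypothetical, not taken from our records), the set of citations requiring consensus discussion could be reconstructed from two screeners' decision exports as follows:

```python
import pandas as pd

# Hypothetical exports: one include/exclude decision per citation per screener.
screener_a = pd.DataFrame({"pmid": [101, 102, 103], "include": [True, False, True]})
screener_b = pd.DataFrame({"pmid": [101, 102, 103], "include": [True, True, True]})

merged = screener_a.merge(screener_b, on="pmid", suffixes=("_a", "_b"))
conflicts = merged[merged["include_a"] != merged["include_b"]]

print(conflicts)  # citations to revisit at the consensus meeting
```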

Coding Abstracts

To ensure the accuracy of the coding process, an abstraction manual was developed and piloted prior to training coders. A training session was conducted to familiarize coders with the process, and subsets of studies from the screening process were jointly coded as a group. After training, each coder was given three new articles to code independently. These data were analyzed for inter-rater agreement by calculating Cohen's kappa. As inter-rater agreement was acceptable (κ = 0.65; agreement = 75 percent), each coder was assigned an equal number of articles for data abstraction. Elements from the abstraction manual are presented in Table 1. After the initial coding process, validation checks were conducted in which each coded element was verified by a second coder, and a meeting was held to discuss disagreements and settle them by consensus. Figure 1 details the study selection process. Data from the final sample of 182 articles were analyzed using Stata 13.1 and are publicly available on Figshare.
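For reference, Cohen's kappa adjusts the observed proportion of agreement p_o for the agreement expected by chance p_e; the formula below is the standard definition rather than a reproduction of our analysis code, and the reported values (p_o = 0.75, κ = 0.65) imply a chance-expected agreement of roughly 0.29:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad 0.65 = \frac{0.75 - p_e}{1 - p_e} \;\Rightarrow\; p_e \approx 0.29
\]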