Chelsea Koller edited 5.tex almost 9 years ago

Commit id: 93032788eb5d5628e66862caeb7c16c95fcc9711

(((((("Journal of clinical oncology : official journal of the American Society of Clinical Oncology"[Journal] OR "Nature reviews. Cancer"[Journal]) OR "Cancer research"[Journal]) OR "The Lancet. Oncology"[Journal]) OR "Clinical cancer research : an official journal of the American Association for Cancer Research"[Journal]) OR "Cancer cell"[Journal]) AND ("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]) AND (((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) OR systematic review[Title/Abstract]) AND ("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]) AND (("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms]).  Of the resulting 337, 79 were excluded by Confidence (confidence.org) based on an initial screening of the titles and abstracts. The remaining 258 full-text articles were obtained via EndNote; and PDFs were gathered from the internet and through inter-library loan. Google Drive was used to store full-text copies of articles in the study, store full-text PDF copies of articles included in the study, store the coding sheets, store other documents pertinent to the study, team assignments, article coding assignments, and abstraction manuals. Coding keys were developed for each study and pilot-tested on a handful of articles. Training sessions were held to discuss coding keys. The whole group coded three articles together using the key, and then each assigned pair of researchers coded three different articles independently. (Data were analyzed for inter-rater agreement using Kappa statistic????) Each partner coded the assigned half the articles, and then validated the partner's coding of the opposite half of the articles. Validation checks were completely within pairs, and disagreements were settled by consensus. Articles were excluded if they were genetic studies (n=40), individual patient data (n=20), not a meta-analysis or systematic review (n=9), genomic study (n=3), the article could not be retrieved (n=2), a letter to the editor (n=1), or a histological study (n=1). The remaining 182 articles were coded based on a standardized abstraction manual constructed from the criteria of PRISMA. Eligible trials were coded independently by members of the research team. Verification checks of each coded element were performed by a second team member, and disagreements were handled by consensus. 1.1 Search Criteria  A PubMed search was conducted using the following search string: ((((((“Obstetrical & gynecological survey”[Journal] OR “Obstetrics and gynecology”[Journal]) OR “American journal of obstetrics and gynecology”[Journal]) OR “Gynecologic oncology”[Journal]) OR “Human reproduction update”[Journal]) OR “BJOG : an international journal of obstetrics and gynaecology”[Journal]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND (((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) OR systematic review[Title/Abstract]) AND (“2007/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]) AND ((“2012/01/01”[PDAT] : “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]). This search strategy was adapted from a previously published approach that is sensitive to identifying systematic reviews and meta-analyses (Montori 2005). Searches were conducted on 18 May and 26 May 2015.  1.2 Review selection and data extraction  We used Covidence (covidence.org) to initially screen articles based on the title and abstract. 
1.2 Review selection and data extraction

We used Covidence (covidence.org) to initially screen articles based on the title and abstract. To qualify as a systematic review, studies had to summarize evidence across multiple studies and provide information on the search strategy, such as search terms, databases, or inclusion/exclusion criteria. Meta-analyses were classified as quantitative syntheses of results across multiple studies (Onishi 2014). Two screeners independently categorized all articles, and a follow-up meeting was held to discuss differences in screening. All disagreements were settled by consensus. Full-text articles were obtained via EndNote.

To ensure accurate coding and clear directions, an abstraction manual was developed and piloted prior to training coders. A training session was conducted to familiarize coders with the process, and a subset of studies from the screening process was jointly coded as a group. After training, each coder was given three new articles to code independently. These data were analyzed for inter-rater agreement by calculating the Kappa statistic. As inter-rater agreement was acceptable (k=0.65; agreement=75 percent), each coder was assigned an equal number of articles to code. After the initial coding process, validation checks were conducted such that each coded element was verified by a second coder. After these checks were performed, coders met to discuss disagreements and settle them by consensus. Analysis of the final data was conducted using Stata 13.1. Google Drive was used to store full-text PDFs of included articles, coding sheets, and other documents pertinent to the study.
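The agreement figures above are reported without the underlying formula. Assuming the statistic is Cohen's kappa for two raters, it is defined as
\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]
where $p_o$ is the observed proportion of agreement and $p_e$ is the proportion of agreement expected by chance. Under that assumption, the reported values ($p_o = 0.75$, $\kappa = 0.65$) imply $p_e = (0.75 - 0.65)/(1 - 0.65) \approx 0.29$; this back-calculation is only a consistency check, not a value reported by the study.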
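As a companion to the formula above, the pilot agreement check could be reproduced from the coding sheets roughly as follows. The coder label vectors are hypothetical placeholders, and the study's actual analysis was run in Stata 13.1, so this Python snippet is only an illustrative sketch.

# Sketch: inter-rater agreement for two coders on pilot-coded elements.
# The label vectors are hypothetical placeholders, not study data.
from sklearn.metrics import cohen_kappa_score

coder_a = ["yes", "no", "yes", "unclear", "yes", "no", "no", "yes"]
coder_b = ["yes", "no", "no", "unclear", "yes", "no", "yes", "yes"]

kappa = cohen_kappa_score(coder_a, coder_b)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

print(f"Observed agreement: {observed:.2f}")
print(f"Cohen's kappa:      {kappa:.2f}")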