Matt Vassar edited textbf_Methods_textbf_Search_Criteria__.tex
almost 9 years ago
Commit id: b88067d4da9a58dace065109e4ca51bdf07b70ac
To standardize the coding process, an abstraction manual was developed and pilot tested. After completing this process, a training session was conducted to familiarize coders with abstracting the data elements. A subset of studies was jointly coded as a group. After the training exercise, each coder was provided with 3 new articles to code independently. Inter-rater agreement on these data was calculated using Cohen's kappa. Since inter-rater agreement was high ($\kappa=0.86$; agreement = 91\%), each coder was assigned an equal subset of articles for data abstraction. We coded the following elements: a) statistical test used to evaluate heterogeneity; b) a priori threshold for statistical significance; c) type of model (random, fixed, mixed, or both); d) whether authors selected a random-effects model based on the significance of the heterogeneity test; e) whether authors used a random-effects model without explanation; f) what type of plot, if any, was used to evaluate heterogeneity; g) whether the plot was published as a figure in the manuscript; h) whether follow-up analysis was conducted and, if so, the type of analysis (subgroup, meta-regression, and/or sensitivity analysis); i) whether heterogeneity was mentioned in writing only; and j) whether authors concluded there was too much heterogeneity to perform a meta-analysis. After the initial coding process, validation checks were conducted such that each coded element was verified by the other coder. After these checks, coders met to discuss disagreements and settle them by consensus. The final data were analyzed using Stata 13.1.
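Cohen's kappa compares the coders' observed agreement with the agreement expected by chance from each coder's marginal label frequencies. As a minimal sketch (the two label lists below are hypothetical placeholders, not the study's coded data), it can be computed as:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: expected overlap of the two marginal label frequencies.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels for one coded element across 10 articles.
a = ["random", "fixed", "random", "random", "fixed",
     "random", "mixed", "fixed", "random", "random"]
b = ["random", "fixed", "random", "fixed", "fixed",
     "random", "mixed", "fixed", "random", "random"]
kappa = cohens_kappa(a, b)
```

On these made-up labels the observed agreement is 0.90 and the chance agreement 0.43, giving kappa of about 0.82; the study reports $\kappa=0.86$ over the actual coded elements.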
\textbf{Evidence-Based Mapping}
We modified an approach proposed by Althuis, Weed, and Frankenfield to perform the evidence mapping process \cite{Althuis_2014}. This approach focused on observational studies and included a map step to evaluate covariates adjusted for in multivariate analyses across the primary studies. In this study, we included a table based on an evaluation of particular risk of bias components pertinent to the selected articles. We performed the following steps during the evidence mapping exercise:
1. We selected a systematic review that compared interventions and measured a specific outcome.
2. We formulated a research question based on the PICOS (population, intervention, control, outcome, study design) method.
3. We reviewed the primary studies from the selected systematic review to find a natural division to begin mapping. We examined the methods sections of each primary study in detail on relevant design features and considered all aspects of the PICOS question as we compared studies. Our initial goal was to categorize studies into two groups, and we created a diagram to visually depict this sorting of primary studies into the relevant categories. After the initial diagram was constructed, we determined additional groupings that could further differentiate primary studies.
4. We developed a second table informed by the Cochrane Risk of Bias tool and the CONSORT Guidelines. Each primary study was examined to determine whether it might be susceptible to bias. The following components were evaluated: allocation concealment, blinding, eligibility of participants, and sequence generation.
5. Last, we compiled all other defining characteristics that could be sources of heterogeneity from the trials. These included patient population characteristics, outcome evaluation characteristics, additional interventions, and study design characteristics. This additional information was placed in the final table to display a summary of the heterogeneity mapping exercise.
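The sorting of primary studies into categories (step 3) and the bias-susceptibility screen (step 4) can be sketched as plain records. The study names, the dividing feature (comparator), and the flag values below are hypothetical placeholders, not data from the selected review:

```python
# Hypothetical primary-study records; fields mirror the PICOS features
# and risk-of-bias components described in the steps above.
studies = [
    {"id": "Trial A", "comparator": "placebo",
     "allocation_concealment": True, "blinding": True},
    {"id": "Trial B", "comparator": "active drug",
     "allocation_concealment": True, "blinding": False},
    {"id": "Trial C", "comparator": "placebo",
     "allocation_concealment": False, "blinding": True},
]

# Step 3: split studies on one natural dividing design feature
# (here, the comparator arm) to seed the evidence-map diagram.
groups = {}
for s in studies:
    groups.setdefault(s["comparator"], []).append(s["id"])

# Step 4: flag any study with a potentially inadequate bias component.
at_risk = [s["id"] for s in studies
           if not (s["allocation_concealment"] and s["blinding"])]
```

Further subdivisions (additional groupings, step 5's heterogeneity sources) would add more fields and repeat the same grouping over them.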