Evaluation: Evaluating the impact of public health interventions is
key to evidence-based management. Unfortunately, it is often logistically, and
ethically, impossible to conduct classic experimental tests of public health
interventions with paired controls. Dynamic models have been instrumental
in allowing the development of in silico
controls against which to quantify the impact of public health
interventions. In the absence of true controls, retrospective estimates of
the expected dynamics are necessarily out-of-sample
predictions and thus easily subject to criticism. One can evaluate the performance of the model
in predicting the observed dynamics before and after the intervention as a formal
test of goodness-of-fit, from which one can assess confidence in the
out-of-sample predictions for the non-intervention case (see the Niger example
below).
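As an illustration of this logic, a model can be fit to the pre-intervention
period and its out-of-sample predictions checked against the post-intervention
observations. The sketch below is hypothetical: the weekly case counts are
invented, and the simple exponential-growth model is a stand-in for the dynamic
models discussed here, not the method used in the studies below.

```python
import math

# Hypothetical weekly case counts; the first 8 weeks are treated as the
# pre-intervention period, the last 4 as post-intervention.
cases = [5, 8, 11, 17, 26, 38, 57, 85, 120, 170, 250, 360]
pre = cases[:8]

# Log-linear least-squares fit of an exponential-growth model,
# log(cases_t) ~ a + r * t, to the pre-intervention weeks.
ts = list(range(len(pre)))
ys = [math.log(c) for c in pre]
t_bar = sum(ts) / len(ts)
y_bar = sum(ys) / len(ys)
r = (sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys))
     / sum((t - t_bar) ** 2 for t in ts))
a = y_bar - r * t_bar

def predict(t):
    """Expected cases in week t under the fitted growth model."""
    return math.exp(a + r * t)

def mean_abs_rel_error(weeks):
    """Crude goodness-of-fit: mean absolute relative prediction error."""
    return sum(abs(predict(t) - cases[t]) / cases[t] for t in weeks) / len(weeks)

# If the model predicts the post-period about as well as the pre-period,
# one can place more confidence in its out-of-sample projections.
pre_err = mean_abs_rel_error(range(8))
post_err = mean_abs_rel_error(range(8, 12))
print(f"pre-period error: {pre_err:.2f}; post-period error: {post_err:.2f}")
```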
Between late 2003 and mid 2004, a large measles outbreak in Niamey,
Niger resulted in over 10,000 cases and 400 deaths. In April of 2004, Medecins
Sans Frontieres (MSF) conducted an outbreak response immunization (ORI)
campaign, targeting children between 5 and 59 months of age, with the goal of
reaching 50% of the target population in the city. Outbreaks of immunizing
pathogens, like measles, are necessarily self-limiting, i.e. the spread of
infection removes the susceptibles, through natural immunization, necessary to
permit continued spread, and an outbreak will necessarily decline once a
sufficient fraction of the susceptible population has been infected. Further,
many directly transmitted infections are highly seasonal due to regular
patterns of human aggregation (e.g. due to school, holiday, or agricultural
cycles; Grassly 2006), which may further hasten
the decline of an outbreak in the absence of interventions. Thus,
while the 2003-4 outbreak in Niamey declined following the ORI, MSF conducted a
series of modeling analyses to retrospectively quantify the impact of the
vaccination campaign on the outbreak beyond that which would have been expected
due to the natural course of the outbreak. In separate analyses, a dynamic
susceptible-infected-recovered (SIR) model was fit to the pre-campaign time
series to estimate local transmission rates and pre-outbreak susceptibility for
the outbreak at the commune (Grais 2008) and health zone (Grais 2006) levels.
Strong seasonality in the rate of measles
transmission in Niamey due to rural-urban migration (Bharti 2011) meant that
transmission rates likely dropped naturally shortly after the ORI, helping to
hasten the end of the outbreak, but also limiting the impact of the
campaign itself. Conlan et al. (Grais 2008)
estimated that the campaign resulted in an 11% reduction in outbreak size. However,
the authors concluded that ORI, as a strategy, could have resulted in much greater
impact had the response time been faster.
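The counterfactual logic of these analyses can be illustrated with a minimal
sketch. The discrete-time SIR model and all parameter values below (population
size, transmission and recovery rates, campaign timing and coverage) are
hypothetical stand-ins, not the fitted models of the Niamey analyses; the point
is only the comparison of simulated outbreaks with and without an ORI.

```python
# Hypothetical discrete-time (daily) SIR model: compare a simulated
# outbreak with and without an outbreak response immunization (ORI).
# All parameters are invented, not the fitted Niamey values.

def simulate_sir(beta, gamma, s0, i0, n, days, ori_day=None, ori_coverage=0.0):
    """Return the cumulative outbreak size; on ori_day, a fraction
    ori_coverage of remaining susceptibles is vaccinated (removed)."""
    s, i, cum_cases = float(s0), float(i0), float(i0)
    for t in range(days):
        if t == ori_day:
            s *= 1.0 - ori_coverage
        new_inf = beta * s * i / n   # new infections this day
        s -= new_inf
        i += new_inf - gamma * i     # infections minus recoveries
        cum_cases += new_inf
    return cum_cases

n = 100_000                          # hypothetical city population
beta, gamma = 0.5, 1.0 / 7.0         # transmission and recovery rates (per day)
s0, i0 = 0.4 * n, 10                 # initial susceptibles and infecteds

# The "in silico control": the same outbreak without any campaign.
no_campaign = simulate_sir(beta, gamma, s0, i0, n, days=365)
with_ori = simulate_sir(beta, gamma, s0, i0, n, days=365,
                        ori_day=60, ori_coverage=0.5)

reduction = 1.0 - with_ori / no_campaign
print(f"simulated reduction in outbreak size due to ORI: {reduction:.1%}")
```

Because the in silico control and the campaign scenario differ only in the
vaccination pulse, the difference in cumulative cases isolates the campaign's
contribution from the outbreak's natural, self-limiting decline.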
Rather than fit an explicit model, Minetti et al. (Minetti 2013) presented an
empirical comparison of campaign impact during a measles outbreak
in Malawi in 2010. Here, as in the example from Niamey, an ORI was conducted during
an outbreak and the goal was to evaluate the impact of the campaign on reducing
case burden over and above the natural progression of the outbreak. The authors
developed a metric of campaign impact based on the change in the relative
incidence of cases in age classes targeted by the campaign and those not
targeted; i.e. campaign impact should be reflected in relatively fewer cases
in the age groups targeted by the campaign. To assess the potential for
time-varying dynamics that may have biased this metric – e.g. because attack
rates in the age classes targeted by the campaign may have declined faster than
in the non-campaign age classes – Minetti et al. (Minetti 2013) compared the
relative
age-specific attack rates throughout the outbreak in districts that did not
have campaigns. Thus, they used the non-campaign districts as a control
against which to assess the impact of the epidemic dynamics themselves on the
cessation of the outbreak. They both showed that there were no temporal trends
in the relative attack rates in the absence of campaigns and documented the
observed variability between campaign-target and non-target age groups, which
allowed them to quantify how large a change in the campaign districts would be
needed to indicate significance relative to random chance. This analysis was
uncommon because it required that comparable surveillance be collected in
regions where campaigns were not conducted.
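The age-ratio comparison described above can be sketched as follows. All case
counts and the null distribution below are invented for illustration; this is
not Minetti et al.'s actual calculation, only the shape of the logic: compare
the change in the target-age share of cases in a campaign district against the
variability of that same change in non-campaign districts.

```python
# Hypothetical sketch of an age-ratio impact metric: the share of cases in
# campaign-target age groups before vs after the ORI, compared against the
# variability of the same quantity in non-campaign (control) districts.
# All counts below are invented for illustration.

def target_share(target_cases, other_cases):
    """Fraction of all cases occurring in campaign-target age groups."""
    return target_cases / (target_cases + other_cases)

# Campaign district: (target-age cases, other-age cases) before and after.
share_before = target_share(800, 200)
share_after = target_share(300, 200)
change_campaign = share_after - share_before  # impact signal, here negative

# Pre/post changes in the same share observed in non-campaign districts
# form an empirical null distribution for the no-campaign case.
null_changes = [-0.03, 0.01, -0.02, 0.04, 0.00, -0.01]
null_spread = max(abs(c) for c in null_changes)

# Crude significance check: is the campaign-district change larger than
# anything seen without a campaign?
exceeds_null = abs(change_campaign) > null_spread
print(f"change in target-age share: {change_campaign:+.2f}; "
      f"exceeds null variability: {exceeds_null}")
```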
In both the Niger and Malawi case studies, understanding of the
dynamics of measles outbreaks was critical to the development of an appropriate
metric for evaluating the campaigns. Simple declines in disease rates are not a
sufficient indicator of program success, as natural phenomena, such as
susceptible depletion due to epidemic spread, may cause declines that could be
misinterpreted as due to interventions. Dynamic models, or at least a dynamic
understanding of epidemic behavior, can be useful in defining an appropriate
null expectation against which to evaluate the impact of interventions.
Dynamic models are commonly used to evaluate the
potential impact of campaigns prior to implementation. In principle, this is directly analogous to
the example from Niamey, though both the campaign and non-campaign strategies
would be simulated in advance.
This application is particularly relevant when there may be unexpected
dynamic feedbacks as a result of the intervention (see "Dynamic Feedbacks" below).
Though
rarely done, simulation prior to the implementation of interventions provides
an opportunity to test model predictions and thus 1) evaluate the ability of
models to be used in future scenarios, 2) quantify the performance of
interventions relative to expectation, or 3) identify critical uncertainties in
the model (e.g. mis-specified parameters or model structure) that can be
improved for future applications.