Beyond the sensitivity of forecasts to the assumptions made during their development, many challenges arise when attempting to replicate disruptions and real-world behaviours. This is illustrated by the comparison of recorded air passenger data with forecast passenger numbers, shown in Fig. \ref{369924} and Fig. \ref{729121}, following the eruption of Eyjafjallajökull in 2010 \cite{Steele_2011}. In this example, the system dynamics model accurately predicts the scale and duration of the disruption to air travellers in the local vicinity of the eruption, yet the simulated results do not match the traffic levels immediately preceding or following it. Examination of these periods reveals why: the seasonal variation in passenger numbers is absent from the computer-generated model.

Determining the granularity needed to capture such real-world effects as annual travel patterns, while limiting model complexity, is a challenging problem; it may require that model variables and methods operate at different levels of granularity in order to achieve the transparency needed for wider acceptance. This demands careful thought about the boundary between influences internal and external to the modelled system, and it depends on sufficient data being available from which accurate representations of real-world effects can be extracted (a minimal illustrative formulation of such a seasonal effect is sketched at the end of this section).

Furthermore, even when the scope of a simulation, i.e. the set of influences included so that disruptive phenomena can be reproduced to an accepted level of robustness and generalisation, has been considered in detail, it remains difficult to anticipate where the emergent properties of real-world systems will cause actual responses to diverge from predicted results. The sensitivity of disruptive events to assumed initial conditions gives some indication of the robustness of a prediction (as shown in Fig. \ref{156500} and Fig. \ref{803519}), albeit only in a closed sense, i.e. assuming those initial conditions include the variables that will ultimately drive the disparity between simulated and real-world behaviour. This helps explain why simulations of disruptions are often regarded as good indicators of trends rather than methods for obtaining exact values \cite{Wang2014}. Consequently, Steele's study, and many others like it, illustrate how simulations can accurately represent aspects of observed real-life behaviour while failing to reproduce all external effects and influences.
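To make the granularity trade-off concrete, consider one minimal way of representing the missing annual pattern. As an illustrative sketch, offered here as an assumption for exposition rather than the formulation used in \cite{Steele_2011}, a baseline passenger demand trajectory $D_0(t)$ can be modulated by a harmonic seasonal term:
\begin{equation}
D(t) = D_0(t)\left[1 + A \sin\!\left(\frac{2\pi t}{T} + \phi\right)\right],
\end{equation}
where $T$ is the twelve-month period, $A$ the seasonal amplitude, and $\phi$ a phase offset aligning the peak with the observed travel season. Each term of this kind increases model granularity, but also introduces parameters ($A$, $\phi$) that can only be estimated reliably where sufficient historical data exist.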
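The sensitivity to assumed initial conditions can likewise be given a simple quantitative form. Again as an assumed illustration, rather than a measure taken from the cited studies, trajectories started from a nominal initial state $x_0$ and a perturbed state $x_0 + \delta$ can be compared via
\begin{equation}
S(t) = \frac{\left\lVert x(t;\, x_0 + \delta) - x(t;\, x_0) \right\rVert}{\left\lVert \delta \right\rVert},
\end{equation}
so that large values of $S(t)$ flag forecast horizons at which point predictions are fragile, consistent with the view, noted above, of disruption simulations as indicators of trends rather than sources of exact values.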