Having outlined the problem definition and research strategy in the previous chapter, this chapter first considers the challenges faced when attempting to model or simulate future technology substitutions in general terms, before discussing specific concerns that arise when comparisons of dissimilar time series are used as a basis for modelling, and when attempting to reproduce real-world behaviours. Typical approaches to time series analysis are then presented, together with techniques better suited to capturing real-world dynamics. This is followed by an overview of commonly used goodness-of-fit measures that might be employed in comparing observed and predicted datasets from models. Based on these discussions, relevant techniques and data sources are then mapped against the analysis stages envisaged to address the research questions posed in chapter 3. An overview of expected method limitations is also provided, although a more detailed review of each stage of analysis is included in chapters 5 and 6.

\section{Challenges presented by using modelling and simulations in forecasting}

Forecasts of prospective market conditions are commonly used in commerce and policy making, and are increasingly based on computer-generated simulations, in order to provide guidance on the implications of possible future scenarios. The acceptance of conclusions produced in this inherently speculative manner varies greatly with the evidence provided and the target audience concerned, particularly in the case of disruptive findings. Whilst computer-generated visions of the future are now found in many different industries \cite{8bk9uk,S.A.S2013,RN854,RN732,RN636}, and it has been argued that these may be capable of replicating human behaviours (as illustrated in chapter 2), the professed usefulness of such estimates varies widely. Simulated models of human decision-making may anticipate the success or failure of strategic commercial ventures, and are gradually incorporating more in-depth socio-economic analysis in their functionality. However, applying ever more sophisticated computational techniques, which may combine complex numerical simulation with qualitative human factors in an attempt to capture real-world influences, raises further questions about the validity of the forecasting assumptions being made and the ultimate credibility of the predictions created. This is particularly true where large disruptive changes are presented in conclusions \cite{govuk}, such as may be found in technology substitutions. Equally, the complexity often associated with computational methods can make them less accessible to audiences than other, more transparent, forecasting approaches (such as industry surveys or expert reasoning).
In many cases the additional need for skilled software developers and operators to generate these simulations introduces further methodological uncertainties, necessitating lengthy calibration processes to ensure that a customer-endorsed level of credibility is achieved; this in turn often requires significant development time and computational expense. To address this complexity, forecasters may disclose the uncertainties present in their methods, demonstrate the robustness of the software platforms used, or provide evidence that human errors have been eliminated from procedures, among many other validation approaches used in reporting findings. Cross-checking simulations against alternative methods, replicating historical data as a benchmark of performance, and providing clarifying statements to help rationalise conclusions are also common practice, although the extent to which these are carried out varies notably from case to case. To better understand which of the many possible forecast features are most critical for demonstrating the credibility and validity of computer-generated predictions (with emphasis on the acceptance of simulated technological disruptions), a systematic review is presented in this section, identifying common factors that emerge from the existing literature on simulation methodologies and virtual models of disruptions. These factors are mapped against two simulation techniques often used to predict disruptions (i.e. agent-based modelling and system dynamics, introduced in more detail in section \ref{655650}). The relevance of each theme is then assessed for a sample audience via a structured survey, to gauge the most effective means of demonstrating forecast credibility.
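As a minimal sketch of one validation practice noted above, replicating historical data as a benchmark of performance, the following example computes a mean absolute percentage error (MAPE) between an observed series and a simulated hindcast. The data values, series, and function name are hypothetical, included only to illustrate how such a benchmark might be quantified; they are not drawn from this study.

```python
def mape(observed, simulated):
    """Mean absolute percentage error between an observed series and a
    simulated hindcast; lower values indicate a closer historical fit.
    Zero-valued observations are skipped to avoid division by zero."""
    assert len(observed) == len(simulated)
    errors = [abs(o - s) / abs(o) for o, s in zip(observed, simulated) if o != 0]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical adoption shares: observed history vs. a model hindcast
observed = [0.05, 0.12, 0.24, 0.41, 0.58]
simulated = [0.04, 0.13, 0.22, 0.44, 0.55]
print(round(mape(observed, simulated), 1))  # prints 9.8
```

A forecaster might report such a figure alongside a simulation to evidence how closely the model reproduces known history before its projections are trusted.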

\subsection{Research strategy for identifying and ranking validation themes}

To identify and rank commonly occurring validation themes in accounts of modelling and simulation, both qualitative and quantitative methods are applied here to gauge the extent to which different validation approaches are employed to build credibility in forecasts. A combined approach is better suited to this review because many of the more intangible themes associated with model validation (and their subjective interpretations) are difficult to quantify \cite{Sterman_2002}. At the same time, it enables a more extensive range of perspectives to be considered in the identification of patterns and trends than would be possible using a purely numerical analysis of citations \cite{Sterman_2002}. A combination of human and computer-based processing methods has been adopted to analyse the opinions captured within this exploratory study. Where more automated approaches have been applied (relating to pattern recognition and statistical analysis; see section \ref{284814} for more details on these concepts), manual cross-checking has also taken place to ensure that the validation themes presented are consistent with the patterns actually observed in the literature study and survey results. An overview of the main methodological steps taken to structure the survey questions is summarised in Fig. \ref{595505}.
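To illustrate the kind of automated pattern recognition referred to above, the sketch below counts how many passages in a corpus mention each candidate validation theme via simple keyword matching. The theme labels, keyword lists, and passages are entirely hypothetical, and the actual analysis in this study is more involved; the example only shows why manual cross-checking remains necessary, since keyword matches are a crude proxy for a theme genuinely being discussed.

```python
import re
from collections import Counter

# Hypothetical validation themes and associated keywords (illustrative only)
THEMES = {
    "uncertainty disclosure": ["uncertainty", "sensitivity"],
    "historical replication": ["hindcast", "historical", "benchmark"],
    "cross-checking": ["cross-check", "triangulation"],
}

def theme_counts(texts):
    """Count how many passages mention each theme at least once."""
    counts = Counter()
    for text in texts:
        words = set(re.findall(r"[a-z\-]+", text.lower()))
        for theme, keywords in THEMES.items():
            if any(k in words for k in keywords):
                counts[theme] += 1
    return counts

passages = [
    "the model was validated against historical benchmark data",
    "a sensitivity analysis exposed parameter uncertainty",
    "results were cross-checked via triangulation with expert surveys",
]
print(theme_counts(passages))
```

Frequencies produced this way can seed a ranking of themes, but each hit still needs a human reader to confirm that the passage actually concerns model validation.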