Conclusions from review of modelling approaches and validation techniques

Building on the research questions and strategy outlined in chapter 3, this chapter has examined in more detail the implications of investigating technology substitutions by means of modelling and simulation techniques. Any form of computer-generated technique brings inherent challenges that are less evident in other forms of hypothesis testing, many of which arise from the additional complexity associated with both the creation and the interpretation of these models. This is often due to a lack of immediate transparency or intuitiveness when much of the derivation and execution is hidden behind a screen of automation. In this regard, cross-checking results against historical data is encouraged as best practice, including the use of sensitivity studies to test initial parameters and assumptions, and comparison of the behaviours demonstrated in the models against real-world complexities.

An inductive assessment of modelling and simulation literature sources was subsequently applied, in combination with empirical data obtained through a systematically structured survey, to identify, map, and rank general themes relating to the validity of computational forecasting techniques, with the aim of determining the most critical influences on professed model credibility for technological forecasting. Accordingly, 50 separate validation themes have been presented in Fig. \ref{222026}, derived using a systematic literature review process, along with their specific applicability to agent-based and system dynamics modelling techniques, as well as to general simulations of disruptive changes. These themes have then been scored against responses from a mixed audience of academic, commercial, and industrial participants (amongst others), with the compiled results revealing the perceived effectiveness of each validation theme for establishing simulation credibility with different occupational groups.
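The sensitivity studies recommended above can be sketched as a simple one-at-a-time (OAT) parameter sweep. The logistic (Bass-style) adoption model, the coefficient values, and the perturbation scheme below are illustrative assumptions for demonstration only, not the method or parameters used in this work.

```python
import math

def adoption_share(t, p, q):
    """Bass-style cumulative adoption fraction at time t (simplified)."""
    return (1 - math.exp(-(p + q) * t)) / (1 + (q / p) * math.exp(-(p + q) * t))

# Assumed baseline innovation (p) and imitation (q) coefficients.
baseline = {"p": 0.03, "q": 0.38}

def oat_sensitivity(model, baseline, horizon=10, delta=0.1):
    """Perturb each parameter by +delta (relative) in turn and report
    the resulting change in the model output at the forecast horizon."""
    base_out = model(horizon, **baseline)
    results = {}
    for name, value in baseline.items():
        perturbed = dict(baseline, **{name: value * (1 + delta)})
        results[name] = model(horizon, **perturbed) - base_out
    return results

print(oat_sensitivity(adoption_share, baseline))
```

Ranking the resulting output changes indicates which initial assumptions the forecast is most sensitive to, and hence where cross-checking against historical data matters most.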
General themes that affect all participants, such as demonstrating methodological rigour, providing evidence of traceability, elaborating the researcher’s subjectivity, and exploring the human factors that may influence the simulation, are unsurprisingly considered prerequisites to forecast credibility. However, the survey data also shows specific occupational behaviours and trends emerging, such as the relative importance to commercial participants (who will frequently be required to act on the information they receive) of the informativeness of predictions and elaboration of the impact locality, or the increased emphasis placed on error-checking in academic communities.

Turning to more specific challenges relating to the comparison of time series and the capture of real-world behaviours in simulated environments, an overview of potentially applicable techniques to address these issues was presented in sections \ref{284814} and \ref{655650} respectively. Coupled with this, the means of assessing whether these challenges have been met was considered in section \ref{948563}, where a summary of goodness-of-fit and statistical measures highlighted a range of evaluation criteria to be considered in demonstrating the practicality of subsequent computer-generated models. Combining all of these findings, appropriate techniques were then selected and outlined for the stages envisaged in answering the research questions posed in chapter 3, along with a preliminary view of potential method limitations. This was followed by an outline of the data sources selected to support the proposed method.
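As an illustration of the kinds of goodness-of-fit measures summarised for time-series comparison, the sketch below computes three common statistics (RMSE, MAPE, and Theil's U1 inequality coefficient) between an observed and a simulated series. The series values are placeholders, not data or results from this study.

```python
import math

def rmse(observed, simulated):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed))

def mape(observed, simulated):
    """Mean absolute percentage error (requires non-zero observations)."""
    return 100 * sum(abs((o - s) / o) for o, s in zip(observed, simulated)) / len(observed)

def theil_u(observed, simulated):
    """Theil's U1 inequality coefficient, bounded in [0, 1]; 0 is a perfect fit."""
    num = rmse(observed, simulated)
    den = (math.sqrt(sum(o ** 2 for o in observed) / len(observed))
           + math.sqrt(sum(s ** 2 for s in simulated) / len(simulated)))
    return num / den

# Placeholder historical and simulated series for demonstration.
hist = [10.0, 12.5, 15.1, 18.0, 21.4]
sim = [9.8, 12.9, 14.6, 18.5, 21.0]

print(f"RMSE = {rmse(hist, sim):.3f}, MAPE = {mape(hist, sim):.2f}%, U1 = {theil_u(hist, sim):.4f}")
```

Reporting several measures together is useful because each penalises a different aspect of disagreement: RMSE weights large deviations, MAPE expresses error relative to the observed scale, and Theil's U1 gives a bounded score that is easy to compare across models.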