As ABM depends on the interactions between agents in addition to the behaviours that individual agents exhibit, analogies with social and dynamic network analysis (topology) techniques can be made \cite{Macal_2006}. Following on from this, recent developments in Dynamic Network Analysis (DNA) have enabled the modelling of growth and reshaping of networks based on agent interaction processes \cite{Macal_2006}. In parallel, advances in the understanding of human learning processes have allowed ABM to replicate more realistic learning behaviours through the use of neural networks, evolutionary and genetic algorithms, or other reinforcement learning techniques \cite{Macal_2006}. Considering reinforcement learning techniques more specifically, such as stochastic learning automata, it has also been demonstrated that these can be combined with game theory to design adaptive agents that adjust their future actions based on feedback from their environment and the results of competitions \cite{Macal_2006,Zhao2008,Borrill}. In economic models where competitive market environments have been considered, this has led to behavioural trends resembling foresight in agents \cite{Tesauro_2000}. In this manner, ABM presents a powerful means of simulating some of the complex dynamics observed in real life.
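To make the idea of a stochastic learning automaton concrete, the sketch below implements a linear reward-inaction scheme against a simple two-action environment. This is an illustrative reconstruction, not the implementation used in the cited works; the reward probabilities, learning rate, and function name are assumptions chosen for the example.

```python
import random

def learning_automaton(reward_probs, steps=10000, lr=0.02, seed=7):
    """Linear reward-inaction stochastic learning automaton (a sketch).

    The automaton keeps a probability vector over its actions and,
    whenever the environment rewards the chosen action, shifts
    probability mass towards it; on a penalty nothing changes.
    """
    rng = random.Random(seed)
    n = len(reward_probs)
    p = [1.0 / n] * n  # start with no preference between actions
    for _ in range(steps):
        action = rng.choices(range(n), weights=p)[0]
        if rng.random() < reward_probs[action]:  # environment feedback
            # Reward: move mass towards the chosen action; the update
            # keeps the probabilities summing to one.
            p = [pi + lr * (1.0 - pi) if i == action else pi * (1.0 - lr)
                 for i, pi in enumerate(p)]
    return p

# Action 1 is rewarded far more often than action 0, so the automaton
# should come to prefer it strongly over time.
print(learning_automaton([0.1, 0.9]))
```

In this way the agent's action probabilities adapt purely from environmental feedback, which is the behavioural mechanism the text describes being combined with game-theoretic competition.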
In terms of the core methodology itself, the diverse range of applications for ABM means that whilst
there was initially considerable scepticism about the value of
any results published using these techniques, publications
based on ABM are now usually considered
methodologically sound. Nevertheless, verification of results
remains a crucial test of ABM publications, and in many
cases such verification has proved difficult to establish conclusively, especially when simulating hypothetical conditions,
or recreating conditions that may be subject to abductive
fallacies \cite{Lorenz_2009}. In most current applications, the verification of ABM approaches rests on the technology used beneath the
modelling (as this is where it can be most easily challenged)
and on demonstrating that individual (microscopic) behaviours are plausible in one-to-one interactions. From a methodological
and epistemological point of view, provided the underlying technology is well built
(i.e. well coded), results generated through ABM
are increasingly well respected in scientific research. However, a considerable amount of time still needs to
be dedicated to the documentation, programmatic testing, and
evaluation of case studies and scenarios before formal results
can be published based on ABM.
\subsection{Causal Loop Diagrams and System Dynamics}
Causal Loop Diagrams (CLDs) and System Dynamics (SD) form
a well-established method for exploring dynamic behaviours
in evolving systems when it is not necessary to incorporate
emergent influences \cite{Lorenz_2009}, and provide a powerful tool for
recognising structures (i.e. feedback loops and influences) in nonlinear
systems from simple inspection of the models. CLDs in particular can provide a means of
translating qualitative statements (as may be found in historical
descriptions or policy statements) into conceptual models that
can be related to mathematical expressions of quantitative
attributes, and so sit on the border between softer interpretivist
methodologies and harder functionalist approaches. Additionally, the visual construction of CLDs provides a much
more intuitive way to generate mathematical descriptions of
complex phenomena through the use of stocks (also known as 'levels'), flows, and
rates as time derivatives in the system, without the need for
detailed mathematical understanding (making it more accessible
to audiences from all disciplines). The system dynamics modelling process is described very well in the work of Sterman \cite{sterman2000business}, illustrating how dynamic hypotheses can be formulated and tested. CLDs and System Dynamics enable numerous
soft stocks (often considered to be intangible or qualitative values) to be incorporated directly into the structure of the model alongside more clearly defined metrics. In this manner, system dynamics assists with determining the
susceptibility of measured parameters to a range of
connected influences not always associated with quantitative metrics. Soft stocks such as 'confidence' cannot be easily
built into other modelling constructs without the assistance of
CLDs and System Dynamics \cite{Fowler_2003}; however, it is possible
to use System Dynamics in conjunction with ABM as a means of ensuring that any behavioural rules
assigned to agents at the microscopic level are consistent with global behavioural dynamics. In contrast to ABM, though, the implicit
link between System Dynamics and differential equations provides
a means to dimensionally verify units and measurements applied in conceptual models. This provides an additional check
of the rationality of any models proposed, and a capability
to identify the dimensionality of new parameters not conventionally
modelled. In practical terms, this method can therefore act as a means of triangulation for any conceptual
models generated by other methodologies, to improve the
robustness of the research analysis. As always, limitations
apply to this methodology, which in this case principally
relate to the modelling of emergent properties, and more
general restrictions to applications at a macroscopic level due
to the deterministic nature of the underlying calculations (see
\cite{Borshchev}). However, care also has to be taken to ensure that data
are used to build System Dynamics models, as opposed to just
the application of ``judgement'', otherwise the rigour behind the
method is lost and the conclusions generated are open to challenge
\cite{RN798}. In this regard, formulating the CLDs from established historical models, data, and existing academic findings is advisable for verification and validation purposes, prior to extension to new phenomena.
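The stock-flow logic and the dimensional check described in this section can be illustrated with a minimal sketch. The population model, unit signatures, and parameter values below are hypothetical examples, not drawn from the cited works; they simply show a stock being integrated by its net flow, and a check that each flow carries the stock's units per unit time.

```python
def div(units_a, units_b):
    """Divide two unit signatures, e.g. {'people': 1} / {'year': 1}
    giving {'people': 1, 'year': -1} (people per year)."""
    out = dict(units_a)
    for base, exp in units_b.items():
        out[base] = out.get(base, 0) - exp
    return {b: e for b, e in out.items() if e != 0}

def check_flow_units(stock_units, flow_units, time_units):
    """d(stock)/dt has dimensions [stock]/[time], so every flow
    attached to a stock must carry exactly those units."""
    return flow_units == div(stock_units, time_units)

def simulate_population(initial=1000.0, birth_rate=0.04, death_rate=0.02,
                        dt=0.25, years=50):
    """Euler integration of a single stock-flow structure:
    d(Population)/dt = births - deaths."""
    population = initial
    history = [population]
    for _ in range(int(years / dt)):
        births = birth_rate * population      # inflow,  people/year
        deaths = death_rate * population      # outflow, people/year
        population += dt * (births - deaths)  # integrate the net flow
        history.append(population)
    return history

# Dimensional check: a stock measured in people needs flows in people/year.
print(check_flow_units({'people': 1}, {'people': 1, 'year': -1},
                       {'year': 1}))  # True
print(simulate_population()[-1])      # ~2700 after 50 years of 2%/yr net growth
```

A dimensional mismatch (e.g. a flow expressed in plain 'people') would fail the check, which is the kind of rationality test of a conceptual model the text refers to.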
\subsection{Goodness-of-fit, summary statistics, and optimisation control measures for comparing observed and simulated behaviours}
In order to demonstrate methodological rigour it is necessary to examine 'goodness-of-fit' measures when testing hypotheses with any form of modelling or simulation. These measures describe how well the current model fits a set of observations, typically by considering the discrepancies between observed and predicted values (i.e. residuals), and consequently provide an indication of how well the model will predict a future set of observations. Strong performance in terms of these goodness-of-fit measures does not guarantee that a model is foolproof, bearing in mind George Box's apt conclusion that ``All models are wrong, but some are useful'', but suggests that the assumptions made hold closely enough that the model can be considered useful in practice. In reality the model is almost certain to be false, since it will be impossible to realise perfectly all of the assumptions made. As such, goodness-of-fit measures must be taken in the context of the hypothesis being examined in order to determine how close these measures should be to the ideal formulation to demonstrate a robust model for practical applications. For example, if the purpose of the model is prediction, then it may not be too important which independent variables are included in the model as long as the fit appears reasonable. By contrast, if the model is intended to examine which variables should be included in the structure in the first place, then fit is potentially less important as long as the behaviours match expectations. This equally means that there is no perfect goodness-of-fit measure. However, a range of measures have been established to address different perspectives on a model's fit to real-world observations, which can be used to explore the likely practicality of the model.
In this regard, several commonly used goodness-of-fit, summary statistic, and optimisation control measures that have proved useful when considering time series are presented in Table \ref{table:goodness_of_fit_measures}, as demonstrated in the works of Sterman and Oliva \cite{sterman1984appropriate,oliva1995vensim,performance} (the notation \(A_t\) and \(F_t\) refers to the actual and forecast values at time \(t\) respectively throughout).
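As an illustration of how measures of this kind are computed, the sketch below evaluates a small simulated series \(F_t\) against observations \(A_t\), reporting the mean absolute error, root mean square error, and mean absolute percentage error, together with the decomposition of the mean squared error into bias (\(U_M\)), unequal-variation (\(U_S\)), and unequal-covariation (\(U_C\)) components, which sum to one. The series and function name are hypothetical; the table referenced above remains the authoritative list of measures.

```python
import math
from statistics import mean, pstdev

def fit_measures(actual, forecast):
    """Error measures comparing forecast F_t with actual A_t, plus the
    decomposition of the MSE into bias (U_M), unequal variation (U_S)
    and unequal covariation (U_C); the three components sum to one."""
    n = len(actual)
    errors = [f - a for a, f in zip(actual, forecast)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    mape = 100.0 * sum(abs(e / a) for a, e in zip(actual, errors)) / n
    mu_a, mu_f = mean(actual), mean(forecast)
    s_a, s_f = pstdev(actual), pstdev(forecast)  # population std. dev.
    r = sum((a - mu_a) * (f - mu_f)
            for a, f in zip(actual, forecast)) / (n * s_a * s_f)
    return {'MAE': mae,
            'RMSE': math.sqrt(mse),
            'MAPE': mape,
            'U_M': (mu_f - mu_a) ** 2 / mse,           # systematic bias
            'U_S': (s_f - s_a) ** 2 / mse,             # unequal variation
            'U_C': 2.0 * (1.0 - r) * s_a * s_f / mse}  # unequal covariation

actual = [10.0, 12.0, 15.0, 19.0, 24.0]    # observed A_t (hypothetical)
forecast = [11.0, 12.5, 14.0, 20.0, 25.0]  # simulated F_t (hypothetical)
print(fit_measures(actual, forecast))
```

A fit dominated by \(U_C\) (point-by-point scatter) is generally less worrying for a dynamic model than one dominated by \(U_M\) or \(U_S\), which indicate systematic error.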