\section{A computational model of serendipity}

\label{sec:our-model}

Figure \ref{fig:model} recapitulates the ideas from the previous section, integrating them into a computationally meaningful process model. The model is summarised in text form at the end of this section, in our working definition of serendipity.

It is worth remarking that many things might go wrong. A serendipity trigger might not arise, or might not attract interest. If interest is aroused, a path to a useful result may not be sought, or may not be found. If a result is developed, it may turn out to be of little value. Prior experience with a related problem could be informative, but could also hamper innovation. Similarly, multiple tasks, influences, and contexts can help to foster an inventive frame of mind, but they may also be distractions.

Figure \ref{fig:1b} ignores these unserendipitous possibilities in order to focus on the key features of “successful” serendipity. The trigger is denoted here by \(T\). The prepared mind corresponds to those preparations, labelled \(p\) and \(p^{\prime}\), that are relevant to the discovery and invention phases, respectively. These preparations may include training, current attitude, access to relevant knowledge sources, and so on. A focus shift takes place when the trigger is observed to be interesting. The now-interesting trigger is denoted \(T^{\star}\), and is common to both the discovery and the invention phases. The bridge comprises the actions, based on \(p^{\prime}\), that are taken on \(T^{\star}\) and that lead to the result \(R\), which is ultimately given a positive evaluation.
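Schematically, the two phases can be read as mappings between these elements (this is a summary of the notation just introduced, not a formal claim):
\[
  \underbrace{(T,\, p) \;\longmapsto\; T^{\star}}_{\text{discovery (focus shift)}}
  \qquad\qquad
  \underbrace{(T^{\star},\, p^{\prime}) \;\longmapsto\; R}_{\text{invention (bridge)}}
\]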

% Figure: the process model of serendipity; panel (b) shows a schematic of “successful” serendipity, and panel (c) sketches the components of one possible idealised implementation.
\label{fig:1b}
\label{fig:1c}
\label{fig:model}

Figure \ref{fig:1c} expands this schematic into a sketch of the components of one possible idealised implementation of a serendipitous system. An existing generative process is assumed. This may be based on observations of the outside world, or it may be a purely computational process. In either case, its products are passed on to the next stage. A feedback loop then singles out certain aspects of the data and marks them up as “interesting.” Note that this designation need not arise all at once: rather, it is the outcome of a reflective process. In the implementation envisioned here, this process makes use of two primary functions: \(p_1\), which notices particular aspects of the data, and \(p_2\), which offers reflections about those aspects. Together, these functions build up a “feedback object,” \(T^{\star}\), which consists of the original data together with further metadata. This object is passed on to an experimental process, whose task is to verify that the data is indeed interesting and to determine what it may be useful for. This is again an iterative process, relying on functions \(p^{\prime}_1\) and \(p^{\prime}_2\), which build a contextual understanding of the trigger by devising experiments and assessing their results. Once implications or applications have been found, a result is generated, which is passed to a final evaluation process and, from there, to applications.
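To make the intended flow of control concrete, the following Python sketch (ours; a minimal reduction, not a claim about any particular implementation) walks through one pass of this pipeline. The names \texttt{notice}, \texttt{reflect}, \texttt{devise}, \texttt{assess}, \texttt{generate}, and \texttt{evaluate} are hypothetical placeholders for \(p_1\), \(p_2\), \(p^{\prime}_1\), \(p^{\prime}_2\), the generative process, and the evaluation stage, respectively.
\begin{verbatim}
# A highly simplified sketch of the workflow in panel (c); every name here
# is a hypothetical placeholder of ours, not part of any existing system.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional


@dataclass
class Trigger:
    """The 'feedback object' T*: the original data plus accumulated metadata."""
    data: Any
    metadata: List[str] = field(default_factory=list)
    interesting: bool = False


def discovery_phase(data: Any,
                    notice: Callable[[Any], List[str]],         # p_1: pick out aspects
                    reflect: Callable[[List[str]], List[str]],  # p_2: comment on them
                    rounds: int = 3) -> Trigger:
    """Iteratively build up T*; the focus shift occurs when some aspect stands out."""
    trigger = Trigger(data)
    for _ in range(rounds):
        aspects = notice(trigger.data)
        trigger.metadata.extend(reflect(aspects))
        if aspects:
            trigger.interesting = True   # focus shift: the trigger is now T*
    return trigger


def invention_phase(trigger: Trigger,
                    devise: Callable[[Trigger], Any],                 # p'_1: design an experiment
                    assess: Callable[[Trigger, Any], Optional[Any]],  # p'_2: interpret its outcome
                    rounds: int = 3) -> Optional[Any]:
    """Bridge from T* to a candidate result R by devising and assessing experiments."""
    result = None
    for _ in range(rounds):
        experiment = devise(trigger)
        result = assess(trigger, experiment)
        if result is not None:           # an implication or application was found
            break
    return result


def run(generate: Callable[[], Any], notice, reflect, devise, assess,
        evaluate: Callable[[Any], float]) -> Optional[Any]:
    """End-to-end pipeline: generate -> discover -> invent -> evaluate."""
    data = generate()                    # existing generative process
    trigger = discovery_phase(data, notice, reflect)
    if not trigger.interesting:          # no focus shift: nothing to pursue
        return None
    result = invention_phase(trigger, devise, assess)
    if result is None or evaluate(result) <= 0.0:
        return None                      # fallibility: any stage may come up empty
    return result                        # passed on to applications
\end{verbatim}
The fixed number of rounds and the single “interesting” flag stand in for the much richer reflective and experimental processes described above; they are placeholders for whatever reasoning machinery an actual system would supply.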

The ellipses at the end of the workflow in Figure \ref{fig:1c} are intended to suggest that applications are open-ended; one important class of applications will result in changes to one or more of the system’s modules, either by expanding the knowledge base that it draws on or by adjusting its methods. This corresponds to Merton’s notion of “extending an existing theory” \cite{merton1948bearing}. Note that the earlier components of the workflow cannot, in general, anticipate what the subsequent phases will produce or achieve. If the system’s next steps could be anticipated, we would not say that its behaviour was serendipitous, and this global condition constrains each of the component modules. Serendipity does not adhere to one specific part of the system, but to its operations as a whole.
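Continuing the sketch above, one hypothetical way to represent this class of applications is to let a favourably evaluated result update the preparations that later runs draw on; the \texttt{incorporate} function below is our own illustrative placeholder, not a prescribed mechanism.
\begin{verbatim}
def incorporate(result: Any, preparations: dict) -> None:
    """Hypothetical application step: fold a valued result back into the
    knowledge base, so that subsequent runs start from a better-prepared
    state (cf. 'extending an existing theory')."""
    preparations.setdefault("knowledge", []).append(result)
\end{verbatim}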

Although Figures \ref{fig:1b} and \ref{fig:1c} treat the case of successful serendipity, as the earlier remarks suggest, each step is fallible, as is the system as a whole. Thus, for example, a trigger that has initially been tagged as interesting may prove to be fruitless at the verification stage. Similarly, a system that implements all of the procedural steps in Figure \ref{fig:1c}, but that for whatever reason is never able to achieve a result of significant value, cannot be said to have potential for serendipity. However, a system that only ever produces results of high value would also be suspect, since this would indicate a tight coupling between trigger and outcome. Fallibility is thus a “meta-criterion” that transcends the criteria from Section \ref{sec:by-example}. Summarising, we propose the following working definition:

\label{def:serendipity} (1) Within a system with a prepared mind, a previously uninteresting trigger arises due to circumstances that the system does not control, and is classified as interesting by the system; and, (2) The system uses the trigger and prior preparation, together with relevant computational processing, networking, and experimental techniques, to obtain a novel result that is evaluated favourably by the system or by external sources.

The constituent terms in this definition are purposefully general: for our purposes it is their relationship that matters. A trigger, for example, is not defined in terms of a specific data structure, nor is a bridge constrained to be drawn from a specific set of reasoning techniques. We view such generality as a strength, but it does leave further work for anyone who aims to apply the definition in practice. Section \ref{specs-overview} presents further structure that helps to make that work more routine.