\section{Related work}
\label{sec:related}

An active research community investigating computational models of serendipity exists in the field of information retrieval, and specifically, in recommender systems \cite{Toms2000}. In this domain, serendipity is viewed as an important factor for user satisfaction, alongside accuracy and diversity. Serendipity in recommendations is understood to imply that the system suggests unexpected items, which the user considers to be useful, interesting, attractive or relevant. Definitions differ as to the requirement of novelty; some authors, for example, describe systems that suggest items that may already be known to the user, but are still unexpected in the current context. While standardised measures such as the \(F_1\)-score or the (R)MSE are used to determine the accuracy of a recommendation (i.e. whether the recommended item is very close to what the user is already known to prefer), there is as yet no common agreement on a measure for serendipity, although there are several proposals \cite{Murakami2008, Adamopoulos2011, McCay-Peet2011,iaquinta2010can}. In terms of our model, these systems focus mainly on producing a serendipity trigger and predicting the potential for serendipitous discovery on the side of the user. Intelligent user modeling could bring other components of serendipity into play, as we will discuss in Section \ref{sec:computational-serendipity}.
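For concreteness, the standard accuracy measures referred to above can be written out as follows; this is only an illustrative reminder of what an agreed-upon measure looks like on the accuracy side, not a proposed serendipity metric:
\[
F_1 = \frac{2PR}{P+R}, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(\hat{r}_i - r_i\bigr)^2},
\]
where \(P\) and \(R\) are precision and recall over the recommended items, and \(\hat{r}_i\), \(r_i\) are predicted and observed ratings. No comparably settled formula exists for serendipity.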

Recent work has examined the related topics of curiosity \cite{wu2013curiosity} and surprise \cite{grace2014using} in computing. The latter seeks to “adopt methods from the field of computational creativity [\(\ldots\)] to the generation of scientific hypotheses.” This is an example of an effort focused on computational invention.

Paul André et al. have examined serendipity from a design perspective. Like us, these authors proposed a two-part model encompassing “the chance encountering of information, and the sagacity to derive insight from the encounter.” According to André et al., the first phase is the one that has most frequently been automated – but they suggest that computational systems should be developed that support both aspects. They specifically suggest focusing on representational features: domain expertise and a common language model.

These features seem to exemplify aspects of the prepared mind. However, as we mentioned above, the bridge is a distinct process that mental preparation can support but does not necessarily fully determine. For example, participants in a poetry workshop may possess a very limited understanding of each other’s aims or of the work they are critiquing, and may as a consequence talk past one another to a greater or lesser degree – while nevertheless finding the overall process of participating in the workshop illuminating and rewarding (often precisely because such misunderstandings elucidate poor communication choices!). Various social strategies, ranging from Writers’ Workshops to open source software, pair programming, and design charrettes \cite[p. 11]{gabriel2002writer} have been developed to exploit similar emergent effects and to develop new shared language. We have recently investigated the feasibility of using designs of this sort in multi-agent systems that learn by sharing and discussing partial understandings \cite{corneli2015computational,corneli2015feedback}.

Other work develops a discussion of serendipitous rendezvous in a multi-agent system for a graph exploration problem, in which “[h]aving more data about their colleagues, better decisions are made about the potential serendipity path.” This has some similarity to the discursive scenario described above, and shows that asymmetric partial knowledge can support serendipitous findings. These examples suggest that a distinction between emergent knowledge of other actors and knowledge about an underlying domain may be useful – although the distinction may be less relevant if the underlying domain itself has dynamic and emergent features. Social coordination among human users of information systems is a current research topic; researchers in this area point out that naive end users often talk about serendipitous occurrences, which presents another route for research and evaluation.
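As a purely schematic illustration of how asymmetric partial knowledge of colleagues could guide exploration – a toy sketch under our own assumptions, not a reimplementation of the cited multi-agent system – an agent might score candidate moves by their novelty to itself and their proximity to a colleague’s known frontier:
\begin{verbatim}
import random

def choose_next(graph, own_path, colleague_visited):
    """Pick an unvisited neighbour, preferring nodes that border a
    colleague's known territory (a potential 'rendezvous') while
    remaining novel to both agents.  Toy illustration only."""
    current = own_path[-1]
    candidates = [n for n in graph[current] if n not in own_path]
    if not candidates:
        return random.choice(graph[current])
    def score(node):
        near_colleague = sum(1 for m in graph[node] if m in colleague_visited)
        novel_to_colleague = 1 if node not in colleague_visited else 0
        return near_colleague + novel_to_colleague
    return max(candidates, key=score)

# Toy undirected graph as an adjacency list, plus partial knowledge
# of where a colleague has already been.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(choose_next(graph, own_path=[0], colleague_visited={3, 4}))
\end{verbatim}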

The SerenA system developed by Deborah Maxwell et al. offers a case study of several of the points discussed above. This system is designed to support serendipitous discovery for its (human) users \cite{forth2013serena}. The authors rely on a process-based model of serendipity \cite{Makri2012,Makri2012a} that is derived from user studies, including interviews with 28 researchers, looking for instances of serendipity from both their personal and professional lives. This material was coded along three dimensions: unexpectedness, insightfulness, and value. The research aims to support the process of forming bridging connections from an unexpected encounter to a previously unanticipated but valuable outcome. The theory focuses on the acts of reflection that foment both the creation of a bridge and estimates of the potential value of the result. While this description touches on all of the features of our model, SerenA largely matches the description offered by André et al. of discovery-focused systems, in which the user experiences an “aha” moment and takes the creative steps to realise the result. SerenA’s primary computational method is to search outside of the normal search parameters in order to engineer potentially serendipitous (or at least pseudo-serendipitous) encounters.

In recent joint work \cite{colton-assessingprogress}, we presented a diagrammatic formalism for evaluating progress in computational creativity. It is useful to ask what serendipity would add to this formalism, and how the result compares with other attempts to formalise serendipity, notably Figueiredo and Campos’s ‘Serendipity Equations’. Figueiredo and Campos describe serendipitous “moves” from one problem to another, which transform a problem that cannot be solved into one that can. In our diagrammatic formalism, we spoke about progress with systems rather than with problems. It would be a useful generalisation of the formalism – and not just a simple relabelling – for it to be able to tackle problems as well. However, it is important to note that progress with problems does not always mean transforming a problem that cannot be solved into one that can: progress may also consist in a growing ability to posit problems. In keeping track of progress, it would be useful for system designers to record (or get their systems to record) what problem a given system solves, and the degree to which the computer was responsible for coming up with this problem.

As others have remarked, anomaly detection and outlier analysis are part of the standard machine learning toolkit – but recognising new patterns and defining new problems is more ambitious. Establishing complex analogies between evolving problems and solutions is one of the key strategies used by teams of human designers \cite{Analogical-problem-evolution-DCC}. Kazjon Grace presents a computational model of the creation of new concepts and interpretations, but this work did not include the ability to create the higher-order relationships necessary for complex analogies. New patterns and higher-order analogies were considered in Hofstadter and Mitchell’s Copycat and the subsequent Metacat, but these systems operated in a simple and fairly abstract “microdomain” \cite{hofstadter1994copycat,DBLP:journals/jetai/Marshall06}. The relationship between serendipity and novel problems receives considerable attention in the current work, since we want to increasingly turn over responsibility for creating and maintaining a prepared mind to the machine.
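To make the contrast concrete, the “standard toolkit” side of this comparison really is an off-the-shelf affair, as in the following minimal sketch (using scikit-learn’s \texttt{IsolationForest} on synthetic data; an illustrative example of our own, not drawn from the cited systems); recognising genuinely new patterns or positing new problems has no comparably packaged recipe:
\begin{verbatim}
# Off-the-shelf anomaly detection on synthetic 2-D data; illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
inliers = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # dense cluster
outliers = rng.uniform(low=-6.0, high=6.0, size=(10, 2))  # scattered points
X = np.vstack([inliers, outliers])

detector = IsolationForest(contamination=0.05, random_state=0)
labels = detector.fit_predict(X)  # -1 marks predicted anomalies

print("flagged as anomalous:", int((labels == -1).sum()), "of", len(X))
\end{verbatim}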