deletions | additions
diff --git a/SPECS-begins.tex b/SPECS-begins.tex
index 4a19f04..de258de 100644
--- a/SPECS-begins.tex
+++ b/SPECS-begins.tex
...
creativity, and provides a much-needed set of customisable evaluation
guidelines, the \emph{Standardised Procedure for Evaluating Creative
Systems} (SPECS) \cite{jordanous:12}. Originally designed to evaluate the concept of creativity, the three-step SPECS process first requires the evaluator to define the concept(s) on which the system is being evaluated. This definition is then converted into standards that can eventually be used to test and evaluate individual systems, or to comparatively evaluate multiple systems.
%
We follow a slightly modified version of her earlier evaluation
guidelines: rather than attempting a definition and evaluation of
{\em creativity}, we follow the three steps for \emph{serendipity}.
%\newpage
\subsubsection*{Step 1: A computational definition of serendipity}
\begin{quote} {\em Identify a definition of serendipity that your
system should satisfy to be considered serendipitous.}\end{quote}
\noindent We adopt the model from Section \ref{sec:our-model} as our
definition of serendipity for Step 1.
%% This situation can be pictured schematically as follows: described above.
diff --git a/SPECS-continues.tex b/SPECS-continues.tex
index dc41014..407f237 100644
--- a/SPECS-continues.tex
+++ b/SPECS-continues.tex
...
\begin{quote} {\em Using Step 1, clearly state what standards you use to evaluate the serendipity of your
system. }\end{quote}
\noindent With our definition
and other features of the model in mind, we propose the following standards for evaluating serendipity in computational systems.
These criteria allow the evaluator to assess the degree of serendipity that is present in a given system's operation.
%% Serendipity relies on a reassessment or reevaluation -- a \emph{focus shift} in which something that was previously uninteresting, of neutral, or even negative value, becomes interesting.
...
\begin{quote} {\em Test your serendipitous system against the standards stated in Step 2 and report the
results.}\end{quote}
\noindent
In Section \ref{sec:computational-serendipity} we pilot our framework
by examining the degree of serendipity of existing computational
systems, looking for ways that they could become more serendipitous. We will also use the framework to guide the high-level design of a novel system.
diff --git a/cc-intro.tex b/cc-intro.tex
index a187982..830d098 100644
--- a/cc-intro.tex
+++ b/cc-intro.tex
...
The 13 criteria from Section \ref{sec:literature-review} specify the
conditions and preconditions that are conducive to serendipitous
discovery. These criteria have been further formalised
in Section
\ref{specs-overview}.
% \ref{specs-overview} using SPECS.
%%
\citeA{pease2013discussion} used a variant of these SPECS criteria to
analyse three examples of potentially serendipitous behaviour in
dynamic investigation problems, model generation, and poetry
flowcharts. Two additional examples are discussed below using our
revised criteria.
As Campbell \citeyear{campbell2005serendipity} writes, ``serendipity
presupposes a smart mind,'' and these examples suggest potential
directions for further work in computational intelligence. We then
turn to a more elaborate thought experiment that describes a new
system designed with our criteria in mind.
Before describing these examples, as a baseline, we introduce the
notion of \emph{minimally serendipitous systems}. According to our
standards, there are various ways to achieve a result with \emph{low}
serendipity: if the observation was likely, if further developments
happened with little skill, and if the value of the result was
low, then we would not say the outcome was serendipitous. We would be
prepared to attribute ``minimal serendipity'' to cases where the
observation was \emph{moderately} likely, \emph{some} skill or effort
was involved, and the result was only \emph{fairly good}.
However,
for computational systems, if most of the skill involved lies with the
user, then there is little reason to call the system's operation
serendipitous -- even if it consistently does its job very well. For
example, machines can learn to recognise or approximate certain types
of patterns, but it is more surprising when a computational system
independently finds an entirely new kind of pattern. Furthermore, the
position of the evaluator is important: a spell-checking system might
suggest a particularly fortuitous substitution, but we would not
expect the spell-checker to know when it was being clever. In such a
case, we may say serendipity has occurred, but not that we have a
serendipitous system.
%% If the system learns an $N$th fact or
%% If applied to a system which could be described as minimally
...
%% more for infelicities than for exceptional wit.
\subsection{Case Studies: Prior art}
\label{sec:priorart}
\paragraph{Evolutionary music improvisation systems.}
\citeA{jordanous10} reported a computational jazz improvisation system using genetic algorithms. Genetic algorithms, and evolutionary computing more generally, could encourage computational serendipity. We examine Jordanous's system (later given the name
{\sf GAmprovising} \cite{jordanous:12}) as a case study for evolutionary computing in the context of our model of computational serendipity: to what extent does
{\sf GAmprovising} model serendipity?
{\sf GAmprovising} uses genetic algorithms to evolve a population of \emph{Improvisors}. Each Improvisor is able to randomly generate music based on various parameters, such as
the range of notes to be used, preferred notes, rhythmic implications around note lengths, and other musical parameters \cite{jordanous10}. These parameters define the Improvisor at any point in its evolution. After a cycle of evolution, each Improvisor is evaluated via a fitness function based on Ritchie's \citeyear{ritchie07} criteria
for creativity. This model relies on user-supplied ratings of
the novelty and appropriateness of the music produced by the Improvisor
to calculate 18 criteria that collectively
indicate how creative a system is. The most successful Improvisors
(according to this fitness function) are used to seed a new generation of Improvisors, through crossover and mutation operations.
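To make the evolutionary loop concrete, the following Python sketch shows a minimal genetic algorithm of this shape. It is purely illustrative: the parameter names, the \texttt{rate} callback standing in for user-supplied Ritchie-style ratings, and all numeric choices are our own assumptions, not details of {\sf GAmprovising}.

```python
import random

# Hypothetical sketch of a GAmprovising-style evolutionary loop.
# An "Improvisor" is reduced here to a vector of musical parameters.
PARAMS = ["note_range", "preferred_note", "note_length"]

def random_improvisor():
    return {p: random.random() for p in PARAMS}

def fitness(improvisor, rate):
    # `rate` stands in for user-supplied novelty/appropriateness ratings
    # combined into a creativity score; here it is just a callback.
    return rate(improvisor)

def crossover(a, b):
    # Each parameter is inherited from one of the two parents.
    return {p: random.choice([a[p], b[p]]) for p in PARAMS}

def mutate(ind, sigma=0.1):
    # Small Gaussian perturbation of each parameter.
    return {p: v + random.gauss(0, sigma) for p, v in ind.items()}

def evolve(rate, pop_size=20, generations=10):
    population = [random_improvisor() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda i: fitness(i, rate), reverse=True)
        parents = scored[: pop_size // 2]  # fittest Improvisors seed the next generation
        population = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
    return max(population, key=lambda i: fitness(i, rate))

# Toy "user" who prefers all parameter values near 0.7.
best = evolve(lambda ind: -sum((v - 0.7) ** 2 for v in ind.values()))
```

Because the fittest parents carry over unchanged, the best fitness in the population is non-decreasing across generations; the diversity that matters for serendipity comes from the mutation and crossover of the remaining offspring.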
The
{\sf GAmprovising} system can be said to have a \textbf{prepared mind} through its background knowledge of what musical
concepts to embed in the Improvisors and the evolutionary mechanisms used to evolve Improvisors. A \textbf{serendipity trigger} comes from the combination of the mutation and crossover operations
previously employed in the genetic algorithm, and the user input feeding into the fitness function to evaluate produced music. A
\textbf{bridge} is built
through the creation of new Improvisors. The \textbf{results} are the various musical improvisations produced by the fittest Improvisors (as well as, perhaps, the parameters that have been considered fittest).
%
The likelihood of serendipitous evolution is greatly enhanced by the use of random mutation and crossover operations within the genetic algorithm, which increase the diversity of the search space covered by the system during evolution. The \textbf{chance} of any particular pairing of Improvisor and user evaluation arising is low, given the massive dimensions of the search space. The evolution of the population of Improvisors could be described as \textbf{curiosity} about how to satisfy the musical tastes of a particular human user who identifies certain Improvisors as interesting. The system's \textbf{sagacity} corresponds to the likelihood that the user will appreciate a given Improvisor's music (or similar music) over time. One challenge here is that the tastes of the user may change. The \textbf{value} of the results is maximised through employing a fitness function.
Evolutionary systems such as {\sf GAmprovising} necessarily operate in a \textbf{dynamic world} that is evolving continuously and must, in particular, take into account the evolution of the user's tastes. The \textbf{multiple contexts} arise from the possibility of having multiple users evaluate the musical output, or from the user changing their preferences over time. A variant of the system that would cater to multiple users is not yet implemented formally -- a revised system with these features would be curious about the more complex problem of satisfying multiple different users' preferences simultaneously. \textbf{Multiple tasks} are carried out by the system, including evolution of Improvisors, generation of music by individual Improvisors, capturing of user ratings of a sample of the Improvisors' output, and fitness calculations. \textbf{Multiple influences} are captured through the various combinations of parameters that could be set and the potential range of values for each parameter.
%% Table \ref{caseStudies} summarizes how serendipity in such a system can be described in terms of our model.
\paragraph{Recommender systems.}
% Stress distinction between serendipity on the system- vs. serendipity on the user's side.
As discussed in Section \ref{sec:related}, recommender systems are one
of the primary contexts in computing where serendipity is
considered. Most discussions of serendipity in recommender systems focus on suggesting items to a user that are likely to introduce new ideas that are unexpected, but close to what the user is already interested in.
A recommendation of this type will be called (possibly pseudo-)serendipitous. As we noted, these systems mostly focus on
supporting discovery, but some architectures also seem to take account of invention, such as the Bayesian methods surveyed in Chapter 3 of \citeNP{shengbo-guo-thesis}. Recommender systems \emph{stimulate} serendipitous discovery by \emph{simulating} when this is likely to occur. With respect to related work, we therefore have to distinguish serendipity on the
side of the user from serendipity in the system.
As we have indicated, most current research in this area addresses the first aspect and tries to find and assess \textbf{serendipity triggers} by exploiting patterns in the search space. For example,
\citeA{Herlocker2004} as well as
\citeA{Lu2012} associate less popular items with
high unexpectedness. Clustering
is also frequently used to discover latent structures in the search space. For example,
\citeA{Kamahara2005} partition users into clusters of common interest, while
\citeA{Onuma2009} as well as
\citeA{Zhang2011} perform clustering on both users and items. In the work by
\citeA{Oku2011}, the user is allowed to select two items in order to mix their features in
a sort of conceptual
blend.
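As an illustration of the popularity-based approach, the following Python sketch ranks candidate items by a weighted blend of predicted relevance and an unexpectedness term that favours rarely-rated items. The weighting \texttt{alpha}, the toy data, and the function names are our own assumptions rather than the published measures.

```python
from collections import Counter

# Hypothetical sketch: score candidates by combining predicted relevance
# with an unexpectedness term that favours less popular items.

def unexpectedness(item, popularity, n_users):
    # Rarely-rated items are treated as more unexpected.
    return 1.0 - popularity[item] / n_users

def serendipity_score(item, relevance, popularity, n_users, alpha=0.5):
    # alpha trades off relevance against unexpectedness (our choice).
    return alpha * relevance[item] + (1 - alpha) * unexpectedness(item, popularity, n_users)

# Toy rating log: (user, item) pairs; item "a" is rated by every user.
ratings = [("u1", "a"), ("u2", "a"), ("u3", "a"), ("u1", "b"), ("u2", "c")]
popularity = Counter(item for _, item in ratings)
relevance = {"a": 0.9, "b": 0.8, "c": 0.7}  # e.g. from a collaborative filter
n_users = 3

ranked = sorted(relevance,
                key=lambda i: serendipity_score(i, relevance, popularity, n_users),
                reverse=True)
# Item "a" is most relevant but universally known, so it drops in the ranking.
```

The effect is that a mildly less relevant but less popular item can outrank the obvious recommendation, which is the intuition behind the measures cited above.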
Note that
in the course of the evolution of these and other systems it is
generally the system's developers who
plan and perform adaptations; even in the Bayesian case, the system has limited autonomy. Nevertheless, the impetus to develop increasingly autonomous
recommender systems is present, especially in complex domains where hand-tuning is either very cost-intensive or infeasible.
With this challenge in mind, we
want to investigate how serendipity could be achieved on the system side, and potentially be reflected back to the user. In terms of our model, current systems have at least the makings of a \textbf{prepared mind}, comprising both a user model and a domain model, both of which can be updated dynamically. User behaviour (e.g.~following up on recommendations) may serve as a \textbf{serendipity trigger} for the system, and change the way it makes recommendations in the future. A \textbf{bridge} to a new kind of recommendation may be found by pattern matching, and especially by looking for exceptional cases:
when new elements are introduced into the domain which do not cluster well, or different clusters appear in the user model that do not have obvious connections between them. The intended outcome of recommendations depends on the organisational mission, and can in most cases be situated between making money and empowering the user. The serendipitous \textbf{result} on the system side would
be a newly learnt approach that helps to address these goals.
%%%
\begin{table}[ht!]
...
\multicolumn{1}{c}{} & \multicolumn{1}{c}{\textbf{Evolutionary music systems}} & \multicolumn{1}{c}{\textbf{Recommender systems}} \\[-.1in]
\multicolumn{1}{l}{\em Components} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\
\cline{2-3}
\textbf{Serendipity trigger} &
Previous evolutionary operations
together with user input & Input from user behaviour \\
% \cline{2-3}
\textbf{Prepared mind} & Musical knowledge, evolution mechanisms & Through user/domain model \\
% \cline{2-3}
...
\cline{2-3}
\textbf{Chance} & If discovered in huge search space & Through imperfect knowledge/if learning from user behaviour \\
% \cline{2-3}
\textbf{Curiosity} &
Aiming to have a particular user
take note of an Improvisor & Making unusual recommendations \\
% \cline{2-3}
\textbf{Sagacity} & User appreciation of Improvisor over time & Updating
recommendation model after user behaviour \\
% \cline{2-3}
\textbf{Value} & Via fitness function
(as a proxy measure of creativity) & As per business metrics/objectives \\
\cline{2-3}
\multicolumn{1}{l}{\em Factors} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\
\cline{2-3}
\textbf{Dynamic world} & Continuous computational evolution and changes in user tastes & As precondition for testing system's influences on user behaviour\\
%\cline{2-3}
\textbf{Multiple contexts} & Multiple users' opinions -- would change the curiosity profile & User model and domain model\\
% \cline{2-3}
\textbf{Multiple tasks} & Evolving Improvisors, generating music, collecting user input, fitness calculations & Making recommendations, learning from users, updating models \\
% \cline{2-3}
...
\end{tabular}
\par}
\bigskip
\caption{Summary: applying
our computational serendipity model to
two case studies\label{caseStudies}}
\end{table}%
\normalsize
%%%
The imperfect knowledge about the user's preferences and interests represents a main
source of
\textbf{chance}. Combined with the ability to learn, \textbf{curiosity} could be described as the urge to make recommendations specifically for the purposes of finding out more about users, possibly to the detriment of other metrics over the short term. Measures of \textbf{sagacity} would relate to the system's ability to draw inferences from user behaviour. For example, the system might
decide to initiate an A/B
test to decide how
a novel recommendation
strategy influences conversion. The \textbf{value} of recommendation strategies can be measured in terms of traditional business metrics or other organisational objectives.
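A minimal sketch of such an A/B test, under the assumption that conversion is a simple binary outcome per user; the counts and the decision threshold below are invented for illustration.

```python
from math import sqrt

# Hypothetical sketch: deciding whether a novel recommendation strategy (B)
# influences conversion relative to the current one (A), via a
# two-proportion z-test on conversion counts.

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error under H0
    return (p_b - p_a) / se

# Invented counts: 120/2400 conversions under A, 156/2400 under B.
z = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
adopt_b = z > 1.96  # illustrative threshold; a real deployment would choose this carefully
```

A regular enough environment (stable traffic, comparable cohorts) is what makes such a test informative, which is why the dynamic-world factor below matters.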
Recommender systems have to cope with a \textbf{dynamic world} of changing user preference ratings and new items in the system. At the same time, a dynamic environment that nevertheless exhibits some regularity represents a precondition for useful A/B testing. As mentioned above, the primary \textbf{(multiple) contexts} are the user model and the domain model. A system matching the description here would have \textbf{multiple tasks}: making useful recommendations, generating new experiments to learn about users, and building new models. Such a system could avail itself of \textbf{multiple influences} related to
experimental design, psychology, and domain understanding. Table \ref{caseStudies}
summarises how the components, dimensions and
factors of our model
of serendipity can be mapped to
evolutionary music systems
and the ``next-generation'' recommender systems discussed above.
% As a general comment, we would say that this is largely how
% \emph{research and development} of recommender systems works, but
diff --git a/connections.tex b/connections.tex
index 64c26fa..ed8f7a2 100644
--- a/connections.tex
+++ b/connections.tex
...
The features of our model
match and
expand upon Merton's \citeyear{merton1948bearing} description of the ``serendipity pattern.'' $T$ is an unexpected observation; $T^\star$ highlights its interesting or anomalous features and recasts them as ``strategic data''; and, finally, the result $R$ may include updates to $p$ or $p^{\prime}$ that inform further phases of research.
%% Although they do not directly figure in our definition, the supportive
%% dimensions and factors can be interpreted using this schematic to
%% flesh out the description of serendipity in working systems.
From the point of view of the system under consideration, $T$ is
indeterminate. Furthermore, one must assume that relatively few of
...
asks how these features \emph{might} be useful. These routines
suggest the relevance of a computational model of \textbf{curiosity}.
%
Rather than a simple look-up rule, $p^{\prime}$ involves creating new knowledge. A simple example is found in clustering systems, which generate new categories on the fly. A more complicated example, necessary in the case of updating $p$ or $p^{\prime}$, is automatic programming. There is
a need for \textbf{sagacity} in this
sort of undertaking.
%
Judgment of the \textbf{value} of the result $R$ may be carried out
``locally'' (as an embedded part of the process of invention of $R$)
diff --git a/model.tex b/model.tex
index 10ee371..8f65662 100644
--- a/model.tex
+++ b/model.tex
...
\section{Our computational model of serendipity} \label{sec:our-model}
Figure \ref{model-diagram} recapitulates the ideas from the previous
section. Dashed paths show the various things that could go wrong.
The serendipity trigger might not arise, or might not attract
interest. If interest is aroused, a path to a useful result may not
be sought, or if it is sought, may not be found. If a result is
developed, it may turn out not be of value. Prior experience with
related problems may help with the exploration, but may also restrict
innovative thinking. Multiple tasks, influences, and contexts can
play a varied role: they can provide vital material and help to foster
an inventive frame of mind, and
can help send the investigator in a
new and fruitful direction -- but they can also be distractions.
Failures of curiosity or sagacity will undermine the process -- and
although serendipity does not reduce to luck, there is
some luck
involved as well.
\begin{figure}[h!]
...
\caption{A heuristic map of the features of serendipity introduced in
Section \ref{sec:by-example}. The central black line traces first
the process of \emph{discovery} in which an initial trigger combines
with mounting curiosity to effect a \emph{focus shift}, followed by a
process of \emph{invention} in which a prepared mind draws on
various resources and makes use of its powers of sagacity to find a
bridge to a valuable result. In typically chaotic fashion, even
paths that are initially nearby can have very different outcomes:
some end in failure of one form or another, while others yield
results of differing value.}
\label{model-diagram}
...
of the environmental factors listed above.
\begin{quote}
\begin{enumerate}[itemsep=2pt,labelwidth=9em,leftmargin=9em,rightmargin=2em]
\item[\emph{(\textbf{1 - Discovery})}] \emph{Within a system with a prepared mind, a previously uninteresting serendipity trigger arises due to circumstances that the system does not control, and is classified as interesting by the system; and,}
\item[\emph{(\textbf{2 - Invention})}] \emph{The system, by subsequently processing this trigger and background information together with relevant reasoning, networking, or experimental techniques, obtains a novel result that is evaluated favourably by the system or by external sources.}
\end{enumerate}
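The two conditions can be read as a schematic processing pipeline. In this illustrative Python sketch, the predicates \texttt{is\_interesting}, \texttt{invent}, and \texttt{evaluate} are placeholders for system-specific components, and the toy instantiation is our own; it is a reading of the definition, not an implementation of any particular system.

```python
# Illustrative sketch of the two-step definition: (1) discovery -- an
# uncontrolled trigger is reclassified as interesting; (2) invention --
# the trigger plus background knowledge yields a favourably evaluated result.

def serendipitous_episode(trigger, background, is_interesting, invent, evaluate):
    # Step 1: Discovery. The trigger arises outside the system's control;
    # the "prepared mind" is modelled by `background` plus `is_interesting`.
    if not is_interesting(trigger, background):
        return None  # the trigger never attracts interest: no focus shift
    # Step 2: Invention. Process the trigger together with background knowledge.
    result = invent(trigger, background)
    return result if evaluate(result) else None  # result must be valued

# Toy instantiation: in a stream of composite numbers, a prime stands out.
episode = serendipitous_episode(
    trigger=7,
    background={"seen": {4, 6, 8}},
    is_interesting=lambda t, bg: all(t % d for d in range(2, t)),
    invent=lambda t, bg: {"new_pattern": "odd primes", "instance": t},
    evaluate=lambda r: r["instance"] > 2,
)
```

Returning `None` at either step corresponds to the failure paths in Figure \ref{model-diagram}: a trigger that attracts no interest, or a developed result that turns out to have no value.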
diff --git a/recommendations.tex b/recommendations.tex
index b0f3695..3324946 100644
--- a/recommendations.tex
+++ b/recommendations.tex
...
\end{itemize}
\begin{itemize}
\item \textbf{Sociality}:
As Campbell \citeyear{campbell2005serendipity} writes:
``serendipity presupposes a smart mind.'' We may be aided in our
pursuit
of the ``smart mind'' required for serendipity by recalling Turing's proposal that computers should ``be
able to converse with each other to sharpen their wits''
\cite{turing-intelligent}. Other fields, including computer chess,
Go, and argumentation, have achieved this to good effect.
diff --git a/related-work.tex b/related-work.tex
index 1130cb0..a6064d2 100644
--- a/related-work.tex
+++ b/related-work.tex
...
An active research community investigating computational models of serendipity exists in the field of information retrieval, and specifically, in recommender systems \cite{Toms2000}. In this domain, \citeA{Herlocker2004} and \citeA{McNee2006} view serendipity as an important factor for user satisfaction, alongside accuracy and diversity. Serendipity in recommendations is
understood to imply that the system suggests \emph{unexpected} items, which the user considers to be \emph{useful}, \emph{interesting}, \emph{attractive} or \emph{relevant}.
% \cite{Herlocker2004} \cite{Lu2012},\cite{Ge2010}.
Definitions differ as to the requirement of \emph{novelty}; \citeA{Adamopoulos2011}, for example, describe systems that suggest items that may already be known, but are still unexpected in the current context. While
standardised measures such as the $F_1$-score or the (R)MSE are used to determine the \emph{accuracy} of a recommendation (i.e.~the recommended item is very close to what the user is already known to prefer), there is no common agreement on a measure for serendipity yet, although there are several proposals \cite{Murakami2008, Adamopoulos2011, McCay-Peet2011,iaquinta2010can}.
In terms of our model, these systems focus mainly on producing a \emph{serendipity trigger} and predicting the potential for serendipitous discovery on the side of the user. Intelligent user modelling could bring other components of serendipity into play, as we will discuss in Section \ref{sec:computational-serendipity}.
Recent work has examined related topics of \emph{curiosity}
diff --git a/serendipity.tex b/serendipity.tex
index 3579382..3db37c8 100644
--- a/serendipity.tex
+++ b/serendipity.tex
...
\newcommand{\handmark}{\ding{43}}%
\usepackage{lineno}
\usepackage{pagecolor}
\pagecolor{yellow!10!orange!5}
\usepackage[framemethod=tikz]{mdframed}
\mdfsetup{