diff --git a/3model.tex b/3model.tex
index 7e18dea..9766cc1 100644
--- a/3model.tex
+++ b/3model.tex
...
model. The model is summarised in text form at the end of this
section, in our working definition of serendipity.
%% Figure \ref{fig:1a} is a heuristic map of the features of serendipity
%% introduced in Section \ref{sec:by-example}.
%% %
%% Dashed paths ending in `\ymark' show some of the things that can go
%% wrong.
It is worth remarking that many things might go wrong.
%
A serendipity trigger might not arise, or might not attract interest.
If interest is aroused, a path to a useful result may not be sought,
or may not be found. If a result is developed, it may turn out to be
of little value. Prior experience with a related problem could be
informative, but could also hamper innovation. Similarly, multiple
tasks, influences, and contexts can help to foster an inventive frame
of mind, but they may also be distractions.
Figure \ref{fig:1b} ignores these unserendipitous possibilities to
focus on the key features of ``successful'' serendipity.
%
The \textbf{trigger} is denoted here by $T$.
%
...
The \textbf{bridge} comprises the actions based on $p^{\prime}$ that
are taken on $T^\star$, leading to the \textbf{result} $R$, which is
ultimately given a positive evaluation.
\afterpage{\clearpage}
\begin{figure}[p]
\vspace{2mm}
\captionsetup[subfigure]{justification=centering}
%% \begin{minipage}[b]{\textwidth}
%% {\centering
%% \input{heuristic-map-tikz}
%% \par}
%% \vspace{-4mm}
%% \subcaption{A heuristic map, showing serendipitous and unserendipitous outcomes}\label{fig:1a}
%% \end{minipage}
%% \medskip
\begin{subfigure}{\textwidth}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{minipage}[b]{\textwidth}
{\centering
\input{schematic-tikz}
%\includegraphics[width=.8\textwidth]{schematic}
\par}
%\subfloat[A simplified process schematic, showing the key features of the model: the trigger, prepared mind, focus shift, and result][A simplified process schematic, showing the key features of the model:\newline the trigger, prepared mind, focus shift, and result]
\subcaption{A simplified process schematic, showing the key features
of the model:\newline the trigger ($T$), prepared mind
($p$, $p^\prime$), focus shift ($T^\star$), and result ($R$)}
\label{fig:1b}
\end{minipage}
\end{subfigure}
\bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{minipage}[b]{\textwidth}
...
\end{minipage}
\bigskip
\caption{Schematic representations of a serendipitous
process}\label{fig:model}
\end{figure}
Figure \ref{fig:1c} expands this schematic into a sketch of the
...
Serendipity does not adhere to one specific part of the system, but to
its operations as a whole.
Although Figures \ref{fig:1b} and \ref{fig:1c} treat the case of
successful serendipity, each step is fallible, as the earlier remarks
suggest, and so is the system as a whole. Thus, for example, a
trigger that has been initially tagged as interesting may prove to be
fruitless in the verification stage. Similarly, a system that
implements all of the steps in Figure \ref{fig:1c}, but that for
whatever reason is never able to achieve a result of significant
value, cannot be said to have potential for serendipity. However, a
system that only produces results of high value would also be
suspect, since this would indicate a tight coupling between trigger
and outcome.
Fallibility is a ``meta-criterion'' that transcends the criteria from
Section \ref{sec:by-example}. Summarising, we propose the following:
%\vspace{.5cm}
diff --git a/6SPECS.tex b/6SPECS.tex
index ed25610..096bc82 100644
--- a/6SPECS.tex
+++ b/6SPECS.tex
...
%
%Then combining $\mathbf{a}\times\mathbf{b}\times\mathbf{c}$ gives a
% likelihood score:
\begin{mdframed}
\vspace{.1cm} {\textbf{\emph{Likelihood score and ruling.}} Low but
nonzero likelihood $\mathbf{a}\times\mathbf{b}\times\mathbf{c}$ and
high value $\mathbf{d}$ imply that the event was ``serendipitous.''
In other conditions, the event was ``unserendipitous.''}
\end{mdframed}
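The ruling can be read as a simple decision procedure. The following
sketch is our own illustration; the numeric scales and thresholds are
assumptions introduced only to make the procedure concrete, since
SPECS does not require explicit numerical values.

```python
# Illustrative sketch of the SPECS likelihood score and ruling.
# The [0, 1] scales and the `low`/`high` thresholds are assumptions,
# not part of the formal methodology.

def ruling(a, b, c, d, low=0.1, high=0.7):
    """a, b, c: likelihood factors in (0, 1]; d: value in [0, 1]."""
    likelihood = a * b * c
    if 0 < likelihood <= low and d >= high:
        return "serendipitous"
    return "unserendipitous"
```

For instance, with $\mathbf{a}\times\mathbf{b}\times\mathbf{c} = 0.2
\times 0.5 \times 0.3 = 0.03$ and $\mathbf{d} = 0.9$, the ruling is
``serendipitous.''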
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\item[{(\textbf{C - Factors})}] {Finally, if the criteria from Part A
are met, and if the event is deemed sufficiently serendipitous to
...
\subsection{Heuristics}\label{specs-heuristics}
\textbf{\emph{Choose relevant populations to produce a useful
estimate.}} It isn't necessary to assign explicit numerical
values to $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$, and $\mathbf{d}$,
although that can be done if desired. More typically -- and in all of
the examples that follow -- all that is required is to select a
relevant population in order to make an estimate. With a population
of one, there is no basis for comparison, whereas in a huge
population, the chance of any specific outcome is likely to be
vanishingly small. The aim is to highlight what -- if anything -- is
special about the potentially serendipitous development pathway, in
comparison to other possible paths. Thus, we might compare Fleming to
other lab biologists, and Goodyear to other chemists. Even if we were
to shift the analysis and look at the much smaller populations of
experimental pathologists or inventors with an interest in rubber,
Fleming and Goodyear would have features that stand out, particularly
when it comes to their curiosity.
\textbf{\emph{Find the salient features of the trigger.}} How can we
estimate the chance of the trigger appearing, if every trigger is
unique? Consider de Mestral's encounter with burrs. The chance of
encountering burrs while out walking is
\emph{high}: many people have
had that experience. The unique features of de Mestral's experience
are that he had the curiosity to investigate the burrs under a
microscope, and the sagacity (and tenacity) to turn what he discovered
into a successful product. The details of the particular burrs that
were encountered are essentially irrelevant. This shows that it is
not essential for all factors contributing to the likelihood score to
be
``\emph{low}'' in order for a given process of discovery and
invention to be deemed serendipitous. In the general case, we are not
interested in the chance of encountering a particular object or set of
data. Rather, we are interested in the chance of encountering some
trigger that could precipitate an interested response. The trigger
itself may be a complex object or event that takes place over a period
of time; in other words, it may be a pattern, rather than a fact.
Noticing patterns is a key aspect of sagacity, as well.
\textbf{\emph{Look at long-term behaviour.}} Although it is in no way
required by the SPECS methodology outlined above, many systems
(including all of the examples below) have an iterative aspect. This
means that a result may serve as a trigger for further discovery. In
such a case, further indeterminacy may need to be introduced to the
system, lest the results be convergent, and therefor, infallible. In
applying the critera to such systems, we consider long-term behaviour.
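The point about convergence can be made concrete with a toy loop, in
which each result seeds the next trigger. The ``develop'' step and
noise model here are hypothetical constructions of our own, not any
of the systems discussed in this article.

```python
import random

# Toy iterative discovery loop: each result becomes the next trigger.
# With no injected indeterminacy (noise=0) the trajectory converges to
# a fixed point; with noise>0 long-term behaviour remains varied.

def develop(trigger, noise, rng):
    # Hypothetical development step: move halfway towards a local
    # optimum (here, 0), optionally perturbed by noise.
    return trigger / 2 + rng.uniform(-noise, noise)

def trajectory(seed_trigger, noise, steps=20, rng=None):
    rng = rng or random.Random(0)
    results = []
    t = seed_trigger
    for _ in range(steps):
        t = develop(t, noise, rng)  # the result becomes the next trigger
        results.append(t)
    return results
```

With \texttt{noise=0} the results collapse towards the fixed point,
illustrating why added indeterminacy is needed to keep an iterative
system fallible.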
diff --git a/8cc.tex b/8cc.tex
index 6392cd3..c583c12 100644
--- a/8cc.tex
+++ b/8cc.tex
...
to analyse three examples of potentially serendipitous behaviour:
dynamic investigation problems, model generation, and poetry
flowcharts. Using our updated criteria, we discuss two new examples
below, and revisit poetry flowcharts, reporting on recent work and
outlining the next steps. The three case studies respectively apply
the criteria to \emph{evaluate} an existing system, \emph{design} a
new experiment, and \emph{frame} a ``grand challenge.'' In the first
case study, the system that we evaluate turns out not to be
particularly serendipitous according to our criteria. This helps to
show that our definition is not overly inclusive. As Campbell
\citeyear{campbell2005serendipity} writes, ``serendipity presupposes
a smart mind,'' and each of these examples suggests potential
directions for further work in computational intelligence.
%% If the system learns an $N$th fact or
%% If applied to a system which could be described as minimally
...
musical parameters. Greater dynamism in future versions of the system
would be likely to increase its potential for serendipity.
\subsection{Case Study: Iterative design in automated programming} \label{sec:flowchartassembly}
...
are another place where domain-specific knowledge can be brought to
bear.
\subsection{Case Study: Envisioning artificially intelligent recommender systems} \label{sec:nextgenrec}
\subsubsection{System description}
% Stress distinction between serendipity on the system- vs. serendipity on the user's side.
Recommender systems are one of the primary contexts in computing where
serendipity is currently discussed. In the context of the current
recommender system literature, `serendipity' means suggesting items to
a user that will be likely to introduce new ideas that are unexpected,
but that are close to what the user is already interested in. These
systems mostly focus on supporting \emph{discovery} for the user --
but some architectures also seem to take account of \emph{invention}
of new methods for making recommendations, e.g.~by using Bayesian
methods, as surveyed in \citeNP{shengbo-guo-thesis}. Current
recommendation techniques that aim to stimulate serendipitous
discovery associate less popular items with high unexpectedness
\cite{Herlocker2004,Lu2012}, and use clustering to discover latent
structures in the search space, e.g., partitioning users into clusters
of common interests, or clustering users and domain objects
\cite{Kamahara2005,Onuma2009,Zhang2011}. But even in the Bayesian
case, the system has limited autonomy. A case for giving more
autonomy to recommender systems can be made, especially in complex and
rapidly evolving domains where hand-tuning is cost-intensive or
infeasible. This suggests the need to distinguish serendipity that
the recommender induces for the user from serendipity that user
behaviour induces in the system.
\subsubsection{Application of criteria}
With this challenge in mind, we ask how serendipity could be achieved
within a next-generation recommender system. In terms of our model,
current systems have at least the makings of a \textbf{prepared mind},
comprising both a user- and a domain model, both of which can be
updated dynamically. User behaviour (e.g.~following certain
recommendations) or changes to the domain (e.g.~adding a new product)
may serve as a potential \textbf{trigger} that could ultimately cause
the system to discover a new way to make recommendations in the
future. In the current generation of systems that seek to induce
serendipity for the user, the system aims to induce a focus shift by
presenting recommendations that are neither too close, nor too far
away from what the user already knows. In the scenario we envision,
the flow of information runs the other way. Note, however, that it is
an unexpected pattern of
behaviour in aggregate, rather than a one-off event, that is likely to
provide grounds for the system's \textbf{focus shift}. A
\textbf{bridge} to a new kind of recommendation could be created by
looking at exceptional patterns as they appear over time. For
instance, new elements may have been introduced into the domain that
do not cluster well, or a user may suddenly indicate a strong
preference towards an item that does not fit their preference history.
Clusters may appear in the user model that do not have obvious
connections between them. A new recommendation strategy that
addresses the organisation's goals would be a valuable
\textbf{result}.
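As a minimal sketch of how such a bridge might begin, consider
flagging elements that sit far from every cluster centroid as
candidate triggers. The clustering setup, distance measure, and
threshold below are illustrative assumptions of ours, not a proposal
drawn from the recommender-systems literature cited above.

```python
# Sketch: treat poorly-clustering elements as candidate triggers for a
# focus shift.  Items and centroids are points in a feature space; the
# feature space and threshold are hypothetical.

def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def candidate_triggers(items, centroids, threshold):
    """Return items whose distance to the nearest centroid exceeds threshold."""
    return [x for x in items
            if min(euclidean(x, c) for c in centroids) > threshold]
```

For example, given centroids at $(0,0)$ and $(10,10)$, an item at
$(5,5)$ fits neither cluster and would be flagged, while items near
either centroid would not.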
The system has only imperfect knowledge of user preferences and
interests. At least relative to current recommender systems, the
\textbf{chance} of noticing some particular pattern in user behaviour
seems quite low. The urge to make recommendations specifically for
the purposes of finding out more about users could be described as
\textbf{curiosity}. Such recommendations may work to the detriment of
user satisfaction -- and business metrics -- over the short term. In
principle, the system's curiosity could be set as a parameter,
depending on how much coherence is permitted to suffer for the sake of
gaining new knowledge. Measures of \textbf{sagacity} would relate to
the system's ability to develop useful experiments and draw sensible
inferences from user behaviour. For example, the system would have to
select the best time to initiate an A/B test. A significant amount of
programming would have to be invested in order to make this sort of
judgement autonomously, and such systems remain exceedingly rare.
The \textbf{value} of recommendation strategies can be measured in
terms of traditional business metrics or other organisational
objectives.
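One simple way to operationalise the curiosity parameter described
above is the standard $\varepsilon$-greedy exploration scheme. This
is a sketch under our own assumptions: the item scores and the choice
of exploratory policy are hypothetical, not features of any deployed
recommender.

```python
import random

# Sketch: curiosity as an explicit exploration parameter.  With
# probability `curiosity` the system makes an unusual recommendation in
# order to learn about the user; otherwise it exploits its current
# best prediction.  Scores and the exploratory policy are hypothetical.

def recommend(scores, curiosity, rng=None):
    """scores: dict mapping item -> predicted preference in [0, 1]."""
    rng = rng or random.Random()
    if rng.random() < curiosity:
        # Exploratory choice: here, the least-preferred (least
        # understood) item; other policies are possible.
        return min(scores, key=scores.get)
    return max(scores, key=scores.get)
```

Setting \texttt{curiosity} to zero recovers a purely exploitative
recommender; raising it trades short-term coherence for new knowledge
about the user, as discussed above.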
\subsubsection{Ruling}
In this case, we compute a likelihood measure of
$\mathit{low}\times\mathit{variable}\times\mathit{low}$, with outcomes
of potentially high value, so that such a system is ``potentially
highly serendipitous.'' Realising such a system should be understood
as a computational grand challenge. If such a system were ever
realised, continued adaptation would be required to maintain high
value. If there were a population of super-intelligent systems along
the lines envisioned here, the likelihood measures would have to be
rescaled accordingly.
\subsubsection{Qualitative assessment}
Recommender systems have to cope with a \textbf{dynamic world} of changing user preferences and a changing collection of items to recommend. A dynamic environment which exhibits some degree of regularity is a precondition for useful A/B testing. The system's \textbf{multiple contexts} include the user model, the domain model, and an evolving model of its own organisation. A system matching the description here would have \textbf{multiple tasks}: making useful recommendations, generating new experiments to learn about users, and improving its models. In order to make effective decisions, a system would have to avail itself of \textbf{multiple influences} related to experimental design, psychology, and domain understanding. Pathways for user feedback that go beyond answers to the question ``Was this recommendation helpful?'' could be one way to make the relevant expertise available.
\afterpage{\clearpage}
\begin{table}[p]
{\centering \renewcommand{\arraystretch}{1.5}
\scriptsize
\begin{tabular}{p{1.5in}@{\hspace{.1in}}p{1.5in}@{\hspace{.1in}}p{1.5in}}
\multicolumn{1}{c}{\textbf{{\footnotesize Evolutionary music}}}
& \multicolumn{1}{c}{\textbf{{\footnotesize Flowchart assembly}}}
& \multicolumn{1}{c}{\hspace{-.3cm}\textbf{{\footnotesize Next-gen.~recommenders\hspace{.3cm}}}}
\\[.05in]
\multicolumn{3}{l}{\em {\textbf{Condition}}} \\
\cline{1-3}
\multicolumn{3}{l}{\em Focus shift} \\[-.1cm]
Driven by (currently, human) evaluation of samples
& Find a pattern to explain a successful combination of nodes
& Unexpected behaviour in the aggregate \\ \\
\cline{1-3}
~\\[-.1cm]
\multicolumn{3}{l}{\em {\textbf{Components}}} \\
...
\multicolumn{3}{l}{\em Trigger} \\[-.1cm]
% \textbf{Trigger}
Previous evolutionary steps, in combination with user input
& Trial and error in combinatorial search
& Input from user behaviour \\
% \cline{1-3}
\multicolumn{3}{l}{\em Prepared mind} \\[-.1cm]
% \textbf{Prepared mind}
Musical knowledge, evolution mechanisms
& Constraints on node inputs and outputs; history of successes and failures
& Through user/domain model\\
% \cline{1-3}
%\textbf{Bridge}
\multicolumn{3}{l}{\em Bridge} \\[-.1cm]
Newly-evolved Improvisors
& Try novel combinations
& Elements identified outside clusters\\
% \cline{1-3}
%\textbf{Result}
\multicolumn{3}{l}{\em Result} \\[-.1cm]
Music generated by the fittest Improvisors
& Non-empty or more highly qualified output
& Dependent on organisation goals \\ \cline{1-3}
~\\[-.1cm]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\multicolumn{3}{l}{\em \textbf{Dimensions}} \\
...
%\textbf{Chance}
\multicolumn{3}{l}{\em Chance} \\[-.1cm]
Looking for rare gems in a huge search space
& Changing state of the outside world; random selection of nodes to try
& Imperfect knowledge of user preferences and behaviour\\
% \cline{1-3}
%\textbf{Curiosity}
\multicolumn{3}{l}{\em Curiosity} \\[-.1cm]
Aiming to have a particular user take note of an Improvisor
& Search for novel combinations
& Making unusual recommendations\\
% \cline{1-3}
%\textbf{Sagacity}
\multicolumn{3}{l}{\em Sagacity} \\[-.1cm]
Enhance user appreciation of Improvisor over time, using a fitness function
& Don't try things known not to work; consider variations on successful patterns
& Update recommendation model after user behaviour \\
% \cline{1-3}
%\textbf{Value} &
\multicolumn{3}{l}{\em Value} \\[-.1cm]
Via fitness function (as a proxy measure of creativity)
& Currently ``non-empty results''; more interesting evaluation functions possible
& Per business metrics/objectives\\
\cline{1-3}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
~\\[-.1cm]
...
%\textbf{Dynamic world}
\multicolumn{3}{l}{\em Dynamic world} \\[-.1cm]
Changes in user tastes
& As a precondition for testing the system's influences on user behaviour
& Changing data sources and growing domain knowledge\\
%\cline{1-3}
%\textbf{Multiple contexts}
\multicolumn{3}{l}{\em Multiple contexts} \\[-.1cm]
Multiple users' opinions would change what the system is curious about and require greater sagacity
& Interaction between different heuristic search processes would increase unexpectedness
& User model, domain model, model of its own behaviour\\
% \cline{1-3}
%\textbf{Multiple tasks}
\multicolumn{3}{l}{\em Multiple tasks} \\[-.1cm]
Evolve Improvisors, generate music, collect user input, carry out fitness calculations
& Generate new heuristics and new domain artefacts
& Make recommendations, learn from users, update models\\
% \cline{1-3}
%\textbf{Multiple influences}
\multicolumn{3}{l}{\em Multiple influences} \\[-.1cm]
Through programming of fitness function and musical parameter combinations
& Learning to combine new kinds of ProcessNodes
& Experimental design, psychology, domain understanding\\
\cline{1-3}
\end{tabular}
\par}
diff --git a/biblio.bib b/biblio.bib
index b237c12..4be8dba 100644
--- a/biblio.bib
+++ b/biblio.bib
...
pages={3--23},
year={2014},
publisher={Springer}
}
@book{slack2003noble,
title={Noble Obsession: Charles Goodyear, Thomas Hancock, and the Race to Unlock the Greatest Industrial Secret of the 19th Century},
author={Slack, Charles},
year={2003},
publisher={Hyperion}
}