Joe Corneli try my hand at revising the specs formula  about 9 years ago

Commit id: c64e5e261a9dd11a8dcc8b9f0e12a5fb1ff703b6


\begin{quote}
{\em Using Step 1, clearly state what standards you use to evaluate the serendipity of your system.}
\end{quote}
With our definition in mind, we propose the following standards for
evaluating the serendipity of our system:
%% Serendipity relies on a reassessment or reevaluation -- a \emph{focus shift} in which something that was previously uninteresting, of neutral, or even negative value, becomes interesting.
\begin{quote}
\begin{description}
\item[\emph{(\textbf{A - Definitional characteristics})}] \emph{The
  system can be said to have a \emph{\textbf{prepared mind}},
  consisting of previous experiences, background knowledge, a store of
  unsolved problems, skills, expectations, and (optionally) a current
  focus or goal.  It then processes a \emph{\textbf{serendipity
  trigger}} that is at least partially the result of factors outside
  of its control, including randomness or simple unexpected events.
  The trigger should be determined independently from the end result.
  The system then uses reasoning techniques that support a process of
  invention -- e.g.~abduction, analogy, conceptual blending -- and/or
  social or otherwise externally enacted alternatives to create a
  \emph{\textbf{bridge}} from the trigger to a result.  The
  \emph{\textbf{result}} is evaluated as useful, by the system and/or
  by an external source.}
\item[\emph{(\textbf{B - Dimensions})}] \emph{Serendipity, and its
  various dimensions, can be present to a greater or lesser degree.
  If the criteria above have been met, we generate ratings as
  estimated probabilities in $[0,1]$, along several dimensions:
  %
  \emph{($\mathbf{a}$ - \textbf{chance})} How likely was this trigger
  to appear to the system?
  %
  \emph{($\mathbf{b}$ - \textbf{curiosity})} On a population basis,
  comparing similar circumstances, how likely was the trigger to be
  identified as interesting?
  %
  \emph{($\mathbf{c}$ - \textbf{sagacity})} On a population basis,
  comparing similar circumstances, how likely was it that a similar
  trigger would be turned into a result?
  %
  Finally, we ask, again comparing similar results where possible:
  \emph{($\mathbf{d}$ - \textbf{value})} How valuable is the result
  that is ultimately produced?}
  \begin{itemize}
  \item \emph{Then $\mathbf{a}\times\mathbf{b}\times\mathbf{c}$ gives
    a likelihood score: low likelihood and high value is the criterion
    we use to say that the event was ``highly serendipitous.''}
  \end{itemize}
\item[\emph{(\textbf{C - Factors})}] \emph{Finally, if the criteria
  from Part A are met, and if the event is deemed ``highly
  serendipitous'' according to the criteria in Part B, then in order
  to deepen our qualitative understanding of the serendipitous
  behaviour, we ask: To what extent does the system exist in a
  \emph{\textbf{dynamic world}}, spanning \emph{\textbf{multiple
  contexts}}, featuring \emph{\textbf{multiple tasks}}, and
  incorporating \emph{\textbf{multiple influences}}?}
\end{description}
\end{quote}

\subsubsection*{Step 3: Testing our serendipitous system}
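Before testing, it helps to make the Part B ratings concrete. The
following Python sketch is our own illustration: the class name, field
names, and numeric cutoffs are assumptions, since the definition above
fixes no thresholds.

```python
from dataclasses import dataclass

@dataclass
class SerendipityRatings:
    """Part B ratings: estimated probabilities in [0, 1]."""
    chance: float     # (a) how likely was the trigger to appear to the system?
    curiosity: float  # (b) how likely was the trigger to be found interesting?
    sagacity: float   # (c) how likely was a similar trigger to become a result?
    value: float      # (d) how valuable is the result ultimately produced?

    def likelihood(self) -> float:
        # a x b x c gives the likelihood score from Part B
        return self.chance * self.curiosity * self.sagacity

    def highly_serendipitous(self, likelihood_cutoff: float = 0.01,
                             value_cutoff: float = 0.5) -> bool:
        # "low likelihood and high value"; both cutoffs are illustrative
        return self.likelihood() < likelihood_cutoff and self.value > value_cutoff

# a rare trigger, moderately likely to be noticed and exploited, high value:
ratings = SerendipityRatings(chance=0.05, curiosity=0.4, sagacity=0.3, value=0.9)
# likelihood = 0.05 * 0.4 * 0.3 = 0.006, value = 0.9: highly serendipitous
```

In practice the ($\mathbf{b}$) and ($\mathbf{c}$) ratings would be
estimated on a population basis, by comparing similar circumstances
across many runs of the system.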

%% case studies and thought experiments in terms of this model.
%
We used this model to examine several partial examples of serendipity,
in recommender systems, computerised jazz, and computational concept
invention.  We then presented a thought experiment that exhibits all
of the features of our model.
%
%% Section \ref{sec:discussion} offers recommendations for researchers
%% working in computational creativity (a key research area concerned
%% with the computational modelling of serendipity), and describes our
%% own plans for future work.
We then extracted several corollaries of our definitions, which
outline a paradigm for serendipitous computing rooted in
\emph{autonomy}, \emph{learning}, \emph{sociality}, and
\emph{embedded evaluation}.
%% Section \ref{sec:conclusion} reviews the argument and summarises the
%% limitations of our analysis.
% What answers have we offered?
The ideas presented in this article point to several possible
directions for implementation, and to further theoretical questions
about programming with design patterns.  In any case, considerable
concrete work remains to be done in order to realise our model in
code.  Even our hand-picked examples of prior art pale in comparison
to the examples of serendipitous discovery and invention from human
history.  It would seem that a fully automated system that can
realistically be said to behave in a highly serendipitous manner has
not yet been built.
%% \textbf{[Actually, that depends on what we say in the SPECS
%% section, let's check.]}
% Further questions
Nevertheless, the theoretical work in this paper shows that it is
indeed possible to plan -- and program -- for serendipity.

% \section{Connections} \label{sec:connections-to-formal-definition}
The features of our model match and expand upon Merton's
\citeyear{merton1948bearing} description of the ``serendipity
pattern.''  $T$ is an unexpected observation; $T^\star$ highlights its
interesting or anomalous features and recasts them as ``strategic
data''; and, finally, the result $R$ may include updates to $p$ or
$p^{\prime}$ that inform further phases of research.

Although they do not directly figure in our definition, the supportive

The \textbf{bridge} comprises the actions based on $p^{\prime}$ that
are taken on $T^\star$, leading to the \textbf{result} $R$, which is
ultimately given a positive evaluation.
%% Here, $T$ is the trigger and $p$ denotes those preparations that afford the
%% classification $T^\star$, indicating $T$ to be of interest, while
%% $p^{\prime}$ denotes the preparations that facilitate the creation of a
%% bridge to a result $R$, which is ultimately given a positive
%% evaluation.
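Read as a pipeline, these roles suggest a minimal sketch in Python. The
types and names below are our own hypothetical framing of $T$,
$T^\star$, $p$, $p^{\prime}$ and $R$, not an implementation given in
the paper.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SerendipityPattern:
    # p: preparations that classify a trigger T as strategic data T*
    classify: Callable[[object], Optional[object]]
    # p': preparations that bridge from T* to a result R
    bridge: Callable[[object], object]
    # positive evaluation of R
    evaluate: Callable[[object], bool]

    def run(self, trigger: object) -> Optional[object]:
        strategic = self.classify(trigger)   # focus shift: T -> T*
        if strategic is None:
            return None                      # trigger never became interesting
        result = self.bridge(strategic)      # actions based on p' taken on T*
        return result if self.evaluate(result) else None

# toy example: anomalously large readings are "strategic"; the bridge doubles them
pattern = SerendipityPattern(
    classify=lambda t: t if t > 10 else None,
    bridge=lambda s: s * 2,
    evaluate=lambda r: r > 0,
)
```

The separation into three callables mirrors the distinct roles of $p$
and $p^{\prime}$: noticing a trigger and exploiting it are different
preparations, and either can fail independently.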

% Dewey, Whitehead similar too.
Our thought experiment in Section \ref{sec:ww} develops a design
illustrating the relationship between creativity at the level of
artefacts (e.g.~new poems) and creativity at the level of
\emph{problem specification}.  The search for connections that make
raw data into ``strategic data'' is an appropriate theme for
researchers in computational creativity to grapple with.

%% detection and outlier analysis are part of the standard machine
%% learning toolkit, but it seems
\citeA{stakeholder-groups-bookchapter} outlined a general programme
for computational creativity, and examined perceptions of creativity
in computational systems found among members of the general public,
Computational Creativity researchers, and creative communities --
understood as human communities.  We should now add a fourth important
``stakeholder'' group in computational creativity research: computer
systems themselves.  Creativity may look very different to this fourth
stakeholder group than it looks to us.  We should help computers
evaluate their own results and creative process.
%% These ideas set a relatively high bar, if only because computational
%% creativity has often been focused on generative rather than reflective

The Writers Workshop described in Section \ref{sec:ww} is an example
of one such social model, but more fundamentally, it is an example of
\emph{learning from experience}.  The Workshop model ``personifies''
the wider world in the form of one or several critics.  It is clearly
also possible for a lone creative agent to take its own critical
approach in relation to the world at large, using an experimental
approach to generate feedback, and then looking for models to fit this
feedback.
%% While the pursuit of serendipitous findings may not enhance,
%% and may even diminish, results from a computationally creative system
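That experimental loop -- act, collect feedback, fit a model to the
feedback -- can be sketched as follows; the function and parameter
names are our own illustration, not a system described in the paper.

```python
def experiment_feedback_loop(act, observe, fit, rounds=20):
    """A lone agent's critical approach: generate feedback
    experimentally, then look for models that fit it."""
    history = []
    model = None
    for _ in range(rounds):
        action = act(model)          # choose an experiment, possibly model-guided
        feedback = observe(action)   # the world's response
        history.append((action, feedback))
        model = fit(history)         # refit a model to all feedback so far
    return model

# toy world: feedback is positive exactly for even-numbered actions;
# the "model" is just the observed fraction of positive feedback
counter = iter(range(100))
model = experiment_feedback_loop(
    act=lambda m: next(counter),
    observe=lambda a: a % 2 == 0,
    fit=lambda h: sum(f for _, f in h) / len(h),
)
```

A Workshop-style social model replaces `observe` with critics'
responses; the loop itself is the same.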

\subsubsection*{Serendipity as a framework for computational creativity}

\begin{itemize}
\item \textbf{Autonomy}: In the standard cybernetic model, we control
  computers, and we control the computer's context.  There is little
  room for serendipity because there is nothing outside of our direct
  control.  Von Foerster \citeyear[p. 286]{von2003cybernetics}
  advocated a \emph{second-order cybernetics} in which ``the observer
  who enters the system shall be allowed to stipulate his own
  purpose.''  An eventual corollary of serendipitous operation of
  computers will be that \emph{Computational agents can specify their
  own problems to work on.}
\item \textbf{Learning}: If we admit the possibility of computational
  agents that operate in our world rather than a circumscribed
  microdomain, and that are curious about this world, then another
  corollary is that \emph{Computational agents will learn more and
  more about the world we live in.}
\item \textbf{Sociality}: Deleuze \citeyear[p. 26]{deleuze1994difference}
  wrote: ``We learn nothing from those who say: `Do as I do'.  Our
  only teachers are those who tell us to `do with me'[.]''  Turing
  recognised that computers would have to be coached in the direction
  of social learning, but that once they attain that standard they
  will learn much more quickly.  A third corollary of serendipitous
  computing is that \emph{Computational agents will interact in a
  recognisably social way with us and with each other.}
\item \textbf{Embedded evaluation}: Finally, the fourth corollary is
  that \emph{Computational agents will evaluate their own creative
  process and products.}
\end{itemize}