\section{Discussion} \label{sec:discussion}

In the preceding section, we applied our model to evaluate the serendipity of an evolutionary music improvisation system, a hypothetical class of next-generation recommender systems, and a system for assembling flowcharts. The model has helped to highlight directions for development that would increase the potential for serendipity in existing systems, either incrementally or more transformatively. Our model suggested case studies of systems that can observe events that would otherwise not be observed, take an interest in them, and transform the observations into artefacts with lasting value. We will now discuss implications of our findings for future research, and outline potential next steps.
%\input{12a-recommendations}
%\input{12b-future-work-intro}

\begin{itemize}  \item \textbf{Embedded evaluation}:  \citeA{stakeholder-groups-bookchapter} outline a general programme  for computational creativity, and examine perceptions of computational creativity among members of the general public, computational creativity researchers, and existing creative  communities. We should now add a fourth important ``stakeholder''  group in computational creativity research: computer systems 

make evaluations in a way that is both reasonable and ethical. This condition is exemplified by the preference for a ``non-zero sum'' criterion for value introduced in Section \ref{sec:by-example}. Similar judgements apply when the ``product'' is a new process. Within a Kantian framework, ``an agent's moral maxims are instances of universally-quantified propositions which could serve as moral laws -- ones holding for any agent'' \cite{powers2005deontological}. Embedded evaluation has immediate pragmatic as well as broader philosophical implications; thus, for example, the latest implementation of {\sf GAmprovising} is limited because it is ``poor at using reasoned self-evaluation''

the steps involved, we see that the creation of a new design pattern is somewhat serendipitous (Figure \ref{fig:pattern-schematic}; compare Figure \ref{fig:1b}).
%
To van Andel's assertion that ``The very moment I can plan or programme `serendipity' it cannot be called serendipity anymore,'' we reply that we can certainly describe patterns -- and programs -- with built-in indeterminacy. Moreover, we can foster circumstances that may make an unexpected happy outcome more likely. As discussed above, autonomy, learning, sociality, and embedded evaluation are four themes that are closely linked to serendipity; supporting them should improve measurements of chance, curiosity, sagacity, and value. In addition, designers can draw on the four supportive environmental factors introduced in Section \ref{sec:by-example}. Figure \ref{fig:va-pattern-figure} illustrates one approach to planning for serendipity, based on rewriting one of van Andel's serendipity patterns using the standard design pattern template. In future work, we intend to build a more complete serendipity pattern language -- and put it to work within autonomous programming systems.
% Is ``having a stretch goal'' an example of a serendipity pattern? I think so!
\begin{figure}
\input{pattern-schematic-tikz.tex}

\caption{Standard design pattern template applied to van Andel's \emph{Successful error}\label{fig:va-pattern-figure}}
\end{figure}

%
We indicate several possible further directions for implementation work in each of our case studies. We have also drawn attention to theoretical questions related to program design, with applications to broader design considerations. Our examples show that serendipity is not foreign to computing practice. There are further gains to be had for research in computing by planning -- and programming -- for serendipity.
%

apprehension, and concern surrounding such systems, which \citeA{machine-ethics-status} suggest is largely focused around the question: will these systems behave in an ethical manner? The more we constrain the system's operation, the less chance there is of it ``running off the rails.'' However, constraints come with a serious downside. Highly constrained systems will not be able to \emph{learn} anything very new while they operate. If this means that the system's ethical judgement is fixed once and for all, we cannot trust it to behave ethically if circumstances change \cite{powers2005deontological}. Highly constrained systems are unlikely to be convincingly \emph{social}, if emergent behaviour is ruled out in advance. Systems that only act normatively (that is, pursuing purposes for which they have been pre-programmed) serve as proxies for their creator's judgements, and do not make \emph{evaluations} that are meaningfully ``their own.'' Section \ref{sec:computational-serendipity} develops three case studies considering the potential for serendipity in extant and hypothetical computer systems. We return to these case studies using the themes of autonomy, learning, sociality, and embedded evaluation in Section \ref{sec:discussion}. With Rao's \citeyearpar{rao2015breaking} useful gloss ``happy surprise'' in mind: systems that are able to reason about serendipity (and unserendipity) will be able to distinguish between ``happy surprises'' and ``unhappy surprises'' and make decisions accordingly. In this way, a framework for evaluating serendipity may be an important prerequisite to the development of machine ethics. If serendipity were ruled out as a matter of principle, computing would be restricted to \emph{unsurprises}, interspersed with unhappy surprises.
\citeA{rao2015breaking} uses the term \emph{zemblanity} -- after William Boyd \citeyearpar{boyd2010armadillo}: ``zemblanity, the opposite of serendipity, the faculty of making unhappy, unlucky and expected discoveries by design'' -- to describe systems that are doomed to produce only unhappy unsurprises. This is the fate of systems that are tied inextricably to a fixed vision, from which any deviation constitutes a mistake. This condition stands in sharp contrast with the ``second-order cybernetics'' introduced by \citeA{von2003cybernetics}, which envisions systems that are able to specify, and adapt, their own purpose. It also contrasts with Taleb's \citeyearpar{taleb2012antifragile} notion of ``antifragility,'' in which disturbances within a certain range strengthen the system. In fact, any sufficiently complex computational system is bound to make decisions that its creators could not foresee, and may not fully understand \cite{minsky1967programming}. Demonstrably graceful behaviour in response to surprises, and a preference for ``happy'' as opposed to ``unhappy'' outcomes, may be prerequisites for the development of autonomous systems that are worthy of our trust. Less controversial than ``programmed serendipity'', but no less worthy of study, is serendipity that arises in the course of user

vision of a snake biting its tail). The bridge may be non-conceptual, relying on new social arrangements, or physical prototypes. It may have many steps, and, like the trigger, it may feature chance elements. Several serendipitous episodes may be chained together in sequence, on the way to an unprecedented result. C\'edric Villani \citeyear[p.~16]{birth-of-a-theorem} describes a hallway conversation with his colleague \'Etienne Ghys, who said ``I didn't really want to say anything, C\'edric, but those figures there on the board -- I've seen them before.''
\end{itemize}
\begin{itemize}
\item \textbf{Result}: This is the new product, artefact, process, hypothesis, theory, or other outcome. The outcome may contribute evidence in support of a known hypothesis, or a solution to a known problem. Alternatively, the result may itself {\em be} a new hypothesis or problem. The result may be

A further terminological clarification is warranted. The word \emph{creative} can be used to describe a ``creative output'', a ``creative person'', or even a ``creative method.'' On the understanding developed here, serendipity is only meaningfully attributed to a particular kind of process. It is not a property of a generated artefact (like novelty or usefulness), nor is it a system trait (like skill or autonomy).
% This is why we speak of potential for serendipity, and instances of serendipity.
Paul Andr{\'e} et al.~\citeyear{andre2009discovery} have previously proposed a two-part model of serendipity encompassing ``the chance encountering

and the degree to which the computer was responsible for coming up with this problem.
%
As Pease et al.\ \citeyearpar[p.~69]{pease2013discussion} remark, anomaly detection and outlier analysis are part of the standard machine learning toolkit -- but recognising \emph{new} patterns and defining \emph{new} problems is more ambitious \cite{von2003cybernetics}. Establishing complex

in the current paper. However, there are other underlying factors. Existing standards for assessing computational creativity have historically focused on product evaluations. \citeA{ritchie07} uses metrics that depend on observable properties of artefacts. He suggests ``typicality'', i.e., the extent to which an artefact belongs to a certain genre, and ``quality'' as atomic measures for more complex metrics, including ``novelty.'' Ritchie initially bases these metrics on human judgment, but points out that it may also be possible to compute them automatically. For instance, quality could be computed using a fitness score of the assessed artefacts, which should correlate highly with human-perceived quality. The typicality of produced artefacts can be calculated according to their similarity to the artefacts inspiring the generative process. Both fitness functions and distance metrics are subject to an ongoing debate in computational aesthetics. Section \ref{sec:evomusic} will return to these issues. In recent years, artefact-centred evaluations have increasingly been complemented by methods that consider process \cite{colton2008creativity,colton-assessingprogress} or a combination of product and process \cite{jordanous:12}. However, processes that arise outside of the control of the system (and ultimately, the

Although the notion of serendipity is process-focused, value is a crucial dimension of serendipity, and evaluations of an outcome (often an artefact) continue to be relevant. Furthermore, an ``embedded'' evaluation is required to effect the critical focus shift, that is, to notice that the trigger is interesting. Adapting qualitative artefact-oriented measures (like novelty) may be necessary in order to build systems that are capable of carrying out the necessary formative evaluation steps for serendipitous processing, as well as a final summative evaluation of the result.
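To make this concrete, the kind of computation gestured at above can be sketched as follows. This is our own illustrative reading, not Ritchie's formulation: \texttt{fitness} and \texttt{similarity} are hypothetical, domain-specific stand-ins, and novelty is crudely taken as distance from the inspiring set.

```python
def quality(artefact, fitness):
    """Proxy for human-perceived quality: a domain-specific fitness
    score in [0, 1].  `fitness` is a hypothetical stand-in."""
    return fitness(artefact)

def typicality(artefact, inspiring_set, similarity):
    """Similarity of the artefact to the artefacts that inspired the
    generative process, via a hypothetical pairwise measure in [0, 1]."""
    return max(similarity(artefact, a) for a in inspiring_set)

def novelty(artefact, inspiring_set, similarity):
    """One crude reading of novelty: distance from the inspiring set."""
    return 1.0 - typicality(artefact, inspiring_set, similarity)
```

How well such scores track human judgment depends entirely on the chosen fitness and similarity functions, which, as noted above, remain subjects of debate in computational aesthetics.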

developed, it may turn out to be of little value. Prior experience with a related problem could be informative, but could also hamper innovation. Similarly, multiple tasks, influences, and contexts can help to foster an inventive frame of mind, but they may also be distractions. Figure \ref{fig:1b} removes these unserendipitous paths to focus on

be based on observations of the outside world, or it may be a purely computational process. In any case, its products are passed on to the next stage. After running this data through a feedback loop, certain aspects of the data are singled out, and marked up as ``interesting.'' Note that this designation need not arise all at once: rather, it is the outcome of a \emph{reflective process}. In the implementation envisioned here, this process makes use of two primary functions: $p_1$, which notices particular aspects of the data, and $p_2$, which offers reflections about those aspects. Together, these functions build up a ``feedback object,'' $T^{\star}$, which consists of the original data and further metadata. This is passed on to an \emph{experimental process}, which has the task of verifying that the data is indeed interesting, and determining what it may be useful for. This is again an iterative process, relying on functions $p^{\prime}_1$ and $p^{\prime}_2$, which build a contextual understanding of the trigger by devising experiments and assessing their results. Once the data's potential implications or applications have been found, a result is generated, which is passed to a final \emph{evaluation process}, and, from there, to applications.
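A minimal sketch of this architecture follows, under our own naming assumptions: \texttt{p1}, \texttt{p2}, \texttt{q1}, and \texttt{q2} stand in for $p_1$, $p_2$, $p^{\prime}_1$, and $p^{\prime}_2$, the feedback object $T^{\star}$ is represented as a dictionary of data plus metadata, and all of the functions are hypothetical placeholders.

```python
def reflective_process(data, p1, p2, rounds=3):
    """Iteratively notice aspects of the data (p1) and reflect on
    them (p2), building the feedback object T* = data + metadata."""
    t_star = {"data": data, "metadata": []}
    for _ in range(rounds):
        aspect = p1(t_star)                     # notice an aspect of the data
        t_star["metadata"].append(p2(aspect))   # reflect, accumulate metadata
    return t_star

def experimental_process(t_star, q1, q2):
    """Probe whether the trigger is indeed interesting, and what it
    may be useful for, by devising experiments (q1) and assessing
    their results (q2)."""
    experiments = q1(t_star)
    return [q2(e) for e in experiments]

def pipeline(data, p1, p2, q1, q2, evaluate):
    """Reflective process -> experimental process -> final evaluation."""
    t_star = reflective_process(data, p1, p2)
    findings = experimental_process(t_star, q1, q2)
    return evaluate(findings)
```

The sketch only fixes the control flow; everything interesting lives in the four plugged-in functions, which is consistent with the point above that the designation of ``interesting'' emerges from iteration rather than from a single call.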

or achieve. If the system's next steps could be anticipated, we would not say that the behaviour was serendipitous. In other words, serendipity does not adhere to one specific part of the system, but to its operations as a whole. Note that although Figures \ref{fig:1b} and \ref{fig:1c} treat the case of successful serendipity, as indicated in Figure \ref{fig:1a}, each step is fallible, as is the system as a whole. Thus, for example, a trigger that has been initially tagged as interesting may prove to be fruitless. Similarly, a system that implements all of the steps depicted in Figure \ref{fig:1c}, but that never achieves results of value, does not have potential for serendipity. However, a system that only produces results of high value would also be suspect, since it would indicate a tight coupling between trigger and outcome. Fallibility is a ``meta-criterion'' that transcends the criteria from Section \ref{sec:by-example}. Summarising, we propose the following:
\begin{ndef}
\emph{(1) Within a system with a prepared mind, a previously uninteresting serendipity trigger arises due to circumstances that the system does not control and cannot predict, and is classified as interesting by the system; and,}
\emph{(2) The system uses the trigger and prior preparation, together with relevant computational processing, networking, and experimental techniques, to obtain a novel result that is evaluated favourably by the system or by external sources.}
\end{ndef}

culminating in a method for evaluating computational serendipity in Section \ref{specs-overview}.
%
Along with clear criteria, it is important to clearly delineate the scope of the system being evaluated, and the position of the evaluator (recalling that ``embedded evaluation'' is a requisite part of a serendipitous system). For example, a standard spell-checking program might suggest a substitution that the user deems especially fortuitous, and we might agree that serendipity has occurred; but we would not locate the potential for serendipity in the spell-checker itself, but rather in the ``cyborg'' system comprised of the user plus the machine and its software. \citeA{pease2013discussion} used an earlier variant of the SPECS criteria to analyse three examples of potentially serendipitous behaviour:

very serendipitous.'' Evaluating individual threads (as members of a larger population) would yield varied results, which emphasises the importance of system scoping, mentioned above. However, it would be inaccurate to simply say that successful threads are serendipitous and unsuccessful threads are unserendipitous, since that ignores contributing dimensions apart from value. At the moment, individual threads are effectively equivalent regarding chance, curiosity, and sagacity; a thread-by-thread analysis should be deferred until there is more to say. This is related to other changes that would improve the global serendipity score, as the following qualitative factor analysis indicates.

As discussed in Section \ref{sec:related}, recommender systems are one of the primary contexts in computing where serendipity is currently discussed. Serendipity, for current recommender systems, means suggesting items that are likely to introduce the user to new ideas that are unexpected, but close to what the user is already interested in. As we noted, these systems mostly focus on supporting \emph{discovery} for the user -- but some architectures also seem to take account of \emph{invention} of new methods for making recommendations, e.g.~using Bayesian methods, as surveyed in \citeNP{shengbo-guo-thesis}. In light of our working definition of serendipity, we need to distinguish serendipity on the user side from serendipity in the system itself. Current recommendation techniques focused on serendipity associate less popular items with high unexpectedness \cite{Herlocker2004,Lu2012}, and use clustering to discover latent structures in the search space, e.g., partitioning users into clusters of common interests, or clustering users and domain objects \cite{Kamahara2005,Onuma2009,Zhang2011}. But even in the Bayesian case, the system has limited autonomy. A case for giving more autonomy to recommender systems can be made, especially in complex and rapidly evolving domains where hand-tuning is cost-intensive or infeasible. With this challenge in mind, we ask how serendipity could be achieved on the system side. In terms of our model, current systems have at least the makings of a \textbf{prepared mind}, comprising both a user model and a domain model, both of which can be updated dynamically. User behaviour (e.g.~following certain recommendations) or changes to the domain (e.g.~adding a new product) may serve as a potential \textbf{trigger} that could ultimately cause the system to discover a new way to make recommendations in the future.
In the current generation of systems that seek to induce serendipity for the user, the system provides a trigger for the user's focus shift by presenting recommendations that are neither too close, nor too far away from what the user already knows; here, it is the other way around. Note, however, that it is unexpected behaviour in aggregate, rather than a one-off event, that is likely to provide grounds for a \textbf{focus shift}. A \textbf{bridge} to a new kind of recommendation could be created by looking at exceptional patterns as they appear over time. For instance, new elements may have been introduced into the domain that do not cluster well, a user may suddenly indicate a strong preference towards an item that does not fit their preference history, or clusters may appear in the user model that do not have obvious connections between them. A new recommendation strategy that addresses the organisation's goals would be a serendipitous \textbf{result} for the system. The system has only imperfect knowledge of user preferences and interests. At least relative to current recommender systems, the
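One way such aggregate triggers might be detected can be sketched as follows. This is our own construction, not a description of any existing recommender: items whose vectors remain far from every cluster centroid across several observation windows are flagged as focus-shift candidates, so that a one-off outlier is ignored but persistent misfits surface.

```python
import math

def nearest_centroid_distance(item, centroids):
    """Euclidean distance from an item vector to its closest centroid."""
    return min(math.dist(item, c) for c in centroids)

def focus_shift_candidates(windows, centroids, threshold, min_windows=2):
    """Flag item ids that fail to cluster well (distance > threshold)
    in at least `min_windows` observation windows -- unexpected
    behaviour in aggregate, rather than a one-off event."""
    misfit_counts = {}
    for window in windows:               # each window: {item_id: vector}
        for item_id, vec in window.items():
            if nearest_centroid_distance(vec, centroids) > threshold:
                misfit_counts[item_id] = misfit_counts.get(item_id, 0) + 1
    return {i for i, n in misfit_counts.items() if n >= min_windows}
```

What the system then does with the flagged items -- devising a bridge towards a new recommendation strategy -- is the harder, open part of the problem discussed in the text.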

of potentially high value, so that such a system is ``potentially highly serendipitous.'' Recommender systems have to cope with a \textbf{dynamic world} of changing user preferences and a changing collection of items to recommend. A dynamic environment which exhibits some degree of regularity represents a precondition for useful A/B testing. The system's \textbf{multiple contexts} include the user model, the domain model, as well as an evolving model of its own organisation. A system matching the description here would have \textbf{multiple tasks}: making useful recommendations, generating new experiments to learn about users, and improving its models. In order to make effective decisions, a system would have to avail itself of \textbf{multiple influences} related to experimental design, psychology, and domain understanding. Pathways for user contributions that go beyond answers to the question ``Was this recommendation helpful?'' could be one way to make the relevant expertise available.
\subsection{Case Study: Automated flowchart assembly} \label{sec:flowchartassembly}
Here we consider the design of a contemporary experiment with the {\sf FloWr} flowcharting framework \cite{colton-flowcharting}. {\sf FloWr} is a tool for creating and running flowcharts, built of small modules called ProcessNodes. For the day-to-day user, {\sf FloWr} functions as a visual programming environment. However, it can also be invoked programmatically, on the Java Virtual Machine, or with any language using a new web API. The goals of {\sf FloWr} are both to be a user-friendly tool for co-creativity, and to be an autonomous \emph{Flowchart Writer}. Our experiment targets the latter scenario,

%
Inputs and outputs have constraints. For instance, the {\tt WordSenseCategoriser} node has a {\tt stringsToCategorise} parameter, which needs to be seeded with an ArrayList of strings. The node produces useful output only when these strings can be parsed as a space-separated list of words. Similarly, the node's {\tt requiredSense} parameter needs to be seeded with a string representing exactly one of the 57 British National Corpus Part of Speech tags. Given constraints of this nature, the first challenge in automated flowchart assembly is to match inputs to outputs correctly, and to make sure that all required inputs are satisfied. In our current experiment, the system's potential \textbf{triggers} result from random, but constrained, trial and error with flowchart assembly. Some valid combinations of nodes will produce results, and some will not. Due to the dynamically changing environment (e.g., updates to data sources like Twitter) some flowcharts that did not produce results earlier may
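The input/output matching step can be illustrated with a toy compatibility check. This is a sketch under our own assumptions, not {\sf FloWr}'s actual API; the node names, parameter names, and type labels below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class NodeSpec:
    """Hypothetical summary of a ProcessNode's interface."""
    name: str
    inputs: dict = field(default_factory=dict)   # input name -> required type
    outputs: dict = field(default_factory=dict)  # output name -> produced type

def can_feed(upstream: NodeSpec, downstream: NodeSpec) -> bool:
    """True if every required input of `downstream` can be satisfied
    by some output of `upstream` with a matching type label."""
    produced = set(upstream.outputs.values())
    return all(t in produced for t in downstream.inputs.values())
```

A real assembler would also need to respect value-level constraints (such as the space-separated word lists and the fixed tag vocabulary mentioned above), which type labels alone cannot capture; checking those requires actually running the candidate flowchart.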

%
The system's \textbf{prepared mind} lies in a distributed knowledge base provided by ProcessNodes, showing the constraints on their inputs and outputs; and in the global history of successful and unsuccessful combinations.
%
The system will not try combinations that it knows cannot produce results, but it will try novel combinations and may retry earlier

collection of nodes for which no known working combination existed into a working flowchart is an occasion for a \textbf{focus shift}: what made this particular combination work? Is there a pattern that could be exploited in the future? It may be that no broader pattern can be found, other than the fact that the combination works.
%
Successful combinations and any further inferences are stored, and referred to in future runs. The \textbf{bridge} to any new results is found by informed trial and error, building on previous outcomes.
%
In these early experiments, the basic \textbf{result} the system is aiming to achieve is simply to generate a new combination of nodes that can fit together and that generates non-empty output. Subsequent versions of the system may have more detailed evaluation functions, setting a higher bar. For example, a future version of the system could be tuned to search for flowcharts that generate poetry \cite{corneli2015computational}. The \textbf{chance} of finding a novel successful flowchart in any given sample is low. Compared to human users of {\sf FloWr}, the search process is exceptionally \textbf{curious}, since it tries many combinations programmatically. However, remembering viable combinations and avoiding combinations that are known not to work offers only a modest degree of \textbf{sagacity}. At the moment, the system's criterion for attributing \textbf{value} is simply that the combination of nodes generates non-empty output; however, a third party is not likely to judge these combinations as useful.
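The informed trial and error described here can be sketched as follows. This is our own simplification, not the experiment's actual code: \texttt{run\_flowchart} is a hypothetical stand-in for executing a candidate combination, and the shared \texttt{known\_failures} set plays the role of the global history.

```python
import random

def informed_search(nodes, run_flowchart, known_failures, trials=100, size=2):
    """Randomly sample node combinations, skipping those the global
    history already marks as fruitless, and recording new outcomes."""
    successes = []
    for _ in range(trials):
        combo = tuple(sorted(random.sample(nodes, size)))
        if combo in known_failures:
            continue                  # prepared mind: do not retry known failures
        if run_flowchart(combo):      # non-empty output counts as success
            successes.append(combo)
        else:
            known_failures.add(combo)
    return successes
```

Because the environment is dynamic, a fuller version would occasionally retry entries in \texttt{known\_failures}, since flowcharts that failed earlier may start producing results.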
The associated likelihood score is $\mathit{low}\times\mathit{low}\times\mathit{high}$, which is relatively favourable. However, until there is a more discriminating way to judge value, the attribution of serendipity to any particular run may be premature. One route would be to attribute value to explanatory heuristics, rather than generated texts; this would require increased sagacity on the part of the system as well. The \textbf{dynamic world} that the system operates in changes in two ways: first, in the straightforward sense that some of the input

points to long-term considerations that go beyond the unique  serendipitous event. How ``curious'' should these systems be? One  obvious criterion is that short-term value should be allowed to  suffer as long as expected value is still higher. The symmetry  between serendipity on the user side, and serendipity on the system  side might be exploited. Current systems seek to induce serendipity  by making use of implicit connections between clusters, resulting in  an update to the user's conception of the item space. As in the  example of {\sf SerenA} discussed in Section \ref{sec:related}, in  current systems the user shares a significant part of the workload  when forming the bridge, even when triggered by the system. Users  might be given the explicit task of triggering serendipity on the  system-side, as well.  \item The flowchart assembly process would need more stringent, and  more meaningful, criteria for value before third-party observers         

\begin{abstract}
Most prior work that deals with serendipity in a computing context focuses on computational ``discovery''; we argue that serendipity also includes an important ``invention'' aspect. Building on a survey describing serendipitous discovery and invention in science and technology, we advance a definition of serendipity and an accompanying model that can be used to evaluate the potential for serendipity in computational systems. The model adapts existing recommendations for evaluating computational creativity. It is applied in three case studies that evaluate the serendipity of existing and hypothetical systems in the context of evolutionary computing, recommender systems, and automated programming. From this analysis, we extract recommendations for practitioners working with computational serendipity, and outline future directions for research. We argue that patterns of serendipity can be used in the design of computational systems, that there is much to be gained by building an awareness of serendipity into these systems, and that serendipity is critical for autonomous systems.
\\[.3cm]
\keywords{serendipity, evaluation, computational creativity, machine ethics}

Year = {1995}}
@incollection{von2003cybernetics,
Author = {von Foerster, Heinz},
Booktitle = {{U}nderstanding {U}nderstanding},
Pages = {283--286},
Publisher = {Springer},

Title = {{U}nderstanding {U}nderstanding: {E}ssays on {C}ybernetics and {C}ognition},
Year = {2003}}
@book{taleb2012antifragile,
Author = {Taleb, Nassim},
Publisher = {Random House},
Title = {Antifragile: {T}hings {T}hat {G}ain from {D}isorder},
Year = {2012}}
@article{foster2003serendipity,
Author = {Foster, Allen and Ford, Nigel},
Journal = {Journal of Documentation},

author={Jordanous, Anna Katerina},  year={2012},  school={University of Sussex}  }  @inproceedings{powers2005deontological,  title={Deontological machine ethics},  author={Powers, T},  booktitle={2005 {AAAI} {F}all {S}ymposium on {M}achine {E}thics},  pages={79--86},  year={2005}  }  @book{boyd2010armadillo,  title={Armadillo: a novel},  author={Boyd, William},  year={1998},  publisher={Hamish Hamilton}  }         

% TITLE INFORMATION
\title{Modelling serendipity in a computational context\thanks{This research was supported by the Engineering and Physical Sciences Research Council through grants EP/L00206X, EP/J004049, and EP/L015846/1, and by the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open Grant numbers: 611553 (COINVENT) and 611560 (WHIM).}}
\author{Joseph Corneli \and Christian Guckelsberger \and Anna Jordanous \and Alison Pease \and Simon Colton}
\institute{Department of Computing, Goldsmiths College, University of London\\
%% \mailsa\\
%% \mailsb\\
% \url{http://ccg.doc.gold.ac.uk/}
\and
School of Computing, University of Dundee
\and
School of Computing, University of Kent}
\date{\today}
\titlerunning{Modelling serendipity in a computational context}
\authorrunning{~}
\maketitle
\input{abstract.tex}

\input{12discussion}
\input{13conclusion}
%% \bigskip
%% \noindent \textbf{Acknowledgement.}
%% We appreciate the effort of our anonymous reviewers.
\bibliographystyle{spbasic}

\draw [-latex] (poet.east) -- (poem.west);
% ``critic listens to poem and offers feedback''
\node[ellipse, draw, right=9mm of poem.east,text width=1.3cm] (critic) {\emph{feedback}};
\draw [-latex] (poem.east) -- (critic.west);
\node[single, above=8mm of critic.north,text width=1.4cm] (experience) {\emph{reflective\\ process}};
\node[draw,diamond,inner sep =.3mm, above right=4mm and 3mm of critic] (comment) {\raisebox{2mm}{$p\vphantom{^{\prime}}_1$}} ;

\node[below=2cm of discovery] (invention) {\textbf{\emph{Invention:}}};
% ``poet integrates feedback''
\node[ellipse, draw, right=12mm of invention.east,text width=1.71cm] (integrator) {\emph{verification}};
% nonprinting point to use to bend curve
\coordinate[above left=2mm and 9mm of integrator] (mid2);