Prepared mind. Participating systems need to be able to follow the Workshop protocol. The listening and questions stages of the protocol correspond to \(p\) and \(p^{\prime}\) in our model of serendipity. The corresponding “comment generator” and “feedback integrator” modules in the architectural sketch represent the primary points of interface between author and critic. In principle, these modules need to be prepared to deal, more or less thoughtfully, with any text, and in turn with any comment on that text. Certain limits may be agreed in advance, e.g. as to genre or length in the case of texts; ground rules may constrain the types of comments that may be made. A participating system – particularly one with prior experience in the Workshop – will have a catalogue of outstanding problems, unresolved or only partially resolved (denoted “X” in Figure \ref{fig:generative-diagram}). Embodied in code, these problems drive comments, questions, and other behaviour – and they may be addressed in unexpected ways.
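As a minimal sketch of this architecture – in Python, with every name invented here for illustration rather than taken from any existing implementation – a participating system might couple its comment generator and feedback integrator to the catalogue of open problems:

\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class Problem:
    """An outstanding problem (an "X" in the catalogue)."""
    description: str
    resolved: bool = False

@dataclass
class Participant:
    """A Workshop participant with a prepared mind."""
    problems: list = field(default_factory=list)

    def generate_comments(self, text):
        # Listening stage (p): comment on any text, guided by
        # the catalogue of open problems.
        return ["Does this text bear on: " + x.description + "?"
                for x in self.problems if not x.resolved]

    def integrate_feedback(self, comments):
        # Questions stage (p'): fold comments back into the
        # catalogue as new problems to pursue.
        for c in comments:
            self.problems.append(Problem(description=c))
\end{verbatim}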

Serendipity triggers. Although the poem is under the control of the initial generative subsystem, it is not under the control of the listening subsystem. The listening subsystem expects some poem, but it does not know what poem to expect. In this sense, the poem constitutes a serendipity trigger \(T\), not only for the listening subsystem, but for the Workshop as a whole. To expand on this point, note that there may be several listeners, each offering its own feedback and listening to the feedback presented by the others (which, again, is outside of its direct control). This creates further potential for serendipity, since each listener can learn what others see in the poem. More formally, in this case \(T^{\star}\) may be seen as an evolving vector with shared state, viewed and handled from different perspectives. With multiple agents involved in the discussion, the “comment generator” component would expand to contain its own feedback loops.
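One way to picture the evolving trigger \(T^{\star}\) – again a purely illustrative sketch, with invented names – is as shared state that each listener views and extends from its own perspective:

\begin{verbatim}
class SharedTrigger:
    """The trigger T*: a poem plus the accumulating feedback,
    shared by all listeners but controlled by none of them."""

    def __init__(self, poem):
        self.poem = poem
        self.feedback = []  # (listener_id, comment) pairs, in order

    def view(self):
        # What any one listener sees: the poem together with
        # everyone's feedback so far.
        return (self.poem, tuple(self.feedback))

    def extend(self, listener_id, comment):
        # Each new comment changes T* for every other listener.
        self.feedback.append((listener_id, comment))
\end{verbatim}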

Bridge. Feedback on portions of the poem may lead the system to identify new problems, and possibly new types of problems that it had not considered before. This sort of system extension is quite typical when a human programmer is involved. Here, however, we are interested in the possibility of agents building new poetic concepts without outside intervention, starting from some basic concepts and abilities related to poetry (e.g. definitions of words, valence of sentiments, metre, repetition, density) and to code (e.g. the data, functions, and macros in which the poetic concepts and Workshop protocols are embodied). Some notable early experiments with concept invention have been fraught with questions about autonomy \cite{ritchie1984case,lenat1984and}. Later work presented a system that was convincingly autonomous: it was able to generate interesting novel conjectures that surprised its author. However, note that this system was not convincingly serendipitous: “we had to willingly make the system less effective to encourage incidents onto which we might project the word serendipity.”
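To make “concepts embodied in code” concrete – under an invented representation that is only one of many possibilities – basic poetic concepts could be stored as named detectors, so that inventing a concept amounts to adding a new entry:

\begin{verbatim}
# Two basic poetic concepts as detectors over a poem's lines.
POETIC_CONCEPTS = {
    "repetition": lambda lines: any(lines.count(l) > 1
                                    for l in lines),
    "density": lambda lines: (sum(len(l.split()) for l in lines)
                              / max(len(lines), 1)),
}

def invent_concept(name, detector):
    # A new concept enters the system as a named detector that
    # later listening passes can apply to any poem.
    POETIC_CONCEPTS[name] = detector
\end{verbatim}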

One cognitively inspired hypothesis is that the development of new concepts is closely related to the development of new sensory experiences \cite{milan2013kiki}. Feedback on the poem – simply describing what is in the poem from several different points of view – can be used to define new problems for the system to solve. One of the functions of the questions step, corresponding to \(p^{\prime}\) in our formalism, is to give the poet the opportunity to enquire how different pieces of feedback fit together, and to learn more about where they come from. This reconstructive process may steadily approach the ideal case – familiar to humans – of relating to the sentiment expressed by the poem as a whole \cite[p. 209]{bergson1983creative}.
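A hypothetical sketch of these two functions – turning descriptive feedback into new problems, and grouping related feedback so the poet can ask how the pieces fit together (grouping by referenced line is just one simple possibility):

\begin{verbatim}
from collections import defaultdict

def feedback_to_problems(feedback):
    # Each described-but-unexplained feature becomes a problem.
    return ["Explain or exploit: " + f["comment"]
            for f in feedback]

def questions_step(feedback):
    # p': gather comments that touch the same line, so the poet
    # can ask how they fit together and where they come from.
    by_line = defaultdict(list)
    for f in feedback:
        by_line[f["line"]].append(f["comment"])
    return {line: cs for line, cs in by_line.items()
            if len(cs) > 1}
\end{verbatim}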

Result. In the most straightforward case, the poet would simply make changes to the draft poem that seem to improve it in some way. For example, the poet might remove or alter material that elicited a negative response from a critic. The system may then proceed to update its modules related to poetry generation. It may also update its own feedback modules, after reflecting on questions like: “How might the critic have noticed that feature in my poem?”
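The following sketch, with a feedback format invented purely for illustration, revises a draft by dropping lines that drew negative responses and records a reflective question for each comment received:

\begin{verbatim}
def result_step(poem_lines, feedback, open_problems):
    # Assumed feedback format (invented for illustration):
    # {"line": int, "comment": str, "valence": float}
    revised = [l for i, l in enumerate(poem_lines)
               if not any(f["line"] == i and f["valence"] < 0
                          for f in feedback)]
    # Reflection: "How might the critic have noticed that
    # feature in my poem?"
    for f in feedback:
        open_problems.append("How might a critic notice: "
                             + f["comment"] + "?")
    return revised
\end{verbatim}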

Likelihood scores and potential value. Assuming the poems presented to the system are not too repetitive, the chance of encountering any given serendipity trigger would be small. It should be straightforward for a critic to detect a known feature, like metre or rhyme, but at least moderately difficult to notice a novel poetic idea. There is some nuance here: whenever the system learns a new concept, the low-hanging fruit in the pool of new concepts is used up, but the system’s perceptiveness simultaneously increases. The chance that a newly observed feature will result in usable code seems relatively high, although only some of these new ideas will prove to have lasting value. Combining the chances of encountering a trigger, noticing the feature, and obtaining a usable result, our likelihood score would be \(\mathit{low}\times\mathit{medium}\times\mathit{high}\), or fairly low overall, and value would be varied, with at least some high-valued cases meriting the description “highly serendipitous.”
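Purely to make the combination concrete – the numeric values below are arbitrary assignments, not part of the model – one reading of the score is:

\begin{verbatim}
LEVELS = {"low": 0.1, "medium": 0.5, "high": 0.9}

# trigger encountered x feature noticed x usable code results
likelihood = LEVELS["low"] * LEVELS["medium"] * LEVELS["high"]
print(likelihood)  # 0.045: fairly low overall
\end{verbatim}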

Environmental factors. The system would set up its own internal dynamics, but it could also provide an interface for human poets to share their poetry and critical remarks. There is one primary context, the Workshop, shared by all participants. The primary tasks envisaged in the system design are poetry generation, comment generation, and code generation. Although these are different tasks, they may have similar features (e.g. they all present opportunities to learn from feedback). Influences could be multiple and highly varied, including many different kinds of poetry and various approaches from NLP.