An active research community investigating computational models of serendipity exists in the field of information retrieval, and specifically in recommender systems \cite{Toms2000}. In this domain, \citeA{Herlocker2004} and \citeA{McNee2006} view serendipity as an important factor for user satisfaction, alongside accuracy and diversity. Definitions of serendipity in recommendation variously require the system to deliver an \emph{unexpected} and \emph{useful}, \emph{interesting}, \emph{attractive} or \emph{relevant} item.
% \cite{Herlocker2004} \cite{Lu2012}, \cite{Ge2010}.
Definitions also differ as to the requirement of \emph{novelty}; \citeA{Adamopoulos2011}, for example, describe systems that suggest items that may already be known to the user, but are still unexpected in the current context. While standardized measures such as the $F_1$-score (the harmonic mean of precision and recall) or the (R)MSE are used to determine the \emph{accuracy} of a recommendation, in terms of how closely the recommended items match those the user is known, from their history, to prefer, there is as yet no commonly agreed measure of serendipity, although several have been proposed \cite{Murakami2008, Adamopoulos2011, McCay-Peet2011, iaquinta2010can}. In terms of our model, these systems focus mainly on producing a \emph{serendipity trigger} for the user, but they include aspects of user modeling which could bring other elements into play, as we will discuss in Section \ref{sec:computational-serendipity}. Paul Andr{\'e} et al.~\citeyear{andre2009discovery} have examined