Commit eb349b82b0f490aa4c8e4bcf6e5f8bf7178d59f5
Author: Joe Corneli (about 9 years ago)
Message: move some stuff from cc-intro to related work

Deletions:

If a pattern is found, it is used to \textbf{bridge} between items that are known and valuable to the user and those that are potentially unexpected. For example, the approach of \citeA{Sugiyama2011} connects users with divergent interests, while \citeA{Onuma2009} assign stronger weights to items that bridge between topical clusters. Recommender systems have to cope with a \textbf{dynamic world} of user preferences and new items that are introduced to the system. The system's imperfect knowledge of the user's preferences and interests represents perhaps the strongest dimension of \textbf{chance}. Determining the \textbf{value} of an item, both in terms of its usefulness to the user and its unexpectedness, is of paramount importance. While standardized measures such as the $F_1$-score or the (R)MSE are used to evaluate the accuracy of recommendations against preferred items in the user's history, there is no common agreement on a measure for serendipity yet \cite{Murakami2008, Adamopoulos2011, McCay-Peet2011}.
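To make the bridging idea above concrete, here is a minimal illustrative sketch in Python. It is our own toy construction, not the algorithm of \citeA{Onuma2009}: we assume items form a graph with known topical cluster labels, and we up-weight items whose neighbors span many clusters, using the entropy of the neighbors' cluster distribution as the bridging score. All names and the scoring rule are assumptions made for illustration.

\begin{verbatim}
import math
from collections import Counter

def bridging_weight(item, neighbors, cluster_of):
    """Hypothetical bridging score: entropy of the cluster
    labels among an item's graph neighbors. Items whose
    neighbors span many topical clusters score higher
    (illustrative rule, not Onuma et al.'s method)."""
    counts = Counter(cluster_of[n] for n in neighbors[item])
    total = sum(counts.values())
    if total == 0:
        return 0.0
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probs)

# Toy data: item "a" bridges two clusters, "b" stays in one.
neighbors = {"a": ["x", "y"], "b": ["x", "z"]}
cluster_of = {"x": 0, "y": 1, "z": 0}
print(bridging_weight("a", neighbors, cluster_of))  # ~0.69
print(bridging_weight("b", neighbors, cluster_of))  # 0.0
\end{verbatim}

Under such a rule, items linking otherwise separate topical clusters are recommended more readily, mirroring the intuition that cross-cluster items are more likely to surprise.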

Additions:

An active research community investigating computational models of serendipity exists in the field of information retrieval, and specifically in recommender systems \cite{Toms2000}. In this domain, \citeA{Herlocker2004} and \citeA{McNee2006} view serendipity as an important factor for user satisfaction, next to accuracy and diversity. Serendipitous recommendations variously require the system to deliver an \emph{unexpected} and \emph{useful} \cite{Lu2012}, \emph{interesting} \cite{Herlocker2004}, \emph{attractive}, or \emph{relevant} item \cite{Ge2010}.
%% Recommendations are typically meant to help address the user's difficulty in finding items that meet his or her interests or demands within a large and potentially unobservable search space. The end user can also be passive, with items suggested to support other stakeholders' goals, e.g.\ to increase sales.
Definitions differ as to the requirement of \emph{novelty}; \citeA{Adamopoulos2011}, for example, describe systems that suggest items that may already be known, but are still unexpected in the current context. While standardized measures such as the $F_1$-score or the (R)MSE are used to evaluate the accuracy of recommendations against preferred items in the user's history, there is no common agreement on a measure for serendipity yet, although there are several proposals \cite{Murakami2008, Adamopoulos2011, McCay-Peet2011}. In terms of our model, these systems focus mainly on producing a serendipity trigger, but they include aspects of user modeling which could bring other elements into play. \citeA{andre2009discovery} have examined serendipity from a design perspective. These authors also propose a
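Since both passages appeal to these standard accuracy measures without stating them, we record the usual definitions for reference; the notation ($\hat{r}_{ui}$ for the predicted and $r_{ui}$ for the observed rating of user $u$ on item $i$, over a test set $T$) is ours and does not appear in the cited works:

\[
F_1 \;=\; \frac{2\cdot \mathit{precision}\cdot \mathit{recall}}{\mathit{precision}+\mathit{recall}},
\qquad
\mathit{RMSE} \;=\; \sqrt{\frac{1}{|T|} \sum_{(u,i)\in T} \bigl(\hat{r}_{ui}-r_{ui}\bigr)^{2}},
\]

with the MSE obtained by omitting the square root.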