Joe Corneli: draft of recommender systems stuff (about 9 years ago)

Commit id: 3e972cf81c31e559c52314cafafc9aeecc20948a


       

@inproceedings{abbassi2009getting,
  title = {Getting recommender systems to think outside the box},
  author = {Abbassi, Zeinab and Amer-Yahia, Sihem and Lakshmanan, Laks VS and Vassilvitskii, Sergei and Yu, Cong},
  booktitle = {Proceedings of the third ACM conference on Recommender systems},
  pages = {285--288},
  year = {2009},
  organization = {ACM}
}

@inproceedings{mendoncca2008unsought,
  title = {Unsought innovation: serendipity in organizations},
  author = {Mendon{\c{c}}a, Sandro and Cunha, Miguel Pina E and Clegg, Stewart R},

@phdthesis{shengbo-guo-thesis,
  title = {{B}ayesian {R}ecommender {S}ystems: {M}odels and {A}lgorithms},
  author = {Guo, Shengbo},
  year = {2011},
  school = {The Australian National University}
}

@phdthesis{eric-nichols-thesis,
  title = {{M}usicat: {A} {C}omputer {M}odel of {M}usical {L}istening and {A}nalogy-{M}aking},
  author = {Eric Paul Nichols},

  title = {Gum-Elastic and its Varieties, with a Detailed Account of its Applications and Uses, and of the Discovery of Vulcanization},
  year = {1855}
}

@article{tce-postits,
  author = {Flavell-While, Claudia},
  journal = {The Chemical Engineer},
  month = {August},
  pages = {53--55},
  title = {{S}pencer {S}ilver and {A}rthur {F}ry: the chemist and the tinkerer who created the {P}ost-it {N}ote},
  year = {2012}
}

@incollection{bex-generalising,
  author = {Bex, Floris and Lawrence, John and Reed, Chris},
  booktitle = {Fifth International Conference on Computational Models of Argument},

  bdsk-url-1 = {http://www.emeraldinsight.com/10.1108/00220411211256030},
  bdsk-url-2 = {http://dx.doi.org/10.1108/00220411211256030}
}

@incollection{iaquinta2010can,
  title = {Can a recommender system induce serendipitous encounters?},
  author = {Iaquinta, Leo and Semeraro, Giovanni and de Gemmis, Marco and Lops, Pasquale and Molino, Piero},
  booktitle = {E-commerce},
  editor = {Kang, Kyeong},
  year = {2010},
  publisher = {InTech}
}

@article{Iaquinta2008,
  author = {Iaquinta, L. and Gemmis, M. and Lops, P. and Semeraro, G. and Filannino, M. and Molino, P.},
  doi = {10.1109/HIS.2008.25},

\subsection{Prior partial examples}

\paragraph{{[}To add: Jazz.{]}}

\citeA{pease2013discussion} used a somewhat different version of the
SPECS criteria to discuss three examples, related to dynamic
investigation problems, model generation, and poetry flowcharts.

\paragraph{{[}To add: HR.{]}}

\paragraph{Recommender systems.}
As discussed in Section \ref{sec:related}, recommender systems are one
of the primary contexts in computing where serendipity is seen to play
a role. As we noted, these systems mostly focus on discovery.
Nevertheless, certain architectures that also take account of
invention may match the criteria described by our model. We draw on
the observation that recommender systems not only aim to
\emph{stimulate} serendipitous discovery for the user: they also have
the task of \emph{simulating} when this is likely to occur. A
recommendation is typically provided when the system suspects that an
item is likely to introduce ideas that are close to what the user
knows, but that will be unexpected. In other words, the system aims to
stimulate serendipity for the user. For example, a museum recommender
service might suggest a colourful medieval painting to a user who
seems to like colourful paintings by the modern artist Keith Haring.
User behaviour (e.g.~following up on these recommendations) is outside
of the direct control of the system and may serve as a
\textbf{serendipity trigger}, and change the way it makes
recommendations in the future. The system has a \textbf{prepared
mind}, including both a \emph{user model} and a \emph{domain model},
both of which can be updated dynamically.
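As a purely illustrative sketch (the item names, feature values,
similarity measure and threshold below are invented for this example
and are not drawn from any cited system), such a prepared mind might
be rendered in Python as a domain model plus a user model, with
recommendations proposed for unknown items that lie close to what the
user already likes:

\begin{verbatim}
def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def recommend(domain, liked, known, threshold=0.9):
    """Suggest items the user does not yet know about, but which lie
    close (in feature space) to items he or she already likes."""
    scored = []
    for item, feats in domain.items():
        if item in known:
            continue
        score = max((cosine(feats, domain[l]) for l in liked), default=0.0)
        if score >= threshold:
            scored.append((score, item))
    return [item for _, item in sorted(scored, reverse=True)]

# Toy museum example (features: [colourfulness, figurativeness]).
domain = {"haring_untitled":    [0.90, 0.80],
          "tres_riches_heures": [0.95, 0.90],   # a colourful medieval work
          "rothko_no_14":       [0.70, 0.05]}
print(recommend(domain, liked={"haring_untitled"}, known={"haring_untitled"}))
# -> ['tres_riches_heures']
\end{verbatim}

The only point of the sketch is that the domain model and the user
model are separate structures, both of which the system can update as
new items and new behaviour arrive.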
The connections through which recommendations are made usually happen
when the system notices that elements of the domain have something in
common via clustering or faceting. A \textbf{bridge} to a new kind of
recommendation may be found if new elements are introduced into the
domain which do not cluster well, or if the user appears to know about
different clusters that do not have obvious connections between them.
The intended outcome of recommendations depends on the organisational
mission, e.g.~to make money, to provide a good user experience, etc.;
at the system level, the serendipitous \textbf{result} would be
learning a new approach that helps to address these goals better.

From the perspective of our model, \textbf{chance} will only have a
significant role if the system has the capacity to learn from user
behaviour. Indeed, Bayesian methods are used in contemporary
recommender systems (surveyed in Chapter 3 of
\citeNP{shengbo-guo-thesis}). Combined with the ability to learn,
\textbf{curiosity} could be described as the urge to make
``outside-the-box''\footnote{\citeA{abbassi2009getting}.}
recommendations specifically for the purposes of learning more about
users. The typical commercial perspective on recommendations is
related to the process of ``conversion'' -- turning recommendations
into clicks and clicks into purchases. Measures of \textbf{sagacity}
would relate to the system's ability to draw inferences from user
behaviour to update the recommendation model. For example, the system
might do A/B testing to decide how novel recommendation strategies
influence conversion. The \textbf{value} of new recommendation
strategies can be measured in terms of traditional business metrics or
other organisational objectives.
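To make the roles of learning, curiosity and sagacity slightly more
concrete, the following minimal Python sketch uses Thompson sampling
over two hypothetical recommendation strategies; the strategy names,
priors and ``true'' conversion rates are invented for illustration and
are not taken from \citeNP{shengbo-guo-thesis} or any other cited
system:

\begin{verbatim}
import random

# Each recommendation strategy keeps a Beta posterior over its
# conversion probability, updated from observed user behaviour.
posteriors = {"familiar":        {"alpha": 1, "beta": 1},
              "outside_the_box": {"alpha": 1, "beta": 1}}

def pick_strategy():
    """Sample a plausible conversion rate for each strategy and pick the
    best; uncertain strategies get tried occasionally, which is how the
    system learns."""
    draws = {name: random.betavariate(p["alpha"], p["beta"])
             for name, p in posteriors.items()}
    return max(draws, key=draws.get)

def record_outcome(strategy, converted):
    """Update the chosen strategy's posterior from the observed outcome."""
    key = "alpha" if converted else "beta"
    posteriors[strategy][key] += 1

# Simulated interactions with invented 'true' conversion rates.
true_rates = {"familiar": 0.12, "outside_the_box": 0.15}
for _ in range(1000):
    s = pick_strategy()
    record_outcome(s, random.random() < true_rates[s])

print(posteriors)  # the better strategy accumulates more 'alpha' successes
\end{verbatim}

Occasionally sampling the more uncertain ``outside-the-box'' strategy
is one simple operationalisation of curiosity; updating the posteriors
from observed conversions is one way the system's sagacity, and the
value of a strategy, could be tracked.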

An active research community investigating computational models of
serendipity exists in the field of information retrieval, and
specifically, in recommender systems \cite{Toms2000}. In this domain,
\citeA{Herlocker2004} and \citeA{McNee2006} view serendipity as an
important factor for user satisfaction, next to accuracy and
diversity. Serendipity in recommendations variously requires the
system to deliver an \emph{unexpected} and \emph{useful} \cite{Lu2012},
\emph{interesting} \cite{Herlocker2004}, \emph{attractive} or
\emph{relevant} item \cite{Ge2010}.
%% Recommendations are typically meant to help address the user's
%% difficulty in finding items that meet his or her interests or demands
%% within a large and potentially unobservable search space. The end user
%% can also be passive, and items are suggested to support other
%% stakeholders' goals, e.g.~to increase sales.
Definitions differ as to the requirement of \emph{novelty};
\citeA{Adamopoulos2011}, for example, describe systems that suggest
items that may already be known, but are still unexpected in the
current context. While standardized measures such as the $F_1$-score
or the (R)MSE are used to determine the \emph{accuracy} of an
evaluation in terms of preferred items in the user's history, there is
no common agreement on a measure for serendipity yet, although there
are several proposals \cite{Murakami2008, Adamopoulos2011,
McCay-Peet2011, iaquinta2010can}. In terms of our model, these systems
focus mainly on producing a \emph{serendipity trigger} for the user,
but they include aspects of user modeling which could bring other
elements into play. Paul Andr{\'e} et al.~\citeyear{andre2009discovery}
have examined