A Synthetic Approach for Recommendation: Combining Ratings, Social Relations, and Reviews


Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015)

A Synthetic Approach for Recommendation: Combining Ratings, Social Relations, and Reviews

Guang-Neng Hu (1), Xin-Yu Dai (1), Yunya Song (2), Shu-Jian Huang (1), Jia-Jun Chen (1)
(1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
(2) Department of Journalism, Hong Kong Baptist University, Hong Kong
hugn@nlp.nju.edu.cn, {daixinyu,huangsj,chenjj}@nju.edu.cn, yunyasong@hkbu.edu.hk

Abstract

Recommender systems (RSs) provide an effective way of alleviating the information overload problem by selecting personalized choices. Online social networks and user-generated content provide diverse sources for recommendation beyond ratings, which present opportunities as well as challenges for traditional RSs. Although social matrix factorization (Social MF) can integrate ratings with social relations and topic matrix factorization (Topic MF) can integrate ratings with item reviews, both of them ignore some useful information. In this paper, we investigate effective data fusion by combining the two approaches, in two steps. First, we extend Social MF to exploit the graph structure of neighbors. Second, we propose a novel framework, MR3, to jointly model these three types of information effectively for rating prediction by aligning latent factors and hidden topics. We achieve more accurate rating prediction on two real-life datasets. Furthermore, we measure the contribution of each data source to the proposed framework.
1 Introduction

For all the benefits of information abundance and communication technology, "information overload" is one of the digital-age dilemmas we are confronted with. Recommender systems (RSs) are instrumental in tackling this problem, as they help determine which information to offer to individual consumers and allow users to quickly find the personalized information that fits their needs [Goldberg et al., 1992; Linden et al., 2003; Koren et al., 2009]. RSs are nowadays ubiquitous in various domains and e-commerce platforms, such as recommendation of books at Amazon, music at Last.fm, movies at Netflix, and references at CiteULike. Social networking and knowledge sharing sites like Twitter and Epinions are popular platforms for users to connect to each other, to participate in online activities, and to generate shared opinions. Social relations and item contents provide independent and diverse sources for recommendation beyond explicit rating information [Ganu et al., 2009; McAuley and Leskovec, 2013; Ma et al., 2008; Tang et al., 2013], which present both opportunities and challenges for traditional RSs.

Collaborative filtering (CF) approaches are extensively investigated in the research community and widely used in industry. They are based on the naive intuition that if users rated items similarly in the past, then they are likely to rate other items similarly in the future [Goldberg et al., 1992; Sarwar et al., 2001]. Latent factor CF, which learns a latent vector of preferences for each user and a latent vector of attributes for each item, has gained popularity and become the standard model for recommenders due to its accuracy and scalability [Billsus and Pazzani, 1998; Koren et al., 2009]. CF models, however, suffer from data sparsity and the imbalance of ratings; they perform poorly on cold users and cold items, for which there are no or few data.

To overcome these weaknesses, additional sources of information are integrated into RSs. One research thread, which we call social matrix factorization (Social MF), combines ratings with social relations [Ma et al., 2008; 2011; Jamali and Ester, 2011; Tang et al., 2013; Guo et al., 2015]. Extensive studies have found a higher likelihood of establishing social ties among people having similar characteristics, namely the theory of homophily [McPherson et al., 2001; Tang and Liu, 2010]. Given interpersonal similarity and effective communication, homophilous ties become effective means of social influence [Marsden and Friedkin, 1993; Zhang et al., 2013]. Social MF methods factorize the rating matrix and the social matrix simultaneously.

Another research thread, which we call topic matrix factorization (Topic MF), integrates ratings with item contents or review text [Wang and Blei, 2011; Ling et al., 2014]. Reviews justify the rating of a user, and ratings are associated with item attributes hidden in reviews [Jakob et al., 2009; Ganu et al., 2009]. Topic MF methods combine latent factors in ratings with latent topics in item reviews [McAuley and Leskovec, 2013; Bao et al., 2014]. Nevertheless, both Social MF and Topic MF ignore some useful information, either item reviews or social relations.

There is a tendency towards hybrid methods [Pazzani, 1999; Purushotham et al., 2012; Chen et al., 2014]. These methods all consider diverse sources for recommendation; however, the first two belong to one-class CF [Pan et al., 2008], and hence the dimensions discovered are not necessarily correlated with ratings, while the last two adopt two components which are not effective [McAuley and Leskovec, 2013; Tang et al., 2013]. Hence, it is still a challenge to find an effective way to integrate multiple data sources for recommendation.

In this paper, we investigate the effectiveness of fusing social relations and review text for rating prediction in a novel way, inspired by the complementarity of the two independent sources for recommendation. The core idea is the alignment between latent factors found by Social MF and topics found by Topic MF. Our main contributions are outlined as follows:

- Providing a principled way to exploit ratings and social relations tightly for recommendation, where the tightness means exploiting the graph structure of neighbors;
- Proposing an effective framework, MR3, to jointly model ratings, the social network, and item reviews for rating prediction, where the effectiveness means adopting two effective components in some sense;
- Evaluating the proposed model extensively on two real-world datasets to understand its performance.

The organization of this paper is as follows. The problem setting and notations are given in Section 2. In Section 3, we present the two components and the details of the proposed framework. In Section 4, we give empirical results on real-life datasets. Concluding remarks with a discussion of some future work are in the final section.

2 Problem Statement and Notation

Suppose there are I users U = {u_1, ..., u_I} and J items V = {v_1, ..., v_J}. Let R ∈ R^{I×J} denote the rating matrix, where R_{i,j} is the rating of user i on item j, and we mark a zero if it is unknown. The task of rating prediction is to predict the missing ratings from the observed data. Latent factor CF methods like probabilistic matrix factorization (PMF) [Mnih and Salakhutdinov, 2007] exploit ratings for recommendation.

Users connect to others in a social network. We use T ∈ R^{I×I} to indicate the user-user social relations; T_{i,k} = 1 if user i has a relation to user k, and zero otherwise. Social MF methods like social recommendation (SoRec) [Ma et al., 2008] and local and global (LOCABAL) [Tang et al., 2013] integrate social relations for recommendation.

Items have content information, e.g., reviews commented by users. The observed datum d_{i,j} is the review of item j written by user i, often along with a rating score R_{i,j}. Topic MF methods like collaborative topic regression (CTR) [Wang and Blei, 2011] and hidden factors and topics (HFT) [McAuley and Leskovec, 2013] integrate item content for recommendation.

Both Social MF and Topic MF ignore some useful data sources, either item reviews or social relations. Notations used in this paper are described in Table 1.

Table 1: Notations

Symbols           | Meanings
F                 | dimensionality of latent factors/topics
R_{i,j}           | rating of item j by user i
U_i               | F-dimensional features for user i
V_j               | F-dimensional features for item j
W_{i,j}           | weight on the rating of item j given by user i
T_{i,k}           | social relation between user i and k
C_{i,k}           | social strength between user i and k
S_{i,k}           | social rating similarity between user i and k
H                 | F x F-dimensional social correlation matrix
d_{i,j}           | review ('document') of item j by user i
w_{d,n}; z_{d,n}  | the nth word in doc d; the corresponding topic
θ_j               | F-dimensional topic distribution for item j
φ_f               | word distribution for topic f
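For concreteness, the three data sources just defined (the rating matrix R, the social matrix T, and per-item review documents) can be held as sparse matrices plus bag-of-words lists. The sketch below is only an illustration under assumed file names and formats; it is not the authors' preprocessing code.

```python
import numpy as np
from scipy.sparse import coo_matrix

def load_matrices(rating_path, trust_path, n_users, n_items):
    """Build the sparse rating matrix R (I x J) and social matrix T (I x I).
    Assumed formats: rating_path has 'user item rating' triples per line,
    trust_path has directed 'user user' edges per line (T_{i,k} = 1)."""
    triples = np.loadtxt(rating_path)
    R = coo_matrix((triples[:, 2],
                    (triples[:, 0].astype(int), triples[:, 1].astype(int))),
                   shape=(n_users, n_items)).tocsr()
    edges = np.loadtxt(trust_path, dtype=int)
    T = coo_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])),
                   shape=(n_users, n_users)).tocsr()
    return R, T          # missing ratings are simply absent (treated as unknown)
```

Reviews would similarly be tokenized into per-item word-id lists, matching the 'doc' aggregation described in Section 4.1.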
3 The Proposed Framework

3.1 Matrix Factorization: A Basic Model

Rating scores are explicit user feedback, and matrix factorization (MF) is a state-of-the-art recommendation method for exploiting this rating information. MF techniques have gained popularity and become the standard recommendation approach due to their accuracy and scalability [Koren et al., 2009]. They have a probabilistic interpretation with Gaussian noise and are very flexible for adding side data sources such as review content and social relations, introduced in the following subsections. We adopt MF as a basic part of the proposed framework.

MF-based RSs mainly aim to find the latent user-specific matrix U = [U_1, ..., U_I] ∈ R^{F×I} and item-specific matrix V = [V_1, ..., V_J] ∈ R^{F×J}, where F is the number of latent factors, obtained by solving the following problem:

    \min_{U,V} \sum_{R_{i,j} \neq 0} (R_{i,j} - \hat{R}_{i,j})^2 + \lambda (\|U\|_F^2 + \|V\|_F^2),    (1)

where the predicted ratings are

    \hat{R}_{i,j} = \mu + b_i + b_j + U_i^T V_j,    (2)

and the regularization parameter λ controls over-fitting. The rating mean is captured by µ; b_i and b_j are the rating biases of u_i and v_j. The F-dimensional feature vectors U_i and V_j represent the preferences of user i and the characteristics of item j, respectively. The dot product U_i^T V_j captures the interaction, or match degree, between users and items.
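As a concrete reading of Eqs.(1)-(2), here is a minimal stochastic-gradient sketch of biased MF. The learning rate, regularization value, toy sizes, and data layout are assumptions for illustration, not settings from the paper.

```python
import numpy as np

def predict(mu, b_u, b_v, U, V, i, j):
    """Eq.(2): global mean + user/item biases + factor interaction."""
    return mu + b_u[i] + b_v[j] + U[:, i] @ V[:, j]

def sgd_epoch(ratings, mu, b_u, b_v, U, V, lr=0.01, lam=0.1):
    """One stochastic pass over the observed ratings, descending the squared
    error of Eq.(1); the constant factor 2 is folded into the learning rate."""
    for i, j, r in ratings:                       # observed (user, item, rating) triples
        e = predict(mu, b_u, b_v, U, V, i, j) - r
        b_u[i] -= lr * (e + lam * b_u[i])
        b_v[j] -= lr * (e + lam * b_v[j])
        U_i, V_j = U[:, i].copy(), V[:, j].copy()
        U[:, i] -= lr * (e * V_j + lam * U_i)     # partial derivative w.r.t. U_i
        V[:, j] -= lr * (e * U_i + lam * V_j)     # partial derivative w.r.t. V_j
    return mu, b_u, b_v, U, V

# Toy usage with assumed sizes:
rng = np.random.default_rng(0)
F, I, J = 10, 100, 200
U, V = 0.1 * rng.standard_normal((F, I)), 0.1 * rng.standard_normal((F, J))
b_u, b_v = np.zeros(I), np.zeros(J)
ratings = [(0, 5, 4.0), (3, 7, 2.0)]
mu = np.mean([r for _, _, r in ratings])
mu, b_u, b_v, U, V = sgd_epoch(ratings, mu, b_u, b_v, U, V)
```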

3.2 Topic MF: Integrating Rating with Review

Item reviews generated by users provide implicit feedback for recommendation beyond explicit ratings [Ganu et al., 2009; Bao et al., 2014]. Reviews explain the ratings of users, thus helping to understand users' rating behavior and to alleviate the cold-item problem. On the one hand, item characteristics (i.e., factors) are latent in ratings and can be found by the MF introduced in Eq.(1); on the other hand, item properties (i.e., topics) are hidden in reviews and can be found by topic models like latent Dirichlet allocation (LDA) [Blei et al., 2003]. Together, these intuitions were sharpened into the HFT model [McAuley and Leskovec, 2013].

The HFT model combines ratings with reviews by minimizing the following problem:

    \sum_{R_{i,j} \neq 0} (R_{i,j} - \hat{R}_{i,j})^2 - \lambda \sum_{d=1}^{J} \sum_{n \in N_d} \log \theta_{z_{d,n}} \phi_{z_{d,n}, w_{d,n}},    (3)

where the LDA parameters θ and φ denote the topic and word distributions, respectively; w_{d,n} and z_{d,n} are the nth word occurring in doc d and the corresponding topic; and λ controls the contribution from review content. The summation in the second term is over all documents and each word within them.

The goals are to both model ratings accurately and generate reviews with high likelihood. The trick for fusing ratings and reviews is the transformation

    \theta_{j,f} = \frac{\exp(\kappa V_{j,f})}{\sum_{f'} \exp(\kappa V_{j,f'})},    (4)

where the parameter κ is introduced to control the 'peakiness' of the transform and the summation is over the F latent topics/factors. This function transforms the real-valued parameters V_j ∈ R^F associated with ratings into the probabilistic ones θ_j associated with reviews. The fusing trick works because if an item exhibits a certain property, it corresponds to some topic being commented on by users. We adopt HFT as a component of the proposed framework.[1]

[1] As in HFT, we aggregate all reviews of a particular item as a 'doc', so the item index j corresponds to the doc index j.
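Eq.(4) is a softmax over an item's latent factors, scaled by κ. A small illustrative sketch (not the authors' code) of the transform, using the usual max-shift for numerical stability:

```python
import numpy as np

def item_topics(V_j, kappa):
    """Eq.(4): map an item's real-valued factor vector V_j (length F)
    to a topic distribution theta_j on the F-simplex."""
    logits = kappa * np.asarray(V_j, dtype=float)
    logits -= logits.max()              # softmax is shift-invariant; avoids overflow
    expv = np.exp(logits)
    return expv / expv.sum()

# A larger kappa makes theta_j more peaked around the dominant factor.
print(item_topics([0.2, 1.5, -0.3], kappa=1.0))
print(item_topics([0.2, 1.5, -0.3], kappa=5.0))
```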
3.3 Social MF: Integrating Rating with Relation

Social relations among users provide additional information for recommendation [Bedi et al., 2007; Jamali and Ester, 2011]. On the one hand, social correlation theories [Tang and Liu, 2010], including homophily and social influence, indicate that users' rating behavior is correlated with the social factors hidden in the social network, besides the preference factors hidden in the rating matrix. On the other hand, the reputation of a user in the social network reveals her rating confidence, and a consideration from a global perspective can alleviate rating noise to some extent. Together, these ideas were formulated in LOCABAL [Tang et al., 2013].

The LOCABAL model combines ratings with social relations to achieve the goals of modeling ratings accurately and capturing local social context by solving the problem

    \min_{U,V,H} \sum_{R_{i,j} \neq 0} W_{i,j} (R_{i,j} - \hat{R}_{i,j})^2 + \lambda \sum_{T_{i,k} \neq 0} (S_{i,k} - U_i^T H U_k)^2 + \lambda \Omega(\Theta),    (5)

where the rating weight W_{i,j} = 1/(1 + log r_i) is computed from the PageRank score r_i of user i in the social network, representing the global perspective of social context; S_{i,k} is the cosine similarity between the rating vectors of users i and k; H ∈ R^{F×F} is the social correlation matrix, capturing the user preference correlation; λ controls the contribution from social relations; and the regularization term is given by

    \Omega(\Theta) = \|U\|_F^2 + \|V\|_F^2 + \|H\|_F^2.    (6)

eSMF. While LOCABAL succeeds in integrating ratings with social relations from local and global perspectives, it can be further improved by exploiting the graph structure of neighbors. The graph structure of neighbors captures social influence locality [Zhang et al., 2013]; in other words, user behavior is mainly influenced by direct friends in their ego networks. We employ the trust values used in SoRec [Ma et al., 2008] to exploit this structure, and propose the extended Social MF (eSMF) model:

    \min_{U,V,H} \sum_{R_{i,j} \neq 0} W_{i,j} (R_{i,j} - \hat{R}_{i,j})^2 + \lambda \sum_{T_{i,k} \neq 0} C_{i,k} (S_{i,k} - U_i^T H U_k)^2 + \lambda \Omega(\Theta).    (7)

The trust values are

    C_{i,k} = \sqrt{\frac{d^-_{u_k}}{d^+_{u_i} + d^-_{u_k}}},    (8)

where the outdegree d^+_{u_i} represents the number of users whom u_i trusts, and the indegree d^-_{u_k} denotes the number of users who trust u_k.
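The weights in Eqs.(5)-(8) are simple statistics of the trust graph. The sketch below is our illustration rather than a released implementation; it assumes networkx for PageRank and treats r_i as the user's rank position under PageRank (1 = most reputable), an assumption made so that the logarithm stays nonnegative.

```python
import numpy as np
import networkx as nx                      # assumed dependency, used only for PageRank

def rating_weights(T):
    """W_i = 1 / (1 + log r_i) as in Eq.(5); r_i is taken here to be user i's
    rank position under PageRank (an assumption, not spelled out in the text)."""
    G = nx.from_scipy_sparse_array(T, create_using=nx.DiGraph)
    scores = nx.pagerank(G)
    order = sorted(scores, key=scores.get, reverse=True)
    rank = {u: pos + 1 for pos, u in enumerate(order)}
    return np.array([1.0 / (1.0 + np.log(rank[u])) for u in range(T.shape[0])])

def trust_values(T):
    """C_{i,k} of Eq.(8), computed from in/out degrees of the trust graph T."""
    out_deg = np.asarray(T.sum(axis=1)).ravel()   # d+_{u_i}: how many users u_i trusts
    in_deg = np.asarray(T.sum(axis=0)).ravel()    # d-_{u_k}: how many users trust u_k
    rows, cols = T.nonzero()
    return {(i, k): np.sqrt(in_deg[k] / (out_deg[i] + in_deg[k]))
            for i, k in zip(rows, cols)}
```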

3.4 MR3: A Model of Rating, Review and Relation

So far, we have described solutions for integrating ratings with reviews (Eq.(3)) and for integrating ratings with social relations (Eq.(7)), both based on MF. By aligning latent factors and topics, we propose an effective framework, MR3, to jointly model ratings with social relations and reviews. MR3 connects Social MF and Topic MF by minimizing the following problem:

    L(\Theta, \Phi, z, \kappa) = \sum_{R_{i,j} \neq 0} W_{i,j} (R_{i,j} - \hat{R}_{i,j})^2 + \lambda_{rev} \sum_{d=1}^{J} \sum_{n \in N_d} \left( -\log \theta_{z_{d,n}} - \log \phi_{z_{d,n}, w_{d,n}} \right) + \lambda_{rel} \sum_{T_{i,k} \neq 0} C_{i,k} (S_{i,k} - U_i^T H U_k)^2 + \lambda \Omega(\Theta),    (9)

where the parameters Θ = {U, V, H} are associated with ratings and social relations, the parameters Φ = {θ, φ} are associated with review text, and λ_rel and λ_rev are introduced to balance the results from social relations and reviews, respectively.

Before we delve into the learning algorithm, a brief discussion of Eq.(9) is in order. On the right-hand side, the first term is the rating squared error weighted by user reputation in the social network; the second term is the negative log-likelihood of the item review corpus; the third term is the local social context factorization weighted by trust values among users; the last term is the Frobenius norm penalty on the parameters to control over-fitting. The connection between ratings and social relations is the shared user latent feature space U; ratings and reviews are linked through the transformation involving V and θ in Eq.(4). The dependencies among these data and parameter matrices are depicted in Figure 1.

Figure 1: Relationship among matrices of parameters and data. Shaded nodes are data (R: rating matrix, S: social rating similarity, and D: doc-term matrix of reviews); the others are parameters (U: matrix of latent user factors, V: matrix of latent item factors, H: social correlation matrix, θ: doc-topic distributions, and φ: topic-word distributions). Parameters V and θ are coupled by Eq.(4). The double connections between U and S are indicated by the term (S - U^T H U) in Eq.(7).

Learning. Our objective is to search for

    \arg\min_{\Theta, \Phi, z, \kappa} L(\Theta, \Phi, z, \kappa).    (10)

Observe that the parameters Θ and Φ are coupled (see the paragraph above, Eq.(4), or Figure 1). The former can be found by gradient descent and the latter by Gibbs sampling; so we design a procedure that alternates between the following two steps:

    update \Theta^{new}, \Phi^{new}, \kappa^{new} = \arg\min_{\Theta, \Phi, \kappa} L(\Theta, \Phi, \kappa, z^{old});    (11a)
    sample z_{d,n}^{new} with probability p(z_{d,n}^{new} = f) \propto \theta_{d,f}^{new} \phi_{f, w_{d,n}}^{new}.    (11b)

For the first step, Eq.(11a), the topic assignments z_{d,n} for each word in the review corpus are fixed; we then update the terms Θ, Φ, and κ by gradient descent (GD). Recall that θ and V depend on each other; we fit only V and then determine θ by Eq.(4). This is the same as standard gradient-based MF for recommendation, except that we have to compute more gradients, which are given below.

For the second step, Eq.(11b), the parameters associated with the review corpus, θ and φ, are fixed; we then sample topic assignments z_{d,n} by iterating through all docs d and each word within them, setting z_{d,n} = f with probability proportional to θ_{d,f} φ_{f,w_{d,n}}. This is similar to updating z via LDA, except that the topic proportions θ are not sampled from a Dirichlet prior but instead are determined in the first step.

Finally, the two steps are repeated until a local optimum is reached. In practice, we sample topic assignments every 5 GD iterations/epochs, which we call a pass; usually it is enough to run 50 passes to find a local minimum.
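To make the structure of Eq.(9) concrete, the following sketch evaluates its four terms for given parameters; all argument names and data layouts are our assumptions, not the authors' implementation. In the alternating procedure of Eqs.(11a)-(11b), this objective would be decreased by gradient steps on Θ, Φ, and κ with z fixed, after which the topic assignments z are resampled.

```python
import numpy as np

def mr3_loss(R_obs, W, S_obs, C, docs, z, U, V, H, theta, phi,
             lam, lam_rev, lam_rel):
    """Evaluate the four terms of the MR3 objective, Eq.(9).

    R_obs : iterable of (i, j, r) observed ratings (assumed centered, so the
            bias terms of Eq.(2) are omitted, as in the Gradients paragraph)
    W     : per-user reputation weights (Eq.(5));  C : dict of trust values (Eq.(8))
    S_obs : iterable of (i, k, s) rating similarities for observed relations
    docs  : list over items of word-id lists;      z : matching topic assignments
    """
    rating_term = sum(W[i] * (r - U[:, i] @ V[:, j]) ** 2 for i, j, r in R_obs)
    review_term = -sum(np.log(theta[d, z[d][n]]) + np.log(phi[z[d][n], w])
                       for d, words in enumerate(docs) for n, w in enumerate(words))
    social_term = sum(C[(i, k)] * (s - U[:, i] @ H @ U[:, k]) ** 2 for i, k, s in S_obs)
    reg = (U ** 2).sum() + (V ** 2).sum() + (H ** 2).sum()   # Omega(Theta), Eq.(6)
    return rating_term + lam_rev * review_term + lam_rel * social_term + lam * reg
```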
Gradients. We now give the gradients used in Eq.(11a). (Gradients of the biases are omitted; the rating mean is not fitted because ratings are centered.) More notation is required here [Griffiths and Steyvers, 2004]. For each item j (i.e., doc j): 1) M_j is an F-dimensional count vector, in which each component is the number of times each topic occurs for it; 2) m_j is the number of words in it; and 3) z_j = \sum_f \exp(\kappa V_{j,f}) is a normalizer. For each word w: 1) M_w is an F-dimensional count vector, in which each component is the number of times it has been assigned to each topic; 2) m_f is the number of times topic f occurs; and 3) z_f = \sum_w \exp(\psi_{f,w}) is a normalizer. Note that φ_f is a stochastic vector, so we optimize the corresponding unnormalized vector ψ_f and then obtain \phi_{f,w} = \exp(\psi_{f,w}) / z_f.

    \frac{1}{2} \frac{\partial L}{\partial U_i} = \sum_{j: R_{i,j} \neq 0} W_{i,j} (\hat{R}_{i,j} - R_{i,j}) V_j + \lambda_{rel} \sum_{k: T_{k,i} \neq 0} C_{i,k} (U_k^T H U_i - S_{i,k}) H^T U_k + \lambda_{rel} \sum_{k: T_{i,k} \neq 0} C_{k,i} (U_i^T H U_k - S_{i,k}) H U_k + \lambda U_i.    (12)

    \frac{\partial L}{\partial V_j} = \sum_{i: R_{i,j} \neq 0} 2 W_{i,j} (\hat{R}_{i,j} - R_{i,j}) U_i + \lambda_{rev} \kappa \left( \frac{m_j}{z_j} \exp(\kappa V_j) - M_j \right) + 2 \lambda V_j.    (13)

    \frac{1}{2} \frac{\partial L}{\partial H} = \lambda_{rel} \sum_{T_{i,k} \neq 0} C_{i,k} (U_i^T H U_k - S_{i,k}) U_i U_k^T + \lambda H.    (14)

    \frac{\partial L}{\partial \psi_{f,w}} = \lambda_{rev} \left( \frac{m_f}{z_f} \exp(\psi_{f,w}) - M_{f,w} \right).    (15)

    \frac{\partial L}{\partial \kappa} = \lambda_{rev} \sum_{j,f} \left( \frac{m_j}{z_j} \exp(\kappa V_{j,f}) - M_{j,f} \right) V_{j,f}.    (16)

4 Experiments

In this section, we first evaluate our proposed eSMF component to show the benefit of exploiting the graph structure of neighbors. Then we demonstrate the effectiveness of our proposed MR3 model compared with the individual components. Finally, we analyze the contribution of each data source to the proposed model, followed by the sensitivity of MR3 to hyperparameters.

4.1 Datasets and Metric

We evaluate our models on two datasets: Epinions and Ciao.[2] Both are knowledge-sharing and review sites, in which users can rate items, connect to others, and give reviews on products. We remove stop words[3] and then select the top L = 8000 frequent words as the vocabulary; we remove users and items that occur only once or twice. The items indexed in the rating matrix are aligned to documents in the doc-term matrix; that is, we aggregate all reviews of a particular item as a 'doc'. Statistics of the datasets are given in Table 2. We see that the rating matrices of both datasets are very sparse, and the average length of documents is short on Epinions.

Table 2: Statistics of the Two Datasets (columns: Epinions, Ciao; rows: # of Users, # of Items, # of Ratings/Reviews, # of Social Relations, # of Words, Rating Density, Social Density, Ave. Words Per Doc).

We randomly select x% of the data as the training set and report the prediction performance on the remaining (1 - x)% as the testing set. The metric for the rating prediction task is the root-mean-square error (RMSE), defined as

    RMSE = \sqrt{ \frac{1}{|\mathcal{T}|} \sum_{(u_i, v_j) \in \mathcal{T}} (R_{i,j} - \hat{R}_{i,j})^2 },    (17)

where \mathcal{T} and |\mathcal{T}| are the test set and its cardinality. A smaller RMSE means better prediction performance.

[2] http://www.public.asu.edu/~jtang20/
[3] http://www.ranks.nl/stopwords
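Eq.(17) is straightforward to compute; a minimal sketch, assuming the test set is a list of (user, item, rating) triples and a predict function such as the one sketched after Eq.(2):

```python
import numpy as np

def rmse(test_triples, predict_fn):
    """Eq.(17): root-mean-square error over the held-out test set."""
    errors = np.array([r - predict_fn(i, j) for i, j, r in test_triples])
    return np.sqrt(np.mean(errors ** 2))

# Example with a trivial constant predictor (illustration only):
print(rmse([(0, 1, 4.0), (2, 3, 5.0)], lambda i, j: 3.5))
```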

4.2 Comp
