Estimating Position Bias Without Intrusive Interventions


Estimating Position Bias without Intrusive Interventions

Aman Agarwal, Ivan Zaitsev (Cornell University, Ithaca, NY; aa2398@cornell.edu, iz44@cornell.edu)
Xuanhui Wang, Cheng Li, Marc Najork (Google Inc., Mountain View, CA; {xuanhui,chgli,najork}@google.com)
Thorsten Joachims (Cornell University, Ithaca, NY; tj@cs.cornell.edu)

ABSTRACT

Presentation bias is one of the key challenges when learning from implicit feedback in search engines, as it confounds the relevance signal. While it was recently shown how counterfactual learning-to-rank (LTR) approaches [18] can provably overcome presentation bias when observation propensities are known, it remains to show how to effectively estimate these propensities. In this paper, we propose the first method for producing consistent propensity estimates without manual relevance judgments, disruptive interventions, or restrictive relevance-modeling assumptions. First, we show how to harvest a specific type of intervention data from historic feedback logs of multiple different ranking functions, and show that this data is sufficient for consistent propensity estimation in the position-based model. Second, we propose a new extremum estimator that makes effective use of this data. In an empirical evaluation, we find that the new estimator provides superior propensity estimates in two real-world systems – Arxiv Full-text Search and Google Drive Search. Beyond these two points, we find that the method is robust to a wide range of settings in simulation studies.

KEYWORDS

Unbiased learning-to-rank, counterfactual inference, propensity estimation

ACM Reference Format:
Aman Agarwal, Ivan Zaitsev, Xuanhui Wang, Cheng Li, Marc Najork and Thorsten Joachims. 2019. Estimating Position Bias without Intrusive Interventions. In The Twelfth ACM International Conference on Web Search and Data Mining (WSDM '19), February 11–15, 2019, Melbourne, VIC, Australia. ACM, New York, NY, USA, 9 pages.
1 INTRODUCTION

In most information retrieval (IR) applications (e.g., personal search, scholarly search, product search), implicit user feedback (e.g., clicks, dwell time, purchases) is routinely logged and constitutes an abundant source of training data for learning-to-rank (LTR). However, implicit feedback suffers from presentation biases, which can make its naive use as training data highly misleading [16]. In particular, the position at which a result is displayed introduces a strong bias, since higher-ranked results are more likely to be discovered by the user than lower-ranked ones.

It was recently shown that counterfactual inference methods provide a provably unbiased and consistent approach to LTR despite biased data [18]. The key prerequisite for counterfactual LTR is knowledge of the propensity of obtaining a particular feedback signal, which enables unbiased Empirical Risk Minimization (ERM) via Inverse Propensity Score (IPS) weighting. This makes getting accurate propensity estimates a crucial prerequisite for effective unbiased LTR.

In this paper, we propose the first approach for producing consistent propensity estimates without manual relevance judgments, disruptive interventions, or restrictive relevance-modeling assumptions. We focus on propensity estimation under the Position-Based Propensity Model (PBM), where all existing propensity estimation methods have substantial drawbacks.
In particular, the conventional estimator for the PBM takes a generative approach [7] and requires individual queries to repeat many times. This is unrealistic for most ranking applications. To avoid this requirement of repeating queries, Wang et al. [29] included a relevance model that is jointly estimated via an Expectation-Maximization (EM) procedure [9]. Unfortunately, defining an accurate relevance model is just as difficult as the learning-to-rank problem itself, and a misspecified relevance model can lead to biased propensity estimates. An alternative to generative click modeling on observational data are estimators that rely on specific randomized interventions – for example, randomly swapping the result at rank 1 to any rank k [18]. While this provides provably consistent propensity estimates for the PBM, it degrades retrieval performance and user experience [29] and is thus costly. In addition, we find in the following that swap interventions are statistically rather inefficient. The approach we propose in this paper overcomes the disadvantages of the existing methods, as it does not require repeated queries, a relevance model, or additional interventions.

The key idea behind our estimation technique is to exploit data from a natural intervention that is readily available in virtually any operational system – namely that we have implicit feedback data from more than one ranking function. We call this Intervention Harvesting. Since click behavior depends jointly on examination and relevance, we show how to exploit this intervention to control for any difference in overall relevance of results at different positions under the PBM. This makes our approach fundamentally

interventional, and it is thus consistent analogous to using explicit swap interventions under mild assumptions. However, by leveraging existing data that is readily available in most systems, it does not require additional online interventions and the resulting decrease in user experience. To make efficient use of the interventional data we harvest, we propose a specific extremum estimator – called AllPairs – that combines all available data for improved statistical efficiency without the need for a relevance model that could introduce bias. We find that this estimator works well even if the rankers we harvest interventions from are quite similar in their result placements and overall performance, and that it is able to recover the relative propensities globally even if most of the changes in rank are small.

2 RELATED WORK

Implicit user feedback (e.g. clicks and dwell time) has been widely used to improve search quality. In order to fully utilize the implicit feedback signals for LTR algorithms, various types of bias have to be handled [18], e.g., position bias [15], presentation bias [30], and trust bias [15, 22]. To address this challenge, a large amount of research has been devoted to extracting more accurate signals from click data. For example, some heuristic methods have been proposed to address the position bias by utilizing the pairwise preferences between clicked and skipped documents [14–16]. Though these methods have been found to provide more accurate relevance assessments, their data is still biased. For example, click-vs-skip preferences tend to reverse the presented order when used for learning [14] due to their sampling bias.

Recently, Joachims et al. [18] presented a counterfactual inference framework, which provides a provably unbiased and consistent approach to LTR even with biased feedback data. It requires knowledge of the propensity of obtaining a particular feedback signal, based on which an Inverse Propensity Scoring (IPS) approach can be applied. IPS was developed in causal inference [24] and is a widely accepted technique for handling sampling bias. It has been employed in unbiased evaluation and learning [1, 10, 20, 21, 25, 26]. The common assumption in most of these studies is that the propensities are under the system's control and are thus known. In the unbiased LTR setting, however, propensities arise from user behavior and thus need to be estimated.

The problem of propensity estimation for LTR was already addressed in other works. Wang et al. [28] and Joachims et al. [18] proposed to estimate propensity via randomization experiments. Carterette and Chandar [4] extend the counterfactual inference framework [18] by considering the case of evaluating new rankers that can retrieve previously unseen documents. Their methods still rely on interventions, though they make the interventions minimally invasive. To avoid intrusive interventions that degrade the user search experience, Wang et al. [29] proposed a regression-based EM algorithm to estimate position bias without interventions. Similarly, Ai et al. [3] presented a framework to jointly learn an unbiased ranker and an unbiased propensity model from biased click data. Both works couple relevance estimation with propensity estimation, which introduces the drawback of potential bias when the relevance model is misspecified. Our work differs from these by controlling for relevance explicitly and without a relevance model, while still avoiding intrusive interventions.

Another line of research employs click models [7] to infer relevance judgments from click logs. Training is typically performed via generative maximum likelihood under specific modeling assumptions about user behavior. There are two classic click models: the position-based model (PBM) [23] and the Cascade model [8]. Based on these two models, more advanced models have been developed, including UBM [11], DBN [6], and CCM [12]. Our propensity model is based on the PBM. However, we do not use it as a generative model to infer relevance, but instead use interventional techniques to infer propensities even without repeat queries.

3 HARVESTING INTERVENTIONS

We approach the propensity estimation task by harvesting implicit interventions from already logged data. In particular, we make use of the fact that we typically have data from multiple historic ranking functions, and we will identify the conditions under which these provide unconfounded intervention data. We focus on the Position-Based Propensity Model (PBM), which we review before defining interventional sets and analyzing their properties.

3.1 Position-Based Propensity Model

The position-based model recognizes that higher-ranked results are more likely to be considered (i.e. discovered and viewed) by the user than results further down the ranking. Suppose that for a particular query q, result d is displayed at position k. Let C be the random variable corresponding to a user clicking on d, and let E be the random variable denoting whether the user examines d. In our notation, q represents all information about the user, the query context, and which documents the user considers relevant or not. This means we can denote the relevance of an individual document as a non-random function rel(q, d) of q, where rel(q, d) = 1 and rel(q, d) = 0 indicate relevant and non-relevant respectively. Then, according to the Position-Based Propensity Model (PBM) [7],

Pr(C = 1 | q, d, k) = Pr(E = 1 | k) · rel(q, d) = p_k · rel(q, d).

In this model, the examination probability p_k := Pr(E = 1 | k) depends only on the position, and it is identical to the observation propensity [18]. For learning, it is sufficient to estimate relative propensities p_k / p_1 for each k [18], since multiplicative scaling does not change the training objective of counterfactual learning methods (e.g. [2, 3, 17, 18, 26, 27]). Estimating these relative propensities is the goal of this paper.

Note that one can train multiple PBM models to account for changes in the propensity curve due to context. In this way, the expressiveness of the PBM model can be substantially extended. For example, one can train separate PBM models for navigational vs. informational queries simply by partitioning the data. While training multiple such models is prohibitively expensive when intrusive interventions are required, the intervention-harvesting approach we describe below makes training such contextual PBMs feasible, since it does not require costly data, is both statistically and computationally efficient, and does not have any parameters that require manual tuning.

3.2 Controlling for Relevance through Swap Interventions

The key problem in both propensity estimation and unbiased learning to rank is that we only observe clicks C, but we never get to observe E (whether a user examined a result) and rel(q, d) (whether the user found d relevant for q) individually. This makes it difficult to attribute the lack of a click to a lack of examination or a lack of relevance. A key idea for overcoming this dilemma without explicit relevance judgments or cumbersome instrumentation (e.g. eye tracking) was proposed in [18], namely to control for relevance through randomized interventions.

The intervention proposed in [18] is to randomly swap the result in position 1 with the result in position k. We call these Swap(1, k) interventions. Such swaps provide a completely randomized experiment [13], meaning that the assignment of the document to a position does not depend on relevance or any covariates (e.g. abstract length, document language). Under these swap interventions, we now get to observe how many clicks position-1 results get when they stay in position 1 vs. when they get swapped into position k. Since the expected relevance in either condition (i.e. swapped vs. not swapped) is the same, any change in clickthrough rate must be proportional to a drop in examination.

More formally, we model user queries as sampled i.i.d. q ~ Pr(Q). Whenever a query is sampled, the ranker f(q) sorts the candidate results d for the query, and we apply the randomized swap intervention between position 1 and some fixed k before the ranking is displayed to the user. As a precursor to the later exposition, suppose that the random swap occurs with a fixed probability p (not necessarily 0.5), yielding the logged datasets D_1^{1,k} = {(q_1^i, d_1^i, C_1^i)}_{n_1} (result stayed in position 1) and D_k^{1,k} = {(q_k^j, d_k^j, C_k^j)}_{n_2} (result was swapped into position k) of sizes n_1 and n_2 respectively. Here C_1^i denotes whether the document d_1^i placed at position 1 was clicked or not, and similarly for C_k^j. Denote with

ĉ_1^{1,k} = (1/n_1) Σ_i C_1^i

the rate of clicks that documents get when they remain in position 1, and let

ĉ_k^{1,k} = (1/n_2) Σ_j C_k^j

be the rate of clicks when they get swapped to position k. Then, under the PBM, the relative propensity is equal to the ratio of expected click rates:

p_k / p_1
= (p_k / p_1) · E_{D_k^{1,k}}[Σ_j rel(q_k^j, d_k^j) / n_2] / E_{D_1^{1,k}}[Σ_i rel(q_1^i, d_1^i) / n_1]
= E_{D_k^{1,k}}[Σ_j p_k · rel(q_k^j, d_k^j) / n_2] / E_{D_1^{1,k}}[Σ_i p_1 · rel(q_1^i, d_1^i) / n_1]
= E_{D_k^{1,k}}[Σ_j Pr(C_k^j = 1 | q_k^j, d_k^j, k) / n_2] / E_{D_1^{1,k}}[Σ_i Pr(C_1^i = 1 | q_1^i, d_1^i, 1) / n_1]
= E_{D_k^{1,k}}[ĉ_k^{1,k}] / E_{D_1^{1,k}}[ĉ_1^{1,k}]

The first equality holds since swaps are completely at random, such that the expected average relevance under each condition is equal and their ratio is one (we need not consider n_1 and n_2 as random variables). It is thus reasonable to estimate the relative propensity as

p̂_k / p̂_1 = ĉ_k^{1,k} / ĉ_1^{1,k},

which is a statistically consistent estimate as the sample size n grows [18]. Analogously, relative propensities can be estimated between any pairs of positions k and k′ with Swap(k, k′) interventions [29].

3.3 Interventional Sets from Multiple Rankers

A key shortcoming of the swap experiment is its impact on user experience, since retrieval performance can be degraded quite substantially. This is especially true for swaps between position 1 and k for large k, even though this particular swap experiment makes it easy to estimate propensities relative to position 1 directly. To overcome this impact on user experience, we now show how to harvest interventions from already existing data under mild assumptions, so that no additional swap experiments are needed. As part of this, we may never see a direct swap between position 1 and k, and we tackle the problem of how to aggregate many local swaps into an overall consistent estimate in Section 4.

Our key idea in harvesting swap interventions lies in the use of data from multiple historic rankers.
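Before turning to harvested data, the Swap(1, k) estimator of Section 3.2 can be sketched in a few lines. The simulation below uses hypothetical propensities, relevance rate, and swap probability to check that the ratio of click rates recovers p_k / p_1.

```python
import random

# Sketch of the Swap(1, 5) ratio estimator on simulated PBM clicks;
# the true propensities and the relevance rate are hypothetical.
rng = random.Random(42)
true_p = {1: 1.0, 5: 0.2}       # assumed examination propensities
n1 = n2 = c1 = c5 = 0
for _ in range(200_000):
    relevant = rng.random() < 0.3          # assumed relevance rate at rank 1
    pos = 5 if rng.random() < 0.5 else 1   # randomized swap with prob. 0.5
    click = relevant and rng.random() < true_p[pos]
    if pos == 1:
        n1 += 1; c1 += click
    else:
        n2 += 1; c5 += click

# Relative propensity: ratio of click rates in the two swap conditions.
p5_hat = (c5 / n2) / (c1 / n1)
print(round(p5_hat, 2))  # should be close to true_p[5] / true_p[1] = 0.2
```

Because the swap is completely randomized, the unknown relevance rate cancels out of the ratio, which is the property the harvested interventions below will reproduce without explicit randomization.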
Logs from multiple rankers are typically available in operational systems, since multiple rankers may be fielded at the same time in A/B tests, or since the production ranker is updated frequently. Consider the case where we have data from m historic rankers F = {f_1, ..., f_m}. Furthermore, a crucial condition is that the query distribution must not depend on the choice of ranker f_i (where the query q includes the user's relevance vector rel in our notation):

∀f_i : Pr(Q | f_i) = Pr(Q)    ∧    ∀q ∈ Q : Pr(f_i | q) = Pr(f_i)    (1)

Note that the condition on the left implies that Pr(f_i | q) = Pr(f_i) on the right by Bayes' rule, which is related to exploration scavenging [19]. Intuitively, this condition avoids that different rankers f_i get different types of queries. For rankers that are compared in an A/B test, this condition is typically fulfilled by construction, since the assignment of queries to rankers is randomized. For data from a sequence of production rankers, one needs to be mindful that any temporal covariate shift in the query and user distribution is acceptably small.

We denote the click log of each ranker f_i with D_i = {(q_i^j, y_i^j, c_i^j)}_{n_i}, where n_i is the number of queries that f_i processed. Here j ∈ [n_i], q_i^j is a query, y_i^j = f_i(q_i^j) is the presented ranking, and c_i^j is a vector that indicates for each document click or no click. Since most retrieval systems only rerank a candidate set of documents in their final stage, we denote Ω(q) as the candidate set of results for query q. We furthermore denote the rank of candidate result d in ranking y_i^j as rk(d | y_i^j), and we use c_i^j(d) ∈ {0, 1} to denote whether result d was clicked or not.

We begin by defining interventional sets S_{k,k′} of query-document pairs, where one ranking function f ∈ F puts document d at position k and another ranker f′ ∈ F puts the same document d at position k′ for the same query q. Let M be some fixed number of top positions for which propensity estimates are desired (e.g. M = 10). Then, for each two ranks k ≠ k′ ∈ [M], we define the interventional

set as

S_{k,k′} := {(q, d) : q ∈ Q, d ∈ Ω(q), ∃f, f′ ∈ F : rk(d | f(q)) = k ∧ rk(d | f′(q)) = k′}.

Intuitively, the pairs in these sets are informative because they receive different treatments or interventions based on the choice of different rankers. But note that we are not requiring any query to occur multiple times. An interventional set merely reflects that two potential outcomes (i.e. document d either in position k or in position k′ for query q) were possible. In fact, we only ever observe one factual outcome, while the other outcome remains counterfactual and unobserved.

The key insight is that for any pair of ranks (k, k′), the interventional set contains the query-document pairs (q, d) for which the rank of d was randomly assigned to either k or k′ via the choice of ranking function f_i. Specifically, there are three possible outcomes depending on the choice of ranking function f_i ∈ F: (a) f_i puts d at rank k, (b) f_i puts d at rank k′, or (c) f_i puts d at some other rank. In the latter case, the instance is not included in the interventional set S_{k,k′}, but the former two cases can be seen as the two conditions of a swap experiment between positions k and k′. In this way, we can think about these as virtual swap interventions between ranks k and k′, where the randomization comes from the randomized choice of ranking function according to (1).

While this swap experiment is completely randomized under condition (1) (i.e. the choice of f_i does not depend on q), the assignment is generally not uniform (i.e. d could have a higher probability to be presented in position k than in position k′). The weights

w(q, d, k) := Σ_{i=1}^m n_i · 1[rk(d | f_i(q)) = k]

are then used to account for this non-uniformity. Specifically, the weight w(q, d, k) reflects how often document d is ranked at position k given that we have employed each ranking function f_i exactly n_i times. Therefore, the probability of assigning d to k as opposed to k′ for query q can be estimated as

P(rank k | rank k or rank k′, q, d) = w(q, d, k) / (w(q, d, k) + w(q, d, k′)).

We will show in the following how interventional sets can be used to control for unobserved relevance information when estimating propensities.

3.4 Controlling for Unobserved Relevance through Interventional Sets

As we had already seen in Section 3.2, we need to disentangle two unobserved quantities – relevance and examination – when analyzing observed clicks in the PBM. This can be achieved by controlling for relevance through randomization, which was done explicitly in Section 3.2 via Swap(1, k) interventions. The following shows that interventional sets S_{k,k′} provide analogous control for any pair of ranks (k, k′) under condition (1).

For each interventional set S_{k,k′}, with k ≠ k′ ∈ [M], we can now define the quantities ĉ_k^{k,k′} (and ĉ_{k′}^{k,k′}), which can be thought of as the rate of clicks in position k (and in position k′):

ĉ_k^{k,k′} := Σ_{i=1}^m Σ_{j=1}^{n_i} Σ_{d ∈ Ω(q_i^j)} 1[(q_i^j, d) ∈ S_{k,k′}] · 1[rk(d | y_i^j) = k] · c_i^j(d) / w(q_i^j, d, k)

ĉ_{k′}^{k,k′} := Σ_{i=1}^m Σ_{j=1}^{n_i} Σ_{d ∈ Ω(q_i^j)} 1[(q_i^j, d) ∈ S_{k,k′}] · 1[rk(d | y_i^j) = k′] · c_i^j(d) / w(q_i^j, d, k′)

The definition normalizes the observed clicks by dividing with w(q_i^j, d, k), which accounts for non-uniform assignment probabilities. Note that w(q_i^j, d, k) is non-zero whenever the first indicator is true, such that we never divide a non-zero quantity by zero (and we define 0/0 := 0). Intuitively, ĉ_k^{k,k′} and ĉ_{k′}^{k,k′} capture the weighted click-through rate at position k and k′ restricted to (k, k′)-interventional (query, document) pairs, where the weights w(q, d, k) account for the imbalance in applying the intervention of putting document d at position k vs. k′ for query q.

The following shows that ĉ_k^{k,k′} and ĉ_{k′}^{k,k′} are proportional to the true clickthrough rate at positions k and k′ in expectation, conditioned on the number of relevant documents in the interventional set S_{k,k′}.

Proposition 1. For the PBM model, i.i.d. queries q ~ Pr(Q), and under the condition in (1), the expectations of ĉ_k^{k,k′} and ĉ_{k′}^{k,k′} are

E_{q,c}[ĉ_k^{k,k′}] = p_k · r̄_{k,k′}
E_{q,c}[ĉ_{k′}^{k,k′}] = p_{k′} · r̄_{k,k′}

where r̄_{k,k′} = E_q[Σ_{d ∈ Ω(q)} 1[(q,d) ∈ S_{k,k′}] · rel(q, d)].

Proof. We only detail the proof for ĉ_k^{k,k′}, since the proof for ĉ_{k′}^{k,k′} is analogous.

E_{q,c}[ĉ_k^{k,k′}]
= Σ_{i=1}^m Σ_{j=1}^{n_i} Σ_{q ∈ Q} Pr(q) Σ_{d ∈ Ω(q)} 1[(q,d) ∈ S_{k,k′}] · 1[rk(d | f_i(q)) = k] · E_c[c(d)] / w(q, d, k)
= Σ_{i=1}^m Σ_{j=1}^{n_i} Σ_{q ∈ Q} Pr(q) Σ_{d ∈ Ω(q)} 1[(q,d) ∈ S_{k,k′}] · 1[rk(d | f_i(q)) = k] · p_k · rel(q, d) / w(q, d, k)
= Σ_{q ∈ Q} Pr(q) Σ_{d ∈ Ω(q)} (Σ_{i=1}^m Σ_{j=1}^{n_i} 1[rk(d | f_i(q)) = k]) / w(q, d, k) · p_k · 1[(q,d) ∈ S_{k,k′}] · rel(q, d)
= Σ_{q ∈ Q} Pr(q) Σ_{d ∈ Ω(q)} (Σ_{i=1}^m n_i · 1[rk(d | f_i(q)) = k]) / w(q, d, k) · p_k · 1[(q,d) ∈ S_{k,k′}] · rel(q, d)
= p_k · E_q[Σ_{d ∈ Ω(q)} 1[(q,d) ∈ S_{k,k′}] · rel(q, d)]

The first equality follows from the i.i.d. assumption of condition (1) and that y_i^j = f_i(q_i^j) by definition, the second from the PBM definition, and the third by taking the inner terms common. ∎

The quantity r̄_{k,k′} is related to the average relevance of the documents in the interventional set S_{k,k′} (also see Section 4.2).
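On toy logs, harvesting an interventional set and computing the weighted click rates ĉ can be sketched as follows. The two-ranker log and all names are hypothetical, and traffic is assumed equal (n_1 = n_2 = 1), so the weights w reduce to placement counts.

```python
from collections import defaultdict

# Toy click logs from two rankers (equal traffic assumed, n_i = 1 each).
# Each record: (query, ranking as an ordered list of docs, set of clicked docs).
logs = [
    ("q1", ["a", "b", "c"], {"a"}),   # ranker f1: "a" shown at rank 1, clicked
    ("q1", ["b", "a", "c"], {"a"}),   # ranker f2: "a" shown at rank 2, clicked
]

# w(q, d, k): how often d is placed at rank k for q across the rankers.
w = defaultdict(int)
for q, ranking, _ in logs:
    for k, d in enumerate(ranking, start=1):
        w[(q, d, k)] += 1

# Weighted click rates for the interventional set S_{1,2}: a (q, d) pair
# qualifies if some ranker puts d at rank 1 and another puts it at rank 2.
c_hat = {1: 0.0, 2: 0.0}
for q, ranking, clicks in logs:
    for k, d in enumerate(ranking, start=1):
        if k in (1, 2) and w[(q, d, 1)] > 0 and w[(q, d, 2)] > 0:
            c_hat[k] += (d in clicks) / w[(q, d, k)]

print(c_hat)  # "a" contributes a click at rank 1 (f1) and at rank 2 (f2)
```

Here ("q1", "c") never enters S_{1,2} because both rankers place it at rank 3, while ("q1", "a") and ("q1", "b") each experience a virtual swap between ranks 1 and 2.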

While r̄_{k,k′} is unobserved, it is shared between both ĉ_k^{k,k′} and ĉ_{k′}^{k,k′}. We can thus get the relative propensity between positions k and k′ as

p_k / p_{k′} = (p_k · r̄_{k,k′}) / (p_{k′} · r̄_{k,k′}) = E_{q,c}[ĉ_k^{k,k′}] / E_{q,c}[ĉ_{k′}^{k,k′}].

4 PROPENSITY ESTIMATORS FOR INTERVENTIONAL SETS

This section defines relative propensity estimators that use interventional sets. The first set of estimators, which we call local estimators, are straightforward adaptations of estimators that had been proposed for data from explicit interventions [18, 29]. However, these local estimators ignore much of the available information when harvesting interventions, and we thus develop a new global estimator that exploits information from all interventional sets S_{k,k′}.

4.1 Local Estimators

Since the click counts ĉ_k^{k,k′} and ĉ_{k′}^{k,k′} can be treated just like data from an explicit intervention, the same estimators apply. In particular, we observe the interventional sets S_{1,k} and can thus use the same estimator that we previously used for the explicit Swap(1, k) experiment [18]. We call this the PivotOne estimator:

p̂_k / p̂_1 = ĉ_k^{1,k} / ĉ_1^{1,k}

We can similarly adapt the estimator used in [29], which uses a chain of swaps between adjacent positions in the ranking. We call this the AdjacentChain estimator:

p̂_k / p̂_1 = (ĉ_2^{1,2} / ĉ_1^{1,2}) · (ĉ_3^{2,3} / ĉ_2^{2,3}) · ... · (ĉ_k^{k-1,k} / ĉ_{k-1}^{k-1,k})

It is easy to see that both estimators are statistically consistent under mild conditions, most importantly that each relevant interventional set has non-zero support such that ĉ_k^{k,k′} and ĉ_{k′}^{k,k′} concentrate to their expectations.

4.2 Global AllPairs Estimator

A key shortcoming of the local estimators is that they only use a small part of the available information, and that they ignore the data from most interventional sets S_{k,k′}. To overcome this shortcoming, we developed a new extremum estimator called AllPairs that resembles a maximum likelihood objective over all interventional sets. In order to formulate the training objective of the AllPairs estimator, we first need to define a quantity that is analogous to ĉ_k^{k,k′} and ĉ_{k′}^{k,k′}, but counting the non-click events:

c̄_k^{k,k′} := Σ_{i=1}^m Σ_{j=1}^{n_i} Σ_{d ∈ Ω(q_i^j)} 1[(q_i^j, d) ∈ S_{k,k′}] · 1[rk(d | y_i^j) = k] · (1 − c_i^j(d)) / w(q_i^j, d, k)

c̄_{k′}^{k,k′} := Σ_{i=1}^m Σ_{j=1}^{n_i} Σ_{d ∈ Ω(q_i^j)} 1[(q_i^j, d) ∈ S_{k,k′}] · 1[rk(d | y_i^j) = k′] · (1 − c_i^j(d)) / w(q_i^j, d, k′)

Analogous to ĉ_k^{k,k′} and ĉ_{k′}^{k,k′}, both quantities have the desired expectation.

Proposition 2. For the PBM model, i.i.d. queries q ~ Pr(Q), and under the condition in (1), the expectations of c̄_k^{k,k′} and c̄_{k′}^{k,k′} are

E_{q,c}[c̄_k^{k,k′}] = N_{k,k′} − p_k · r̄_{k,k′}
E_{q,c}[c̄_{k′}^{k,k′}] = N_{k,k′} − p_{k′} · r̄_{k,k′}

where r̄_{k,k′} = E_q[Σ_{d ∈ Ω(q)} 1[(q,d) ∈ S_{k,k′}] · rel(q, d)] and N_{k,k′} = E_q[Σ_{d ∈ Ω(q)} 1[(q,d) ∈ S_{k,k′}]].

Proof. The proof is analogous to Proposition 1 and thus omitted. ∎

The way we constructed the interventional sets, the expected relevances r̄_{k,k′} are not necessarily normalized to be within the interval [0, 1]. It is therefore more convenient to instead consider the normalized version of r̄_{k,k′}:

r_{k,k′} = r̄_{k,k′} / N_{k,k′} ∈ [0, 1].

Under this normalization, we have that E[ĉ_k^{k,k′}] = p_k · r_{k,k′} · N_{k,k′} and E[c̄_k^{k,k′}] = (1 − p_k · r_{k,k′}) · N_{k,k′}, and similarly for ĉ_{k′}^{k,k′} and c̄_{k′}^{k,k′}. We model this normalized r_{k,k′} in the AllPairs estimator.

We can now formulate the training objective of the AllPairs estimator. The following objective needs to be maximized with respect to p̂_k ∈ [0, 1] and r̂_{k,k′} = r̂_{k′,k} ∈ [0, 1] (since r_{k,k′} = r_{k′,k} by definition):

(p̂, r̂) = argmax_{p,r} Σ_{k ≠ k′ ∈ [M]} ĉ_k^{k,k′} · log(p̂_k · r̂_{k,k′}) + c̄_k^{k,k′} · log(1 − p̂_k · r̂_{k,k′})

This optimization problem can be interpreted as Weighted Cross-Entropy Maximization for estimating the distribution p_k · r_{k,k′} using weighted samples ĉ_k^{k,k′} and c̄_k^{k,k′}. The weighting by N_{k,k′} ensures that the contribution of each aggregated click-through sample is proportional to the size of its interventional set.

From the solution of this optimization problem, (p̂, r̂), we only need p̂ while the matrix r̂ can be discarded. To get normalized propensities relative to rank 1, we compute p̂_k / p̂_1. Note that the optimization problem is quite small, as it uses only O(M²) variables and 2M(M−1) terms in the objective.

Unlike the local estimators, the AllPairs approach integrates all data from every interventional set. We will evaluate empirically how much statistical efficiency is gained by taking this global approach compared to the local estimators.

5 EMPIRICAL EVALUATION

We take a two-pronged approach to evaluating our intervention-harvesting technique and the estimators. First, we fielded them on two real-world systems – the Arxiv Full-Text Search Engine and Google Drive Search – to gain insight into their practical effectiveness. Second, we augment these real-world experiments with simulation studies using synthetically generated click data, where we can explore the behavior of our method over the whole spectrum of settings (e.g. varying ranker similarity, presentation-bias severity, and click noise).

5.1 Real-World Evaluation: Arxiv Full-Text Search

Table 1: Size of the interventional sets S_{k,k′} for Arxiv (showing only top 10 positions).

We conducted a controlled experiment on the Arxiv Full-Text Search Engine, where we compare the results from the AllPairs estimator using Intervention Harvesting with the results from a gold-standard intervention experiment as described below. We used three ranking functions {f_1, f_2, f_3} for defining the interventional sets, which were generated by using different learning methods and datasets.

Intervention Harvesting. For the other half of the incoming queries, we uniformly pick a ranking function f_i and present its results without further intervention. From the data in these conditions, we then compute the interventional sets S_{k,k′} for k ≠ k′ ∈ {1, ..., 21} and their resulting ĉ_k^{k,k′}, c̄_k^{k,k′} and ĉ_{k′}^{k,k′}, c̄_{k′}^{k,k′}. These are then used in the AllPairs estimator from Section 4.2.

5.1.1 Results. Data for all six conditions was collected simultaneously between May 14, 2018 and August 1, 2018 to avoid confounding due to shifts in the query distribution, for maximum validity of the experiment. About 53,000 queries and 25,600 clicks were collected, about half for the swap experiment and half for the interventional-set method.

The estimated propensity curves for the gold-standard swap experiment and for the AllPairs estimator are shown in Figure 1. The shaded region for each curve corresponds to a 95% confidence interval calculated from 1000 bootstrap samples. As we can see, the curves follow a similar trend for each position, and AllPairs mostly lies within the confidence interval of the gold-standard curve. However, the confidence interval for the AllPairs method is substantially tighter than for the swap experiment, indicating that AllPairs is not only less intrusive (no
