
Statistically Significant Detection of Linguistic Change

Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, Steven Skiena
Stony Brook University, USA
vvkulkarni@cs.stonybrook.edu, ralrfou@cs.stonybrook.edu, bperozzi@cs.stonybrook.edu, skiena@cs.stonybrook.edu

WWW 2015, May 18-22, 2015, Florence, Italy.

ABSTRACT

We propose a new computational approach for tracking and detecting statistically significant linguistic shifts in the meaning and usage of words. Such linguistic shifts are especially prevalent on the Internet, where the rapid exchange of ideas can quickly change a word's meaning. Our meta-analysis approach constructs property time series of word usage, and then uses statistically sound change point detection algorithms to identify significant linguistic shifts.

We consider and analyze three approaches of increasing complexity to generate such linguistic property time series, the culmination of which uses distributional characteristics inferred from word co-occurrences. Using recently proposed deep neural language models, we first train vector representations of words for each time period. Second, we warp the vector spaces into one unified coordinate system. Finally, we construct a distance-based distributional time series for each word to track its linguistic displacement over time.

We demonstrate that our approach is scalable by tracking linguistic change across years of micro-blogging using Twitter, a decade of product reviews using a corpus of movie reviews from Amazon, and a century of written books using the Google Books Ngram Corpus. Our analysis reveals interesting patterns of language usage change commensurate with each medium.

Categories and Subject Descriptors
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval

Keywords
Web Mining; Computational Linguistics

1. INTRODUCTION

Natural languages are inherently dynamic, evolving over time to accommodate the needs of their speakers. This effect is especially prevalent on the Internet, where the rapid exchange of ideas can change a word's meaning overnight.

[Figure 1: A 2-dimensional projection of the latent semantic space captured by our algorithm. Notice the semantic trajectory of the word gay transitioning meaning in the space.]

In this paper, we study the problem of detecting such linguistic shifts on a variety of media including micro-blog posts, product reviews, and books. Specifically, we seek to detect the broadening and narrowing of semantic senses of words, as they continually change throughout the lifetime of a medium.

We propose the first computational approach for tracking and detecting statistically significant linguistic shifts of words. To model the temporal evolution of natural language, we construct a time series per word. We investigate three methods to build our word time series. First, we extract Frequency-based statistics to capture sudden changes in word usage.
Second, we construct Syntactic time series by analyzing each word's part of speech (POS) tag distribution. Finally, we infer contextual cues from word co-occurrence statistics to construct Distributional time series. In order to detect and establish the statistical significance of word changes over time, we present a change point detection algorithm, which is compatible with all three methods.

Figure 1 illustrates a 2-dimensional projection of the latent semantic space captured by our Distributional method. We clearly observe the sequence of semantic shifts that the word gay has undergone over the last century (1900-2005). Initially, gay was an adjective that meant cheerful or dapper. Observe that for the first 50 years, it stayed in the same general region of the semantic space. However, by 1975 it had begun a transition over to its current meaning, a shift which accelerated over the years to come.

The choice of the time series construction method determines the type of information we capture regarding word usage.

The difference between frequency-based approaches and distributional methods is illustrated in Figure 2. Figure 2a shows the frequencies of two words, Sandy (red) and Hurricane (blue), as a percentage of search queries according to Google Trends. Observe the sharp spikes in both words' usage in October 2012, which corresponds to a storm called Hurricane Sandy striking the Atlantic Coast of the United States. However, only one of those words (Sandy) actually acquired a new meaning. Note that while the word Hurricane definitely experienced a surge in frequency of usage, it did not undergo any change in meaning. Indeed, using our distributional method (Figure 2b), we observe that only the word Sandy shifted in meaning whereas Hurricane did not.

[Figure 2: Comparison between Google Trends and our method. Observe how Google Trends shows spikes in frequency for both Hurricane (blue) and Sandy (red). Our method, in contrast, models change in usage and detects that only Sandy changed its meaning and not Hurricane. (a) Frequency method (Google Trends); (b) Distributional method.]

Our computational approach is scalable, and we demonstrate this by running our method on three large datasets. Specifically, we investigate linguistic change detection across years of micro-blogging using Twitter, a decade of product reviews using a corpus of movie reviews from Amazon, and a century of written books using the Google Books Ngram Corpus.

Despite the fast pace of change of web content, our method is able to detect the introduction of new products, movies and books. This could help semantically aware web applications to better understand user intentions and requests. Detecting the semantic shift of a word would trigger such applications to apply focused sense disambiguation analysis.

In summary, our contributions are as follows:

- Word Evolution Modeling: We study three different methods for the statistical modeling of word evolution over time. We use measures of frequency, part-of-speech tag distribution, and word co-occurrence to construct time series for each word under investigation. (Section 3)
- Statistical Soundness: We propose (to our knowledge) the first statistically sound method for linguistic shift detection. Our approach uses change point detection in time series to assign significance-of-change scores to each word. (Section 4)
- Cross-Domain Analysis: We apply our method on three different domains: books, tweets and online reviews. Our corpora consist of billions of words and span several time scales. We show several interesting instances of semantic change identified by our method. (Section 6)

The rest of the paper is structured as follows. In Section 2 we define the problem of language shift detection over time. Then, we outline our proposals to construct time series modeling word evolution in Section 3. Next, in Section 4, we describe the method we developed for detecting significant changes in natural language. We describe the datasets we used in Section 5, and then evaluate our system both qualitatively and quantitatively in Section 6. We follow this with a treatment of related work in Section 7, and finally conclude with a discussion of the limitations and possible future work in Section 8.

2. PROBLEM DEFINITION

Our problem is to quantify the linguistic shift in word meaning (semantic or context change) and usage across time. Given a temporal corpus C that is created over a time span S, we divide the corpus into n snapshots C_t, each of period length P. We build a common vocabulary V by intersecting the word dictionaries that appear in all the snapshots (i.e., we track the same word set across time). This eliminates trivial examples of word usage shift from words which appear or vanish throughout the corpus.

To model word evolution, we construct a time series T(w) for each word w ∈ V. Each point T_t(w) corresponds to statistical information extracted from corpus snapshot C_t that reflects the usage of w at time t. In Section 3, we propose several methods to calculate T_t(w), each varying in the statistical information used to capture w's usage.

Once these time series are constructed, we can quantify the significance of the shift that occurred to the word in its meaning and usage. Sudden increases or decreases in the time series are indicative of shifts in the word usage. Specifically, we pose the following questions:

1. How statistically significant is the shift in usage of a word w across time (in T(w))?
2. Given that a word has shifted, at what point in time did the change happen?
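To make this setup concrete, here is a minimal Python sketch of the snapshot and vocabulary construction described above. The document representation (integer timestamps paired with token lists) and the helper names are our own assumptions; the paper does not prescribe an implementation.

```python
from collections import Counter

def build_snapshots(documents, period_length):
    """Group timestamped documents into consecutive snapshots C_t.

    `documents` is assumed to be a list of (timestamp, tokens) pairs;
    both the name and the grouping strategy are illustrative.
    """
    start = min(ts for ts, _ in documents)
    snapshots = {}
    for ts, tokens in documents:
        t = (ts - start) // period_length  # index of the snapshot
        snapshots.setdefault(t, []).append(tokens)
    return [snapshots[t] for t in sorted(snapshots)]

def common_vocabulary(snapshots):
    """Intersect the vocabularies of all snapshots, so the same word
    set V is tracked across time (words that appear or vanish mid-corpus
    are excluded, as the problem definition requires)."""
    vocabs = []
    for docs in snapshots:
        counts = Counter(w for tokens in docs for w in tokens)
        vocabs.append(set(counts))
    return set.intersection(*vocabs)
```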

3. TIME SERIES CONSTRUCTION

Constructing the time series is the first step in quantifying the significance of word change. Different approaches capture various aspects of a word's semantic, syntactic and usage patterns. In this section, we describe three approaches (Frequency, Syntactic, and Distributional) to building a time series, each of which captures different aspects of word evolution across time. The choice of time series significantly influences the types of changes we can detect, a phenomenon which we discuss further in Section 6.

3.1 Frequency Method

The most immediate way to detect sequences of discrete events is through their change in frequency. Frequency-based methods are therefore quite popular, and include tools like Google Trends and the Google Books Ngram Corpus, both of which are used in research to predict economic and public health changes [7, 9]. Such analysis depends on keyword search over indexed corpora.

Frequency-based methods can capture linguistic shift, as changes in frequency can correspond to words acquiring or losing senses. Although crude, this method is simple to implement. We track the change in probability of a word appearing over time. We calculate, for each time snapshot corpus C_t, a unigram language model. Specifically, we construct the time series for a word w as follows:

    T_t(w) = \log \frac{\#(w \in C_t)}{|C_t|}    (1)

where \#(w \in C_t) is the number of occurrences of the word w in corpus snapshot C_t. An example of the information we capture by tracking word frequencies over time is shown in Figure 3. Observe the sudden jump in the frequency of the word gay in the late 1980s.

[Figure 3: Frequency usage of the word gay over time; observe the sudden change in frequency in the late 1980s.]

3.2 Syntactic Method

While word frequency based metrics are easy to calculate, they are prone to sampling error introduced by bias in the domain and genre distribution of the corpus. Temporal events and the popularity of specific entities could spike the word usage frequency without a significant shift in its meaning; recall Hurricane in Figure 2a.

Another approach to detect and quantify significant change in word usage involves tracking the syntactic functionality it serves. A word could evolve a new syntactic functionality by acquiring a new part of speech category. For example, apple used to be only a "Noun" describing a fruit, but over time it acquired the new part of speech "Proper Noun" to indicate the new sense describing a technology company (Figure 4). To leverage this syntactic knowledge, we annotate our corpus with part of speech (POS) tags. Then we calculate the probability distribution of part of speech tags Q_t given the word w and time snapshot t as follows: Q_t = \Pr_{X \in \text{POS Tags}}(X \mid w, C_t). We consider the POS tag distribution at t = 0 to be the initial distribution Q_0. To quantify the temporal change between two time snapshot corpora, for a specific word w, we calculate the divergence between the POS distributions in both snapshots. We construct the time series as follows:

    T_t(w) = \mathrm{JSD}(Q_0, Q_t)    (2)

where JSD is the Jensen-Shannon divergence [21].

[Figure 4: Part of speech tag probability distribution of the word apple (stacked area chart). Observe that the "Proper Noun" tag has dramatically increased in the 1980s. The same trend is clear from the time series constructed using Jensen-Shannon divergence (dark blue line).]

Figure 4 shows that the JS divergence (dark blue line) reflects the change in the distribution of the part of speech tags given the word apple. In the 1980s, the "Proper Noun" tag (blue area) increased dramatically due to the rise of Apple Computer Inc., the popular consumer electronics company.
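Both series are straightforward to compute once per-snapshot statistics are available. The sketch below implements Eq. (1) and Eq. (2) with NumPy; the input formats (token Counters per snapshot, POS-tag probability vectors per snapshot) are assumptions for illustration.

```python
import numpy as np

def frequency_series(word, snapshot_counts):
    """Eq. (1): T_t(w) = log(#(w in C_t) / |C_t|).
    `snapshot_counts` is assumed to be a list of Counters, one per C_t.
    The common vocabulary V guarantees the word occurs in every snapshot."""
    series = []
    for counts in snapshot_counts:
        total = sum(counts.values())
        series.append(np.log(counts[word] / total))
    return np.array(series)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def syntactic_series(word_pos_distributions):
    """Eq. (2): T_t(w) = JSD(Q_0, Q_t), where row t of
    `word_pos_distributions` is the POS-tag distribution Q_t of one word."""
    q0 = word_pos_distributions[0]
    return np.array([js_divergence(q0, qt) for qt in word_pos_distributions])
```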
3.3 Distributional Method

Semantic shifts are not restricted to changes in part of speech. For example, consider the word mouse. In the 1970s it acquired a new sense of "computer input device", but did not change its part of speech categorization (since both senses are nouns). To detect such subtle semantic changes, we need to infer deeper cues from the contexts a word is used in.

The distributional hypothesis states that words appearing in similar contexts are semantically similar [13]. Distributional methods learn a semantic space that maps words to a continuous vector space R^d, where d is the dimension of the vector space. Thus, vector representations of words appearing in similar contexts will be close to each other. Recent developments in representation learning (deep learning) [5] have enabled the scalable learning of such models. We use a variation of these models [28] to learn word vector representations (word embeddings) that we track across time.

Specifically, we seek to learn a temporal word embedding φ_t : V, C_t → R^d. Once we learn a representation of a specific word for each time snapshot corpus, we track the changes of the representation across the embedding space to quantify the meaning shift of the word (as shown in Figure 1).

In this section we present our distributional approach in detail. Specifically, we discuss the learning of word embeddings, the aligning of embedding spaces across different time snapshots to a joint embedding space, and the utilization of a word's displacement through this semantic space to construct a distributional time series.

3.3.1 Learning Embeddings

Given a time snapshot C_t of the corpus, our goal is to learn φ_t over V using neural language models. At the beginning of the training process, the word vector representations are randomly initialized. The training objective is to maximize the probability of the words appearing in the context of word w_i. Specifically, given the vector representation w_i of a word w_i (w_i = φ_t(w_i)), we seek to maximize the probability of w_j through the following equation:

    \Pr(w_j \mid w_i) = \frac{\exp(w_j^T w_i)}{\sum_{w_k \in V} \exp(w_k^T w_i)}    (3)

In a single epoch, we iterate over each word occurrence in the time snapshot C_t to minimize the negative log-likelihood J of the context words. Context words are the words appearing to the left or right of w_i within a window of size m. Thus J can be written as:

    J = -\sum_{w_i \in C_t} \; \sum_{\substack{j = i-m \\ j \neq i}}^{i+m} \log \Pr(w_j \mid w_i)    (4)

Notice that the normalization factor that appears in Eq. (3) is not feasible to calculate if |V| is too large. To approximate this probability, we map the problem from a classification of 1-out-of-V words to a hierarchical classification problem [30, 31]. This reduces the cost of calculating the normalization factor from O(|V|) to O(log |V|). We optimize the model parameters using stochastic gradient descent [6], as follows:

    \phi_t(w_i) \leftarrow \phi_t(w_i) - \alpha \frac{\partial J}{\partial \phi_t(w_i)}    (5)

where α is the learning rate. We calculate the derivatives of the model using the back-propagation algorithm [34]. We use the following measure of training convergence:

    \rho = \frac{1}{|V|} \sum_{w \in V} \frac{\phi_k(w)^T \phi_{k+1}(w)}{\|\phi_k(w)\|_2 \, \|\phi_{k+1}(w)\|_2}    (6)

where φ_k is the model parameters after epoch k. We calculate ρ after each epoch and stop the training if ρ ≥ 1.0 − 10^{-4}. After training stops, we normalize word embeddings by their L2 norm, which forces all words to be represented by unit vectors.

In our experiments, we use the gensim implementation of skipgram models (https://github.com/piskvorky/gensim). We set the context window size m to 10 unless otherwise stated. We choose the size of the word embedding space dimension d to be 200. To speed up the training, we subsample the frequent words by the ratio 10^{-5} [27].
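As a rough sketch of this training setup, the code below drives gensim's skipgram implementation with the hyperparameters stated above (gensim 4.x argument names; the fixed epoch count is a simplification standing in for the ρ-based stopping rule):

```python
import numpy as np
from gensim.models import Word2Vec  # gensim 4.x argument names

def train_snapshot_embeddings(snapshot_sentences, d=200, m=10):
    """Train one skipgram model per time snapshot C_t.

    `snapshot_sentences[t]` is assumed to be a list of tokenized
    sentences for snapshot t. Hyperparameters follow the paper:
    d = 200 dimensions, window m = 10, hierarchical softmax,
    frequent-word subsampling at 1e-5.
    """
    embeddings = []
    for sentences in snapshot_sentences:
        model = Word2Vec(
            sentences,
            vector_size=d, window=m,
            sg=1,               # skipgram objective
            hs=1, negative=0,   # hierarchical softmax: O(log |V|) updates
            sample=1e-5,        # subsample frequent words
            epochs=5,           # stands in for the rho convergence test
        )
        # Normalize to unit length, as the paper does after training.
        phi = {w: model.wv[w] / np.linalg.norm(model.wv[w])
               for w in model.wv.index_to_key}
        embeddings.append(phi)
    return embeddings
```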

3.3.2 Aligning Embeddings

Having trained temporal word embeddings for each time snapshot C_t, we must now align the embeddings so that all of them are in one unified coordinate system. This enables us to characterize the change between them. This process is complicated by the stochastic nature of our training, which implies that models trained on exactly the same data could produce vector spaces where words have the same nearest neighbors but not the same coordinates. The alignment problem is exacerbated by actual changes in the distributional nature of words in each snapshot.

To aid the alignment process, we make two simplifying assumptions: First, we assume that the spaces are equivalent under a linear transformation. Second, we assume that the meaning of most words did not shift over time, and therefore, their local structure is preserved. Based on these assumptions, observe that when the alignment model fails to align a word properly, it is possibly indicative of a linguistic shift.

Specifically, we define the set of k nearest words in the embedding space φ_t to a word w to be k-NN(φ_t(w)). We seek to learn a linear transformation W_{t' \mapsto t}(w) ∈ R^{d×d} that maps a word from φ_{t'} to φ_t by solving the following optimization:

    W_{t' \mapsto t}(w) = \operatorname*{argmin}_{W} \sum_{w_i \in k\text{-NN}(\phi_{t'}(w))} \|\phi_{t'}(w_i)\, W - \phi_t(w_i)\|_2^2    (7)

which is equivalent to a piecewise linear regression model.
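Eq. (7) is an ordinary least-squares problem, one small regression per word. A minimal sketch, assuming embeddings are stored as word-to-unit-vector dicts and leaving the neighbor count k as a free parameter (the paper does not fix it):

```python
import numpy as np

def align_word(word, phi_src, phi_tgt, k=20):
    """Eq. (7): learn a per-word linear map W from phi_src's space to
    phi_tgt's space using the k nearest neighbors of `word` in the
    source space. `phi_src` / `phi_tgt` map words to unit vectors."""
    anchor = phi_src[word]
    shared = [w for w in phi_src if w in phi_tgt]
    # Vectors are unit-normalized, so the dot product is the cosine.
    neighbors = sorted(shared, key=lambda w: -anchor.dot(phi_src[w]))[:k]
    X = np.stack([phi_src[w] for w in neighbors])  # k x d
    Y = np.stack([phi_tgt[w] for w in neighbors])  # k x d
    # Least-squares solution of X W ~= Y: one small regression per word
    # ("piecewise linear regression").
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W  # d x d
```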
3.3.3 Time Series Construction

To track the shift of word position across time, we align all embedding spaces to the embedding space of the final time snapshot φ_n using the linear mapping (Eq. 7). This unification of coordinate systems allows us to compare the relative displacements that occurred to words across different time periods.

To capture linguistic shift, we construct our distributional time series by calculating the distance in the embedding space between φ_t(w)W_{t \mapsto n}(w) and φ_0(w)W_{0 \mapsto n}(w) as:

    T_t(w) = 1 - \frac{(\phi_t(w) W_{t \mapsto n}(w))^T (\phi_0(w) W_{0 \mapsto n}(w))}{\|\phi_t(w) W_{t \mapsto n}(w)\|_2 \, \|\phi_0(w) W_{0 \mapsto n}(w)\|_2}    (8)

[Figure 5: Distributional time series for the word tape over time using word embeddings. Observe the change of behavior starting in the 1950s, which is quite apparent by the 1970s.]

Figure 5 shows the time series obtained using word embeddings for tape, which underwent a semantic change in the 1950s with the introduction of magnetic tape recorders. As such recorders grew in popularity, the change becomes more pronounced, until it is quite apparent by the 1970s.
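Combining training and alignment, Eq. (8) can be sketched as follows. This reuses the hypothetical align_word helper from the previous sketch and aligns every snapshot to the final space φ_n:

```python
import numpy as np

def distributional_series(word, embeddings):
    """Eq. (8): cosine-distance displacement of `word` over time, with
    every snapshot aligned to the final space phi_n. `embeddings` is
    the list of per-snapshot dicts from train_snapshot_embeddings."""
    phi_n = embeddings[-1]

    def mapped(t):
        # phi_t(w) W_{t -> n}(w): the word's vector carried into phi_n.
        W = align_word(word, embeddings[t], phi_n)
        return embeddings[t][word] @ W

    base = mapped(0)  # phi_0(w) W_{0 -> n}(w)
    series = []
    for t in range(len(embeddings)):
        v = mapped(t)
        cos = v @ base / (np.linalg.norm(v) * np.linalg.norm(base))
        series.append(1.0 - cos)
    return np.array(series)
```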

4. CHANGE POINT DETECTION

Given a time series of a word T(w), constructed using one of the methods discussed in Section 3, we seek to determine whether the word changed significantly, and if so, estimate the change point. We believe a formulation in terms of change point detection is appropriate because even if a word might change its meaning (usage) gradually over time, we expect a time period where the new usage suddenly dominates (tips over) the previous usage, akin to a phase transition, with the word gay serving as an excellent example.

There exists an extensive body of work on change point detection in time series [1, 3, 38]. Our approach models the time series based on the Mean Shift model described in [38]. First, our method recognizes that language exhibits a general stochastic drift. We account for this by first normalizing the time series for each word. Our method then attempts to detect a shift in the mean of the time series using a variant of mean shift algorithms for change point analysis. We outline our method in Algorithm 1 and describe it below. We also illustrate key aspects of the method in Figure 6.

[Figure 6: Our change point detection algorithm. In Step 1, we normalize the given time series T(w) to produce Z(w). Next, we shuffle the time series points producing the set π(Z(w)) (Step 2). Then, we apply the mean shift transformation (K) to both the original normalized time series Z(w) and the permuted set (Step 3). In Step 4, we calculate the probability distribution of the mean shifts possible given a specific time (t = 1985) over the bootstrapped samples. Finally, we compare the observed value in K(Z(w)) to the probability distribution of possible values to calculate the p-value which determines the statistical significance of the observed time series shift (Step 5).]

Algorithm 1: Change Point Detection (T(w), B, γ)
Input: T(w): time series for the word w; B: number of bootstrap samples; γ: Z-score threshold
Output: ECP: estimated change point; p-value: significance score
// Preprocessing
1:  Z(w) ← Normalize T(w)
2:  Compute mean shift series K(Z(w))
// Bootstrapping
3:  BS ← {} (bootstrapped samples)
4:  repeat
5:      Draw P from π(Z(w))
6:      BS ← BS ∪ P
7:  until |BS| = B
8:  for i = 1, ..., n do
9:      p-value(w, i) ← (1/B) Σ_{P ∈ BS} 1[K_i(P) ≥ K_i(Z(w))]
10: end for
// Change Point Detection
11: C ← {j : j ∈ [1, n] and Z_j(w) ≥ γ}
12: p-value ← min_{j ∈ C} p-value(w, j)
13: ECP ← argmin_{j ∈ C} p-value(w, j)
14: return p-value, ECP

Given a time series of a word T(w), we first normalize the time series. We calculate the mean µ_i = (1/|V|) Σ_{w ∈ V} T_i(w) and the variance Var_i = (1/|V|) Σ_{w ∈ V} (T_i(w) − µ_i)^2 across all words. Then, we transform T(w) into a Z-score series using:

    Z_i(w) = \frac{T_i(w) - \mu_i}{\sqrt{Var_i}}    (9)

where Z_i(w) is the Z-score of the time series for the word w at time snapshot i.

We model the time series Z(w) by a mean shift model [38]. Let S = (Z_1(w), Z_2(w), ..., Z_n(w)) represent the time series. We model S as the output of a stochastic process where each S_i can be described as S_i = µ_i + ε_i, where µ_i is the mean and ε_i is the random error at time i. We also assume that the errors ε_i are independent with mean 0. Generally µ_i = µ_{i−1}, except for a few points which are change points.

Based on the above model, we define the mean shift of a general time series S, pivoted at time point j, as follows:

    K_j(S) = \frac{1}{l - j} \sum_{k=j+1}^{l} S_k - \frac{1}{j} \sum_{k=1}^{j} S_k    (10)

where l is the length of the series. This corresponds to calculating the shift in mean between the two parts of the time series pivoted at time point j. Change points can thus be identified by detecting significant shifts in the mean. (This is similar to the CUSUM-based approach for detecting change points, which is also based on a mean shift model.)
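Equations (9) and (10) translate directly into NumPy. In the sketch below, the words-by-snapshots matrix layout is our own assumption:

```python
import numpy as np

def normalize_series(T):
    """Eq. (9): Z-score every word's series against all words.
    T is assumed to be an (n_words x n_times) array; each column is
    standardized by the cross-word mean and variance at that snapshot."""
    mu = T.mean(axis=0)    # mu_i over all words
    var = T.var(axis=0)    # Var_i over all words
    return (T - mu) / np.sqrt(var)

def mean_shift_series(S):
    """Eq. (10): K_j(S), the mean after pivot j minus the mean before
    it, for pivots j = 1 .. l-1 (entry j-1 stores the pivot-j value)."""
    S = np.asarray(S, dtype=float)
    l = len(S)
    return np.array([S[j:].mean() - S[:j].mean() for j in range(1, l)])
```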
Given a normalized time series Z(w), we then compute the mean shift series K(Z(w)) (Line 2). To estimate the statistical significance of observing a mean shift at time point j, we use bootstrapping [12] (see Figure 6 and Lines 3-10) under the null hypothesis that there is no change in the mean. In particular, we establish statistical significance by first obtaining B (typically B = 1000) bootstrap samples obtained by permuting Z(w) (Lines 3-10). Second, for each bootstrap sample P, we calculate K(P) to yield its corresponding bootstrap statistic, and we estimate the statistical significance (p-value) of observing the mean shift at time i compared to the null distribution (Lines 8-10). Finally, we estimate the change point by considering the time point j with the minimum p-value score (described in [38]).

While this method does detect significant changes in the mean of the time series, observe that it does not account for the magnitude of the change in terms of Z-scores. We extend this approach to obtain words that changed significantly compared to other words by considering only those time points where the Z-score exceeds a user-defined threshold γ (we typically set γ to 1.75). We then estimate the change point as the time point with the minimum p-value exactly as outlined before (Lines 11-14).
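Putting the pieces together, here is a sketch of Algorithm 1 as a whole. It reuses mean_shift_series from the previous sketch; the exact tie between Z-score indices and mean shift pivots is our own simplification:

```python
import numpy as np

def change_point(Z, B=1000, gamma=1.75, seed=0):
    """Algorithm 1 sketch: bootstrap p-values for the observed mean
    shifts under the null hypothesis of no change (random permutations
    of Z), restricted to points whose Z-score exceeds gamma.
    Z is the normalized time series of a single word."""
    rng = np.random.default_rng(seed)
    K_obs = mean_shift_series(Z)  # K_obs[j-1] is the shift pivoted at j
    # Null distribution: mean shift series of B permuted copies of Z.
    K_boot = np.stack([mean_shift_series(rng.permutation(Z))
                       for _ in range(B)])
    # p-value(j): fraction of bootstrap shifts >= the observed shift.
    pvals = (K_boot >= K_obs).mean(axis=0)
    # Candidate change points must exceed the Z-score threshold gamma.
    candidates = [j for j in range(1, len(Z)) if Z[j] >= gamma]
    if not candidates:
        return None, 1.0
    ecp = min(candidates, key=lambda j: pvals[j - 1])
    return ecp, pvals[ecp - 1]  # estimated change point, p-value
```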

5. DATASETS

Here we report the details of the three datasets that we consider: years of micro-blogging from Twitter, a decade of movie reviews from Amazon, and a century of written books using the Google Books Ngram Corpus. Table 1 shows a summary of the three datasets, spanning different modes of expression on the Internet: books, an online forum and a micro-blog.

Table 1: Summary of our datasets

    Dataset         Span (years)   Period    # words       |V|    # documents   Domain
    Google Ngrams   105            5 years   10^9          50K    7.5 x 10^8    Books
    Amazon          12             1 year    9.9 x 10^8    50K    8 x 10^6      Movie Reviews
    Twitter         2              1 month   10^9          100K   10^8          Micro-blogging

The Google Books Ngram Corpus. The Google Books Ngram Corpus project enables the analysis of cultural, social and linguistic trends. It contains the frequency of short phrases of text (ngrams) that were extracted from books written in eight languages over five centuries [25]. These ngrams vary in size from 1 to 5 grams. We use the 5-gram phrases, which restrict our context window size m to 5. The 5-grams include phrases like 'thousand pounds less then nothing' and 'to communicate to each other'. We focus on the time span from 1900 to 2005, and set the time snapshot period to 5 years (21 points). We obtain the POS distribution of each word in the above time range by using the Google Syntactic Ngrams dataset [14, 22, 23].

Amazon Movie Reviews. The Amazon Movie Reviews dataset consists of movie reviews from Amazon. This data spans August 1997 to October 2012 (13 time points), including all 8 million reviews. However, we consider the time period starting from 2000, as the number of reviews from earlier years is considerably small. Each review includes product and user information, ratings, and a plain-text review. The reviews describe a user's opinion of a movie, for example: 'This movie has it all. Drama, action, amazing battle scenes - the best I've ever seen. It's definitely a must see.'

Twitter Data. This dataset consists of a sample that spans 24 months, starting from September 2011 to October 2013. Each tweet includes the tweet ID, the tweet text, and the geo-location if available. A tweet is a status message with up to 140 characters: 'I hope sandy doesn't rip the roof off the pool while we're swimming.'

6. EXPERIMENTS

In this section, we apply our methods to each dataset presented in Section 5 and identify words that have changed usage over time. We describe the results of our experiments below. The code used for running these experiments is available at the first author's website.

6.1 Time Series Analysis

As we shall see in Section 6.4.1, our proposed time series construction methods differ in performance. Here, we use the detected words to study the behavior of our construction methods.

Table 2 shows the time series constructed for a sample of words with their corresponding p-value time series, displayed in the last column. A dip in the p-value is indicative of a shift in the word usage. The first three words, transmitted, bitch, and sex, are detected by both the Frequency and Distributional methods. Table 3 shows the previous and current senses of these words, demonstrating the changes in usage they have gone through.

Observe that words like her and desk did not change significantly in meaning; however, the Frequency method detects a change. The sharp increase in the frequency of the word her around the 1960s could be attributed to the concurrent rise and popularity of the feminist movement.
Sudden temporary popularity of specific social and political events could lead the Frequency method to produce many false positives. These results confirm the intuition we illustrated in Figure 2. While frequency analysis (like Google Trends) is an extremely useful tool to visualize trends, it is not very well suited for the task of detecting linguistic shift.

The last two rows in Table 2 display two words (apple and diet) that the Syntactic method detected. The word apple was detected uniquely by the Syntactic method, as its most frequent part of speech tag changed significantly from "Noun" to "Proper Noun". While both the Syntactic and Distributional methods indicate the change in meaning of the word diet, only the Distributional method detects the right point of change (as shown in Table 3). The Syntactic method appears to have a low false positive rate, but suffers from a high false negative rate, given that only two words in the table were detected. Furthermore, observe that the Syntactic method relies on good linguistic taggers. However, linguistic taggers require annotated data sets and also do not work well across domains.

We find that the Distributional method offers a good balance between false positives and false negatives, while requiring no linguistic resources of any sort. Having analyzed the words detected by different time series, we turn our attention to the analysis of estimated change points.

6.2 Historical Analysis

We have demonstrated that our methods are able to detect words that shifted in meaning. We seek to identify the inflection points in time where the new senses are introduced. Moreover, we are interested in understanding how the newly acquired senses differ from the previous ones.

Table 3 shows sample words that are detected by the Syntactic and Distributional methods. The first set represents words which the Distributional method detected (Distributional better), while the second set shows sample words which the Syntactic method detected (Syntactic better).

Our Distributional method estimates that the word tape changed in the early 1970s to mean a "cassette tape" and not only an "adhesive tape".

