Learning Latent Personas Of Film Characters


David Bamman  Brendan O'Connor  Noah A. Smith
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213, USA
{dbamman,brenocon,nasmith}@cs.cmu.edu

Abstract

We present two latent variable models for learning character types, or personas, in film, in which a persona is defined as a set of mixtures over latent lexical classes. These lexical classes capture the stereotypical actions of which a character is the agent and patient, as well as attributes by which they are described. As the first attempt to solve this problem explicitly, we also present a new dataset for the text-driven analysis of film, along with a benchmark testbed to help drive future work in this area.

1 Introduction

Philosophers and dramatists have long argued whether the most important element of narrative is plot or character. Under a classical Aristotelian perspective, plot is supreme;¹ modern theoretical dramatists and screenwriters disagree.²

¹ "Dramatic action ... is not with a view to the representation of character: character comes in as subsidiary to the actions ... The Plot, then, is the first principle, and, as it were, the soul of a tragedy: Character holds the second place." Poetics I.VI (Aristotle, 335 BCE).
² "Aristotle was mistaken in his time, and our scholars are mistaken today when they accept his rulings concerning character. Character was a great factor in Aristotle's time, and no fine play ever was or ever will be written without it" (Egri, 1946, p. 94); "What the reader wants is fascinating, complex characters" (McKee, 1997, p. 100).

Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 352–361, Sofia, Bulgaria, August 4–9 2013. © 2013 Association for Computational Linguistics

Without addressing this debate directly, much computational work on narrative has focused on learning the sequence of events by which a story is defined; in this tradition we might situate seminal work on learning procedural scripts (Schank and Abelson, 1977; Regneri et al., 2010), narrative chains (Chambers and Jurafsky, 2008), and plot structure (Finlayson, 2011; Elsner, 2012; McIntyre and Lapata, 2010; Goyal et al., 2010).

We present a complementary perspective that addresses the importance of character in defining a story. Our testbed is film. Under this perspective, a character's latent internal nature drives the action we observe. Articulating narrative in this way leads to a natural generative story: we first decide that we're going to make a particular kind of movie (e.g., a romantic comedy), then decide on a set of character types, or personas, we want to see involved (the PROTAGONIST, the LOVE INTEREST, the BEST FRIEND). After picking this set, we fill out each of these roles with specific attributes (female, 28 years old, klutzy); with this cast of characters, we then sketch out the set of events by which they interact with the world and with each other (runs but just misses the train, spills coffee on her boss) – through which they reveal to the viewer those inherent qualities about themselves.

This work is inspired by past approaches that infer typed semantic arguments along with narrative schemas (Chambers and Jurafsky, 2009; Regneri et al., 2011), but seeks a more holistic view of character, one that learns from stereotypical attributes in addition to plot events. This work also naturally draws on earlier work on the unsupervised learning of verbal arguments and semantic roles (Pereira et al., 1993; Grenager and Manning, 2006; Titov and Klementiev, 2012) and unsupervised relation discovery (Yao et al., 2011).

This character-centric perspective leads to two natural questions. First, can we learn what those standard personas are by how individual characters (who instantiate those types) are portrayed? Second, can we learn the set of attributes and actions by which we recognize those common types? How do we, as viewers, recognize a VILLAIN? At its most extreme, this perspective reduces to learning the grand archetypes of Joseph Campbell (1949) or Carl Jung (1981), such as the HERO or TRICKSTER. We seek, however, a more fine-grained set that includes not only archetypes, but stereotypes as well – characters defined by a fixed set of actions widely known to be representative of

a class. This work offers a data-driven method for answering these questions, presenting two probabilistic generative models for inferring latent character types. This is the first work that attempts to learn explicit character personas in detail; as such, we present a new dataset for character type induction in film and a benchmark testbed for evaluating future work.³

2 Data

2.1 Text

Our primary source of data comes from 42,306 movie plot summaries extracted from the November 2, 2012 dump of English-language Wikipedia.⁴ These summaries, which have a median length of approximately 176 words,⁵ contain a concise synopsis of the movie's events, along with implicit descriptions of the characters (e.g., "rebel leader Princess Leia," "evil lord Darth Vader"). To extract structure from this data, we use the Stanford CoreNLP library⁶ to tag and syntactically parse the text, extract entities, and resolve coreference within the document. With this structured representation, we extract linguistic features for each character, looking at immediate verb governors and attribute syntactic dependencies to all of the entity's mention headwords, extracted from the typed dependency tuples produced by the parser; we refer to the "CCprocessed" syntactic relations described in de Marneffe and Manning (2008):

- Agent verbs. Verbs for which the entity is an agent argument (nsubj or agent).
- Patient verbs. Verbs for which the entity is the patient, theme or other argument (dobj, nsubjpass, iobj, or any prepositional argument prep_*).
- Attributes. Adjectives and common noun words that relate to the mention as adjectival modifiers, noun-noun compounds, appositives, or copulas (nsubj or appos governors, or nsubj, appos, amod, nn dependents of an entity mention).

These three roles capture three different ways in which character personas are revealed: the actions they take on others, the actions done to them, and the attributes by which they are described. For every character we thus extract a bag of (r, w) tuples, where w is the word lemma and r is one of {agent verb, patient verb, attribute} as identified by the above rules.

2.2 Metadata

Our second source of information consists of character and movie metadata drawn from the November 4, 2012 dump of Freebase.⁷ At the movie level, this includes data on the language, country, release date and detailed genre (365 non-mutually exclusive categories, including "Epic Western," "Revenge," and "Hip Hop Movies"). Many of the characters in movies are also associated with the actors who play them; since many actors also have detailed biographical information, we can ground the characters in what we know of those real people – including their gender and estimated age at the time of the movie's release (the difference between the release date of the movie and the actor's date of birth).

Across all 42,306 movies, entities average 3.4 agent events, 2.0 patient events, and 2.1 attributes. For all experiments described below, we restrict our dataset to only those events that are among the 1,000 most frequent overall, and only characters with at least 3 events. 120,345 characters meet this criterion; of these, 33,559 can be matched to Freebase actors with a specified gender, and 29,802 can be matched to actors with a given date of birth. Of all actors in the Freebase data whose age is given, the average age at the time of the movie is 37.9 (standard deviation 14.1); of all actors whose gender is known, 66.7% are male.⁸ The age distribution is strongly bimodal when conditioning on gender: the average age of a female actress at the time of a movie's release is 33.0 (s.d. 13.4), while that of a male actor is 40.5 (s.d. 13.7).

³ All datasets and software for replication can be found online.
⁴ http://dumps.wikimedia.org/enwiki/
⁵ More popular movies naturally attract more attention on Wikipedia and hence more detail: the top 1,000 movies by box office revenue have a median length of 715 words.
⁸ Whether this extreme 2:1 male/female ratio reflects an inherent bias in film or a bias in attention on Freebase (or Wikipedia, on which it draws) is an interesting research question in itself.
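The dependency-to-role mapping described in §2.1 can be sketched as follows. This is a hypothetical simplification for illustration only: it looks at the relation label alone, omitting edge direction and the copula/governor cases, and is not the authors' actual CoreNLP-based pipeline.

```python
# Hypothetical sketch of the Section 2.1 tuple extraction: map a typed
# dependency relation touching a character mention to one of the three
# roles. The paper's real pipeline uses full Stanford CoreNLP parses
# with coreference; direction-sensitive attribute cases are omitted.

AGENT_RELS = {"nsubj", "agent"}
PATIENT_RELS = {"dobj", "nsubjpass", "iobj"}       # plus prep_* arguments
ATTRIBUTE_RELS = {"amod", "appos", "nn"}

def tuple_role(relation):
    """Return the role r of an (r, w) tuple, or None if the edge is unused."""
    if relation in AGENT_RELS:
        return "agent_verb"
    if relation in PATIENT_RELS or relation.startswith("prep_"):
        return "patient_verb"
    if relation in ATTRIBUTE_RELS:
        return "attribute"
    return None
```

For example, in "Vader strangles an officer," the nsubj edge from strangles to the Vader mention would yield the tuple (agent_verb, strangle).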

3 Personas

One way we recognize a character's latent type is by observing the stereotypical actions they perform (e.g., VILLAINS strangle), the actions done to them (e.g., VILLAINS are foiled and arrested) and the words by which they are described (VILLAINS are evil). To capture this intuition, we define a persona as a set of three typed distributions: one for the words for which the character is the agent, one for which it is the patient, and one for words by which the character is attributively modified. Each distribution ranges over a fixed set of latent word classes, or topics. Figure 1 illustrates this definition for a toy example: a ZOMBIE persona may be characterized as being the agent of primarily eating and killing actions, the patient of killing actions, and the object of dead attributes. The topic labeled eat may include words like eat, drink, and devour.

Figure 1: A persona is a set of three distributions over latent topics. In this toy example, the ZOMBIE persona is primarily characterized by being the agent of words from the eat and kill topics, the patient of kill words, and the object of words from the dead topic.

Figure 2: Above: Dirichlet persona model (left) and persona regression model (right). Bottom: Definition of variables:

P — Number of personas (hyperparameter)
K — Number of word topics (hyperparameter)
D — Number of movie plot summaries
E — Number of characters in movie d
W — Number of (role, word) tuples used by character e
φ_k — Topic k's distribution over V words
r — Tuple role: agent verb, patient verb, attribute
ψ_{p,r} — Distribution over topics for persona p in role r
θ_d — Movie d's distribution over personas
p_e — Character e's persona (integer, p ∈ {1..P})
j — A specific (r, w) tuple in the data
z_j — Word topic for tuple j
w_j — Word for tuple j
α — Concentration parameter for Dirichlet model
β — Feature weights for regression model
μ, σ² — Gaussian mean and variance (for regularizing β)
m_d — Movie features (from movie metadata)
m_e — Entity features (from movie actor metadata)
ν, γ — Dirichlet concentration parameters

4 Models

Both models that we present here simultaneously learn three things: 1.) a soft clustering over words to topics (e.g., the verb "strangle" is mostly a type of Assault word); 2.) a soft clustering over topics to personas (e.g., VILLAINS perform a lot of Assault actions); and 3.) a hard clustering over characters to personas (e.g., Darth Vader is a VILLAIN). They each use different evidence: since our data includes not only textual features (in the form of actions and attributes of the characters) but also non-textual information (such as movie genre, age and gender), we design a model that exploits this additional source of information in discriminating between character types; since this extra-linguistic information may not always be available, we also design a model that learns only from the text itself. We present the text-only model first for simplicity. Throughout, V is the word vocabulary size, P is the number of personas, and K is the number of topics.

4.1 Dirichlet Persona Model

In the most basic model, we only use information from the structured text, which comes as a bag of (r, w) tuples for each character in a movie, where w is the word lemma and r is the relation of the word with respect to the character (one of agent verb, patient verb or attribute, as outlined in §2.1 above). The generative story runs as follows. First, let there be K latent word topics; as in LDA (Blei et al., 2003), these are words that will be soft-clustered together by virtue of appearing in similar contexts.

Each latent word cluster φ_k ∼ Dir(γ) is a multinomial over the V words in the vocabulary, drawn from a Dirichlet parameterized by γ. Next, let a persona p be defined as a set of three multinomials ψ_p over these K topics, one for each typed role r, each drawn from a Dirichlet with a role-specific hyperparameter (ν_r).

Every document (a movie plot summary) contains a set of characters, each of which is associated with a single latent persona p; for every observed (r, w) tuple associated with the character, we sample a latent topic k from the role-specific ψ_{p,r}. Conditioned on this topic assignment, the observed word is drawn from φ_k. The distribution of these personas for a given document is determined by a document-specific multinomial θ, drawn from a Dirichlet parameterized by α.

Figure 2 (above left) illustrates the form of the model. To simplify inference, we collapse out the persona-topic distributions ψ, the topic-word distributions φ and the persona distribution θ for each document. Inference on the remaining latent variables – the persona p for each character type and the topic z for each word associated with that character – is conducted via collapsed Gibbs sampling (Griffiths and Steyvers, 2004); at each iteration, for each character e, we sample their persona p_e:

P(p_e = k | p^{−e}, z, α, ν) ∝ (c^{−e}_{d,k} + α_k) · ∏_j (c^{−e}_{r_j,k,z_j} + ν_{r_j}) / (c^{−e}_{r_j,k,∗} + K ν_{r_j})    (1)

Here, c^{−e}_{d,k} is the count of all characters in document d whose current persona sample is also k (not counting the current character e under consideration);⁹ j ranges over all (r_j, w_j) tuples associated with character e. Each c^{−e}_{r_j,k,z_j} is the count of all tuples with role r_j and current topic z_j used with persona k. c^{−e}_{r_j,k,∗} is the same count, summing over all topics z. In other words, the probability that character e embodies persona k is proportional to the number of other characters in the plot summary who also embody that persona (plus the Dirichlet hyperparameter α_k) times the contribution of each observed word w_j for that character, given its current topic assignment z_j.

Once all personas have been sampled, we sample the latent topics for each tuple as the following:

P(z_j = k | p, z^{−j}, w, r, ν, γ) ∝ (c^{−j}_{r_j,p,k} + ν_{r_j}) / (c^{−j}_{r_j,p,∗} + K ν_{r_j}) · (c^{−j}_{k,w_j} + γ) / (c^{−j}_{k,∗} + V γ)    (2)

Here, conditioned on the current sample p for the character's persona, the probability that tuple j originates in topic k is proportional to the number of other tuples with that same role r_j drawn from the same topic for that persona (c^{−j}_{r_j,p,k}), normalized by the number of other r_j tuples associated with that persona overall (c^{−j}_{r_j,p,∗}), multiplied by the number of times word w_j is associated with that topic (c^{−j}_{k,w_j}) normalized by the total number of other words associated with that topic overall (c^{−j}_{k,∗}).

We optimize the values of the Dirichlet hyperparameters α, ν and γ using slice sampling with a uniform prior every 20 iterations for the first 500 iterations, and every 100 iterations thereafter. After a burn-in phase of 10,000 iterations, we collect samples every 10 iterations (to lessen autocorrelation) until a total of 100 have been collected.

4.2 Persona Regression

To incorporate observed metadata in the form of movie genre, character age and character gender, we adopt an "upstream" modeling approach (Mimno and McCallum, 2008), letting those observed features influence the conditional probability with which a given character is expected to assume a particular persona, prior to observing any of their actions. This captures the increased likelihood, for example, that a 25-year-old male actor in an action movie will play an ACTION HERO than he will play a VALLEY GIRL.

To capture these effects, each character's latent persona is no longer drawn from a document-specific Dirichlet; instead, the P-dimensional simplex is the output of a multiclass logistic regression, where the document genre metadata m_d and the character age and gender metadata m_e together form a feature vector that combines with persona-specific feature weights to form the following log-linear distribution over personas, with the probability for persona k being:

P(p = k | m_d, m_e, β) = exp([m_d; m_e]ᵀ β_k) / (1 + Σ_{j=1}^{P−1} exp([m_d; m_e]ᵀ β_j))    (3)

The persona-specific β coefficients are learned through Monte Carlo Expectation Maximization.

⁹ The −e superscript denotes counts taken without considering the current sample for character e.
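A single collapsed Gibbs update for a character's persona (equation 1) can be sketched as follows. This is a hypothetical implementation for illustration, not the authors' code; swapping the first (prior) term for exp([m_d; m_e]·β_k) gives the persona-regression update of equation 4.

```python
import numpy as np

def sample_persona(doc_counts, role_counts, char_tuples, alpha, nu, rng):
    """One collapsed Gibbs draw of a character's persona (Eq. 1 sketch).

    doc_counts:  length-P persona counts over the movie's *other* characters
    role_counts: role -> (P x K) array of (role, topic, persona) tuple
                 counts, excluding this character's own tuples
    char_tuples: list of (role, topic) pairs currently assigned to the
                 character's tuples
    """
    logp = np.log(doc_counts + alpha)            # theta_d prior term
    for r, z in char_tuples:
        c = role_counts[r]
        K = c.shape[1]
        # likelihood of topic z under each persona's role-r distribution
        logp += np.log(c[:, z] + nu[r]) - np.log(c.sum(axis=1) + K * nu[r])
    p = np.exp(logp - logp.max())                # normalize in log space
    p /= p.sum()
    return rng.choice(len(p), p=p)
```

Working in log space before normalizing avoids underflow when a character has many tuples.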

Following Wei and Tanner (1990), we alternate between the following two steps:

1. Given current values for β, for all characters e in all plot summaries, sample values of p_e and z_j for all associated tuples.
2. Given input metadata features m and the associated sampled values of p, find the values of β that maximize the standard multiclass logistic regression log likelihood, subject to ℓ2 regularization.

Figure 2 (above right) illustrates this model. As with the Dirichlet persona model, inference on p for step 1 is conducted with collapsed Gibbs sampling; the only difference in the sampling probability from equation 1 is the effect of the prior, which here is deterministically fixed as the output of the regression:

P(p_e = k | p^{−e}, z, ν, m_d, m_e, β) ∝ exp([m_d; m_e]ᵀ β_k) · ∏_j (c^{−e}_{r_j,k,z_j} + ν_{r_j}) / (c^{−e}_{r_j,k,∗} + K ν_{r_j})    (4)

The sampling equation for the topic assignments z is identical to that in equation 2. In practice we optimize β every 1,000 iterations, until a burn-in phase of 10,000 iterations has been reached; at this point we follow the same sampling regime as for the Dirichlet persona model.

5 Evaluation

We evaluate our methods in two quantitative ways by measuring the degree to which we recover two different sets of gold-standard clusterings. This evaluation also helps offer guidance for model selection (in choosing the number of latent topics and personas) by measuring performance on an objective task.

5.1 Character Names

First, we consider all character names that occur in at least two separate movies, generally as a consequence of remakes or sequels; this includes proper names such as "Rocky Balboa," "Oliver Twist," and "Indiana Jones," as well as generic type names such as "Gang Member" and "The Thief"; to minimize ambiguity, we only consider character names consisting of at least two tokens. Each of these names is used by at least two different characters; for example, a character named "Jason Bourne" is portrayed in The Bourne Identity, The Bourne Supremacy, and The Bourne Ultimatum. While these characters are certainly free to assume different roles in different movies, we believe that, in the aggregate, they should tend to embody the same character type and thus prove to be a natural clustering to recover. 970 character names occur at least twice in our data, and 2,666 individual characters use one of those names. Let those 970 character names define 970 unique gold clusters whose members include the individual characters who use that name.

5.2 TV Tropes

As a second external measure of validation, we consider a manually created clustering presented at the website TV Tropes,¹⁰ a wiki that collects user-submitted examples of common tropes (narrative, character and plot devices) found in television, film, and fiction, among other media. While TV Tropes contains a wide range of such conventions, we manually identified a set of 72 tropes that could reasonably be labeled character types, including THE CORRUPT CORPORATE EXECUTIVE, THE HARDBOILED DETECTIVE, THE JERK JOCK, THE KLUTZ and THE SURFER DUDE.

We manually aligned user-submitted examples of characters embodying these 72 character types with the canonical references in Freebase to create a test set of 501 individual characters. While the 72 character tropes represented here are a more subjective measure, we expect to be able to at least partially recover this clustering.

5.3 Variation of Information

To measure the similarity between the two clusterings of movie characters, gold clusters G and induced latent persona clusters C, we calculate the variation of information (Meilă, 2007):

VI(G, C) = H(G) + H(C) − 2 I(G, C)    (5)
         = H(G | C) + H(C | G)    (6)

VI measures the information-theoretic distance between the two clusterings: a lower value means greater similarity, and VI = 0 if they are identical. Low VI indicates that (induced) clusters and (gold) clusters tend to overlap; i.e., knowing a character's (induced) cluster usually tells us their (gold) cluster, and vice versa. Variation of information is a metric (it is symmetric and obeys the triangle inequality) and has a number of other desirable properties.

¹⁰ http://tvtropes.org
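Equations 5–6 translate directly into a few lines of code. A minimal sketch, assuming the two clusterings are given as parallel lists of labels and measuring entropies in bits:

```python
import math
from collections import Counter

def variation_of_information(gold, induced):
    """VI(G, C) = H(G) + H(C) - 2 I(G; C)  (Eq. 5), in bits.
    gold and induced are parallel lists of cluster labels."""
    n = len(gold)
    pg = Counter(gold)                      # gold cluster sizes
    pc = Counter(induced)                   # induced cluster sizes
    joint = Counter(zip(gold, induced))     # co-occurrence counts
    h_g = -sum(c / n * math.log2(c / n) for c in pg.values())
    h_c = -sum(c / n * math.log2(c / n) for c in pc.values())
    mi = sum(c / n * math.log2((c / n) / ((pg[g] / n) * (pc[cl] / n)))
             for (g, cl), c in joint.items())
    return h_g + h_c - 2 * mi
```

Two clusterings that agree up to a relabeling yield VI = 0, and the function is symmetric in its arguments, as the metric property requires.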

Table 1 presents the VI between the learned persona clusters and gold clusters, for varying numbers of personas (P ∈ {25, 50, 100}) and topics (K ∈ {25, 50, 100}). To determine significance with respect to a random baseline, we conduct a permutation test (Fisher, 1935; Pitman, 1937) in which we randomly shuffle the labels of the learned persona clusters and count the number of times in 1,000 such trials that the VI of the observed persona labels is lower than the VI of the permuted labels; this defines a nonparametric p-value. All results presented are significant at p < 0.001 (i.e., in no trial was the permuted VI lower than the observed VI).

[Table 1: Variation of information between learned personas and gold clusters for different numbers of topics K and personas P. Lower values are better. All values are reported in bits.]

Over all tests in comparison to both gold clusterings, we see VI improve as both P and, to a lesser extent, K increase. While this may be expected as the number of personas increases to match the number of distinct types in the gold clusters (970 and 72, respectively), the fact that VI improves as the number of latent topics increases suggests that more fine-grained topics are helpful for capturing nuanced character types.¹¹

The difference between the persona regression model and the Dirichlet persona model here is not significant; while VI allows us to compare models with different numbers of latent clusters, its requirement that clusterings be mutually informative places a high overhead on models that are fundamentally unidirectional (in Table 1, for example, the room for improvement between two models of the same P and K is naturally smaller than the bigger difference between different P or K). While we would naturally prefer a text-only model to be as expressive as a model that requires potentially hard to acquire metadata, we tease apart whether a distinction actually does exist by evaluating the purity of the gold clusters with respect to the labels assigned them.

5.4 Purity

For gold clusters G = {g_1 ... g_k} and inferred clusters C = {c_1 ... c_j} we calculate purity as:

Purity = (1/N) Σ_k max_j |g_k ∩ c_j|    (7)

While purity cannot be used to compare models of different persona size P, it can help us distinguish between models of the same size. A model can attain perfect purity, however, by placing all characters into a single cluster; to control for this, we present a controlled baseline in which each character is assigned a latent character type label proportional to the size of the latent clusters we have learned (so that, for example, if one latent persona cluster contains 3.2% of the total characters, the probability of selecting that persona at random is 3.2%).

[Table 2: Purity scores of recovering gold clusters. Higher values are better. Each absolute purity score is paired with its improvement over a controlled baseline of permuting the learned labels while keeping the cluster proportions the same.]

¹¹ This trend is robust to the choice of cluster metric: here VI and F-score have a correlation of −0.87; as more latent topics and personas are added, clustering improves (causing the F-score to go up and the VI distance to go down).
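Equation 7 and the size-controlled baseline described above can be sketched as follows; the baseline shuffles the learned labels, which preserves cluster proportions exactly as the text prescribes.

```python
import random
from collections import Counter

def purity(gold, induced):
    """Eq. 7: for each induced cluster, count its overlap with its
    best-matching gold cluster; report the total as a fraction of N."""
    n = len(gold)
    by_induced = {}
    for g, c in zip(gold, induced):
        by_induced.setdefault(c, []).append(g)
    return sum(max(Counter(members).values())
               for members in by_induced.values()) / n

def permuted_baseline(gold, induced, trials=1000, seed=0):
    """Controlled baseline: shuffle the induced labels (keeping the
    cluster proportions the same) and average the resulting purity."""
    rng = random.Random(seed)
    labels = list(induced)
    total = 0.0
    for _ in range(trials):
        rng.shuffle(labels)
        total += purity(gold, labels)
    return total / trials
```

Note that assigning every character to one cluster gives purity equal to the largest gold cluster's share, which is why the permuted baseline is needed for a fair comparison.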

Table 2 presents each model's absolute purity score paired with its improvement over its controlled permutation (e.g., +41%). Within each fixed-size partition, the use of metadata yields a substantial improvement over the Dirichlet model, both in terms of absolute purity and in its relative improvement over its size-controlled baseline. In practice, we find that while the Dirichlet model distinguishes between character personas in different movies, the persona regression model helps distinguish between different personas within the same movie.

6 Exploratory Data Analysis

As with other generative approaches, latent persona models enable exploratory data analysis. To illustrate this, we present results from the persona regression model learned above, with 50 latent lexical classes and 100 latent personas. Figure 3 visualizes this data by focusing on a single movie, The Dark Knight (2008); the movie's protagonist, Batman, belongs to the same latent persona as Detective Jim Gordon, as well as other action movie protagonists Jason Bourne and Tony Stark (Iron Man). The movie's antagonist, The Joker, belongs to the same latent persona as Dracula from Van Helsing and Colin Sullivan from The Departed, illustrating the ability of personas to be informed by, but still cut across, different genres.

Figure 3: Dramatis personae of The Dark Knight (2008), illustrating 3 of the 100 character types learned by the persona regression model, along with links from other characters in those latent classes to other movies. Each character type is listed with the top three latent topics with which it is associated.

Table 3 presents an exhaustive list of all 50 topics, along with an assigned label that consists of the single word with the highest PMI for that class. Of note are topics relating to romance (unite, marry, woo, elope, court), commercial transactions (purchase, sign, sell, owe, buy), and the classic criminal schema from Chambers (2011) (sentence, arrest, assign, convict, promote).

Table 4 presents the most frequent 14 personas in our dataset, illustrated with characters from the 500 highest grossing movies. The personas learned are each three separate mixtures of the 50 latent topics (one for agent relations, one for patient relations, and one for attributes), as illustrated in figure 1 above. Rather than presenting a 3 × 50 histogram for each persona, we illustrate them by listing the most characteristic topics, movie characters, and metadata features associated with it. Characteristic actions and features are defined as those having the highest smoothed pointwise mutual information with that class; exemplary characters are those with the highest posterior probability of being drawn from that class. Among the personas learned are canonical male action heroes (exemplified by the protagonists of The Bourne Supremacy, Speed, and Taken), superheroes (Hulk, Batman and Robin, Hector of Troy) and several romantic comedy types, largely characterized by words drawn from the FLIRT topic, including flirt, reconcile, date, dance and forgive.

Table 3 (partial): latent topics, each listed by its most characteristic words.

unite marry woo elope court
purchase sign sell owe buy
shoot aim overpower interrogate kill
explore investigate uncover deduce
woman friend wife sister husband
witch villager kid boy mom
invade sail travel land explore
defeat destroy transform battle inject
chase scare hit punch eat
talk tell reassure assure calm
pop lift crawl laugh shake
sing perform cast produce dance
approve die suffer forbid collapse
werewolf mother parent killer father
diner grandfather brother terrorist
decapitate bite impale strangle stalk
reply say mention answer shout
demon narrator mayor duck crime
congratulate cheer thank recommend
introduce bring mock read hatch
hatch don exist vow undergo
flirt reconcile date dance forgive
adopt raise bear punish feed
fairy kidnapper soul slave president
bug zombie warden king
switch confirm escort report instruct
infatuate obsess acquaint revolve concern
alien child governor bandit priest
capture corner transport imprison trap
maya monster monk goon dragon
inherit live imagine experience share
testify rebuff confess admit deny
apply struggle earn graduate develop
expel inspire humiliate bully grant
dig take welcome sink revolve
command abduct invade seize surrender
relent refuse agree insist hope
embark befriend enlist recall meet
manipulate conclude investigate conduct
elope forget succumb pretend like
flee escape swim hide manage
baby sheriff vampire knight spirit
bind select belong refer represent
rejoin fly recruit include disguise
dark major henchman warrior sergeant
sentence arrest assign convict promote
disturb frighten confuse tease scare

