Resolving Lexical Ambiguity In A Deterministic Parser


RESOLVING LEXICAL AMBIGUITY IN A DETERMINISTIC PARSER

Robert Milne
Intelligent Applications
10 Charlotte Square
Edinburgh EH2 4DR, Scotland

Lexical ambiguity, and especially part-of-speech ambiguity, is the source of much non-determinism in parsing. As a result, the resolution of lexical ambiguity presents deterministic parsing with a major test. If deterministic parsing is to be viable, it must be shown that lexical ambiguity can be resolved easily and deterministically. In this paper, it is shown that Marcus's "diagnostics" can be handled without any mechanisms beyond what is required to parse grammatical sentences and reject ungrammatical sentences. It is also shown that many other classes of ambiguity can be easily resolved as well.

1 INTRODUCTION

Lexical ambiguity, and especially part-of-speech ambiguity, is the source of much non-determinism in parsing. As a result, the resolution of lexical ambiguity presents deterministic parsing (Marcus 1980) with a major test. If deterministic parsing is to be viable, it should be shown that lexical ambiguity can be resolved deterministically for many situations in which people do not have trouble. In this paper, it is shown that Marcus's "diagnostics" can be handled without any mechanisms beyond what is required to parse grammatical sentences and reject ungrammatical sentences, and that many other classes of ambiguity can be easily resolved as well. This result is possible because of the constraints on English from word order and number agreement.

Although many high-level constituents can be "moved" in English, the lower-level structure of some constituents is relatively fixed. For example, after a determiner, one expects a noun rather than a verb. In this paper we also wish to ask, "How might this low-level fixed order assist in the resolution of ambiguity?"
We will not give a definite answer to this question, but will see that it is extremely useful in the resolution of ambiguity.

The examples of ambiguity shown in this paper seem to cause no apparent problems to a person reading them. That is, all of these examples read easily and certainly do not exhibit the garden path effect, except, of course, the examples that are intended to be difficult. If a parser is to be psychologically plausible, then it is desirable that it handle these examples in such a way as to explain why people have no apparent difficulty with most sentences, despite the inherent ambiguity in them.

In parsing English, one of the major causes of non-determinism is part-of-speech ambiguity. If a word can be two parts of speech, then a non-deterministic parser may have to explore both possibilities. If one claims to be able to parse English deterministically, then the resolution of part-of-speech ambiguity is a very important area.

It should be noted that a non-deterministic parser does not need to tackle the problem of local part-of-speech ambiguity. If it should make an error, then it can backtrack and correct it. Alternatively, it could maintain all possible parses at once and throw some of them away. In deterministic parsing we are not allowed to use either backtracking or parallelism. Although this problem has been investigated for many non-deterministic parsers, it has not been the critical problem that it is for deterministic parsing. To handle ambiguity deterministically, we must never make an error. As a result, our methods of disambiguation must be reliable. We will see that many cases of ambiguity can be resolved using standard techniques that have been applied to non-deterministic parsers.

If it is possible to handle all the examples of local ambiguity presented here, with no additional mechanism, device, or feature than is needed for ordinary sentence parsing, then our goal above can be considered met.

Copyright 1986 by the Association for Computational Linguistics. Permission to copy without fee all or part of this material is granted provided that the copies are not made for direct commercial advantage and the CL reference and this copyright notice are included on the first page. To copy otherwise, or to republish, requires a fee and/or specific permission. Computational Linguistics, Volume 12, Number 1, January-March 1986.

One possible explanation for the fact that people do not notice local ambiguities may be that there is no special mechanism needed for them, so that nothing differing from normal parsing is necessary. Conversely, if it is necessary to add special mechanisms and routines to the parser just to handle these examples of ambiguity, then this will not explain how people can understand these examples so well, and it can be considered a weakness in the model.

To say part-of-speech ambiguity can be handled deterministically but with the use of special mechanisms would be no surprise and not very important. To say one can handle part-of-speech ambiguity deterministically with no special mechanisms is a more significant claim. In this paper it is indeed suggested that many cases of part-of-speech ambiguity can be handled by the parser with no special mechanisms.

This paper is a summary of a section of the author's Ph.D. thesis (Milne 1983) with the same title and describes work done at the University of Edinburgh. That thesis presents ROBIE, a deterministic parser that is able to resolve lexical ambiguities and that is fully implemented in PROLOG. ROBIE has two lookahead buffers and does not use Marcus's Attention Shift mechanism. This means that ROBIE scans the current token and one more of lookahead. PARSIFAL scanned the current token and two lookahead cells. In this paper, only local ambiguities are addressed, that is, ambiguities that can be resolved within the sentence. Global ambiguities, which require context to resolve, are not discussed. For this paper, it is assumed that the reader is familiar with deterministic parsing, and no other understanding of specific parsing mechanisms is assumed.

In the rest of this paper, we look at lexical ambiguity from simple examples to more complex ones. We start with how words are defined within the parser to be ambiguous and how the morphology can be used to resolve ambiguities.
Next we look at how word order and finally various types of agreement can be used to resolve most remaining ambiguities.

2 SYNTACTIC CONTEXT

2.1 WORD DATA STRUCTURES

As a first approach to handling ambiguity, it was asked, "If we construct a compound lexical entry for each word, composed of the features of each part of speech the word can have, and make no alterations to the grammar, how wide a coverage of examples will we get?"

This approach was used by Winograd (1972) and was found to be very effective for the following reason. Each word has all the possible relevant features for it. Therefore, the test will succeed for each possible part of speech with which a word can be used. In this way, all applicable rules will match. It may be that often only one rule will match, or that the first rule tried is the correct rule. The question is, how often will the rule that matches be the correct rule?

All words in ROBIE are defined in the syntactic dictionaries. Each word has a compound lexical entry incorporating all the features for all the possible parts of speech the word could have. This is exactly as was done by Winograd (1972). For example, block is defined as a noun and a verb, can is defined as a noun, auxiliary verb, and verb, and hit is defined as a noun and a verb. The features for each of these parts of speech are kept in the dictionary and, when the word is looked up, they are returned as a single ordered list of features. These features are sub-grouped according to the part of speech they are associated with. Hence, when the word block is looked up, the result returned is both the noun and the verb definition. In this way, all possibilities are returned.

In the English language, most words can have several parts of speech. This fact must be reflected in a parser of English, and we do this with the multiple meanings above. When the parser has enough information to decide which is the correct part of speech, it ignores (removes) the other possibilities.
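As an illustration, such compound entries might be laid out as follows. This is a hypothetical Python sketch: ROBIE itself is implemented in PROLOG, and the feature names here (ns, n3p, tenseless, modal) simply follow the paper's examples rather than ROBIE's actual dictionary format.

```python
# Hypothetical sketch of compound lexical entries (not ROBIE's actual
# PROLOG dictionary). Each word maps to an ordered list of
# (part-of-speech, features) groups, so a lookup returns every reading.

LEXICON = {
    "block": [("noun", ["ns", "n3p"]), ("verb", ["tenseless"])],
    "can":   [("noun", ["ns", "n3p"]), ("aux", ["modal"]), ("verb", ["tenseless"])],
    "hit":   [("noun", ["ns", "n3p"]), ("verb", ["tenseless", "past"])],
}

def lookup(word):
    """Return a single ordered feature list, sub-grouped by part of speech."""
    return [(pos, list(feats)) for pos, feats in LEXICON.get(word, [])]
```

A lookup of block thus returns both the noun and the verb group, and a grammar rule is free to match on either.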
In this way, we have not built structure that is later thrown away. Although some may argue that this is a form of parallelism, it seems necessary since it reflects the inherent parallelism of language.

2.2 MORPHOLOGY

The first part of the disambiguation process takes place in the morphology. When ROBIE identifies a word that has a morphological ending, the morphology must adjust the features of the word. For example, when blocked is identified, the feature "ed" must be added to the list of features for block. At the same time, a portion of the disambiguation takes place. If block is defined as both a noun and a verb, then blocked is not a noun. The morphology causes some features to be added, such as "ed, past", and some features to be removed, such as "tenseless". As features that are no longer applicable are removed, so also are parts of speech and their associated features that are no longer applicable. For blocked, the features "noun, ns, n3p" will be removed and the features "adjective, ed, past" will be added.

The morphology will identify words such as adverbs, adjectives, and verbs in a similar way. The morphology used is very similar to that of Winograd (1972) and of Dewar, Bratley, and Thorne (1969); the part-of-speech additions and deletions are taken from Marcus (1980). Although this technique may seem obvious, it is included to point out that a majority of the occurrences of part-of-speech ambiguity can be resolved or reduced on the basis of the morphology alone.

2.3 DISAMBIGUATION

Now that we have allowed words to have multiple parts of speech and the morphology can be used to trim some of the ambiguity, we need a simple technique for disambiguating words to a single part of speech. Again, referring to Occam's Razor, what is preferable is a simple and general technique for all types of disambiguation.
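The blocked adjustment described in section 2.2 can be sketched in the same style. This is again hypothetical Python: the feature names (noun, ns, n3p, tenseless, adjective, ed, past) come from the paper, but the function itself is illustrative, not ROBIE's morphology.

```python
# Hypothetical sketch of the -ed morphology step of section 2.2: starting
# from the compound entry of the root, readings that cannot carry -ed are
# dropped and the features "adjective, ed, past" are added.

def analyse_ed_form(root_entry):
    # Drop the noun reading ("noun, ns, n3p") and the no-longer-applicable
    # feature "tenseless" from the readings that remain.
    trimmed = [(pos, [f for f in feats if f != "tenseless"])
               for pos, feats in root_entry if pos != "noun"]
    # Add the new reading contributed by the -ed ending.
    trimmed.append(("adjective", ["ed", "past"]))
    return trimmed

block = [("noun", ["ns", "n3p"]), ("verb", ["tenseless"])]
blocked = analyse_ed_form(block)   # only verb and adjective readings survive
```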

In ROBIE each rule matches the features of one or two buffer cells. (The word buffer will be used interchangeably with cell; that is, buffer and cell are the same concept.) If the word block is in the first buffer cell, then a pattern [noun] or a pattern [verb] will match. These patterns do not relate to the other possible definitions of a word. If a rule pattern has matched on the feature "noun" in the first buffer cell, then ROBIE assumes that this word is a noun. It would then be appropriate to disambiguate the word as a noun. This is exactly as in Winograd (1972).

In a non-deterministic parser, it is not essential to find the correct rule first. If the parser runs an incorrect rule, the parser may backtrack and change the category assignment. But in a deterministic parser, there will never be any backtracking, and this solution cannot be used. Since ROBIE does not backtrack, disambiguating the word when the pattern matches will always result in the same disambiguation as if the word were disambiguated in the grammar rule. Once a rule runs assuming a buffer contains a certain part of speech, it must be used as such in the parser. The general disambiguation scheme is: if a full pattern matches a word as a certain part of speech, then it is disambiguated as that part of speech.

The compound lexical entries and pattern-matching disambiguation alone will handle many examples of ambiguity. In the rest of this paper we see just what this can do for us.

2.4 AN EXAMPLE

Given the above mechanisms - multiple definition and disambiguation by the pattern matching - let us see how a few simple examples are handled. Consider:

(1) The falling block needs painting.

We will look only at the words falling and block in this example.
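Before walking through (1), the general scheme of section 2.3 (a rule pattern names a feature, a buffer cell matches if any of its readings carries that feature, and a full match discards the other readings) can be sketched as follows. The Python below is illustrative, not ROBIE's PROLOG rules.

```python
# Illustrative sketch of pattern-matching disambiguation (section 2.3).

def matches(cell, feature):
    """A buffer cell matches a one-feature pattern such as [noun] if any
    of its part-of-speech groups carries that feature (the group's own
    label counts as a feature)."""
    return any(feature == pos or feature in feats for pos, feats in cell)

def disambiguate(cell, pos):
    """Once a rule has run on a cell as a given part of speech, every
    other reading is removed; there is no backtracking to restore it."""
    return [(p, f) for p, f in cell if p == pos]

# "falling" carries both an adjective and a verb reading; when the
# ADJECTIVE rule's pattern [adj] matches, the verb reading is discarded.
falling = [("adj", ["ing"]), ("verb", ["ing", "tenseless"])]
if matches(falling, "adj"):
    falling = disambiguate(falling, "adj")
```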
The word falling is defined as a verb and an adjective in the dictionary, and block is defined as a noun and a verb. While parsing this example, after the word the has initiated an NP and been attached to it as a determiner, the rules to parse adjectives are activated. The rule ADJECTIVE has the pattern [adj], and matches the word falling. Falling is then attached and disambiguated as an adjective. Recognition of falling as a verb does not occur. As there are no more adjectives, ROBIE will activate the rules to parse the headnoun. (ROBIE's grammar assumes that all words between the first noun and the head noun of an NP are nouns; see section 2.6.) The rule NOUN with the pattern [noun] will match on the word block, and it will be attached as a noun. Hence block will also be disambiguated without the verb use being considered by ROBIE.

Other ambiguities inside the noun phrase will be handled in a similar way. This approach will resolve adjective ambiguity and many other pre-nominal ambiguities. This works because the noun phrase has a very strict word order. When an ambiguous word is found, only one of its meanings will be appropriate to the word order of the noun phrase at that point. This approach can be thought of as an extension of the basic approach of the Harvard Predictive Analyzer (Kuno 1965).

This strategy will also often disambiguate main verbs. For example, consider the following sentences:

(2) Tom hit Mary.
(3) Tom will hit Mary.
(4) The will gave the money to Mary.

In (2), hit is the main verb. In the dictionary, hit is also defined as a noun (as in card playing). The parser will attach Tom as the subject of the sentence and then activate the rules for the main verb. Since hit has the feature "verb", it will match that rule and be attached and disambiguated as a verb. Again, other possible parts of speech are not considered.

The word will could be a noun or a modal, as sentences (3) and (4) demonstrate. In (3), will cannot be part of the headnoun with Tom, so the NP will be finished as above.
The rules for the auxiliary will then be activated, and the word will then matches the pattern [modal] and is attached to the AUX.

In (4), the word will is used as a noun. Since it follows the determiner, the rules for nouns will be activated. The word will then matches the pattern [noun] and attaches to the NP as a noun.

The same approach will also disambiguate stop and run in the following sentence. Since stop is sentence initial and can be a tenseless verb, the rule IMPERATIVE will match, and it will be disambiguated as a verb. The word run, which can be a noun or a verb, will be handled as will in (4).

(5) Stop the run.

2.5 THE WORD TO

Now let us consider a more difficult example, the word to. To is defined as an auxiliary verb and a preposition in ROBIE, as illustrated by these sentences:

(6) I want to kiss you.
(7) I will go to the show with you.

In (6), to is the infinitive auxiliary, while in (7) to is a preposition. This analysis is based on that of Marcus (1980:118). Our two buffer cell lookahead is sufficient to disambiguate these examples. The buffer patterns for the above sentences are:

[to & tenseless] -> embedded VP
[to & ngstart] -> PP

By looking at the following word, to can be disambiguated. In (7), the word the cannot be a tenseless verb, so the first pattern does not match. In (6), the second buffer does not have the feature "ngstart", so the rule doesn't match.

However, the above patterns will accept ungrammatical sentences. To reject ungrammatical sentences, we can use verb subcategorisation as a supplement to the above rules. One cannot say:

(8) *I want to the school with you.
(9) *I will hit to wash you.

In English, only certain verbs can take infinitive complements. To can only be used as an auxiliary verb starting a VP when the verb can take an infinitive complement. Hence, by activating the rules to handle the VP usage only when the infinitive is allowed, the problem is partly reduced. Also, by classifying the verb for PPs with the preposition to, the problem is simplified. This is merely taking advantage of subcategorisation in verb phrases. Taking advantage of this subcategorisation greatly reduces, but does not eliminate, the possible conflict.

We have seen what to do if the verb will only accept a toPP or a VP. The final difficult situation arises whenever the following three conditions are true:

- the verb will accept a toPP and a toVP,
- the item in the second buffer has the features "tenseless" and "ngstart", and
- the toPP is a required modifier of the verb.

Although this situation rarely arises, the above rule will make the wrong decision if the ambiguous word is being used as a noun. In this situation, ROBIE will make the wrong decision, and has no capability to better decide. By default, the principles of Right Association and Minimal Attachment apply, as discussed in Frazier and Fodor (1978).

A free text analysis done on a cover story in TIME magazine (1978) resulted in 55 occurrences of the word to. The two rules mentioned above, in conjunction with verb subcategorisation, gave the correct interpretation of all of these. These rules were also checked on the MECHO corpus (Milne 1983) and the ASHOK corpus (Martin, Church, and Patil 1981). There were no violations of these rules in either of these.

2.6 ADJECTIVE/NOUN AND NOUN/NOUN AMBIGUITY

Adjective/noun ambiguity is beyond the present scope of this research and is handled in a simple-minded way. If the word following the ambiguous adjective/noun word can be a noun, then the ambiguous word is used as an adjective. In other words, all conflicts are resolved in favour of the adjective usage. This problem arises in these examples:

(10) The plane is inclined at an angle of 30 degrees above the horizontal.
(11) A block rests on a smooth horizontal table.

In (10), horizontal is a noun, while in (11), it is an adjective. The above algorithm handles these cases. This approach takes advantage of the lookahead of the deterministic parser. A word should be used as an adjective if the following word can be an adjective or a noun. However, this approach would fail on examples such as:

(12) The old can get in for half price.
(13) The large student residence blocks my view.

2.7 WHY DO THESE TECHNIQUES WORK?

In this section we have seen many examples of the resolution of ambiguity. To handle these examples, we merely constructed a compound lexical entry for each word, composed of the features of each part of speech the word could be, and allowed the pattern matching to perform the disambiguation. This technique has been used by Winograd (1972). Why does this work so well?

English has a fairly strict structural order for all the examples presented here. Because of this, in each example we have seen, the use of the word as a different part of speech would be ungrammatical. Although these techniques have been used for non-deterministic parsers, their effectiveness has not been investigated for a deterministic parser. Most ambiguities are not recognised by people because only one of the alternatives is grammatical.
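As a concrete instance of this word-order filtering, the simple adjective/noun rule of section 2.6 amounts to a one-word lookahead test. This is a minimal sketch with assumed feature conventions, not ROBIE's actual grammar.

```python
# Minimal sketch of the adjective/noun strategy of section 2.6: if the
# word after an ambiguous adjective/noun word can itself be a noun, the
# conflict is resolved in favour of the adjective reading.

def resolve_adj_noun(second_cell):
    """second_cell is the one-word lookahead after the ambiguous word."""
    second_can_be_noun = any(pos == "noun" for pos, _ in second_cell)
    return "adjective" if second_can_be_noun else "noun"

# "... a smooth horizontal table": 'table' can be a noun, so 'horizontal'
# is used as an adjective; at "... above the horizontal." the next token
# cannot be a noun, so 'horizontal' is used as a noun.
table = [("noun", ["ns"])]
full_stop = [("punct", [])]
```

As examples (12) and (13) show, the same test misfires when the adjective reading is not the intended one.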
Inm a n y situations, when fixed constituent structure is takeninto account, other uses of an ambiguous word are notpossible and probably not even recognised. Since fixedconstituent structure rules out most alternatives, we havebeen able to handle the examples in this paper withoutany special mechanisms. In the introduction to thispaper, it was stated that a clean and simple method ofhandling ambiguity was desired. I feel that this goal hasbeen met for these examples.3THE ROLE OF AGREEMENT IN HANDLING AMBIGUITYUsing the simple techniques presented in the lastsections, we can handle many cases of part-of-speechambiguity, but there are many examples we cannotresolve.For example, the second of each pair ofsentences below would be disambiguated incorrectly.(14)(15)(16)(17)(18)(19)I know that boy is bad.I know that boys are bad.What boy did it?What boys do is not my business.The trash can be smelly.The trash can was smelly.Many people wonder what role p e r s o n / n u m b e r codesand the relatively rigid constituent structure in the verbgroup play in English. In this section, we will exploretheir role by attempting to answer the question, "Whatuse is the fixed structure of the verb group andp e r s o n / n u m b e r codes?"Computational Linguistics, Volume 12, Number 1, January-March 1986

3.1 UNGRAMMATICAL SENTENCES

Before we proceed, let us look at an assumption Marcus made in his parser: that it would be given only grammatical sentences. This assumption makes life easy for someone writing a grammar, since there is no need to worry about grammatical checking. Hence no provision was made for ungrammatical sentences, and the original parser accepted such examples as:

(20) *A blocks are red.
(21) *The boy hit the girl the boy the girl.
(22) *Are the boy run?

This simplification causes no problems in most sentences, but can lead to trouble in more difficult examples. If the parser's grammar is loosely formulated because it assumes it will be given grammatical examples only, then ungrammatical sentences may be accepted. If the syntactic analysis accepts ungrammatical sentences as grammatical, then it is making an error. Using grammatical constraints actually helps parsing efficiency and disambiguation. In the next sections we look at the consequences of this assumption as well as those of rejecting ungrammatical sentences.

3.2 SUBJECT/VERB AGREEMENT

We know that the verb group has a complicated but relatively fixed constituent structure. Although verbals have many forms, they must be mixed in a certain rigid order. We also know that the first finite verbal element must agree with the subject in person and number. That is, one cannot say:

(23) *The boy are run.
(24) *The boy will had been run.
(25) *The boys had are red.

While Marcus's parser enforced these observations to some extent, he did not follow them throughout his parser. We want to enforce this agreement throughout ROBIE. Checking the finite or main verb, to be sure that it agrees in number with the subject, will lead to the rejection of the above examples. This was done by adding the agreement requirement into the pattern for each relevant rule, as will be explained later.

Buffers 1 and 2 must agree before a rule relating the subject and verb can match. This check looks at the number code of the NP and the person/number code of the verb and checks whether they agree. The routine for subject/verb agreement is very general and is used by all the subject/verb rules. The routine can only check the grammatical features of the buffers.

3.3 MARCUS'S DIAGNOSTICS

Marcus (1980) did handle some part-of-speech ambiguities. The words to, what, which, that, and have could all be used as several parts of speech. For each of these words he also used a Diagnostic rule. These Diagnostic rules matched when the word they were to diagnose arrived in the first buffer position and the appropriate packets were active. Each diagnostic would examine the features of the three buffer cells and the contents of the Active Node Stack. Once the diagnostic decided which part of speech the word was being used as, it either added the appropriate features or explicitly ran a grammar rule. Marcus did not give each word a compound lexical entry as we have done here.

Most of the grammar rules in his parser were simple and elegant, but the diagnostics tended to be very complex and contained many conditionals. In some cases they also seemed rather ad hoc and did not meet the goal of a simple, elegant method of handling ambiguity. For example, consider the THAT-DIAGNOSTIC:

[that][np] -> in the Packet CPOOL (Clause pool of rules)
"If there is no determiner of second
and there is not a qp of second
and the nbar of 2nd is none of massn, npl
and 2nd is not-modifiable
then attach as det
else if c is nbar
then label 1st pronoun, relative pronoun
else label 1st complementiser."
(Marcus 1980:291)

Notice that if the word that were to be used as a determiner, then it would be attached after the NP was built! This is his primary rule for disambiguating the word that. Marcus's parser also had three other rules to handle different cases.

It seems that these rules did not "elegantly capture generalisations" as did the rest of his parser. I consider these rules undesirable and feel that they should be corrected to comply with my criteria for simple and elegant techniques in resolving ambiguity. I wanted a method that used no special mechanism, or routine, other than that needed to parse grammatical sentences. These diagnostics are certainly special mechanisms and do not meet this goal. Can we cover the same examples in a more simple and principled way?

In this section, we look at each of these diagnostics in turn and show how they have been replaced in the newer model. We also look at a few other examples of ambiguity which Marcus did not handle, but which are related to our discussion here.

3.4 HANDLING THE WORD TO

The handling of to by Marcus's diagnostic can be replaced by the method outlined in Section 2.5. This method was motivated to handle grammatical sentences and meets our criterion for a simple approach.

3.5 HANDLING WHAT AND WHICH

For both what and which, the ambiguity lies between a relative pronoun and a determiner. The following examples show various uses of both words:

(26) Which boy wants a fish? (det)
(27) Which boys want fish? (det)
(28) The river which I saw has many fish. (rel. pron.)
(29) What boy wants a fish? (det)
(30) What boys want is fish. (rel. pron.)

There is some debate about the part of speech to be assigned the word which. Some linguists consider it to be a quantifier (Chomsky 1965), while others consider it to be a determiner (Akmajian and Heny 1975, Chapter 8). We shall adopt the determiner analysis, making the problems for what and which similar.

To determine the correct part of speech for these two words, Marcus (1980:286) used the following diagnostics:

[which] -> in the packet CPOOL
"If the NP above Current Node is not modified
then label 1st pronoun, relative pronoun
else label 1st quant, ngstart, ns, wh, npl."

[what][t] -> in the packet NPOOL
"If 2nd is ngstart and 2nd is not det
then label 1st det, ns, npl, n3p, wh;
activate parse det
else label 1st pronoun, relpron, wh."

These diagnostics would make the word in question a relative pronoun if it occurred after a headnoun, or a determiner if the word occurred at the start of a possible noun phrase.

If we follow the approach in the last section, and give each word a compound lexical entry composed of the determiner and relative pronoun features, we find that these words are always made determiners unless they occur immediately after a headnoun. In other words, the which examples are all parsed correctly, but (30) is parsed incorrectly. This happens because the determiner rule will always try to match before the rule for WH questions can take effect. This simple step gives the correct analysis if the ambiguous word is to be a determiner, but will still err on (30).

The rule to parse a relative pronoun and start a relative clause is active only after the headnoun has been found. At this time, the rule for determiners is not active. Therefore, if the word what or which is present after a headnoun, the only rule that can match is the rule to use it as a relative pronoun, and it will be used as a relative pronoun. We have resolved the simple case of what as a relative pronoun using only the simple techniques of the last section. For these sentences

(31) What block is red?
(32) Which boy hit her?
(33) Which is the right one?

ROBIE produces the correct analysis, but still errs on (30). This error is because what is being used as a relative pronoun but does not follow a headnoun.

Without any additional changes to the parser, we get two things. Firstly, if the word occurs after the headnoun, then the NP-COMPLETE packet rules are active, and it will be a relative pronoun. In fact, since relative clauses can occur only after the end of an NP, this correctly resolves the relative pronoun uses. If the word occurs at the start of an NP, then it will be made a determiner.

This approach has exactly the same effect and coverage as did Marcus's diagnostics, but we have not needed any special rules to implement it. It will now provide the correct interpretation for which, but will make some errors for the word what. Marcus's what-diagnostic will treat what as a determiner whenever the item in the second buffer could start an NP. This is usually correct, but what will be treated as a determiner in all of the following:

(34) What boys want is fish.
(35) What blocks the road?
(36) What climbs trees?
(37) What boys did you see?
(38) What blocks are in the road?
(39) What climbs did you do?

In this paper, we are adopting the following analysis for WH clefts such as (34). The initial WH word, what, is a relative pronoun and attached as the WH-COMP of the subject S node. The subject is the phrase What boys want. The main verb of the sentence is is, and the complement fish. The exact details are not important, only that the word what or which is not a determiner at the start of a WH cleft.

In sentences (34-36), the word what is not used as a determiner. In the analysis we are using, it is a relative pronoun and is used as the WH-COMP for the S. In sentences (37-39), the word what is used as a determiner. Marcus (1980:286) admits that this diagnostic produces the incorrect result in this case. His diagnostic will make what a determiner in all of these examples, as will my analysis.

One can also see that each of the above pairs is a pair of potential garden path sentences. For each pair, the two buffers contain the same words. Hence our two-buffer lookahead is not sufficient to choose the correct usage of the word what. Using only two or three buffers, there is no way to make what a relative pronoun when the headnoun is plural but a determiner when it is singular for all arbitrary sentences. With regard to the Semantic Checking Hypothesis (Milne 1982), then, it is suggested that this decision is based o
