The Psycholinguistics of Signed and Spoken Languages: How Biology Affects Processing


43-Gaskell-Chap43 3/11/07 10:53 AM Page 703

CHAPTER 43

The psycholinguistics of signed and spoken languages: how biology affects processing

Karen Emmorey

Linguistic research over the last few decades has revealed substantial similarities between the structure of signed and spoken languages (for reviews see Emmorey, 2002; Sandler and Lillo-Martin, 2006). These similarities provide a strong basis for cross-modality comparisons, and also bring to light linguistic universals that hold for all human languages. In addition, however, biology-based distinctions between sign and speech are important, and can be exploited to discover how the input–output systems of language impact online language processing and affect the neurocognitive underpinnings of language comprehension and production. For example, do the distinct perceptual and productive systems of signed and spoken languages exert differing constraints on the nature of linguistic processing? Recent investigations have suggested that the modality in which a language is expressed can impact the psychological mechanisms required to decode and produce the linguistic signal. This chapter explores what aspects of language processing appear to be universal to all human languages and what aspects are affected by the particular characteristics of audition vs. vision, or by the differing constraints on manual versus oral articulation.

Sign language processing is appropriately compared to speech processing, rather than to reading, because unlike written text, which can be characterized as “visual language,” sign language consists of dynamic and constantly changing forms rather than static symbols. Further, neither sign language nor spoken language comes pre-segmented into words and sentences for the perceiver. The production of writing, although performed by the hand, differs substantially from sign language production because writing derives its structure from a separate system (the orthography of a spoken language).
In contrast to written language, sign and speech are both primary language systems, acquired during infancy and early childhood without formal instruction.

43.1 Sign perception and visual processing

Although non-signers may interpret the visual signed signal simply as a collection of rapid hand and arm motions, signers quickly extract complex meaning from the incoming visual signal. Similarly, speakers extract meaning from a rapidly changing acoustic stream, if they know the language. Listeners and viewers are able to automatically parse an incoming auditory or visual linguistic signal by virtue of stored internal representations. Speech perception involves segmentation of speech sounds into phonemic units. For signed languages, a first question is whether signs actually exhibit sublexical linguistic structure

that could be used by a parser to segment visual signed input. Is it possible to have a phonology that is not based on sound?

43.1.1 Phonology in a language without sound

Several decades of linguistic research have shown that signed languages, like spoken languages, have a level of structure in which meaningless elements are combined in rule-governed ways to create meaningful forms (e.g. Stokoe, 1960; Battison, 1978; Sandler, 1986; Brentari, 1998). For spoken languages, these elements are oral gestures that create sounds. For signed languages, manual and facial gestural units are combined to create distinct signs. The discovery that sign languages exhibit phonological structure was groundbreaking because it demonstrated that signs are not holistic pantomimes lacking internal organization. Furthermore, this discovery showed that human languages universally develop a level of meaningless linguistic structure and a system that organizes this structure.

Briefly, signs are composed of three basic phonological parameters: hand configuration, location (place of articulation), and movement. Orientation of the hand/arm is another contrasting parameter, but many theories represent orientation as a sub-component of hand configuration or movement, rather than as a basic phonological element. Figure 43.1 provides illustrations of minimal pairs from LIS (Lingua Italiana dei Segni, Italian Sign Language). The top part of the figure illustrates two LIS signs that differ only in hand configuration. Not all sign languages share the same hand configuration inventory.
For example, the “t” hand configuration in American Sign Language (the thumb is inserted between the index and middle fingers of a fist) is not found in European sign languages. Chinese Sign Language contains a hand configuration formed with an open hand with all fingers extended except for the ring finger, which is bent; this hand configuration does not occur in American Sign Language (ASL). In addition, signs can differ according to where they are made on the body or face. Figure 43.1B illustrates two LIS signs that differ only in their place of articulation, and these different locations do not add meaning to the signs. Signs can also differ minimally in orientation, as illustrated in Figure 43.1C. Finally, movement is another contrasting category that distinguishes minimally between signs, as shown in Figure 43.1D.

In addition to segment-like units, syllables have also been argued to exist in signed languages (Brentari, 1998; Corina and Sandler, 1993; Wilbur, 1993). The syllable is a unit of structure that is below the level of the word but above the level of the segment, and is required to explain phonological form and patterning within a word. Although sign phonologists disagree about precisely how sign syllables should be characterized, there is general agreement that a sign syllable must contain a movement of some type. In ASL, several phonological constraints have been identified that must refer to the syllable. For example, only certain movement sequences are allowed in bisyllabic (two-movement) signs: circle + straight movements are permitted, but straight + circle movements are not (Uyechi, 1994). Although a straight + circle movement sequence is ill-formed as a single sign, it is well formed when it occurs in a phrase. Thus, the constraint on movement sequences needs to refer to a level smaller than the word (the constraint does not hold across word boundaries), but larger than the segment.
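The parameter analysis above can be sketched as a simple data structure: each sign is a bundle of values for the four parameters, and a minimal pair is any two signs that contrast on exactly one of them. This is only an illustrative sketch; the glosses come from Figure 43.1, but the feature values are invented placeholders, not attested phonological transcriptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    gloss: str
    handshape: str    # hand configuration
    location: str     # place of articulation
    movement: str
    orientation: str

PARAMS = ("handshape", "location", "movement", "orientation")

def minimal_pair_parameter(a, b):
    """Return the single parameter on which two signs contrast,
    or None if they differ on zero or several parameters."""
    diffs = [p for p in PARAMS if getattr(a, p) != getattr(b, p)]
    return diffs[0] if len(diffs) == 1 else None

# LIS glosses from Figure 43.1; feature values are invented for illustration.
bicicletta = Sign("BICICLETTA", "S", "neutral space", "alternating circles", "palms down")
cambiare   = Sign("CAMBIARE",   "A", "neutral space", "alternating circles", "palms down")

print(minimal_pair_parameter(bicicletta, cambiare))  # -> handshape
```

Representing orientation as a fourth top-level field is itself a theoretical choice; as noted above, many theories fold orientation into hand configuration or movement instead.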
Within Sandler’s (1986) Hand Tier model, signed segments consist of Movements and Locations, somewhat akin to Vowels and Consonants.

However, syllables in signed language differ from syllables in spoken language because there is little evidence for internal structure within the signed syllable. Spoken syllables can be divided into onsets (usually, the first consonant or consonant cluster) and rhymes (the vowel and final consonants). Such internal structure does not appear to be present for sign syllables, although some linguists have argued for weight distinctions, i.e. “heavy” vs. “light” syllables, based on differences in movement types. Because of the lack of internal structure, there do not appear to be processes such as resyllabification in sign languages (e.g. a segment from one syllable becomes part of another syllable). These facts are important, given the emphasis that speech production models place on syllabification as a separate processing stage (e.g. Levelt et al., 1999). Syllabification processes and/or the use of a syllabary may be specific to phonological encoding for speech production. The syllable likely serves as an organizational unit for speech, providing a structural frame for multisegmental words. For example, MacNeilage (1998) argues that the oscillation of the mandible creates a frame around which syllable production can be organized. Meier (2000; 2002) points out that signing differs dramatically from speaking because signing does not involve a single, predominant oscillator, akin to the mandible. Rather, signs can have movement that is restricted to just about any joint of the arm. Thus, sign production

Figure 43.1 Examples of minimal pairs in Lingua Italiana dei Segni, LIS (Italian Sign Language): (A) BICICLETTA (“bicycle”) vs. CAMBIARE (“change”), signs that contrast in hand configuration; (B) AFFITTO (“rent”) vs. STIPENDIO (“salary”), signs that contrast in place of articulation (location); (C) PAPA (“father”) vs. UOMO (“man”), signs that contrast in orientation; (D) FORMAGGIO (“cheese”) vs. SASSO (“stone”), signs that contrast in movement. Illustrations from V. Volterra (ed.), La lingua italiana dei segni. Bologna: Il Mulino, 1987 (new edn 2004). Copyright Virginia Volterra and Elena Radutzky. Reprinted with permission.

may not be constrained to fit within a frame imposed by a single articulator. Further, multisegmental signs (e.g. signs with more than three segments) are relatively rare, regardless of how signed segments are defined (see Brentari, 1998). In contrast, syllabification processes for speech production may serve a critical framing function for words, which can contain many segments.

In sum, the linguistic evidence indicates that sign languages exhibit a level of sublexical structure that is encoded during sign production and that could be used to parse an incoming visual linguistic signal. A next question is whether signers make use of such internal representations when perceiving signs. Evidence suggesting that they do comes from studies of categorical perception in American Sign Language.

43.1.2 Categorical perception in sign language

Just as hearing speakers become auditorily tuned to perceive the sound contrasts of their native language, ASL signers appear to become visually tuned to perceive manual contrasts in American Sign Language. Two studies have now found evidence of categorical perception for phonologically distinctive hand configurations in ASL (Emmorey et al., 2003; Baker et al., 2005). “Categorical perception” refers to the finding that stimuli are perceived categorically rather than continuously, despite continuous variation in form. Evidence for categorical perception is found (1) when perceivers partition continuous stimuli into relatively discrete categories and (2) when discrimination performance is better across a category boundary than within a category. For these categorical perception experiments, deaf signers and hearing non-signers were presented with hand configuration continua that consisted of two handshape endpoints with nine intermediate variants.
These continua were either generated via a computer morphing program (Emmorey et al., 2003) or produced by a live signer (Baker et al., 2005). In addition, Emmorey et al. (2003) investigated categorical perception for place of articulation continua. For all experiments, participants performed a discrimination task in which they made same/different judgements for pairs or triplets of images from a continuum, and an identification task in which each stimulus was categorized with respect to the endpoints of the continuum (the discrimination task always preceded the categorization task).

Deaf ASL signers and hearing English speakers (non-signers) demonstrated similar category boundaries for both hand configuration and place of articulation (Emmorey et al., 2003; Baker et al., 2005). This result is consistent with previous studies which found that deaf and hearing participants exhibit similar perceptual groupings and confusability matrices for hand configuration and for place of articulation (Lane et al., 1976; Poizner and Lane, 1978). Thus, these ASL categories may have a perceptual as well as a linguistic basis. However, only deaf signers exhibited evidence of categorical perception, and only for distinctive hand configurations. Only deaf signers were sensitive to hand configuration category boundaries in the discrimination task, performing significantly better across category boundaries than within a hand configuration category (Emmorey et al., 2003; Baker et al., 2005).

Interestingly, neither group exhibited categorical perception effects for place of articulation (Emmorey et al., 2003). Lack of a categorical perception effect for place of articulation may be due to more variable category boundaries.
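The diagnostic in (2) above, better discrimination across a category boundary than within a category, can be sketched numerically. The continuum steps, boundary location, and accuracy figures below are all invented for illustration: they merely mimic the qualitative pattern reported for deaf signers, not the published data.

```python
# Hypothetical two-step discrimination pairs on an 11-step handshape
# continuum, with an assumed category boundary between steps 6 and 7.
BOUNDARY = 6

pair_accuracy = {          # (step_i, step_j): proportion correct, invented
    (1, 3): 0.72, (2, 4): 0.70, (3, 5): 0.74,    # within category A
    (5, 7): 0.91, (6, 8): 0.93,                  # straddling the boundary
    (7, 9): 0.75, (8, 10): 0.71, (9, 11): 0.73,  # within category B
}

def crosses_boundary(pair, boundary=BOUNDARY):
    i, j = pair
    return i <= boundary < j   # the boundary falls between the two steps

across = [acc for pair, acc in pair_accuracy.items() if crosses_boundary(pair)]
within = [acc for pair, acc in pair_accuracy.items() if not crosses_boundary(pair)]

print(f"across-boundary accuracy: {sum(across) / len(across):.3f}")  # 0.920
print(f"within-category accuracy: {sum(within) / len(within):.3f}")  # 0.725
```

A categorical perception effect is the gap between these two means; in the results discussed here, deaf signers showed such a gap for hand configuration, while neither group showed one for place of articulation.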
In speech, categorical perception is modulated by the nature of the articulation of speech sounds. For example, categorical perception is often weak or not present for vowels, perhaps because of the more continuous nature of their articulation compared to stop consonants (Fry et al., 1962). The same may be true for place of articulation in sign language. For example, the location of signs can be displaced within a major body region in casual signing (Brentari, 1998) or completely displaced to the side during whispering. Category boundaries for place of articulation appear to be much less stable than for hand configuration. Categorical perception may only occur when articulations are relatively discrete, for both sign and speech.

The fact that only deaf signers exhibited categorical perception for ASL hand configurations indicates that linguistic experience is what drives these effects. However, categorical perception effects are weaker for sign than for speech. Deaf signers’ discrimination ability within hand configuration categories was better than the near-chance discrimination ability reported within stop consonant categories for speech (e.g. Liberman et al., 1957). Nonetheless, the sign language results resemble discrimination functions observed for categorical perception in other visual domains, such as faces or facial expressions (e.g. Beale and Keil, 1995; de Gelder et al., 1997). Discrimination accuracy within visual categories tends to be relatively high; generally, participants perform with about 70–85 per cent mean accuracy within categories. The difference in the strength of categorical

perception effects between speech and sign may arise from psychophysical differences between audition and vision.

In sum, deaf signers appear to develop special abilities for perceiving aspects of sign language that are similar to the abilities that speakers develop for perceiving speech. These findings suggest that categorical perception emerges naturally as part of language processing, regardless of language modality. In addition, the results indicate that phonological information is utilized during the perception of moving nonsense signs (Baker et al., 2005) and when viewing still images of signs (Emmorey et al., 2003). Further research is needed to discover what parsing procedures might be used to identify sign boundaries, and whether categorical perception processes might play a role in segmenting the signing stream.

43.2 Processing universals and modality effects in the mental lexicon

Many models of spoken word recognition hypothesize that an acoustic-phonetic representation is sequentially mapped onto lexical entries, and lexical candidates which match this initial representation are activated (e.g. Marslen-Wilson, 1987; McClelland and Elman, 1986; Goldinger et al., 1989). As more of a word is heard, activation levels of lexical entries which do not match the incoming acoustic signal decrease. The sequential matching process continues until only one candidate remains which is consistent with the sensory input. At this point, word recognition can occur. This process is clearly conditioned by the serial nature of speech perception. Since signed languages are less dependent upon serial linguistic distinctions, visual lexical access and sign recognition may differ from spoken language.
To investigate this possibility, Grosjean (1981) and Emmorey and Corina (1990) used a gating technique to track the process of lexical access and sign identification through time.

43.2.1 The time course of sign vs. word recognition

In sign language gating tasks, a sign is presented repeatedly, and the length of each presentation is increased by a constant amount (e.g. one videoframe, or 33 msec). After each presentation, participants report what they think the sign is and how confident they are. Results from such studies show that ASL signers produce initial responses which share the place of articulation, orientation, and hand configuration of the target sign but differ in movement (Grosjean, 1981; Emmorey and Corina, 1990). The movement of the sign is identified last, and coincides with lexical recognition. This pattern of responses suggests that, similarly to the speech signal, the visual input for sign activates a cohort of potential lexical candidates that share some initial phonological features. This set of candidates narrows as more visual information is presented, until a single sign candidate remains. Clark and Grosjean (1982) showed further that sentential context did not affect this basic pattern of lexical recognition, although it reduced the time to identify a target sign by about 10 per cent.

However, unlike spoken word recognition, sign recognition appears to involve a two-stage process in which one group of phonological features (hand configuration, orientation, and place of articulation) initially identifies a lexical cohort, and then identification of phonological movement leads directly to sign identification. Such a direct correlation between identification of a phonological element and lexical identification does not occur with English and may not occur for any spoken language.
That is, there seems to be no phonological feature or structure whose identification leads directly to word recognition. Movement is the most temporally influenced phonological property of sign, and more time is required to resolve it. For speech, almost all phonological components have a strong temporal component, and there does not appear to be a single feature that listeners must wait to resolve in order to identify a word.

Furthermore, both Grosjean (1981) and Emmorey and Corina (1990) found that signs were identified surprisingly rapidly. Although signs tend to be much longer than words, only 35 per cent of a sign had to be seen before the sign was identified (Emmorey and Corina, 1990). This is significantly faster than word recognition for English: Grosjean (1980) found that approximately 83 per cent of a word had to be heard before the word could be identified. There are at least two reasons why signs may be identified earlier than spoken words. First, the nature of the visual signal for sign provides a large amount of phonological information very early and simultaneously. The early availability of this phonological information can dramatically narrow the set of lexical candidates for the incoming stimulus. Second, the phonotactics and morphotactics of a visual language such as ASL

may be different from those of spoken languages. In English, many words begin with similar sequences, and listeners can be led down a garden path if a shorter word is embedded at the onset of a longer word; for example, “pan” in “pantomime.” This phenomenon does not commonly occur in ASL. Furthermore, sign-initial cohorts seem to be much more limited by phonotactic structure. Unlike English, in which many initial strings have large cohorts (e.g. the strings [kan], [mæn], and [skr] are all shared by thirty or more words), ASL has few signs which share an initial phonological shape (i.e. the same hand configuration and place of articulation). This phonotactic structure limits the size of the initial cohort in ASL. The more constrained phonotactics and the early and simultaneous availability of phonological information may conspire to produce numerically and proportionally faster identification times for ASL signs.

In sum, lexical access and word recognition are generally quite similar for spoken and signed languages. For both language types, lexical access involves a sequential mapping process between an incoming linguistic signal and stored lexical representations.
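The two-stage pattern described in this section, an early cohort defined by hand configuration, location, and orientation that is then resolved by movement, can be sketched with a toy lexicon. The glosses and feature values below are invented; as noted above, a real ASL lexicon would yield very small initial cohorts.

```python
# Toy lexicon: gloss -> (handshape, location, orientation, movement).
# All entries are invented placeholders, not attested ASL signs.
LEXICON = {
    "SIGN-A": ("B", "chin",  "palm in", "straight"),
    "SIGN-B": ("B", "chin",  "palm in", "circle"),
    "SIGN-C": ("B", "chest", "palm in", "straight"),
    "SIGN-D": ("5", "chin",  "palm in", "straight"),
}

def recognize(handshape, location, orientation, movement):
    # Stage 1: the early, simultaneously available parameters
    # activate a cohort of candidate signs.
    cohort = {g for g, (h, l, o, _m) in LEXICON.items()
              if (h, l, o) == (handshape, location, orientation)}
    # Stage 2: resolving the movement narrows the cohort to the target.
    winners = {g for g in cohort if LEXICON[g][3] == movement}
    return cohort, winners

cohort, winners = recognize("B", "chin", "palm in", "circle")
print(sorted(cohort))   # stage-1 cohort: ['SIGN-A', 'SIGN-B']
print(winners)          # -> {'SIGN-B'}
```

The contrast with spoken-word cohort models lies in stage 2: here a single phonological property (movement) does the final work of identification, whereas for speech the cohort shrinks incrementally as each successive segment arrives.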

