Activation of Auditory Cortex by Anticipating and Hearing Emotional Sounds: An MEG Study


Provided by the Aaltodoc Publication Archive. This is an electronic reprint of the original article; the reprint may differ from the original in pagination and typographic detail.

Author(s): Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina
Title: Activation of Auditory Cortex by Anticipating and Hearing Emotional Sounds: An MEG Study
Year: 2013
Version: Final published version

Please cite the original version: Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina. 2013. Activation of Auditory Cortex by Anticipating and Hearing Emotional Sounds: An MEG Study. PLoS ONE, Volume 8, Issue 11, 1-8. ISSN 1932-6203 (printed). DOI: 10.1371/journal.pone.0080284.

Rights: © 2013 Public Library of Science (PLoS). This is the accepted version of the following article: Yokosawa, Koichi; Pamilo, Siina; Hirvenkari, Lotta; Hari, Riitta; Pihko, Elina. 2013. Activation of Auditory Cortex by Anticipating and Hearing Emotional Sounds: An MEG Study. PLoS ONE, Volume 8, Issue 11, 1-8. ISSN 1932-6203 (printed). DOI: 10.1371/journal.pone.0080284, which has been published in final form at http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0080284.

All material supplied via Aaltodoc is protected by copyright and other intellectual property rights, and duplication or sale of all or part of any of the repository collections is not permitted, except that material may be duplicated by you for your research use or educational purposes in electronic or print form. You must obtain permission for any other use. Electronic or print copies may not be offered, whether for sale or otherwise, to anyone who is not an authorised user.

Activation of Auditory Cortex by Anticipating and Hearing Emotional Sounds: An MEG Study

Koichi Yokosawa 1,2*, Siina Pamilo 1, Lotta Hirvenkari 1, Riitta Hari 1, Elina Pihko 1

1 Brain Research Unit, O.V. Lounasmaa Laboratory, and MEG Core, Aalto NeuroImaging, School of Science, Aalto University, Espoo, Finland; 2 Faculty of Health Sciences, Hokkaido University, Sapporo, Hokkaido, Japan

Abstract

To study how auditory cortical processing is affected by anticipating and hearing long emotional sounds, we recorded auditory evoked magnetic fields with a whole-scalp MEG device from 15 healthy adults who were listening to emotional or neutral sounds. Pleasant, unpleasant, or neutral sounds, each lasting for 6 s, were played in a random order, preceded by 100-ms cue tones (0.5, 1, or 2 kHz) 2 s before the onset of the sound. The cue tones, indicating the valence of the upcoming emotional sounds, evoked typical transient N100m responses in the auditory cortex. During the rest of the anticipation period (until the beginning of the emotional sound), the auditory cortices of both hemispheres generated slow shifts of the same polarity as N100m. During anticipation, the relative strengths of the auditory-cortex signals depended on the upcoming sound: towards the end of the anticipation period the activity became stronger when the subject was anticipating emotional rather than neutral sounds. During the actual emotional and neutral sounds, sustained fields were predominant in the left hemisphere for all sounds. The DC MEG signals measured during both anticipation and hearing of emotional sounds implied that, following the cue that indicates the valence of the upcoming sound, auditory-cortex activity is modulated by the upcoming sound category during the anticipation period.

Citation: Yokosawa K, Pamilo S, Hirvenkari L, Hari R, Pihko E (2013) Activation of Auditory Cortex by Anticipating and Hearing Emotional Sounds: An MEG Study. PLoS ONE 8(11): e80284. doi:10.1371/journal.pone.0080284

Editor: Claude Alain, Baycrest Hospital, Canada

Received November 16, 2012; Accepted September 26, 2013; Published November 20, 2013

Copyright: © 2013 Yokosawa et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by the JSPS (Japan Society for the Promotion of Science, http://www.jsps.go.jp/english/index.html) researcher exchange program 2009, by the Academy of Finland (http://www.aka.fi/eng; National Centers of Excellence Program 2006-2011), and by the aivoAALTO research program of Aalto University (http://www.aalto.fi/en/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

* E-mail: yokosawa@med.hokudai.ac.jp

Introduction

Humans detect positive and negative emotions easily from both linguistic and nonlinguistic utterances [1] as well as from environmental sounds, such as crashes, breaking of glass, and music. Emotional sounds are important for social interaction and bonding, but they also serve a survival value in reorienting the processing resources. In the visual modality, emotional pictures, compared with neutral pictures, can enhance the processing already in the early visual cortices [2]. The auditory cortices are also affected by emotion. For human voice, cortices associated with auditory function (in addition to several cortical and subcortical areas commonly related to emotional processes) react more strongly to emotional than neutral prosody [3-5]. The auditory-cortex responses to emotional sounds may appear within 0.3 s from the beginning of the stimulus [6,7], indicating the readiness for fast emotion detection. Some electrophysiological studies have shown subsequent slow shifts up to 0.5 s after the onset of an emotional sound [8,9]. In addition to human voice, other types of complex emotional sounds lead to increased activation of the auditory cortices [10]. Even neutral tones conditioned in advance to emotional valence affect the auditory-cortex 100-ms neuromagnetic response N100m [11].

Anticipation of an imperative stimulus, cued by a preceding stimulus, can evoke slow scalp-negative EEG potentials [12-16] that are also sensitive to the anticipation of emotional pictures [17-19]. This slow shift consists of an earlier, orienting part occurring soon after the warning stimulus and of a later response reflecting, depending on the task, motor preparation or, when no motor action is required, anticipatory attention or cognitive preparation for the second stimulus [13,14,20-22]. Studies using magnetoencephalography (MEG) and source analysis suggest that during this later, anticipatory phase, the sensory cortex to be stimulated is already active. Thus, for example, during anticipation of an auditory imperative stimulus cued by a visual stimulus, the auditory cortex can be activated already during the later anticipation period, within 0.5 s before the auditory stimulus [23].

The aim of the current study was to determine whether anticipation of emotional vs. neutral sounds would modulate the activation of auditory cortices similarly during the early and late parts of the anticipation period and during listening to the sounds. We used MEG to obtain excellent temporal and good spatial resolution in the study of auditory-cortex activation, and we measured auditory evoked magnetic fields without applying high-pass filtering (direct current, DC) to reliably obtain both fast and slow brain signals. The 10.5-s time sequence included, after a 0.5-s baseline, a short cue tone followed after 2 s by a 6-s long emotion-evoking or neutral natural sound. The category of the upcoming sound (pleasant, neutral, or unpleasant) was indicated by the cue tone.

PLOS ONE | www.plosone.org | November 2013 | Volume 8 | Issue 11 | e80284
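The 10.5-s trial structure described above (a 0.5-s baseline, a 100-ms cue tone, a 2-s cue-to-sound interval, and a 6-s sound) can be laid out programmatically. The following is a minimal sketch of that timeline, not the authors' presentation code; the event names and the treatment of the remaining post-sound time are illustrative assumptions:

```python
# Sketch of one trial timeline as described in the paper: 0.5-s baseline,
# a 100-ms cue, then a 6-s sound starting 2 s after cue onset.
# Times are in seconds from trial start.

BASELINE = 0.5
CUE_DURATION = 0.1
CUE_TO_SOUND = 2.0      # sound onset relative to cue onset
SOUND_DURATION = 6.0

def trial_events():
    """Return (event, onset, offset) tuples for one trial."""
    cue_on = BASELINE
    sound_on = cue_on + CUE_TO_SOUND
    return [
        ("baseline", 0.0, cue_on),
        ("cue", cue_on, cue_on + CUE_DURATION),
        ("anticipation", cue_on, sound_on),       # cue onset to sound onset
        ("sound", sound_on, sound_on + SOUND_DURATION),
    ]

events = trial_events()
last_offset = events[-1][2]   # 8.5 s; the rest of the 10.5-s epoch
                              # is post-stimulus time
```

With these numbers the sound occupies 2.5-8.5 s of the epoch, matching the paper's anticipation period (0-2 s after cue) and hearing period (2-8 s after cue).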

Materials and Methods

Subjects

Eighteen healthy volunteers participated in the experiment. Data from 3 subjects were excluded from the analysis: data from two subjects because of excessive eye blinking and data from one subject because of a questionable N100m source location. The final analysis was therefore based on data from 15 subjects (8 females, 7 males; mean ± SD age, 27.5 ± 6.9 yrs; age range, 21-47 yrs; all right-handed).

Ethics Statement

The MEG recordings were approved by the Ethics Committee of the Hospital District of Helsinki and Uusimaa, and written informed consent was obtained from each participant prior to the experiments.

Stimuli

From the International Affective Digitized Sounds database (2nd Edition; IADS-2, University of Florida), we selected sounds that have been validated for emotional content by more than 100 listeners: eight "pleasant & low arousal" (abbreviated as P; e.g., music, birdsong), eight "neutral" (N; e.g., typewriter, wind), and eight "unpleasant & high arousal" (U; e.g., scream, car crash) sounds. Fig. 1 (top) shows the selected sounds along the Pleasure-Arousal scales among the sounds of the database, and Table 1 specifies our stimuli in more detail.

Each sound was adjusted (cut at the end, when needed) to last 6 s and modified so that no sound had rise and fall times shorter than 10 ms. Additionally, the stimuli were normalized so that their maximum sound pressures were the same. Fig. 1 (bottom) shows the averaged sound envelopes for the three sound categories, indicating a slightly slower rise for U than for N and P sounds within the first 0.2 s but very similar sound intensities after 0.5 s.

A 100-ms cue tone with rise and fall times of 10 ms was presented 2 s before each emotional or neutral (P/N/U) sound. The pitch of the cue was 500 Hz, 1 kHz, or 2 kHz, indicating the different valences of the upcoming sounds; the connection between the cue and the emotional sound was fixed for each subject but was counterbalanced across subjects. That is, the original 18 subjects were allocated evenly across the 6 different cue-stimulus combinations. The onsets of the successive cue sounds were separated by 20 s. Before the main experiment, the subjects participated in a 6-min training session to learn the relationship between the cue tones and the valence of the upcoming emotional sound category.

Consequently, each epoch consisted of an anticipation period (0-2 s; cue at time 0) and a hearing period (2-8 s). Both the cues and emotional sounds were presented via a non-magnetic speaker located in front of the subject in a magnetically shielded room.

All subjects were studied in two approximately 20-min sessions, each containing 60 cue-stimulus epochs, with the P, N, and U sounds presented in a random order, and a few oddball epochs. The oddball epochs included a 40-ms burst of white noise at an arbitrary location within the cue-stimulus epoch. The subject's task was to count the number of oddball epochs in each session; these epochs were excluded from the analysis. This task was added to help the subjects attend to the sounds and to keep their vigilance stable. Responses were thus collected for altogether 2 × 60 = 120 epochs, resulting in 40 epochs per sound category.

Figure 1. Profile of auditory stimuli. (top) Emotional sounds used as stimuli, shown in the valence and arousal matrix of IADS-2.
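The stimulus conditioning described above (trimming each sound to 6 s, enforcing rise and fall times of at least 10 ms, and equalizing maximum sound pressure) amounts to a few standard waveform operations. Below is a minimal NumPy sketch of those steps, not the authors' actual code; the 44.1 kHz sampling rate and the linear ramp shape are assumptions:

```python
import numpy as np

def condition_stimulus(sound, fs=44100, duration_s=6.0, ramp_s=0.010):
    """Trim a sound to a fixed duration, apply onset/offset ramps,
    and normalize its peak amplitude.

    Sketch of the preprocessing described in the paper, not the
    authors' code; fs and the linear ramp shape are assumptions.
    """
    n = int(duration_s * fs)
    sound = np.asarray(sound, dtype=float)[:n]        # cut at the end if too long
    ramp = np.linspace(0.0, 1.0, int(ramp_s * fs))    # 10-ms linear ramp
    sound[:ramp.size] *= ramp                         # fade in
    sound[-ramp.size:] *= ramp[::-1]                  # fade out
    return sound / np.max(np.abs(sound))              # same peak for all stimuli

# toy usage: a 7-s noise input comes out as a ramped, peak-normalized 6-s sound
stim = condition_stimulus(np.random.randn(44100 * 7))
```

Peak normalization is the simplest reading of "maximum sound pressures were the same"; loudness-based normalization would be an alternative but is not what the text states.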
Sounds used for the high-pleasure and low-arousal (P) category, the neutral (N) category, and the low-pleasure and high-arousal (U) category are marked with red, blue, and green dotted lines, respectively. (bottom) Amplitude envelopes averaged over the 8 sounds belonging to the same category; whole duration (lower left) and details of the onset (lower right).

Recordings

MEG signals were recorded with a 306-channel whole-scalp neuromagnetometer (VectorView™, Elekta Neuromag Oy, Helsinki, Finland) at the MEG Core of Aalto NeuroImaging, Aalto University, Espoo, Finland. The passband was from DC to 200 Hz, and the signals were sampled at 600 Hz.

Analysis

The MEG signals obtained in the two sessions were merged off-line after conversion of the data into the same reference head position. Event-related signals from the 204 gradiometers (two orthogonal sensors at each of the 102 locations in the sensor helmet) were then averaged separately for each stimulus category, excluding the oddball epochs.

Because of the tonotopic organization of the auditory cortex, the source location of the 100-ms response N100m varies slightly according to the stimulus frequency [24]. However, to obtain a higher signal-to-noise ratio and a robust source location, we calculated the N100m source location for the cue tones by using the MEG signals averaged over the three cue tones (500 Hz, 1 kHz, and 2 kHz). Two equivalent current dipoles (ECDs), one in the left hemisphere and the other in the right hemisphere, were assumed in each individual brain. The locations and directions of the two ECDs were calculated with the "Source Modelling" software (Elekta Neuromag) by using 20 pairs of orthogonal gradiometers over the temporal areas, i.e., 10 pairs for each hemisphere around the N100m maximum. The dipoles were fitted every 4.9 ms, and the ECD with the highest goodness-of-fit value was selected. These sources were then used to explain the signals during the whole analysis period. The signal passband was 0.1-40 Hz for the analysis of the transient responses and DC-8 Hz for the slow shifts. Activations of the auditory cortical areas associated with anticipating or hearing emotional/neutral sounds were investigated by quantifying the slow shifts preceding the sounds as well as the sustained fields during the sounds.

Table 1. Contents and affective ratings of auditory stimuli adopted from IADS-2.

High-pleasure, low-arousal (P); category averages: Pleasure 7.03, Arousal 4.02
  Sound        No.   Pleasure  Arousal  Description
  …            …     …         …        … of happy children
  Seagull      150   6.95      4.38     Seagull's song
  Robin        151   7.12      4.47     Robin's song
  Brook        172   6.62      3.36     Babble of stream
  Cork pour    726   6.82      4.51     Cork pulled and liquid poured
  Harp         809   7.44      3.36     Harp playing
  Beethoven    810   7.51      4.18     Classic music
  Choir        812   6.90      3.43     Choir's singing

Neutral (N); category averages: Pleasure 4.33, Arousal 5.43
  Hiccup       245   4.18      5.05     Hiccupping
  Nose blow    251   4.16      5.14     Blowing one's nose
  Office1      320   4.23      5.48     Typing and phone's ring
  Propeller    …     …         …        …
  Wind         …     …         …        Wind blowing
  Belch        702   4.45      5.37     Burping
  War          706   4.16      5.30     Machineguns and battleplane
  Paper2       729   4.30      5.79     Tearing paper

Low-pleasure, high-arousal (U); category averages: Pleasure 1.88, Arousal 7.83
  Scream       275   2.05      8.16     Screaming of a woman
  Fem scream2  276   1.93      7.77     Screaming of another woman
  Attack1      279   1.68      7.95     Screaming and beating
  Attack2      285   1.80      7.79     Screaming and beating
  Victim       286   1.68      7.88     Screaming and gun shot
  Fight1       290   1.65      7.61     Arguing voice
  Tire skids   422   2.22      7.52     Brake-screeching
  Car wreck    424   2.04      7.99     Brake-screeching and car crash*

*Added by the …
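The per-category averaging step in the Analysis section (event-related averaging of each stimulus category while excluding the oddball epochs) can be sketched as below. This is an illustrative sketch only; the array shapes, labels, and boolean oddball mask are assumptions, not the authors' pipeline:

```python
import numpy as np

def average_by_category(epochs, labels, is_oddball):
    """Average event-related epochs separately per stimulus category,
    dropping oddball epochs.

    `epochs` is (n_epochs, n_channels, n_times); `labels` holds
    'P'/'N'/'U' per epoch. Illustrative sketch, not the authors' code.
    """
    epochs = np.asarray(epochs, dtype=float)
    labels = np.asarray(labels)
    keep = ~np.asarray(is_oddball, dtype=bool)       # exclude oddball epochs
    return {cat: epochs[keep & (labels == cat)].mean(axis=0)
            for cat in ("P", "N", "U")}

# toy example: 6 epochs, 2 channels, 5 time points; the last epoch is an oddball
rng = np.random.default_rng(0)
epochs = rng.standard_normal((6, 2, 5))
labels = np.array(["P", "N", "U", "P", "N", "U"])
oddball = np.array([False, False, False, False, False, True])
evoked = average_by_category(epochs, labels, oddball)
```

In practice (e.g., with MNE-Python) this selection and averaging would be done on annotated epoch objects, but the arithmetic is the same: a mean over the retained epochs of each category.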
The mean N100m source location across the three cue tones (500 Hz, 1 kHz, and 2 kHz) was adopted as the source area for all signals because the auditory sustained field is known to originate within 1 cm of the N100m source (for example, [25]).

For each participant, the mean source strengths were computed within time windows of 0.2-0.35 s, 0.4-0.7 s, 1.0-1.5 s, and 1.5-2.0 s with respect to a baseline from -0.5 to 0 s before the cue onset, and in the time window of 2.5-8 s with respect to a baseline from -0.2 to 0 s before the emotional-sound onset (i.e., from 1.8 to 2 s after cue onset).

Statistical significance of the source strengths was evaluated by testing the values against zero with one-way ANOVA followed by Tukey's multiple-comparison tests. Possible effects of cue tones, as well as differences between hemispheres, time windows, and emotional categories, were analyzed with repeated-measures ANOVA (IBM SPSS Statistics 20). Greenhouse-Geisser correction was used when sphericity was violated. The level of statistical significance was p < 0.05.

Results

Source locations

Figure 2 (top) shows a "butterfly" display of the typical MEG waveforms of Subject 1. Transient deflections follow the onset of the cue and the onset of the emotional sound, and a sustained field with stable amplitude continues throughout the sound and even a few seconds afterwards.

The strongest transient responses were the N100m deflections, occurring bilaterally in sensors over the auditory cortices (Fig. 2a) and peaking on average 108 ms after the sound onset. The later responses, peaking around 0.27 s and 0.5 s, occurred in the vicinity of the strongest N100m (Figs. 2b and c). Therefore, the sources of N100m were used to explain also these later responses.

Figure 3 (left) shows the location of the current dipole for N100m to the cue tone in the right supratemporal auditory cortex of Subject 2. This location agrees with many earlier reports (for reviews, [26,27]).
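The quantification of slow shifts described in the Analysis section, averaging the source waveform within fixed time windows after subtracting the mean of a pre-cue baseline, reduces to the sketch below. The window boundaries are taken from the text and the 600 Hz sampling rate from the Recordings section; everything else (the waveform layout and start time) is an illustrative assumption:

```python
import numpy as np

FS = 600.0   # Hz, sampling rate from the Recordings section
T0 = 0.5     # the waveform is assumed to start 0.5 s before cue onset

def window_mean(source_wave, win, baseline=(-0.5, 0.0)):
    """Mean source strength within `win` (seconds relative to cue onset),
    after subtracting the mean over `baseline`. Sketch only."""
    t = np.arange(source_wave.size) / FS - T0
    base = source_wave[(t >= baseline[0]) & (t < baseline[1])].mean()
    sel = (t >= win[0]) & (t < win[1])
    return source_wave[sel].mean() - base

# a flat dummy waveform from -0.5 to 8 s gives zero in every window
wave = np.ones(int(8.5 * FS))
means = {w: window_mean(wave, w)
         for w in [(0.2, 0.35), (0.4, 0.7), (1.0, 1.5), (1.5, 2.0)]}
```

For the 2.5-8 s hearing window, the same function would be called with `baseline=(1.8, 2.0)`, matching the pre-sound baseline described above.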
The N100m sources of all subjects clustered to the auditory cortices, as shown in Fig. 3 (right). The mean ± SD goodness-of-fit value of the dipole model was 94.5 ± 4.6% (median, 95.8%), and all goodness-of-fit values exceeded 85%.

Figure 3. Locations of N100m sources to cue sounds. (a) Typical example of N100m source location superimposed on the anatomical image of one subject. (b) Distribution of the source locations of all participants superimposed on a top view of a template brain. doi:10.1371/journal.pone.0080284.g003

Transient and sustained signals

Figure 4 shows the grand-mean source waveforms of the MEG signals associated with the three sound categories (P/N/U) for the whole analysis period, separately for each hemisphere. The top traces show the source waveforms with passband from DC to 40 Hz, with prominent transient responses to the onsets of both sounds, decaying slow shifts (with peaks around 0.27 and 0.5 s) during the anticipation period, and prominent sustained fields throughout the emotional sound. Low-pass filtering at 8 Hz (middle traces) dampens the onset transients, and high-pass filtering at 0.1 Hz (bottom panels) strongly dampens the sustained fields. The DC-8 Hz traces were used for quantification of the slow shifts, and the 0.1-40 Hz traces were used for quantification of the N100m responses.

Figure 2. Time traces and maximum locations of magnetic field gradients. (top) An example of magnetic field gradients obtained by all gradiometers from one typical participant. Reverse triangles indicate the time points of the N100m peak (a) and the broad peaks at 0.27 s (b) and 0.5 s (c). (bottom) (a) Positions of the gradiometer pairs that detected the strongest N100m signal in each individual; the helmet is shown from the right (top) and left (bottom). (b, c) Sensor positions for the strongest 0.27-s (b) and 0.5-s (c) peaks given in spatial relationship to the sensor that picked up the strongest N100m response in each individual (grey shading); the squares illustrate the sensor pairs (as in the helmets on the left), separated by 34 mm. Each stacked circle corresponds to one participant. Dotted circles refer to local maxima within the temporal area in a few cases where the global maximum was located outside the temporal area. The signals were time-averaged over intervals from 0.2 to 0.35 s (b) and from 0.4 to 0.7 s (c), respectively. A few positions outside the temporal areas were eliminated from the figure. doi:10.1371/journal.pone.0080284.g002

Figure 5 shows a summary of the source strengths during the whole analysis period. The trace in the top panel is an example of a left-hemisphere source waveform for neutral sounds, here used to illustrate the different analysis periods (shadowed belts a-f). In all time windows, the strengths of all sources in both hemispheres differed statistically significantly from zero.

Anticipation period 0-2 s

First, the source strengths (dipole moments) and latencies of the N100m responses were analyzed. Main effects of hemisphere were observed both for source strength and latency (hemisphere (2) and category (3), n = 13; data of two participants were not included because their N100m responses were not clearly single-peaked) such that the

