Learning to identify emotional voices
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Perrachione, Tyler | en_US |
| dc.contributor.author | Wong, Patrick | en_US |
| dc.date.accessioned | 2017-08-30T19:14:35Z | |
| dc.date.available | 2017-08-30T19:14:35Z | |
| dc.date.issued | 2017 | |
| dc.identifier.uri | https://hdl.handle.net/2144/23700 | |
| dc.description.abstract | Recognizing people by the sound of their voice is an important social skill. What listeners hear as a talker's “voice” is a highly variable signal, the acoustic features of which can change dramatically depending on situational factors such as a talker's emotional state when speaking or the linguistic content of an utterance. A challenge for listeners in talker identification is to maintain perceptual constancy of talker identity across different situations. We investigated listeners’ ability to learn to identify voices from emotional speech and generalize their knowledge of talker identity to new emotional contexts. Listeners learned to identify voices from utterances spoken with neutral, angry, or fearful vocal affects and were then tested on their ability to identify those voices from both trained and untrained affects. Listeners learned talker identity equally well regardless of which emotion was expressed during training. However, in all cases, changing the vocal affect of the speech at test resulted in a significant decrement in talker identification accuracy. These results elucidate how emotional variability impacts social auditory perception: The phonetic changes to speech resulting from the vocal expression of emotion can obscure the correspondence between speech acoustics and talker identity expected by listeners. | en_US |
| dc.description.sponsorship | This work was supported by grants from the National Science Foundation (USA) (BCS-1125144), the National Institutes of Health (USA) (R01DC008333), the University Grants Committee (HKSAR) (RGC/GRF) (14117514), and the Global Parent Child Resource Centre Limited awarded to P.W., and by an NSF Graduate Research Fellowship and NIH grant R03DC014045 to T.P. | en_US |
| dc.rights | Attribution-ShareAlike 3.0 United States | en_US |
| dc.rights.uri | http://creativecommons.org/licenses/by-sa/3.0/us/ | |
| dc.subject | Talker identification | en_US |
| dc.subject | Voice recognition | en_US |
| dc.subject | Generalization | en_US |
| dc.subject | Phonetic variability | en_US |
| dc.subject | Affect | en_US |
| dc.subject | Emotion | en_US |
| dc.subject | Affect display | en_US |
| dc.title | Learning to identify emotional voices | en_US |
| dc.type | Dataset | en_US |