Neural mechanisms underlying concurrent listening of simultaneous speech

Natasha Yuriko Santos Kawata, Teruo Hashimoto, Ryuta Kawashima

Research output: Contribution to journal › Article › peer-reviewed

4 citations (Scopus)


Can we identify what two people are saying at the same time? Although it is difficult to repeat two or more simultaneous messages perfectly, listeners can report information from both speakers. A concurrent/divided listening task may require enhanced attention and segregation of speech rather than selection and suppression. However, the neural mechanisms of listening to multi-speaker concurrent speech have yet to be clarified. The present study used functional magnetic resonance imaging to examine the neural responses of healthy young adults listening to concurrent male and female speakers, in an attempt to reveal the mechanism of concurrent listening. After practice and multiple trials of concurrent listening, 31 participants achieved performance comparable to that of selective listening. Furthermore, compared with selective listening, concurrent listening induced greater activation in the anterior cingulate cortex, bilateral anterior insula, frontoparietal regions, and the periaqueductal gray. In addition to engagement of the salience network during multi-speaker listening, attentional modulation and enhanced segregation of speech signals could support successful concurrent listening. These results indicate a potential mechanism by which one can listen to two voices with enhanced attention to salience signals.

Journal: Brain Research
Publication status: Published - Jul 1, 2020
