Neural mechanisms underlying concurrent listening of simultaneous speech

Natasha Yuriko Santos Kawata, Teruo Hashimoto, Ryuta Kawashima

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)


Can we identify what two people are saying at the same time? Although it is difficult to perfectly repeat two or more simultaneous messages, listeners can report information from both speakers. In a concurrent/divided listening task, enhanced attention and segregation of speech may be required, rather than selection and suppression. However, the neural mechanisms of concurrent listening to multi-speaker speech have yet to be clarified. The present study used functional magnetic resonance imaging to examine the neural responses of healthy young adults listening to concurrent male and female speakers, in an attempt to reveal the mechanism of concurrent listening. After practice and multiple trials of concurrent listening, 31 participants achieved performance comparable to that of selective listening. Furthermore, compared with selective listening, concurrent listening induced greater activation in the anterior cingulate cortex, bilateral anterior insula, frontoparietal regions, and the periaqueductal gray. In addition to engaging the salience network for multi-speaker listening, attentional modulation and enhanced segregation of these signals could support successful concurrent listening. These results point to a mechanism by which one can listen to two voices through enhanced attention to saliency signals.

Original language: English
Article number: 146821
Journal: Brain Research
Publication status: Published - 2020 Jul 1


  • Anterior insula
  • Functional MRI
  • Multiple-talker
  • Saliency

