TY - JOUR
T1 - Recruitment of fusiform face area associated with listening to degraded speech sounds in auditory-visual speech perception
T2 - A PET study
AU - Kawase, Tetsuaki
AU - Yamaguchi, Keiichiro
AU - Ogawa, Takenori
AU - Suzuki, Ken Ichi
AU - Suzuki, Maki
AU - Itoh, Masatoshi
AU - Kobayashi, Toshimitsu
AU - Fujii, Toshikatsu
PY - 2005/7/15
Y1 - 2005/7/15
N2 - For the fast and accurate cognition of external information, the human brain seems to integrate information from multi-sensory modalities. We used positron emission tomography (PET) to identify the brain areas related to auditory-visual speech perception. We measured the regional cerebral blood flow (rCBF) of young, normal volunteers during the presentation of dynamic facial movement at vocalization and during a visual control condition (visual noise), both under the two different auditory conditions of normal and degraded speech sounds. The subjects were instructed to listen carefully to the presented speech sound while keeping their eyes open and to say what they heard. The PET data showed that elevation of rCBF in the right fusiform gyrus (known as the "face area") was not significant when the subjects listened to normal speech sound accompanied by a dynamic image of the speaker's face, but was significant when degraded speech sound (filtered with a 500 Hz low-pass filter) was presented with the facial image. The results of the present study confirm the possible involvement of the fusiform face area (FFA) in auditory-visual speech perception, especially when auditory information is degraded, and suggest that visual information is interactively recruited to make up for insufficient auditory information.
KW - Auditory-visual speech perception
KW - Fusiform face area
KW - PET
UR - http://www.scopus.com/inward/record.url?scp=19544368822&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=19544368822&partnerID=8YFLogxK
U2 - 10.1016/j.neulet.2005.03.050
DO - 10.1016/j.neulet.2005.03.050
M3 - Article
C2 - 15925100
AN - SCOPUS:19544368822
SN - 0304-3940
VL - 382
SP - 254
EP - 258
JO - Neuroscience Letters
JF - Neuroscience Letters
IS - 3
ER -