The time course of auditory–visual processing of speech and body actions: Evidence for the simultaneous activation of an extended neural network for semantic processing

Meyer, Georg F. and Harrison, Neil R. and Wuerger, Sophie M. (2013) The time course of auditory–visual processing of speech and body actions: Evidence for the simultaneous activation of an extended neural network for semantic processing. Neuropsychologia, 51 (9). pp. 1716-1725. ISSN 0028-3932

Full text not available from this repository.
Official URL: http://www.sciencedirect.com/science/article/pii/S...

Abstract

An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; its activity is modulated by familiarity and by the semantic congruency of the auditory and visual component signals, even when semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole-body actions. Here we present results from a high-density ERP study designed to examine the time course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions, to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: 1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. 2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory–visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal, and parietal areas, rather than with models that postulate hierarchical processing in a sequence of brain regions.

Item Type: Article
Keywords: Multisensory; Biological motion; Speech; Semantic processing; EEG
Subjects: B Philosophy. Psychology. Religion > BF Psychology
Faculty / Department: Faculty of Science > Psychology
Date Deposited: 22 Nov 2013 11:18
Last Modified: 05 Mar 2014 13:28
URI: http://hira.hope.ac.uk/id/eprint/222