TY - JOUR
T1 - Time to Face Language
T2 - Embodied Mechanisms Underpin the Inception of Face-Related Meanings in the Human Brain
AU - García, Adolfo M.
AU - Hesse, Eugenia
AU - Birba, Agustina
AU - Adolfi, Federico
AU - Mikulan, Ezequiel
AU - Martorell Caro, Miguel
AU - Petroni, Agustín
AU - Bekinschtein, Tristán A.
AU - del Carmen García, María
AU - Silva, Walter
AU - Ciraolo, Carlos
AU - Vaucheret, Esteban
AU - Sedeño, Lucas
AU - Ibáñez, Agustín
N1 - Publisher Copyright:
© 2020 The Author(s). Published by Oxford University Press.
PY - 2020/11/1
Y1 - 2020/11/1
AB - In construing meaning, the brain recruits multimodal (conceptual) systems and embodied (modality-specific) mechanisms. Yet, no consensus exists on how crucial the latter are for the inception of semantic distinctions. To address this issue, we combined electroencephalographic (EEG) and intracranial EEG (iEEG) recordings to examine when nouns denoting facial body parts (FBPs) and non-FBPs are discriminated in face-processing and multimodal networks. First, FBP words increased N170 amplitude (a hallmark of early facial processing). Second, they triggered fast (~100 ms) activity boosts within the face-processing network, alongside later (~275 ms) effects in multimodal circuits. Third, iEEG recordings from face-processing hubs allowed decoding of ~80% of items before 200 ms, while classification based on multimodal-network activity only surpassed ~70% after 250 ms. Finally, EEG and iEEG connectivity between both networks proved greater in early (0-200 ms) than later (200-400 ms) windows. Collectively, our findings indicate that, at least for some lexico-semantic categories, meaning is construed through fast reenactments of modality-specific experience.
KW - EEG
KW - embodied cognition
KW - functional connectivity
KW - intracranial recordings
KW - semantic processing
UR - http://www.scopus.com/inward/record.url?scp=85088134220&partnerID=8YFLogxK
U2 - 10.1093/cercor/bhaa178
DO - 10.1093/cercor/bhaa178
M3 - Article
C2 - 32577713
AN - SCOPUS:85088134220
SN - 1047-3211
VL - 30
SP - 6051
EP - 6068
JO - Cerebral Cortex
JF - Cerebral Cortex
IS - 11
ER -