How the human brain recognizes text-based emoticons

Like nonverbal cues in oral interactions, text-based emoticons, which are textual portrayals of a writer's facial expressions, are commonly used in electronic device-mediated communication. Little is known, however, about how text-based emoticons are processed in the human brain. The goal of this study was to examine the discrete neural systems engaged in the recognition of text-based emoticons and to investigate the spatio-temporal steps of the areas involved in text-based emoticon processing, using multimodal neuroimaging data.

In chapter 1, only emoticon stimuli were used: emotional emoticons (happy, sad, angry, and fearful) and scrambled emoticons. During fMRI scanning, subjects were asked to respond by pressing a button to indicate whether a text-based emoticon represented a positive or negative emotion. Voxel-wise analyses were performed to contrast responses to emotional versus scrambled emoticons and among emoticons expressing different emotions. In the voxel-wise analysis, both emotional and scrambled emoticons were processed mainly in cortical regions. To explore processing strategies for text-based emoticons, brain activity in the bilateral occipital and fusiform face areas was compared. In a percent signal change analysis, the occipital and fusiform face areas of the right hemisphere showed significantly higher activation than those of the left hemisphere. These results suggest that text-based emoticons are processed in the neocortex, dominantly in the right hemisphere, regardless of whether they contain configural information.

In chapter 2, MEG recordings and fMRI scans were acquired from 40 healthy subjects. To compare emoticons and faces, both text-based emoticons and facial expressions were used as stimuli. Before undergoing neuroimaging, all subjects rated the emotional valence of the stimuli. Using fMRI BOLD analyses and MEG source estimation, spatial concordance between fMRI and MEG components was demonstrated within the primary visual cortex, the ventral occipito-temporal cortex, the ventrolateral prefrontal cortex, and the occipito-parietal cortex. Incoming emoticon stimuli were analyzed for their visual characteristics at around 80 ms in the primary visual cortex (the M100 component). Face-like characteristics were processed in the fusiform face area (FFA) at around 150 ms (M170). Subsequent, presumably emotional, processing was source-localized to the left parietal cortex at around 220 ms (EPN). Source maps of the emoticons-versus-faces difference suggested that face stimuli are processed through a dual route, whereas emoticons are processed dominantly through a cortical pathway. In addition, correlations between behavior and source activation of the left anterior insular cortex (AIC) support cortical processing of emoticons.

The preliminary fMRI results indicate that people recognize text-based emoticons as pictures representing facial expressions rather than as words. The spatio-temporal data showed that emoticons are perceived dominantly through a cortical route, while facial expressions are perceived through both subcortical and cortical pathways. This is the first study to propose a neural model of emoticon recognition. These results help us understand how the human brain creates contextual meanings during interactions with information perceived in a given situation.
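
To make the chapter 1 laterality analysis concrete, the sketch below computes percent signal change (PSC) and runs a paired right-versus-left hemisphere comparison. This is a minimal Python sketch with synthetic placeholder values; the thesis does not specify its analysis software, and every variable name and number here is an assumption, not the study's data.

```python
import numpy as np
from scipy import stats

def percent_signal_change(task_mean, baseline_mean):
    """PSC = 100 * (task - baseline) / baseline, the usual ROI summary."""
    return 100.0 * (task_mean - baseline_mean) / baseline_mean

# Hypothetical per-subject PSC values for left and right FFA ROIs
# (synthetic placeholders standing in for extracted ROI time courses).
rng = np.random.default_rng(0)
n_subjects = 40
psc_left = rng.normal(0.20, 0.10, n_subjects)
psc_right = rng.normal(0.35, 0.10, n_subjects)

# Paired t-test for the right > left effect reported in chapter 1
t, p = stats.ttest_rel(psc_right, psc_left)
print(f"right vs. left FFA PSC: t={t:.2f}, p={p:.4f}")
```

The chapter 2 pipeline rests on MEG source estimation, with activity examined near the M100, M170, and EPN latencies. Below is a minimal sketch using MNE-Python's bundled sample dataset, since the study's own data and toolchain are not public; the dataset, event code, and file names are stand-ins for the thesis's emoticon/face paradigm.

```python
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Stand-in data: MNE's sample dataset (audio/visual stimuli), not emoticons.
data_path = sample.data_path()
meg_path = data_path / "MEG" / "sample"
raw = mne.io.read_raw_fif(meg_path / "sample_audvis_raw.fif")
events = mne.find_events(raw, stim_channel="STI 014")

# Epoch around stimulus onset and average to an evoked response
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.1, tmax=0.4,
                    baseline=(None, 0), preload=True)
evoked = epochs.average()

# Minimum-norm (dSPM) source estimation, a standard MEG source method
fwd = mne.read_forward_solution(meg_path / "sample_audvis-meg-oct-6-fwd.fif")
cov = mne.compute_covariance(epochs, tmax=0.0)
inv = make_inverse_operator(evoked.info, fwd, cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")

# Peak source amplitude near the latencies discussed in chapter 2
for label, t_ms in (("M100", 80), ("M170", 150), ("EPN", 220)):
    idx = stc.time_as_index(t_ms / 1000.0)[0]
    print(f"{label} (~{t_ms} ms): peak dSPM = {stc.data[:, idx].max():.2f}")
```
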
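In both sketches, swapping in real ROI time courses or subject-specific forward models would be straightforward; the structure of the analysis, not the placeholder inputs, is what mirrors the chapters described above.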
Advisors
Jeong, Bum-Seok (정범석)
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2016
Identifier
325007
Language
eng
Description

Thesis (Ph.D.) - Korea Advanced Institute of Science and Technology: Graduate School of Medical Science and Engineering, 2016.2, [60 p.]

Keywords

Emoticon; Emotion; Fusiform face area; fMRI; MEG

URI
http://hdl.handle.net/10203/222279
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=648214&flag=dissertation
Appears in Collection
MSE-Theses_Ph.D. (Doctoral Theses)
Files in This Item
There are no files associated with this item.
