Understanding human minds is a challenging goal for successful interaction between a human and a machine. Although earlier machines were trained and developed to understand explicitly presented human minds, understanding unpresented or hidden human minds will play an important role in future human-machine interfaces. To understand unpresented human minds, we hypothesized that the space of human internal states has several independent axes, for example, memory, emotion, intention, and trustworthiness. Each state may be represented on its axes in the form of neural responses in the human brain. It is therefore necessary to investigate the brain signals that represent a person's internal state. In the same vein, brain-computer interfaces (BCIs) have been developed to facilitate communication between a human and a machine. They have primarily been applied in healthcare to assist physically impaired patients who have difficulty expressing their minds; for general use, a future BCI should also assist healthy people in their natural daily lives. Our continuous efforts resulted in the successful recognition of two different human internal states: agreement/disagreement with others, and the trustworthiness of others.

In this dissertation, we proposed an implicit intention recognition framework to classify a user's implicit agreement/disagreement at the single-trial EEG level. From EEG data recorded during self-relevant sentence reading, we were able to discriminate two implicit intentions, 'agreement' and 'disagreement.' To improve classification accuracy, discriminant features were selected by Fisher score across EEG frequency bands and electrodes. In particular, time-frequency representations obtained with Morlet wavelet transforms showed clear differences in gamma-, beta-, and alpha-band powers over the fronto-central area, and in theta-band power over the centro-parietal area; these regions were also identified in an fMRI study. The best classification accuracy of 75.5% was obtained by a support vector machine (SVM) classifier using gamma-band features over the fronto-central area. This result may enable a new intelligent user interface that understands human internal states regarding agreement and disagreement.

We designed a second experiment to investigate another human internal state, trustworthiness. Specifically, when a human and an intelligent machine work together as a team, human trust is widely known to strongly influence team performance, yet an electrophysiological signature of trust has not been isolated. To isolate such a signature, we recorded event-related potentials while healthy subjects (N = 31) played a theory-of-mind game with two types of computerized agents: with or without human-like cues. Electrophysiological activity in brain regions belonging to the theory-of-mind network correlated with perceived capability, especially when the machine opponent had some human-likeness. In particular, our results show that activity in the left parietal region that predicts a human player's future behavior can be identified as a neural signature of capability-based trust. These findings suggest that brain signals underlying trust, as influenced by perceived capability and human-likeness, might be useful for optimizing the performance of human-machine systems.

Based on the results of these two studies, we proposed novel research paradigms for understanding human internal states and demonstrated relationships between human internal states and neural responses in the human brain. By understanding the human brain, we believe it is possible to develop better human-machine interfaces that support human beings.
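The Fisher-score feature selection and SVM classification described above can be sketched as follows. This is a minimal illustration on synthetic data, not the dissertation's actual pipeline: the trial counts, feature counts, RBF kernel choice, and the `fisher_score` helper are all our own assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for single-trial EEG band-power features
# (trials x features); real data would be, e.g., gamma-band powers
# per electrode over the fronto-central area.
n_trials, n_features = 120, 32
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)      # 0 = disagreement, 1 = agreement
X[y == 1, :5] += 1.0                       # make the first 5 features discriminative

def fisher_score(X, y):
    """Two-class Fisher score per feature: (mu1 - mu2)^2 / (var1 + var2)."""
    X1, X2 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X2.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X2.var(axis=0)
    return num / (den + 1e-12)             # small epsilon avoids divide-by-zero

scores = fisher_score(X, y)
top = np.argsort(scores)[::-1][:10]        # keep the 10 highest-scoring features
acc = cross_val_score(SVC(kernel="rbf"), X[:, top], y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```

Ranking features by Fisher score before classification is a simple filter-style selection: it scores each band/electrode feature independently, which keeps the SVM's input dimensionality low relative to the number of single trials.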
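The Morlet wavelet time-frequency analysis mentioned above can likewise be sketched. The sampling rate, probe frequencies, and `n_cycles` value below are illustrative assumptions; the signal is a synthetic gamma burst, not recorded EEG.

```python
import numpy as np

fs = 250.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)                 # 2 s of signal
rng = np.random.default_rng(1)

# Synthetic "EEG" trial: a 40 Hz (gamma) burst appears in the second half.
sig = rng.normal(scale=0.5, size=t.size)
sig[t > 1.0] += np.sin(2 * np.pi * 40 * t[t > 1.0])

def morlet_power(sig, fs, freqs, n_cycles=7):
    """Time-frequency power via convolution with complex Morlet wavelets."""
    powers = np.empty((len(freqs), sig.size))
    for i, f in enumerate(freqs):
        sd = n_cycles / (2 * np.pi * f)     # Gaussian envelope width in seconds
        wt = np.arange(-4 * sd, 4 * sd, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sd**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit energy
        conv = np.convolve(sig, wavelet, mode="same")
        powers[i] = np.abs(conv) ** 2       # instantaneous power at frequency f
    return powers

freqs = np.array([10.0, 20.0, 40.0])        # alpha, beta, gamma probes
power = morlet_power(sig, fs, freqs)

# Gamma-band power should rise in the second half, where the burst is.
gamma_early = power[2, t <= 1.0].mean()
gamma_late = power[2, t > 1.0].mean()
print(gamma_late > gamma_early)
```

The `n_cycles` parameter trades temporal against spectral resolution: more cycles narrow the frequency response but smear the power estimate in time, which matters when comparing band powers across short single trials.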