Keynote: Understanding Human Internal States: I Know Who You Are and What You Think
For successful interaction between human and machine agents, the agents need to understand both explicitly presented human intentions and the unexpressed human mind. Although current human-agent interaction (HAI) systems rely mainly on the former, via keystrokes, speech, and gestures, the latter will play an important role in new and upcoming HAIs. In this talk we present our continuing efforts to understand the unexpressed human mind, which may reside in the internal states of neural networks in the human brain and may be estimated from brain-related signals such as fMRI (functional Magnetic Resonance Imaging), EEG (electroencephalography), and eye movements.
We hypothesize that the space of brain internal states has several independent axes whose temporal dynamics unfold on different time scales. Special emphasis is given to human memory, trustworthiness, and sympathy toward others during interactions. Human memory changes slowly in time and differs from person to person; therefore, by analyzing brain-related signals in response to many stimulus images, it may be possible to identify a person. On the other hand, sympathy toward others has much shorter time constants during human-agent interactions and may be identified for each user interaction. Trustworthiness toward others may have slightly longer time constants and may be accumulated by temporal integration over sequential interactions. Therefore, we measured brain-related signals during sequential Theory-of-Mind (ToM) games. We also evaluated the effects of human-like cues from the agents on trustworthiness.
At this moment, the estimation of human internal states utilizes brain-related signals such as fMRI, EEG, and eye movements. In the future, classification systems for human internal states will be trained on audio-visual signals only, and the current study will provide near-ground-truth labels.
Soo-Young Lee received his B.S., M.S., and Ph.D. degrees from Seoul National University in 1975, the Korea Advanced Institute of Science in 1977, and the Polytechnic Institute of New York in 1984, respectively. From 1977 to 1980 he worked for Taihan Engineering Co., Seoul, Korea, and from 1982 to 1985 for General Physics Corporation in Columbia, MD, USA. In early 1986 he joined the Department of Electrical Engineering at the Korea Advanced Institute of Science and Technology as an Assistant Professor, and he is now a Full Professor in the Department of Electrical Engineering and the Department of Bio & Brain Engineering. From June 2008 to June 2009 he spent his sabbatical leave at the Mathematical Neuroscience Laboratory of the RIKEN Brain Science Institute.
In 1997 he established the Brain Science Research Center, the main research organization for the Korean Brain Neuroinformatics Research Program. This program is one of the Korean Brain Research Promotion Initiatives sponsored by the Korean Ministry of Science and Technology from 1998 to 2008, and about 35 Ph.D. researchers from many Korean universities have joined it.
He is a Past President of the Asia-Pacific Neural Network Assembly (APNNA) and has contributed to the International Conference on Neural Information Processing as Conference Chair (2000), Conference Vice Co-Chair (2003), and Program Co-Chair (1994, 2002). He serves on the editorial boards of the journals Neural Processing Letters and Cognitive Neurodynamics. He received the Leadership Award and the Presidential Award from the International Neural Network Society in 1994 and 2001, respectively, and the APNNA Service Award and the APNNA Outstanding Achievement Award from the Asia-Pacific Neural Network Assembly in 2004 and 2009, respectively. From SPIE he also received the Biomedical Wellness Award and the ICA Unsupervised Learning Pioneer Award in 2008 and 2010, respectively.
His research interests center on the Artificial Brain, also known as artificial cognitive systems: human-like intelligent systems and robots based on the biological information-processing mechanisms of the brain. He has worked on computational models of the auditory and visual pathways, unsupervised and supervised learning architectures and algorithms, active learning, situation awareness from environmental sound, and top-down selective attention. Recently he has also been working on understanding human internal states from multimodal data, including fMRI, EEG, and eye movements. In particular, he is pioneering a new research area that identifies human internal states, such as memory, agreement with others, and trustworthiness of others, from EEG and eye movements. His research scope covers cognitive experiments, mathematical models, and real-world applications.