UPDATE (11/01/2014): The conference proceedings have been uploaded to the ACM Digital Library.


Click each keynote for detailed information about the talk and speaker.

Creative Design Computing for Happy Healthy Living
Oct. 29th at 13:45

Ellen Do, Georgia Tech and National University of Singapore

Crowdsourcing Agents for Smart IoT
Oct. 30th at 14:00

Jane Hsu, National Taiwan University

Design Everything by Yourself
Oct. 31st at 14:00

Takeo Igarashi, The University of Tokyo


Program Schedule

Subject to change



We are happy to announce three workshops at this year’s HAI. More information will be posted as it becomes available. For details on an individual workshop, please refer to that workshop’s website.

Workshop Attendance

Workshop registration will start at 12:30 on the first floor of the Media Union on the Kasuga Campus. Please note that this is a different building from the main conference venue. See the maps on the attendance page for details.


Oct. 29
9:30 – 11:30 AIST tour (please sign-up)
13:30 – 13:45 Welcome
13:45 – 14:45 Keynote: Creative Design Computing for Happy Healthy Living, Ellen Do
20 minute break
15:05 – 16:40 Session: Agents for Support and Learning chaired by Tatsuya Nomura

  • PEKOPPA: A Minimalistic Toy Robot to Analyse A Listener-Speaker Situation in Neurotypical and Autistic Children aged 6 years
    Irini Giannopulu, Valérie Montreynaud and Tomio Watanabe

  • User-friendly Autonomous Wheelchair for Elderly Care Using Ubiquitous Network Robot Platform
    Masahiro Shiomi, Takamasa Iio, Koji Kamei, Chandraprakash Sharma and Norihiro Hagita

  • Tangible Earth: Tangible Learning Environment for Astronomy Education
    Hideaki Kuzuoka, Naomi Yamashita, Hiroshi Kato, Hideyuki Suzuki and Yoshihiko Kubota

  • Daily Support Robots that Move on the Body
    Tamami Saga, Nagisa Munekata and Tetsuo Ono

  • Simplification of Wearable Message Robot with Physical Contact for Elderly’s Outing Support
    Hirotake Yamazoe and Tomoko Yonezawa

16:50 – 18:00 Welcome Reception


Oct. 30
9:00 – 10:20 Session: Novel Interaction Techniques chaired by Jun Kato

  • Ningyō of the CAVE: Robots as Social Puppets of Static Infrastructure
    Nico Li, Stephen Cartwright, Ehud Sharlin and Mario Costa Sousa

  • Personal and Interactive Newscaster Agent based on Estimation of User’s Understanding
    Naoto Yoshida, Miyuki Yano and Tomoko Yonezawa

  • Emotional Cyborg: Complementing Emotional Labor with Human-agent Interaction Technology
    Hirotaka Osawa

  • Calamaro: Perceiving Robotic Motion in the Wild
    John Harris, Stephanie Law, Kazuki Takashima, Ehud Sharlin and Yoshifumi Kitamura

20 minute break
10:40 – 12:20 Session: Telepresence and Teleoperation chaired by Daniel Rea

  • Volume Adaptation and Visualization by Modeling the Volume Level in Noisy Environments for Telepresence System
    Akira Hayamizu, Michita Imai, Keisuke Nakamura and Kazuhiro Nakadai

  • An Affective Telepresence System Using Smartphone High Level Sensing and Intelligent Behavior Generation
    Elham Saadatian, Thoriq Salafi, Hooman Samani, Yu De Lim and Ryohei Nakatsu

  • Can a Social Robot Help Children’s Understanding of Science in Classrooms?
    Tsuyoshi Komatsubara, Masahiro Shiomi, Takayuki Kanda, Hiroshi Ishiguro and Norihiro Hagita

  • Robotic tele-presence with DARYL in the wild
    Christian Becker-Asano, Kai O. Arras and Bernhard Nebel

  • Nodding Responses by Collective Proxy Robots for Enhancing Social Telepresence
    Tsunehiro Arimoto, Yuichiro Yoshikawa and Hiroshi Ishiguro

12:20 – 14:00 Lunch
14:00 – 15:00 Keynote: Crowdsourcing Agents for Smart IoT, Jane Yung-Jen Hsu
20 minute break
15:20 – 18:00 Session: Interactive Poster Session
Odd numbered posters: 15:20 – 16:40
Even numbered posters: 16:40 – 18:00

  • P01: A Fixed Pattern Deviation Robot That Triggers Intention Attribution
    Kazunori Terada, Yuto Imamura, Hideyuki Takahashi and Akira Ito

  • P02: The Sharing of Meta-Signals and Protocols Is the First Step for the Emergence of Cooperative Communication
    Takakazu Mizuki, Akira Ito and Kazunori Terada

  • P03: FIONA: A Platform for Embodied Cognitive Agents
    Celestino Alvarez and Lucía Fernández Cossío

  • P04: A Cooking Assistant Robot Using Intuitive Onomatopoetic Expressions and Joint Attention
    Mutsuo Sano, Yuka Kanemoto, Syogo Noda, Kenzaburo Miyawaki and Nami Fukutome

  • P05: Synchrony Based Side by Side Walking: An Application in Human-Robot Interactions
    Syed Khursheed Hasnain, Ghiles Mostafaoui, Caroline Grand and Philippe Gaussier

  • P06: Image Recognition Method Which Measures Angular Velocity from a Back of Hand for Developing a Valve UI
    Hirotsugu Minowa

  • P07: Development of a Dialogue Scenario Editor on a Web Browser for a Spoken Dialogue System
    Ryota Nishimura, Daisuke Yamamoto, Takahiro Uchiya and Ichi Takumi

  • P08: Notification Design Using Mother-like Expressions
    Marie Uemura, Keiko Yamamoto, Itaru Kuramoto and Yoshihiro Tsujino

  • P09: Will You Follow the Robot’s Advice? The Impact of Robot Types and Task Types on People’s Perception of a Robot
    Hyewon Lee, Jung Ju Choi and Sonya S. Kwak

  • P10: Multimodal Bodily Feeling Analysis to Design Air Conditioning Services for Elderly People
    Shinya Kiriyama, Kenichi Shibata, Shogo Ishikawa, Kei Ogawa, Harunobu Nukushina and Yoichi Takebayashi

  • P11: Portable Robot Inspiring Walking in Elderly People 
    Yuri Kumahara and Yoshikazu Mori

  • P12: Can You Touch Me? The Impact of Physical Contact on Emotional Engagement with a Robot
    Chaehyun Baek, Jung Ju Choi and Sonya S. Kwak

  • P13: Social Acceptance by Elderly People of a Fall-detection System with Range Sensors in a Nursing Home
    Takamasa Iio, Masahiro Shiomi, Koji Kamei, Chandraprakash Sharma and Norihiro Hagita

  • P14: Preliminary Investigation of Supporting Child-Care at an Intelligent Playroom
    Masahiro Shiomi and Norihiro Hagita

  • P15: Recovery of Virtual Object Contact Surface Features for Replaying Haptic Feeling
    Yongyao Yan, Greg S. Ruthenbeck and Karen J. Reynolds

  • P16: Toward Playmate Robots That Can Play with Children Considering Personality
    Kasumi Abe, Chie Hieida, Muhammad Attamimi, Takayuki Nagai, Takayuki Shimotomai, Takashi Omori and Natsuki Oka

  • P17: Affective Agents for Enhancing Emotional Experience
    Takahiro Matsumoto, Shunichi Seko, Ryosuke Aoki, Akihiro Miyata, Tomoki Watanabe and Tomohiro Yamada

  • P18: The Hybrid Agent MARCO: A Multimodal Autonomous Robotic Chess Opponent
    Christian Becker-Asano, Eduardo Meneses Bello, Nicolas Riesterer, Julien Hué, Christian Dornhege and Bernhard Nebel

  • P19: Artificial Endocrine System for Language Translation Robot
    Wu Jhong Ren and Hooman Samani

  • P20: Pointing Gesture Prediction Using Minimum-Jerk Model in Human-Robot Interaction
    Ren Ohmura, Yuki Kusano and Yuta Suzuki

  • P21: Digital Play Therapy for Children with Learning Disabilities
    Yukako Watanabe, Yoshiko Okada, Hirotaka Osawa and Midori Sugaya

  • P22: Amae and Agency Appraisal as Japanese Emotional Behavior: Influences on Agent’s Believability
    Koushi Mitarai and Hiroyuki Umemuro

  • P23: Weight-Aware Robot Motion Planning for Lift-to-Pass Action
    Oskar Palinko, Alessandra Sciutti, Francesco Rea and Giulio Sandini

  • P24: Emotion Recognition and Expression in Therapeutic Social Robot Design
    Sun Jie, Daniel Peng Zhuo, Li Qinpei, Anthony Wong Chern Yuen and Rui Yan

  • P25: Luminous Device for the Deaf and Hard of Hearing People
    Akira Matsuda, Midori Sugaya and Hiroyuki Nakamura

  • P26: Development of Werewolf Match System for Human Players Mediated with Lifelike Agents
    Yu Kobayashi, Hirotaka Osawa, Michimasa Inaba, Kosuke Shinoda, Fujio Toriumi and Daisuke Katagami

  • P27: Development of Smart Infant-Parents Affective Telepresence System
    Elham Saadatian, Reihaneh Hosseinzadeh Hariri, Adrian David Cheok and Ryohei Nakatsu

  • P28: COLUMN: Persuasion as a Social Mediator to Establish the Interpersonal Coordination
    Yasutaka Takeda, Kohei Yoshida, Shotaro Baba, Ravindra De Silva and Michio Okada

  • P29: Towards Better Eye Tracking in Human Robot Interaction Using an Affordable Active Vision System
    Oskar Palinko, Alessandra Sciutti, Francesco Rea and Giulio Sandini

  • P30: Evaluation of a Video Communication System with Speech-Driven Embodied Entrainment Audience Characters with Partner’s Face
    Yutaka Ishii and Tomio Watanabe

  • P31: Dynamic Dialog System for Human Robot Collaboration – Playing a Game of Pairs
    Andreas Kipp and Franz Kummert

  • P32: Unification of Demonstrative Pronouns in a Small Group Guided by a Robot
    Takashi Ichijo, Nagisa Munekata and Tetsuo Ono

  • P33: Evaluating an Intuitive Teleoperation Platform Explored in a Long-Distance Interview
    Ritta Baddoura, Gentiane Venture and Guillaume Gibert

  • P34: Analysis of Personality Traits for Intervention Scene Detection in Multi-User Conversation
    Shochi Otogi, Hung-Hsuan Huang, Ryo Hotta and Kyoji Kawagoe

  • P35: A Design Method Using Cooperative Principle for Conversational Agent
    Masahide Yuasa

  • P36: Experimental Study of Empathy and Its Behavioral Indices in Human-Robot Interaction
    Yuichiro Tsuji, Ami Tsukamoto, Takashi Uchida, Yusuke Hattori, Ryosuke Nishida, Chie Fukada, Motoyuki Ozeki, Takashi Omori, Takayuki Nagai and Natsuki Oka

  • P37: Huggable Communication Medium Encourages Listening to Others
    Junya Nakanishi, Hidenobu Sumioka, Masahiro Shiomi, Daisukei Nakamichi, Kurima Sakai and Hiroshi Ishiguro

  • P38: Tap Model to Improve Input Accuracy of Touch Panels
    Takahisa Tani and Seiji Yamada

  • P39: Modeling of Cooperative Behavior Agent Based on Collision Avoidance Decision Process
    Kensuke Miyamoto, Hiroaki Yoshioka, Norifumi Watanabe and Yoshiyasu Takefuji

  • P40: Representation of Gaze, Mood, and Emotion: Movie-watching with Telepresence Robots
    Ken Yonezawa and Hirotada Ueda

  • P41: A Hierarchical Structure for Gesture Recognition Using RGB-D Sensor
    Hyunsoek Choi and Hyeyoung Park

  • P42: Communicating Emotions: A Model for Natural Emotions in HRI
    Oliver Damm and Britta Wrede

  • P43: How Does Emphatic Emotion Emerge via Human-Robot Rhythmic Interaction?
    Hideyuki Takahashi, Nobutsuna Endo, Hiroki Yokoyama, Takato Horii, Tomoyo Morita and Minoru Asada

  • P44: Determining Robot Gaze According to Participation Roles in Multiparty Conversations
    Takashi Yoshino, Yuki Hayashi and Yukiko Nakano

  • P45: Interactions on Eyeballs of Humanoid-Robots
    Takayuki Todo and Takanari Miisho

  • P46: Video-Based Emotion Identification Using Face Alignment and Support Vector Machines
    Gil-Jin Jang, Ahra Jo and Jeong-Sik Park

  • P47: Social Networking Sites Photos and Robots: A Pilot Research on Facebook Photo Albums and Robotics Interfaces for Older Adults
    Angie Lorena Marin

  • P48: Telepresence Robot That Exaggerates Non-Verbal Cues for Taking Turns in Multi-Party Teleconferences
    Komei Hasegawa and Yasushi Nakauchi

  • P49: Emotional Scene Understanding Based on Acoustic Signals Using Adaptive Neuro-Fuzzy Inference System
    Taewoong Kim and Minho Lee

19:00 – 21:00 Banquet


Oct. 31
9:00 – 10:20 Session: Techniques and Strategies for Developing Agents chaired by Christian Becker-Asano

  • SB Simulator: A Method to Estimate How Relation Develops
    Taichi Sono, Toshihiro Oosumi and Michita Imai

  • Modeling Perception-Action Loops: Comparing Sequential Models with Frame-Based Classifiers
    Alaeddine Mihoub, Gerard Bailly and Christian Wolf

  • PaintBoard – Prototyping Interactive Character Behaviors by Digitally Painting Storyboards
    Daniel J. Rea, Takeo Igarashi and James E. Young
    Best Paper

  • Voice Interaction System with 3D-CG Virtual Agent for Stand-alone Smartphones
    Daisuke Yamamoto, Keiichiro Oura, Ryota Nishimura, Takahiro Uchiya, Akinobu Lee, Ichi Takumi and Keiichi Tokuda

20 minute break
10:40 – 12:20 Session: Social Interaction Strategies for Agents chaired by Tomio Watanabe

  • Assigning a Personality to a Spoken Dialogue Agent through Self-disclosure of Behavior
    Yoshito Ogawa, Kouki Miyazawa and Hideaki Kikuchi

  • Potential of Imprecision: Exploring Vague Language in Agent Instructors
    Leigh Clark, Khaled Bachour, Abdulmalik Ofemile, Svenja Adolphs and Tom Rodden

  • Sharedo: To-Do List Interface for Human-Agent Task Sharing
    Jun Kato, Daisuke Sakamoto, Takeo Igarashi and Masataka Goto
    Best Paper Nominee

  • A Design Model of Emotional Body Expressions in Non-humanoid Robots
    Jekaterina Novikova and Leon Watts
    Best Paper Nominee

  • Signaling Trouble in Robot-To-Group Interaction. Emerging Visitor Dynamics with a Museum Guide Robot
    Raphaela Gehle, Karola Pitsch and Sebastian Wrede

12:20 – 14:00 Lunch break
14:00 – 15:00 Keynote: Design Everything by Yourself, Takeo Igarashi
20 minute break
15:30 – 16:50 Session: Understanding Users chaired by Tomoko Yonezawa

  • Methodology for Study of Human-Robot Social Interaction in Dangerous Situations
    David J. Atkinson and Micah H. Clark

  • More Human than Human? A Visual Processing Approach to Exploring Believability of Android Faces
    Masayuki Nakane, James E. Young and Neil D. B. Bruce

  • Differences of Expectation of Rapport with Robots Dependent on Situations
    Tatsuya Nomura and Takayuki Kanda

  • Stage of Subconscious Interaction in Embodied Interaction
    Takafumi Sakamoto and Yugo Takeuchi

10 minute break
17:00 – 17:30 Closing and Awards


AIST Tour
On Oct. 29, the HAI2014 committee is arranging a tour to the National Institute of Advanced Industrial Science and Technology (AIST). The tour will include a visit to one of Japan’s leading laboratories for humanoid robots. You will also have a chance to see the Actroid-F.

We will take you there by shuttle bus departing from the conference site (University of Tsukuba, Kasuga Campus). The tour will start at 9:30 and return to the conference site at around 11:30.

Attendance is limited to 20 people, selected in order of sign-up. If you wish to attend the tour, please sign up via the URL below and fill in the form as soon as possible. The application deadline is Oct. 15. We look forward to seeing you there!