Keynotes

David Hsu

David Hsu is a professor of computer science at the National University of Singapore, a member of NUS Graduate School for Integrative Sciences & Engineering (NGS), and deputy director of the Advanced Robotics Center. His current research focuses on robotics and AI.

He received a B.Sc. in computer science & mathematics from the University of British Columbia, Canada, and a Ph.D. in computer science from Stanford University, USA. After leaving Stanford, he worked at Compaq Computer Corp.’s Cambridge Research Laboratory and the University of North Carolina at Chapel Hill. At the National University of Singapore, he held the Sung Kah Kay Assistant Professorship and was a Fellow of the Singapore-MIT Alliance. He is currently serving as the general co-chair of the IEEE International Conference on Robotics & Automation 2016, the general chair of Robotics: Science & Systems 2016, a steering committee member of the International Workshop on the Algorithmic Foundation of Robotics, and an editorial board member of the Journal of Artificial Intelligence Research. He and his team of colleagues and students won the Humanitarian Robotics and Automation Technology Challenge Award at the International Conference on Robotics & Automation (ICRA) 2015 and the RoboCup Best Paper Award at the IEEE/RSJ International Conference on Intelligent Robots & Systems (IROS) 2015.

Abstract: Robots in Harmony with Humans (5 October 2016, 09:00 – 10:00)
In the early days, robots typically occupied tightly controlled environments, such as factory floors, designed to segregate them from humans for safety. Today robots “live” with humans, providing a variety of services in homes, in workplaces, or on the road. To become effective and trustworthy collaborators, robots must understand human intentions and act accordingly. One core challenge is the inherent uncertainty in understanding intentions, which results from the complexity and diversity of human behaviours. Robots must hedge against such uncertainty to achieve robust performance and sometimes actively elicit information in order to reduce uncertainty and ascertain human intentions. Our recent work explores planning and learning under uncertainty for human-robot interactive and collaborative tasks. It covers mathematical models of human intentions, planning algorithms that connect robot perception with decision making, and learning algorithms that enable robots to adapt to human preferences. I hope this work will spur greater interest in principled approaches that integrate perception, planning, and learning for fluid human-robot collaboration.
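As a purely illustrative aside (not taken from the speaker's work), the sketch below shows the simplest form that reasoning about uncertain human intentions can take: a Bayesian belief over a hypothetical set of pedestrian intentions, updated from observed actions. The intention labels, observation model, and probabilities are all invented for illustration.

```python
import numpy as np

# Hypothetical intention set and observation model: P(observed action | intention).
# All names and numbers are assumptions made up for this sketch.
INTENTIONS = ["cross_left", "cross_right", "stop"]
OBS_MODEL = {
    "step_left":  np.array([0.7, 0.1, 0.2]),
    "step_right": np.array([0.1, 0.7, 0.2]),
    "stand":      np.array([0.2, 0.2, 0.6]),
}

def update_belief(belief, observation):
    """One Bayesian update of the belief over intentions given an observed action."""
    posterior = belief * OBS_MODEL[observation]
    return posterior / posterior.sum()

# Start with a uniform (maximally uncertain) belief and refine it as actions are observed.
belief = np.full(len(INTENTIONS), 1.0 / len(INTENTIONS))
for obs in ["stand", "step_left", "step_left"]:
    belief = update_belief(belief, obs)
    print(dict(zip(INTENTIONS, belief.round(2))))
```

A planner that hedges against uncertainty would act on the whole belief rather than its most likely entry, and could choose information-gathering actions when the belief is too spread out to commit safely.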



Leila Takayama

Leila Takayama is a human-robot interaction researcher. This year, she founded Hoku Labs and joined the faculty at the University of California, Santa Cruz, as an acting associate professor of Psychology. Prior to UC Santa Cruz, she was a senior user experience researcher at GoogleX, and was a research scientist and area manager for human-robot interaction at Willow Garage. She is a World Economic Forum Global Agenda Council member and Young Global Leader. Last year, she received the IEEE Robotics & Automation Society Early Career Award. In 2012, she was named a TR35 winner and one of the 100 most creative people in business by Fast Company.

With a background in Psychology, Cognitive Science, and Human-Computer Interaction, she examines human encounters with new technologies. Dr. Takayama completed her PhD in Communication at Stanford University in 2008, advised by Professor Clifford Nass. She also holds a PhD minor in Psychology from Stanford, a master’s degree in Communication from Stanford, and bachelor of arts degrees in Psychology and Cognitive Science from UC Berkeley (2003). During her graduate studies, she was a research assistant in the User Interface Research (UIR) group at Palo Alto Research Center (PARC).

Abstract: Perceptions of agency in human-robot interactions (6 October 2016, 13:30 – 14:30)
Robots are no longer only in outer space, in factory cages, or in our imaginations. We interact with robotic agents when withdrawing cash from ATMs, driving cars with anti-lock brakes, and tuning our thermostats. In the moment of those interactions with robotic agents, we behave in ways that do not necessarily align with the rational belief that robots are just plain machines. Through a combination of controlled experiments and field studies, we will examine the ways that people make sense of robotic agents, including (1) how people interact with personal robots and (2) how people interact through telepresence robots. These observations and experiments raise questions about the psychology of human-agent interaction, particularly about issues of perceived agency and the incorporation of technologies into one’s sense of self.