The Workshop

In recent years, interest has grown in the development of autonomous interactive robots that enhance our daily lives at home and at work. Applications are plentiful: companions for children or elderly people, partners in industry, guides in public or personal spaces, educational tutors at school, and so on. Robots able to face such a broad range of social situations require social-cognitive capabilities that promote fluid and effective interactions. These include the automatic understanding of the user's actions, behaviors, and mental and emotional states, and the coherent production of multimodal, verbal and non-verbal communication.

Recent advances in robotics have contributed to the development of several kinds of robots able to express social communication skills through interactive modalities such as facial expressions, gestures, gaze, motion, and color. Despite this progress, the multimodal expression capabilities of robots still fall far short of the intuitiveness and naturalness required for uninformed people to interact with naturally communicative robots in their everyday lives. Designing and developing multimodal communication skills that provide more natural and powerful interactive experiences is a significant practical challenge, owing to limitations both in technology and in our understanding of how different modalities must work together to convey human-like levels of social intelligence. Social interaction and multimodal expression for socially intelligent robots thus remain very much active research areas with many challenges and open questions. For example, to what degree, and how precisely, different modalities (e.g., eye gaze, touch, vocal, body, and facial expressions) are involved in human interaction with intelligent robots remains largely unknown. Likewise, whether combining multimodal elements of emotional expression results in enhanced recognition of the emotion is still an open question. Furthermore, current research focuses predominantly on the visual and auditory senses, neglecting modalities such as touch that are an integral part of how we experience the world around us.

The scope of this workshop is to present rigorous scientific advances in social interaction and multimodal expression for socially intelligent robots. Previous research shows that this challenge cannot be approached from an engineering perspective alone: the human, social, and cognitive sciences play a primary role in developing and enhancing the social interaction skills of socially intelligent robots. This workshop will foster interdisciplinary collaboration among researchers in the domain, addressing the study of both human-human and human-machine interactions. The analysis of human-human interactions offers researchers the opportunity to understand how humans interact with the world and with one another multimodally, through both parallel and sequential use of multiple modalities (e.g., eye gaze, touch, vocal, body, and facial expressions), and to derive guidelines for designing robot behaviors. Results from human-robot interaction studies, in turn, are particularly important for understanding how the multimodal communication skills developed for social robots are perceived by uninformed interaction partners (e.g., children, elderly people) and how they influence the interaction process (e.g., with respect to usability and acceptance).

This workshop will bring together a multidisciplinary audience to discuss the open questions outlined above, address difficult challenges, and elaborate novel ways to advance research in the field, drawing on theories of human-human interaction and on empirical findings validated in human-robot interaction studies. The outcomes of this discussion will directly inform the establishment of standard guidelines for the production of coherent multimodal, verbal and non-verbal communication skills for robots. Advancing the development of socially intelligent robots opens new social and economic opportunities for the application of robots in our daily lives.

Submissions

We invite contributions of a fundamental nature (e.g., psychophysical studies and empirical research on multimodality), theoretical contributions, and technical contributions (e.g., use cases, prototype systems, empirical HRI studies). Position papers and reviews of the state of the art and ongoing research are also invited. Submissions need not address multiple modalities: work on a single modality that goes beyond the state of the art (e.g., papers on touch) is equally welcome.

Workshop topics include, but are not limited to:

  1. Contributions of a fundamental nature
    • Psychophysical studies and empirical research on multimodality
  2. Technical contributions on multimodal interaction
    • Novel strategies for multimodal human-robot interaction
    • Dialogue management using multimodal output
    • Work focusing on novel modalities (e.g., touch)
  3. Multimodal interaction evaluation
    • Evaluation and benchmarking of multimodal human-robot interactions
    • Empirical HRI studies with (partially) functional systems
    • Methodologies for the recording, annotation, and analysis of multimodal interactions
  4. Applications for multimodal interaction with social robots
    • Novel application domains for multimodal interaction
  5. Position papers and reviews of the state of the art and ongoing research