Autonomous social robots and autonomous virtual humans have much in common: both can serve as companions for people ranging from young children to the elderly, and long-term interaction with both types of companion is now possible. The main difference is that social robots exist physically, while autonomous virtual humans are software-based visual agents. We now wish these physical and virtual companions to become ever more capable of accomplishing tasks in an intelligent and autonomous way. In terms of research, the problems to solve for social robots and for virtual humans are very similar. Autonomy is one of the major challenges in the field of computer animation; we generally consider autonomy as the quality or state of being self-governing. Perception of the elements in the environment is essential, as it gives the social robot or virtual human awareness of what is changing around it; it is the most important capability to simulate before going further. The most common perception channels include (but are not limited to) vision and audition. Adaptation and intelligence then define how the social robot or virtual human reasons about what it perceives, especially when unpredictable events occur. Conversely, when predictable elements appear again, a memory capability is necessary so that similar behaviour can be selected again. Lastly, emotion instantly adds realism by defining affective relationships between characters.
The goal of this workshop is to bring together researchers from both communities to share their methods and ideas for the development of autonomous companions.
Papers dealing with both social robots and virtual humans are especially welcome. Papers are solicited on the following and related topics:
- Affective computing
- Dialogue management
- Decision making
- Expressive behavior generation
- Facial expression recognition
- Short-term and long-term memory
- Groups and collective behavior
- Vision, audition, and touch
- Hand gestures
- Learning (e.g. by example)
- Emotion models
- Gaze animation and attention
- Cooperation between humans and social robots
Selected papers will be included as chapters in a book published by Springer-Verlag.