Abstract
Building rapport in human-robot interaction is key to facilitating communication between users and intelligent autonomous robotic platforms. To convey their intentions, both sides can use a variety of verbal and nonverbal behaviors. In human-human interaction, one such nonverbal behavior is the subtle, partial mirroring of a conversational partner’s nonverbal behaviors during conversation, also referred to as parallel empathy. Among verbal behaviors, human interlocutors often convey understanding through reflective listening, rephrasing their conversational partner’s words. The effects of nonverbal mirroring and of reflective listening have each been studied individually in interactions between robots and humans, as well as in interactions between Embodied Conversational Agents (ECAs) and humans. However, few studies examine interactions between humans and robot-integrated ECAs that employ such rapport-building techniques. The contribution of this thesis is a pilot study of a prototype that integrates an expressive ECA with a Toyota Human Support Robot. The ECA is capable of reflective listening, tracking its interlocutor’s face, and mirroring their facial expressions and head movements in real time; in the integrated system, the robot and the ECA are fully aware of, and able to synchronize with, each other’s and the user’s verbal and nonverbal behavior. The pilot study examines whether integrating this ECA with the support robot improves the user’s experience when interacting with the robot.