Light-based nonverbal signaling with passive demonstrations for mobile service robots
As emerging applications bring robots into our daily lives, they are expected not only to operate in close proximity to humans but also to interact with them. Robots operating in crowded, human-populated environments face many communication challenges arising from varying levels of interaction (e.g., asking for help, giving information, or navigating near people). A crucial factor for success in these interactions is a robot’s ability to convey its intent, actions, and knowledge to co-located humans. Many robot platforms developed for service roles have non-anthropomorphic form factors that simplify and tailor them to their jobs. Lacking anthropomorphic features, these robots primarily communicate through an on-screen display and/or spoken language. To overcome the limitation of not communicating as people do, we explore the viability of nonverbal light-based signals as a communication modality for mobile service robots. Such signals offer advantages over the existing modalities they can complement or replace when appropriate, including long-range visibility and persistence over time. We present a novel light-based signal control architecture, implemented as a custom Robot Operating System (ROS) software package and generalized to allow for various signal implementations. We implement our framework on a BWIBot, an autonomous mobile service robot created as part of the Building-Wide Intelligence Project, and evaluate it through a real-world user study of a scenario in which a robot and a human traverse a shared corridor from opposite ends, creating a potential conflict when their paths meet.
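To make the idea of a generalized signal-control architecture concrete, the following is a minimal, hypothetical sketch (not the actual BWIBot ROS package): a controller that maps communicative intents to pluggable light animations, so new signal implementations can be registered without changing the controller itself. In the real system this logic would run inside a ROS node driving the robot's light hardware; all names here are illustrative assumptions.

```python
from typing import Callable, Dict, List

# One frame of the animation: per-LED brightness values in [0, 1].
Frame = List[float]


class LightSignalController:
    """Maps communicative intents to interchangeable light animations."""

    def __init__(self, num_leds: int = 8):
        self.num_leds = num_leds
        # Registry of animations, keyed by intent name.
        self._signals: Dict[str, Callable[[int, int], Frame]] = {}

    def register(self, intent: str, animation: Callable[[int, int], Frame]) -> None:
        """Associate an intent (e.g. 'yielding') with an animation function."""
        self._signals[intent] = animation

    def frames(self, intent: str, n_frames: int) -> List[Frame]:
        """Render the animation for an intent as a sequence of frames."""
        animation = self._signals[intent]
        return [animation(t, self.num_leds) for t in range(n_frames)]


def blink(t: int, n: int) -> Frame:
    """All LEDs on during even frames, off during odd frames."""
    level = 1.0 if t % 2 == 0 else 0.0
    return [level] * n


controller = LightSignalController(num_leds=4)
controller.register("yielding", blink)
print(controller.frames("yielding", 2))  # [[1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 0.0]]
```

Decoupling intents from animations in this way is one plausible reading of "generalized to allow for various signal implementations": the same controller can drive a blink, a sweep, or any other pattern by registering a different animation function.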
Our results demonstrate that exposing users to the robot’s animated light signal just once, before the moment when the information it carries becomes critical, is sufficient to disambiguate its meaning, greatly enhancing its utility in situ with no direct instruction or training. These findings suggest a paradigm of passive demonstration for light-based signals in future applications.