Optimizing for task performance and fairness in human-robot teams
Robots are already entering our homes and workplaces, and they are increasingly placed in the role of a teammate working with teams of humans. Designing effective robotic teammates is challenging because teamwork is a multi-faceted concept. The ability to assimilate quickly into human teams will help robots foster strong collaborations. Prior approaches to human-robot teamwork have mostly focused on optimizing the team's task performance, for example through task allocation algorithms that minimize overall task completion time. However, another critical dimension of teamwork that can impact task performance is fairness, a hallmark of cooperative relationships. A robot that treats team members fairly may have to trade off some task performance. In recent years, human-robot interaction (HRI) researchers have started to explore the concept of fairness in human-robot teaming. A key challenge in this domain is understanding the relationship between task performance and fairness. The adjacent field of fairness in machine learning (ML) shows that there is typically a trade-off between accuracy and fairness. It is unclear whether this trade-off in ML translates to human-robot teamwork, since fairness is context sensitive. Motivated by this challenge, this dissertation shows that robots can optimize task performance through intent communication and can maintain that performance while treating human teammates fairly, achieving satisfactory human-robot partnerships. In my approach, I define task performance and fairness as team-level components. My exploration of human-robot teamwork is inspired by Bratman's concept of Shared Cooperative Activity (SCA). For an activity to be considered teamwork, Bratman defines three facets that must all be present: mutual responsiveness, commitment to the joint activity, and commitment to mutual support. My approach can be divided into three main stages.
In the first stage, I use SCA to guide my exploration of human-robot teamwork, with a focus on optimizing the team's task performance. From this work, I learn that fairness is an important factor; however, the human-robot teaming field lacks proper metrics for, and understanding of, fairness. In the second stage, I seek to understand the factors that influence people's perceptions of fairness and develop fairness metrics. In the last stage, I use the insights gained from investigating task performance and fairness to design an algorithm that optimizes for both. My contributions include algorithms, fairness metrics, and empirical findings. To optimize task performance in single robot - single human teams in which the robot's role is a peer, I implement a bi-directional intent system that enables the robot to recognize the human teammate's intent and to communicate its own intent, resulting in improved collaboration. In addition, I develop the Teammate Algorithm for Shared Cooperation (TASC), which enables robots to consider three components of teamwork: intent, effort, and value. TASC enables participants to predict the robot's goal significantly earlier and with higher confidence, and to conserve their energy usage, compared to the baseline. To develop fairness metrics, I gather empirical findings showing that people's perceptions of fairness in single robot - single human teams where the robot is a peer are influenced by factors including capabilities, task type, and workload. Based on these insights, I create three fairness metrics: equality of capability, equality of task type, and equality of workload. When another human is added to the team, I consider the amount of time the robot spends working with each human teammate and develop an additional fairness metric, equality of time. My evaluations conducted with participants validate that equality of capability and equality of time align with people's perceptions of fairness from a third-person perspective.
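As a rough illustration of how equality-based metrics of this kind can be computed, the sketch below scores a team dimension by the ratio of its smallest to largest value across teammates. The function names, the min/max normalization, and the example numbers are my own assumptions for exposition, not the dissertation's actual formulation:

```python
# Hypothetical sketch of equality-style fairness metrics (names and
# normalization are assumptions, not the dissertation's implementation).

def equality_score(values):
    """Score in [0, 1]; 1.0 means the quantity is split perfectly equally."""
    if not values or max(values) == 0:
        return 1.0
    return min(values) / max(values)

def equality_of_workload(tasks_per_teammate):
    """Equality of workload: compare the number of tasks each teammate carries."""
    return equality_score(tasks_per_teammate)

def equality_of_time(robot_time_with_each_human):
    """Equality of time: compare how long the robot works with each human."""
    return equality_score(robot_time_with_each_human)

print(equality_of_workload([4, 4]))    # equal workloads -> highest score
print(equality_of_time([30, 10]))      # uneven time split -> lower score
```

A metric like this makes the "equal is fair" intuition explicit: a score of 1.0 corresponds to perfect equality along that dimension, and deviations shrink the score toward 0.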
My results also reveal bleed-over effects in people's assessments of fairness. Because fairness is context dependent, I further explore single robot - group of humans teams in which the robot takes on the leader role. I focus on a scenario where the robot allocates tasks to two human teammates who are characterized by their capabilities and task preferences. I use an iterative design approach, drawing on simulation data and human pilot data, to design a robotic teammate algorithm that balances fairness and task efficiency. My pilot user study results suggest that the perception of fairness held by the most capable team members does not align with the equality of capability metric: they want more opportunities to work on the tasks they are most skilled at. This leads to the creation of the fair-equity metric, which measures the difference between the ratios of teammates' outcomes to inputs. Outcomes are defined by task preferences and inputs are defined by capabilities. I develop the Efficient & Fair-Equity algorithm, which considers task efficiency as well as fairness in terms of capabilities and task preferences. I evaluate this algorithm against a baseline (the Efficient algorithm) in a set of studies focusing on various team types in a task allocation scenario. I show that in the mixed team type, the Efficient & Fair-Equity robot achieves fairness without reducing task efficiency: by treating team members fairly, the team is able to maintain its performance. My initial results for the twins team type likewise suggest that the Efficient & Fair-Equity robot can treat teammates fairly while maintaining task efficiency. In general, participants prefer robotic teammates that display efficient and fair behavior. Overall, these contributions aim to enable robotic teammates to integrate easily within human teams and to collaborate with people who have different capabilities and preferences.
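The fair-equity metric as described (the difference between teammates' outcome-to-input ratios) follows the structure of equity theory, and a minimal sketch might look like the following. The function name, arguments, and example values are hypothetical; the dissertation's actual definition of outcomes and inputs may be more elaborate:

```python
# Hypothetical sketch of the fair-equity idea: compare each teammate's
# outcome/input ratio and report the gap. Names and numbers are assumptions.

def fair_equity_gap(outcome_a, input_a, outcome_b, input_b):
    """Absolute difference between two teammates' outcome-to-input ratios.

    Per the text, outcomes are defined by task preferences (e.g., preferred
    tasks received) and inputs by capabilities. A smaller gap means a more
    equitable allocation; zero means both ratios match exactly.
    """
    return abs(outcome_a / input_a - outcome_b / input_b)

# Teammate A: high capability (0.9) but few preferred tasks received (2).
# Teammate B: lower capability (0.5) but more preferred tasks received (3).
print(fair_equity_gap(2, 0.9, 3, 0.5))  # nonzero gap -> inequitable allocation
```

Under this reading, an allocation is equitable not when teammates receive the same outcomes, but when outcomes are proportional to what each teammate contributes, which is consistent with the pilot finding that highly capable members expect more of their preferred tasks.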
Considering fairness in the robot's decision-making process will help reduce potential negative impacts on the team's social dynamics. My research aims to create strong human-robot collaborations.