Decision subject-centered AI explanations : organizational justice perspective
dc.contributor.advisor | Lee, Min Kyung, Ph.D.
dc.creator | Sreekanth, Varshinee
dc.creator.orcid | 0009-0005-6238-4569
dc.date.accessioned | 2023-08-25T15:54:52Z
dc.date.available | 2023-08-25T15:54:52Z
dc.date.created | 2023-05
dc.date.issued | 2023-04-21
dc.date.submitted | May 2023
dc.date.updated | 2023-08-25T15:54:53Z
dc.description.abstract | As AI is integrated into more tasks, much research has focused on making AI explanations human-centered. Most prior work supports human decision-makers who use AI explanations; far less has designed explanations for decision subjects, who are affected by AI decisions but have little control over them. We investigate decision subject-centered AI explanations, aiming to design explanations that influence decision subjects' sense of justice and empowerment. We draw on the organizational justice literature, which suggests that different explanations affect decision subjects differently even when the decision outcomes are the same. We explore the effects of three forms of explanation (apology, excuse, and justification) that vary the organization's assumed responsibility for, and acknowledgment of, negative outcomes. We contrast them with input-influence-based explanations alone, one of the most common AI explanation types. We conduct two online studies in the context of AI shift scheduling to investigate the explanations' impact on subjects' perceptions of justice and empowerment, comparing AI-assigned versus human-assigned schedules at varying degrees of worker preference satisfaction (low vs. medium outcome favorability). Our findings suggest that apologies and excuses improve participants' sense of interpersonal justice relative to justifications and input-influence-based explanations. Additionally, when decisions are made by an AI system, participants are more willing to negotiate changes, believe they have greater influence over the schedule, and believe the decision was procedurally fairer, though human decision-makers are seen as more polite and respectful. Qualitative analysis of participants' responses suggests that AI decisions are seen as more rational and unbiased, while human decision-makers are seen as more likely to be biased but also as having more power to override previous decisions. This work informs the design of XAI systems that consider worker well-being through the lens of organizational justice and worker empowerment.
dc.description.department | Computer Science
dc.format.mimetype | application/pdf
dc.identifier.uri | https://hdl.handle.net/2152/121236
dc.identifier.uri | http://dx.doi.org/10.26153/tsw/48064
dc.language.iso | en
dc.subject | Explainable AI
dc.subject | Organizational justice
dc.subject | Algorithmic decision-making
dc.subject | Explanation
dc.subject | Justice
dc.subject | Fairness
dc.subject | Empowerment
dc.title | Decision subject-centered AI explanations : organizational justice perspective
dc.type | Thesis
dc.type.material | text
thesis.degree.department | Computer Sciences
thesis.degree.discipline | Computer Science
thesis.degree.grantor | The University of Texas at Austin
thesis.degree.level | Masters
thesis.degree.name | Master of Science in Computer Sciences