Decision subject-centered AI explanations: organizational justice perspective

dc.contributor.advisor: Lee, Min Kyung, Ph.D.
dc.creator: Sreekanth, Varshinee
dc.creator.orcid: 0009-0005-6238-4569
dc.date.accessioned: 2023-08-25T15:54:52Z
dc.date.available: 2023-08-25T15:54:52Z
dc.date.created: 2023-05
dc.date.issued: 2023-04-21
dc.date.submitted: May 2023
dc.date.updated: 2023-08-25T15:54:53Z
dc.description.abstract: As AI is integrated into a growing range of tasks, much research has focused on making AI explanations human-centered. Most prior work supports human decision-makers who use AI explanations; far less has designed explanations for decision subjects, who are affected by AI decisions but have little control over them. We investigate decision subject-centered AI explanations, designing explanations to influence decision subjects' sense of justice and empowerment. We draw on the organizational justice literature, which suggests that different explanations affect decision subjects differently even when decision outcomes are the same. We explore the effects of three forms of explanation---apology, excuse, and justification---that vary the organization's assumed responsibility for, and acknowledgment of, negative outcomes. We contrast them with input-influence-based explanations alone, one of the most common AI explanation types. We conduct two online studies in the context of AI shift scheduling, comparing AI-assigned and human-assigned schedules at varying degrees of worker preference satisfaction (low vs. medium outcome favorability), to investigate the explanations' impact on subjects' perceptions of justice and empowerment. Our findings suggest that apologies and excuses improve participants' sense of interpersonal justice more than justifications and input-influence-based explanations do. Additionally, when decisions are made by an AI system, participants are more willing to negotiate changes, believe they have greater influence over the schedule, and perceive the decision as procedurally fairer, though human decision-makers are seen as more polite and respectful.
Qualitative analysis of participants' responses suggests that AI decisions are seen as more rational and unbiased, while human decision-makers are seen as more likely to be biased but also as having greater power to override previous decisions. This work informs the design of XAI systems that consider worker well-being in the context of organizational justice and worker empowerment.
dc.description.department: Computer Science
dc.format.mimetype: application/pdf
dc.identifier.uri: https://hdl.handle.net/2152/121236
dc.identifier.uri: http://dx.doi.org/10.26153/tsw/48064
dc.language.iso: en
dc.subject: Explainable AI
dc.subject: Organizational justice
dc.subject: Algorithmic decision-making
dc.subject: Explanation
dc.subject: Justice
dc.subject: Fairness
dc.subject: Empowerment
dc.title: Decision subject-centered AI explanations: organizational justice perspective
dc.type: Thesis
dc.type.material: text
thesis.degree.department: Computer Sciences
thesis.degree.discipline: Computer Science
thesis.degree.grantor: The University of Texas at Austin
thesis.degree.level: Masters
thesis.degree.name: Master of Science in Computer Sciences

Original bundle

Name: SREEKANTH-THESIS-2023.pdf
Size: 1.47 MB
Format: Adobe Portable Document Format

License bundle

Name: PROQUEST_LICENSE.txt
Size: 4.46 KB
Format: Plain Text

Name: LICENSE.txt
Size: 1.85 KB
Format: Plain Text