The Importance of Multi-Dimensional Intersectionality in Algorithmic Fairness and AI Model Development

dc.contributor.advisor: De-Arteaga, Maria
dc.contributor.advisor: Peterson, Tina
dc.creator: Mickel, Jennifer
dc.date.accessioned: 2023-11-17T14:24:36Z
dc.date.available: 2023-11-17T14:24:36Z
dc.date.issued: 2023-05
dc.description.abstract: People are increasingly interacting with artificial intelligence (AI) systems and algorithms, but these models often embed unfair biases. Such biases can cause harm when an AI system's output is implicitly or explicitly racist, sexist, or otherwise derogatory. If the output offends the person interacting with it, it can cause emotional harm that may manifest physically. Alternatively, if the person agrees with the model's output, their negative biases may be reinforced, encouraging discriminatory behavior. Researchers have recognized the harm AI systems can cause, and they have worked to develop fairness definitions and methodologies for mitigating unfair biases in machine learning models. Unfortunately, these definitions, which are typically binary, and the associated methodologies are insufficient to prevent AI models from learning unfair biases. To address this, fairness definitions and methodologies must account for intersectional identities in multicultural contexts. The limited scope of existing fairness definitions allows models to develop biases against people whose intersectional identities are not captured by the definition. Likewise, existing frameworks and methodologies for model development are grounded in the US cultural context and may be insufficient for fair model development in other cultural contexts. To help machine learning practitioners understand the intersectional groups affected by their models, a database should be constructed detailing the intersectional identities, cultural contexts, and relevant model domains in which people may be affected. This can lead to fairer model development, because practitioners will be better equipped to test their models' performance on intersectional groups.
dc.description.department: Computer Sciences
dc.identifier.uri: https://hdl.handle.net/2152/122644
dc.identifier.uri: https://doi.org/10.26153/tsw/49447
dc.language.iso: en_US
dc.relation.ispartof: Honors Theses
dc.rights.restriction: Open
dc.subject: AI Fairness
dc.subject: Intersectionality
dc.subject: Multicultural
dc.subject: Artificial Intelligence
dc.title: The Importance of Multi-Dimensional Intersectionality in Algorithmic Fairness and AI Model Development
dc.type: Thesis
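
The disaggregated, intersectional evaluation the abstract calls for can be illustrated with a short sketch. The Python example below is not drawn from the thesis; the column names, demographic attributes, and data are hypothetical placeholders. It computes accuracy and positive-prediction rate separately for every observed combination of two demographic attributes, showing how a gap can surface at an intersection even when each attribute looks acceptable in isolation.

```python
# A minimal sketch of disaggregated (intersectional) model evaluation.
# Not taken from the thesis: the column names, demographic attributes,
# and data below are hypothetical placeholders.
import pandas as pd

# Hypothetical evaluation data: true labels, model predictions, and two
# demographic attributes whose combinations define intersectional groups.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "race":   ["A", "A", "B", "B", "A", "B", "A", "B"],
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
})

def group_report(data: pd.DataFrame, attrs: list) -> pd.DataFrame:
    """Accuracy and positive-prediction rate for each combination of attrs."""
    return data.groupby(attrs).apply(
        lambda g: pd.Series({
            "n": len(g),
            "accuracy": (g["y_true"] == g["y_pred"]).mean(),
            "positive_rate": g["y_pred"].mean(),
        })
    )

# Single-attribute reports can look acceptable while a specific
# race-gender combination is still underserved.
print(group_report(df, ["race"]))
print(group_report(df, ["gender"]))
print(group_report(df, ["race", "gender"]))
```

The same grouping pattern extends to additional attributes or to culture-specific categories of the kind the proposed database would catalog.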

Original bundle

Name: PS Thesis Final Draft 2022-23.pdf
Size: 1.77 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.64 KB
Format: Item-specific license agreed upon to submission