An Overview of Algorithmic Bias in Artificial Intelligence
Artificial intelligence has, in recent years, become a major part of popular culture and of products used by people around the world. These systems are not perfect, however, and can embed multiple biases in their underlying algorithms. In this paper, we provide an overview of the sources of algorithmic bias, a discussion of real-world case studies and their impacts, and a summary of past attempts to address bias in artificial intelligence, including the General Data Protection Regulation (GDPR), corporate and governmental ethical guidelines, and New York City’s Automated Decision System (ADS) Task Force. Specifically, we discuss the COMPAS algorithm used for pretrial risk assessment, the ad-delivery algorithm used on Facebook’s online advertising platform, and a healthcare algorithm used for high-risk care management in the United States.
We conclude that algorithmic bias will only be exacerbated as more systems become automated through artificial intelligence. However, recognizing biases in current systems, advocating for their alleviation, and approaching the design of automated systems holistically have reduced bias in practice. More empirical research is required to fully understand how algorithmic bias can be consistently reduced.