Browsing by Subject "AI ethics"
Now showing 1 - 2 of 2
Item: Is Google Duplex too human? Exploring user perceptions of opaque conversational agents (2019-02-06)
O'Neal, Aubrey Lauren; Bock, Mary Angela

Conversational agents (CAs) are increasingly embedded in consumer products such as smartphones, home devices, and industry devices. Advancements in machine-generated voice, such as the Google Duplex feature released in May 2018, aim to mimic the human voice so closely that users cannot tell whether they are talking to a human or a CA. Exactly how well users can distinguish between human and machine voices, how the degree of humanness affects users' emotional perceptions, and what ethical concerns this raises remain underexplored. To answer these questions, I collected 405 surveys, comprising an experimental design that exposed users to three different voices (human, advanced machine, and simple machine) and questions about the ethical implications of CAs. The experiment revealed that users have difficulty distinguishing between human and advanced machine voices. Users do not experience the negative feeling known as the uncanny valley when listening to advanced synthetic audio, and they only narrowly prefer a real human voice over a synthetic one. Responses to the ethics questions revealed the importance of context and transparency. Drawing on these findings, I discuss the implications of advanced CAs and suggest strategies for ethical design.

Item: It Takes a Village: Participation, Data, and Ethics in Health AI (2023)
Richardson, Jensen; Graham, S. Scott

New artificial intelligence (AI) tools will shift the paradigm of healthcare and redefine how triage, diagnosis, and treatment are performed. This thesis examines studies analyzing ethical and practical issues in developing health AI tools, along with suggested solutions such as changes in data collection and study design.
Though the known dangers of AI are well described, none of them has a simple solution. After an extensive narrative literature review of AI scholarship, I present common issues discussed in the literature and propose solutions gleaned from it. One such solution is participatory design, which can guide the development of more ethical AI tools by involving the communities they affect from the beginning of a project. If participatory methods were consistently integrated into clinical trials, they could help resolve problems such as disconnects between factions of multidisciplinary research groups, patients' concerns about overbroad and irrelevant data collection, and even racial bias arising from data sources and uneven or unrepresentative sampling. Integrating participatory methods into clinical trials would produce better, more ethical first-generation AI tools, which is essential because the data from these tools will influence those created in the future. This improvement would, in turn, yield better future AI tools by improving and equalizing their performance across more treatment groups and by making health AI more useful to patients and physicians.