Impactful fact-checking for science: communication strategies applying artificial intelligence as the source of information

Date

2022-08-12

Authors

Moon, Won-Ki

Abstract

This research explores how labeling an information source as AI can improve the message credibility of science information and reduce partisan bias in the evaluation of fact-checking messages in the context of science communication. To that end, two moderated mediation models were constructed and tested. In Study 1, experimental results suggest that AI sources can conditionally assist scientists in developing more persuasive science communication messages. Results also indicated that trust in machines moderates the effect of labeling the source as AI. Additionally, the experiment provided empirical evidence that public trust in science may matter more than source effects when individuals judge the credibility of science information. In Study 2, an experiment was designed to examine information processing of fact-checking messages about politicized science issues. The research model showed that individuals’ polarized perceptions of the fact-checking message decreased when they recognized that the fact-checking was created by AI, whereas polarization persisted when the source of the fact-checking was labeled as human scientists. This study extends the theoretical discussion on AI in communication by shedding light on the role of AI in human judgment and decision-making. It also provides practical implications for communication practitioners.
