
Comparing the Willingness to Share for Human-Generated vs. AI-Generated Fake News

MCML Authors


Stefan Feuerriegel

Prof. Dr.

Principal Investigator

Abstract

Generative artificial intelligence (AI) poses large risks for society when it is used to create fake news. A crucial factor for fake news to go viral on social media is that users share such content. Here, we aim to shed light on the sharing behavior of users across human-generated vs. AI-generated fake news. Specifically, we study: (1) What is the perceived veracity of human-generated fake news vs. AI-generated fake news? (2) What is the user's willingness to share human-generated fake news vs. AI-generated fake news on social media? (3) What socio-economic characteristics make users fall for AI-generated fake news? To this end, we conducted a pre-registered online experiment with N = 988 subjects and 20 fake news items from the COVID-19 pandemic generated by GPT-4 vs. humans. Our findings show that AI-generated fake news is perceived as less accurate than human-generated fake news, but both tend to be shared equally. Further, several socio-economic factors explain who falls for AI-generated fake news.



CSCW 2024

27th ACM SIGCHI Conference on Computer-Supported Cooperative Work and Social Computing. San José, Costa Rica, Nov 09-13, 2024.
A Conference

Authors

A. Bashardoust • S. Feuerriegel • Y. R. Shrestha

Links

DOI

Research Area

 A1 | Statistical Foundations & Explainability

BibTeXKey: BFS+24
