
The Effect of Education in Prompt Engineering: Evidence From Journalists

MCML Authors

Stefan Feuerriegel

Prof. Dr.

Principal Investigator

Abstract

Large language models (LLMs) are increasingly used in daily work. In this paper, we analyze whether training in prompt engineering can improve users' interactions with LLMs. For this, we conducted a field experiment in which we asked journalists to write short texts before and after training in prompt engineering. We then analyzed the effect of training on three dimensions: (1) the user experience of journalists when interacting with LLMs, (2) the accuracy of the texts (assessed by a domain expert), and (3) reader perception, such as clarity, engagement, and other text quality dimensions (assessed by non-expert readers). Our results show: (1) Training improved the perceived expertise of journalists but also decreased the perceived helpfulness of LLM use. (2) The effect on accuracy varied with the difficulty of the task. (3) The impact of training on reader perception was mixed across the different text quality dimensions.

Preprint

Sep. 2024

Authors

A. Bashardoust • Y. Feng • D. Geißler • S. Feuerriegel • Y. R. Shrestha

Research Area

A1 | Statistical Foundations & Explainability

BibTeX Key: BFG+24
