
Research Group Sven Nyholm



Sven Nyholm

Prof. Dr.

Principal Investigator

Ethics of Artificial Intelligence

Sven Nyholm is Professor of Ethics of Artificial Intelligence at LMU Munich.

His research and teaching encompass applied ethics (particularly, but not exclusively, the ethics of artificial intelligence), practical philosophy, and the philosophy of technology. He is currently working on his fourth book, which will be about the ethics of artificial intelligence. His previous books were concerned with Kantian ethics, the ethics of human-robot interaction, and the ethics of technology.

Team members @MCML

PhD Students


Dilin Gong

Ethics of Artificial Intelligence

Recent News @MCML


26.11.2024

Artificial Intelligence as a Radio Host


10.07.2024

Our PI Sven Nyholm About AI in Government Services


12.06.2024

Sven Nyholm About the Role of AI in India's Political Campaigns


28.04.2024

Call for Abstracts


11.04.2024

Call for Abstracts - MCML Conference on "The Ethics of Conversational Agents & Generative AI"

Publications @MCML

2025


[4]
B. D. Earp, S. P. Mann, M. Aboy, E. Awad, M. Betzler, M. Botes, R. Calcott, M. Caraccio, N. Chater, M. Coeckelbergh, M. Constantinescu, H. Dabbagh, K. Devlin, X. Ding, V. Dranseika, J. A. C. Everett, R. Fan, F. Feroz, K. B. Francis, C. Friedman, O. Friedrich, I. Gabriel, I. Hannikainen, J. Hellmann, A. K. Jahrome, N. S. Janardhanan, P. Jurcys, A. Kappes, M. A. Khan, G. Kraft-Todd, M. Kroner Dale, S. M. Laham, B. Lange, M. Leuenberger, J. Lewis, P. Liu, D. M. Lyreskog, M. Maas, J. McMillan, E. Mihailov, T. Minssen, J. Teperowski Monrad, K. Muyskens, S. Myers, S. Nyholm, A. M. Owen, A. Puzio, C. Register, M. G. Reinecke, A. Safron, H. Shevlin, H. Shimizu, P. V. Treit, C. Voinea, K. Yan, A. Zahiu, R. Zhang, H. Zohny, W. Sinnott-Armstrong, I. Singh, J. Savulescu and M. S. Clark.
Relational Norms for Human-AI Cooperation.
Preprint (Feb. 2025). arXiv
Abstract

How we should design and interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. In human society, relationships such as teacher-student, parent-child, neighbors, siblings, or employer-employee are governed by specific norms that prescribe or proscribe cooperative functions including hierarchy, care, transaction, and mating. These norms shape our judgments of what is appropriate for each partner. For example, workplace norms may allow a boss to give orders to an employee, but not vice versa, reflecting hierarchical and transactional expectations. As AI agents and chatbots powered by large language models are increasingly designed to serve roles analogous to human positions - such as assistant, mental health provider, tutor, or romantic partner - it is imperative to examine whether and how human relational norms should extend to human-AI interactions. Our analysis explores how differences between AI systems and humans, such as the absence of conscious experience and immunity to fatigue, may affect an AI’s capacity to fulfill relationship-specific functions and adhere to corresponding norms. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.

MCML Authors

Benjamin Lange

Dr.

Ethics of Artificial Intelligence


Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence


2023


[3]
B. H. Lang, S. Nyholm and J. Blumenthal-Barby.
Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution.
Digital Society 2.52 (Nov. 2023). DOI
Abstract

As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated within domains of traditionally human endeavor, many are raising questions over how responsibility (be it moral, legal, or causal) can be understood for an AI’s actions or influence on an outcome. So-called ‘responsibility gaps’ occur whenever there exists an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human beings and commits an error. Healthcare administration is an industry ripe for responsibility gaps produced by these kinds of AI. The moral stakes of healthcare are often life and death, and the demand for reducing clinical uncertainty while standardizing care incentivizes the development and integration of AI diagnosticians and prognosticators. In this paper, we argue that (1) responsibility gaps are generated by ‘black box’ healthcare AI, (2) the presence of responsibility gaps (if unaddressed) creates serious moral problems, (3) a suitable solution is for relevant stakeholders to voluntarily responsibilize the gaps, taking on some moral responsibility for things they are not, strictly speaking, blameworthy for, and (4) should this solution be taken, black box healthcare AI will be permissible in the provision of healthcare.

MCML Authors

Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence


[2]
J. Smids, H. Berkers, P. Le Blanc, S. Rispens and S. Nyholm.
Employers Have a Duty of Beneficence to Design for Meaningful Work: A General Argument and Logistics Warehouses as a Case Study.
The Journal of Ethics (Oct. 2023). DOI
Abstract

Artificial intelligence-driven technology increasingly shapes work practices and, accordingly, employees’ opportunities for meaningful work (MW). In our paper, we identify five dimensions of MW: pursuing a purpose, social relationships, exercising skills and self-development, autonomy, self-esteem and recognition. Because MW is an important good, lacking opportunities for MW is a serious disadvantage. Therefore, we need to know to what extent employers have a duty to provide this good to their employees. We hold that employers have a duty of beneficence to design for opportunities for MW when implementing AI-technology in the workplace. We argue that this duty of beneficence is supported by the three major ethical theories, namely, Kantian ethics, consequentialism, and virtue ethics. We defend this duty against two objections, including the view that it is incompatible with the shareholder theory of the firm. We then employ the five dimensions of MW as our analytical lens to investigate how AI-based technological innovation in logistic warehouses has an impact, both positively and negatively, on MW, and illustrate that design for MW is feasible. We further support this practical feasibility with the help of insights from organizational psychology. We end by discussing how AI-based technology has an impact both on meaningful work (often seen as an aspirational goal) and decent work (generally seen as a matter of justice). Accordingly, ethical reflection on meaningful and decent work should become more integrated to do justice to how AI-technology inevitably shapes both simultaneously.

MCML Authors

Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence


[1]
S. Nyholm.
Is Academic Enhancement Possible by Means of Generative AI-Based Digital Twins?
American Journal of Bioethics 23.10 (Sep. 2023). DOI
MCML Authors

Sven Nyholm

Prof. Dr.

Ethics of Artificial Intelligence