
Research Group Benjamin Lange

Dr. Benjamin Lange
JRG Leader, Ethics of Artificial Intelligence

Benjamin Lange leads the MCML Junior Research Group 'Ethics of Artificial Intelligence' at LMU Munich.

He and his team investigate fundamental and applied ethical questions of AI and machine learning from a philosophical-analytical perspective. By organizing conferences, workshops, and panel discussions, the group pursues interdisciplinary exchange with researchers from philosophy and other fields; communicating the moral and social dimensions of AI to the wider public is a further focus. The JRG also works to transfer philosophical-ethical findings into practice, for example through collaborations and dialogue with industry and society.

Team members @MCML

PhD Students


Anna-Maria Brandtner

Ethics of Artificial Intelligence


Jesse de Jesus de Pinho Pinhal

Ethics of Artificial Intelligence

Recent News @MCML


14.02.2025

Benjamin Lange on ZDF Heute Journal


01.01.2025

MCML Researchers With 62 Papers in Highly-Ranked Journals


16.09.2024

MCML JRG Leader Benjamin Lange on Risks and Opportunities of AI

Publications @MCML

2025


[8]
B. Lange.
Beyond the Ivory Tower? The Practical Role of Ethicists in Business.
Artificial Intelligence, Entrepreneurship and Risk. Technikzukünfte, Wissenschaft und Gesellschaft / Futures of Technology, Science and Society (Apr. 2025). DOI
MCML Authors: Benjamin Lange (Dr., Ethics of Artificial Intelligence)


[7]
B. D. Earp, S. P. Mann, M. Aboy, E. Awad, M. Betzler, M. Botes, R. Calcott, M. Caraccio, N. Chater, M. Coeckelbergh, M. Constantinescu, H. Dabbagh, K. Devlin, X. Ding, V. Dranseika, J. A. C. Everett, R. Fan, F. Feroz, K. B. Francis, C. Friedman, O. Friedrich, I. Gabriel, I. Hannikainen, J. Hellmann, A. K. Jahrome, N. S. Janardhanan, P. Jurcys, A. Kappes, M. A. Khan, G. Kraft-Todd, M. Kroner Dale, S. M. Laham, B. Lange, M. Leuenberger, J. Lewis, P. Liu, D. M. Lyreskog, M. Maas, J. McMillan, E. Mihailov, T. Minssen, J. Teperowski Monrad, K. Muyskens, S. Myers, S. Nyholm, A. M. Owen, A. Puzio, C. Register, M. G. Reinecke, A. Safron, H. Shevlin, H. Shimizu, P. V. Treit, C. Voinea, K. Yan, A. Zahiu, R. Zhang, H. Zohny, W. Sinnott-Armstrong, I. Singh, J. Savulescu and M. S. Clark.
Relational Norms for Human-AI Cooperation.
Preprint (Feb. 2025). arXiv
Abstract

How we should design and interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. In human society, relationships such as teacher-student, parent-child, neighbors, siblings, or employer-employee are governed by specific norms that prescribe or proscribe cooperative functions including hierarchy, care, transaction, and mating. These norms shape our judgments of what is appropriate for each partner. For example, workplace norms may allow a boss to give orders to an employee, but not vice versa, reflecting hierarchical and transactional expectations. As AI agents and chatbots powered by large language models are increasingly designed to serve roles analogous to human positions - such as assistant, mental health provider, tutor, or romantic partner - it is imperative to examine whether and how human relational norms should extend to human-AI interactions. Our analysis explores how differences between AI systems and humans, such as the absence of conscious experience and immunity to fatigue, may affect an AI’s capacity to fulfill relationship-specific functions and adhere to corresponding norms. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.

MCML Authors: Benjamin Lange (Dr., Ethics of Artificial Intelligence), Sven Nyholm (Prof. Dr., Ethics of Artificial Intelligence)


[6]
B. Lange.
Moral parenthood and gestation: replies to Cordeiro, Murphy, Robinson and Baron.
Journal of Medical Ethics 51.2 (Jan. 2025). DOI
MCML Authors: Benjamin Lange (Dr., Ethics of Artificial Intelligence)


[5]
B. Lange.
Moral parenthood: not gestational.
Journal of Medical Ethics 51.2 (Jan. 2025). DOI
MCML Authors: Benjamin Lange (Dr., Ethics of Artificial Intelligence)


[4]
B. Lange.
Digital Duplicates and Collective Scarcity.
Philosophy and Technology 38.7 (Jan. 2025). DOI
MCML Authors: Benjamin Lange (Dr., Ethics of Artificial Intelligence)


2024


[3]
B. Lange.
The Future Audit Society? Automated Assurance and Auditing.
AISoLA 2024 - 2nd International Conference on Bridging the Gap Between AI and Reality. Crete, Greece, Oct 30-Nov 03, 2024. To be published.
MCML Authors: Benjamin Lange (Dr., Ethics of Artificial Intelligence)


[2]
G. Keeling, B. Lange, A. McCroskery, D. Weinberger, K. Pedersen and B. Zevenbergen.
Moral Imagination for Engineering Teams: The Technomoral Scenario.
The International Review of Information Ethics 34.1 (Oct. 2024). DOI
Abstract

‘Moral imagination’ is the capacity to register that one’s perspective on a decision-making situation is limited, and to imagine alternative perspectives that reveal new considerations or approaches. We have developed a Moral Imagination approach that aims to drive a culture of responsible innovation, ethical awareness, deliberation, decision-making, and commitment in organizations developing new technologies. We here present a case study that illustrates one key aspect of our approach – the technomoral scenario – as we have applied it in our work with product and engineering teams. Technomoral scenarios are fictional narratives that raise ethical issues surrounding the interaction between emerging technologies and society. Through facilitated roleplaying and discussion, participants are prompted to examine their own intentions, articulate justifications for actions, and consider the impact of decisions on various stakeholders. This process helps developers to reenvision their choices and responsibilities, ultimately contributing to a culture of responsible innovation.

MCML Authors: Benjamin Lange (Dr., Ethics of Artificial Intelligence)


[1]
F. Poszler and B. Lange.
The impact of intelligent decision-support systems on humans' ethical decision-making: A systematic literature review and an integrated framework.
Technological Forecasting and Social Change 204.123403 (Jul. 2024). DOI
Abstract

With the rise and public accessibility of AI-enabled decision-support systems, individuals outsource increasingly more of their decisions, even those that carry ethical dimensions. Considering this trend, scholars have highlighted that uncritical deference to these systems would be problematic and consequently called for investigations of the impact of pertinent technology on humans’ ethical decision-making. To this end, this article conducts a systematic review of existing scholarship and derives an integrated framework that demonstrates how intelligent decision-support systems (IDSSs) shape humans’ ethical decision-making. In particular, we identify resulting consequences on an individual level (i.e., deliberation enhancement, motivation enhancement, autonomy enhancement and action enhancement) and on a societal level (i.e., moral deskilling, restricted moral progress and moral responsibility gaps). We carve out two distinct methods/operation types (i.e., process-oriented and outcome-oriented navigation) that decision-support systems can deploy and postulate that these determine to what extent the previously stated consequences materialize. Overall, this study holds important theoretical and practical implications by establishing clarity in the conceptions, underlying mechanisms and (directions of) influences that can be expected when using particular IDSSs for ethical decisions.

MCML Authors: Benjamin Lange (Dr., Ethics of Artificial Intelligence)