
Research Group Benjamin Lange



Dr. Benjamin Lange

JRG Leader, Ethics of Artificial Intelligence

Benjamin Lange leads the MCML Junior Research Group ‘Ethics of Artificial Intelligence’ at LMU Munich.

He and his team conduct research into fundamental and application-oriented ethical issues in AI and ML, approaching both foundational and practical questions of AI ethics from a philosophical-analytical perspective. By organizing conferences, workshops and panel discussions, the group seeks interdisciplinary exchange with researchers from philosophy and other disciplines. A further focus is communication with the wider public about the moral and social aspects of AI. Another important task of the JRG is the transfer of philosophical-ethical findings into practice, for example through collaborations and dialogue with industry and society.

Team members @MCML

PhD Students


Anna-Maria Brandtner

Ethics of Artificial Intelligence


Jesse de Jesus de Pinho Pinhal

Ethics of Artificial Intelligence

Publications @MCML

2025


[6]
B. Lange.
Moral parenthood and gestation: replies to Cordeiro, Murphy, Robinson and Baron.
Journal of Medical Ethics 51.2 (Jan. 2025).
Abstract

I am grateful to James Cordeiro, Timothy Murphy, Heloise Robinson and Teresa Baron for their perceptive and stimulating comments on my article in this journal. In what follows, I seek to respond to some of the main points raised in each commentary.

MCML Authors: Benjamin Lange


[5]
B. Lange.
Moral parenthood: not gestational.
Journal of Medical Ethics 51.2 (Jan. 2025).
Abstract

Parenting our biological children is a centrally important matter, but how, if at all, can it be justified? According to an influential contemporary line of thinking, the acquisition by parents of a moral right to parent their biological children should be grounded by appeal to the value of the intimate emotional relationship that gestation facilitates between a newborn and a gestational procreator. I evaluate two arguments in defence of this proposal and argue that both are unconvincing. Data are available in a public, open access repository.

MCML Authors: Benjamin Lange


[4]
B. Lange.
Duplicates and Collective Scarcity.
Philosophy and Technology 38.7 (Jan. 2025).
Abstract

Digital duplicates reduce the scarcity of individuals and thus may impact their instrumental and intrinsic value. I here expand upon this idea by introducing the notion of collective scarcity, which pertains to the limitations faced by social groups in maintaining their size, cohesion and function.

MCML Authors: Benjamin Lange


2024


[3]
B. Lange.
The Future Audit Society? Automated Assurance and Auditing.
AISoLA 2024 - 2nd International Conference on Bridging the Gap Between AI and Reality. Crete, Greece, Oct 30-Nov 03, 2024. To be published.
Abstract

AI audits are a key mechanism for responsible AI governance. They have been proposed in a variety of laws and regulations, standardized frameworks, and guidelines for industry best practice as a mechanism to facilitate public trust and accountability for AI system developers and deployers. Though AI auditing for the purpose of compliance and assurance with normative requirements currently lacks defined norms and standardized practices, some systematic assurance AI audit methodologies are emerging that are modelled on financial auditing practices. In the spirit of financial audits, which aim to uphold trust in the integrity and proper functioning of the financial markets for stakeholders, AI audits, on this line of reasoning, aim to provide assurance to their stakeholders about AI organizations’ ability to govern their algorithms in ways that mitigate harms and uphold human values. Against this backdrop, the nature of the auditing industry is currently evolving. Traditional financial auditing practices are becoming increasingly automated by AI and, given the complexity of some AI systems themselves and the high degree of assurance they will require, the future of AI auditing itself will foreseeably be automated. This paper takes a first step toward exploring this picture. I argue that current automated auditing trends risk undermining the justificatory plausibility of auditing as an accountability and trust-facilitating mechanism. In particular, I suggest that this leads to a continuous desire for verification, in which the epistemic obscurity of auditing assurance (the nature of the judgment provided by auditors) increases and the operational capability of audits to achieve their aims decreases.

MCML Authors: Benjamin Lange


[2]
G. Keeling, B. Lange, A. McCroskery, K. Pedersen, D. Weinberger and B. Zevenbergen.
Moral Imagination for Engineering Teams: The Technomoral Scenario.
The International Review of Information Ethics 34.1 (Oct. 2024).
Abstract

‘Moral imagination’ is the capacity to register that one’s perspective on a decision-making situation is limited, and to imagine alternative perspectives that reveal new considerations or approaches. We have developed a Moral Imagination approach that aims to drive a culture of responsible innovation, ethical awareness, deliberation, decision-making, and commitment in organizations developing new technologies. We here present a case study that illustrates one key aspect of our approach – the technomoral scenario – as we have applied it in our work with product and engineering teams. Technomoral scenarios are fictional narratives that raise ethical issues surrounding the interaction between emerging technologies and society. Through facilitated roleplaying and discussion, participants are prompted to examine their own intentions, articulate justifications for actions, and consider the impact of decisions on various stakeholders. This process helps developers to reenvision their choices and responsibilities, ultimately contributing to a culture of responsible innovation.

MCML Authors: Benjamin Lange


[1]
F. Poszler and B. Lange.
The impact of intelligent decision-support systems on humans' ethical decision-making: A systematic literature review and an integrated framework.
Technological Forecasting and Social Change 204.123403 (Jul. 2024).
Abstract

With the rise and public accessibility of AI-enabled decision-support systems, individuals outsource increasingly more of their decisions, even those that carry ethical dimensions. Considering this trend, scholars have highlighted that uncritical deference to these systems would be problematic and consequently called for investigations of the impact of pertinent technology on humans’ ethical decision-making. To this end, this article conducts a systematic review of existing scholarship and derives an integrated framework that demonstrates how intelligent decision-support systems (IDSSs) shape humans’ ethical decision-making. In particular, we identify resulting consequences on an individual level (i.e., deliberation enhancement, motivation enhancement, autonomy enhancement and action enhancement) and on a societal level (i.e., moral deskilling, restricted moral progress and moral responsibility gaps). We carve out two distinct methods/operation types (i.e., process-oriented and outcome-oriented navigation) that decision-support systems can deploy and postulate that these determine to what extent the previously stated consequences materialize. Overall, this study holds important theoretical and practical implications by establishing clarity in the conceptions, underlying mechanisms and (directions of) influences that can be expected when using particular IDSSs for ethical decisions.

MCML Authors: Benjamin Lange