leads the MCML Junior Research Group ‘Ethics of Artificial Intelligence’ at LMU Munich.
He and his team investigate fundamental and application-related ethical questions of AI and ML from a philosophical-analytical perspective. By organizing conferences, workshops, and panel discussions, the group fosters interdisciplinary exchange with researchers from philosophy and other disciplines. A further focus is communicating the moral and social dimensions of AI to the wider public. The JRG also works to transfer philosophical-ethical findings into practice, for example through collaboration and dialogue with industry and society.
How we should design and interact with social artificial intelligence depends on the socio-relational role the AI is meant to emulate or occupy. In human society, relationships such as teacher-student, parent-child, neighbors, siblings, or employer-employee are governed by specific norms that prescribe or proscribe cooperative functions, including hierarchy, care, transaction, and mating. These norms shape our judgments of what is appropriate for each partner. For example, workplace norms may allow a boss to give orders to an employee, but not vice versa, reflecting hierarchical and transactional expectations. As AI agents and chatbots powered by large language models are increasingly designed to serve roles analogous to human positions, such as assistant, mental health provider, tutor, or romantic partner, it is imperative to examine whether and how human relational norms should extend to human-AI interactions. Our analysis explores how differences between AI systems and humans, such as the absence of conscious experience and immunity to fatigue, may affect an AI’s capacity to fulfill relationship-specific functions and to adhere to the corresponding norms. This analysis, a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While AI systems can offer significant benefits in certain socio-relational roles, such as increased availability and consistency, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and conducive to human well-being.
‘Moral imagination’ is the capacity to register that one’s perspective on a decision-making situation is limited, and to imagine alternative perspectives that reveal new considerations or approaches. We have developed a Moral Imagination approach that aims to drive a culture of responsible innovation, ethical awareness, deliberation, decision-making, and commitment in organizations developing new technologies. Here we present a case study that illustrates one key aspect of our approach – the technomoral scenario – as we have applied it in our work with product and engineering teams. Technomoral scenarios are fictional narratives that raise ethical issues surrounding the interaction between emerging technologies and society. Through facilitated role-playing and discussion, participants are prompted to examine their own intentions, articulate justifications for their actions, and consider the impact of their decisions on various stakeholders. This process helps developers to re-envision their choices and responsibilities, ultimately contributing to a culture of responsible innovation.
With the rise and public accessibility of AI-enabled decision-support systems, individuals increasingly outsource their decisions, even those that carry ethical dimensions. Considering this trend, scholars have highlighted that uncritical deference to these systems would be problematic and have consequently called for investigations of the impact of such technology on humans’ ethical decision-making. To this end, this article conducts a systematic review of existing scholarship and derives an integrated framework that demonstrates how intelligent decision-support systems (IDSSs) shape humans’ ethical decision-making. In particular, we identify resulting consequences on an individual level (i.e., deliberation enhancement, motivation enhancement, autonomy enhancement, and action enhancement) and on a societal level (i.e., moral deskilling, restricted moral progress, and moral responsibility gaps). We distinguish two types of operation (i.e., process-oriented and outcome-oriented navigation) that decision-support systems can deploy and postulate that these determine the extent to which the previously stated consequences materialize. Overall, this study holds important theoretical and practical implications by establishing clarity about the conceptions, underlying mechanisms, and (directions of) influence that can be expected when particular IDSSs are used for ethical decisions.
Last modified: 2024-12-27