leads the MCML Junior Research Group 'Ethics of Artificial Intelligence' at LMU Munich.
He and his team conduct research on fundamental and application-related ethical issues in AI and ML, addressing both foundational and practical questions of AI ethics from a philosophical-analytical perspective. By organizing conferences, workshops, and panel discussions, the group seeks interdisciplinary exchange with researchers from philosophy and other disciplines. An important focus is also communicating the moral and social aspects of AI to the wider public. A further task of the JRG is the transfer of philosophical-ethical findings into practice, for example through collaborations and dialogue with industry and society.
AI audits are a key mechanism for responsible AI governance. They have been proposed in a variety of laws and regulations, standardized frameworks, and guidelines for industry best practices as a mechanism to facilitate public trust in, and accountability for, AI system developers and deployers. Although AI auditing for the purpose of compliance and assurance with normative requirements currently lacks defined norms and standardized practices, some systematic assurance AI audit methodologies are emerging that are modelled on financial auditing practices. In the spirit of financial audits, which aim to uphold stakeholders' trust in the integrity and proper functioning of financial markets, AI audits, on this line of reasoning, aim to provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values. Against this backdrop, the auditing industry itself is evolving. Traditional financial auditing practices are becoming increasingly automated by AI, and, given the complexity of some AI systems and the high degree of assurance they will require, the future of AI auditing itself will foreseeably be automated. This paper takes a first step toward exploring this picture. I argue that current automated auditing trends risk undermining the justificatory plausibility of auditing as an accountability- and trust-facilitating mechanism. In particular, I suggest that this leads to a continuous desire for verification, in which the epistemic obscurity of auditing assurance – the nature of the judgment provided by auditors – increases while the operational capability of audits to achieve their aims decreases.