
Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution

MCML Authors


Sven Nyholm

Prof. Dr.

Principal Investigator

Abstract

As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated within domains of traditionally human endeavor, many are raising questions over how responsibility (be it moral, legal, or causal) can be understood for an AI’s actions or influence on an outcome. So-called 'responsibility gaps' occur whenever there exists an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human beings and commits an error. Healthcare administration is an industry ripe for responsibility gaps produced by these kinds of AI. The moral stakes of healthcare are often life and death, and the demand for reducing clinical uncertainty while standardizing care incentivizes the development and integration of AI diagnosticians and prognosticators. In this paper, we argue that (1) responsibility gaps are generated by 'black box' healthcare AI, (2) the presence of responsibility gaps (if unaddressed) creates serious moral problems, (3) a suitable solution is for relevant stakeholders to voluntarily responsibilize the gaps, taking on some moral responsibility for things they are not, strictly speaking, blameworthy for, and (4) should this solution be taken, black box healthcare AI will be permissible in the provision of healthcare.

Article


Digital Society

2.52 (Nov. 2023).

Authors

B. H. Lang • S. Nyholm • J. Blumenthal-Barby

Links

DOI

Research Area

C5 | Humane AI

BibTeX Key: LNB23
