Some authors argue that responsibility gaps can open up when no one has sufficient control over negative outcomes. With recent developments in Artificial Intelligence, the responsibility gap is thought to have widened, since AI technologies can produce negative outcomes over which no one has sufficient control. This paper aims to close the responsibility gap by recommending that responsibility be allocated according to a strategy built on two conditions: in scenarios where no one seems to be responsible for negative outcomes, responsibility should be allocated primarily to those who have (1) intentionally and voluntarily exercised prerequisite control that causally brings about the uncontrolled situation, and (2) expected personal benefits when exercising that prerequisite control. In principle, every case of the responsibility gap contains at least one agent who meets these two conditions, and the gap can thus be closed.