Advanced AI Assistants That Act on Our Behalf May Not Be Ethically or Legally Feasible

MCML Authors

Sven Nyholm

Prof. Dr.

Principal Investigator

Abstract

Google and OpenAI have recently announced major product launches involving artificial intelligence (AI) agents based on large language models (LLMs) and other generative models. Notably, these are envisioned to function as personalized ‘advanced assistants’. With other companies following suit, such AI agents seem poised to be the next big thing in consumer technology, with the potential to disrupt work and social environments. To underscore the importance of these developments, Google DeepMind recently published an extensive report on the topic, which they describe as “one of [their] largest ethics foresight projects to date”¹. The report defines AI assistants functionally as “artificial agent[s] with a natural language interface, the function of which is to plan and execute sequences of actions on the user’s behalf across one or more domains and in line with the user’s expectations”. The question the Google DeepMind researchers argue we should be pondering is ‘what kind of AI assistants do we want to see in the world?’. But a more fundamental question is whether AI assistants are feasible, given basic ethical and legal requirements. Key issues that will impact the deployment of AI agents concern liability and the ability of users to effectively transfer some of their agential powers to AI assistants.

article


Nature Machine Intelligence

6 Jul 2024

Authors

S. Milano • S. Nyholm

Links

DOI

Research Area

 C5 | Humane AI

BibTeX key: MN24