Tegan Cohen
Lecturer, Law School, Queensland University of Technology, Brisbane, Australia, Tegan.cohen@qut.edu.au. The author would like to thank Dr Henry Fraser, Associate Professor Mark Burdon, Dr Ariadna Matamoros-Fernández and Dr Kylie Pappalardo, as well as the two anonymous reviewers, for their time and insightful comments on an earlier draft of this article.
AI scientists are rapidly developing new approaches to understanding and exploiting vulnerabilities in human decision-making. As governments around the world grapple with the threat posed by manipulative AI systems, the European Commission (EC) has taken a significant step by proposing a new sui generis legal regime (the AI Act) which prohibits certain systems with a 'significant' potential to manipulate. Specifically, the EC has proposed prohibitions on AI systems which deploy subliminal techniques or exploit vulnerabilities of specific groups. This article analyses the EC's proposal and finds that its approach is not tailored to the capabilities of manipulative AI. The concepts of subliminal techniques, group-level vulnerability, and transparency, which are central to the EC's proposed response, are inadequate to meet a threat grounded in growing capabilities to surface and exploit vulnerabilities in individual decision-making processes, rendering individuals susceptible to hidden influence. To secure the benefits of AI while meeting the heightened threat of manipulation, lawmakers must adopt frameworks better suited to the new capabilities for manipulation enabled by advances in machine learning.