A Risk-based Approach to AI Regulation: System Categorisation and Explainable AI Practices

Authors

  • Keri Grieman* and Joseph Early**
    * Doctoral Researcher, The Alan Turing Institute and Queen Mary University of London, k.grieman@qmul.ac.uk
    ** Doctoral Researcher, The Alan Turing Institute and AIC Research Group, Department of Electronics and Computer Science, University of Southampton, J.A.Early@soton.ac.uk

DOI:

https://doi.org/10.2966/scrip.200123.56

Abstract

The regulation of artificial intelligence (AI) presents a challenging new legal frontier that is only just beginning to be addressed around the world. This article examines why AI is difficult to regulate, with a particular focus on understanding the reasoning behind automated decisions. We go on to propose a flexible, risk-based categorisation for AI based on system inputs and outputs, and incorporate explainable AI (XAI) into our novel categorisation to provide the beginnings of a functional and scalable AI regulatory framework.

Published

26-Feb-2023

Section

Research Article