Responsibility and Assurance in Machine Learning
Dr Chris Burr, Alan Turing Institute
Responsible and Trustworthy Machine Learning
There has been a recent surge of interest in the ethics of data-driven technologies, including algorithmic systems that rely on some form of machine learning (ML). The initial stages of this interest were characterised by the establishment and specification of ethical principles, many of which were derived from bioethics and adapted to the unique challenges that intelligent systems pose to individuals and society.
More recently, there has been a pragmatic turn—within academia, public policy, and industry—towards the development of technical standards, policies, and guidance that can address these challenges. In some instances, this turn was motivated by dissatisfaction with the perceived lack of action-guidance in the proliferating number of ethical principles and frameworks.
In this presentation, I will introduce and explore some of the notable characteristics of this recent pragmatic turn, exposing gaps and areas that demand ongoing philosophical and ethical investigation. To structure this work, I will introduce an approach derived from argumentation theory, known as argument-based assurance, and demonstrate how it can be used to facilitate the responsible and trustworthy design, development, and deployment of ML-based systems.
In doing so, I will show how the conceptual work that characterised the earlier stages of interest in the ethics of data-driven technologies can provide principled, normative foundations for ongoing research and innovation.