Making sense of artificial intelligence

We hear a lot about machine learning and automated decision-making – how can we open up the operation of these processes, both for the organisations that use these methods and for the publics affected by them?

The big issues

Artificial intelligence (AI) and machine learning aren’t far-off technologies; machines are using algorithms to make countless decisions affecting people right now. Recent applications have prompted growing concern about such machine-driven, automated decision-making. Take, for example, facial recognition systems that fail to correctly identify the faces of people of colour, or algorithms that show lower-paying job ads to women.

The decisions machines make can have far-reaching impacts and are too important to be made without scrutiny. We need to work out how machine learning and automated decision-making might work fairly for all of us, and that means building a more engaged society with a diverse range of voices. To get there, we must foster better understanding of these processes in action and ensure that organisations working with machine learning can communicate their work in easy-to-understand ways.

Our response

Our three-year project focuses on two connected strands. The first is a collaboration between BDFI and LV=GI, a major personal lines insurance provider. We’re working with the data science team at LV=GI to discover how they develop and deploy machine learning across the organisation, from lab to call centre.

The second is a partnership with two Bristol organisations – Knowle West Media Centre and Black South West Network – to co-design ways of explaining machine learning decision processes. Our recent survey of digital inequality in the Knowle West area of the city revealed deep suspicion of artificial intelligence: few in the community were confident that increased machine learning and algorithmic decision-making would be a good thing. Working with our partners, we’ll further investigate how their communities understand and engage with the idea of AI. Building on this, we’ll co-create a participatory programme exploring how machine learning decisions can be communicated in transparent and actionable ways.

The benefits

LV=GI places significant emphasis on using machine learning fairly, ethically and transparently. We aim to give the company an external, academic view of its machine learning practice, helping it strengthen these efforts and feeding our findings into its strategy.

Our community partners want to open up discussions about the relationships between the groups they represent and increasingly technical decision-making processes. By tailoring and articulating knowledge about machine learning and automated decision-making, they aim to give people a greater say in how these technologies develop.

Across both strands of the project, we want to empower people who may not have technical expertise to learn and talk about the machine learning systems they encounter. It’s therefore vital that we find engaging ways to share our results with our partner groups and, in future, extend our partnerships to other communities.

How is BDFI involved?

We are managing this project, which is funded by one of our partners, LV General Insurance Group (LV=GI).

Researchers

  • Prof. Susan Halford (BDFI)
  • Dr Marisela Gutierrez Lopez (BDFI)
  • Dr Venura Perera (LV=GI)
  • Dr David Hopkinson (LV=GI)

Collaborators

  • LV General Insurance Group
  • School of Sociology, Politics and International Relations
  • Knowle West Media Centre
  • Black South West Network

Funding

Project: Explainable AI

Funded by: LV General Insurance Group; Bristol Digital Futures Institute
