Patient and public involvement to build trust in artificial intelligence

Hosted by the Bristol Data Ethics Club

This week we're reading Patient and public involvement to build trust in artificial intelligence: A framework, tools, and case studies by Dr Soumya Banerjee and colleagues, which discusses how research advisory groups can be used to build trust in, and create more ethical, AI health research.

The piece this week is a little longer than usual, so if you're short on time you could stick to the following two sections:

  1. Framework for building trust and typical patient concerns
  2. Discussion

At the meeting we'll invite an attendee to summarise the content, so we would welcome a ~3 minute summary of the piece (or those sections) if you would like to volunteer.

As always, there will then be a chance to talk about whatever we like in breakout groups, before coming together for a general discussion. We have some specific questions to think about while you're reading, to help kickstart the discussion:

  • Do you agree that it is important to increase public and patient trust in AI?
  • What would you consider good evidence that research advisory groups are successful in their aims of more ethical and/or trustworthy research?

  • Is there anything that you think research advisory groups should not have a say in? Or anything that they don't currently have a say in that you think they should?

Click to join the Zoom meeting