The Bristol AI Summer School (BAIS) 2025

The Interactive AI and the Practice-Oriented AI CDTs at the University of Bristol are holding an in-person summer school from 9 to 11 September 2025. Over three days, a range of experts from both industry and academia will discuss the fundamentals of, and the latest progress in, key areas of AI. Please note that day 3 of the event is for our Interactive AI and Practice-Oriented AI CDT students and staff only. This year, the Summer School will focus on AI for health.

This year's event will take place at Wills Conference Hall in Bristol.

Programme:

Public Programme (subject to change)

Tuesday 9 September, 2025

"Subtractive Mixture Models: Representation, Learning and Inference"

Abstract: 

Mixture models are traditionally represented and learned by adding several distributions as components. Allowing mixtures to subtract probability mass or density can drastically reduce the number of components needed to model complex distributions. However, learning such subtractive mixtures while ensuring they still encode a non-negative function is challenging. We investigate how to learn and perform inference on deep subtractive mixtures by squaring them. We do this in the framework of probabilistic circuits (PCs), which enables us to represent tensorized mixtures and generalize several other subtractive models such as positive semi-definite kernel models and Born machines. PCs also enable several applications of reliable neuro-symbolic AI, from controlled generation with LLMs to knowledge-compliant concept bottleneck models. We theoretically prove that the class of squared circuits allowing subtractions can be exponentially more expressive than traditional additive mixtures. We empirically show this increased expressiveness on a series of real-world distribution estimation tasks and discuss which inference scenarios are tractable with this new class of circuits. Finally, I will talk about how to use these subtractive mixtures for approximate inference when plugged into Monte Carlo and importance sampling estimators.
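As a toy illustration of the squaring idea behind subtractive mixtures (a sketch for intuition only, not the speaker's implementation; all component choices and parameters below are illustrative assumptions), the following Python snippet squares a two-component linear combination with a negative weight. The linear combination subtracts mass and can go negative, but its square is a valid non-negative unnormalized density by construction:

```python
import numpy as np

# Toy sketch of a "squared" subtractive mixture in 1-D.
# Component choices and parameters are illustrative, not from the talk.

def gaussian(x, mu, sigma):
    """Gaussian pdf evaluated pointwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def squared_mixture(x, weights, mus, sigmas):
    """Square of a linear combination of Gaussians.

    The linear combination may go negative (it subtracts mass),
    but its square is non-negative everywhere by construction.
    """
    lin = sum(w * gaussian(x, m, s) for w, m, s in zip(weights, mus, sigmas))
    return lin ** 2

x = np.linspace(-6.0, 6.0, 2001)
# A wide Gaussian minus a narrow one: the squared combination vanishes
# where the two terms cancel, carving shapes that a purely additive
# mixture would need many components to approximate.
f = squared_mixture(x, weights=[1.0, -0.8], mus=[0.0, 0.0], sigmas=[2.0, 0.5])

dx = x[1] - x[0]
Z = np.sum(f) * dx  # normalizer computed numerically here; for circuits
                    # this integral is available in closed form
density = f / Z
```

In the circuit setting the normalizing constant is computed exactly by the structure of the circuit rather than by numerical integration as in this sketch.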

Bio:

Antonio Vergari loves probabilistic machine learning and equally loves to tease the probabilistic machine learning community on how we desperately need efficient and reliable machine learning systems. He would like to find a unifying framework for complex reasoning. Many days he believes this can be done with some form of circuits.

"Solutions for Stroke Imaging: The Journey from Academia towards Deployment"

Abstract:

In this talk, Jeff will share the journey of turning academic research into the spin-out company IngeniumAI. Originally a postdoctoral research project led by Jeff, IngeniumAI is dedicated to improving the detection of uncommon but deadly diseases by harnessing underutilised medical imaging data. Jeff will outline the path from initiating the research to forming a company and managing its day-to-day operations. The talk will explore key challenges in developing and deploying Software as a Medical Device (SaMD) and AI as a Medical Device (AIaMD) products, including funding, regulatory, and market access considerations.

Bio:

Dr Jeff Clark is co-founder and CEO of IngeniumAI, a spin-out company from the University of Bath and Royal United Hospitals Bath NHS Trust. Jeff has a background in medical engineering, with a PhD in 4D imaging for tissue regeneration from Imperial College London; before this he developed HIV diagnostics for remote regions at Cambridge and portable anaesthetic delivery systems at Cardiff. Alongside his role at IngeniumAI, Jeff is a part-time Research Fellow at the University of Bristol, where he applies machine learning to tackle pressing challenges for social good. In his spare time Jeff runs TheSavvyScientist.com, a resource aiming to make PhDs more accessible.

Wednesday 10 September, 2025

"Vision-Language Models for Chest X-Rays"

Abstract: 

The chest X-ray (CXR) is a widely requested imaging test used as a quick and non-invasive procedure to examine various pathologies in the chest cavity. To interpret these scans, radiologists integrate visual findings with multiple sources of patient information, such as medical history, and document the relevant findings in free-text radiology reports. However, an ever-increasing volume of imaging studies, coupled with a global shortage of radiologists, has created significant diagnostic backlogs that can delay treatment and negatively impact patient outcomes. This talk explores the potential of Vision-Language Models (VLMs) to help address this challenge. I will present different approaches to enhancing VLM performance for key radiological applications, focusing specifically on automatically generating accurate radiology reports and answering clinically relevant questions from CXRs.

"Medical AI Safety"

"From Data to Decisions: Modeling Time Series in Healthcare Applications"

Bio:

Yu Chen, PhD, is currently an AI/ML Engineer at GSK and was formerly a Research Associate in Machine Learning for Healthcare at Imperial College London. She received her PhD in Computer Science from the University of Bristol, supervised by Prof. Peter Flach and Dr. Tom Diethe. Her research focuses on developing advanced methods for time series forecasting and interpretable machine learning, with applications to clinical decision support and healthcare data analysis.

"Beyond XAI: Explainable Data-driven Modelling for Human Reasoning and Decision Support"

Abstract:

Insights from the social sciences have transformed explainable artificial intelligence (XAI) from a largely technical into a more human-centred discipline, enabling diverse stakeholders, rather than technical experts alone, to benefit from its developments. The focus of explainability research itself has nonetheless remained largely unchanged: to help people understand the operation and output of predictive models. This, however, may not be the most consequential function of such systems; they can instead be adapted to complement, augment and enhance the abilities of humans, rather than (fully) automating their various roles, in an explainable way. In this talk Kacper will explore how we can reimagine XAI by drawing upon a broad range of relevant interdisciplinary findings. The resulting, more comprehensive conceptualisation of the entire research field promises to be better aligned with humans by supporting their reasoning and decision-making in a data-driven way. As the talk will show, medical applications, as well as other high-stakes domains, stand to benefit greatly from such a shift in perspective.

Bio:

Kacper is a researcher in the Faculty of Informatics at USI in Lugano, Switzerland. His main research focus is transparency – interpretability and explainability – of data-driven predictive systems based on artificial intelligence and machine learning algorithms intended for medical applications. Prior to joining USI, he worked with the Medical Data Science group at ETH Zurich. Before that, he was a Research Fellow at the ARC Centre of Excellence for Automated Decision-Making and Society, affiliated with RMIT University in Melbourne, Australia. Earlier still, he held numerous research positions at the University of Bristol, United Kingdom, working on diverse AI and ML projects. Kacper holds a Master's degree in Mathematics and Computer Science and a doctorate in Computer Science from the University of Bristol.

"Research Pathways for Data Science and AI in the NHS"

Abstract:

The NHS is a data-rich environment, with information spanning clinical records, imaging, genomics, operational metrics, and public health trends. Harnessing this data for actionable insights and efficiencies is key to improving patient care and streamlining services. Yet, the path from research to real-world impact is shaped by complex challenges: governance, ethics, interoperability, and an exceptionally fast pace of technological change.

In this talk, we will explore the kinds of research needed to enable data science and AI to thrive in the NHS, alongside practical pathways for turning academic innovation into deployed solutions that make a difference in healthcare. We'll also highlight the NHS England PhD internship scheme, which will open for applications in the autumn.

Thursday 11 September, 2025 - Student-centred day for IAI and PrO-AI CDT students

"Presenting with Confidence and Flair"

For many of us, presenting to an audience of peers and familiar academics is gruelling enough; speaking at a conference is far worse. The sense of exposure, the inner voice that seems to offer nothing but criticism, the prospect of being asked tough questions – all of these can be paralysing.

This workshop equips research students with strategies to manage their nerves, offering ways to hold an audience's attention, convey information and ideas clearly, and respond to questions with authority and composure. Participants will have the opportunity to analyse their presentation style and explore ways of developing it.

"Trusted Research Workshop"

Trusted Research is becoming increasingly important as government oversight of university-generated research grows. Recent geopolitical developments have heightened concerns around national security, particularly in relation to sensitive technologies and software developed within universities. Conducting due diligence on who we are collaborating with is a key priority, especially when it comes to ensuring the integrity of international research collaborations. This workshop will highlight the main risk indicators related to Export Control and other national security legislation, and explore how these considerations may apply to projects within the PrO-AI CDT.

"An Introduction to Responsible Research and Innovation"

This interactive session invites participants to explore themes of responsibility, the future impacts of research, and researchers' personal standpoints through playful activities. The session will also introduce the AREA framework for Responsible Research and Innovation (RRI) and provide an overview of the RRI work to date in the PrO-AI CDT.