Seminar series
Welcome to the Trustworthy Systems Lab Seminar Series!
Here you can find all the upcoming and previous seminars in the series. The focus of the seminars is on promoting discussion and debate around the topic of Trustworthiness.
The format of the talks is 20-30 minutes of presentation followed by 30 minutes of discussion and questions. We usually hold these weekly on a Wednesday lunchtime at midday (12:00). This is an open group, so please share widely and get in contact if you wish to observe, join the debate or give a presentation.
Please contact us to be added to the mailing list, through which you will receive a weekly invitation to the talks along with details of upcoming speakers.
Details of upcoming talks and speakers can be found below.
19th November 2025, Towards Trustworthy Deep Learning: Verification, Evaluation, and Adversarial Training
Deep learning (DL) has been advancing rapidly, and it is increasingly poised for deployment in a wide range of applications, such as autonomous systems, medical diagnosis and natural language processing. The quick adoption of DL technologies has also exposed significant safety concerns. Neural networks can be unstable, difficult to interpret, and susceptible to adversarial perturbations. In the longer term, developing safety certification techniques is essential for reducing potential harm, mitigating avoidable system failures, and ultimately ensuring trustworthiness. In this talk, I will present recent research works in verification, testing and adversarial learning aimed at improving the safety and reliability of DL systems, and discuss emerging challenges in modern foundational models.
Dr Xiyue Zhang

Xiyue Zhang is a Lecturer in the School of Computer Science at the University of Bristol. Before joining Bristol, she was a Research Associate in the Department of Computer Science at the University of Oxford. She received her PhD in Applied Mathematics from Peking University in 2022. Her work focuses on trustworthy deep learning, integrating provable certification and empirical testing methods. Her recent work includes abstraction and verification techniques for deep learning models and deep learning-enabled systems.