Seminar series

Welcome to the Trustworthy Systems Lab Seminar Series!

Here you can find all the upcoming and previous seminars in the series. The focus of the seminars is on promoting discussion and debate around the topic of Trustworthiness. 

The format of the talks is 20-30 minutes of presentation followed by 30 minutes for discussion and questions. We usually hold these weekly on a Wednesday lunchtime at midday (12:00). This is an open group, so please share widely and get in contact if you wish to observe, join the debate or give a presentation.

Please contact us to be added to the mailing list; you will then receive an invitation to each week's talk along with an update on upcoming talks.


Details of upcoming talks and speakers can be found below.

12th November 2025, Formal Verification for Trustworthy Human-Robot Collaboration

Robotic systems are inherently complex, integrating heterogeneous components and operating in dynamic environments — posing unique challenges for safety assurance. This talk will explore how formal methods can address these challenges to support the design of trustworthy human-robot collaborative systems. Key topics include formal modelling of human-robot interactions and uncertainty sources, formal specification of desired robot behaviours and constraints, and formal verification/synthesis techniques to enhance system safety and trustworthiness. In collaborative human-robot scenarios, we model trust-based human-robot interaction using a partially observable Markov decision process (POMDP). Within this framework, data-driven techniques are employed to model human cognitive states and adaptive conformal prediction, a statistical machine learning method, is utilised to quantify uncertainty. For scenarios where human intention cannot be directly quantified, we propose Markov Decision Processes with Set-Valued Transitions (MDPSTs) as the modelling framework to capture unpredictable human intentions. In both settings, we reason about actions and planning for temporally extended goals expressed in Linear Temporal Logic (LTL) or Linear Distribution Temporal Logic (LDTL). We present novel algorithms for optimal policy synthesis and validate our approach through various case studies, which demonstrate promising results.
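
The abstract above describes modelling trust-based human-robot interaction as a POMDP, in which the human's cognitive state is hidden and must be inferred from noisy observations. As a rough, self-contained illustration of that idea (not the speaker's implementation), the sketch below builds a toy two-state trust POMDP in Python with entirely hypothetical states, actions and probabilities, and runs a standard Bayesian belief update; the conformal-prediction-based uncertainty quantification, MDPSTs and LTL/LDTL policy synthesis covered in the talk are not shown.

# Illustrative sketch only: a toy trust-based POMDP with made-up numbers.
import numpy as np

# Hidden human cognitive states (the robot cannot observe trust directly).
states = ["low_trust", "high_trust"]
# Robot actions, e.g. act autonomously or hand control to the human.
actions = ["act_autonomously", "defer_to_human"]
# Noisy observations of the human, e.g. an intervention or compliance.
observations = ["intervene", "comply"]

# Hypothetical transition matrices T[a][s, s']: deferring tends to build trust.
T = {
    "act_autonomously": np.array([[0.9, 0.1],
                                  [0.2, 0.8]]),
    "defer_to_human":   np.array([[0.6, 0.4],
                                  [0.1, 0.9]]),
}
# Hypothetical observation matrices O[a][s', o]: low trust makes intervention likely.
O = {
    "act_autonomously": np.array([[0.8, 0.2],
                                  [0.1, 0.9]]),
    "defer_to_human":   np.array([[0.5, 0.5],
                                  [0.05, 0.95]]),
}

def belief_update(belief, action, obs_idx):
    """Standard Bayesian belief update over the hidden trust state."""
    predicted = belief @ T[action]                # predict the next-state distribution
    weighted = predicted * O[action][:, obs_idx]  # weight by the observation likelihood
    return weighted / weighted.sum()              # renormalise to a probability vector

# Example: start uncertain about trust, act autonomously, then observe an intervention.
belief = np.array([0.5, 0.5])
belief = belief_update(belief, "act_autonomously", observations.index("intervene"))
print(dict(zip(states, belief.round(3))))  # belief shifts towards low_trust

In the framework outlined in the abstract, a belief of this kind would feed into policy synthesis against temporal-logic specifications (LTL/LDTL) rather than being inspected directly.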

Dr Pian Yu
Pian Yu is a Lecturer (Assistant Professor) in Robotics & AI at the Department of Computer Science, University College London (UCL). Prior to this position, she was a Postdoctoral Researcher at the Department of Computer Science, University of Oxford, and a Postdoctoral Researcher at EECS, KTH Royal Institute of Technology. She received her PhD in Electrical Engineering from KTH Royal Institute of Technology in February 2021. She was a Student Best Paper Award finalist at the 2020 American Control Conference in Denver, and was selected as a Future Digileader by Digital Futures (Sweden) in 2023 and as a DAAD AInet fellow (Germany) in 2023.