TSL Seminar Series, 2025-2026 Academic Year

This page contains information on seminars that took place during the 2025-2026 academic year. Details of upcoming talks, and how to attend, can be found on the main page: Seminar Series

1st October 2025, Model-Based UAV Test Generation

We introduce the overall winner of the Uncrewed Aerial Vehicles (UAVs) Testing Competition at the 18th IEEE International Conference on Software Testing, Verification and Validation (ICST) 2025 and the 18th International Workshop on Search-Based and Fuzz Testing (SBFT) 2025, and present an extension that leverages genetic algorithms, together with a low-fidelity UAV path simulator, to efficiently produce effective UAV test cases.
We also propose new metrics to measure UAV testing coverage and support prioritisation of test case selection. These metrics provide insights into both the diversity and situational relevance of the generated test cases.
Simulation-based testing provides a safe and cost-effective environment for verifying the safety of UAVs. However, identifying effective test suites requires a large number of simulations, which is resource-intensive. To address this challenge, we optimise the use of simulation resources with a model-based test generator that efficiently produces effective and diverse test suites. A genetic algorithm further enhances test generation by employing a Neural Network (NN) as a surrogate fitness function, enabling rapid evaluation of test cases. For the NN to make accurate predictions, it must be trained on a large dataset, one that cannot feasibly be generated using computationally intensive High-Fidelity Simulators (HFS). To overcome this, we simplify the PX4 autopilot HFS to develop a Low-Fidelity Simulator (LFS), which can produce the required training data an order of magnitude faster than the HFS.
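
To give a flavour of the approach described above, here is an illustrative sketch, not the competition tool itself: a standard genetic algorithm evolves UAV test cases while a small neural network acts as a surrogate fitness function. The waypoint encoding, network size, and the placeholder training data standing in for low-fidelity-simulator output are all assumptions made for this example.

```python
# Minimal sketch (not the authors' code): a genetic algorithm evolves UAV test
# cases (here, waypoint sequences) and scores them with a neural-network
# surrogate instead of a costly high-fidelity simulation. All names, shapes and
# hyperparameters are illustrative assumptions.
import random

import numpy as np
from sklearn.neural_network import MLPRegressor

N_WAYPOINTS = 5          # each test case: 5 waypoints, (x, y, z) each
GENE_LEN = N_WAYPOINTS * 3

def random_test_case():
    """A candidate test case as a flat vector of waypoint coordinates."""
    return np.random.uniform(-50, 50, GENE_LEN)

# --- Surrogate fitness: an MLP trained on (test case -> risk score) pairs ---
# In the talk's setting the training data would come from the low-fidelity
# simulator; here we fabricate placeholder data purely so the sketch runs.
X_train = np.array([random_test_case() for _ in range(500)])
y_train = np.random.uniform(0.0, 1.0, 500)   # placeholder "closeness to failure"
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_train, y_train)

def fitness(candidate):
    # Higher predicted risk = more likely to expose unsafe UAV behaviour.
    return surrogate.predict(candidate.reshape(1, -1))[0]

def crossover(a, b):
    cut = random.randrange(1, GENE_LEN)
    return np.concatenate([a[:cut], b[cut:]])

def mutate(c, rate=0.1, scale=5.0):
    mask = np.random.rand(GENE_LEN) < rate
    return c + mask * np.random.normal(0.0, scale, GENE_LEN)

# --- Generational GA loop using the surrogate for cheap evaluation ---
population = [random_test_case() for _ in range(40)]
for generation in range(30):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:10]                         # truncation selection
    population = parents + [
        mutate(crossover(*random.sample(parents, 2))) for _ in range(30)
    ]

best = max(population, key=fitness)
print("Surrogate-predicted risk of best test case:", fitness(best))
```

In the pipeline described in the talk, the surrogate would be trained on data produced by the low-fidelity simulator, with the expensive PX4 high-fidelity simulator presumably reserved for confirming the most promising test cases.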

Dr Anas Shrinah

A photograph of Dr Anas Shrinah

Dr Anas Shrinah is an Assistant Professor at the Applied Science University in Amman, Jordan, and an Honorary Senior Research Associate with the Trustworthy Systems Laboratory at the University of Bristol. His research focuses on leveraging artificial intelligence to generate effective test cases for cyber-physical systems. Anas led the development of the UAV test case generation tool that won first place in the SBFT 2025 UAV Testing Competition and was named the overall winner in both the SBFT 2025 and ICST 2025 UAV Testing Competitions.
Anas is a certified Project Management Professional (PMP) with over 16 years of combined experience in academia and industry. He holds a PhD in the verification and validation of planning-based autonomous systems, an MSc in Robotics (with Distinction), as well as a first-class honours BEng in Computer and Automation Engineering.
Dr Chris Bennett

A photograph of Dr Chris Bennett

Dr Chris Bennett is a Senior Research Associate with the Trustworthy Systems Laboratory at the University of Bristol, developing machine learning techniques for test-based verification. A chartered engineer with a background in automotive systems engineering, he worked at Jaguar Land Rover before transitioning into research seven years ago, completing a PhD in Robotics and Autonomous Systems. He has previously worked on projects with Thales UK, examining the role of hybrid autonomy in multi-agent systems, and on the UKRI-funded Trustworthy Autonomous Systems project, investigating how trust can be built in artificial intelligence and robotics through systems engineering practices. His research interests include test-based verification, systems engineering design practices, and multi-agent artificial intelligence.

8th October 2025, Systems Trustworthiness for Human Rights

Trustworthy systems need to consider factors such as privacy-by-design, safety-by-design, and security-by-design. These all form part of upholding and respecting human rights, but alone they are not enough. This session will discuss some of the often-overlooked considerations when designing for real-world deployment, and what is going on in the compliance world to try and standardise these approaches. This session will be of use to anyone designing and deploying technology systems, even (and especially) if they are unfamiliar with their obligations to respect human rights.

Beckett LeClair

Beckett is the Head of Compliance at 5Rights Foundation, an NGO working internationally to uphold the rights of young citizens as they interface with the digital world. He is involved in standards development in multiple jurisdictions, especially AI standards at the European level, and has a particular interest in ensuring that technology respects the freedoms of vulnerable and/or marginalised citizens. Beckett was previously a Senior Engineer at Frazer-Nash, where he was part of the Digital Assurance team with a focus on cyber security and responsible AI.

15th October 2025, Open-Source Methodology & Toolbox for Trustworthy AI Engineering, with the ETAA

Nicolas Rebierre will present the Confiance.ai industrial research programme and its key results, covering both its methodological and technological dimensions. He will introduce the European Trustworthy AI Association (ETAA), which Confiance.ai consortium members created immediately after the research programme concluded. He will discuss the motivation for making the results openly available to engineers in Europe and beyond, the purpose and ethos of the association, and the ecosystem approach taken to keep the portfolio at the state of the art.

Nicolas Rebierre

A photograph of Nicolas Rebierre

Nicolas Rebierre leads the European Trustworthy AI Association, whose mission is to provide a state-of-the-art, open-source methodology and toolbox that empower engineers to build trustworthy AI-based systems. Nicolas brings experience from the technology sector, where he has held roles spanning engineering, product management, open-source programme office, internal ventures, industrial research, and leadership.

12th November 2025, Formal Verification for Trustworthy Human-Robot Collaboration

Robotic systems are inherently complex, integrating heterogeneous components and operating in dynamic environments, which poses unique challenges for safety assurance. This talk will explore how formal methods can address these challenges to support the design of trustworthy human-robot collaborative systems. Key topics include formal modelling of human-robot interactions and sources of uncertainty, formal specification of desired robot behaviours and constraints, and formal verification and synthesis techniques to enhance system safety and trustworthiness.

In collaborative human-robot scenarios, we model trust-based human-robot interaction using a partially observable Markov decision process (POMDP). Within this framework, data-driven techniques are employed to model human cognitive states, and adaptive conformal prediction, a statistical machine learning method, is used to quantify uncertainty. For scenarios where human intention cannot be directly quantified, we propose Markov Decision Processes with Set-Valued Transitions (MDPSTs) as the modelling framework to capture unpredictable human intentions. In both settings, we reason about actions and plan for temporally extended goals expressed in Linear Temporal Logic (LTL) or Linear Distribution Temporal Logic (LDTL). We present novel algorithms for optimal policy synthesis and validate our approach through case studies that demonstrate promising results.
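
As a rough illustration of the adaptive conformal prediction idea mentioned in the abstract, the sketch below, using placeholder data and assumed parameter names rather than the speaker's implementation, updates the miscoverage level online so that prediction sets over the human's behaviour retain their target coverage even as the underlying distribution drifts.

```python
# Minimal sketch (assumptions, not the speaker's code) of adaptive conformal
# prediction: the miscoverage level alpha_t is adjusted online so prediction
# sets keep roughly the target coverage under distribution drift.
import numpy as np

rng = np.random.default_rng(0)
target_alpha = 0.1      # aim for 90% coverage of observed human behaviour
gamma = 0.05            # step size for the online update
alpha_t = target_alpha

def nonconformity(prediction, observation):
    # Illustrative score: distance between predicted and observed human state.
    return abs(prediction - observation)

# Placeholder calibration scores; in practice these would come from held-out
# interaction data.
calibration_scores = list(np.abs(rng.normal(0.0, 1.0, 200)))

coverage_errors = []
for t in range(1000):
    prediction = 0.0                                  # placeholder model output
    observation = rng.normal(0.0, 1.0 + 0.001 * t)    # slowly drifting behaviour

    # Prediction set: all outcomes whose nonconformity score is below the
    # (1 - alpha_t) quantile of the calibration scores.
    q = np.quantile(calibration_scores, min(max(1.0 - alpha_t, 0.0), 1.0))
    err = 1.0 if nonconformity(prediction, observation) > q else 0.0

    # Adaptive update (Gibbs & Candes-style): widen sets after misses,
    # tighten after covered steps.
    alpha_t = alpha_t + gamma * (target_alpha - err)
    calibration_scores.append(nonconformity(prediction, observation))
    coverage_errors.append(err)

print("Empirical miscoverage:", np.mean(coverage_errors), "target:", target_alpha)
```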

Dr Pian Yu

Pian Yu is a Lecturer (Assistant Professor) in Robotics & AI in the Department of Computer Science at University College London (UCL). Prior to this position, she was a Postdoctoral Researcher in the Department of Computer Science at the University of Oxford, and before that a Postdoctoral Researcher at EECS, KTH Royal Institute of Technology. She received her PhD in Electrical Engineering from KTH Royal Institute of Technology in February 2021. She was a Student Best Paper Award finalist at the 2020 American Control Conference in Denver, and in 2023 was selected as a Future Digileader by Digital Futures, Sweden, and as a DAAD AInet Fellow, Germany.