Seminar series

Welcome to the Trustworthy Systems Lab Seminar Series!

Here you can find all upcoming and previous seminars in the series. The seminars focus on promoting discussion and debate around the topic of trustworthiness.

Each talk consists of 20-30 minutes of presentation followed by 30 minutes of discussion and questions. We usually hold these weekly on a Wednesday lunchtime at midday (12:00). This is an open group, so please share widely and get in contact if you wish to observe, join the debate, or give a presentation.

Please contact us to be added to the mailing list; you will receive a weekly invitation to the talks along with details of upcoming sessions.

Details of upcoming talks and speakers can be found below.

1st October 2025, Model Based UAV Test Generation

We introduce the overall winner of the Uncrewed Aerial Vehicles (UAVs) Testing Competition at the 18th IEEE International Conference on Software Testing, Verification and Validation (ICST 2025) and the 18th International Workshop on Search-Based and Fuzz Testing (SBFT 2025). We also present an extension that leverages genetic algorithms, together with a low-fidelity UAV path simulator, to efficiently produce effective UAV test cases.
In addition, we propose new metrics to measure UAV testing coverage and support prioritisation of test case selection. These metrics provide insights into both the diversity and situational relevance of the generated test cases.
Simulation-based testing provides a safe and cost-effective environment for verifying the safety of UAVs. However, identifying effective test suites requires a large number of simulations, which is resource-intensive. To address this challenge, we optimise simulation resources using a model-based test generator that efficiently produces effective and diverse test suites. A genetic algorithm further enhances the test generation by employing a Neural Network (NN) as a surrogate fitness function, enabling rapid evaluation of test cases. For the NN to make accurate predictions, it must be trained on a large dataset, one that cannot feasibly be generated using computationally intensive High-Fidelity Simulators (HFS). To overcome this, we simplify the PX4 autopilot HFS to develop a Low-Fidelity Simulator (LFS), which can produce the required training data an order of magnitude faster than the HFS.
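To make the idea of a genetic algorithm guided by a surrogate fitness function concrete, here is a minimal, hypothetical sketch. It is not the speakers' tool: the test-case encoding (short 2D waypoint paths), the obstacle scenario, and the `surrogate_fitness` function (a cheap stand-in for a trained neural network) are all illustrative assumptions.

```python
import random

def surrogate_fitness(path):
    # Stand-in for the trained NN surrogate: rewards paths that pass close
    # to a hypothetical obstacle at (5, 5), i.e. "challenging" test cases.
    obstacle = (5.0, 5.0)
    closest = min(((x - obstacle[0]) ** 2 + (y - obstacle[1]) ** 2) ** 0.5
                  for x, y in path)
    return -closest  # closer to the obstacle = fitter test case

def random_path(n_waypoints=4):
    # A candidate test case: a short sequence of 2D waypoints.
    return [(random.uniform(0, 10), random.uniform(0, 10))
            for _ in range(n_waypoints)]

def crossover(a, b):
    # Single-point crossover: splice the prefix of one path onto the
    # suffix of another.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(path, rate=0.2):
    # Perturb each waypoint with small Gaussian noise at the given rate.
    return [(x + random.gauss(0, 1), y + random.gauss(0, 1))
            if random.random() < rate else (x, y)
            for x, y in path]

def evolve(pop_size=30, generations=40, seed=0):
    # Evolve a population of candidate test cases, ranking each generation
    # with the cheap surrogate instead of a full simulation run.
    random.seed(seed)
    population = [random_path() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=surrogate_fitness, reverse=True)
        elite = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return max(population, key=surrogate_fitness)

best = evolve()
```

In a real pipeline, only the fittest candidates found this way would be promoted to the expensive high-fidelity simulator, which is what makes the surrogate worthwhile.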

Dr Anas Shrinah

A photograph of Dr Anas Shrinah

Dr Anas Shrinah is an Assistant Professor at the Applied Science University in Amman, Jordan, and an Honorary Senior Research Associate with the Trustworthy Systems Laboratory at the University of Bristol. His research focuses on leveraging artificial intelligence to generate effective test cases for cyber-physical systems. Anas led the development of the UAV test case generation tool that won first place in the SBFT 2025 UAV Testing Competition and was named the overall winner in both the SBFT 2025 and ICST 2025 UAV Testing Competitions.
Anas is a certified Project Management Professional (PMP) with over 16 years of combined experience in academia and industry. He holds a PhD in the verification and validation of planning-based autonomous systems, an MSc in Robotics (with Distinction), as well as a first-class honours BEng in Computer and Automation Engineering.

Dr Chris Bennett

A photograph of Dr Chris Bennett

Dr Chris Bennett is a Senior Research Associate with the Trustworthy Systems Laboratory at the University of Bristol, developing machine learning techniques for test-based verification. A chartered engineer with a background in systems engineering for the automotive sector, he worked at Jaguar Land Rover before transitioning into research seven years ago, completing a PhD in Robotics and Autonomous Systems. He has previously worked on projects with Thales UK, examining the role of hybrid autonomy in multi-agent systems, and on the UKRI-funded Trustworthy Autonomous Systems project, investigating how trust can be built in artificial intelligence and robotics through systems engineering practices. His research interests include test-based verification, systems engineering design practices, and multi-agent artificial intelligence.

8th October 2025, Systems Trustworthiness for Human Rights

Trustworthy systems need to consider factors such as privacy-by-design, safety-by-design, and security-by-design. These all form part of upholding and respecting human rights, but alone they are not enough. This session will discuss some of the often-overlooked considerations when designing for real-world deployment, and what is happening in the compliance world to standardise these approaches. This session will be of use to anyone designing and deploying technology systems, even (and especially) if they are unfamiliar with their obligations to respect human rights.

Beckett LeClair

Beckett is the Head of Compliance at 5Rights Foundation, an NGO working internationally to uphold the rights of young citizens as they interface with the digital world. He is involved in standards development in multiple jurisdictions, especially AI standards at the European level, and has a particular interest in ensuring technology respects the freedoms of vulnerable and/or marginalised citizens. Beckett was previously a Senior Engineer at Frazer-Nash as part of the Digital Assurance team, with a focus on cyber security and responsible AI.

15th October 2025, Open-source methodology & toolbox for trustworthy AI Engineering, with the ETAA

Nicolas Rebierre will present the Confiance.ai industrial research programme and its key results, on both the methodological and technological dimensions. He will introduce the European Trustworthy AI Association, created by Confiance.ai consortium members immediately after the research programme concluded. He will discuss the motivation for making the results openly available to engineers in Europe and beyond, the purpose and ethos of the association, and the ecosystem approach to keeping the portfolio at the state of the art.

Nicolas Rebierre

A photograph of Nicolas Rebierre

Nicolas Rebierre leads the European Trustworthy AI Association, whose mission is to provide a state-of-the-art, open-source methodology and toolbox that empowers engineers to build trustworthy AI-based systems. Nicolas brings experience from the technology sector, where he has served in a range of roles spanning engineering, product management, open-source programme management, internal ventures, industrial research, and leadership.