Roussel Desmond Nzoyem

General Profile:

I graduated from the University of Strasbourg with an MSc in Scientific Computing and Mathematics of Information. Before that, I had studied – sometimes online, inevitably so – at various locations around the globe: Mathematics and Physical Sciences in Cameroon, Mechatronics in Japan, Computer Science in the USA, and Mathematics in France.

During my MSc, I worked with researchers at IRMA Strasbourg on a Neural Network solution to cancer screening based on the radiative transfer equation and one of its ensuing inverse problems. This was followed by a placement at Sorbonne University, where I worked on a mathematical model of ice floe percussion and fracture to understand the impact of the shrinking Arctic ice cap and the Marginal Ice Zone on oil exploitation and weather forecasting.

I’m interested in working with academic and industry experts to solve engineering problems (modelling, simulation, optimization, etc.) by bringing in responsible AI perspectives, which is why I’ve started a PhD within the Interactive AI CDT at Bristol.

When I’m not working, I’m usually learning or coding new stuff, at the movies, engaging in a match of football, or playing the piano – or rather, trying to.

Research Project Summary:

Our summer project covered the broad topic of fluid simulation acceleration using Machine Learning (ML). We designed a methodology around Graph Neural Networks (GNNs) to speed up the solution of large, sparse linear systems using Algebraic Multigrid (AMG), a state-of-the-art solver in the High-Performance Computing (HPC) community. The GNN was used to find better prolongation operators between the coarse and fine grids. Alongside this task, we developed a survey to investigate awareness of AMG, the features to consider when designing successful solvers, and some obstacles to the adoption of AI-enabled solvers. The survey proved valuable, highlighting that participants want AI involved in more steps of the fluid simulation pipeline.
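To make the prolongation-learning idea above concrete, here is a minimal, hypothetical sketch in JAX (an illustration only, not the project's actual model or code): the sparse matrix is treated as a graph, a small message-passing network scores fine-to-coarse edges, and the scores are normalised row-wise to give candidate prolongation weights. All function and parameter names are invented for this example.

import jax
import jax.numpy as jnp

def init_params(key, feat_dim=2, hidden=16):
    # Tiny one-layer message-passing network: edge MLP followed by a scalar readout.
    k1, k2 = jax.random.split(key)
    return {
        "W_edge": 0.1 * jax.random.normal(k1, (2 * feat_dim + 1, hidden)),
        "w_out": 0.1 * jax.random.normal(k2, (hidden,)),
    }

def edge_scores(params, node_feats, senders, receivers, edge_vals):
    # Concatenate the two endpoint features with the matrix entry A_ij,
    # pass through a small MLP, and emit one scalar score per edge.
    h = jnp.concatenate(
        [node_feats[senders], node_feats[receivers], edge_vals[:, None]], axis=-1)
    h = jax.nn.relu(h @ params["W_edge"])
    return h @ params["w_out"]

def prolongation_weights(params, node_feats, senders, receivers, edge_vals, n_fine):
    # Softmax-normalise the scores over each fine node's coarse neighbours so
    # that every row of the candidate prolongation operator P sums to one.
    scores = edge_scores(params, node_feats, senders, receivers, edge_vals)
    w = jnp.exp(scores)
    row_sum = jax.ops.segment_sum(w, senders, num_segments=n_fine)
    return w / row_sum[senders]

# Toy usage: 4 fine nodes, each connected to 2 candidate coarse neighbours.
key = jax.random.PRNGKey(0)
params = init_params(key)
node_feats = jnp.ones((4, 2))                    # e.g. diagonal entry, row sum
senders = jnp.array([0, 0, 1, 1, 2, 2, 3, 3])    # fine-node index per edge
receivers = jnp.array([0, 1, 1, 2, 2, 3, 3, 0])  # coarse-node index per edge
edge_vals = jnp.ones(8)                          # corresponding entries of A
weights = prolongation_weights(params, node_feats, senders, receivers, edge_vals, n_fine=4)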

The goal of the PhD project is to reduce the computational cost of obtaining high-fidelity fluid simulations, with a clear emphasis on uncertainty quantification of the results. Recent years have seen an abundance of techniques combining Machine Learning and physical principles to upscale low-fidelity simulations. By scaling these methods to real-world problem sizes, we see a clear benefit to the Computational Fluid Dynamics (CFD) and dynamical systems communities, among others. With uncertainty quantification added, this will equip users with interactive tools for shape optimisation, real-time visualisation, reliability engineering, and more.

Current data-driven approaches to computational cost reduction in fluid simulation tend to be unrepresentative of real applications. They focus on small problems with simplified geometries, breaking down when we attempt to scale them to real-world engineering problems. Research on these techniques often stops at the proof-of-concept phase, giving little attention to production-ready software. Most importantly, they struggle with GPU acceleration.

This is problematic since we are interested in large problems with billions of degrees of freedom. Our work will thus involve leveraging recent advances in bias correction and uplifting them to large-scale problems. We will look at distributed data-, model-, and pipeline-parallel HPC techniques for fast inference of simulated quantities. We will apply these to data-driven methods that have shown great promise over the last few years, including Physics-Informed Neural Networks (PINNs), Scientific Machine Learning (SciML), equation-free modelling, and differentiable simulation.
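As an illustration of the first of these ingredients, here is a minimal, hypothetical PINN sketch in JAX (a toy example, not the project's method): a small network u(x) is trained so that the residual of a 1D Poisson problem, u''(x) + pi^2 sin(pi x) = 0 with u(0) = u(1) = 0, vanishes at collocation points. The choice of equation and all names are assumptions made for this example.

import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(1, 32, 32, 1)):
    # Small fully connected network u_theta: R -> R.
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in),
                       jnp.zeros(d_out)))
    return params

def u(params, x):
    # Evaluate the network at a scalar point x.
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def residual(params, x):
    # PDE residual u''(x) + pi^2 sin(pi x), with u'' obtained by nested autodiff.
    u_xx = jax.grad(jax.grad(lambda z: u(params, z)))(x)
    return u_xx + jnp.pi ** 2 * jnp.sin(jnp.pi * x)

def loss(params, xs):
    # Mean squared PDE residual plus a penalty enforcing the boundary conditions.
    pde = jnp.mean(jax.vmap(lambda x: residual(params, x))(xs) ** 2)
    bc = u(params, 0.0) ** 2 + u(params, 1.0) ** 2
    return pde + bc

# One plain gradient-descent step on a grid of collocation points.
key = jax.random.PRNGKey(0)
params = init_mlp(key)
xs = jnp.linspace(0.0, 1.0, 64)
grads = jax.grad(loss)(params, xs)
params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)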

Uncertainty quantification (UQ) is a major barrier to the adoption of AI-enabled methods for real-life engineering problems. Typical examples like PINNs or SciML models do not have uncertainty quantification built in, exposing a research gap actively explored by an ever-growing community of experts. We will work on incorporating UQ into our methodology, allowing users to interact with the system in full knowledge of its accuracy and shortcomings.
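One common way to attach such uncertainty estimates to a learned surrogate is a deep ensemble, sketched below in JAX as a hypothetical illustration (not necessarily the approach this project will adopt): several independently initialised surrogates are trained on the same data, and the spread of their predictions is reported alongside the mean. The linear "surrogate" and all names here are placeholders, and training is omitted.

import jax
import jax.numpy as jnp

def init_member(key, d_in=3, d_out=1):
    # Placeholder surrogate: a single linear layer standing in for any
    # data-driven emulator of simulated quantities.
    k_w, k_b = jax.random.split(key)
    return (0.1 * jax.random.normal(k_w, (d_in, d_out)),
            0.1 * jax.random.normal(k_b, (d_out,)))

def predict(params, x):
    W, b = params
    return x @ W + b

def ensemble_predict(members, x):
    # Stack the members' predictions; the mean is the point estimate and the
    # standard deviation serves as a simple epistemic-uncertainty proxy.
    preds = jnp.stack([predict(p, x) for p in members])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy usage with 5 (untrained) ensemble members.
keys = jax.random.split(jax.random.PRNGKey(0), 5)
members = [init_member(k) for k in keys]
x = jnp.ones((10, 3))
mean, std = ensemble_predict(members, x)  # a large std flags inputs the ensemble disagrees on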

Considering our focus on large, viable engineering problems, we look forward to tackling questions like: (i) What Direct Numerical Simulation datasets are available for training AI models? (ii) How can recent techniques be meaningfully scaled to high-fidelity problem sizes? (iii) How feasible is it to integrate recent ML frameworks into industrial-grade fluid simulation solvers? (iv) How can the model's accuracy be efficiently quantified for the user's downstream task?

Supervisors:

Website:
