Open lecture: Probabilistic graphical models and deep learning methods for remote sensing image analysis

18 July 2025, 3.00 PM - 4.00 PM

Dr Josiane Zerubia, Bristol Benjamin Meaker Distinguished Visiting Professor

Wills Memorial Building, Reynolds Room (WMB G25)

Dr Josiane Zerubia is visiting Bristol from the Institut National de Recherche en Informatique et en Automatique (INRIA) in France.

Lecture abstract: Given the current advances in space missions for Earth observation, it is now possible to access very-high-resolution and multimodal satellite imagery. The data acquired can be optical (e.g., panchromatic, multispectral, and hyperspectral images) or synthetic aperture radar (SAR), with different spectral channels, radar frequencies and polarizations, as well as various trade-offs between spatial resolution and coverage. This offers great application potential in the field of remote sensing. An important role in this context is played by semantic segmentation, whose purpose is to assign each pixel in an image to a semantic class, typically related to land cover or land use, with prominent applications in areas such as urban planning, precision agriculture, monitoring of forest species, natural disaster management, and climate change monitoring and mitigation.
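
To make the per-pixel nature of semantic segmentation concrete, the sketch below is a purely illustrative fully convolutional stack (not the methods presented in the talk) that maps a multispectral image to a map of class scores; the band count, class count, and tile size are hypothetical.

```python
# Illustrative only: a minimal fully convolutional stack that assigns a class
# score to every pixel of a multispectral tile (hypothetical band/class counts).
import torch
import torch.nn as nn

NUM_BANDS = 4      # e.g. a 4-band multispectral image (assumption)
NUM_CLASSES = 5    # e.g. water, vegetation, bare soil, buildings, roads (assumption)

segmenter = nn.Sequential(
    nn.Conv2d(NUM_BANDS, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, kernel_size=1),  # 1x1 conv -> per-pixel class scores
)

image = torch.randn(1, NUM_BANDS, 256, 256)   # one 256x256 tile
scores = segmenter(image)                     # shape: (1, NUM_CLASSES, 256, 256)
label_map = scores.argmax(dim=1)              # one semantic class per pixel
print(label_map.shape)                        # torch.Size([1, 256, 256])
```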

This talk aims to address these challenges by focusing on innovative techniques that combine deep learning and stochastic modeling to fully exploit the multisource, multisensor, and multiresolution characteristics of satellite imagery for improved semantic segmentation. The proposed methods merge ideas from deep learning, probabilistic graphical models, and ensemble learning. On the one hand, deep learning is currently the dominant approach to image classification and semantic segmentation in remote sensing. Thanks to the non-parametric formulation and the intrinsically multiscale processing stages that characterize convolutional neural networks, deep learning architectures can be effectively employed for multimodal image fusion and analysis. However, the performance of deep learning methods is strongly influenced by the quantity and quality of the ground truth used for training. On the other hand, probabilistic graphical models have attracted major interest in the past few years because of the ever-growing need for structured prediction. Depending on the underlying graph topology over which they are defined, they can effectively model spatial and multiresolution information. The theoretical framework of the developed methods is presented, and theorems regarding their analytical properties (causality, inference formulation, output probability distribution, etc.) are proven. Experimental validations, conducted with multispectral, panchromatic, and radar satellite images, demonstrate the effectiveness of the proposed methods.
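
As a toy illustration of coupling a network's per-pixel probabilities with a probabilistic graphical model (not the hierarchical, multiresolution models discussed in the talk), the sketch below smooths a softmax output with a Potts-model Markov random field using synchronous ICM-style updates; the function name `icm_refine` and the smoothness weight `beta` are hypothetical, and the wrap-around boundary handling is a simplification.

```python
# Toy sketch (not the talk's methods): refine per-pixel class probabilities from
# a segmentation network with a Potts-model Markov random field, using
# synchronous iterated-conditional-modes-style updates.
import numpy as np

def icm_refine(probs, beta=1.0, iters=5):
    """probs: (H, W, C) softmax output of a segmentation network (assumed given)."""
    unary = -np.log(probs + 1e-9)       # unary energies from the network output
    labels = probs.argmax(axis=-1)      # initial labelling
    H, W, C = probs.shape
    for _ in range(iters):
        # For each pixel and each candidate class, count how many of the 4
        # neighbours currently disagree with that class (Potts pairwise term).
        disagree = np.zeros((H, W, C))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neigh = np.roll(labels, shift=(dy, dx), axis=(0, 1))  # wraps at edges (toy)
            disagree += (neigh[..., None] != np.arange(C))
        labels = (unary + beta * disagree).argmin(axis=-1)
    return labels
```

A call such as `icm_refine(softmax_output, beta=2.0)` trades data fidelity against spatial smoothness on a flat pixel grid; richer graph topologies, such as the hierarchical ones mentioned above, additionally capture multiresolution structure.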

Keywords: deep learning, fully convolutional networks, probabilistic graphical models, semantic segmentation, remote sensing, multimodal data

Collaborators: M. Pastorino, G. Moser and S. Serpico from the University of Genova, Italy

Contact information

Dr Zerubia's Bristol host, Professor Alin Achim: Alin.Achim@bristol.ac.uk
