Following the success of our first postgraduate session, we are pleased to continue this new event format, showcasing the work of our postgraduate research community. As before, three researchers will each give a short presentation (approximately 15 minutes), offering an opportunity to share their work and engage with the wider BVI network.
For this session, we are delighted to feature the following speakers:
Talk 1: Visual ecology of the European adder and its avian predator, the common buzzard (George Lewis, Bristol Veterinary School)
This PhD project will investigate the visual ecology of the European adder and how it is detected by a key avian predator, the common buzzard. The study aims to understand how variation in snake posture, dorsal patterning, and environmental context affects detectability, integrating perspectives from predator vision, camouflage theory, and thermoregulation. To address this, 3D-printed snake models will be deployed in natural habitats characterised using drone-based environmental mapping. Models will vary systematically in posture and pattern, allowing controlled assessment of their visibility under realistic field conditions. High-resolution imagery of these models in situ will be presented to human participants as proxy observers, who will identify snake locations to generate quantitative measures of detectability across treatments. We will also analyse spectral properties, including ultraviolet reflectance, to better approximate how these signals may be perceived by avian visual systems. Finally, trials with trained buzzards will assess predator search and detection sequences in a controlled setting, providing a direct test of ecological relevance.
Talk 2: Using bio-behavioural features of facial dynamics for deepfake detection (Tim Murphy, School of Psychological Science)
Deepfake detection research has largely converged on deep learning approaches that, despite strong benchmark performance, offer limited insight into what distinguishes real from manipulated facial behaviour. This study presents an interpretable alternative grounded in bio-behavioural features of facial dynamics and evaluates how computational detection strategies relate to human perceptual judgements. We identify core low-dimensional patterns of facial movement and derive temporal features characterising their spatiotemporal structure. Traditional machine learning classifiers trained on these features achieved modest but significantly above-chance deepfake classification, driven by higher-order temporal irregularities that are more pronounced in manipulated than in real facial dynamics. Detection was substantially more accurate for videos containing emotive expressions, and emotional valence classification analyses indicate that emotive signals are systematically degraded in deepfakes, explaining this differential effect. Furthermore, we provide an additional and often overlooked dimension of explainability by assessing the relationship between model decisions and human perceptual detection. Model and human judgements converged for emotive but diverged for non-emotive videos, and even where outputs aligned, the underlying detection strategies differed. These findings demonstrate that face-swapped deepfakes carry a measurable behavioural fingerprint, most salient during emotional expression. Additionally, model-human comparisons suggest that interpretable computational features and human perception may offer complementary rather than redundant routes to detection.
Talk 3: Translation through Connection: Bridging the gap between climate science and public engagement through human interest (Danni Pollock, School of Biological Sciences)
In the world of science, wanting to make your work widely accessible to those without a specialist or technical understanding of your particular research niche can feel like going against the grain. However, in a public atmosphere increasingly riddled with misinformation, activism inertia and hyper-individualism, it has never been more important to make arguably the most crucial area of scientific research accessible and interesting to a greater audience than ever before. Re-imagining how we communicate as scientists is daunting, and something many of us find uncomfortable. It is common, and understandable, to find comfort in the security of a scientific journal. However, this is a culture that needs help to shift towards a more inclusive norm; one need only look at the discrepancies in publication language (2% of tropical research is published in a language other than English) to see that perhaps we are missing something (Ramírez-Castañeda, 2020). Visual imagery is the most universal form of information dispersion in the 21st century. Capturing researchers as people, hearing about their research and seeing them at work through this medium, is a simple and effective approach, yet it remains a persistent gap in the world of scientific communication. I am Danni Pollock, a photojournalist and tropical ecologist who specialises in bridging this gap between researchers and their non-academic target audience.