Our CDT Students and Research Projects

Since September 2019, we have welcomed the following students to our Centre for Doctoral Training in Cyber Security (Trust, Identity, Privacy and Security at Scale).  

Anthony Mazeli, Dominika Wojtczak, Emily Godwin, Emily Johnstone, Feras Shahbi, Hannah Hutton, James Clements, Jessie Hamill-Stewart, Katie Hawkins, Luciano Maino, Marios Samanis, Priyanka Badva, Robert Peace, Soo Yee Lim, Tobias Weickert, Trevor Jones.

Hannah Hutton

Improving User Privacy in Mobile and Ubiquitous Health Technologies

Our world is increasingly moving online, and healthcare is no exception. The use of mobile and ubiquitous health (mHealth and uHealth) technologies, devices that can assist with monitoring and managing an individual's health, such as smartphones, smart home assistants and wearable sensors, has increased massively in recent years, and millions of people now own devices capable of collecting data and making inferences about their health and wellbeing. While these technologies do provide benefits to the user, this continuous data collection can also negatively impact an individual's privacy; for example, a person could be denied health insurance based on inferences made from such data.

The initial focus of this project will be gaining an understanding of users’ privacy expectations and preferences for mHealth and uHealth technologies, followed by studies into how users understand and perceive privacy properties and risks of such systems. This knowledge will then be used to develop a model for facilitating informed decision making within these technologies, with further research aimed at evaluating and refining this model.

Dr Simon Jones (Bath)

Dr David Ellis (Bath)

Priyanka Badva

Provenance-based Forensic and Incident Analysis

This research aims to develop AI-based techniques for detecting and interpreting cyberattacks using provenance graphs. Recent work has shown the effectiveness of graph-based machine learning models at detecting attacks in provenance data. However, provenance graphs are extremely complex, so detected attacks remain difficult for humans to interpret. This project will therefore investigate and develop explainable and interpretable machine learning methods that help humans understand, for example, where an attack originated, the steps involved in the attack, and the impact of the attack.
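To make the underlying data structure concrete, the sketch below builds a toy provenance graph and performs the kind of backward trace that an interpretable detector would need to support. It is a minimal illustration only, assuming the networkx library; the events, node names, and the flagged node are hypothetical, not outputs of this project.

```python
# A toy provenance graph: nodes are processes/files, edges are
# system events (write, read, exec) observed on a host.
import networkx as nx

G = nx.DiGraph()
events = [
    ("firefox", "downloads/invoice.pdf", "write"),    # browser saves a file
    ("downloads/invoice.pdf", "pdf-reader", "read"),  # reader opens it
    ("pdf-reader", "/tmp/payload.sh", "write"),       # suspicious file drop
    ("/tmp/payload.sh", "bash", "exec"),              # script executed
    ("bash", "/etc/passwd", "read"),                  # sensitive access
]
for src, dst, event in events:
    G.add_edge(src, dst, event=event)

# Suppose a (hypothetical) graph-ML detector flags this node as anomalous.
alert = "/etc/passwd"

# Backward trace: every node the alert causally depends on, i.e. a
# first step toward answering "where did the attack originate?".
origin_candidates = nx.ancestors(G, alert)
print("Alert:", alert)
print("Causal ancestors:", origin_candidates)

# One concrete attack path from each root ancestor to the alert.
roots = [n for n in origin_candidates if G.in_degree(n) == 0]
for root in roots:
    path = nx.shortest_path(G, root, alert)
    steps = [G.edges[u, v]["event"] for u, v in zip(path, path[1:])]
    print("Path:", " -> ".join(path), "| events:", steps)
```

In real deployments such graphs are built from system audit logs and can contain millions of nodes and edges, which is precisely why raw detections are hard for analysts to interpret without explanation methods of the kind this project targets.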

Dr Ryan McConville (Bristol)

Dr Eleonora Pantano (Bristol)

Robert Peace 

Empowering users to navigate untrustworthy online information ecosystems to reach factual information

Disinformation is a major threat in today's world, with serious consequences for individuals as well as societies. This threat is partly facilitated by the vast number of users who turn to social media and other hyper-connected online information ecosystems for important information. Because most online information ecosystems are inherently "veracity-neutral", users are exposed to a huge amount of both trustworthy and malicious information. Moreover, users' own psychological biases may further reduce their ability to evaluate factual information correctly, leaving them to judge the trustworthiness of information (and disinformation) without the support of the system itself or the requisite training and skills to make appropriate judgements.
The key objective of this research is to test whether a holistic approach that considers both the technical system and the psychological constraints of online information ecosystems can create a more effective intervention against disinformation.

Dr Laura G.E. Smith (Bath)

Professor Adam Joinson (Bath)

Soo Yee Lim

Efficient Kernel Partitioning

The operating system (OS) kernel forms the foundation of a system and is often assumed to be the trusted computing base (TCB) for many higher-level security mechanisms. Unfortunately, attacks on the OS kernel can compromise the security of the entire system. Monolithic kernels in particular lack internal isolation, leaving a flat and wide attack surface that makes them an attractive target. Attack surface reduction is one of the more promising techniques for mitigating such attacks. In this work, we aim to harden the security of monolithic kernels by reducing their attack surface via kernel partitioning. Our goal is a practical kernel partitioning technique with reasonably low overhead. The research will investigate partitioning techniques that leverage recent developments in hardware to strike a balance between overhead and accuracy/precision.
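As a purely conceptual illustration of why partitioning shrinks the attack surface, the toy model below treats the kernel as a call graph, assigns functions to partitions, and compares what a compromised function can reach with and without enforced boundaries. Everything here, the function names, partitions, and gates, is hypothetical; real partitioning enforces boundaries with hardware mechanisms, not a graph walk.

```python
# Toy model of attack surface reduction via kernel partitioning.
# Nodes are kernel functions; edges are calls. A compromised entry
# point can reach everything linked into a monolithic kernel, but
# only its own partition (plus vetted gates) once boundaries exist.

# Hypothetical call graph: caller -> callees.
calls = {
    "sys_read":  ["vfs_read"],
    "vfs_read":  ["ext4_read", "net_recv"],
    "ext4_read": ["block_io"],
    "net_recv":  ["nic_dma"],
    "nic_dma":   [],
    "block_io":  [],
}

# Hypothetical partitioning: core vs. filesystem vs. network driver.
partition = {
    "sys_read": "core", "vfs_read": "core",
    "ext4_read": "fs",  "block_io": "fs",
    "net_recv": "net",  "nic_dma": "net",
}
# Cross-partition calls allowed only through vetted gates.
gates = {("vfs_read", "ext4_read")}

def reachable(entry, partitioned):
    """Functions reachable from `entry`, optionally enforcing partitions."""
    seen, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        for callee in calls.get(fn, []):
            crosses = partition[fn] != partition[callee]
            if partitioned and crosses and (fn, callee) not in gates:
                continue  # blocked by the partition boundary
            stack.append(callee)
    return seen

entry = "vfs_read"  # assume this function is compromised
print("Monolithic reach: ", sorted(reachable(entry, partitioned=False)))
print("Partitioned reach:", sorted(reachable(entry, partitioned=True)))
```

In this toy run the partitioned kernel cuts the compromised function's reach from five functions to three, blocking the path into the network driver; the project's actual challenge is enforcing such boundaries in a real monolithic kernel at acceptable cost.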

Dr Sanjay Rawat (Bristol) 

Dr Emma Slade (Bristol)

Tobias Weickert

Security habits: designing an at-scale intervention for security fatigue

The concept of habit is widely studied in the psychological sciences - especially in social psychology. Verplanken (2018, p.4) defines habits as "memory-based propensities to respond automatically to specific cues, which are acquired by the repetition of cue-specific behaviours in stable contexts." This definition casts a habit as a cognitive structure involving a cued response, rather than the act itself. Consequently, this definition sets formal use of the term apart from colloquial use: informally, any act that tends to be repeated can be discussed under that label.

The concept of habit has thus far been insufficiently investigated in the field of cybersecurity. Although the term is used in casual conversation and research here too, its exact usage is generally left unspecified. As the wealth of research findings from the psychological sciences applies only to phenomena that can formally be conceptualised in the above terms, the potential implications of these findings for the field of cybersecurity are unclear. The proposed research consequently aims to explicate the differences between prevalent folk models of habit in cybersecurity and formal models of habit in psychology through a qualitative, semi-structured interview-based study. The findings from this initial study will inform further research.

Professor Adam Joinson (Bath)

Dr Barney Craggs (Bristol)
