Matt Clifford

General Profile:

I first discovered the beauty (and challenges) of AI through various projects during my undergraduate degree, an interest that was solidified by my dissertation, which focused on techniques to reduce the computational and data requirements of object detection algorithms. Since graduating, I have worked in the post-production film industry, exploring and integrating AI-based computer vision algorithms for tasks such as colourisation, depth estimation and slow motion.
Although my main area of interest has been computer vision, I hope to explore and collaborate on other exciting areas of study and research offered by the CDT, such as NLP, computational neuroscience, FATE (fairness, accountability, transparency, ethics) and, most crucially, interaction.
Outside of work I love to ride bikes in any form, whether road cycling or mountain biking. I'm also a keen mechanic, constantly tweaking, fixing and building my own and friends' bikes. There is no better feeling than riding a bike that you have built yourself!

Research Project Summary:

Sustainability is a motivating factor for change in many aspects of society, and there is a growing culture of using resources efficiently. However, this attitude is not always shared when designing and implementing AI pipelines, where all available power-hungry compute and large-scale data resources are often thrown at an AI task.

Considering recent events like the energy crisis, it is more relevant than ever to take sustainability into account when researching and deploying AI systems. I think that the interaction between AI and sustainability is one of the more topical influences that AI is going to have on our society.

For my PhD project I plan to create tools for a sustainability-focused AI framework that helps AI and machine learning practitioners interact with AI in a way that is not only responsible but also sustainable. The framework is organised around three main stages of the AI pipeline: data, models and evaluation. I plan to answer key questions at each stage:

1) For data: how much data is sufficient to train AI algorithms, and what methods exist to reduce data requirements? There is also a need to assess data quality, and how similar different datasets and tasks are, before any models are trained.

2) For models: are there ways to reuse pre-existing models so that resources are not wasted on creating new ones? This will draw on techniques such as transfer and multi-task learning, as well as exploring and developing others.

3) For evaluation: can we trust models more through convenient and accurate evaluation? This links back to reusing and adapting models when they fail during evaluation, so that models are not simply thrown away when their performance degrades. This increases model lifespan and reduces the need to train models from scratch.
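The reuse idea behind (2) and (3) can be illustrated with a toy warm-start experiment: initialising from a related model's weights reaches a solution in fewer optimisation steps than training from scratch. Below is a minimal sketch in Python, using a single-parameter quadratic loss as a stand-in for a real training objective; the tasks, values and learning rate are all purely illustrative, not part of the project itself:

```python
def steps_to_converge(w_start, w_target, lr=0.1, tol=1e-3):
    """Run gradient descent on L(w) = (w - w_target)**2 and count steps."""
    w, steps = w_start, 0
    while abs(w - w_target) > tol:
        w -= lr * 2 * (w - w_target)  # dL/dw = 2 * (w - w_target)
        steps += 1
    return steps

# Hypothetical tasks: task B's optimum (3.0) lies close to task A's (2.8).
from_scratch = steps_to_converge(0.0, 3.0)  # cold start, far from optimum
warm_start = steps_to_converge(2.8, 3.0)    # reuse task A's trained weight

print(from_scratch, warm_start)  # → 36 24
```

Real transfer learning operates on millions of parameters rather than one, but the principle is the same: starting near a related task's solution cuts the compute (and data) needed to reach the new one.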

I will apply these key questions, methodologies and ideas to the area of sim-to-real transfer in tactile robotics. In this domain, touch sensors are used for tasks such as object pushing or grasping. To reduce real-life data collection, as well as for safety, the underlying models are often trained in simulation. The work to bring sustainability-focused AI to simulation-to-real tactile robotics already started during my summer project as part of the CDT programme. There, I investigated how to reduce the data requirements for transferring simulated models to real life by reusing the knowledge acquired in similar models. This is important because current methods do not make use of existing, available models; instead, they train a new and costly model from scratch for each task.
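A standard tool for asking "how much data is enough" is a learning curve: train on increasingly large subsets and watch where validation error plateaus, since past that point extra data buys little accuracy. A minimal, self-contained sketch, where the task (fitting the slope of noisy linear data) is purely illustrative and unrelated to any specific tactile-robotics model:

```python
import random

random.seed(0)

# Illustrative task: recover the slope of y = 3x + noise from samples.
xs = [random.uniform(-1, 1) for _ in range(1100)]
data = [(x, 3.0 * x + random.gauss(0, 0.5)) for x in xs]
train, val = data[:1000], data[1000:]

def fit_slope(points):
    # Closed-form least squares through the origin: w = sum(xy) / sum(x^2).
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    return sxy / sxx

def val_error(w):
    # Mean squared error of predictions w * x on the held-out set.
    return sum((w * x - y) ** 2 for x, y in val) / len(val)

# Learning curve: validation error as a function of training-set size.
for n in [10, 50, 100, 500, 1000]:
    print(f"n={n:4d}  val MSE={val_error(fit_slope(train[:n])):.4f}")
# Once the curve flattens near the noise floor, the dataset is already
# "sufficient" for this model, and further collection is wasted effort.
```

The same diagnostic applies to simulated training data: if the curve for a sim-trained model has plateaued, collecting more simulated samples is not where the sustainability gains are.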


