Jonathan Erskine

General Profile:

I started out with a relatively traditional education in aerospace engineering, and it was only in the late stages of my degree that I had my first tangible experience of software engineering, computer vision and control systems, developing a growing passion for the development of “smart” machines. In industry, I have worked first for a small medical start-up and then on missiles and weapon systems, dealing primarily with the present and future challenges of the air domain across integration and control, concept development and user requirements management.

Over the four years I have spent working as an engineer, I have observed an increasingly prominent dialogue about the power and potential of “AI”, often paired with ignorance of its underlying principles and limitations. I have joined the Interactive AI CDT so that I can help inform and shape the discussion around AI as we continue to expand the scope of applications available to us, dispelling common myths and ensuring we are putting our best foot forward.

Reflecting on my experience in the Future Systems sector of defence, I believe it is important to answer the “why” as much as the “how” - what we should do is as important as what we can do, and I hope to contribute to both areas over the course of my research by focusing on ethical frameworks as much as on the technical challenges.

Research Project Summary:

In a world where AI agents are increasingly embedded in critical applications, it is crucial to understand the mechanisms that drive these agents. This research sits within the broad domain of Human-Machine Communication, aiming to contribute to the methods we have at our disposal for both teaching and learning from machines in an interpretable way: that is, communication that is intelligible to both human and machine, and that effectively improves performance.

My current research investigates mechanisms for communicating richer annotations from human to machine in a supervised learning setting. Supervised machine learning relies on the provision of labelled data, with performance typically correlated with the amount of training data available. In a recent paper we investigate the use of more complex forms of labels for learning in situations where access to data is limited – although at the expense of a more involved annotation process. We propose a novel loss function which compares, for each data point, expertly curated directions towards the decision boundary against the gradients of the predictive model. As a proof-of-concept, we generate random direction vectors around each training point and indicate where each vector intersects the decision boundary. In effect, we are providing an indication of the directions in which we know the classification to change, which we refer to as “expert data”.
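The proof-of-concept annotation process could be sketched as follows. This is a minimal illustration, not the paper's implementation: the XOR labelling rule, the 0.5 thresholds, the candidate-sampling counts and the step sizes are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def xor_label(x):
    # Assumed ground-truth XOR labelling on the plane:
    # class 1 iff exactly one coordinate exceeds 0.5.
    return int((x[0] > 0.5) != (x[1] > 0.5))

def expert_direction(x, n_candidates=64, max_step=1.0, n_steps=50):
    """Generate one piece of 'expert data' for point x: a random unit
    direction along which the true class changes, plus the distance
    along that direction at which the change is first observed."""
    y = xor_label(x)
    for _ in range(n_candidates):
        v = rng.normal(size=2)
        v /= np.linalg.norm(v)
        # Walk along the ray x + t*v and report the first label flip.
        for t in np.linspace(0.0, max_step, n_steps)[1:]:
            if xor_label(x + t * v) != y:
                return v, t
    return None  # no boundary crossing found within max_step
```

In the paper the direction vectors are randomly generated in exactly this spirit; a counterfactual generator would replace the rejection sampling above with directions chosen to be maximally informative.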

Combining cross-entropy loss with our proposed loss function enables us to penalise both incorrectly labelled instances and regions of the model where the gradients are out of alignment with our expert data. This results in models which separate the XOR dataset with fewer samples, and in fewer training epochs, than models which use only standard cross-entropy loss. While this proof-of-concept demonstrates improved learning from additional expert data, the annotations are randomly generated and inefficient. Future work will generate counterfactual direction vectors to better represent expert input and investigate where these costly labels may be most effectively applied.
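The combined objective could be sketched as below. This is an illustrative stand-in, not the paper's loss: it uses a simple logistic model (whose input gradient is available in closed form), a cosine-similarity alignment penalty, and a weighting `lam`, all of which are assumptions for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def combined_loss(w, b, X, y, expert_dirs, lam=1.0):
    """Cross-entropy plus a gradient-alignment penalty.

    expert_dirs[i] is a unit vector from X[i] towards the decision
    boundary (the 'expert data'). The penalty is 1 minus the (signed)
    cosine similarity between the model's input gradient at X[i] and
    the expert direction."""
    p = sigmoid(X @ w + b)
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Input gradient of p for a logistic model: dp/dx = p(1-p) * w.
    grads = (p * (1 - p))[:, None] * w[None, :]
    # The predicted probability should rise towards the boundary for
    # class-0 points and fall towards it for class-1 points.
    signs = np.where(y == 1, -1.0, 1.0)
    cos = np.sum(grads * expert_dirs, axis=1) / (
        np.linalg.norm(grads, axis=1)
        * np.linalg.norm(expert_dirs, axis=1) + 1e-12)
    align = np.mean(1.0 - signs * cos)
    return ce + lam * align
```

With two points either side of the boundary x1 = 0 and expert directions pointing towards it, a weight vector perpendicular to the boundary incurs no alignment penalty, while one parallel to it is penalised.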

Additional research includes surveying the literature on emergent communication, multi-agent reinforcement learning and game theory to understand the nature and consequences of complex interactions in multi-agent systems. I hope to develop a test bed in collaboration with our industrial sponsor, Thales, to understand how intelligent agents can influence, and be influenced by, human collaborators. Such interactions must improve system performance while retaining a threshold of interpretability and/or predictability for the human collaborator. This work aims to contribute to the field of explainable AI by demonstrating a framework for effective human-AI teaming.
