The Bristol Interactive AI Summer School (BIAS) 2023
The Interactive AI CDT hosted 'BIAS', an in-person summer school, between the 5th and 7th of September 2023. For three days, the fundamentals of and the latest progress in key areas of AI were discussed by a range of experts. This year, we focussed on language models and their safety. BIAS particularly welcomed PhD students and early-career researchers in AI and neighbouring areas.
This year's event took place at the Watershed, Bristol.
Public Programme
Tuesday 5th September, 2023
09.00-09.45: Registration
09.45: Welcome from the IAI CDT Director, Professor Peter Flach
10.00: Speaker: Dr Mike Wray (University of Bristol) "Bridging the Gap between Vision & Language"
Talk title: "Bridging the Gap between Vision & Language"
Abstract:
"A picture is worth a thousand words" - language can be used to describe images, videos, and other forms of visual information, which we as humans do with ease. Teaching computers to understand language or vision alone is a challenging task, let alone both. In this talk I will present background and current methods in deep learning systems for vision-language tasks, as well as future problems yet to be tackled.
11.30: Speaker: Dr James Cussens (University of Bristol) "Causal Representation Learning"
Talk title: "Causal Representation Learning"
Abstract:
There is growing interest in making connections between research in (learning) causal graphs and machine learning - particularly representation learning [1]. Causal representation learning is the task of learning a causal model where the causal variables are *latent* (i.e. unobserved). For example, we may wish to learn a causal model with variables corresponding to the positions of discrete objects (e.g. a robot's fingers, objects to be manipulated), but our data may just be raw image data (e.g. a video of a robot). In this talk I will give a high-level overview of current research in this area.
14.15: Speaker: Dr Kacper Sokol "The Difference Between Interactive AI and Interactive AI"
Talk title: "The Difference Between Interactive AI and Interactive AI"
Abstract:
Over the past few years we have seen the explainability of AI and ML models pivot towards becoming more human-centred, inspired by insights from the social sciences. Human explanations tend to be contrastive and social, which prompted researchers to embrace counterfactual explanations and, to a lesser degree, interactive explainability that allows the recipients to co-create the explanations. Additional real-world desiderata – e.g., feasibility and actionability of explanatory insights – have also been translated into technical requirements. Yet such a conceptualisation of explainability is rarely deployed, especially in high-stakes domains, such as healthcare, where it is most needed. In this talk I will discuss my journey of going back to the social sciences literature, looking at human decision-making in cognitive and behavioural psychology, to find the missing pieces of the puzzle. We will overview different levels of data-driven automation and relate them to how humans and algorithms make decisions. We will then speculate how these insights from psychology could be operationalised and applied to healthcare, specifically the detection, management and treatment of paediatric sepsis, using the example of antibiotic exposure. These findings contribute to a novel conceptualisation of interactive explainability that accounts for the social and organisational aspects of this process, in contrast to a purely technical viewpoint manifested in functional interactivity and logical counterfactuals. Such an approach promises to augment decision-making workflows instead of disrupting them, thus enabling the adoption of AI and ML systems in high-stakes domains.
15.30: Poster Sessions: Group A + Group B
Wednesday 6th September, 2023
09.00: Registration
09.30: Speaker: Dr Edwin Simpson (University of Bristol) "ChatGPT Architecture and Evolution"
Talk title: "ChatGPT Architecture and Evolution"
Abstract:
Large language models (LLMs) such as ChatGPT have leapt from being a niche academic interest to a game-changer with a wide range of everyday uses. This talk introduces the technology behind them: their training, how they learn to follow instructions, and the architecture of their models. We will also look at the models that came before to understand why interacting with LLMs is so different, then take a look at some open problems with LLMs, such as ‘hallucinations’.
10.00: Speaker: Professor El-Mahdi El-Mhamdi (Ecole Polytechnique/Calicarpa) "Security of large AI models"
Talk title: "Security of Large AI Models"
Abstract:
This talk introduces the problem of securely training AI models in the presence of malicious actors, who can use corrupted data, disinformation spread through fake accounts and social-media astroturfing, or, worse, compromised machines to influence the outcome of training an AI model. Important solutions for securing the training of AI models, proposed by the statistics and machine learning communities, will be reviewed. The talk will then provide a few mathematical insights into why most of these solutions fail in the face of larger and larger AI models, arguing for an inevitable impossibility of securing AI models without limiting their number of parameters.
11.15: Speaker: Jess Rumbelow (Leap Labs) "Model-agnostic, data-independent interpretability"
Talk title: "Model-agnostic, data-independent interpretability"
Abstract:
Increasingly powerful AI systems demand better interpretability methods to predict and prevent dangerous or embarrassing failures in deployment. Jessica introduces some of Leap's novel interpretability algorithms and demonstrates how they can be used to better understand what models have learned and when they might fail.
12.15: Speaker: Huw Day (University of Bristol) "Data Unethics Club"
Talk Title: "Data Unethics Club"
Abstract:
Data Ethics Club is a “journal” club about doing data science ethically. “Journal” because we will also read blog posts, (parts of) books, or watch videos. The thing that makes Data Ethics Club great is that we provide a space for people to discuss ideas about different aspects of data ethics.
This talk will feature a series of games and activities in which the audience will split into teams and perform tasks such as trying to spot when ChatGPT has been used to write a thesis abstract, jailbreaking ChatGPT into performing devious tasks, and seeing whether their peers would lie to them about what dirty work they have had ChatGPT do for them.
13.00: Lunch
14.00: Speaker: Sven Hollowell (University of Bristol) "ChatGPT Automate Anything"
Talk title: "ChatGPT Automate Anything"
Abstract:
Can GPTs become autonomous by using tools? This session demonstrates a working prototype of a GPT-based personal assistant, capable of using the same desktop applications as a human.
14.25: Speaker: Professor Andrew Charlesworth (University of Bristol) "LMs & Law"
Talk title: "ChatGPT and the Law: New Technology, Old Problems?"
Abstract:
LLMs, such as ChatGPT, have generated considerable attention in academia and the media, with some suggesting that their impact will require wholesale reconsideration of the law (and ethics) applicable to them, and others suggesting that the technology concerned raises few issues that cannot already be addressed via existing legal practice. As is often the case with new information technologies, the outcome is likely to fall somewhere between these positions. While it may sometimes appear that a particular outcome is inevitable, exactly where we end up will be determined by a complex interrelation between jurisprudence, politics, commercial pragmatics and social acceptance/rejection.
15.30: Speaker: Harry Field (University of Bristol) "ChatGPT-enabled search in company data: tutorial & critical review"
Talk title: "ChatGPT-enabled search in company data: tutorial & critical review"
followed by:
16.00: Harry Field (University of Bristol) Workshop: "Breaking large language models through hands-on interaction: A discovery of limitations through play"
Thursday 7th September, 2023
09.00: Registration
09.30: Speaker: Dr Dandan Zhang (University of Bristol) "Robot Learning for Dexterous Manipulation"
Talk title: "Robot Learning for Dexterous Manipulation"
Abstract:
The landscape of robotics has undergone a significant shift over the past decade, transitioning from rigid, pre-programmed operations to more adaptable, intuitive systems. In this talk, I will focus on the use of robot learning algorithms to enhance the dexterous manipulation capabilities of intelligent robotics. I will highlight the exceptional abilities of robots achieved through imitation learning and introduce state-of-the-art methods to address the current limitations of such approaches. I envision intelligent robots reshaping our world by providing tangible benefits in our daily life, contributing to the healthcare system, and assisting humans in hazardous environments.
10.15: Speaker: Dr Daniel Schien (University of Bristol) "Sustainability of AI within global carbon emissions"
Talk title: "Sustainability of AI within global carbon emissions"
11.30: Speaker: Professor Kerstin Eder (University of Bristol) "The AI Verification Challenge"
Talk title: "The AI Verification Challenge"
14.00-16.45: Student Collaboration Session