The Bristol Interactive AI Summer School (BIAS)
The Interactive AI CDT was delighted to host 'BIAS', a summer school held from the Second to the Seventh of September 2021.
This unique online event focused on machine learning and other forms of data-driven AI, intelligent reasoning and other forms of knowledge-intensive AI, human-AI interaction, and how to do all this in a responsible manner. For four half-days the fundamentals and latest progress in these key areas of AI were discussed by a range of international experts.
The School was aimed at PhD students and early-career researchers in AI and neighbouring areas.
Public Programme
Thursday the Second of September, 2021. 'Interactive AI'
2:30pm - 3:30pm: Dr Martin Porcheron: Studying Voice Interfaces in the Home
In this talk I will introduce our work on understanding how families make use of and embed voice-based interactive AI technologies in the home. We collected audio data of actual family interactions with an Amazon Echo over one-month periods using a purpose-built recording device. We adopted an analytic approach informed by ethnomethodology and conversation analysis to document the methodical practices of interactive AI users, and how their use is accomplished in the complex social life of the home. I'll reflect on our approach to this research, and on the importance of studying interactions with technology in the world, outside of the laboratory. This approach allowed us to understand more about how device use is made accountable to, and embedded into, conversational settings such as family dinners, where various simultaneous activities are being achieved.
3:40pm - 4:40pm: Dr Alison Smith-Renner: Designing for the Human-in-the-Loop: Transparency and Control in Interactive ML
Alison Smith-Renner is a Senior Research Scientist at Dataminr, where she designs, builds, and evaluates intelligent systems for augmenting human workflows. Her research interests lie at the intersection of AI and HCI, focusing on transparency and control for human-in-the-loop systems to engender appropriate trust and improve human performance. Alison received her Ph.D. in Computer Science from the University of Maryland, College Park. She is active in the explainable AI and human-centered AI research communities.
5:00pm - 6:00pm: Prof. Ben Shneiderman: Human-Centered AI: A New Synthesis
A new synthesis is emerging that integrates AI technologies with HCI approaches to produce Human-Centered AI (HCAI). Advocates of this new synthesis seek to amplify, augment, and enhance human abilities, so as to empower people, build their self-efficacy, support creativity, recognize responsibility, and promote social connections.
Researchers, developers, business leaders, policy makers and others are expanding the technology-centered scope of Artificial Intelligence (AI) to include Human-Centered AI (HCAI) ways of thinking. This expansion from an algorithm-focused view to a human-centered perspective can shape the future of technology so as to better serve human needs. Educators, designers, software engineers, product managers, evaluators, and government agency staffers can build on AI-driven technologies to design products and services that make life better for their users. These human-centered products and services will enable people to better care for each other, build sustainable communities, and restore the environment. The passionate advocates of HCAI are devoted to furthering human values, rights, justice, and dignity, by building reliable, safe, and trustworthy systems.
The talk will particularly cover issues related to explainable AI (XAI). These ideas are drawn from Ben Shneiderman’s forthcoming book (Oxford University Press, January 2022). Further information at: https://hcil.umd.edu/human-centered-ai
Friday the Third of September, 2021. 'Knowledge-Intensive AI'
1:30pm - 2:30pm: Prof. Pınar Yolum: Personal Privacy Assistants: Representations and Mechanisms
Privacy is a major concern in collaborative Web systems, such as online social networks or Internet of Things applications. Contrary to traditional Web systems, collaborative systems allow their users to create and share content about themselves as well as about others. Since different individuals may have different privacy constraints, sharing this co-owned content often creates privacy conflicts. We advocate the use of personal privacy assistants to help users manage the privacy of their content online. Each personal assistant represents a single user and acts on behalf of the user to make privacy decisions. When some content that belongs to multiple users is about to be shared, the personal assistants of the users employ a privacy decision mechanism, such as negotiation or argumentation, to regulate the privacy of the content. This requires each personal assistant to capture (and learn) the preferences of its user and contribute to the mechanism accordingly. This talk will discuss various challenges and solutions in designing such personal privacy assistants.
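As a concrete (and deliberately simplified) illustration of such a mechanism, the sketch below resolves a sharing decision over co-owned content with a "most restrictive preference wins" rule. This is not the speaker's actual negotiation or argumentation method, and the audience categories are made up:

```python
# Toy sketch of a privacy decision mechanism for co-owned content.
# Each co-owner's personal assistant submits an audience preference;
# the mechanism grants the most restrictive one. Illustrative only.

AUDIENCES = ["private", "friends", "friends-of-friends", "public"]  # least to most open

def resolve_sharing(preferences):
    """Return the most restrictive audience among the assistants' preferences."""
    return min(preferences, key=AUDIENCES.index)

# A photo co-owned by three users whose assistants disagree:
decision = resolve_sharing(["public", "friends", "friends-of-friends"])
print(decision)  # -> "friends"
```

A real mechanism, as the abstract notes, would involve negotiation or argumentation between the assistants rather than a fixed aggregation rule.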
Pınar Yolum is a faculty member at Utrecht University, Department of Information and Computing Sciences. She has (co-)authored more than 100 papers in selected journals and conferences on trust, commitments, and privacy. She serves on the editorial boards of various journals, including the Journal of Autonomous Agents and Multiagent Systems, ACM Transactions on Internet Technology, and IEEE Internet Computing. She served as Program Co-Chair of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS) in 2011 and as General Co-Chair in 2015. She regularly contributes to activities promoting women's participation in computer science.
2:40pm - 3:40pm: Dr Jeff Pan: Introduction to Knowledge Graphs
4:00pm - 5:00pm: Prof. Pascal Hitzler: Knowledge graph reasoning with deep learning (deep deductive reasoning)
Symbolic knowledge representation and reasoning and deep learning are fundamentally different approaches to artificial intelligence with complementary capabilities. The former are transparent and data-efficient, but they are sensitive to noise and cannot be applied to non-symbolic domains where the data is ambiguous. The latter can learn complex tasks from examples and are robust to noise, but they are black boxes, require large amounts of (not necessarily easily obtained) data, and are slow to learn and prone to adversarial examples. Either paradigm excels at certain types of problems where the other performs poorly. In order to develop stronger AI systems, integrated neuro-symbolic systems that combine artificial neural networks and symbolic reasoning are being sought. In this context, one of the fundamental open problems is how to perform logic-based deductive reasoning over knowledge bases by means of trainable artificial neural networks. In this talk we will present and discuss recent advances made on this topic, and concrete pointers to open research questions.
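For readers unfamiliar with the symbolic side of this problem, the sketch below shows the kind of deduction a neural network would need to emulate: classical forward chaining of a transitivity rule over a tiny, hypothetical knowledge base of triples. It stands in for deductive reasoning generally, not for any method from the talk:

```python
# Illustrative only: the symbolic-deduction task that "deep deductive
# reasoning" asks a neural network to perform, here solved classically
# by forward chaining a transitivity rule to a fixed point.

def forward_chain(facts):
    """Derive the transitive closure of subClassOf triples."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, _, b) in list(derived):
            for (c, _, d) in list(derived):
                if b == c and (a, "subClassOf", d) not in derived:
                    derived.add((a, "subClassOf", d))
                    changed = True
    return derived

kb = {("cat", "subClassOf", "mammal"), ("mammal", "subClassOf", "animal")}
closure = forward_chain(kb)
print(("cat", "subClassOf", "animal") in closure)  # -> True
```

The open problem the talk addresses is how a trainable network can learn to produce such entailments, rather than having the rule hard-coded as above.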
Pascal Hitzler is Professor and endowed Lloyd T. Smith Creativity in Engineering Chair and Director of the Center for Artificial Intelligence and Data Science (CAIDS) at the Department of Computer Science at Kansas State University. Until July 2019 he was endowed NCR Distinguished Professor, Brage Golding Distinguished Professor of Research, and Director of Data Science at the Department of Computer Science and Engineering at Wright State University in Dayton, Ohio, U.S.A. He is on the editorial board of several journals and book series and a founding steering committee member of the Neural-Symbolic Learning and Reasoning Association and the Association for Ontology Design and Patterns, and he frequently acts as conference chair in various functions, including e.g. General Chair (ESWC 2019, US2TS 2019), Program Chair (FOIS 2018, AIMSA 2014), Track Chair (ISWC 2018, ESWC 2018, ISWC 2017, ISWC 2016, AAAI-15), Workshop Chair (K-CAP 2013), Sponsor Chair (ISWC 2013, RR 2009, ESWC 2009), and PhD Symposium Chair (ESWC 2017). For more information about him, see http://www.pascal-hitzler.de.
Monday the Sixth of September, 2021. 'Responsible AI'
09:30am - 10:30am: Prof. Toby Walsh: AI and Ethics: why all the fuss?
There’s a lot of discussion in many different fora about AI and Ethics. In this talk, Toby Walsh will identify the new issues AI brings to the table, as well as where AI requires us to address otherwise old issues. He will cover topics from driverless cars to Cambridge Analytica.
Toby Walsh is a Laureate Fellow and Scientia Professor of AI at the University of New South Wales and Data61, and an adjunct professor at QUT. He was named by The Australian newspaper as one of the "rock stars" of Australia's digital revolution. Professor Walsh is a strong advocate for limits to ensure AI is used to improve our lives. He has been a leading voice in the discussion about autonomous weapons (aka "killer robots"), speaking at the UN in New York and Geneva on the topic. He is a Fellow of the Australian Academy of Science and a recipient of the NSW Premier's Prize for Excellence in Engineering and ICT. He appears regularly on TV and radio, and has authored two books on AI for a general audience, the most recent entitled "2062: The World that AI Made".
11:00am - 12:00pm: Dr Nirav Ajmeri and Dr Pradeep Murukannaiah: Ethics in Sociotechnical Systems
The surprising capabilities demonstrated by AI technologies overlaid on detailed data and fine-grained control give cause for concern that agents can wield enormous power over human welfare, drawing increasing attention to ethics in AI. Ethics is inherently a multiagent concern---an amalgam of (1) one party's concern for another and (2) a notion of justice. To capture this multiagent conception, this tutorial introduces ethics as a sociotechnical construct. Specifically, we demonstrate how ethics can be modeled and analyzed, and how requirements on ethics (value preferences) can be elicited, in a sociotechnical system (STS). An STS comprises autonomous social entities (principals, i.e., people and organizations), technical entities (agents, who help principals), and resources (e.g., data, services, sensors, and actuators). The tutorial includes three key elements: (1) specifying a decentralized STS, representing the ethical postures of individual agents as well as the systemic (STS-level) ethical posture; (2) reasoning about ethics, including how individual agents can select actions that align with the ethical postures of all concerned principals; and (3) eliciting the value preferences (which capture ethical requirements) of stakeholders using a value-based negotiation technique. We build upon our earlier tutorials on engineering ethics in sociotechnical systems (e.g., at AAMAS 2021, AAMAS 2020, IJCAI 2020, and ACSOS 2020) and on engineering a decentralized multiagent system (e.g., at AAMAS 2015 and IJCAI 2016). However, we extend the previous tutorials substantially, including ideas on ethics and values applied to AI. Attendees will learn the theoretical foundations as well as how to apply those foundations to systematically engineer an ethical STS.
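To make the second element, value-aligned action selection, concrete, here is a toy sketch in which an agent picks the action whose effects best match the aggregated value preferences of all concerned principals. The values, weights, and action names are invented for illustration and are not the tutorial's actual formalism:

```python
# Toy illustration: an agent selecting an action that aligns with the
# value preferences of all concerned principals. All numbers are made up.

def choose_action(actions, principals):
    """actions: {name: {value: effect}} maps each action to its effect on
    each value; principals: list of {value: preference weight} dicts.
    Score an action by summing preference-weighted effects over all
    principals, and return the highest-scoring action."""
    def score(effects):
        return sum(p.get(v, 0) * e for p in principals for v, e in effects.items())
    return max(actions, key=lambda a: score(actions[a]))

principals = [{"privacy": 0.9, "convenience": 0.1},
              {"privacy": 0.3, "convenience": 0.7}]
actions = {"share_location": {"privacy": -1.0, "convenience": +1.0},
           "ask_first":      {"privacy": +1.0, "convenience": -0.2}}
print(choose_action(actions, principals))  # -> "ask_first"
```

In the tutorial's setting such preferences would be elicited from stakeholders by negotiation rather than hard-coded, and the reasoning would be decentralized across agents.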
Tuesday the Seventh of September, 2021. 'Data-Driven AI'
1:30pm - 2:30pm: Prof. Robert Jenssen: Industrial and basic deep learning research for computer vision with limited labels
This talk takes as its starting point an industrial innovation project on monitoring power lines using deep learning for computer vision. It briefly describes a multi-stage pipeline, based on traditional supervised object detection and localization, for power line monitoring from aerial imagery, developed in collaboration with a company. The need to detect the power lines themselves is discussed, leading to the development of a novel line segment detector trained entirely on synthetic images, owing to the lack of annotated data for this problem. Recognizing the importance of learning from limited labeled data more broadly, a novel approach to few-shot learning is then presented. Finally, for the extreme case of having no labeled image data available at all, a new deep learning method for multi-view clustering is discussed.
Robert Jenssen is the Director of Visual Intelligence, a Norwegian Centre for Research-based Innovation: http://visual-intelligence.no. He is a professor and head of the Machine Learning Group at UiT The Arctic University of Norway, and in addition an adjunct professor at the University of Copenhagen and at the Norwegian Computing Center in Oslo, Norway. Jenssen's research interests are in deep neural networks, kernel machines, and information-theoretic learning, with applications in health, computer vision, and industry. He has served on the IEEE Technical Committee on Machine Learning for Signal Processing and on the Governing Board of IAPR, and he is an associate editor of the journal Pattern Recognition. He is a general chair of the annual Northern Lights Deep Learning Conference (http://nldl.org).
2:40pm - 3:40pm: Jonas Pfeiffer: Adapters in Transformers. A New Paradigm for Transfer Learning...?
Adapters have recently been introduced as an alternative transfer learning strategy. Instead of fine-tuning all weights of a pre-trained transformer-based model, small neural network components are introduced at every layer. While the pre-trained parameters are frozen, only the newly introduced adapter weights are fine-tuned, achieving an encapsulation of the downstream task information in designated parts of the model. In this talk we will provide an introduction to adapter training in natural language processing. We will go into detail on how the encapsulated knowledge can be leveraged for compositional transfer learning, as well as cross-lingual transfer. We will briefly touch on the efficiency of adapters in terms of trainable parameters as well as (wall-clock) training time. Finally, we will provide an outlook on recent alternative adapter approaches and training strategies.
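A back-of-the-envelope calculation shows why fine-tuning only adapter weights is parameter-efficient. The sketch below assumes BERT-base-like dimensions (12 layers, hidden size 768, roughly 110M parameters) and two bottleneck adapters per layer; the exact placement and sizes vary between adapter variants, so treat the numbers as indicative:

```python
# Rough parameter count for bottleneck adapters, under assumed
# BERT-base-like dimensions. Each adapter is a down-projection to a
# small bottleneck followed by an up-projection back to hidden size.

def adapter_params(layers=12, hidden=768, bottleneck=64, per_layer=2):
    # Down-projection: hidden*bottleneck weights + bottleneck biases.
    # Up-projection: bottleneck*hidden weights + hidden biases.
    one_adapter = (hidden * bottleneck + bottleneck) + (bottleneck * hidden + hidden)
    return layers * per_layer * one_adapter

trainable = adapter_params()
total = 110_000_000  # rough size of the frozen pre-trained model
print(f"{trainable:,} trainable ({100 * trainable / total:.1f}% of the model)")
```

Only those ~2.4M adapter weights are updated during fine-tuning, a few percent of the full model, which is what makes storing one adapter per downstream task cheap.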
Jonas Pfeiffer is a 3rd-year PhD student at the Ubiquitous Knowledge Processing Lab at the Technical University of Darmstadt. He is interested in compositional representation learning in multi-task, multilingual, and multi-modal contexts, and in low-resource scenarios. Jonas received the IBM PhD Research Fellowship award in 2020. He has given invited talks in academia and industry, including at IBM Research, NEC Labs, the University of Cambridge, the University of Colorado Boulder, and the University of Mannheim.