Trisha Khallaghi
Academic Background
MEng Computer Science, University of Southampton (2016)
General Profile:
Hiya, I'm Trish. I completed my engineering foundation year and MEng in Computer Science back in 2016 at the University of Southampton. My dissertation was in the field of Music Information Retrieval (MIR) and involved writing a hybrid music recommender system. I spent nearly six years after that working as a software engineer at a few different companies, with most of my time spent at a music tech company (Last.fm) as a backend developer. For the last year and a bit, I was in music full-time.
Having encountered AI at different points in my education and career, I applied for the CDT to delve deeper into how it can serve as a tool within my areas of interest. My inclination toward the interactive aspect is rooted in my belief that promoting explainability and transparency within the field is crucial. I enjoy exploring interdisciplinary research areas that intersect with the creative arts within the context of AI, and have a particular interest in places where it overlaps with music. I'm currently looking into how we can make expressive music-making more accessible, addressing both physical and social accessibility aspects.
Research Project Summary:
The PhD aims to create an interactive music system in which users manipulate and explore data through sound via deformable interfaces driven by AI techniques, enabling individuals to experience complex data, such as ecological and historical phenomena, through immersive sound, tactile interfaces, and personalised interactions. The system will consist of several modular building blocks so that each component can be developed in isolation. Spatiotemporal data will be sonified by augmenting traditional sonification methods with AI, allowing the system to map data to sound in ways that align with different individuals' perceptions; this approach will draw on techniques found in music recommendation systems. The deformable interfaces will use interactive AI to map the physical interface to sound, taking into account a user's perceptual links between sound and shape, and this will be expanded further by exploring dynamic mappings that cater to different accessibility needs. The project draws inspiration from soundwalking, where individuals or groups walk through a space while focusing on the sounds of their environment. The goal is to broadcast the audio within a larger space using, for example, acousmoniums or line-array speakers. However, these approaches are often cost-prohibitive, so affordable audio broadcasting alternatives will also be explored.
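To make the modular idea more concrete, below is a minimal Python sketch of what one building block, a parameter-mapping sonifier with a swappable mapping function, might look like. The names (DataPoint, SoundEvent, Sonifier), the pitch/pan mapping, and the hook for a personalised mapping are illustrative assumptions for this sketch, not the project's actual design.

# Minimal sketch of one modular building block: a parameter-mapping sonifier.
# All names and mappings here are hypothetical and for illustration only.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DataPoint:
    """One spatiotemporal observation, e.g. a temperature reading."""
    time: float       # seconds since the start of the dataset
    latitude: float
    longitude: float
    value: float      # assumed normalised to 0..1 before sonification


@dataclass
class SoundEvent:
    """Abstract sound parameters handed to whichever synthesis backend is used."""
    onset: float      # seconds
    pitch: float      # MIDI note number
    pan: float        # -1 (left) .. 1 (right)
    amplitude: float  # 0..1


def default_mapping(point: DataPoint) -> SoundEvent:
    """Traditional parameter-mapping sonification: value -> pitch, longitude -> pan."""
    return SoundEvent(
        onset=point.time,
        pitch=48 + point.value * 36,                       # 0..1 mapped onto a 3-octave range
        pan=max(-1.0, min(1.0, point.longitude / 180.0)),  # clamp to the stereo field
        amplitude=0.3 + 0.7 * point.value,
    )


class Sonifier:
    """Modular block that turns data into sound events via a swappable mapping.

    A learned, personalised mapping (e.g. one adapted from recommender-style
    user feedback) could be dropped in by passing a different `mapping`.
    """

    def __init__(self, mapping: Callable[[DataPoint], SoundEvent] = default_mapping):
        self.mapping = mapping

    def sonify(self, points: List[DataPoint]) -> List[SoundEvent]:
        return [self.mapping(p) for p in points]


if __name__ == "__main__":
    # Tiny synthetic dataset: five points moving eastward with increasing values.
    data = [DataPoint(time=t, latitude=51.5, longitude=-90 + 30 * t, value=t / 4)
            for t in range(5)]
    for event in Sonifier().sonify(data):
        print(event)

Because the mapping function is the only thing a downstream component depends on, each block (data ingestion, sonification, interface-to-sound mapping, spatialisation) could in principle be developed and swapped in isolation, which is the point of the modular design described above.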
The project will address the following research questions: (1) How can AI be applied to dynamically personalise sound generation based on data sonification? (2) How can AI be used to optimise interface-to-sound mappings in deformable interfaces? (3) How can AI be used to create inclusive systems that accommodate diverse preferences and needs in immersive and participatory sound environments?
Engaging diverse audiences with complex topics has become challenging in a time characterised by overwhelming distractions. Traditional methods of presenting data can struggle to capture individuals' interest, even in dedicated settings such as museum exhibitions. Immersive spaces can tackle this by providing focused environments that promote deep engagement with a specific subject. Incorporating tactile and collaborative interfaces in these spaces encourages further engagement by allowing individuals to navigate data exploratively.
Allowing individuals to broadcast their sonic interpretation of data into a broader space is a unique form of communication that goes beyond traditional verbal exchanges. Immersive and participatory sound experiences have been shown to have therapeutic value and can foster a sense of engagement and belonging. Therefore, it is essential to take an inclusive approach to developing such a system by making complex subjects more accessible to individuals who may have previously felt out of place or underrepresented within specific contexts.
While various efforts have been made to incorporate AI techniques in the areas described, no overarching system combining these elements into a single framework has yet emerged. Applying AI techniques to different parts of the system can enhance the user experience by tailoring it to individual needs and perceptions. These techniques can improve sound generation, interface-to-sound mapping, and the spatial arrangement of sound to create a more immersive experience.
The project's findings can inform the development of future systems and frameworks. The modular design encourages the flexible development of compatible components, meaning that techniques from various sources can be integrated over time. The system has potential across fields where data sonification can encourage engagement with complex topics. It enables participatory sound exploration, allowing individuals to collaboratively manipulate sound environments in public installations such as museums. For example, in ecomusicology, environmental data can be sonified to highlight ecological phenomena like climate change.
Immersive sonic environments also offer therapeutic applications. By tailoring soundscapes to individual preferences and perceptual links and broadcasting them in shared spaces, a therapeutic auditory experience can be created for both users and listeners. Accessibility and inclusion are central to the system's design, enabling participation from individuals whom traditional engagement methods have marginalised. Additionally, the system can promote social accessibility by encouraging individuals (e.g. students) to engage with topics they might not previously have been exposed to, and by making complex subjects more approachable through playful, immersive interaction with the data.
Supervisors:
- Professor Atau Tanaka, School of Computer Science
- Dr Peter Bennett, School of Computer Science
Website: