Vanessa Hanschke

General Profile:

I have always enjoyed mixing humanities with technology, starting with a Bachelor's degree in Cognitive Science and then a Master's in Design Informatics. Right after handing in my Master's thesis on participatory design in health care, I started working as a data science consultant in Milan, where I helped Italian and international customers navigate the world of data and artificial intelligence.
I have a passion for the philosophical side of AI and like to ask myself questions such as: where is this progress leading our society, and how can everyone benefit from it? In my opinion, AI needs to be made more inclusive, for example by encouraging girls to break the traditional and statistical boundaries of technical domains and join the field of AI.

Research Project Summary:

Many historical injustices have been rediscovered in AI applications. Be it in the justice system (Dressel & Farid, 2018), job recruitment (Dastin, 2018) or public visibility (Buolamwini & Gebru, 2018), research has shown how structures of power are perpetuated through data into algorithms, and has highlighted the importance of incorporating ethical principles and values into design. In response, many ethical frameworks, fairness metrics and explainability techniques have been proposed to tackle the challenges of responsible, accountable AI. Now is the time to evaluate these efforts and address the question of how well these responsible AI tools fare in practice. My PhD proposal looks into how these methods play out in a living, breathing society.

Fairness, ethics, and value discussions are complex and often described as wicked problems (Strauß, 2021) that have no single clear solution. The perspectives of different stakeholders need to be considered and balanced against each other, and appropriate solutions can vary between application contexts. To analyse such complex scenarios, I propose to use different narrative tools over the course of my PhD. Storytelling can help us imagine future scenarios and give our technological ideas concrete contexts in which to test them, without causing actual harm to any of the people involved. Furthermore, stories can be used to give a voice to the lesser-heard participants of AI and offer an opportunity to empathise with disagreeing views. I am interested in exploring perspectives such as decolonial theory (Mohamed et al., 2020) to understand the structural power imbalances reflected in AI technology.

For my summer project, I focused on one specific actor in the AI pipeline: the data scientist. I set out to study the work of a data science team operating in the financial sector. The main goal of the summer project was to understand how this data science team already applies concepts such as explainability, ethics, bias and accountability in their everyday work, combining ethnographic analysis and design research. My project consisted of two main parts. Firstly, I carried out a rapid ethnography to analyse current views and practices in the daily workflow of data scientists. Secondly, I used design fiction memos (Wong, 2021) along with thematic analysis to analyse the collected qualitative data. My summer project report presents the three design fiction memos developed during the analysis of the ethnographic data, together with a thematic analysis of my ethnographic work and an evaluation of the design fiction memo method.

Following on from this summer project, my plan is to feed my results back to the data science team by further developing the design fiction memos into interactive probes. As future work, I also plan to investigate the perspectives of other AI stakeholders, such as the users of AI technology.
