Algorithms of Suspicion: Quasi-criminalisation and the erosion of worker data rights

Hosted by the Bristol Digital Futures Institute (BDFI)

Associate Professor Lilly Irani will examine the assemblage of policies, practices, and algorithms of suspicion that control workers’ access to wages and work on digital labour platforms. 

In her work, Lilly shows how “fraud” acts as a quasi-legal category that legitimises and protects platform operators’ unilateral decisions to fire workers. The case study begins with the problem of opaque account suspensions suffered by good-faith workers on the platform Amazon Mechanical Turk.

Through an investigation of patents, research papers, and industry documentation, the chapter constructs a view of the models and assumptions Amazon deploys to guess the difference between good and bad workers. These algorithms, and the opaque organisational routines that deploy them, subject workers to automated surveillance, suspicion, and termination, managing workers at scale and at a distance.

These practices may have discriminatory consequences, sometimes along legally recognised protected categories and sometimes not. Lilly concludes by arguing that existing digital rights frameworks must be revised to give workers rights and protections against platforms’ algorithmic forms of management.

About Lilly Irani
Lilly Irani is an Associate Professor of Communication & Science Studies at the University of California, San Diego, where she co-directs the Just Transitions Initiative. She also serves as faculty in the Design Lab, the Institute for Practical Ethics, and the program in Critical Gender Studies. She is the author of Chasing Innovation: Making Entrepreneurial Citizens in Modern India (Princeton University Press, 2019) and Redacted (with Jesse Marx) (Taller California, 2021).

Register for your free place