Existing work on artificial intelligence (AI) overwhelmingly focuses on the inner workings and computational models that sustain machine-learning algorithms. On the one hand, AI evangelists emphasize the potential of technological innovation to solve long-standing social problems. On the other hand, critical scholars point to the opacity and discriminatory impact of these “weapons of math destruction” (O’Neil 2016). What is missing is a discussion of the contexts in which AI is received and how those contexts mediate its social impact. This project takes a different approach. Instead of focusing on the technologies that sustain AI, I ask: who are the new actors working behind machine-learning algorithms? What are their incentives and career paths? How do they manipulate, “game,” or optimize AI algorithms? How do they understand their role in mediating opaque technologies? I examine these questions through an ethnographic study of three occupations: Search Engine Optimization specialists; “life hacks” bloggers; and social media branding coaches. These occupations grow “in the shadows” of AI in the sense that their practitioners claim to manipulate complex and opaque algorithmic formulas on a daily basis. All three occupations have emerged recently, and their practices often verge on the illegal or the fraudulent. Building on my previous work on “algorithms in practice” (Christin 2017), I argue that studying these new practices is essential to understanding the actual impact of AI on the social world.

Angèle Christin is an assistant professor in the Department of Communication and affiliated faculty in the Sociology Department and Program in Science, Technology, and Society at Stanford University. She studies how algorithms and analytics transform professional values, expertise, and work practices.

Her past research drew on ethnographic methods to analyze the case of web journalism, examining the growing importance of audience metrics (‘clicks’) in web newsrooms in the U.S. and France. She has also studied the construction, institutionalization, and reception of predictive algorithms in the U.S. criminal justice system.