Abstract

Large Language Models (LLMs) like ChatGPT have set in motion a series of crises, including disruptions to the labor force, education, and democracy. Some believe that rich technocratic ‘saviors’ should solve these crises. Naomi Klein, however, argues that this is a neoliberal fantasy: tech CEOs will not solve AI-related crises because they have a vested interest in perpetuating disaster capitalism and the social inequalities that keep wages low. Who, then, can solve the AI crisis? I submit that the answer is oppressed groups with experiential and intergenerational knowledge of crises. To oppressed folks, technological crises are not new, but merely an extension of hundreds of years of uninterrupted subjugation. The popular misconception that AI-related crises are ‘unprecedented’ is an example of what Kyle Whyte calls ‘crisis epistemology,’ a pretext of newness used to dismiss the intergenerational wisdom of oppressed groups. If AI-related crises really were new, Indigenous and disabled histories could teach us nothing about them; but they are not new. I argue that oppressed groups (rather than billionaire technocrats) should be at the forefront of AI discourse, research, and policymaking.
