Part two takes us to Venezuela, where AI data-labeling companies discovered cheap and desperate laborers in the midst of a catastrophic economic crisis, resulting in a new model of labor exploitation. The series also considers how to break out from these patterns.
In episode three, we visit Indonesian ride-hailing drivers who are learning to defy algorithmic control and fragmentation by building community strength. Part four concludes in Aotearoa, the Māori name for New Zealand, where an Indigenous couple is reclaiming control of their community's data in order to revitalize its language.
Together, the stories show how AI is impoverishing communities and countries that don't have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more: a way for the historically dispossessed to reclaim their culture, their voice, and their right to determine their own future.
That is, in the end, the aim of this series: to broaden our view of AI's impact on society so that we might begin to imagine how things could be different. Without an honest reckoning with the obstacles in the way, it's impossible to talk about "AI for everyone" (Google's rhetoric), "responsible AI" (Facebook's slogan), or "broadly distribut[ing]" its benefits (OpenAI's language).
Now a new generation of scholars is championing "decolonial AI," which would shift power back from the Global North to the Global South, and from Silicon Valley to the people. I hope this series can serve as a provocation about what "decolonial AI" might entail—and as an invitation, because there's so much more to explore.