Think. Resist. Act local: is a slow AI possible?
Jeremy Crampton, Professor of Urban Data Analysis at Newcastle University on the three principles that could underpin a more mindful approach to AI.
3 March 2020
Reading time: 7 minutes
In the popular imaginary, AI has become omnipresent, from Alexa smart devices to urban dashboards that monitor entire cities in real time. Scientific work on AI has grown sharply in the last 15 years. Scopus, the academic citation database, lists over half a million publications on AI or machine learning – 85% of them published since the turn of the century.
There is now an “AI arms race” between China and the USA. Xi Jinping, China’s president, announced in 2017 that his government would invest heavily in AI and Big Data and promised global dominance by 2030. Businesses fear being “left behind” in our new AI economy.
But is AI always warranted? In this piece I suggest we adopt a more mindful approach, which I call “Slow AI”, based on three fundamental principles: Think. Resist. Act Local.
We need to change the technological imaginary so that computer scientists cannot just respond that societal problems “are beyond the remit of their lab” or that regulations would “stifle innovation” (a catchall that masks just about any activity).
In terms of Slow AI, this includes accounting for the values, desires and intentions that frame technology – in sum, the imaginaries, even if they are fantasies (in the psychoanalytic sense). On this view, unjust tech is a symptom, not a cause, of injustice: we need to get at the unjust infrastructural systems producing AI. Such a research agenda will always involve cross-disciplinary and non-technical expertise.
Think!
Not everything is AI, nor is AI always needed.
Not every line of code or every algorithm is AI. Think about whether AI is the best solution. How can we tell? Perhaps we could audit proposed AI systems to see whether they are justified. If so, good intentions won’t be enough. “I’m right because I have good intentions” is just as unconvincing in AI as it is in an argument with a spouse or other loved one. It’s a hard lesson to learn. An obvious corollary is “good for whom?” Who does it benefit and who does it harm? Who profits and who is exploited?
Resist!
There is an avalanche of rhetoric around AI at the moment. We need to resist it.
Any counter-AI movement or critique needs its forces of resistance. So far, the most widely adopted approach has been “FAT”, or fairness, accountability and transparency.
But there are also more radical approaches. These are framed not around the outcomes of a particular tech (its fairness) but around the values that feed into and shape the tech in the first place. Drawing on a recent article by Anna Lauren Hoffmann (2019), one could say this is “where fairness fails.”
Hoffmann argues that antidiscriminatory (fairness) approaches are limited in three ways: they focus on bad actors; they are not intersectional enough; and they attend to too narrow a range of good outcomes. Instead, Hoffmann favours more structural approaches, looking at how conditions on the ground frame and produce technology itself. This “infrastructural turn”, as some have dubbed it (Strauss 2019), identifies a whole complex of social and technological conditions, such as precarity in the platform economy.
Perhaps the most exciting work on this right now is by Ruha Benjamin, who has explored how social norms and imaginaries, particularly racism, serve to force people to live within what she calls the “New Jim Code” of technology (Benjamin 2019). To understand what this means for AI, she says, we have to begin by confronting the fact that the basic conditions for producing technology are racist (and sexist), and not by accident. Benjamin identifies a deep desire to continue these hierarchies of control, and argues that race-neutral (“fair”) outcomes merely serve to reproduce inequalities. One way to confront structural racism is to develop what she calls an abolitionist imaginary to counter our carceral imaginary. This means, among other things, race-critical code studies that would seek solutions not through code, but through broader infrastructures.
One of the most encouraging exemplars of resistance in the spirit of Slow AI is the growing number of moratoriums on facial recognition. San Francisco, Oakland and Seattle have voted to ban government use of the technology, as has the Australian government. The EU is also considering whether to implement a facial recognition ban. Reasons vary from cost to privacy concerns, but a shared motivation is a refusal to install facial (or emotion) recognition simply because we can. And the Ada Lovelace Institute has now commissioned an independent review of the governance of biometric data.
Finally, there is now a small cottage industry of “techlash lit” critically exploring the tech industry. As but one example, I’d cite Rana Foroohar’s excellent book Don’t Be Evil, which mourns the erosion of the founding vision of big tech and its replacement by disruption and profit.
Act local!
AI is strange because it is conceived as a universal. But it is produced in centres of production (Silicon Valley, China, Oxbridge) and then distributed to the police, classrooms and the home as if it were universally applicable. A Slow AI, like slow food, would be produced from the available local resources and, even more importantly, with the full co-production of local people. A grand challenge facing AI researchers is how to ensure AI is place-based; to co-produce AI with local communities without exploiting them as data subjects. At present, too many tech projects construe participation as a form of data gathering after the fact.
Finally, we have the no-code/low-code movement(s). In principle, these allow makers to develop applications or websites without writing any code themselves, using minimal just-in-time tools. Perhaps my favourite site is PublicLab.org, which bills itself as an environmental science community. It got its start in 2010 following the BP oil spill in the Gulf of Mexico, using low-tech analogue tools such as helium balloons with cameras tied to them to collect “remotely sensed” data about the oil plume (their images are higher resolution than US spy satellite imagery). I’ve used the balloon-and-camera set-up several times in my classes to great effect. While you can buy a complete balloon mapping kit from PublicLab, you can make one at home with everyday materials, and there’s something about going old-school analogue that provides a great sense of tactile satisfaction!
Conclusion
There is a lovely pedagogical principle known as “troublesome knowledge” that applies here (Meyer and Land 2003). Troublesome knowledge is the idea that as we learn we encounter knowledge that is “difficult” for us in some way, but that, once encountered, cannot be unlearned – such as the idea that AI is not a universal social good.
The good news is that the tech industry is now highly aware of the techlash against it and, if for no other reason than the bottom line, has taken steps to become more sustainable (such as Microsoft’s recent announcement that it would be “carbon negative” by 2030). Regulators are newly emboldened. But the principles of Slow AI – Think. Resist. Act Local – are still needed to avoid fig-leaf solutions.
Jeremy Crampton is Associate Editor of Dialogues in Human Geography and Professor of Urban Data Analysis, School of Architecture, Planning and Landscape, Newcastle University.
References
Benjamin, R. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK: Polity Press.
Crawford, Kate, Roel Dobbe, Theodora Dryer, Genevieve Fried, Ben Green, Elizabeth Kaziunas, Amba Kak, Varoon Mathur, Erin McElroy, Andrea Nill Sánchez, Deborah Raji, Joy Lisi Rankin, Rashida Richardson, Jason Schultz, Sarah Myers West, and Meredith Whittaker. 2019. AI Now 2019 Report. New York: AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.html.
Dobbe, R. and Whittaker, M. 2019. AI and Climate Change: How they’re connected, and what we can do about it. AI Now Institute. https://medium.com/@ainowinstitute.
Hoffmann, A. L. 2019. Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society 22(7): 900–915.
hooks, b. 1994. Teaching to Transgress: Education as the Practice of Freedom. New York: Routledge.
Meyer, J. H. F. and Land, R. 2003. Threshold concepts and troublesome knowledge: Linkages to ways of thinking and practising within the disciplines. In Improving Student Learning – Ten Years On, 412–424.
Strauss, K. 2019. Labour geography III: Precarity, racial capitalisms and infrastructure. Progress in Human Geography. Online First.