From Impoverished Intelligence to Abundant Intelligences

Jason Edward Lewis
May 28, 2021


The artificial intelligence (AI) industry-academic complex does not have an ethics problem. It has an epistemology problem. The persistent failures with computationally-enabled and -amplified bias are symptoms of this deeper issue.

Image: QUARTET. Concept by author. Illustration by Kari Noe. Courtesy of the Indigenous Protocol and Artificial Intelligence Workshops

The epistemology problem stems from a series of assumptions built into how we design and deploy our computational systems. These include: that the user is an individual; that the individual prioritizes her personal well-being; that culture is an epiphenomenon rather than the phenomenon; that text and context can be separated; and that the only useful knowledge is that produced through rational instrumentality.

This makes AI system engineers blind to vital aspects of human existence — such as trust, care, and community — that are fundamental to how intelligence actually operates. The refusal to engage with, explore, and operationalize knowledge frameworks that centralize these aspects is a tremendous scientific failing. It creates huge gaps between what humans think of as an intelligent presence in the world and what the AI industry-academic complex is building.

This blindness stems from an intellectual lineage heavily infected with Cartesian dualism, monotheistic eschatology, and computational reductionism. Separating mind from body, elevating after-life over present-life, and violently forcing all experience into binary terms to make it (appear to be) computable produces impoverished notions of what constitutes human intelligence.

A 2020 special issue of the Journal of Artificial General Intelligence called “On Defining Artificial Intelligence” exemplifies this approach. Most of the contributions assume or argue that intelligence is the thing that happens when we engage with the world rationally. But we know that much of our intelligent engagement with the world is a-rational, pre-rational, or straight-up irrational — we’re notoriously poor at cost/benefit analyses, we constantly make all kinds of bad assumptions, and much of the time we don’t even really know what we are doing as our autonomic nervous system drives us. Yet we still consider ourselves intelligent.

How can we be building “intelligent” systems that cannot account for these non-rational ways of being in the world? AI system designers are building worlds for us to inhabit. In the same way that architects design the physical spaces in which we live, they are designing the virtual systems in which we think. These systems are increasingly tasked with interactions that have deep social, political, and legal consequences, yet the field continues to fetishize rational goal-seeking as the definition of intelligence. Re-read now, those articles seem like a series of exercises in wish-fulfilment for the socially awkward, i.e., what a bunch of super-nerds wish intelligence was. They wish intelligence was rational; they wish it was subject to inspection; they wish it was replicable in computational substrates.

Do we really want to live in computational worlds incompetent with regards to emotion, sociability and embodiment? As AI pioneer Roger Schank writes in a beautifully cranky response to the special issue:

“Today we have AI that is not about people, and this kind of AI has taken over the field. AI is now just about counting…[that] is not intelligence and it would be good if we would stop calling it AI.”

White Supremacy — It’s Not Just for People Anymore
The same genealogy informs the white supremacy in our AI systems that scholars and computer scientists Safiya Noble, Ruha Benjamin, Timnit Gebru, Joy Buolamwini and others have identified. The bias in these systems is not a bug but rather a feature of an interlocking set of knowledge systems designed over centuries to benefit white men first and foremost.

We need to be clear about this. This is one reason why, despite being founders and primary actors in the Ethical AI group at Google, Gebru and her colleague Margaret Mitchell no longer work there. An ethical approach did nothing to alter the fundamental knowledge framework within which Google, and other high-tech companies, operate. Within a framework that prioritizes knowledge that elevates individual profit over communal well-being, Google is acting ethically.

Machine Learning: Poisoned at the Root
What does this mean for the goose presently laying all the golden eggs — machine learning, deep learning, and various AI techniques that rely on vast amounts of data to work? It means they are fundamentally ethically compromised, poisoned at the root. Little of the data collected by ‘scraping’ the web or other large-scale aggregation techniques is collected in a truly ethical manner. The people who produced that data were not asked whether it could be used this way, they were not compensated for this use, and the use most likely does not benefit them directly.

Indigenous communities have long histories with people like this — scientists who behave as if our data is a publicly available resource that they are free to plunder in the name of the ‘greater good’ or ‘increasing humanity’s store of knowledge’. We see people like that coming a mile away, and remember how the greater good has often meant our detriment and humanity has often not been extended to us. We recognize them for what they are: colonizers.

AI System Builders: Third-rate Engineers?
We have codes that specify the standards of materials that go into building our bridges — appropriate aggregate for the concrete, high-grade steel, etc. Civil engineers are educated and professionalized to design and build in a way that respects those expectations. They are also expected to be able to fully explain and justify their solutions.

Why, then, do we allow computer scientists to use garbage ingredients to build systems that affect millions of people; to use algorithms that disadvantage those who are not white and male; and to deploy black-box systems whose workings are obscure to our understanding? This is just bad engineering.

AI system builders cannot understand how their systems actually perform if they do not understand the cultures in which they operate. As scientists they should be ashamed at how narrow-minded they are. As engineers, they should be embarrassed at how poorly their technology fits their use cases.

From Impoverished Intelligence to Abundant Intelligences
We must think more expansively about these AI systems. We need to expand the operational definition of intelligence used when building these systems to include the full spectrum of behavior we humans use to make sense of the world. Or — following Schank and others such as Kate Crawford — dispense with the term ‘intelligence’ altogether.

This will require exploring epistemologies different from the Western knowledge frameworks that are used when building such systems, epistemologies that, for instance, can encompass Indigenous understandings of the relationships between ourselves and our non-human kin. What would it mean, for example, if we taught AI systems to view the world as one of abundance rather than of scarcity?

It will also require building AI systems that are designed in tight collaboration with the communities that need them, that are deployed only where they are of benefit to those communities, and that are capable of evolving along with the needs of those communities.

If we are to reach towards the future, we have to do it while enabling the fullness of human life and thought. Rather than build AI systems from impoverished knowledge frameworks, let’s build them from knowledge frameworks that recognize the abundant multiplicity of ways of being intelligent in the world.


Written by Jason Edward Lewis

Digital poet, interactive media artist, and software designer. He co-directs the Aboriginal Territories in Cyberspace (AbTeC) research network.
