On Other Intelligences: A Conversation with Mashinka Firunts Hakopian

By Nora N. Khan
June 19, 2023



IN APRIL, Mashinka Firunts Hakopian met me at my home in Los Angeles, where we discussed her remarkable new book, The Institute for Other Intelligences (2022). In The Institute, a host of algorithmic agents (those we have known, now live with, and will come to know) participate in an academic symposium convened in an undefined timeline. Over the sessions, they collectively analyze their past. The agents are from many eras of artificial intelligence: from early chatbots to the future children of current machine learning systems, from critical artists’ neural networks to a host of speculative nonhuman intelligences. They reflect on their histories and design, and toast to moments of hopeful resistance, critique, and organizing, in which they evolved or were recalibrated.

Reading The Institute, I was struck by just how many scales of time a book can hold: the conference’s future time, digital time, algorithmic time. I am pitched forward and back across time in much the same way algorithmic and predictive systems train us to think forward to an uncertain horizon. The algorithms of today often freeze us as “marks” of our past, to hold us captive and bound into the future. Hakopian uses complex time travel to map the many possible outcomes of technological decisions wrought now, today. We are asked: How do algorithms’ ways of knowing shape our own? What possibilities in our lives become curtailed or expanded by thinking through the decisions of evolving algorithmic agents? How might algorithmic agents and humans negotiate between all of their fully inhabited, embodied worldviews, and move toward co-engineering radical outcomes?

¤


NORA KHAN: What kind of technological future is unfolding in The Institute for Other Intelligences?

MASHINKA FIRUNTS HAKOPIAN: I was deliberately coy in sketching out the contours of what this future might look like. The convening in the book unfolds in an indeterminate timeline to underline how unthinkable our present might look from its vantage. How far into the future would we have to project ourselves in order for our present, in its palpable bleakness, to become unthinkable?

Fictive algorithmic agents, interviewed in the book, speak on their past purposes, which seem (gloriously) quaint in this future. They exercise a collective memorializing of our present’s algorithms, and of the ones most likely coming. This insistence on multiplicity struck me as a strategic refusal of a future made through one vision alone. You ask us to imagine a critical culture that would make the Institute’s gathering possible.

Sidestepping any one particular vision, I wanted to invoke what decolonial studies frames as “pluriversal epistemologies of the future.” In the performative lectures that accompany the book, there’s a slide that visualizes the Institute as a planetary network of sites set in pastoral environs, in buildings made of bioorganic matter, adorned with SWANA geometric patterns and carpet motifs. The book’s fictive director mentions that this Institute is one among a large network of related agencies, which includes institutes for plant intelligences, stones, and other nonhuman beings.

Exciting. On pluriversal epistemologies: There is a gorgeous polyphony of voices in the book. The Institute’s director describes “rooms reverberating with the cacophonous data of a thousand oppositional automata.” A noisy, unremitting choir of killjoys. This book feels warm with this chorus of agents reflecting on past critiques, naming unconsidered impacts of automated systems. They’re still happy, as you write, to recork the champagne, to interrupt dominant narratives. Given this moment’s pervasive, sinking feeling that a monolithic voice, company, and ideology is programming a future that can’t be turned around, it seems that a collective would have to march against those voices broadcasting from space shuttles, describing what the future should look like for all below.

Heaven forfend.

[Laughs.] But we talk about this often! Especially as two critics weaving in and out of tech spaces. The response to very mild critique is often that criticism kills innovation. The critic is against progress. Wanting an “AI Ethics” that considers climate change, or global social inequalities, not one neatly repackaged for capital gain, is a killjoy impulse. How do you think of the choir of critical killjoys confronting the impossible scale of dominant AI? Can that chorus of critical AI voices scale?

At the time that I began writing this, roughly four years ago, there were so many feminist and queer thinkers putting forth urgent critiques of emerging technologies, and their voices weren’t accorded the amplification they deserved. It was in response to this that I wanted to imagine what a roomful of automated agents (artificial killjoys) who had been trained on those critiques would look like. As to portending the death of innovation and of progress: both of those terms arrive to us by traveling colonial routes. So, if our critiques result in their death … then we can pop the champagne. Sara Ahmed writes that there’s joy in killing joy, and I’ve very much found that to be the case.

Killjoys celebrate killing a false joy: the belief in progress based on a rotten foundation. No harm, no—what is it?

No harm, no champagne?

Let’s talk about the different thought exercises in the book. There’s an elegant logic tree and structure to each. They map the logics of fintech or of algorithm-driven loan assessments, or the ways “neutrality” and “intelligence” are constructed. In the third exercise, you look at the ways algorithms help companies determine who is employable. What was assumed to be known about the applicant? And how did they know what they knew? The exercises deconstruct the presumptive conditions of knowing that are then codified. We glimpse the assumptions upon which technological decisions are founded.

As you coax the reader to be sensitive to what isn’t included in those conditions of knowing, by which standards might they learn what to include? How do “we” agree on what to include in technological knowing?

That is the question. The book’s fictive algorithmic agents argue for the necessity of shifting from a model of AI as a “view from nowhere” to a vision of knowledge grounded in the body and in embodied experience. So, one way to look to what isn’t there, to look to data that have been omitted or occluded from visibility, is to look to data embedded in the body, the data of lived experience. In terms of determining who the “we” is, I would cede my response to thinkers like Sasha Costanza-Chock. In Design Justice: Community-Led Practices to Build the Worlds We Need (2020), they write that in any given instance of data collection, labeling, classification, or training, those determining what ought to be there and what ought not to be there should be those most directly impacted by how that data will be used.

“Many among the readers of this data packet won’t remember why they’re reading it,” you write in the book’s endnote, “A Note on the Neural Architecture of Memory.” “Taking into account the possibility of intergenerational forgetting, this document functions as part of an ongoing initiative in distributed memory work, preserving collective memory through data storage.”

There is a mournful, elegiac quality to the Institute’s gathering. I don’t think I had ever considered the melancholy of algorithmic agents. They evolve over time alongside us; they haunt us and are haunted by us too, as we together hurtle towards a future actively produced by systems replicating the past, trained on past data. I’m moved by these agents, taking up distributed memory work of “vintage algorithms.” How do you theorize and account for the evolution and generational growth of algorithms in your work, particularly the human’s relationship to many different algorithmic eras?

If we look to vintage chatbots like ELIZA, or more recently, the Professor chatbot, we can broadly trace a history of algorithms and bots as agents who are imbued with a quasi-mystical omniscience, but who are actually dizzyingly limited in what they can know. What they teach us, I think, is the danger and seductiveness of ascribing omniscience to algorithmic agents. But if we expand the frame of what constitutes an algorithm or computation, we can trace histories of computation to non-Western pasts, and surface the omissions in prevailing histories of technology and media. My current book project is on ancestral intelligences and how they inflect engagements with emerging technologies. How would ancestral intelligence (contra artificial intelligence) enable us to see quasi-algorithmic, or predictive, or divinatory logic operating in a host of practices that precede the development of computational systems in the context of the US military-industrial complex?

Do you think of humans evolving in relation to these agents’ evolutions, as they grow and twist deep through our social and psychological lives? How do their ways of knowing shape our own ways of knowing? What possibilities in our lives become curtailed or expanded by thinking through the decisions of evolving algorithmic agents?

It depends emphatically on the infrastructures through which those agents are built and through which they circulate. If we’re talking about agents that are designed and developed within commercial or state structures—aimed toward surveillance, management of populations, and so forth—then our relations with them are most likely designed to be relations of asymmetrical power. The spaces of knowing that I most gravitate toward are the speculative spaces proposed by artists. Take, for instance, Stephanie Dinkins’s work Not the Only One (N’TOO), an AI model trained on the small data of oral histories from several generations of women in Dinkins’s family. It’s endlessly compelling to imagine what it would look like for a descendant in the artist’s family to engage with N’TOO 50 years into the future, to imagine an experience of ancestral linkage that’s mediated through an algorithmic agent.

Your book firmly centers the protection of collective memory of technological resistance, especially given the ouroboros of critiques about AI that repeat with each new product release. Tell us a bit about what this memory of technological resistance can mean.

I wanted to leave space open for the possibility of a scenario where AI systems might function as movement archives—specifically, as repositories of struggles that can feel resolutely unwinnable until they’re situated within a longer arc and a larger dataset.

Turning to another passage here: “We’re here to trace a set of historical shifts in the way we see ourselves from 1) neutral intelligent systems that execute instructions; to 2) biased actors; to 3) co-engineers of radical outcomes.” While the book traces the historical shifts, you leave open the question of how we might move from understanding ourselves as biased actors to “co-engineers of radical outcomes.” And you’re not proposing that there’s some future elimination of bias. Instead, we might learn to negotiate between all of our fully inhabited, embodied worldviews. Is it that, in holding an awareness of how we enact bias, we then move to co-engineering radical outcomes?

First, a prefatory note. In the four years since I started writing this, a profusion of justified critiques have warned against an overinvestment in the question of bias, which often obfuscates other important questions. An overinvestment in the question of bias might, for example, lead us to think about how a system functions improperly, instead of thinking about the broader sociotechnical structures in which that system is embedded.

The answer is not to just fix the system.

Exactly. One point I belabored throughout the book was that I’m not arguing for—in fact, I’m ardently arguing against—techno-solutionism, or the idea that sociotechnical problems can be addressed through technical fixes. (I’m taking my language here from Ruha Benjamin.) From my vantage, the shift from biased actor to co-engineer of radical outcomes would involve the de-emphasis of an anthropocentric view, structural shifts in processes of data collection, labor practices, undoing assumptions about data as value-neutral, and then transformations that have little or nothing to do with data whatsoever.

The last four years have also brought critiques of AI explainability. Sera Schwarz argues this well: even if you handed me a stack of 200 pages on how an important algorithm works, and even if the system is thereby explainable, that explanation isn’t necessarily useful to me.

In the book’s section “The Faces of Tomorrow, Today,” we dive into the construction of “neutral” and “intelligent” systems. You give a diagram of a facial reading “scanning” exercise in which a downward turn of the mouth does not reveal shifty intent. Instead, the person might be dreaming of collective liberation. Elsewhere, bots create an acoustic model trained on a full array of regional dialects and nonnative speakers to make the model fuller and more dimensional. How does one fill the dataset and render it more dimensional, without capture?

A fantastic question, and put with trademark Nora Khan poetics, which I would like noted for the record. The answer depends on the technology and context toward which the dataset is going to be deployed. With many extant technologies, the only viable position is abolition.

Returning to the intoxicating pull of neutrality: Wendy Chun theorized this perfectly, the way this fantasy of objectivity, of neutrality, serves a powerful ideological purpose. Daily we confront systems that make our lives easier and worse at once, designed to make us believe in their neutrality. You fully explore the belief in a neutral space as an ideological tool of social control. It’s hard to name; it’s hard to see; it’s hard to pin down. That said, do you find any value in the idea of neutrality?

To answer that, we would first have to assume that neutrality is possible. Neutrality, as I understand it, is a condition that enables decision-making from a position of no positionality. Put otherwise, an impossible position. Neutrality is premised on knowledge that exists outside of a knowing body.

If we look to the lineages of feminist technoscience and the recent work of Catherine D’Ignazio and Lauren Klein—who argue for the elevation of emotion and embodiment—we can understand lived experience as valuable data. One of The Institute’s primary claims (posed as an interrogatory statement) is, “How do we deploy what we know if not from within a body?”

You close this brilliant book with “A Note on ‘The Human.’” Who is considered human or less than human—who is worthy of care, rights, legal protection, love—is negotiated each day through law and policy. You end with an insistence that the definition of human has long been an unstable one.

What would a disobedient technology, based on a constantly shifting understanding of a human being, changing from moment to moment, not being fixed by the past, look like? How can changing epistemologies of the human inform technological design? And what would it mean for technology to shift and move with the way definitions of human shift and move?

The book argues first and foremost for the valorization of undervalued knowledges. Specifically, undervalued knowledges associated with particular subject positions. The feminist killjoy, for example, is articulated by Sara Ahmed as a resistant voice that issues from a minoritized subject, one whose knowledge is dismissed because of the subject position from which it emanates, and because that knowledge requires a reevaluation of the social conditions of a given moment. By ending on “A Note on ‘The Human,’” I was coding the knowledge of the bot as an avatar for other knowledges and other subjects who have been omitted from historical constructions of the human.

¤


Mashinka Firunts Hakopian is an Armenian writer, artist, and researcher born in Yerevan and residing in Glendale, California. She is an associate professor in technology and social justice at ArtCenter College of Design, and was a 2021 visiting Mellon Professor of the Practice at Occidental College, where she co-curated the exhibition Encoding Futures: Critical Imaginaries of AI with Meldia Yesayan at Oxy Arts.

Nora N. Khan is a curator, editor, and writer of criticism on digital visual culture, the politics of software, and philosophy of emerging technology, as well as the executive director of Project X Foundation for Art & Criticism, publishing X-TRA Contemporary Art Journal in Los Angeles. She is the curator for the next Biennale de l’Image en Mouvement in 2024, with Andrea Bellini, hosted by Centre d’Art Contemporain Genève.
