Can Algorithms Make Art?: On Eduardo Navas’s “The Rise of Metacreativity”

By Francesco D’Isa | April 16, 2023

The Rise of Metacreativity by Eduardo Navas

THE RECENT RISE of machine learning technologies in the public arena—the so-called AI, a misleading term because these technologies are not truly intelligent—has split the public into enthusiasts and haters, following a dynamic that recalls what happened at the birth of photography, when intellectuals and artists of the stature of Charles Baudelaire, Jean-Auguste-Dominique Ingres, and Honoré Daumier came out against the new technology, while others such as Nadar and Eugène Delacroix welcomed it.

Will AI technologies really kill creativity, as some critics suggest? Certainly not metacreativity, at least according to Eduardo Navas in his new book, The Rise of Metacreativity: AI Aesthetics After Remix (2022). This must-read text for all those interested in the emerging relationship between AI and art has the merit of not considering the development of these technologies as a phenomenon that has suddenly turned the tables but instead placing it within a historical and theoretical framework rooted in our cultural, political, and economic past. The aim of the book, the author states, is to demonstrate and reflect on “how an advanced state of creativity has emerged and is connected to human history.” It is precisely this advanced state that he calls metacreativity.

As far as the visual arts are concerned, two questions have been uppermost in shaping the public debate. First, can machine learning technologies be used to make art? Second, does their power come from the theft of other artists’ work, which is often present in the training datasets? Both questions are misplaced, in my view, and it’s good to see how Navas answers them without overstating their importance.

Seeing a work of art as indissolubly linked with an individual creator’s technical skills is a worn-out myth. The further we develop, the more this link displays its conventionality. Were cave paintings the result of a craftsman’s unique abilities or were they part of a communal ritual? What about Eastern art, where imitation of past masters was often essential in preserving the continuity of a style? And what about readymades, collage, pop art, action painting, net.art, conceptual and collective art—all practices that have in different ways unhinged the work of art from the artist’s labor, from the technique and even the intentionality of the person who creates it? It seems that the only criterion that can hold up is that of Dino Formaggio, who wrote in 1973 that “Art is everything that people call art.”

Navas reminds us how art has long been disengaged from the creator’s labor, often tied to chance, modularity, and remix. We can find the split between art and manual labor already in the artists’ workshops of the Renaissance and Baroque periods, where apprentices worked under the guidance of a master who would then sign the finished pieces themselves, a method later used in Andy Warhol’s Factory. The most blatant examples are undoubtedly Duchamp’s readymades: the method proved quite successful among later generations of artists, including not only neo-Dadaists such as Robert Rauschenberg and Jasper Johns but also the pop artists and conceptual artists of the 1970s.

Navas defines chance as a “meta-algorithm” that can be employed for specific actions, a technique used by Dadaists, surrealists, and futurists during the first half of the 20th century, and later by multidisciplinary artists such as John Cage and Nam June Paik. The creation of art objects made of modular parts was also explored creatively via collage art, wherein pieces of preexisting material were treated as modules that could be recombined to create new compositions. Closely approximating such “prompting” in text-to-image software are the drawing installations by Sol LeWitt, executed by gallery or museum staff following his written instructions—algorithms, in essence. For example, the instructions for Wall Drawing #715 (February 1993) read: “On a black wall, pencil scribbles to maximum density”—an analog prompt if there ever was one.

One has to wait until the birth of machine learning, and the work of more recent artists such as the arts collective Obvious, to see the machine take on almost all the work, thus preparing for the advent of metacreativity—which, according to the author’s definition, is “a cultural variable that emerges when the creative process moves beyond human production to include non-human systems.” According to Navas, this is a gradual process, a further step beyond postmodernism:

[W]e can take the postmodern and conceptual tendencies to appropriate and to delegate the process of producing the work to enter yet another order of meta, in which the artist no longer produces the algorithm, itself, but rather writes a computer program that will take on a creative process according to parameters written by the artist.


Navas doesn’t extensively discuss the data harvesting required to create machine learning software, since he sees these technologies as grounded in practices (remix, compression, modularity, etc.) that have been part of our cultural, political, and economic worlds for centuries. In order to deal with them, to understand the risks and opportunities that come with them, it is imperative that we grasp them properly.

But how do they work? These technologies are essentially algorithms trained with billions of bits of data. Even if we ignore the human input needed to make such software work, the influence of each object in the dataset on the outputs would be completely insignificant compared to any artistic or creative work made by humans. The situation is different if you use machine learning for plagiarizing (e.g., “draw Mickey Mouse”), but in that case the responsibility lies with the user. These systems can only be developed with enormous masses of data and the copyrighted material they incorporate. Once the algorithm is created, these data are no longer in use. Those who create them claim that this process is protected by fair use laws, and in fact, the contribution of individuals is negligible. But there are companies that use such algorithms to create proprietary software, and that might seem unethical. The right path is certainly hard to find. That said, I believe these companies should produce open-source software and public domain outputs. The dataset is a common asset, and so should be the results (this is actually how Stable Diffusion works). But the debate is still open.

Central among Navas’s many insightful concepts is that of “remix,” borrowed from the sphere of music (a topic to which he dedicates a splendid chapter) and applied to the realm of images and texts. A quote attributed to musician Shock G expresses the connection well:

Perhaps it’s a little easier to take a piece of music than it is to learn to play a guitar. True. Just like it’s a little bit easier to snap a picture than it is to actually paint a picture. What the photographer is to the painter is what the modern producer, DJ, and computer musician is to the instrumentalist.


Although what machine learning does is not an actual remix, because it never takes up elements identical to the material in the dataset from which it is built, the comparison offers a useful foothold. Navas never mentions software such as Midjourney, DALL·E 2, or Stable Diffusion, perhaps because they emerged after he had completed his book. Yet, through remix, it is easy to understand the risks and possibilities of such software.

One major risk of machine learning is the creation of echo chambers: by relying completely on algorithms to select the content we experience, we wind up living exclusively within the biases these systems have been trained on. Real autonomy belongs to “artificial general intelligence” (AGI), “an algorithm that would be able to assign itself tasks rather than functioning within the limits of an action for self-training written by a programmer.” These algorithms are still science fiction. Were they to appear in the future, though, their modes of creativity—truly nonhuman at that point—might be inaccessible to us. To the question of whether an AGI will ever produce works of art comes the answer that, if it ever does, only similar forms of software will be able to appreciate them.

Navas’s book is undoubtedly an important step forward in the AI debate, and the notion of metacreativity a valuable tool. It underscores a shift that seems to be more quantitative than qualitative in nature. After all, even oil painting, like photography and computer graphics, incorporates a dense network of theoretical and technological knowledge, stylistic choices, limits, and potentialities that all derive from the practices of many people over the years, centuries, and millennia.

A tool is not an inert object: it builds on the legacy of those who have used, developed, and modified it before us. In a sense, every tool has a will of its own with which we need to come to terms, for it internalizes and bequeaths ancient knowledge that becomes explicit only through use.

Perhaps, to paraphrase Bruno Latour, we have always been metacreative.

¤


Francesco D’Isa is a philosopher and artist from Florence, Italy. He writes and draws for various magazines and is the editorial director of the Italian magazine L’Indiscreto.
