The ‘Art’ in ARTIFICIAL INTELLIGENCE
An Interview with Nina Begus
With AI poetry on the rise, we’re faced with ethical and artistic questions. Nina Begus, a researcher at UC Berkeley who holds a PhD from Harvard, says AI might be more creative than we think: “We implicitly consider the machine as by default not creative. However, our machines are not industrial machines anymore…Our idea of what the machine is and could be has to change and same goes for art made in collaboration with machines.” In exploring the roles of creator versus creation in machine-generated language, this interview also probes the deeper definition of art and poetry themselves.
As a baseline for readers, can you explain how poetry—sometimes even good poetry—can be created through Artificial Intelligence?
There are several types of AI poetry, depending on how the language is generated. In the 2010s, for example, generators imitated a particular poet’s style or a lyrical form. Here I will mostly talk about large language models, also known by the more recent and broader term ‘foundation models,’ coined at Stanford. LLMs have emerged since 2018 and are built on a neural architecture called the transformer. This architecture generates text by learning to predict sequences of words, and attention directed back and forth across the string of words is the key to its success. AI language won’t remain solely in this architectural space, but for now this is what we have, and it has already been productive and exhilarating enough.
In a nutshell, AI poetry arises from text itself. It’s language done differently – a completely nonhuman way of writing. Granted, the spectrum of quality is broad, and many people discredit AI creativity because there is no mind behind it. Does poetry need consciousness or a poetic subject? Does poetry necessarily arise from experience? AI poetry has none of this, and it may still hold artistic quality.
I’m not that interested in whether AI writes well by human standards. I’m interested in AI writing that departs from these criteria into a completely new creative and epistemic space. This is where AI adds unique nonhuman value, opening possibilities that would otherwise not exist. True cognitive novelty lies here, not in the familiar and predictable imitation, automation, and acceleration that seem to be the default ways of imagining how AI can enrich anything. It is a missed opportunity to reduce AI to a human on steroids.
How does one account for the unquantifiable greatness essential to poetry, literature, rhetoric, art, music, games, sports? This is where the quality lies, and I don’t consider it inaccessible to AI. In a game of Go, AlphaGo made a move unprecedented in the human history of the game. If this can be done in the limited space of a four-millennia-old game, then AI language is guaranteed to surprise and innovate.
The same people who accept avant-garde movements like Oulipo as art might not be ready to call AI-generated poetry the same. What’s your take on artificial poetry as a new art form? Going deeper: how do you define art?
In my view, novel technologies have challenged the human-centered way of conceptualizing the world. Many people who oppose calling anything machine-created art do so because they cling to a modern ontology in which only humans make art.
And who could blame them? For almost a millennium, art has been related to human skill, in opposition to nature. What is considered art changes not only with time but also with locale: in my home country, Slovenia, the domain of art is much narrower than in the U.S., where I live. In the Western world, technology and art both fall under the domain of the ‘artificial,’ a word related to art through the Latin ars – as is ‘artifact’ – and both were long considered entirely human domains. What many people dislike about AI is akin to another art-related word, ‘artifice’: AI can seem deceptive and outright fake when, say, making art.
Language is in a space of tension today, more so than it was in the times of Oulipo. Oulipo writers and mathematicians sought out mathematical structures to extend literary potential. Large language models could be linked to this practice – the mathematics beneath the architecture that produces the text serves as the conditional space for literature – but here the literature is written by a machine responding to a writer’s prompts. Now the human author is twice removed: through mathematically produced language and through machinic composition of the text. The author is really not just a human writer anymore, as it was with Oulipo experimentation.
As Donna Haraway pointed out as early as the 1980s, old dichotomies are troubled nowadays: who is the creator and who the creation when AI writes literature? We implicitly consider the machine as by default not creative. However, our machines are not industrial machines anymore: they are not fully known to engineers, they don’t simply respond to a button or a lever, they are not a simple extension of the human. Our idea of what the machine is and could be has to change and same goes for art made in collaboration with machines. Are LLMs a tool or a creator? The line is rather blurry, and the sweet spot is right in between.
As AI’s status as creation and creator is blurring, what do you think it can teach us about language and its mathematical properties?
Large language models – and other deep learning architectures that work well on language, such as GANs – don’t have all the aspects of language that human language has. And yet the models are linguistically intriguing enough to add value, both in research and as products. Their language is really something else: it’s fully produced by mathematical vectors, abstract and dehumanized. In the transformer architecture, used for all LLMs so far, hundreds of billions of parameters, trained on huge amounts of text, are used to generate humanlike text from a probability distribution. It’s structuralism par excellence. Stepping out of text, we can use these models for pixels, proteins, and other kinds of data.
Not just skeptics but even AI enthusiasts were surprised by LLMs. Rather unexpectedly, the creative domains were among the first that AI tackled. The fascinating turn of events in written language was that a neural-network approach, in which machines learn from patterns in data, proved better than the previously prevailing symbolic AI. Symbolism seemed to correspond better to the rules of language because it is based on logic and rule-governed order. The biggest surprise of all was that language can be autonomous: there is enough information in language about language.
Extending Donna Haraway’s idea that the cyborg is a feminist frontier beyond traditional gender norms, do you feel that AI-generated writing has the potential to move society away from sexist and racist discourse?
Yes, Haraway ascribed technology a transgressive force and called writing the technology of cyborgs. She begins her manifesto by showing that cyborgs are not defined or limited by social factors. The union of cyborgs and humans would positively affect our social order, but, as she underlines, the alliance with the cyborg and the revolt against it always coexist. The dream of a common language for women is ironic.
Given the way AI is currently built, I don’t think purifying AI language could be technologized without a colossal and intentional effort, let alone that such an achievement could be extended to societal norms. Technologists are trying hard to attune machines to our cultural norms, of course, with value alignment being just the most recent approach. I guess it says something about us that we’ve created AI this way.
The reason I started writing a book on AI and language is because I wondered why we make this technology in the human image, where, for example, the voice of the first virtual assistants was by default female, as was her personality. Why should AI be a persona, masking itself as something that it is not? The good news is AI is far from fully formed and having more humanities people at the table where these decisions are made is crucial for its success. AI ethics should grow from there.
The ethics discourse has been limited to a handful of the biggest discriminatory problems in AI outputs, but the ethical considerations with these models are much broader. We’re dealing with huge amounts of data and with machine learning that, in their essence, defy control: they are all about uncertainty, probability, and navigating correlations and patterns. The world they produce is never finished and closed. This has also become evident in LLMs, which used to be trained on an old, fixed, internet-based corpus and couldn’t learn after training – but with metalearning this is no longer the case.
The problem with LLMs is even more basic: not only do we not know their ramifications, we don’t really know their applications, which makes them difficult to regulate. Language is loaded with notions of truth, provenance, freedom, trust, community, security. How do we commensurate human morals with nonhuman agency? These are problems that can only be tackled in collaboration between the humanities and engineering, the conceptual and the technical.
It would be easy to say that what separates AI-generated literature from human work is the lived human experience—yet fiction also depends on our ability to imagine what we haven’t experienced. For you, where does the distinction between the two forms lie?
This is a great question, to which I suspect we would get an array of polarized responses. I stand firmly on the side that quality writing can emerge from sources other than mere experience: from empathy, from feeling and vibe, from a model of the world. With AI, however, this stance gets really radical – too radical for some. AI has no sentience, no existence, no subjectivity or inner world, all of which seem constitutive of fictional writing. For me, fiction has an opportunity to be enriched rather than impoverished by AI, as long as we don’t treat AI writing on human terms.
Creativity has a certain signature. Will it become more unrecognizable? The writing process with AI has already shifted into a dialogue. A new kind of literary form or language might arise purely from this original, synthetic way of doing language. How long until digital poetry appears on the bestseller lists? We need to refurbish our criteria for this new, strange frontier of writing.
We need to explore this zone of writing where humans and machines collaborate. As interlocutors, LLMs analyze our inputs and can bring to light combinations and ideas we could otherwise never discover. This is akin to Oulipo’s quest for potential structures, but here the inspiration and creative interplay actually go both ways.
Borges’ work famously explores the enrichment versus impoverishment of literature through reproduction. While we all want to think our writing is important and unique, do you think AI might change our understanding of a text’s value?
I’m amused by the Slovenian writer Andrej Tomažin’s short story, in which a literary prize committee discusses how they might end up awarding a computer. Referring to Roland Barthes’s The Death of the Author, these scholars conclude that the author is irrelevant and the text is dead: it’s all algorithmic combinatorics, a Borgesian paradise, really.
Borges’s stories—‘The Book of Sand,’ ‘The Library of Babel,’ ‘The Garden of Forking Paths’—are thought experiments that translate to AI. How do we think about the multiverse, where AI can help us explore the options by strategizing far ahead, as in Go? Instead of having to choose one plot for a story, Borges offers a myriad of storylines, tracing the infinite monkey theorem. Even if we think of the theoretical monkey—representing AI in my analogy—as typing blindly and randomly, we would probably agree that the monkey is not a poet, and yet it is intelligent and capable in its own way.
Writers have long speculated about what would happen if machines took over their profession, and the common presumption is that machines would usurp the market through automation, as in Roald Dahl’s ‘The Great Automatic Grammatizator’. That writing integrity will be devalued and that human creativity will have nothing valuable left to provide is just a dystopian fear, fed to the public largely by fiction itself. Surely there will be advantages and drawbacks to the practice of AI writing, as with every technology. Considering how much writing technologies have changed over just the last two centuries, we can count on our grandchildren’s amazement that we used to write everything from scratch.
The generated text does not have to be the final outcome: the process of writing with AI can be the product itself. LLMs and their image-generating spin-offs are not truly collaborative and relational yet. Writers and artists are absolutely needed to get us there. If you take a look at Kenric Allado-McDowell’s novel co-written with GPT-3, or at Sheila Heti’s recent conversations with the chatbot Eliza, you’ll see that a layperson can’t quite elicit such marvelous dialogue with the machine as these writers can. The same goes for visual models under Ian Cheng’s masterful guidance. And transformers are only the beginning. There is much more left to build and explore, because even though these models are hyper-structuralist, they offer different intelligences, skills, capabilities, discoveries, experiences.