Lillian-Yvonne Bertram on ‘Writing with Machines’

September 08, 2025

Illustration of Lillian-Yvonne Bertram

The poet and professor discusses computational poetics, the history of machine-made texts and their vision for UMD’s new Studio for Literary Technology.

By Jessica Weiss ’05

In their poetry courses at the University of Maryland, Lillian-Yvonne Bertram asks students to treat artificial intelligence not as a shortcut, but as a medium to interrogate critically and creatively: what happens when we write and read with machines?

A poet and associate professor of English who directs the MFA program in creative writing, Bertram specializes in computational poetics, a field that explores how machines can be collaborators in literature. Through assignments that range from manually “hard coding” poems—writing small programs line by line to generate text—to experimenting with large language models (LLMs) like ChatGPT, Bertram encourages students to see computer-generated text as part of a long lineage of remixing language and reimagining its limits.
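
Those “hard coding” assignments can be as small as a template plus a few word lists. Below is a minimal sketch in Python, with invented word lists rather than material from Bertram’s actual courses:

    import random

    # Illustrative word lists; in practice the lists themselves are a poetic
    # choice and could be drawn from any source text.
    SUBJECTS = ["the river", "a window", "the archive", "my shadow"]
    VERBS = ["remembers", "unmakes", "repeats", "refuses"]
    OBJECTS = ["the light", "its name", "every word", "the machine"]

    def poem_line():
        # One fixed grammatical template: subject + verb + object.
        return f"{random.choice(SUBJECTS)} {random.choice(VERBS)} {random.choice(OBJECTS)}"

    # Print a four-line poem.
    print("\n".join(poem_line() for _ in range(4)))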

Bertram’s creative research often asks what large language models “know” about Blackness, anti-Blackness and oppression, and how those histories surface in machine-generated text. The author of six books of poetry, Bertram was longlisted for the National Book Award for their 2019 collection “Travesty Generator.”

This fall, Bertram is launching the Studio for Literary Technology, an entity within the English department that will host events and workshops on AI and other literary technologies. Its first event, on October 9, features poet and digital artist Sasha Stiles, known internationally for blending human and machine voices in poetry and for treating LLMs as collaborative writing partners.

We spoke with Bertram about their journey into computational poetry, the long history of computer-generated text and what they hope the new studio will bring to campus.

What first drew you to computational poetry?

It grew out of my longstanding interest in innovative and experimental poetry. I’ve always been interested in treating language as material—breaking it apart, reassembling it, approaching it mathematically. I don’t have a background in math or computer science. In fact, the one math course I took in college was designed for English majors, and I still did poorly! But while I was an undergraduate at Carnegie Mellon, I worked in the Human-Computer Interaction Institute, and I became interested in the ways language could be constructed or modified according to patterns and algorithms.

William Carlos Williams, the modernist poet, famously said that “a poem is a small (or large) machine made of words.” And so I took that to heart—the idea that if a poem had mechanics, then it had materiality. It had a material that was worked with, and that material was language.

I didn’t know anything about coding—and there were lots of tears!—but I learned enough to accomplish what I wanted. Around 2016, 2017, I started experimenting with remixed open-source code and generating poems through small programs. That led to my 2019 book “Travesty Generator,” which focuses on how computational determinism—the way algorithms quietly shape outcomes in our lives—is reflected in oppression and racism. The algorithm isn’t just our social media; it’s credit scores, where we live, our zip codes—tools that are used to reinforce, often invisibly, racial and ethnic disparities. Part of the goal with that book was to turn those very computational methods back on themselves, to expose the encoded level of anti-Blackness and oppression.

When we think of large language models, we think of ChatGPT, and it can feel like it appeared out of thin air recently. But that’s not true, right?

I started working with GPT in 2017, 2018, and this was before it had an interface—I was doing all of this through coding in the backend. You had a lot more control over it then. We remember the good old days of GPT-2, when you could change the parameters and see exactly what it would do. It was weird and wacky and it would say strange things. It was a very cool party trick in a lot of ways.
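
That parameter-level control is still available through the openly released GPT-2 weights. Here is a minimal sketch, assuming the Hugging Face transformers library (not necessarily the tooling Bertram used), where sampling settings like temperature and top_k control how strange the output gets:

    from transformers import pipeline

    # Load the original GPT-2 model from its openly released weights.
    generator = pipeline("text-generation", model="gpt2")

    # Sampling parameters shape the output: higher temperature and a looser
    # top_k push the model toward stranger, less probable continuations.
    result = generator(
        "A poem is a small machine",
        max_new_tokens=60,
        do_sample=True,
        temperature=1.3,
        top_k=100,
    )
    print(result[0]["generated_text"])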

From the beginning I was interested in how familiar the network was with anti-Blackness and racial and ethnic oppression and what it had to say about it. That led to my project “A Black Story May Contain Sensitive Content,” where I fine-tuned GPT on the Black poet Gwendolyn Brooks—meaning I retrained the model on her poems to shift how it generated text—and compared it to an out-of-the-box model by prompting both with “tell me a Black story.” Of course the results were completely different. I was interested in seeing how good the model isn’t, especially when it comes to bias and racism. 
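
Fine-tuning in this sense means continuing to train the model’s weights on a new corpus until its generations drift toward that corpus. A minimal sketch of that workflow using the Hugging Face transformers and datasets libraries, with a hypothetical brooks_poems.txt corpus file; this is not Bertram’s actual pipeline:

    from datasets import Dataset
    from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                              GPT2TokenizerFast, Trainer, TrainingArguments)

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Hypothetical corpus file: one poem or stanza per line of plain text.
    with open("brooks_poems.txt", encoding="utf-8") as f:
        texts = [line.strip() for line in f if line.strip()]

    dataset = Dataset.from_dict({"text": texts}).map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
        batched=True,
        remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="gpt2-finetuned", num_train_epochs=3),
        train_dataset=dataset,
        # mlm=False selects standard next-token (causal) language modeling.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model("gpt2-finetuned")

Prompting both the fine-tuned checkpoint and the stock gpt2 model with the same line, such as “tell me a Black story,” then gives the side-by-side comparison she describes.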

Today the models are very different. With consumer-facing interfaces like ChatGPT there are multiple layers of guardrails. It has far more cultural competencies, but it’s also a corporate idea of what we should be thinking and saying. We don’t really know who’s making those decisions on the backend, or what’s being filtered out. 

You’ve emphasized that computer-generated text isn’t new. Can you talk about its longer history and how you teach that in your classes?

An early example I teach is German mathematician and programmer Theo Lutz’s “Stochastic Texts” from 1959. He used a word corpus of nouns from Franz Kafka’s “The Castle” and generated what we would now call poems, but you could also look at them as “generated sentences.” You get these very eerie mashups and remixes of the text—it’s haunting and it’s strange, with the kind of defamiliarization of language that we come to expect of a certain kind of poetry and poetics. That’s held up as an early, canonical example.
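
Lutz’s procedure is simple enough to reconstruct: a small vocabulary of subjects and predicates from the novel, dropped at random into a fixed logical sentence template. A minimal sketch of that scheme, with a short illustrative word selection rather than Lutz’s full sixteen-entry lists:

    import random

    # A few of the subject and predicate words drawn from "The Castle";
    # an illustrative selection, not Lutz's complete sixteen-entry lists.
    SUBJECTS = ["count", "stranger", "look", "church", "castle", "picture"]
    PREDICATES = ["open", "silent", "strong", "good", "narrow", "far"]
    QUANTIFIERS = ["a", "every", "no", "not every"]

    def stochastic_sentence():
        # Fixed template: quantifier + subject + "is" + optional negation + predicate.
        negation = random.choice(["", "not "])
        return (f"{random.choice(QUANTIFIERS)} {random.choice(SUBJECTS)} is "
                f"{negation}{random.choice(PREDICATES)}").upper()

    for _ in range(6):
        print(stochastic_sentence() + ".")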

A lot of computer-generated work is intertextual, almost like layering different beats together to form a new song. And that’s not new. You can even look back to the cento, a very old Italian form that uses lines or passages taken entirely from other existing poems to create a new work. So it participates in this lineage of remixing and meshing of different kinds of texts.

That history is part of why my co-editor Nick Montfort and I put together “Output: An Anthology of Computer-Generated Text, 1953–2023.” It reprints examples from across seven decades and provides resources for readers who want to dive deeper. We wanted to show that computer-generated text has never been a single practice, but a wide-ranging and evolving field. 

You’ve said it’s important to demystify AI for students. What do you want them to understand about these tools, and what possibilities or risks do you see in how they use them?

I approach teaching this from a very critical standpoint so that students are informed. I call this providing them with a level of “intellectual self-defense”—so they’re able to approach these technologies with the wisdom of how they can be used and why they would use them, and how they can perhaps be part of a creative writing and composition process, not the outcome. We’re not just asking GPT to do something for us, but seeing how it might be part of the process. 

Where it impacts English negatively is when students use it as the product. But when they use it critically, it can actually clarify their own thinking. The questions they pose to a language model are ultimately the questions they need to pose to themselves. That kind of clarity can be very powerful.

Are there benefits? Yes, but with an asterisk. The benefits we see have to do with efficiency—it can do something really fast. But is it good? That requires interrogation. Should you accept it uncritically? No.

What inspired you to create the Studio for Literary Technology and what do you hope people will take away?

Much of the campus conversation around AI is happening in computer science, engineering, biology, medicine—even media arts. Very little is centered on English. Yet when it comes to generated text, an English department has the most to lose and the most at stake. Creative writing is a culture-making enterprise—our students are making culture. So what does it mean to make culture in the face of a language model that can mimic it? We need to be having these conversations, experimenting and being aware of it.

This is why I created the studio. Its 2025-26 programming is called “Ways to Meet AI.” The first event is Sasha Stiles, who has been working with language models for about as long as I have. She has a fine-tuned, bespoke model and thinks of it as an alter ego, a collaborative writing partner. Her work is fascinating and visually stunning, exhibited around the world, and deeply thoughtful about humanism and posthumanism—what the future might look like. It’s an opportunity for students to see what it means to build an artistic practice around literary technology.

I hope the studio is for all students, for the entire campus community—because is there anywhere on campus now that AI isn’t touching? I don’t think so. Hopefully people will see what thoughtful, dedicated engagement looks like, both creative and critical, and understand there’s so much more than just using ChatGPT to finish an assignment. There are ways to build a practice around these tools that really question and understand them.

The Studio for Literary Technology’s first event, featuring poet and digital artist Sasha Stiles, takes place October 9.