How Matthew Kirschenbaum Sees AI’s Impact
May 05, 2025

The Distinguished University Professor of English is a leading voice on how AI is transforming the way we communicate, educate and govern.
By ARHU Staff
As artificial intelligence increasingly reshapes our lives, Distinguished University Professor of English and AIM Affiliate Faculty member Matthew Kirschenbaum is helping us make sense of it—not by decoding algorithms, but by decoding language. A leading public voice on AI, Kirschenbaum has emerged as a sharp commentator on how new technologies are transforming the way we communicate, educate and govern, with work featured in The Atlantic and the Chronicle of Higher Education.
Drawing on a career in media studies and the digital humanities, Kirschenbaum turns a critical eye in his recent work to the discourse surrounding AI. He’s less concerned with the nuts and bolts of how AI works than with the language of AI—both the text generated by AI itself and the way leaders and institutions frame and invoke it.
He has coined terms such as “textpocalypse,” addressing the growing detachment between words and their real-world meaning, and “university as a service,” referring in part to how some higher education institutions are outsourcing their curricula to third-party software that increasingly includes AI.
We spoke with Kirschenbaum to learn more about his recent work and his perspective on how AI is shaping our present and future.
As an English professor, how do you see AI impacting teaching and learning today? Do you see any benefits for students who use AI tools? What are the drawbacks?
From the moment someone picked up a stick to scratch marks in the dirt or stained a cave wall with berry juice, writing was an unnatural act. It’s fundamentally weird and uncanny, the closest we come to speaking with the dead. I don’t want to see students lose sight of that weirdness and wonder. But I also don’t want to be a cop in the classroom. I don’t believe in detection software, which is not nearly reliable enough (especially when students write in non-standard English). When I was doing my research on the literary history of word processing, I learned that some novelists back in the 1980s would add disclaimers to their books stating that even though they had recently bought a computer, the book was still 100% human-written. So these are not new anxieties: there’s something about the act of writing we see as vital and uniquely human. But a little thought reminds us that our writing materials are always pushing back on us: whether it’s a hand-press printer who runs out of a letter in their drawer of type and thus has to choose a different word, or the kinds of abbreviations and shortcuts we use in text messaging, the medium is never neutral. I try to keep that in mind when balancing the new AI technologies against my learning outcomes.
In your work, you speak of a “textpocalypse,” which is also the title of your current book project. What is a “textpocalypse” and how might it impact society?
We usually think of “text” as something we read or something we write (either on a printed page or a screen), or else something that we send in the form of a short message. But text is also something else: it’s a form of data that is legible to computers. Computers can search text, they can manipulate and rearrange it, and now, with large language models like ChatGPT, they can produce it in vast quantities. “Textpocalypse” is my name for the idea that we may be in danger of surrendering the most powerful way of communicating humanity has devised—the written word—as the internet and other communications media become swamped by enormous tides of machine-made text, text which we know will then be swept up into training sets and used to train other machine learning models. This is not a theoretical proposition: late last year, a long-running computational linguistics project that used the internet as a data source for tracking human language use across some forty different languages shut down because the researchers in charge could no longer assume the samples they were looking at were predominantly human-generated. That’s an alarm bell.
AI is everywhere now, as you’ve noted in your recent works. Can you talk about its implications for higher education and explain what you mean by the idea of the “university as a service”? How might it change the value of universities to society?
The “university as a service,” a phrase I coined with my co-author Rita Raley, is a play on the idea of “software as a service,” where instead of actually buying and owning software, users license it from a third-party provider. So what does it mean to think about universities in such terms? More and more of the infrastructure of a large research institution like this one is similarly outsourced to third parties. This includes software (like Canvas or Workday), but it also includes other kinds of infrastructure—instruction, for example, that is delivered by contract workers instead of tenure-track faculty, or even actual courses and curricula which (on some campuses) are now being purchased from commercial services. AI, in my view, will exacerbate this trend in two ways: first, it’s already being embedded into those same third-party services, often in ways that local users don’t understand or control; and second, AI itself is something that campuses like this one also outsource, through contracts like the one we now have with OpenAI. Who is making decisions about how these AIs are trained and what their underlying models look like? We don’t know, and yet we’re asking them to help guide our research and teaching.
You recently gave a talk at Princeton University about how the Department of Government Efficiency (DOGE) is using AI to achieve its aims, including shrinking the federal government. What should people know about how AI is being used?
Let me give you an example. Back in February, Georgia GOP Representative Rich McCormick held a town hall with constituents. They were concerned and angry about large numbers of job cuts at federal agencies like the Centers for Disease Control and Prevention, which is headquartered in the state. When asked how the CDC’s essential services would be maintained, McCormick replied that the work could now be done with AI. In such a moment, McCormick is not talking about any actually existing AI system, or the painstaking work of benchmarking it against the expertise of human scientists. Instead, AI is a kind of magic passphrase that politicians like him can use to punt on the question without having to offer any real, viable alternative. My “US of AI” project, which will take the form of a short book I hope to release within the year, argues that AI is being used as a cover for an extreme agenda that is consolidating access to sensitive data in the hands of a few while diminishing access to government services for the many. This is a dark and dangerous time, and AI is being invoked in a very deliberate way to distract from what’s at stake.
Is there anything else you would like to weigh in on?
Maybe this question should have come first! I am not an AI naysayer. I don’t believe we can bury our heads in the sand or turn back the clock. AI is here to stay. There will be no Great Unplugging. But I do have concerns. I think Big Tech has not always done a good job of addressing the real harms of these technologies, everything from reenacting systemic forms of violence and injustice to the devastating environmental impact that follows from the enormous amounts of computational processing power these technologies require. Too many, in both industry and now government, are deploying these technologies opportunistically, even cynically. Universities should be places where we can formulate counter-narratives to what we’re being told to accept and take for granted. For all that, I have moments of wonder. As a lifelong student of textual technologies, I did not expect to see developments like this in my lifetime.