A Human-Centered Approach to AI
April 06, 2026
In the College of Arts and Humanities, students are exploring artificial intelligence not just as a technology, but as a force shaping human life.
By Jessica Weiss ’05
In one University of Maryland classroom, students are considering whether artificial intelligence can be creative. In another discussion, they’re grappling with a different question: if AI replaces human work, what gives life meaning?
These are the kinds of deep and thoughtful debates that animate “AI and the Human Experience,” a philosophy course where students from across disciplines come together to examine AI as a force both shaping and reshaping human life. The course introduces students to the applications and ethics of AI, from recommender systems and facial recognition to questions of creativity, justice and well-being.
Taught in recent semesters by Professor of Philosophy Fabrizio Cariani, the course is one of many across the College of Arts and Humanities—spanning departments from communication to women, gender and sexuality studies to linguistics—exploring the broad implications of AI and the kind of world these systems are helping to build.
Cariani, who also chairs the Department of Philosophy, studies questions of meaning, uncertainty and human decision-making. We spoke with him about what it means to study AI from a humanities perspective, and why students are increasingly drawn to these questions now.
In the Department of Philosophy, students are engaging with AI in ways far beyond the technical. Why is it important for students outside traditional STEM pathways to have a grounding in it?
AI is not just a tool, and it's no longer something understood only through a restricted body of technical knowledge. It's simply part of our world. We deal with AI in ways that go well beyond engineering questions, raising issues of ethics and justice.
Students are not just using AI. Their opinions are as fragmented as the faculty's, and the broader implications matter to them. Some are concerned about energy use and climate impact. Others worry that it will harm their education. Still others are interested in the potential of these tools to benefit us.
So the idea is that we need people who understand enough about how these systems work—even if they’re not going to build them themselves—to think carefully about what they’re doing and how they shape the world.
You use the phrase “human-centered AI.” What does that mean to you?
There is a technical side to understanding how these systems actually work: how they’re built, how they’re deployed and how they’re used in real-world contexts, whether that’s in engineering, scientific research or other fields.
But to me, the human-centered aspect comes down to approaching AI from a critical and inquisitive perspective. That perspective asks what kinds of dangers we face from AI, both short- and long-term. How is it impacting our society? But also questions like: What can we learn about cognition and creativity by studying these artificial systems?
“Human-centered AI” is a kind of gathering point for those questions. It’s not critique in the purely negative sense, but in the sense of understanding what these systems are doing and being attentive to their consequences, including questions of social justice, long-term risk, equity and access.
How should universities be talking about AI right now? Is it inevitable?
I think “inevitable” can mean two different things. One is that, for the foreseeable future, AI is going to be part of our lives. In that sense, I think AI is inevitable.
But that doesn’t mean we have to be resigned to it or accept every form it takes. There are still important regulatory questions we need to address and social choices to make. We can still have a societal effort to understand AI’s implications and try to intervene where intervention is meaningful. There is still space for human judgment, for values and for decision-making. It’s important to combine that sense of inevitability with ongoing critical reflection.
What kinds of ethical questions do you think students most need to grapple with when it comes to AI?
A lot of the discussion has focused on things like predictive policing and the use of algorithms in the criminal justice system. Those are really important. But there are also ethical questions around autonomous weapons, the use of AI in war, labor and what happens if AI replaces large portions of human work. There are questions about access: who benefits from AI and who doesn’t.
And there’s the alignment problem: if we want AI systems to reflect our values, what does that actually mean? Whose values? What counts as being close enough to our values?
What kind of student do you imagine being drawn to this kind of inquiry?
People who are excited by the technology but also worried by its implications. They want to know more, but not just in the sense of learning how to program it. They want to understand what these systems are, how they work, how they’re changing society and what kinds of questions they raise.
In the course I taught, we had students from computer science, philosophy and other majors all getting excited about the same material. There’s a real appetite for these conversations.
“AI and the Human Experience” is being planned as the gateway course for a proposed new major in human-centered AI at UMD. Why should students consider this path?
The capabilities of these systems are just staggering, and it’s natural to be excited about what they can do. But it’s equally important that students feel equipped to question them—to ask what these systems are doing, what they should be doing and whether there are ways we need to guide their development.
I think there are a lot of different directions students can take this in terms of careers. Some may pursue more technical paths, building on that foundation and moving into research or advanced study in areas like machine learning. Others may be more interested in policy, law or governance—thinking about how these systems should be regulated or how they're shaping society. And then there are students interested in creative work, communication or other fields where AI is becoming part of the process. What they have in common is that they're not just using the technology, they're thinking about it. And I think that kind of perspective is going to be valuable in a lot of different spaces.
Conversations like these are helping to inform a proposed new undergraduate major in human-centered AI, currently in development. Designed as an interdisciplinary B.A., the program would combine foundational training in AI with coursework in the humanities, social sciences and other human-facing disciplines, equipping students to engage with the technology and examine its broader impact.
Artwork by Olivia King ’26.