Can AI Hear the World, Fool the Eye and Tell a Joke? Doctoral Students in Communication Explore How AI Shapes Media and Culture
Artificial intelligence is often discussed in terms of automation, efficiency and disruption. But for three UMass Amherst doctoral students in the Department of Communication, AI is also a lens for understanding sound, trust and humor.
Their research spans machine listening, deepfake perception and the cultural limits of AI-generated humor. The scholars are examining how existing technologies are shaped by human values and how people, in turn, respond to machines that increasingly see, hear and speak for them.
Together, their work reflects an interdisciplinary approach within communication studies that asks not only what artificial intelligence can do, but how it reshapes the way people hear, see and make meaning.
Valentina Paskar: Listening to sound beyond speech and music
Paskar, a doctoral candidate, studies artificial intelligence through the lens of sound studies.
“I’m interested in sound recognition that is fueled by artificial intelligence, machine learning,” Paskar says. “There is this term that people sometimes use, machine listening.”
While much of the conversation around AI centers on generative tools, Paskar is more interested in what happens before generation can even occur. Recognition systems, she explains, determine how sound is labeled and understood. These decisions shape everything that follows.
Paskar’s work moves beyond familiar categories like music and speech. Instead, she studies what she calls “the rest of the sonic world,” such as environmental sounds, background noise and nonhuman audio that often receive less attention in AI development.
She says humans are still far better than machines at categorizing environmental noise, such as distinguishing between a burglar and a cat jumping on the bed. The technology, however, is advancing rapidly.
Paskar says it’s exciting to be studying a field that is still emerging.
“Because it’s rare when you, as a researcher, can actually see it in the moment,” she says.
Paskar also places current debates about AI into historical context, comparing today’s anxieties to earlier moments of technological change.
“The panic that is around AI is very similar to when phonographs arrived,” she points out, noting that fears about new media often reflect broader social shifts.
By studying how machines listen, Paskar hopes to make people more aware of the assumptions embedded in technology and to give them the knowledge to scrutinize those assumptions.
“Sound is really under-looked… underrated,” she says.
Shahnaz Bashir: The rise of deepfakes
As AI-generated deepfakes proliferate in social media feeds and elsewhere online, Bashir, an advanced doctoral candidate, examines pressing questions about trust and perception.
“People are looking at deepfakes from every different angle,” Bashir says. “How do people perceive deepfakes? What do they think?”
His research focuses on audience reactions rather than technical production. Bashir analyzes thousands of comments on deepfake videos posted to platforms such as YouTube and X, examining how viewers interpret authenticity, intent and credibility across different cultural contexts.
He has found that deepfakes are no longer confined to politics or misinformation campaigns. AI-generated figures are now used in advertising and social media influencing. In some cultures, people even create AI-generated versions of deceased loved ones to cope with loss.
At the same time, Bashir sees growing confusion about what can be trusted visually.
“It’s really difficult to tell a video apart from its fake version,” he says. “The technology is making it more refined and sophisticated.”
While detection tools are often presented as a solution, Bashir urges caution.
“There’s no forensics that can tell you what is and is not a deepfake,” he warns. “You have to verify.”
For Bashir, that uncertainty has consequences. One concern in his research is what scholars call the “liar’s dividend,” in which real footage can be dismissed as fake simply because deepfakes exist.
His work highlights the need for media literacy that stresses context, verification and critical thinking.
Ibukun Filani: Can AI tell African jokes?
Filani, a doctoral student, approaches artificial intelligence through humor, language and culture. His research asks a deceptively simple question: Can AI tell African jokes?
Originally from Nigeria, Filani studies how AI systems generate humor rooted in African cultures, and why those jokes often fail. Humor, he explains, relies heavily on shared knowledge, language and social context.
Filani recently published a book exploring comedy discourse and how comedians contribute to our socio-political consciousness.
“What will happen if we ask AI to tell us jokes?” he muses. “Are we going to see it as jokes?”
Filani notes that computer-generated humor is not new.
“Since around 1995, people have been working on computer-generated jokes,” he says. “We have known for a fact that computers can create jokes.”
But African humor poses unique challenges for AI systems trained primarily on Western data. When jokes fall flat, Filani sees more than a technical limitation: he sees a reflection of whose cultures and experiences are prioritized in machine learning, and a demonstration of the current limits of generative AI.
He says this is rooted in the fact that large language models don’t just learn language; they also learn genres, which he describes as the ways by which we model and understand the world.
“AI tends to just simply reproduce the colonial archive, which in itself is incomplete,” he observes.
Using humor as his lens, Filani opens broader conversations about representation, inclusion and cultural knowledge in artificial intelligence. These questions extend far beyond comedy.