

Study Finds Accelerating Adoption of AI Language Models in Economics Research Papers
A new University of Massachusetts Amherst study delivers clear evidence of how large language models (LLMs), such as ChatGPT, are reshaping the landscape of academic research. It applies a new language analysis method that can help determine whether humans are getting an assist from AI in performing a range of tasks inside and outside of academia. The paper, published in the journal Economics Letters, reveals a marked and accelerating adoption of LLMs by economists in scholarly writing.
By analyzing the writing style in 25 top economics journals over 24 years, Maryam Feyzollahi and Nima Rafizadeh, doctoral students in the Department of Resource Economics at UMass Amherst, discovered a significant shift in language patterns after ChatGPT became available in late 2022.
AI-characteristic language increased by nearly 5 percentage points in economics papers published in 2023-24. Notably, this effect more than doubled from the first year to the second — rising from about 3 percentage points in 2023 to nearly 7 in 2024, suggesting economists are becoming more comfortable with LLM tools over time.
“Think of it like a linguistic fingerprint,” Feyzollahi explains. “AI-assisted writing tends to use certain words and phrases more frequently than traditional academic writing.”

They compared the frequency of 25 AI-associated terms — such as “underscore,” “nuance” and “leverage” — against a control group of traditional economics terms, measuring how AI has influenced academic writing.
“We’re looking at statistical patterns across thousands of papers over time,” Rafizadeh notes. “This allows us to detect subtle shifts in linguistic patterns that would be impossible to notice in single papers.”
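The kind of comparison the researchers describe can be illustrated with a short sketch. The term lists and toy corpus below are hypothetical stand-ins (the article names only "underscore," "nuance" and "leverage" of the 25 AI-associated terms, and does not list the control terms), and this is not the authors' actual code; it simply shows how one might measure the share of papers containing AI-associated versus control vocabulary by year:

```python
import re

# Illustrative word lists; the study's full 25-term lists are not given in the article.
AI_TERMS = {"underscore", "nuance", "leverage"}               # AI-associated terms named in the article
CONTROL_TERMS = {"equilibrium", "elasticity", "regression"}   # hypothetical control terms

def term_share(papers, terms):
    """Fraction of papers containing at least one of the given terms."""
    hits = sum(
        1 for text in papers
        if terms & set(re.findall(r"[a-z]+", text.lower()))
    )
    return hits / len(papers)

# Toy corpus standing in for paper abstracts, keyed by publication year
corpus = {
    2021: ["we estimate the elasticity of demand",
           "a regression model of equilibrium prices"],
    2024: ["our findings underscore the nuance of policy design",
           "we leverage panel data to estimate treatment effects"],
}

for year, papers in sorted(corpus.items()):
    ai = term_share(papers, AI_TERMS)
    ctrl = term_share(papers, CONTROL_TERMS)
    print(f"{year}: AI-associated share = {ai:.2f}, control share = {ctrl:.2f}")
```

A pre/post-2022 jump in the AI-associated share, with no corresponding shift in the control share, is the sort of pattern the study reports at scale across thousands of papers.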
Feyzollahi and Rafizadeh say their estimates are likely conservative because the method captures only linguistic, not analytical or coding, uses of AI — and authors may already be modifying LLM-generated text to mask its origin.
Journal rules about AI vary widely, from complete prohibition to required disclosure to complete silence on the issue. The findings raise a range of ethical questions about everything from research integrity to peer-review standards.

Feyzollahi and Rafizadeh, who avoided using AI to write their paper in order to maintain methodological integrity, note that scholars already routinely use spellcheck and grammar-improvement tools in their writing. Where exactly does ChatGPT fall on this continuum? What level of assistance crosses the line from tool to co-author? Should AI systems receive formal acknowledgment in papers?
“Our study doesn’t answer these questions, but it provides empirical evidence that these discussions are no longer just hypothetical — LLMs are already influencing how economics research is written and published,” Feyzollahi says.
Rather than prohibiting AI use, which would be practically unenforceable, the research suggests developing consistent standards for acknowledging how the technology is employed, including ethical guidelines for AI use in research, proper citation of sources and attention to potential biases in LLM outputs.
“We need to shift the conversation from ‘detection and punishment’ to ‘responsible integration,’” Rafizadeh says.
The authors also note that AI has the potential to make research more accessible. For non-native English speakers or early career researchers without access to extensive editing resources, the technology could help level the playing field. However, it could also widen existing gaps between researchers who have access to advanced AI tools and those who do not.
Beyond analyzing economics research, Feyzollahi says their methods can be adapted to track similar changes across other academic disciplines or even outside of academia, where AI’s adoption is even more pronounced.
“What makes our research particularly valuable is that academic publishing provides a unique longitudinal dataset where we can systematically track changes over time,” she says. “In most other contexts, there’s no comparable historical record that allows for rigorous before-and-after analysis.”