Studying Why and How Speaking Face-to-Face Helps Aging Listeners
AMHERST, Mass. – The vast majority of speech perception research has focused on how we recognize what a speaker says through listening alone, failing to capture the value of speaking face-to-face, says speech perception expert Alexandra Jesse at the University of Massachusetts Amherst.
Now she has a two-year, $100,000 grant from NIH’s National Institute on Aging to explore the mechanisms underlying audiovisual speech perception – that is, how listeners, in particular older adults with age-related hearing loss, combine information from hearing and seeing a speaker to their benefit.
Jesse, a cognitive psychologist in the department of psychological and brain sciences who has devoted her research career to the study of auditory and audiovisual speech perception, says she and others have found that almost everyone recognizes speech better when they can hear and see the speaker than when they just hear the speaker. Her new studies will look at “how we process audiovisual speech and how that changes with aging,” she says.
She adds that most older adults – the fastest growing segment of the U.S. population – at some point will experience difficulty understanding what others say, primarily due to age-related hearing decline. “It is estimated that by 2030, 20 percent of the population will be over 65, and the probability of hearing loss doubles with every decade,” Jesse says. “In the over-70 group, nearly two-thirds of adults are affected and 70 percent of adults who could benefit from a hearing aid never use one.”
Age-related hearing decline leads to poor speech comprehension, which is not only a problem for communication but is also associated with accelerated cognitive decline, increased risk for dementia, and social isolation of older adults and their families. “With social interaction as a primary predictor of general cognitive functioning, this affects the whole family,” she says, adding that “seeing the speaker can help older listeners with comprehension and thus have a far-reaching impact on their healthy aging.”
So-called “visual speech” can often help, especially in noisy situations, and earlier studies have shown that people benefit but to widely varying degrees, Jesse says. “Some people understand tremendously better when seeing the speaker while for others it doesn’t seem to help so much. In some people we find larger benefits for recognition, and in some smaller ones.”
“When listeners can hear and see the speaker at the same time, they show, within 100 milliseconds after the onset of a word, a different response than when they can just hear – and not see – the speaker,” Jesse explains. “The early brain response is modulated by visual speech. It’s different, but we don’t know what it means for the listeners’ ability to recognize what was said. What role does it potentially play for how much listeners benefit from audiovisual speech? This is one thing we are trying to figure out.”
Specifically, Jesse and colleagues will study the most difficult listening situation for older adults: understanding speech in noise when just one other person is talking. She plans a series of electrophysiological and cognitive experiments in three groups of volunteer study participants – those under age 30, middle-aged adults 45 to 59 years old, and adults 65 to 80 years old – to define age-related differences in audiovisual processing.
Her studies will also look at how individual differences in cognitive skills such as memory and attention affect the use of audiovisual speech in older adults. “I hope we may be able to predict why one person can achieve good comprehension from audiovisual speech and another can’t,” she says. “This could allow us to propose treatment regimes to boost the benefits of audiovisual speech in those who are not currently able to take full advantage.”
The cognitive psychologist hopes her findings will significantly advance the theoretical understanding of how listeners in different age groups benefit from audiovisual information during real-time speech processing. She expects the results will help lay the groundwork for clinical assessments and interventions that improve the use of audiovisual speech to achieve functional speech perception and enhance healthy aging.