Computer engineers and hearing scientists at The Ohio State University have made a potential breakthrough in solving a 50-year-old problem in hearing technology: how to help the hearing-impaired understand speech in the midst of background noise. In the Journal of the Acoustical Society of America, they describe how they used the latest developments in neural networks to boost test subjects' recognition of spoken words from as low as 10 percent to as high as 90 percent.
The researchers hope the technology will pave the way for next-generation digital hearing aids. Such hearing aids could even reside inside smartphones; the phones would do the computer processing, and broadcast the enhanced signal to ultra-small earpieces wirelessly.
Several patents are pending on the technology, and the researchers are working with leading hearing aid manufacturer Starkey, as well as others around the world, to develop the technology.
Conquering background noise has been a "holy grail" in hearing technology for half a century, explained Eric Healy, professor of speech and hearing science and director of Ohio State's Speech Psychoacoustics Laboratory.
The desire to understand one voice in a roomful of chatter has been dubbed the "cocktail party problem."
"Focusing on what one person is saying and ignoring the rest is something that normal-hearing listeners are very good at, and hearing-impaired listeners are very bad at," Healy said. "We've come up with a way to do the job for them, and make their limitations moot."
Key to the technology is a computer algorithm developed by DeLiang "Leon" Wang, professor of computer science and engineering, and his team. It quickly analyzes speech and removes most of the background noise.
"For 50 years, researchers have tried to pull out the speech from the background noise. That hasn't worked, so we decided to try a very different approach: classify the noisy speech and retain only the parts where speech dominates the noise," Wang said.
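The article does not give implementation details, but the "retain only the parts where speech dominates the noise" idea Wang describes resembles the well-known time-frequency binary masking approach from speech-separation research. The sketch below is an illustrative simplification, not the authors' actual algorithm: it assumes the clean speech and noise signals are known (in the real system, a neural network must estimate the mask from the noisy mixture alone), splits the mixture into short frames, and keeps only the frequency bins where speech energy exceeds noise energy.

```python
import numpy as np

def binary_mask_denoise(speech, noise, frame_len=256, threshold_db=0.0):
    """Illustrative time-frequency binary masking (a simplification of the
    concept described in the article, NOT the researchers' algorithm).
    Keeps only the frequency bins, per frame, where the local
    speech-to-noise ratio exceeds `threshold_db`; zeros out the rest."""
    mixture = speech + noise
    out = np.zeros_like(mixture, dtype=float)
    n_frames = len(mixture) // frame_len
    for i in range(n_frames):
        sl = slice(i * frame_len, (i + 1) * frame_len)
        S = np.fft.rfft(speech[sl])   # spectrum of clean speech (oracle)
        N = np.fft.rfft(noise[sl])    # spectrum of noise (oracle)
        M = np.fft.rfft(mixture[sl])  # spectrum of what the listener hears
        # Local SNR per frequency bin; small epsilon avoids log(0)
        snr_db = 10 * np.log10((np.abs(S) ** 2 + 1e-12) /
                               (np.abs(N) ** 2 + 1e-12))
        mask = snr_db > threshold_db  # 1 where speech dominates, else 0
        out[sl] = np.fft.irfft(M * mask, n=frame_len)
    return out
```

Because a pure tone concentrates its energy in a few bins while broadband noise spreads across all of them, the mask discards most noise-dominated bins, and the reconstructed signal ends up much closer to the clean speech than the raw mixture was.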
In initial tests, Healy and doctoral student Sarah Yoho removed twelve hearing-impaired volunteers' hearing aids, then played recordings of speech obscured by background noise over headphones. They asked the participants to repeat the words they heard. Then they repeated the same test after processing the recordings with the algorithm to remove background noise.
They tested the algorithm's effectiveness against "stationary noise" -- a constant noise like the hum of an air conditioner -- and then with the babble of other voices in the background.
The algorithm was particularly effective against background babble, improving hearing-impaired people's comprehension from 25 percent to close to 85 percent on average. Against stationary noise, the algorithm improved comprehension from an average of 35 percent to 85 percent.
For comparison, the researchers repeated the test with twelve undergraduate Ohio State students who were not hearing-impaired. They found that scores for the normal-hearing listeners without the aid of the algorithm's processing were lower than those for the hearing-impaired listeners with processing.
"That means that hearing-impaired people who had the benefit of this algorithm could hear better than students with no hearing loss," Healy said.
A new $1.8 million grant from the National Institutes of Health will support the research team's refinement of the algorithm and testing on human volunteers.