Recently, Jon Hamilton of NPR’s All Things Considered interviewed Dr. Edward Chang, one of the neurosurgeons and investigators involved in a study focused on decoding cortical activity into spoken words.
Currently, people who cannot produce speech rely on technology that lets them use eye gaze to generate synthesized speech one letter at a time. While this gives a voice to those who otherwise could not speak, it is considerably slower than natural speech production.
In the study, cortical electrodes recorded neural activity while subjects read hundreds of sentences. The electrodes monitored regions of the cortex involved in speech production, and decoding this activity produced intelligible synthesized speech.
Reference
Anumanchipalli GK, Chartier J, Chang EF. (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498.