MD/PhD Student, UT Health Science Center at Houston
Introduction: Aphasia is a devastating language disorder affecting millions of Americans. While some patients improve over time with speech therapy, many have long-term impairments. Recent research aims to develop speech brain-computer interfaces (speech-BCIs) that restore speech function by detecting intended speech from neural activity. However, this technology has yet to be investigated in aphasia. We aim to assess the feasibility of decoding speech from residual eloquent cortex in a patient with non-fluent aphasia following traumatic brain injury.
Methods: We recorded neural activity via depth electrodes during a picture-naming task. We implemented a phoneme-level sequence-to-sequence (seq2seq) model for phonological decoding and a word-level semantic decoder based on linear models. Phonemic error trials were included to assess their impact on decoding performance. A virtual aphasia model was constructed by training the decoders on correctly articulated responses and testing them on trials with phonemic errors, in order to identify neural substrates of the dissociation between phonological and semantic information.
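A minimal Python sketch of the two decoders described above is given below; the channel count, phoneme inventory, architecture, and hyperparameters are hypothetical placeholders, not the models actually used in this study.

    # Hypothetical sketch of the two decoders described in Methods. Feature shapes,
    # the phoneme inventory, and all hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn
    from sklearn.linear_model import RidgeClassifier

    N_PHONEMES = 40       # assumed phoneme inventory size (an extra id is reserved for <eos>)
    N_CHANNELS = 64       # assumed number of depth-electrode channels
    MAX_OUT_LEN = 8       # assumed maximum phonemes per naming response

    class PhonemeSeq2Seq(nn.Module):
        """Phoneme-level seq2seq decoder: a GRU encoder summarizes the neural
        feature sequence and a GRU decoder emits one phoneme per step."""
        def __init__(self, n_channels, n_phonemes, hidden=128):
            super().__init__()
            self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
            self.embed = nn.Embedding(n_phonemes + 1, hidden)  # +1 for the <eos> token
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_phonemes + 1)

        def forward(self, neural, phonemes):
            # neural: (trials, time bins, channels); phonemes: (trials, length) target ids
            _, h = self.encoder(neural)           # encode each trial's neural activity
            dec_in = self.embed(phonemes)         # simplified teacher forcing (a full model
            dec_out, _ = self.decoder(dec_in, h)  # would shift inputs with a start token)
            return self.out(dec_out)              # logits over the phoneme inventory

    # Word-level semantic decoder as a linear model over trial-averaged features.
    semantic_decoder = RidgeClassifier(alpha=1.0)

    # Hypothetical usage with random arrays standing in for recorded trials.
    neural = torch.randn(16, 200, N_CHANNELS)               # 16 trials, 200 time bins
    phonemes = torch.randint(0, N_PHONEMES, (16, MAX_OUT_LEN))
    logits = PhonemeSeq2Seq(N_CHANNELS, N_PHONEMES)(neural, phonemes)
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, N_PHONEMES + 1), phonemes.reshape(-1))

    trial_features = neural.mean(dim=1).numpy()             # per-trial averaged features
    word_labels = ["dog", "cat", "house", "tree"] * 4       # placeholder picture-naming targets
    semantic_decoder.fit(trial_features, word_labels)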
Results: The phonological decoder achieved 73% accuracy, indicating that phonological information remains accessible even in the presence of aphasia. Including phonemic error trials in training improved decoding performance, suggesting robustness to the speech variability inherent in aphasia. In the virtual aphasia models, word-level semantic decoding remained stable across trials with increasing phonemic errors, while the phonological decoder predicted the patient's actual utterances more accurately than the intended target words. These findings suggest that this patient's aphasia is characterized by a breakdown in phonological retrieval with preserved semantic access.
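To illustrate how the comparison between the decoded output, the actual utterance, and the intended target can be quantified, a hypothetical Python example is shown below (an edit-distance comparison, not necessarily the analysis used in the study).

    # Illustrative check: is the decoded phoneme sequence closer to what the
    # patient actually said than to the intended target word? (Hypothetical data.)
    def edit_distance(a, b):
        """Levenshtein distance between two phoneme sequences."""
        d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
             for i in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                              d[i][j - 1] + 1,                           # insertion
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        return d[len(a)][len(b)]

    decoded  = ["K", "AE", "P"]   # hypothetical decoder output
    uttered  = ["K", "AE", "P"]   # what the patient actually said ("cap", a phonemic error)
    intended = ["K", "AE", "T"]   # the target word ("cat")
    print(edit_distance(decoded, uttered) < edit_distance(decoded, intended))  # True here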
Conclusion: Our framework provides a systematic approach to decoding tailored to a patient's minimal preserved speech fluency and demonstrates the utility of neural decoding models for improving our understanding of aphasia. Ultimately, this approach could enable the development of brain-computer interfaces that allow fluent and reliable communication in individuals with aphasia.