I am a computer scientist at Johns Hopkins University working at the intersection of neuroscience, machine learning, and spoken language technologies to develop speech Brain-Computer Interfaces (BCIs). My research focuses on decoding neural signals associated with different speaking modalities, with the goal of restoring communication to individuals who have lost their voice to neurodegenerative diseases such as ALS.
My work combines deep learning with invasive neural recording techniques to create more intuitive and effective ways for the brain to communicate directly with computing devices. I'm particularly interested in building real-time decoding algorithms that translate neural activity into intelligible speech with minimal latency, allowing for immediate and continuous acoustic feedback. This involves tackling challenges in digital signal processing, neural network architecture design, and the optimization of BCI systems for clinical use.
I am currently conducting research for the CortiCom project, a clinical trial spanning national and international institutions that investigates the safety and efficacy of chronic ECoG implants for long-term BCI control. In this trial, we evaluate the stability of neural signals and the reliability of BCI control over extended periods while developing robust decoding algorithms that translate attempted speech or motor movements into text or synthesized speech.
Feel free to explore my ongoing projects, publications, and other resources shared here.