I am an MD/PhD candidate at Georgetown University. My interests lie at the intersection of artificial intelligence/machine learning, medicine, and technology. I am in my final year of medical school and will graduate in May 2021.
I completed my PhD in Neuroscience in January 2019 in the Laboratory for Computational Neuroscience at Georgetown University. My research spanned several topics, including the development of a sensory-substitution device that allowed users to understand spoken language through patterns of vibration on the skin, and machine learning models that predict individual differences in cognition from multimodal neuroimaging data (e.g., fMRI, EEG, DTI).
Before medical school, I spent a year as a Research Fellow at the National Institutes of Health in the Human Motor Control Section of the Medical Neurology Branch. My projects included characterizing resting-state neural networks in movement disorders and measuring the post-operative impedance of deep brain stimulation implants at the tissue-electrode interface.
I completed my undergraduate degree at Emory University in Neuroscience and Behavioral Biology, with an honors thesis on electrophysiological changes in a rodent model of spinal cord injury.
Predicting individual differences in cognition using multimodal machine learning
The vast majority of neuroimaging research to date has studied brain function by averaging data across subjects. Recently, however, there has been a surge of interest in individual differences in neuroimaging-based features, and in relating this variability to differences in behavior. Most of this work has focused on a single imaging modality, but leveraging multiple modalities (e.g., resting-state and task fMRI, structural MRI, diffusion MRI) captures a richer description of individual differences. We designed a multimodal prediction-stacking machine learning model based on elastic-net regression to predict individual differences in human cognition. We found that stacked models trained on multimodal neuroimaging data significantly improve prediction of individual differences in cognition beyond single-modality models.
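The stacking idea above can be sketched in a few lines. This is a minimal illustration, not the study's actual pipeline: the synthetic "modalities," feature counts, regularization strengths, and fold count are all assumptions, standing in for real per-subject neuroimaging features.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects = 200
# Synthetic stand-ins for per-subject neuroimaging features: each "modality"
# carries a noisy projection of the same latent cognitive score.
latent = rng.normal(size=n_subjects)
modalities = {
    "rest_fmri": np.outer(latent, rng.normal(size=60)) + rng.normal(size=(n_subjects, 60)),
    "task_fmri": np.outer(latent, rng.normal(size=40)) + rng.normal(size=(n_subjects, 40)),
    "dti":       np.outer(latent, rng.normal(size=30)) + rng.normal(size=(n_subjects, 30)),
}
cognition = latent + 0.5 * rng.normal(size=n_subjects)  # behavioral target

# Level 1: one elastic-net model per modality; out-of-fold predictions
# keep the level-2 model from seeing leaked in-sample fits.
level1 = np.column_stack([
    cross_val_predict(ElasticNet(alpha=0.1), X, cognition, cv=5)
    for X in modalities.values()
])

# Level 2: stack the per-modality predictions with a second elastic net,
# which learns how much to weight each modality.
stacker = ElasticNet(alpha=0.01).fit(level1, cognition)
stacked_pred = stacker.predict(level1)
```

The out-of-fold step is the key design choice: without it, the level-2 model would be fit on overfit level-1 predictions and the stacking benefit would be overstated.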
Learning to understand speech through touch: neural mechanisms of auditory-to-vibrotactile sensory substitution
Speech perception is one of the most remarkable achievements of the human brain, yet how the brain extracts meaning from a speech signal remains poorly understood. Remarkably, individuals can be trained to perceive speech through their sense of touch in the absence of auditory input. Research on communicating speech through the somatosensory system has a long history, dating back to 1924, yet no previous study has investigated the neural mechanisms of speech perception in the somatosensory system. In this study, we trained participants to perceive vibrotactile speech: tactile patterns generated from recordings of spoken syllables using a sensory-substitution device. We used advanced EEG and fMRI techniques to determine how the human brain learns to perceive speech through the sense of touch. The resulting knowledge will help inform the design of speech prostheses for people who are deaf or hard of hearing, and will address the fundamental question of how the human brain accomplishes speech perception.
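The text does not describe the device's signal processing, but audio-to-vibrotactile conversion is often done with vocoder-style processing: band-pass the speech, extract each band's amplitude envelope, and let each envelope modulate a carrier delivered to one tactor. The sketch below is hypothetical; the band edges, filter order, and 250 Hz carrier are illustrative assumptions, not the device's actual parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def audio_to_vibrotactile(audio, fs, bands=((100, 400), (400, 1000),
                                            (1000, 3000), (3000, 7000)),
                          carrier_hz=250):
    """Hypothetical vocoder-style sketch: band-pass the speech signal,
    take each band's amplitude envelope via the Hilbert transform, and
    use it to modulate a tactor-friendly carrier (skin vibration
    sensitivity peaks near ~250 Hz). Returns one channel per band."""
    t = np.arange(len(audio)) / fs
    channels = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envelope = np.abs(hilbert(sosfiltfilt(sos, audio)))
        channels.append(envelope * np.sin(2 * np.pi * carrier_hz * t))
    return np.stack(channels)

# Example: a 500 Hz tone should mostly drive the 400-1000 Hz channel.
fs = 16000
tone = np.sin(2 * np.pi * 500 * np.arange(fs) / fs)
drive = audio_to_vibrotactile(tone, fs)
```

The envelope-on-carrier design matters because the skin resolves temporal envelopes far better than it resolves fine spectral detail, so the slowly varying amplitude of each band carries most of the recoverable speech information.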