The Hamilton Lab is jointly housed in the Department of Speech, Language, and Hearing Sciences in the Moody College of Communication and the Department of Neurology/Dell Children's Hospital at the University of Texas at Austin. Our research spans several arms, all with the aim of understanding how speech and language are processed in the brain in children and adults.
How does the brain represent natural sounds during speech perception?
Our research aims to determine how natural sounds, including speech, are represented by the human brain, and how these representations change during development. We study how the human brain processes speech sounds using intracranial electrocorticography (ECoG) recordings from patients with medication-refractory epilepsy who are undergoing surgery to treat their epilepsy. This work is performed in collaboration with patients and clinicians at Dell Children's Hospital in Austin, Texas. This research will not only inform us about how speech and language function relates to epilepsy, but will also help us develop new assistive technologies for people with communication disorders.
Some results of this type of research are shown here. The movie below shows the brain's real-time response to an English sentence as the person heard it in the clinic (credit: Liberty Hamilton, data from Edward Chang's Lab at UC San Francisco). Each dot represents an electrode that was implanted during treatment for temporal lobe epilepsy. The activity is shown from light to dark red, with darker colors representing more activity during sound listening. The sound waveform is shown below. Based on responses like these, we are trying to determine how the brain can take sounds and build up information to represent phonemes, words, phrases, and whole sentences.
Scalp EEG Studies
In addition to our studies using electrocorticography and stereo-EEG, we also use noninvasive scalp EEG to understand the fast dynamics of speech processing in the brain. Here are some current studies:
Generalizable EEG encoding models with naturalistic audiovisual stimuli
Available as a preprint: Desai M, Holder J, Villarreal C, Clark N, Hamilton LS (2021). Generalizable EEG encoding models with naturalistic audiovisual stimuli. bioRxiv.
As humans, we live in a noisy world where overlapping speech and non-speech sounds constantly occur in highly uncontrolled, naturalistic settings. In this study, we investigated how well we can predict brain responses to speech in a controlled listening situation (continuous sentences from a speech corpus) and compared these to neural responses when speech was presented in a noisy, naturalistic context (children's movie trailers). We recorded high-density 64-channel EEG in participants with typical hearing as they listened to these contrasting stimuli. We found that encoding models fit to neural responses to the movie trailers generalized to the more controlled data sets. This shows that modeling neural responses to highly noisy, audiovisual movie trailers can uncover tuning for acoustic and phonetic information that generalizes to the simpler stimuli typically used in sensory neuroscience experiments.
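For readers unfamiliar with encoding models, the general idea can be sketched in a few lines of code. The snippet below is an illustrative example only, not the code from the paper: it fits a time-lagged linear (ridge regression) encoding model that predicts each EEG channel from lagged stimulus features, using synthetic data, and evaluates the model by correlating predicted and actual responses on held-out data. All variable names, dimensions, and the regularization value are our own choices for illustration.

```python
import numpy as np

# Sketch of a time-lagged linear encoding model on synthetic data.
# Stimulus features (e.g., acoustic features over time) are mapped to
# multichannel EEG via ridge regression.

rng = np.random.default_rng(0)
n_times, n_feats, n_chans, n_lags = 2000, 5, 4, 10

stim = rng.standard_normal((n_times, n_feats))          # stimulus features
true_w = rng.standard_normal((n_feats * n_lags, n_chans))

# Build the lagged design matrix: each row contains the current and
# previous (n_lags - 1) stimulus frames, flattened.
X = np.zeros((n_times, n_feats * n_lags))
for lag in range(n_lags):
    X[lag:, lag * n_feats:(lag + 1) * n_feats] = stim[:n_times - lag]

# Simulated EEG = lagged stimulus passed through true weights, plus noise.
eeg = X @ true_w + 0.1 * rng.standard_normal((n_times, n_chans))

# Fit ridge regression weights on the first half of the recording.
half = n_times // 2
Xtr, Xte, ytr, yte = X[:half], X[half:], eeg[:half], eeg[half:]
lam = 1.0  # regularization strength (arbitrary here)
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)

# Evaluate on the held-out half: correlation per channel between
# predicted and actual responses.
pred = Xte @ w
r = [np.corrcoef(pred[:, c], yte[:, c])[0, 1] for c in range(n_chans)]
print(np.round(r, 2))
```

In practice, studies like the one above fit such models to real EEG with richer feature sets (acoustic envelopes, spectrograms, phonetic features) and test whether weights learned on one stimulus class (movie trailers) predict responses to another (controlled sentences).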