Example electrocorticography traces during speech listening

We are seeking a lab manager/research assistant to join our group. Apply here.

The Hamilton Lab is part of the NeuroComm Labs at UT Austin's Department of Speech, Language, and Hearing Sciences in the Moody College of Communication. We are also jointly affiliated with the Department of Neurology at UT Austin's Dell Medical School. Our research aims to determine how natural sounds, including speech, are represented by the human brain during speech perception and production, and how these representations change during development. We study how the human brain processes speech sounds using intracranial electrocorticography (ECoG) recordings from patients with intractable epilepsy who are undergoing surgical treatment. We also study how the brain tracks speech in clean and noisy environments during perception and production, using noninvasive EEG and behavioral studies in healthy participants. To address these questions, we use a combination of electrophysiology, behavior, neuroimaging, and computational modeling.

We would not be able to do this work without the generous help of clinicians and our patient volunteers, who participate in listening tasks during their hospital stay. You can read more about our research on the Research page.

Funding Sources:

We are grateful for funding from the following organizations:

  • The Texas Speech Language Hearing Foundation
  • The National Institutes of Health - National Institute on Deafness and Other Communication Disorders
  • Facebook

December 2020 - Congratulations to Garret Kurteff for completing his Master's thesis!

Garret Kurteff has completed his Master's thesis in Speech, Language, and Hearing Sciences. It is entitled "Modulation of Neural Responses to Naturalistic Speech Production and Perception" and addresses methods for understanding continuous speech production using EEG, how brain responses differ during perception and production, and how this is modulated by expectation.

October 2020 - Liberty presents at the Sociedad Argentina de Investigación en Neurociencias

Liberty presented a talk about the lab's ongoing research at a special symposium for the Argentine Society for Research in Neuroscience (SAN) 2020. Her talk was part of the symposium "Representation of language networks. A talk between the Brain and Artificial Intelligence." Her talk was entitled "Understanding naturalistic speech processing using invasive and noninvasive electrophysiology."

October 2020 - The lab receives NIH funding for our research on speech and language in the brain

We received an R01 grant from the National Institutes of Health's National Institute on Deafness and Other Communication Disorders to understand how the brain processes speech and language during natural perception and production, and how this changes across development. This grant is a collaborative effort with researchers and clinicians at Dell Children's Medical Center in Austin, TX and Texas Children's Hospital in Houston, TX.

October 2020 - The lab presents at APAN 2020

Maansi Desai presented her work at the Advances and Perspectives in Auditory Neuroscience (APAN) 2020 virtual conference. Her presentation was entitled "Training and evaluation of EEG encoding models for naturalistic and controlled stimuli".

October 2020 - The lab presents research at the Society for Neurobiology of Language

Garret Kurteff and Maansi Desai presented their work at the Society for Neurobiology of Language virtual meeting. Garret's presentation was entitled "Methods for Investigating Continuous Speech Production and Perception with EEG." Maansi's presentation was entitled "Modeling EEG responses to audiovisual features in quiet and noisy listening situations using naturalistic stimuli".

September 2020 - The lab welcomes Brittany and Tasha!

The lab welcomes Brittany Hoang and Tasha Anslyn. Brittany is undertaking an independent study with Dr. Hamilton on speech and music processing in the brain. Tasha is participating in the Neuroscience Undergraduate Reading Program (NURP) and assisting Garret Kurteff with research on errors during naturalistic speech production as measured with EEG.

May 2020 - Congratulations to Maansi Desai on completing her Master's Thesis!

Maansi Desai has completed her Master's thesis in Speech, Language, and Hearing Sciences. It is entitled "Neural Speech Tracking in Quiet and Noisy Listening Environments Using Naturalistic Stimuli", and addresses how the brain responds to acoustic and phonological information in both quiet, clear speech and more naturalistic, noisy audiovisual speech.

Piano and brain decorative image

March 2020 - "Minds at Work"

Maansi Desai was interviewed by Moody College Marketing & Communication expert Natalie England about her article in Frontiers for Young Minds.

Cartoon picture of a brain and a boy at a piano. The brain is looking confused.

February 2020 - Brain Stimulation to Understand Music and Language

Our paper in Frontiers for Young Minds is out! This paper explains how cortical stimulation mapping is used to understand the neural representations of speech and language in musicians, and was written for and reviewed by kids! The paper was led by PhD student Maansi Desai in collaboration with undergraduate Rachel Sorrells and our UCSF collaborators Matthew Leonard and Edward Chang.

A plot showing results from our EEG study on speech in noisy vs. clear contexts

Feb 2020 - Speech in natural noisy situations coming to OHBM!

MA-PhD student Maansi Desai and current and former undergraduates Jade Holder, Cassandra Villarreal, and Natalie Clark will present their work on using EEG to understand the processing of speech in natural, noisy situations at the Organization for Human Brain Mapping conference in Montreal in June 2020.

January 2020 - Welcome Jacob and Nick!

The lab welcomes Jacob Cheek and Nick Arreguy. Jacob continues with independent study in the lab after participating in the Moody Intellectual Entrepreneurship program. Jacob is interested in speech-in-noise perception and word recognition in people with moderate to severe hearing loss. Nick joins as an independent study student and is learning to collect EEG data to help with a collaboration with the Aphasia Research and Treatment Lab.

Diagram of a convolutional neural network model used to understand pitch and phoneme representations in the brain.

October 2019 - The Lab presents at APAN and SfN

Research Assistant Ian Griffith and Liberty Hamilton presented work at the Advances and Perspectives in Auditory Neuroscience Meeting (APAN) as well as the Society for Neuroscience Meeting.

August 2019 - Congratulations to Mary Lowery!

The lab says farewell and good luck to Mary Lowery, who is pursuing her clinical fellowship (CF) in Speech Language Pathology!

June 25, 2019 - New paper out in Scientific Data

Our new paper on standards for sharing intracranial EEG data has moved from preprint to journal, and is now out in Scientific Data! We are happy to be involved in a large consortium of iEEG and ECoG researchers who are interested in better standards for data sharing. Thanks to Chris Holdgraf and Dora Hermes for leading this project.

June 2019 - Welcome Amanda, Nicole, Paranjaya, and Natalie!

Nicole Currens, Amanda Martinez, and Paranjaya Pokharel will join the lab, helping Garret Kurteff with a speech production and perception EEG project. Natalie Miller joins the lab to assist Maansi Desai with an EEG project on natural speech perception and influences of background noise. Welcome!

May 2019 - Congratulations to Mary, Natalie, and Cassie!

Congrats to Mary Lowery, who graduated May 2019 with her Master's degree in Speech Language Pathology, and to Natalie Clark and Cassandra Villarreal, who graduated with their bachelor's degrees in Communication Sciences and Disorders! Natalie and Cassie are going on to graduate master's programs in Speech Language Pathology. Mary will be working in the lab over the summer and then working on her clinical fellowship (CF). We wish them all the best!

March 2019 - The Lab wins awards from the Texas Speech-Language-Hearing Foundation

Congratulations to 1st year MA-PhD student Maansi Desai, who received the Elizabeth Wiig Doctoral Student Research Fund, and to Liberty Hamilton, who received the Tina E. Bangs Research Endowment Fund. The awards were presented at the TSH Foundation breakfast in Fort Worth, Texas on March 1st, 2019.

February 2019 - Welcome, Stephanie!

Stephanie Shields is a first year PhD student in the Institute for Neuroscience and has started a 10-week research rotation with our group. Welcome, Stephanie!

January 2019 - Welcome, Ian and Rachel!

The lab welcomes Ian Griffith, who joins us as a research associate, and Rachel Sorrells, who joins as an undergraduate in neuroscience. Ian brings experience in cognitive neuroscience, EEG, and computational neuroscience and will work on a project related to invariance in the auditory system. Rachel will be working on a project related to natural sound perception.

Schematic of the BIDS-iEEG data structure for sharing information about intracranial EEG data, including event times, electrode positions, and metadata.

December 2018 - Making intracranial EEG data sharing easier!

We recently contributed to an excellent preprint by Christopher Holdgraf (Berkeley Institute for Data Science) and Dora Hermes (Brain Center Rudolf Magnus, UMC Utrecht) on improving reuse and external sharing of intracranial EEG data.

September 2018 - Welcome, Maansi and Garret!

Maansi Desai and Garret Kurteff join the lab as MA-PhD students in Speech, Language, and Hearing Sciences.

July 2018 - New review on natural stimuli!

We have a new review out in Language, Cognition, and Neuroscience -- Hamilton & Huth. "The revolution will not be controlled: natural stimuli in speech neuroscience".

Welcome Cassandra, Jade, Natalie, and Noora!

The lab welcomes CSD students Cassandra Villarreal, Jade Holder, Natalie Clark, and CSD graduate Noora Raad to our group. They will be working on stimulus transcription for natural stimuli as well as some data collection!

Graphical abstract showing schematic of electrocorticography recording setup, neural responses to speech, and unsupervised clustering of these neural responses to reveal a posterior onset and an anterior sustained area of superior temporal gyrus.

May 2018 - New paper in Current Biology!

Our paper "A spatial map of onset and sustained responses to speech in the human superior temporal gyrus" is out in Current Biology! We used ECoG and computational methods to describe a new spatial map in STG.

Music segmentation preliminary results from Hamilton lab

April 2018 - UT Care Research Day

At UT Care Research Day here at UT Austin, we presented research from the lab showing onset and sustained responses to speech and how they might be used in a speech decoder. Mary Lowery's preliminary results on speech and music were also shared with the group.

Seagull at the ARO conference

February 2018 - ARO Conference

Liberty gave a talk at the Association for Research in Otolaryngology (ARO) conference in San Diego, CA, as part of the symposium on "Linking theoretical and experimental approaches to understand auditory cortical processing". (Pictured: San Diego seagull giving suggestions for future speech production tasks).

Picture of brain with electrodes localized and labeled using our software.

October 31, 2017

Our new paper with colleagues from the University of California, San Francisco on localization, labeling, and warping of electrodes in electrocorticography (ECoG) is now out! The paper includes open-source Python software for electrode localization, brain plotting, and more.

Intracranial EEG waveforms showing detected seizure activity from methods described in Baud et al. in the journal Neurosurgery

October 10, 2017

A paper with our collaborators from the University of California, San Francisco Department of Neurological Surgery and Department of Neurology is now out in the journal Neurosurgery! Read how Maxime Baud and colleagues were able to apply unsupervised machine learning techniques to detecting seizure activity in intracranial EEG.

Brain image from Auditory Cortex meeting poster showing regions sensitive to onsets and sustained portions of speech sounds

September 14, 2017

Dr. Hamilton presented her research on the functional organization of the human speech cortex using intracranial recordings at the 6th International Conference on Auditory Cortex in Banff, Canada.

Sound waveforms from English sentences used in our studies

August 24, 2017

Write-up from NPR on Tang, Hamilton, & Chang Science 2017. "Really? Really. How Our Brains Figure Out What Words Mean Based On How They're Said"

Picture of one electrode on the brain showing a response to a pitch contour in speech.

August 24, 2017

Write-up from Wired on Tang, Hamilton, & Chang Science 2017. "Scientists found the neurons that respond to uptalk"