Cognitive and linguistic factors that contribute to stuttering
The goal of this study is to investigate cognitive (e.g., working memory, attention, inhibition) and linguistic (e.g., phonology, semantics) parameters that affect speech planning and execution and their link to stuttered speech. We are currently recruiting monolingual English-speaking adults (18+ y.o.) who do not stutter and have no history of any speech/language/reading disorders or any cognitive, neurological, psychological diagnoses. Please fill out the form "Adults who want to participate in research" on the right of the screen.
Organization of the mental lexicon in children who stutter
This study aims to examine the role of sound structures and word meanings in the developing lexicon of children. We are examining how words are activated, stored, processed, and retrieved by children who do and do not stutter. We are currently recruiting children who do and do not stutter ages 5-12. Please fill out the form "Children who want to participate in research" on the right of the screen.
Disfluencies in bilingual and multilingual speakers
The goal of this study is to understand speech planning and production in bilingual speakers, detect similarities and differences in fluent and disfluent speech between monolingual and bilingual speakers, and establish stuttering classification and treatment protocols for bilingual speakers. We are currently recruiting Greek-English bilingual speakers who stutter. Please fill out the form "Adults who want to participate in research" on the right of the screen.
Improving evidence-based practices for stuttering
The purpose of this project is to investigate factors that promote long-term success of treatment outcomes in individuals who stutter, such as the use of self-disclosure and voluntary stuttering, and to determine key elements that will lead to the improvement of assessment procedures for stuttering diagnosis.
Generative ML and Cognitive Reserve in Bilingual Artificial Networks
The purpose of this project is to study whether artificial neural networks are more robust when trained on multiple languages or multiple tasks. We trained monolingual and bilingual GPT-2 models with matched architectures and dataset sizes, then introduced structural noise by randomly deleting neurons or perturbing the weights. Bilingual models degraded more gracefully and eventually outperformed the monolingual models in the high-noise regime. We observed this effect across numerous models and three types of corruption: additive Gaussian noise on the weights, random weight pruning, and magnitude-based weight pruning.
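The three corruption types can be sketched on a single weight matrix. This is a minimal illustration with NumPy, not the project's actual code; the function names and the toy tensor are ours:

```python
import numpy as np

def add_gaussian_noise(w, sigma, rng):
    # Additive Gaussian noise: perturb every weight by N(0, sigma^2).
    return w + rng.normal(0.0, sigma, size=w.shape)

def random_prune(w, frac, rng):
    # Random pruning: zero out a random fraction of weights,
    # regardless of their values.
    mask = rng.random(w.shape) >= frac
    return w * mask

def magnitude_prune(w, frac):
    # Magnitude-based pruning: zero out the fraction of weights
    # with the smallest absolute values.
    k = int(frac * w.size)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))       # stand-in for one layer's weights
noisy = add_gaussian_noise(w, sigma=0.1, rng=rng)
sparse_rand = random_prune(w, frac=0.5, rng=rng)
sparse_mag = magnitude_prune(w, frac=0.5)
```

In the actual experiments the same corruption is applied to every weight matrix of the trained model, and task performance is measured as the noise level (`sigma` or the pruned fraction) increases.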
Our research is supported by the American Speech-Language-Hearing Foundation, the Texas Speech-Language-Hearing Foundation, the Machine Learning Laboratory at UT, and Cisco.