Computational Cognitive Neuroscience of Human Audition (360G-Wellcome-219460_Z_19_Z)
Hearing is critical to human communication and intelligence, yet the cascade of neuronal processes that enable hearing remains poorly understood, particularly in computational terms. These gaps in knowledge limit our ability to design treatments for hearing impairment.

The proposed research has three goals: first, to develop new computational models that account for human perceptual abilities and neuronal responses; second, to reveal the representational transformations within auditory cortex that contribute to auditory recognition; and third, to use these models to develop auditory prostheses that augment human hearing. The overarching hypothesis is that the functional organization and tuning properties of the auditory system are constrained by ecologically important tasks (e.g., speech recognition, sound localization), such that task-optimized models may converge on the structure of the auditory system.

We will leverage deep learning to develop new neural network models of auditory computation. These models will be evaluated for their matches to behavioral and brain data using sound synthesis methods introduced by the PI. Candidate hearing aids will then be derived by backpropagating recognition errors through a model with an impaired cochlear stage, thereby optimizing a front-end audio transformation that restores the model's recognition performance. We will then test the benefits of these transformations for hearing-impaired listeners.
£2,832,201 03 Dec 2019