I am a PhD candidate in the Laboratory for Computational Audition at MIT, working with Josh McDermott. I am currently a Friends of the McGovern Institute Graduate Fellow at the McGovern Institute, and I was previously a DOE Computational Science Graduate Fellow. I am also affiliated with the Center for Brains, Minds and Machines. Before graduate school, I was an intern at Google and a research assistant with Nancy Kanwisher, and I received undergraduate degrees in Physics and in Brain and Cognitive Sciences from MIT.
I’m broadly interested in how the human brain transforms representations of sounds. Currently, the best models we have of human auditory (and visual) processing are deep neural networks. Much of my recent work focuses on comparing the representations learned by these models to those used by humans.
Metamers are stimuli that are physically different in the world but that a system perceives as identical. We investigated whether sounds and images that are metamers for a given deep neural network are also metameric for humans and for other models. This work was presented at NeurIPS 2019, and you can find a recorded video abstract here.
Auditory textures are sounds composed of a superposition of many similar elements but perceived as a single sound, such as rain, wind, or fire. Textures are believed to be represented in the brain with a set of time-averaged statistics. We developed a new auditory texture model based on the representations learned by a task-optimized convolutional neural network, and demonstrated that it captures many aspects of human texture perception, which you can read about in our 2018 CCN paper.
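The idea of a time-averaged statistical representation can be illustrated with a toy numpy sketch (not our actual model): pass a signal through a small filter bank, rectify the responses, and summarize each channel by its mean and standard deviation over time. The random FIR filters below are hypothetical stand-ins for the learned features of a task-optimized network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a texture recording (e.g. rain): long Gaussian noise.
signal = rng.standard_normal(200_000)

# Hypothetical filter bank of random FIR filters, standing in for the
# convolutional features a task-optimized network would learn.
filters = rng.standard_normal((4, 32))

def texture_statistics(x, filters):
    """Summarize a sound by time-averaged statistics of rectified filter responses."""
    stats = []
    for f in filters:
        response = np.maximum(0.0, np.convolve(x, f, mode="valid"))  # rectify
        stats.extend([response.mean(), response.std()])              # average over time
    return np.array(stats)

# Two different excerpts of the same texture have very different waveforms,
# yet their time-averaged statistics nearly coincide.
s1 = texture_statistics(signal[:100_000], filters)
s2 = texture_statistics(signal[100_000:], filters)
```

This captures why time-averaged statistics suit textures: distinct excerpts of rain sound "the same" because their summary statistics match, even though their waveforms do not.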