I am an Assistant Professor in the Neuroscience Institute and Psychology Department at Carnegie Mellon University, where I run the Laboratory for Computational Perception. My group investigates the complex patterns of neural activity underlying perception and cognition. Our research lies at the intersection of neuroscience, cognitive science, and artificial intelligence: we combine computational modeling with behavioral observations and brain measurements to compare model representations with those of biological systems. I work on both auditory and visual perception and am inspired by the similarities and differences between the two domains.
Previously, I was a Research Fellow at the Flatiron Institute's Center for Computational Neuroscience, working with SueYeon Chung and Eero Simoncelli. I received my PhD in 2022 from the MIT Department of Brain and Cognitive Sciences, where I worked in the Laboratory for Computational Audition with Josh McDermott. During that time I was a Friends of the McGovern Institute Graduate Fellow, a DOE Computational Science Graduate Fellow, and an affiliate of the Center for Brains, Minds and Machines.
I will be recruiting students for Fall 2026. I can advise students through the Neuroscience Institute or the Psychology Department, and I will also be able to advise students in the Machine Learning Department. If you email about working in my group, please include the phrase “Student Inquiry” along with the title of the program you plan to apply to!
Current Research
Model Metamers
Metamers are stimuli that are physically different in the world but that a system perceives as the same. We investigated whether sounds and images that are metamers for a given deep neural network are also metameric for humans and for other models. Our 2023 Nature Neuroscience paper explores model metamers from many different model types, such as self-supervised and adversarially robust models. We presented an early version of this work at NeurIPS 2019, and you can find a recorded video abstract here.
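To make the synthesis procedure concrete, here is a minimal sketch of how a model metamer can be generated by gradient descent, assuming a PyTorch model; the function name, hook-based activation capture, and optimization settings are illustrative choices, not the exact recipe from the paper.

```python
import torch

def synthesize_metamer(model, layer, reference, n_steps=2000, lr=0.01):
    """Optimize a noise input so its activations at `layer` match those
    of a natural `reference` stimulus, yielding a model metamer."""
    model.requires_grad_(False)  # only the input is optimized

    # Capture the activations of the chosen layer with a forward hook.
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    with torch.no_grad():
        model(reference)
        target = acts["out"].detach()

    # Start from white noise and descend on the activation mismatch.
    x = torch.randn_like(reference, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        model(x)
        loss = torch.nn.functional.mse_loss(acts["out"], target)
        loss.backward()
        opt.step()

    handle.remove()
    return x.detach()  # physically different input, matched activations
```

The behavioral question is then whether humans (and other models) also perceive the synthesized input as equivalent to the reference.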
Brain-Model Comparisons
The hierarchical nature of biological neural systems has motivated the use of artificial neural network models that transform sensory inputs into task-relevant representations, and recent work has proposed measures for comparing the responses of these models to brain responses. In a recent study we analyzed how well a large set of audio models could predict neural responses via regression and representational similarity analysis, finding that a model's training data and task modulate the fidelity of its neural predictions. In other work, which will soon be presented as a spotlight at NeurIPS 2023, we examined how the geometric properties of model representations can lead to better or worse neural predictions.
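As a rough illustration of the two comparison measures named above, the sketch below implements a regression-based prediction score and an RSA score; the data shapes, ridge penalty grid, and summary statistics (median site correlation, correlation-distance dissimilarity matrices) are our assumptions rather than the study's exact pipeline.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

def regression_score(model_feats, brain_resps):
    """Ridge-regress brain responses onto model features; report the
    median held-out correlation across recording sites.
    model_feats: (n_stimuli, n_features); brain_resps: (n_stimuli, n_sites).
    """
    Xtr, Xte, Ytr, Yte = train_test_split(
        model_feats, brain_resps, test_size=0.2, random_state=0)
    reg = RidgeCV(alphas=np.logspace(-2, 5, 8)).fit(Xtr, Ytr)
    pred = reg.predict(Xte)
    rs = [pearsonr(pred[:, i], Yte[:, i])[0] for i in range(Yte.shape[1])]
    return np.median(rs)

def rsa_score(model_feats, brain_resps):
    """Spearman correlation between the model's and the brain's
    stimulus-by-stimulus representational dissimilarity matrices."""
    def rdm(X):
        d = 1.0 - np.corrcoef(X)                 # correlation distance
        return d[np.triu_indices_from(d, k=1)]   # upper triangle only
    return spearmanr(rdm(model_feats), rdm(brain_resps))[0]
```

Regression asks how much brain variance a linear readout of the model can explain, while RSA asks whether the two systems agree on which stimuli are similar to one another, so the two measures can disagree for the same model.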
Neural Population Geometry
Recent work has proposed manifold analysis techniques for investigating the geometric properties of network representations. At NeurIPS 2021 we investigated how the representational geometry of neural networks changes when biologically inspired stochastic responses are included, resulting in representations that are more robust to adversarial attacks. At NeurIPS 2019 we investigated the neural population geometry of auditory networks trained on phonemes, words, and speakers, showing how these different concepts emerge through the layers of the network.
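Here is a minimal sketch of the kind of biologically inspired stochastic response layer described above, in PyTorch; approximating Poisson spiking noise with a Gaussian whose variance equals the mean is a common differentiable choice, but it is an assumption here rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class StochasticActivation(nn.Module):
    """Rate-coded stochastic unit: treats a non-negative activation as a
    firing rate and adds Poisson-like noise (Gaussian with variance equal
    to the mean, which keeps the layer differentiable)."""

    def forward(self, x):
        rate = torch.relu(x)
        # Noise is applied at train and test time, so downstream layers
        # must learn representations that tolerate the variability.
        noise = torch.randn_like(rate) * torch.sqrt(rate + 1e-8)
        return torch.relu(rate + noise)  # rates stay non-negative
```

Because the noise persists at test time, small adversarial perturbations to the input must survive the stochasticity to change the network's decision, which is one intuition for the improved robustness.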
Sound Textures
Auditory textures are sounds composed of a superposition of many similar elements yet perceived as a single sound (such as rain, wind, or fire). Textures are believed to be represented in the brain by a set of time-averaged statistics. We developed a new auditory texture model based on the representations learned by a task-optimized convolutional neural network and demonstrated that it captures many aspects of human texture perception, which you can read about in our 2018 CCN paper.
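As an illustration of the general approach, the sketch below computes time-averaged statistics from the feature maps of a convolutional network in PyTorch; the input shape, the use of an `nn.Sequential` network, and the choice of statistics (channel means and covariances) are illustrative assumptions, not the published model's parameterization.

```python
import torch

def texture_statistics(net, sound):
    """Summarize a texture by time-averaged statistics of the activations
    of a (task-optimized) convolutional network.
    `sound` is assumed shaped (batch, channels, frequency, time)."""
    stats = []
    x = sound
    for layer in net:                      # assumes an nn.Sequential
        x = layer(x)
        if x.dim() == 4:                   # (B, C, F, T) feature maps
            flat = x.flatten(2)            # pool over frequency x time
            mu = flat.mean(dim=2)          # per-channel mean activation
            centered = flat - mu.unsqueeze(2)
            cov = centered @ centered.transpose(1, 2) / flat.shape[2]
            stats.append(torch.cat([mu, cov.flatten(1)], dim=1))
    return torch.cat(stats, dim=1)         # one long statistic vector
```

Because the statistics are averaged over time, any two sounds that share them are equivalent under the model, which is exactly the property that makes such summaries a natural account of texture perception.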