I am a Research Fellow at the Flatiron Institute's Center for Computational Neuroscience, working with SueYeon Chung and Eero Simoncelli. I received my PhD in 2022 from the MIT Department of Brain and Cognitive Sciences, working in the Laboratory for Computational Audition with Josh McDermott. During that time I was a Friends of the McGovern Institute Graduate Fellow, a DOE Computational Science Graduate Fellow, and an affiliate of the Center for Brains, Minds and Machines. Previously, I interned at Google, worked as a research assistant with Nancy Kanwisher, and received undergraduate degrees in Physics and Brain and Cognitive Sciences from MIT.

I will be starting in Fall 2025 as an Assistant Professor in the Neuroscience Institute and Psychology Department at Carnegie Mellon University. I will also be able to advise students in the Machine Learning Department. Feel free to reach out if you are interested in working in my group!

Current Research

I’m broadly interested in how the human brain transforms representations of sounds. Currently, the best models we have of human auditory (and visual) processing are deep neural networks. Much of my recent work focuses on comparing the representations learned by these models to those used by humans.

Model Metamers

Metamers are stimuli that are physically different but that a system perceives as identical. We investigated whether sounds and images that are metamers for a given deep neural network are also metameric for humans and for other models. Our 2023 Nature Neuroscience paper explores model metamers generated from many model types, including self-supervised and adversarially robust models. We presented an early version of this work at NeurIPS 2019, and you can find a recorded video abstract here.
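
To make the idea concrete, here is a minimal sketch of metamer generation in PyTorch. The random linear "stage" is a hypothetical stand-in for an intermediate layer of a trained network: starting from noise, the input is optimized until the stage's activations match those of a reference stimulus.

```python
import torch

# Toy fixed "model stage": a random linear layer followed by a ReLU.
# In the actual work this would be an intermediate layer of a trained
# deep network; everything here is a hypothetical stand-in.
torch.manual_seed(0)
stage = torch.nn.Sequential(torch.nn.Linear(256, 128), torch.nn.ReLU())
for p in stage.parameters():
    p.requires_grad_(False)

reference = torch.randn(256)            # the "natural" stimulus
target = stage(reference)               # activations we want to match

# Start from noise and optimize the *input* so the stage's response
# matches the reference's response; the result is a model metamer.
metamer = torch.randn(256, requires_grad=True)
opt = torch.optim.Adam([metamer], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(stage(metamer), target)
    loss.backward()
    opt.step()

# `metamer` now evokes (nearly) the same activations as `reference`
# at this stage, while remaining a physically different signal.
print(float(torch.nn.functional.mse_loss(stage(metamer), target)))
```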

Brain-Model Comparisons

The hierarchical nature of biological neural systems has motivated artificial neural network models that transform sensory inputs into task-relevant representations, and recent work has proposed measures for comparing the responses of these models to brain responses. In a recent study we analyzed how well a large set of audio models could predict neural responses via regression and representational similarity analysis, finding that the training data and task modulate the fidelity of neural predictions. In other work, which will be presented as a spotlight at NeurIPS 2023, we examined how the geometric properties of model representations can lead to better or worse neural predictions.
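
As a rough illustration of the two comparison measures, here is a numpy sketch on synthetic data (all arrays are hypothetical stand-ins for real model activations and neural recordings); a real analysis would fit and evaluate the regression on held-out stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_units, n_voxels = 100, 512, 50

# Hypothetical data: model activations and neural responses to the
# same stimuli (stand-ins for a real network and real recordings).
model_acts = rng.standard_normal((n_stimuli, n_units))
neural = model_acts @ rng.standard_normal((n_units, n_voxels)) * 0.1
neural += rng.standard_normal((n_stimuli, n_voxels))

# 1) Regression: ridge-predict each neural response from model features.
lam = 10.0
X, Y = model_acts, neural
W = np.linalg.solve(X.T @ X + lam * np.eye(n_units), X.T @ Y)
pred = X @ W
r = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(n_voxels)]
print("median regression r:", np.median(r))

# 2) RSA: correlate the two representational dissimilarity matrices.
def rdm(resp):
    return 1.0 - np.corrcoef(resp)           # stimulus-by-stimulus RDM

iu = np.triu_indices(n_stimuli, k=1)          # upper triangle only
rsa_r = np.corrcoef(rdm(model_acts)[iu], rdm(neural)[iu])[0, 1]
print("RSA correlation:", rsa_r)
```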

Neural Population Geometry

Recent work has proposed manifold analysis techniques for investigating the geometrical properties of network representations. At NeurIPS 2021 we investigated how the representational geometry of neural networks changes when biologically inspired stochastic responses are included, finding that the resulting representations are more robust to adversarial attacks. At NeurIPS 2019 we investigated the neural population geometry of auditory networks trained on phonemes, words, and speakers, showing how these different concepts emerge through the layers of the network.
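
The sketch below illustrates one ingredient of this kind of analysis, under simplifying assumptions: Poisson sampling as a stand-in for biologically inspired stochastic responses, and the mean distance to the class centroid as a crude proxy for a manifold's radius (the full capacity analysis is considerably richer).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deterministic "layer" responses for two stimulus classes.
n_per_class, n_units = 200, 64
class_a = rng.standard_normal((n_per_class, n_units)) + 1.0
class_b = rng.standard_normal((n_per_class, n_units)) - 1.0

def poisson_responses(rates):
    # Biologically inspired stochasticity: treat (rectified) activations
    # as firing rates and sample Poisson spike counts from them.
    return rng.poisson(np.clip(rates, 0.0, None)).astype(float)

def manifold_spread(responses):
    # A crude geometric summary: mean distance of points from their
    # class centroid, one ingredient of manifold capacity measures.
    return np.linalg.norm(responses - responses.mean(0), axis=1).mean()

for name, f in [("deterministic", lambda x: x),
                ("stochastic", poisson_responses)]:
    spread = (manifold_spread(f(class_a)) + manifold_spread(f(class_b))) / 2
    print(name, "mean manifold radius:", round(spread, 2))
```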

Sound Textures

Auditory textures are sounds composed of a superposition of many similar elements but perceived as a single sound, such as rain, wind, or fire. Textures are believed to be represented in the brain by a set of time-averaged statistics. We developed a new auditory texture model based on the representations learned by a task-optimized convolutional neural network and demonstrated that it captures many aspects of human texture perception, which you can read about in our 2018 CCN paper.
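
Here is a minimal sketch of the statistical view of texture, with random filters standing in for the learned CNN features of the actual model: two excerpts of the same texture differ sample by sample but yield similar time-averaged statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a texture waveform and a bank of 1-D
# filters (in the actual model these come from a task-optimized CNN).
signal = rng.standard_normal(16000)           # ~1 s at 16 kHz
filters = rng.standard_normal((32, 128))      # 32 filters, 128 taps

def texture_statistics(x, bank):
    """Time-averaged statistics of rectified filter responses: the
    core idea of statistical texture representations."""
    stats = []
    for h in bank:
        resp = np.maximum(np.convolve(x, h, mode="valid"), 0.0)
        stats.extend([resp.mean(), resp.std(),                  # moments
                      np.corrcoef(resp[:-1], resp[1:])[0, 1]])  # lag-1 corr
    return np.array(stats)

# Two excerpts of the same texture should have similar statistics even
# though their waveforms differ sample by sample.
s1 = texture_statistics(signal[:8000], filters)
s2 = texture_statistics(signal[8000:], filters)
print("statistic correlation:", np.corrcoef(s1, s2)[0, 1])
```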