In the natural world, brains are faced with a mixture of sensory cues from multiple modalities. For example, when listening to someone speak, we combine the sounds they produce with their lip movements, which is one reason that masks make conversations more difficult! How and where are these auditory and visual streams of information combined in the brain? Our lab uses the mouse model system to answer this question.
We train mice to perform complex behaviours in custom-designed audiovisual chambers. We combine these behaviours with the latest electrophysiology tools and optogenetic manipulations to dissect the neural circuits that underlie audiovisual integration. We aim to determine how and where these sensory modalities are combined in the brain, both to localize external objects in space and to localize oneself when navigating a multisensory environment.