I am interested in understanding how humans hear in three dimensions, a field known as spatial hearing. The overarching goal of my research is to understand which cues the brain extracts from an acoustic signal to make decisions in spatial hearing, and how novel signal processing methods can present these cues to the brain when parts of the natural hearing system have been bypassed or are not functioning normally. This work has scientific importance: it helps us understand how the auditory system functions. It has clinical translation: my work is directly applicable to hearing-impaired populations, especially people with bilateral cochlear implants. It also has engineering applications: understanding the cues the auditory system uses for spatial hearing can inform biologically inspired front-end processing for speech recognition systems, autonomous search-and-rescue robots, virtual and augmented reality, and games and entertainment. My research is currently supported by a grant from the National Institute on Deafness and Other Communication Disorders (NIH-NIDCD) and was previously supported by grants from the Hearing Health Foundation.