The goal of our research is to improve socioeconomic outcomes for people with hearing loss by improving speech perception and cognition under auditory deprivation.
The lab is composed of myself and medical students. The Department of Otolaryngology provides administrative/IRB support, and the IU Research Database Complex provides the computing environment that our lab software uses to set up, run, and store subject protocols.
My goal is to design experiments, obtain funding, and provide the students with the resources to run experiments and collect data. Generally, I will analyze the data and write the paper. Experiments are often multi-layered, offering students a chance to analyze data; if you wish to do so, please let me know. As an early-career scientist, I will be first author on papers resulting from the projects below.
The student personnel should function as a team managing the recruitment, scheduling, and testing of research subjects. Unlike the medical world, in which you are expected to carry out tasks in a relatively rote fashion, the success of the lab depends on communication and preparation. If you do not understand a process, or think it could be improved, please let me know.
Steps you will need to complete to become a lab member:
The Combined Indexical and Linguistic Speech Perception Assessment (CLISPA) is designed to assess the perception of the words in speech as well as the gender, speaker identity, and emotion with which they are delivered.
Review the protocol for recruitment and testing.
Cochlear implants (CIs) take advantage of the tonotopic arrangement of the cochlea by replacing damaged hair cells with an electrode array. Early CIs consisted of a single electrode driven directly by essentially the whole sound signal. To take advantage of the place code, sound was then filtered into 4 to 8 bands, and the output of each filter modulated the current on an electrode (below figure, Panel B). However, discharging electrical current simultaneously across all electrodes resulted in distortion, as spreading current overlapped with adjacent electrical fields and neural populations. To mitigate this, a series of interleaved pulses is used instead of analog waves. In this design, only one channel is stimulated at any given time. Current CIs contain 12 to 24 frequency bands. Loudness is encoded by the amplitude of the pulse train. Rhythm is encoded by the amplitude modulation of the pulse train. Pitch is encoded by place (i.e., which electrode is fired), by amplitude modulation on the electrode, and by the listener's ability to compare timing changes across multiple electrodes.
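The interleaving described above can be sketched in a few lines. This is a hypothetical illustration, not lab or clinical code: it assumes band-pass filtering and envelope extraction have already produced one envelope sample per channel per stimulation cycle, and it simply schedules the pulses so that no two channels ever fire at the same instant.

```python
def interleave_pulses(envelopes, cycle_rate):
    """Sketch of interleaved pulsatile stimulation.

    envelopes: list of per-channel envelope sequences (equal lengths),
               one amplitude per channel per stimulation cycle.
    cycle_rate: stimulation cycles per second.
    Returns (time, channel, amplitude) triples; within each cycle the
    channels fire one after another, never simultaneously.
    """
    n_ch = len(envelopes)
    n_cycles = len(envelopes[0])
    cycle_dur = 1.0 / cycle_rate
    slot = cycle_dur / n_ch  # each channel gets its own time slot
    pulses = []
    for c in range(n_cycles):
        for ch in range(n_ch):
            t = c * cycle_dur + ch * slot
            pulses.append((t, ch, envelopes[ch][c]))
    return pulses

# Two channels, two cycles: at any instant only one electrode is active,
# and loudness rides on the pulse amplitude.
pulses = interleave_pulses([[0.2, 0.5], [0.9, 0.1]], cycle_rate=1000)
```

Because each channel is confined to its own time slot, simultaneous electrical-field overlap between electrodes is avoided by construction.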
Comparing the analog and pulsatile strategies in the figure below demonstrates the greater information transmitted on one channel by the analog processing strategy. This fine timing information appears as the variable frequency of zero crossings in the analog current, versus the fixed-width pulses and constant frequency of zero crossings in the pulsatile strategy. The constant pulse frequency discards this fine timing information, which is a critical loss because normal hearing ears use both tonotopic place (i.e., electrode) and fine timing cues to encode pitch.
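A zero-crossing count makes this concrete. The toy example below (an illustration, not an analysis of real CI output) counts sign changes in two pure tones: an analog channel preserves the difference in crossing rate between them, which is a usable pitch cue, whereas a fixed-rate pulse train would produce the same crossing rate for both inputs.

```python
import math

def zero_crossings(samples):
    """Count sign changes, a crude proxy for temporal fine structure."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)

fs = 8000                                   # samples per second
t = [i / fs for i in range(fs)]             # one second of time points
low = [math.sin(2 * math.pi * 100 * x) for x in t]    # 100 Hz tone
high = [math.sin(2 * math.pi * 200 * x) for x in t]   # 200 Hz tone

# The 200 Hz tone crosses zero about twice as often as the 100 Hz tone
# (roughly 400 vs 200 crossings per second), so the crossing rate
# carries the pitch difference; a constant-rate pulse train would not.
print(zero_crossings(low), zero_crossings(high))
```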
Above figure from Wilson et al., Nature, 1991.
For bilateral CI users, we've developed DPASS (dichotic pulsatile and analog stimulation strategy). Instead of placing essentially identical pulsatile speech-processing strategies on each ear, we increase the available temporal information by using a single channel of analog stimulation on one ear. This channel corresponds to the lowest-frequency electrode in the cochlea, which is most important for pitch perception. Using a single channel, rather than the whole array, of analog current cuts down the electrical-field overlap of simultaneously discharging electrodes. On its own, a single analog channel provides very poor speech perception: it conveys vowel information but lacks the place pitch needed to encode consonants. DPASS therefore requires central integration of two signals, the same integration that occurs with bimodal and hybrid cochlear implants.
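The asymmetric fitting described above can be summarized as a configuration sketch. Everything here is hypothetical for illustration (the channel count and field names are invented, not DPASS parameters): one ear keeps the standard interleaved pulsatile channels, while the other carries only a single analog channel on the most apical (lowest-frequency) electrode.

```python
N_CHANNELS = 16  # assumed electrode count, for illustration only

# One ear: a full array of interleaved pulsatile channels.
pulsatile_ear = [{"electrode": e, "mode": "pulsatile"}
                 for e in range(N_CHANNELS)]

# Other ear: a single analog channel on electrode 0 (most apical,
# lowest frequency). With only one channel, there are no simultaneously
# discharging electrodes, so no overlapping electrical fields.
analog_ear = [{"electrode": 0, "mode": "analog"}]
```

The two streams are then integrated centrally by the listener, as with bimodal and hybrid cochlear implants.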