Degree Name

Doctor of Philosophy (PhD)


Audiology and Speech Pathology

Research Advisor

Tim Saltuklaroglu, Ph.D.


Daniela Corbetta, Ph.D.; Ashley Harkrider, Ph.D.; Deborah von Hapsburg, Ph.D.


Background: The functional significance of sensorimotor integration in acoustic speech processing remains unclear despite more than three decades of neuroimaging research. Constructivist theories have long speculated that listeners make predictions about articulatory goals that function to weight sensory analysis toward expected acoustic features (e.g., analysis-by-synthesis; internal models). Direct-realist accounts instead posit that sensorimotor integration is achieved via a direct match between incoming acoustic cues and articulatory gestures. A method capable of favoring one account over the other requires an ongoing, high-temporal-resolution measure of sensorimotor cortical activity prior to and following acoustic input. Although scalp-recorded electroencephalography (EEG) provides a measure of cortical activity on a millisecond time scale, it has low spatial resolution due to the blurring and mixing of cortical signals at the scalp surface. Recently proposed solutions to this limitation, known as blind source separation (BSS) algorithms, have made the identification of distinct cortical signals possible. The µ rhythm of the EEG is known to briefly suppress (i.e., decrease in spectral power) over the sensorimotor cortex during the performance, imagination, and observation of biological movements, suggesting that it may provide a sensitive index of sensorimotor integration during speech processing. Neuroimaging studies have traditionally investigated speech perception in two-alternative forced-choice designs in which participants discriminate between pairs of speech and nonspeech control stimuli.
As such, this classical design was employed in the current dissertation work to address the following specific aims: 1) isolate independent components with traditional EEG signatures within the dorsal sensorimotor stream network; 2) identify components with features of the sensorimotor µ rhythm; and 3) investigate changes in time-frequency activation of the µ rhythm relative to stimulus type, onset, and discriminability (i.e., perceptual performance). In light of constructivist predictions, it was hypothesized that the µ rhythm would show significant suppression for syllable stimuli prior to and following stimulus onset, with significant differences between correct discrimination trials and those discriminated at chance levels.
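The BSS step underlying aim 1 can be illustrated with a toy FastICA sketch in numpy. This is a generic illustration with hypothetical signals and parameters, not the dissertation's actual implementation (which is not specified at this level of detail):

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Symmetric FastICA with a tanh nonlinearity (numpy-only sketch).

    X: (n_channels, n_samples) mixed recordings.
    Returns estimated sources, up to permutation, sign, and scale.
    """
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the channel covariance.
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    n = Z.shape[0]
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # Fixed-point update: E[g(Wz) z^T] - E[g'(Wz)] W, row-wise.
        W = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W.
        d, E = np.linalg.eigh(W @ W.T)
        W = E @ np.diag(d ** -0.5) @ E.T @ W
    return W @ Z
```

Applied to linearly mixed independent signals (as EEG scalp channels are modeled by volume conduction), the unmixed outputs recover the original sources up to order and sign.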

Methods: The current study employed millisecond-temporal-resolution EEG to measure ongoing decreases and increases in spectral power (event-related spectral perturbations; ERSPs) prior to, during, and after the onset of acoustic speech and tone-sweep stimuli embedded in white noise. Sixteen participants were asked to passively listen to or actively identify speech and tone signals in a two-alternative forced-choice same/different discrimination task. To investigate the role of ERSPs in perceptual identification performance, a high signal-to-noise ratio (SNR) at which speech and tone identification was significantly better than chance (+4 dB) and low SNRs at which performance was below chance (-6 dB and -18 dB) were compared to a baseline of passive noise. Independent component analysis (ICA) of the EEG was used to reduce artifacts and source mixing due to volume conduction. Independent components were clustered using measure-product methods and cortical source modeling, including spectra, scalp distribution, equivalent current dipole (ECD) estimation, and standardized low-resolution brain electromagnetic tomography (sLORETA).
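The ERSP measure described above is, in essence, a trial-averaged spectrogram expressed in dB relative to a pre-stimulus baseline. A minimal numpy-only sketch, with assumed window and step parameters (the dissertation's exact ERSP settings are not given here):

```python
import numpy as np

def ersp_db(trials, fs, baseline_end, win=64, step=16):
    """Event-related spectral perturbation: trial-averaged short-time
    spectral power in dB relative to a pre-stimulus baseline.

    trials: (n_trials, n_samples); fs: sampling rate in Hz;
    baseline_end: end of the baseline period in seconds.
    """
    n_trials, n_samp = trials.shape
    starts = range(0, n_samp - win + 1, step)
    freqs = np.fft.rfftfreq(win, 1 / fs)
    taper = np.hanning(win)
    # power[trial, freq, time]: Hann-tapered FFT power per window.
    power = np.stack([
        np.abs(np.fft.rfft(trials[:, s:s + win] * taper, axis=1)) ** 2
        for s in starts
    ], axis=2)
    mean_power = power.mean(axis=0)  # average across trials
    t_centers = np.array([s + win // 2 for s in starts]) / fs
    base = mean_power[:, t_centers < baseline_end].mean(axis=1, keepdims=True)
    return freqs, t_centers, 10 * np.log10(mean_power / base)
```

Negative dB values index suppression, as reported for the µ clusters in the beta band; positive values index synchronization.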

Results: Data analysis revealed six component clusters consistent with a bilateral dorsal-stream sensorimotor network, including clusters localized to the precentral and postcentral gyri, cingulate cortex, supplementary motor area, and posterior temporal regions. Time-frequency analysis of the left- and right-lateralized µ component clusters revealed significant (pFDR < .05) suppression in the traditional beta frequency range (13-30 Hz) prior to, during, and following stimulus onset. No significant differences from baseline were found for passive listening conditions. Tone discrimination differed from passive noise only in the time period following stimulus onset, and no significant differences were found between correct and chance tone trials. For both the left- and right-lateralized clusters, early suppression (i.e., prior to stimulus onset) relative to the passive noise baseline was found for the syllable discrimination task only. Significant differences between correct trials and trials identified at chance level were found in the time period following stimulus offset for the syllable discrimination task in the left-lateralized cluster.
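The pFDR < .05 criterion implies a false-discovery-rate correction across time-frequency points. The standard procedure for this is Benjamini-Hochberg; the sketch below illustrates that procedure generically and is not necessarily the exact correction used in the dissertation:

```python
import numpy as np

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg FDR: reject the hypotheses whose sorted p-values
    fall at or below the line p_(k) <= k*q/m, up to the largest such k."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))
        reject[order[:k + 1]] = True  # reject all hypotheses up to rank k
    return reject
```

Unlike a Bonferroni correction, this controls the expected proportion of false positives among the rejected tests, which is why it is favored for dense time-frequency maps.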

Conclusions: As this is the first study to employ BSS methods to isolate components of the EEG during acoustic speech and nonspeech discrimination, the findings have important implications for the functional role of sensorimotor integration in speech processing. Consistent with expectations, the current study revealed component clusters associated with source models within the sensorimotor dorsal-stream network. Beta suppression of the µ component clusters in both the left and right hemispheres is consistent with activity in the precentral gyrus prior to and following acoustic input. As early suppression of the µ rhythm was found prior to stimulus onset in the syllable discrimination task, the present findings favor internal-model concepts of speech processing over the mechanisms proposed by direct-realists. Significant differences between correct and chance syllable discrimination trials are also consistent with internal-model concepts, suggesting that sensorimotor integration is related to perceptual performance at the point in time when initial articulatory hypotheses are compared with acoustic input. The relatively inexpensive, noninvasive EEG methodology used in this study may have translational value in the future as a brain-computer interface (BCI) approach. As deficits in sensorimotor integration are thought to underlie cognitive-communication impairments in a number of communication disorders, the development of neuromodulatory feedback approaches may provide a novel avenue for augmenting current therapeutic protocols.