
dc.contributor.advisor: Chandrasekaran, Bharath
dc.creator: Xie, Zilong
dc.date.accessioned: 2018-07-24T16:52:27Z
dc.date.available: 2018-07-24T16:52:27Z
dc.date.created: 2018-05
dc.date.issued: 2018-06-22
dc.date.submitted: May 2018
dc.identifier: doi:10.15781/T20000H8N
dc.identifier.uri: http://hdl.handle.net/2152/65752
dc.description.abstract: Real-world speech processing often takes place in complex multisensory environments. Listeners may need to prioritize sensory inputs from modalities other than audition. Selective attention is thought to be critical in selecting the sensory modality most relevant to the task at hand. Two critical research questions have driven crossmodal attention research thus far: First, how early does crossmodal attention influence processing in the unattended modality? Second, are attentional resources limited across sensory modalities? Set within the context of this prior work, this dissertation examines the effects of crossmodal attention on speech processing when sensory inputs from vision are prioritized. In Study 1, we demonstrate that modulating visual perceptual load can impact the early sensory representation of linguistically relevant pitch contours (Mandarin tones), a suprasegmental feature critical to the perception of lexical tone. Further, we provide novel evidence that the impact of visual load is highly dependent on the predictability of the incoming speech stream. In Study 2, we used ecologically valid, continuous speech and tested the extent to which diverting attention to a visual task affects neural processing of the speech signal. We show that dividing attention between auditory and visual tasks incurs both behavioral and electrophysiological costs in the processing of continuous speech. The results also demonstrate that the neural encoding of suprasegmental features (e.g., envelope and fundamental frequency) in continuous speech is modulated by diverting attention away from the auditory modality. In contrast, the neural encoding of segmental features (e.g., phonetic features) may be unaffected by taking attention away from the auditory stream. The theoretical and practical implications of the two studies are discussed.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Crossmodal attention
dc.subject: Speech processing
dc.subject: Suprasegmental features
dc.subject: Segmental features
dc.subject: Machine learning
dc.title: Taking attention away from the auditory modality : investigations of the effect on speech processing using machine learning
dc.type: Thesis
dc.date.updated: 2018-07-24T16:52:28Z
dc.contributor.committeeMember: Beevers, Christopher G
dc.contributor.committeeMember: Champlin, Craig A
dc.contributor.committeeMember: Liu, Chang
dc.description.department: Communication Sciences and Disorders
thesis.degree.department: Communication Sciences and Disorders
thesis.degree.discipline: Communication Sciences & Disorders
thesis.degree.grantor: The University of Texas at Austin
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
dc.type.material: text

