Browsing by Subject "Psycholinguistics"
Now showing 1 - 6 of 6
Item
Applying mixed-effects receiver operating characteristic (ROC) curve analysis to diagnostic evaluations of human learning (2001-08)
Stacy, Catherine Ann; Blozis, Shelley Anne
This dissertation introduces a novel application of mixed-effects receiver operating characteristic (ROC) curve analysis in the study of individual differences. By applying a mixed-effects ordinal regression model to the rating outcomes of a spelling discrimination study, the underlying sensitivities of good and poor undergraduate spellers to a set of correctly spelled and misspelled words were estimated to test two competing hypotheses about individual differences in spelling. Variation in random slopes was modeled as a function of person-level covariates to identify possible predictors of spelling ability. Results provide strong support for the role of strategy use in spelling discrimination. After partialling out strategy use, between-group differences for good and poor spellers were greatest on irregular words, lending support to the hypothesis that good spellers have more well-defined mental lexicons than poor spellers. A significant proportion of unexplained variation in the random slopes remained after modeling the ratings assigned to irregular words, suggesting that spelling discrimination may involve additional strategies not identified in this study. This study showed that discrimination accuracy tasks can be sensitive, valid, and reliable measures of underlying ability, attesting to their use in the study of individual differences.

Item
Examining the role of phonological awareness, speech-based phonological recoding, and orthographic processing on reading development in deaf bilinguals of ASL and English (2021-12-03)
Cooley, Frances Grosvenor; Quinto-Pozos, David; Meier, Richard P; Schotter, Elizabeth; Llanos, Fernando; Singleton, Jenny
This dissertation targets the role of speech-based phonology in reading development in deaf and hard of hearing (DHH) children.
Researchers have long debated the role of spoken language phonics knowledge and phonological awareness in reading development in DHH children without access to speech sounds (Allen et al., 2009; Wang et al., 2008). Phonological awareness, which is the metalinguistic awareness of the basic units of speech and the ability to consciously manipulate the linguistic units within words and sentences (Castles & Coltheart, 2004; Liberman & Shankweiler, 1985; Wagner & Torgesen, 1987), relates to reading skill in typically developing hearing children (Goswami & Bryant, 1990). Hearing readers of orthographic scripts begin reading by sounding out words, a process that depends on the association between graphemes and speech sounds. However, our understanding of the processes by which DHH children read is vague at best: some investigations have shown a positive association between reading and spoken language phonological awareness in DHH children (Campbell & Wright, 1988; Dyer et al., 2003), while others have failed to find such a correlation (Izzo, 2002; Kyle & Harris, 2006; Leybaert & Alegria, 1993; Miller, 1997). I test the degree to which speech-based codes are active in adolescent DHH readers who grew up with robust exposure to a signed language throughout childhood and school. Chapter 1 provides an overview of the relevant literature pertaining to the two reported studies. Chapter 2 discusses phonological awareness of speech and sign, as well as a variety of approaches to testing phonological awareness. Within this chapter I introduce the methodology and results from the first half of the first study. Chapter 3 then introduces eye-tracking and reading, as well as the eye-tracking results from the first study. Chapter 4 describes the last study of the dissertation, which tests the impact of spelling knowledge and speech-based homophony on reading and lexical decision tasks in DHH students.
Finally, Chapter 5 provides a discussion of the three content chapters together.

Item
How auditory discontinuities and linguistic experience affect the perception of speech and non-speech in English- and Spanish-speaking listeners (2005)
Hay, Jessica Sari Fleming; Diehl, Randy L.
Speech perception results from a complex interplay between the operating characteristics of the auditory system (i.e., auditory discontinuities) and linguistic experience. Research in human infants and animals, and research using tone-onset-time (TOT) stimuli, a type of non-speech analogue of voice-onset-time (VOT) stimuli, has suggested that there is an underlying auditory basis for the perception of stop consonants, based on a threshold for detecting temporal onset asynchronies in the vicinity of +20 ms. Languages, however, differ in their reliance on temporal onset asynchrony-based auditory discontinuities in their [voice] categories. This dissertation sought to examine whether long-term linguistic experience with different [voice] categories (i.e., English or Spanish) affects the perception of non-speech stimuli that are analogous in their acoustic timing characteristics. This research was also designed to investigate the joint effects of linguistic experience and auditory mechanisms on phoneme structure and category learning. Three cross-linguistic studies were designed to look at (1) the production and perception of VOT and the perception of TOT, (2) the effects of stimulus range on the perception of VOT, and (3) the effects of auditory discontinuities on non-speech category learnability. Results indicate that linguistic experience does affect the perception of non-speech stimuli, at least in certain circumstances. Thus, there is some commonality in the processes used to discriminate between non-speech sounds and those used to discriminate between speech sounds. Additionally, auditory discontinuities were found to influence both phoneme structure and category learning.
It is suggested that English- and Spanish-speaking listeners use different cues to discriminate their [voice] categories. Results also suggest that there are perceptual asymmetries between the positive and the negative onset asynchrony-based auditory discontinuities. The relationships between auditory discontinuities, linguistic experience, discriminability, phoneme category structure, and learnability are discussed.

Item
Language contexts in speech categorization: testing the double phonetic standard in bilinguals (2007)
Garcia-Sierra, Adrián, 1973-; Champlin, Craig A.
Speech sounds are typically perceived categorically. The acoustic information in speech sounds is perceptually grouped into phonetic categories. It is widely known that language influences the way speech sounds are categorized. That is, one's native language influences where category boundaries are placed. However, it is less well understood how bilingual listeners categorize speech sounds. There is evidence showing that bilinguals have category boundaries different from those of monolinguals, but there is also evidence suggesting that bilinguals have different category boundaries depending on the language they are using at the moment. This phenomenon has been referred to as the double phonemic boundary. The goal of this investigation was to verify the existence of the double phonemic boundary in bilingual listeners. As has been done in other studies, bilingual speakers of Spanish and English were asked to identify the speech sound /ta/ from a 10-token speech continuum ranging in VOT from /da/ to /ta/ in two language contexts. In this study, however, two additional procedures were carried out. First, English monolinguals were asked to identify the continuum in two language contexts. It was expected that bilinguals, but not monolinguals, would show a double phonemic boundary. Second, while participants' behavioral measures were assessed, electrophysiological measures (event-related potentials, ERPs) were also recorded.
This was done in order to observe how speech sounds are represented in the brain. It was expected that bilinguals, but not monolinguals, would show different ERP amplitudes across language contexts. The behavioral results showed that phonemic boundaries did not differ across language contexts for either bilinguals or monolinguals. Further analyses showed that bilinguals, but not monolinguals, perceived specific speech sounds--in the "ambiguous zone"--differently across language contexts. The electrophysiological results showed that the ERPs of bilinguals, but not monolinguals, differed across language contexts. Interestingly, behavioral measures correlated significantly with electrophysiological measures only in bilinguals. This result showed that the ERP amplitude was in accordance with the number of sounds perceived as 'ta' across language contexts. The challenges of testing the double phonemic boundary are discussed, along with the limitations of the methodology used in this study.

Item
The "resolution" of verb meaning in context (2013-05)
Gaylord, Nicholas L.; Erk, Katrin; Bannard, Colin
It is well known that the meaning of a word often changes depending on the context in which the word is used. Determining the appropriate interpretation for a word occurrence requires knowledge of the range of possible meanings for that word, and consideration of those possibilities given the available contextual evidence. However, there is still much to be learned about the nature of our lexical knowledge, as well as how we make use of that knowledge in the course of language comprehension. I report on a series of three experiments that explore these issues. I begin with the question of how precise our perceptions of word meaning in context really are. In Experiment 1, I present a Magnitude Estimation study in which I obtain judgments of meaning-in-context similarity over pairs of intransitive verb occurrences, such as The kid runs / The cat runs, or The cat runs / The lane runs.
I find that participants supply a large range of very specific similarity judgments, that judgments are quite consistent across participants, and that these judgments can be at least partially predicted even by simple measures of contextual properties, such as subject noun animacy and human similarity ratings over pairs of subject nouns. However, I also find that while some participants supply a great variety of ratings, many participants supply only a few unique values during the task. This suggests that some individuals are making more fine-grained judgments than others. These differences in response granularity could stem from a variety of sources. However, the offline nature of Experiment 1 does not enable direct examination of the comprehension process, but rather focuses on its end result. In Experiment 2, I present a Speed-Accuracy Tradeoff study that explores the earliest stages of meaning-in-context resolution to better understand the dynamics of the comprehension process itself. In particular, I focus on the timecourse of meaning resolution and the question of whether verbs carry context-independent default interpretations that are activated prior to semantic integration. I find, consistent with what has previously been shown for nouns, that verbs do in fact carry such a default meaning, as can be seen in early false alarms to stimuli such as The dawn broke -- Something shattered. These default meanings appear to reflect the most frequent interpretation of the verb. While these default meanings are likely an emergent effect of repeated exposure to frequent interpretations of a verb, I hypothesize that they additionally support a shallow semantic processing strategy. Recently, a growing body of work has begun to demonstrate that our language comprehension is often less than exhaustive and less than maximally accurate -- people often vary the depth of their processing. 
In Experiment 3, I explore changes in depth of semantic processing by making an explicit connection to research on human decision making, particularly as regards questions of strategy selection and effort-accuracy tradeoffs. I present a semantic judgment task similar to that used in Experiment 2, but incorporating design principles common in studies on decision making, such as response-contingent financial payoffs and trial-by-trial feedback on response accuracy. I show that participants' preferences for deep and shallow semantic processing strategies are predictably influenced by factors known to affect decision making in other, non-linguistic domains. In lower-risk situations, participants are more likely to accept default meanings even when they are not contextually supported, responding "True" to stimuli such as The dawn broke -- Something shattered, even without the presence of time pressure. In Experiment 3, I additionally show that participants can adjust not only their processing strategies but also their stimulus acceptance thresholds. Stimuli were normed for truthfulness, i.e., how strongly implied (or entailed) a probe sentence was given its context sentence. Some stimuli in the task possessed an intermediate degree of truthfulness, akin to implicature, as in The log burned -- Something was dangerous (truthfulness 4.55/7). Across three conditions, the threshold separating "true" from "false" stimuli was moved such that stimuli like the example just given would be evaluated differently in different conditions. Participants rapidly learned these threshold placements via feedback, indicating that their perceptions of meaning-in-context, as expressed via the range of possible conclusions that could be drawn from the verb, could vary dynamically in response to situational constraints. This learning was additionally found to occur both faster and more accurately under increased levels of risk. This thesis makes two primary contributions to the literature.
First, I present evidence that our knowledge of verb meanings is at least two-layered -- we have access to a very information-rich base of event knowledge, but we also have a more schematic level of representation that is easier to access. Second, I show that these different sources of information enable different semantic processing strategies, and moreover that the choice between these strategies is dependent upon situational characteristics. I additionally argue for the more general relevance of the decision making literature to the study of language processing, and suggest future applications of this approach for work in experimental semantics and pragmatics.

Item
Typicality in Chinese sentence processing: evidence from offline judgment and online self-paced reading (2014-08)
Chen, Po-Ting; Meier, Richard P.; Wicha, Nicole Y.
This study examines how Chinese speakers understand sentences describing events that have varying degrees of typicality. How the interpretation of typicality is obtained from linguistic input is not fully understood. In this study, I investigate the association of pairs of content words in order to determine their contribution to judgments of event typicality. The associations between words could influence the interpretation of event typicality. Two words that are not associated semantically, for example baby and wine, may be seen as an atypical combination. However, when these words are placed in a sentence context, the resulting sentence can describe a typical scenario, such as the baby spilled the wine. Four offline judgment studies were conducted to obtain quantitative measurements of the association of word pairs and of judgments of event typicality in sentences. These studies demonstrated that noun pairs showed larger differences in their association ratings than noun-verb pairs did.
When the sentences containing the word pairs were judged, the association of the noun pair strongly influenced the sentence’s event typicality ratings, regardless of word order or of the typicality of the verb. Two online, word-by-word self-paced reading studies were conducted to examine whether judgments of word associations and event typicality are used in real-time sentence processing. The results showed that there was a slowdown in reading times at the critical regions when the noun pairs were atypical. The typicality of the verb did not result in a difference in reading times, regardless of the word order of the sentences, although offline judgment scores of event typicality were predictive of online reading times. The findings of these studies suggest that: (1) event typicality is more than the semantic association between words -- noun-noun and noun-verb associations both contribute to event typicality, but the association of two nouns contributes more strongly and is not affected by an intervening word; (2) the typicality of verbs contributed to real-time sentence processing, insofar as the verbs contributed to the judged typicality of the events expressed by SVO and SOV clauses; and (3) in real-time sentence processing, regardless of the sentence’s word order, the association of nouns has the greater impact on event typicality processing. This is not likely to be due simply to a priming effect between nouns, but rather also reflects the processing of the sentence’s event typicality.