Browsing by Subject "Visual attention"
Now showing 1 - 2 of 2
Item An experimental assessment of visual attention and sign language comprehension in ADHD and neurotypical second language learners of American Sign Language (2023-04-21)
Joyce, Taylor Renee; Quinto-Pozos, David

Sign language requires the perceiver’s visual attention, whether accessed in foveal or peripheral vision; managing visual attention in real-world conversation is a skill second language (L2) sign learners may need practice to master. This task may be a challenge for individuals with differences in visual attentional processing, such as those with ADHD, but to date there is little research on atypical signed L2 learners. This report introduces a novel online experiment investigating visuo-attentional differences in hearing L2 signers of American Sign Language during signed language comprehension. Participants attempt to rapidly perceive both linguistic and non-linguistic visual stimuli in their central and peripheral vision while attending to a visual anchor. A preliminary analysis of L2 signers finds a robust effect of the attentional manipulation (distance from stimulus to anchor), but conflicting evidence for sensitivity to linguistic factors such as sign complexity, sign frequency, and the difference between signed language and non-linguistic gestural stimuli. Importantly, the task is shown to successfully probe visuo-attentional mechanisms in a sign language processing context and will be suitable for comparing neurotypical and ADHD L2 ASL learners. Further, the results suggest that even somewhat experienced L2 signers may be relying on general perceptual, rather than specifically linguistic, processes for sign and sign-like stimuli, which has important implications for L2 signed language learners of all neurotypes.

Item The role of uncertainty and reward on eye movements in natural tasks (2012-05)
Sullivan, Brian Thomas; Hayhoe, Mary; Ballard, Dana Harry; Geisler, Wilson S.; Cormack, Lawrence K.; Pillow, Jonathan W.

The human visual system is remarkable for the variety of functions it serves and the range of conditions under which it performs, from detecting small changes in brightness to guiding complex movements. The human eye is foveated, and humans continually make eye and body movements to acquire new visual information. The mechanisms that control this acquisition, and the associated sequencing of eye movements in natural circumstances, are not well understood. While the visual system has highly parallel inputs, the fovea must be moved serially: a decision process continually evaluates peripheral information and selects the next fixation target. Prior explanations of fixation selection have largely focused on computer vision algorithms that find image regions of high salience, models that reduce uncertainty or entropy over visual features, and heuristic models. However, these methods are not well suited to natural circumstances in which humans are mobile and eye movements are closely coordinated to gather ongoing task information. Following the computational model of gaze scheduling proposed by Sprague and Ballard (2004), I argue that a systematic explanation of human gaze behavior in complex natural tasks needs to represent task goals, a reward structure for those goals, and a representation of uncertainty concerning progress toward those goals.
If these variables are represented, it is possible to formulate a decision computation for choosing fixation targets based on an expected value derived from uncertainty-weighted reward. I present two studies of human gaze behavior in a simulated driving task that provide evidence of the human visual system’s sensitivity to uncertainty and reward. In these experiments, observers monitored an information source more closely when it carried high uncertainty, but only when that information was also associated with high reward. Given this behavioral finding, I then present a set of simple candidate models in an attempt to explain how humans schedule the acquisition of information over time. These simple models are shown to be inadequate in describing the process of coordinated information acquisition in driving. I then present an extended version of the gaze scheduling model adapted to our particular driving task. This formulation allows ordinal predictions about how humans use reward and uncertainty in the control of eye movements and is generally consistent with observed human behavior. I conclude by reviewing the main results, discussing the merits of the computational models used, proposing future behavioral experiments that would more directly test the gaze scheduling model, and suggesting revisions to future implementations of the model to more appropriately capture human gaze behavior.
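As a rough illustration of the decision computation this abstract describes (fixation targets chosen by an expected value derived from uncertainty-weighted reward), a minimal sketch follows. The function, field names, and the simple multiplicative weighting are assumptions made for illustration, in the spirit of Sprague and Ballard (2004); this is not the dissertation’s actual gaze scheduling implementation.

```python
# Illustrative sketch (not the dissertation's implementation) of choosing
# a fixation target by the expected value of uncertainty-weighted reward.
# Field names and the multiplicative weighting are assumptions.

def choose_fixation(tasks):
    """Return the task whose expected value of being fixated is highest.

    Each task carries:
      reward      -- reward rate for keeping the task on track
      uncertainty -- current uncertainty about the task-relevant state,
                     which grows while the task is not being fixated
    """
    # Fixating a task reduces its uncertainty, so high-reward tasks whose
    # state estimates have degraded the most are selected first.
    return max(tasks, key=lambda t: t["reward"] * t["uncertainty"])

# Hypothetical subtasks in a simulated driving scenario.
tasks = [
    {"name": "lane keeping", "reward": 1.0, "uncertainty": 0.2},
    {"name": "lead car",     "reward": 0.8, "uncertainty": 0.9},
    {"name": "speedometer",  "reward": 0.3, "uncertainty": 0.9},
]
print(choose_fixation(tasks)["name"])  # -> lead car
```

Under a scheme like this, a low-reward source is glanced at only after its uncertainty has grown large, which matches the ordinal pattern reported above: high-uncertainty sources are monitored closely only when they are also tied to high reward.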