Browsing by Subject "Eye tracking"
Now showing 1 - 14 of 14
Item A low cost eye-tracking system for human-computer interface (2017-08-09)
Justiss, Daniel McCravey; Li, Wei (The University of Texas at Austin)
The quality of life for heavily disabled people could be greatly improved if they had a way to interface with a computer. Methods that attempt to solve this problem already exist, but each has drawbacks: existing systems are usable only by more lightly disabled individuals, or are too expensive, too inaccurate, too invasive, or too uncomfortable to be viable for everyone. Using a laptop with a built-in camera, or a computer with a camera otherwise mounted on the screen, it may be possible to create a human-computer interface without any of these drawbacks. Image-tracking software can attempt to determine where on the screen the user’s eyes are looking; using this gaze estimate as input, the user could control the position of the cursor with their eyes alone. Several research groups have attempted this approach, but none has produced a system that is both accurate and fast enough. This study proposes one significant change that may make a successful eye tracker possible: the user wears circular eyeglass frames, without lenses, that the computer can track along with the eyes. The frames provide additional reference information that may allow an eye tracker of high enough quality for use in a human-computer interface.
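The report itself does not publish its tracking code, but the core geometric step of locating a circular frame in a camera image can be sketched. Assuming edge points on the frame’s rim have already been extracted, a least-squares circle fit (the classic Kåsa method) recovers the rim’s center and radius; everything below, including the pixel coordinates, is illustrative.

```python
import numpy as np

def fit_circle(xs, ys):
    # Kåsa least-squares circle fit: a point (x, y) on a circle satisfies
    # x^2 + y^2 = a*x + b*y + c, with center (a/2, b/2) and
    # r^2 = c + (a/2)^2 + (b/2)^2. Solve the linear system for (a, b, c).
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Simulated noisy edge points from a 40 px rim centered at (320, 240)
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
xs = 320 + 40 * np.cos(theta) + rng.normal(0, 0.5, 200)
ys = 240 + 40 * np.sin(theta) + rng.normal(0, 0.5, 200)
cx, cy, r = fit_circle(xs, ys)  # recovered center and radius in pixels
```

Because the fit is linear, it is cheap enough to run per video frame; the recovered circle then gives a stable head-pose reference against which pupil position can be measured.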
The report describes the design and validation of such a low-cost eye-tracking system for heavily disabled people.

Item Allocation of attention and automaticity in expert music teaching (2022-08-12)
Heinsen, Robin Schaps; Duke, Robert A.; Hayhoe, Mary M; Jellison, Judith A; Scott, Laurie P; Simmons, Amy L
To assess the degree to which teachers’ attention allocation operates below conscious awareness, I analyzed expert teachers’ gaze during brief teaching episodes and compared their gaze behavior to their post hoc descriptions of their thinking. In the first study, I devised a case-study approach in which I served as both research participant and experimenter, comparing my gaze behavior while teaching with my gaze behavior while observing a video recording of my teaching. Gaze analyses revealed well-timed attentional shifts during teaching that were related to the accomplishment of momentary goals and were controlled largely without conscious attention. My gaze differed markedly when I observed the video of the lesson, absent the need to make momentary decisions and deliver instruction. In the second study, two artist-level chamber music coaches wore eye-tracking glasses while teaching a chamber music rehearsal, then participated in two interviews about their teaching. The first interview took place immediately after the rehearsal; the second occurred approximately two weeks later, when the teachers recalled what they remembered about their attention as they watched their lesson videos and eye-tracking recordings. The teachers’ gaze behavior revealed rich networks of rapid information tracking and goal-directed attention, but both teachers struggled to articulate the details of their thinking or their attention allocation, suggesting that they conceive of their teaching behaviors in broad construals and that the complex attentional behaviors detected in their gaze occurred unconsciously.
These results reveal important features of “teacher thinking” that are not overtly observable or typically construed as conscious (intentional) behavior. Each teacher’s allocation of attention was nevertheless strategic, albeit governed by processes to which they devoted no conscious attention while teaching. That this component of teaching practice does not involve volitional control suggests that teachers’ descriptions of their thinking may not reveal to novices important elements of pedagogical expertise.

Item Associations between health literacy and attention to food packaging, front of pack labels, and claims (2013-05)
Ripley, Lindsey Powers; Pasch, Keryn E.; Mackert, Michael
Background: Poor diet is an important risk factor for chronic disease and obesity in the U.S., and diet-related health disparities disproportionately affect racial/ethnic minorities and adults with the lowest levels of income, education, and health literacy. Packaged foods contribute to poor diet by adding excess calories, saturated fat, sugars, and sodium. Nutrition labels and claims influence knowledge of the healthfulness and nutrient content of packaged foods, and may ultimately affect purchasing decisions. Objective: The purpose of this study was to investigate how adults with varying levels of health literacy view food packaging components. Sample: 25 University Staff and 25 Adult Education students were recruited for the study. The final sample included 47 adults (24 University Staff and 23 Adult Education students); 61% were between 25 and 44 years old, 77% were female, and 47% were non-Hispanic White. Methods: Eye-tracking technology was used to measure participant attention to food packaging stimuli, and the Newest Vital Sign (NVS) was used to measure health literacy.
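The analysis pairing an eye-tracking attention measure with an NVS health-literacy score is a standard linear regression with covariates. A minimal numpy sketch follows; the data are simulated and the variable names (NVS score, an education indicator, a race/ethnicity indicator, total dwell time as the attention outcome) are illustrative assumptions, not the study’s actual dataset.

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares: coefficients for y = X @ beta + error."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 47
nvs = rng.integers(0, 7, n).astype(float)    # NVS health-literacy score (0-6)
edu = rng.integers(0, 2, n).astype(float)    # hypothetical education indicator
white = rng.integers(0, 2, n).astype(float)  # hypothetical race/ethnicity indicator
# Simulated attention outcome (e.g., dwell time) with a true slope of 15 on NVS
attention = 100 + 15.0 * nvs + rng.normal(0, 30, n)

# Columns: intercept, health literacy, and the two covariates
X = np.column_stack([np.ones(n), nvs, edu, white])
beta = ols(attention, X)
# beta[1] estimates the health literacy / attention association,
# adjusted for education and race/ethnicity
```

Adding the covariate columns to the design matrix is what “controlling for education and race/ethnicity” amounts to in the adjusted models.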
Using these objectively measured attention variables, the study tested the relationship between health literacy and attention to food packaging, claims, and front of pack labels in linear regression models. Results: Health literacy was positively associated with attention to food packaging (B = 13.52, SE = 6.06, p < .05) and front of pack labels (B = 0.29, SE = 0.12, p < .05). After controlling for education and race/ethnicity, health literacy remained positively associated with attention to food packaging (B = 17.01, SE = 8.45, p = 0.050) but not with front of pack labels. Conclusions: Health literacy may be an important factor influencing initial exposure to nutrition information on food packaging. Understanding attention to food packaging and nutrition labels among vulnerable populations, such as those with limited health literacy, is needed to inform strategies for increasing nutrition label use and comprehension.

Item Conducting eye tracking studies in interactive information retrieval (2017-05-02)
Verma, Nitin; Gwizdka, Jacek; Nordquist, Eric
This report seeks to situate the field of Interactive Information Retrieval (IIR) in the broader context of Information Retrieval (IR) and the psychological aspects of the human visual system. I present a concise background on IIR, IR, and experiment design in eye-tracking research. After the background and literature review on IIR, I give a brief tour of the concepts behind eye-tracking technology. Finally, I describe EventIDE, a stimulus presentation and control software package that offers numerous capabilities for behavioral research. Focusing on eye tracking, and continuing the theme of deploying eye tracking for IIR research, I take the reader through a sample experiment in EventIDE.
At the end, I analyze specific outcomes of the sample experiment to demonstrate how eye tracking can be used to collect data about users’ interaction with the layout and presentation of information on modern digital interfaces.

Item Cross-domain adaptation and geometric data synthesis for near-eye to remote gaze tracking (2023-05-04)
Tinn, Phil; Hart, Justin W.; Klivans, Adam R.
Learning-based gaze estimation leveraging deep neural networks has shown remarkable progress in directly mapping images of human subjects to their directions of gaze. In Human-Robot Interaction, it is desirable to perform such estimates in relatively unconstrained settings with diverse subject appearances, for which data generation and ground-truth annotation remain costly. This dependence on data continues to limit the generalization of gaze models to unseen situations. This work contributes a new data adaptation approach that incorporates the comparatively economical annotated gaze data from head-mounted eye trackers into the training of a remote gaze estimator, by employing classic 3D computer vision as a technique for data synthesis. Experimentally, we use TEyeD as the source domain in conjunction with UnityEyes as the intermediate domain, and test generalization and adaptation performance on MPIIGaze. Results demonstrate that a head-mounted gaze dataset can be adapted across domains to improve the generalization of gaze models trained on synthetic gaze data, yielding a method that can complement other state-of-the-art gaze adaptation approaches.

Item Disentangling perceptual mechanisms maintaining social anxiety disorder using VR and eye tracking (2022-06-28)
Rubin, Mikael; Telch, Michael Joseph; Hayhoe, Mary M; Smits, Jasper AJ; Beevers, Christopher G; Hixon, John G
Social anxiety disorder (SAD) is highly prevalent and confers significant life impairment.
Attention processes associated with social-evaluative threat have been broadly implicated in the conceptualization of SAD. This dissertation investigates specific attentional mechanisms maintaining SAD. Both theoretical and empirical research have emphasized two important attentional processes in SAD: hypervigilance to social threat (e.g., quickly scanning faces) and avoidance of social information (e.g., avoiding looking at people). The dissertation consists of three studies investigating attentional avoidance in social anxiety. Study 1 investigated the relationship between social anxiety and eye movements during a real social-evaluative situation: giving a speech. We used 360º video because it is both highly realistic and allows a high degree of experimental control. The primary findings from Study 1 revealed that fear of public speaking was associated with greater avoidance of uninterested (socially threatening) audience members compared with interested audience members. Building on Study 1, Study 2 addressed whether direct modification of attention (through attention guidance) during virtual reality exposure therapy could enhance intervention outcomes compared with standard virtual reality exposure. Our pilot randomized controlled trial (n = 21) indicated a strong effect of both interventions on fear of public speaking, as well as evidence that the guidance component engaged the target attentional mechanism (decreased avoidance of audience members). However, our Bayesian analyses provided no conclusive support for either the null or the alternative hypothesis. Further research with larger samples is needed to elucidate the link between attentional avoidance and social anxiety disorder.
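An avoidance measure like the one underlying these findings is typically built from fixation records scored against areas of interest (AOIs). The sketch below shows one way such a score might be computed; the durations and AOI labels are hypothetical, not the dissertation’s data or scoring scheme.

```python
import numpy as np

# Hypothetical per-fixation records: duration (ms) and the audience AOI fixated
durations = np.array([220, 180, 340, 150, 400, 260, 190], dtype=float)
aois = np.array(["uninterested", "interested", "interested", "uninterested",
                 "interested", "other", "interested"])

def dwell_proportions(durations, aois):
    """Share of total fixation time spent on each area of interest."""
    total = durations.sum()
    return {label: durations[aois == label].sum() / total
            for label in np.unique(aois)}

props = dwell_proportions(durations, aois)
# A low proportion on "uninterested" (socially threatening) audience members,
# relative to "interested" ones, would count as attentional avoidance
# in this toy scoring scheme.
```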
Study 3 used data from Study 2 to test whether a machine learning approach well suited to high-dimensional eye movement data (hidden Markov models) could identify heterogeneous attentional styles among those with social anxiety disorder and predict differential treatment outcomes. We identified two distinct pre-treatment groups reflecting “avoidant” and “vigilant” styles. Moreover, we found meaningful differences between the groups post-treatment, with only the vigilant group showing treatment response to virtual reality exposure therapy. These findings suggest that evaluating attentional processes in flexible, data-driven ways may provide unique insights into social anxiety disorder and has implications for treatment.

Item The effect of the length and structure of sentences upon the silent reading process (1931)
Holland, Benjamin F.; Gray, Clarence Truman, 1877-1951

Item The effects of payoffs and feedback on the disambiguation of relative clauses (2014-12)
Chacartegui Quetglas, Luis; Bannard, Colin
This dissertation investigates two facts about language processing. The Good Enough Approach claims that language users do not form a fully detailed representation of the input unless the task at hand requires it. On the other hand, language users have been shown to display internal preferences, when faced with ambiguous input, as to what direction disambiguation should take; it has been proposed that these preferences are based on previous experience with similar inputs. This thesis investigates these two issues using tools from the fields of decision making and reinforcement learning. Specifically, the feedback and payoffs associated with sentence interpretations are manipulated to explore reading behavior, understood as a process of information seeking, and disambiguation choices. In four eye-tracking reading experiments, the experimental stimuli are sentences containing a relative clause attachment ambiguity.
Experiment 1 investigates whether the combination of a sentence’s degree of ambiguity and the possible payoffs affects people’s reading times for the potentially ambiguous parts of the sentence, as well as their disambiguation choices. Experiment 2 investigates the role of feedback in such processes, a combination related to expected-utility maximization. Experiment 3 studies how participants learn from feedback under risky and non-risky conditions. The last experiment investigates whether participants adjust their responses to the evidence provided by feedback, even overriding their initial internal bias toward a default response.

Item Embodied and environmental influences on early word learning (2023-04-05)
Schroer, Sara E.; Yu, Chen, Ph. D.; Hayhoe, Mary; Crandall, David; de Barbaro, Kaya; Legare, Cristine H
Most early language learning research has focused primarily on the words infants hear in their daily lives and the referents they see. Growing evidence suggests that infants’ hands, bodies, and motor development play an important role in language acquisition, shaping the statistics available for word learning beyond visual attention alone. Simultaneously, other work has shown that infants’ environments, including the physical, temporal, and contextual environments in which words occur, also play a critical role in language acquisition. The goal of this dissertation is to present three studies that consider both the embodied and the environmental factors that influence word learning. Paper 1 demonstrates that coordinated infant object manipulation and visual attention, not visual attention alone, creates moments optimal for word learning. Paper 2 extends this work by showing that visually ambiguous naming moments can be “resolved” through infants’ object handling, but not their parents’, to support real-time learning.
Paper 3 presents findings from an at-home study, showing that the coordination of parents’ and toddlers’ eyes and hands improves the quality of naming moments in both object play and mealtime interactions. In the rich, noisy world of toddlers’ everyday lives, embodied attention plays a critical role in coordinating dyadic behaviors and creating informative naming moments.

Item A laboratory study of the reading habits of Spanish-speaking children (1931)
Knight, James, 1895-1965; Gray, Clarence Truman, 1877-1951

Item Memory and visual search in natural environments (2017-05)
Li, Chia-Ling; Hayhoe, Mary; Colgin, Laura L; Cormack, Lawrence K; Huk, Alexander C; Schnyer, David M
Vision is an active process involving the deployment of gaze to acquire information that guides behavior. What mechanisms control this deployment of overt attention in scenes? This thesis explores the role of memory representations in guiding visual search in naturalistic environments. Despite evidence that memory can guide search in simple two-dimensional (2D) displays, it is unclear how significant a factor memory is in natural vision. Two experiments were carried out in a multi-compartment, three-dimensional (3D) virtual reality environment in which subjects were immersed and could walk freely. A parallel 2D version was examined in one experiment to compare performance across the two kinds of environment and to test the generality of previous findings. Eye tracking was performed in all experiments. In the first experiment, subjects searched for a set of geometric objects for three blocks, then searched for realistic contextual objects in the last two blocks. Many aspects of search performance were similar in 2D and 3D, with some suggestive differences. Search performance improved rapidly in both environments over the first few trials despite different targets, suggesting learning of spatial context.
However, incidental fixations did not improve subsequent search performance, and the locations of irrelevant context objects were not learned with experience, suggesting that some other aspect of the environment, such as global structure, was being learned. Consistent with this, subjects became more likely with experience to make the correct room choice in 3D than in 2D, suggesting that memory was used more in 3D, where the cost of moving around was higher. Likewise, more fixations were used in 2D, where eye movements did not entail more costly head movements. Thus, the first set of experiments revealed basic similarities between 2D and 3D but pointed to an additional impact of head and body movements on the way the space was searched. In the second 3D experiment, subjects searched for the same set of geometric objects for five blocks in the same 3D environment. The locations of targets were fixed in the first three blocks, moved in the fourth block, and returned to their original locations in the fifth block. Subjects actively avoided fixating irrelevant areas of the space in a hierarchical manner, indicating the importance of learning the global structure of the space. When target locations changed, subjects directed eye and head movements toward the old locations of the targets, indicating that memory was the primary component guiding target selection, with little weighting from the visual image. This guidance started before the search scene became visible, suggesting advance planning of movements based on memory for target location, and that memory was used to guide action throughout the search process. Together the results suggest that spatial memory is the primary factor guiding attention allocation in 3D environments, and that this may allow more energetically efficient search strategies.
This strategy also allows more efficient use of limited attentional resources.

Item Perceptual acuity and music teaching : tracking teacher gaze (2017-09-13)
Marcum, Travis Dixon; Duke, Robert A.; Hayhoe, Mary M; Jellison, Judith A; Scott, Laurie P; Simmons, Amy L
Perceptual acuity is a central component of a teacher’s ability to structure successful learning experiences that lead to students’ accomplishment of proximal goals, yet to date there has been little research focused on this important aspect of teachers’ thinking. We used eye-tracking technology in a series of three studies to measure music teachers’ gaze in relation to moment-to-moment attention and instructional outcomes. In the first investigation, we recorded an artist-level violin teacher and a highly skilled graduate violinist each teaching 5-minute violin lessons to a student of their own and to a novel student, and examined the frequency and duration of the teachers’ visual fixations in relation to momentary instructional goals. Fixation targets and durations for the two teachers were quite similar when teaching their own students. When teaching novel students, the artist teacher’s fixations resembled those in the lesson with the familiar student, but the graduate student’s fixations were markedly different and less often directed toward targets related to momentary instructional goals. A second investigation assessed the extent to which an auditory signal influences gaze patterns in music teaching. Eight expert teachers viewed a life-size video projection of a violin student performing a brief music excerpt, with and without hearing the sound of the instrument. We found no meaningful differences in teachers’ gaze between the two presentation conditions. In a third study, four violin teachers (a freshman, a senior, a graduate student, and an artist teacher) each taught a 5-minute lesson to a familiar student.
We identified the proximal performance goals in each lesson and examined visual fixations in relation to those goals. We found marked differences in the sequence and duration of fixations among the four teachers, with more experienced teachers fixating more frequently, and for longer durations, on targets relevant to their momentary instructional goals. These data illustrate the utility of gaze analysis in illuminating the thinking of expert music teachers and provide insight into the deployment of attention among music teachers with varied levels of experience and expertise.

Item Scene statistics in 3D natural environments (2010-08)
Liu, Yang, 1976-; Bovik, Alan C. (Alan Conrad), 1958-; Cormack, Lawrence K.; Geisler, Wilson G.; Vishwanath, Sriram; Ghosh, Joydeep
In this dissertation, we conducted a stereoscopic eye-tracking experiment using naturalistic stereo images and analyzed low-level 2D and 3D scene features at binocular fixations and at randomly selected locations. The results reveal that humans tend to fixate on regions with higher luminance variation but lower disparity variation. Because luminance and depth changes often co-occur in natural environments, this dichotomy between luminance features and disparity features inspired us to study the accurate statistics of 2D and 3D scene properties. Using a range-map database, we studied the distribution of disparity in natural scenes. The natural disparity distribution has a high peak at zero and heavy tails, similar to a Laplace distribution. The relevance of the natural disparity distribution to other studies in neurobiology and visual psychophysics is discussed in detail. We also studied luminance, range, and disparity statistics in natural scenes using a co-registered luminance-range database. The distributions of bandpass 2D and 3D scene features are well modeled by generalized Gaussian models.
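A peaked, heavy-tailed Laplace distribution of the kind described for natural disparities has a closed-form maximum-likelihood fit: the location is the sample median and the scale is the mean absolute deviation from it. The sketch below illustrates the fit on simulated samples, not on the dissertation’s range-map measurements.

```python
import numpy as np

def fit_laplace(x):
    """Maximum-likelihood Laplace fit: location = median,
    scale = mean absolute deviation about the median."""
    loc = np.median(x)
    scale = np.abs(x - loc).mean()
    return loc, scale

# Surrogate "disparity" samples: sharply peaked at zero with heavy tails
rng = np.random.default_rng(42)
disparity = rng.laplace(loc=0.0, scale=0.1, size=50_000)
loc, scale = fit_laplace(disparity)  # should recover loc ~ 0, scale ~ 0.1
```

A generalized Gaussian adds a shape parameter that interpolates between this Laplace case (shape 1) and the ordinary Gaussian (shape 2), which is what lets a single model family capture both the bandpass luminance and the disparity statistics.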
There are positive correlations between bandpass luminance and depth, which can be captured by varying the shape parameters in the probability density functions of the generalized Gaussians. In a further study of suprathreshold luminance and depth discontinuities, we show that a significant luminance edge is much more likely to be observed at a significant depth edge than on a homogeneous depth surface; likewise, a significant depth edge occurs at a significant luminance edge with greater probability than in homogeneous luminance regions. Again, the dependency between luminance and depth discontinuities can be modeled successfully by generalized Gaussians. We applied our statistical models of 3D natural scenes to stereo correspondence, proposing a Bayesian framework that incorporates the bandpass disparity prior and the luminance-disparity dependency in the likelihood function. We compared our algorithm with a classical simulated annealing method based on heuristically defined energy functions. The computed disparity maps show great improvements both perceptually and objectively.

Item What teachers see : the centrality of noticing in the skill of teaching (2023-04-21)
Batisla-ong, Lorelei J.; Duke, Robert A.; Simmons, Amy L; Jellison, Judith A; Scott, Laurie P; Schallert, Diane L
This dissertation comprises three projects, all related to teacher observation. Previous research in multiple disciplines has shown that perceptions of ongoing events are an essential component of informed decision making, and that one hallmark of expertise is the ability to accurately perceive relevant events in a given environment. The first project involves a music teacher in-service program that I designed and implemented in a small school district, focused on aspects of teaching selected by the individual teacher participants.
The second project was a study of gaze behavior in which I tracked the gaze of a novice and an expert teacher as they observed a video recording of a music lesson. The third project was a study of observation by expert and nonexpert teachers who watched video recordings of two brief music lessons. In each of these studies, I described what and how teachers see, how they talk about what they see, and how they experience teaching. The results suggest that experts’ observations of teaching differ from those of novices in systematic ways.