Memory and visual search in natural environments
Vision is an active process involving the deployment of gaze to acquire information to guide behaviors. What are the mechanisms that control this deployment of overt attention in scenes? This thesis explores the role of memory representations in guiding visual search in naturalistic environments. Despite evidence that memory can guide search in simple two-dimensional (2D) displays, it is unclear how significant a factor memory is in natural vision. Two experiments were carried out in a multi-compartmental, three-dimensional (3D) virtual reality environment in which subjects were immersed and could walk freely. A parallel 2D version was examined in one experiment in order to compare performance in the two kinds of environment and to test the generality of previous findings. Eye tracking was performed in all experiments. In the first experiment, subjects searched for a set of geometric objects for three blocks, and then searched for realistic contextual objects in the last two blocks. Many aspects of search performance were similar in 2D and 3D, with some suggestive differences. Search performance improved rapidly in both environments over the first few trials despite changing targets, suggesting learning of spatial context. However, incidental fixations did not improve subsequent search performance, and the locations of irrelevant context objects were not learned with experience, suggesting that some other aspect of the environment, such as global structure, was being learned. Consistent with this, subjects were more likely to make the correct room choice in 3D than in 2D with experience, suggesting that memory was used more in 3D, where the cost of moving around was higher. In addition, more fixations were made in 2D, where eye movements did not entail more costly head movements.
Thus the first set of experiments revealed basic similarities between 2D and 3D search, but pointed to an additional impact of head and body movements on the way the space was searched. In the second 3D experiment, subjects searched for the same set of geometric objects for five blocks in the same 3D environment. The locations of targets were fixed in the first three blocks, but were moved in the fourth block and returned to their original locations in the fifth block. Subjects actively avoided fixating irrelevant areas of the space in a hierarchical manner, indicating the importance of learning the global structure of the space. When target locations changed, subjects directed eye and head movements toward the old locations of the targets, indicating that memory was the primary component guiding target selection, with little weighting from the visual image. This guidance began before the search scene became visible, suggesting that memory for target location enabled advance planning of movements and that memory was used to guide action throughout the search process. Together, the results suggest that spatial memory is the primary factor guiding attention allocation in 3D environments, and that this may allow more energetically efficient search strategies. This strategy also allows more efficient use of limited attentional resources.