Representations of Active Vision
dc.contributor.advisor | Niell, Cristopher | |
dc.contributor.author | Abe, Elliott | |
dc.date.accessioned | 2023-07-06T14:00:56Z | |
dc.date.available | 2023-07-06T14:00:56Z | |
dc.date.issued | 2023-07-06 | |
dc.description.abstract | This dissertation focuses on the interplay between visual processing and motor action during natural behaviors, the study of which has previously been limited by technological constraints in experimental paradigms. Recent technological innovations have improved data collection, enabling a better understanding of visual processing under naturalistic conditions. This dissertation lays out the foundational experimental methods, data analysis, and theoretical modeling to study visual processing during natural behaviors. Chapter II establishes and characterizes how mice change their gaze during prey capture, using a miniaturized camera system to simultaneously record the eyes and head as the mice captured crickets. The study finds two types of eye movements during prey capture: the majority are compensatory, but a subset shift the mouse's gaze and are initiated by head movements in a 'saccade and fixate' strategy. Chapter III expands upon these methods to record neural activity, eye position, head orientation, and the visual scene simultaneously while mice freely explore an arena. These data are used to create a model that corrects the visual scene for gaze position, enabling the mapping of the first visual receptive fields in a freely moving animal. The study discovers neurons in the primary visual cortex that are tuned to eye position and head orientation, with most cells integrating positional and visual information through a multiplicative gain modulation mechanism. Chapter IV explores mechanisms for computing higher-order visual representations, such as distance estimation, through prediction. The study creates a simulated environment in which an agent records the visual scene, depth maps, and positional information while navigating an arena. A deep convolutional recurrent neural network is trained on the visual scene and tasked with predicting future visual input. After training, the pixel-wise distance of the visual scene can be linearly decoded from the network without explicit distance information. This work establishes that predictive processing is a viable mechanism by which the visual system can learn higher-order visual representations without explicit training. This dissertation consists of previously published co-authored material. | en_US |
dc.identifier.uri | https://hdl.handle.net/1794/28483 | |
dc.language.iso | en_US | |
dc.publisher | University of Oregon | |
dc.rights | All Rights Reserved. | |
dc.subject | Electrophysiology | en_US |
dc.subject | Ethological behavior | en_US |
dc.subject | Predictive Learning | en_US |
dc.subject | Visual Cortex | en_US |
dc.subject | Visual Neuroscience | en_US |
dc.title | Representations of Active Vision | |
dc.type | Electronic Thesis or Dissertation | |
thesis.degree.discipline | Department of Biology | |
thesis.degree.grantor | University of Oregon | |
thesis.degree.level | doctoral | |
thesis.degree.name | Ph.D. |