Representations of Active Vision

dc.contributor.advisor: Niell, Cristopher
dc.contributor.author: Abe, Elliott
dc.date.accessioned: 2023-07-06T14:00:56Z
dc.date.available: 2023-07-06T14:00:56Z
dc.date.issued: 2023-07-06
dc.description.abstract: This dissertation focuses on the interplay between visual processing and motor action during natural behaviors, a relationship whose study has previously been limited by technological constraints in experimental paradigms. Recent technological innovations have improved data collection, enabling a better understanding of visual processing under naturalistic conditions. This dissertation lays out the foundational experimental methods, data analysis, and theoretical modeling needed to study visual processing during natural behaviors. Chapter II establishes and characterizes how mice change their gaze during prey capture, using a miniaturized camera system to simultaneously record the eyes and head as the mice captured crickets. The study finds two types of eye movements during prey capture: the majority are compensatory, but a subset shifts the mouse's gaze and is initiated by head movements in a 'saccade and fixate' strategy. Chapter III expands upon these methods and records neural activity, eye position, head orientation, and the visual scene simultaneously while mice freely explore an arena. These data are used to build a model that corrects the visual scene for gaze position, enabling the mapping of the first visual receptive fields in a freely moving animal. The study discovers neurons in the primary visual cortex that are tuned to eye position and head orientation, with most cells integrating positional and visual information through a multiplicative gain modulation mechanism. Chapter IV explores mechanisms for computing higher-order visual representations, such as distance estimation, from prediction. The study creates a simulated environment in which an agent records the visual scene, depth maps, and positional information while navigating an arena. A deep convolutional recurrent neural network is trained on the visual scene and tasked with predicting future visual input. After training, the pixel-wise distance of the visual scene can be linearly decoded from the network without explicit distance information. This work establishes that predictive processing is a viable mechanism by which the visual system can learn higher-order visual representations without explicit training. This dissertation consists of previously published co-authored material. (en_US)
dc.identifier.uri: https://hdl.handle.net/1794/28483
dc.language.iso: en_US
dc.publisher: University of Oregon
dc.rights: All Rights Reserved.
dc.subject: Electrophysiology (en_US)
dc.subject: Ethological behavior (en_US)
dc.subject: Predictive Learning (en_US)
dc.subject: Visual Cortex (en_US)
dc.subject: Visual Neuroscience (en_US)
dc.title: Representations of Active Vision
dc.type: Electronic Thesis or Dissertation
thesis.degree.discipline: Department of Biology
thesis.degree.grantor: University of Oregon
thesis.degree.level: doctoral
thesis.degree.name: Ph.D.
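
The abstract above notes that most primary visual cortex cells integrate positional and visual information through a multiplicative gain modulation mechanism. As a rough illustration of that idea only, and not the dissertation's actual model, the sketch below models a neuron's firing rate as a visual tuning curve scaled by an eye-position-dependent gain; the tuning shapes, gain function, and parameter values are hypothetical.

import numpy as np

# Illustrative sketch of multiplicative gain modulation: the response to a
# visual stimulus is a visual tuning curve multiplied by a gain that depends
# on eye position. All functions and parameters are hypothetical, not taken
# from the dissertation.

def visual_tuning(orientation_deg, preferred_deg=45.0, bandwidth_deg=30.0):
    """Gaussian orientation tuning curve (spikes/s before gain)."""
    return 20.0 * np.exp(-0.5 * ((orientation_deg - preferred_deg) / bandwidth_deg) ** 2)

def eye_position_gain(eye_pos_deg, preferred_pos_deg=10.0, slope=0.05):
    """Monotonic gain field: gain changes linearly with horizontal eye position."""
    return 1.0 + slope * (eye_pos_deg - preferred_pos_deg)

def response(orientation_deg, eye_pos_deg):
    """Multiplicative model: rate = visual tuning x positional gain."""
    return visual_tuning(orientation_deg) * eye_position_gain(eye_pos_deg)

# Same stimuli at two eye positions: the shape of the tuning curve is
# preserved while its amplitude is rescaled, the signature of a
# multiplicative (rather than additive) interaction.
orientations = np.linspace(0, 90, 7)
print(response(orientations, eye_pos_deg=0.0))
print(response(orientations, eye_pos_deg=20.0))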

Files

Original bundle
Name: Abe_oregon_0171A_13518.pdf
Size: 1.91 MB
Format: Adobe Portable Document Format