
dc.contributor.advisor    Stanley, Garrett B.
dc.contributor.author    Kelly, Sean T.
dc.date.accessioned    2016-05-27T13:09:01Z
dc.date.available    2016-05-27T13:09:01Z
dc.date.created    2015-05
dc.date.issued    2014-12-19
dc.date.submitted    May 2015
dc.identifier.uri    http://hdl.handle.net/1853/54840
dc.description.abstract    Motion in the outside world forms one of the primary uses of visual information for many animals. The ability to interpret motion quickly and accurately permits interaction with and response to events in the outside world. While much is known about some aspects of motion perception, there is less agreement about how the feature selectivity underlying motion perception is actually formed in the convergent and divergent pathways of the visual system. It is even less clear how these classical understandings of motion processing, often derived from artificial stimuli with little resemblance to the outside world, correspond to the responses of neurons to more natural stimuli. In this thesis, we probe these gaps, first by demonstrating that synchronization within the visual thalamus leads to efficient representations of motion (through tuning properties) in primary visual cortex, exploiting precise timing across populations in a unique manner compared to traditional models. We then create a novel "minimally natural" stimulus, with the appearance of an infinite hallway wallpapered with sinusoidal gratings, to probe how such minimally natural features modulate our predictions of neural responses based on feature tuning properties. Through encoding and decoding models, we find that measuring a restricted tuning parameter space limits our ability to capture all response properties but preserves relevant information for decoding. We finish with an exploration of ethologically relevant natural features, perspective and complex motion, and show that even moderate amounts of each feature within or near the classical V1 receptive field change the neural response from what classical feature tuning would predict and dramatically improve stimulus classification. Together, all of these results indicate that capturing information about motion in the outside world through visual stimuli requires a more advanced model of feature selectivity, one that incorporates parameters based on more complex spatial relationships.
dc.format.mimetype    application/pdf
dc.language.iso    en_US
dc.publisher    Georgia Institute of Technology
dc.subject    Primary visual cortex
dc.subject    Neural coding
dc.subject    LGN
dc.subject    Orientation tuning
dc.subject    Natural scenes
dc.subject    Motion
dc.subject    Neural models
dc.subject    Computational neuroscience
dc.title    Neural population coding of visual motion
dc.type    Dissertation
dc.description.degree    Ph.D.
dc.contributor.department    Biomedical Engineering (Joint GT/Emory Department)
thesis.degree.level    Doctoral
dc.contributor.committeeMember    Rozell, Christopher J.
dc.contributor.committeeMember    Alonso, Jose M.
dc.contributor.committeeMember    Butera, Robert
dc.contributor.committeeMember    Liu, Robert
dc.date.updated    2016-05-27T13:09:01Z

