Show simple item record

dc.contributor.author: Parikh, Devi
dc.date.accessioned: 2015-12-08T20:47:38Z
dc.date.available: 2015-12-08T20:47:38Z
dc.date.issued: 2015-12-01
dc.identifier.uri: http://hdl.handle.net/1853/54216
dc.description: Presented on December 1, 2015 at 12:00 p.m. in the TSRB Banquet Hall.
dc.description: Devi Parikh is an assistant professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech (VT) and an Allen Distinguished Investigator of Artificial Intelligence. Parikh's research interests include computer vision, pattern recognition, and AI, particularly visual recognition problems.
dc.description: Runtime: 52:45 minutes
dc.description.abstract: As computer vision and natural language processing techniques mature, there is heightened activity in exploring the connection between images and language. In this talk, I will present several recent and ongoing projects in my lab that take a new perspective on problems such as automatic image captioning, which are receiving a lot of attention lately. I will start by describing a new methodology for evaluating image-captioning approaches. I will then discuss image specificity, a concept capturing the phenomenon that some images are specific and elicit consistent descriptions from people, while other images are ambiguous and elicit a wider variety of descriptions from different people. Rather than treat this variance as noise, we model it as a signal, and we demonstrate that modeling image specificity improves performance in applications such as text-based image retrieval. I will then talk about our work on leveraging visual common sense for seemingly non-visual tasks such as textual fill-in-the-blanks or paraphrasing. We propose imagining the scene behind the text to solve these problems. The imagination need not be photorealistic, so we imagine the scene as a visual abstraction using clipart. We show that jointly reasoning about the imagined scene and the text yields better performance on these textual tasks than reasoning about the text alone. Finally, I will introduce a new task that pushes the understanding of language and vision beyond automatic image captioning: visual question answering (VQA). Not only does VQA involve computer vision and natural language processing, but doing well at it will also require the machine to reason about visual and non-visual common sense, as well as factual knowledge bases. More importantly, it will require the machine to know when to tap which source of information. I will describe our ongoing efforts at collecting a first-of-its-kind, large VQA dataset that will enable the community to explore this rich, challenging, and fascinating task, which pushes the frontier towards truly AI-complete problems.
dc.format.extent: 52:45 minutes
dc.relation.ispartofseries: IRIM Seminar Series
dc.subject: Automatic image captioning
dc.subject: Image modeling
dc.subject: Machine learning
dc.title: Words, Pictures, and Common Sense
dc.type: Lecture
dc.type: Video
dc.contributor.corporatename: Georgia Institute of Technology. Institute for Robotics and Intelligent Machines
dc.contributor.corporatename: Virginia Polytechnic Institute and State University. Dept. of Electrical and Computer Engineering
dc.embargo.terms: null


This item appears in the following Collection(s)

  • IRIM Seminar Series [106]
    Each semester a core seminar series is announced featuring guest speakers from around the world and from varying backgrounds in robotics.
