Show simple item record

dc.contributor.author: Urtasun, Raquel
dc.date.accessioned: 2013-08-30T20:20:44Z
dc.date.available: 2013-08-30T20:20:44Z
dc.date.issued: 2013-01-25
dc.identifier.uri: http://hdl.handle.net/1853/48750
dc.description: Raquel Urtasun is an Assistant Professor at TTI-Chicago, a philanthropically endowed academic institute located on the campus of the University of Chicago. She was a visiting professor at ETH Zurich during the spring semester of 2010. Previously, she was a postdoctoral research scientist at UC Berkeley and ICSI, and a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. She completed her PhD at the Computer Vision Laboratory at EPFL, Switzerland, in 2006, working with Pascal Fua and with David Fleet at the University of Toronto. She has been an area chair of multiple learning and vision conferences (e.g., NIPS, UAI, ICML, ICCV) and has served on the committees of numerous international computer vision and machine learning conferences. Her major interests are statistical machine learning and computer vision, with a particular focus on non-parametric Bayesian statistics, latent variable models, structured prediction, and their application to semantic scene understanding.
dc.description: Presented on Wednesday, January 25, 2013 from 12 noon to 1 pm in the TSRB Banquet Hall, rooms 132-134.
dc.description: Runtime: 61:06 minutes.
dc.description.abstract: Developing autonomous systems that can assist humans in everyday tasks is one of the grand challenges of modern computer science. Notable examples are personal robotics for the elderly and people with disabilities, as well as autonomous driving systems that can help reduce fatalities caused by traffic accidents. To perform tasks such as navigation, recognition, and manipulation of objects, these systems must be able to efficiently extract 3D knowledge of their environment. While a variety of novel sensors have been developed in the past few years, in this work we focus on extracting this knowledge from visual information alone. In this talk, I'll show how Markov random fields provide a powerful mathematical formalism for extracting this knowledge. In particular, I'll focus on a few examples, namely 3D reconstruction, 3D layout estimation, 2D holistic parsing, and object detection, and show representations and inference strategies that allow us to achieve state-of-the-art performance as well as speed-ups of several orders of magnitude.
dc.format.extent: 61:06 minutes
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.relation.ispartofseries: IRIM Seminar Series
dc.subject: Robotics
dc.subject: Semantic scene parsing
dc.subject: Visual sensors
dc.title: Efficient Algorithms for Semantic Scene Parsing
dc.type: Video
dc.type: Lecture
dc.contributor.corporatename: Toyota Technological Institute at Chicago
dc.contributor.corporatename: Georgia Institute of Technology. Center for Robotics and Intelligent Machines


This item appears in the following Collection(s)

  • IRIM Seminar Series [112]
    Each semester a core seminar series is announced featuring guest speakers from around the world and from varying backgrounds in robotics.
