Show simple item record

dc.contributor.author: Nowak, Robert
dc.date.accessioned: 2020-10-13T22:24:49Z
dc.date.available: 2020-10-13T22:24:49Z
dc.date.issued: 2020-10-07
dc.identifier.uri: http://hdl.handle.net/1853/63782
dc.description: Presented online on October 7, 2020 at 12:15 p.m. [en_US]
dc.description: Robert Nowak holds the Nosbusch Professorship in Engineering at the University of Wisconsin-Madison, where his research focuses on signal processing, machine learning, optimization, and statistics. [en_US]
dc.description: Runtime: 70:16 minutes [en_US]
dc.description.abstract: The field of Machine Learning (ML) has advanced considerably in recent years, but mostly in well-defined domains using huge amounts of human-labeled training data. Machines can recognize objects in images and translate text, but they must be trained with more images and text than a person can see in nearly a lifetime. The computational complexity of training has been offset by recent technological advances, but the cost of training data is measured in terms of the human effort in labeling data. People are not getting faster or cheaper, so generating labeled training datasets has become a major bottleneck in ML pipelines. Active ML aims to address this issue by designing learning algorithms that automatically and adaptively select the most informative examples for labeling, so that human time is not wasted labeling irrelevant, redundant, or trivial examples. This talk explores the development of active ML theory and methods over the past decade, including a new approach applicable to kernel methods and neural networks, which views the learning problem through the lens of representer theorems. This perspective highlights the effect that adding a given training example has on the representation. The new approach is shown to possess a variety of desirable mathematical properties that allow active learning algorithms to learn good classifiers from few labeled examples. [en_US]
dc.format.extent: 70:16 minutes
dc.language.iso: en_US [en_US]
dc.relation.ispartofseries: Machine Learning @ Georgia Tech (ML@GT) [en_US]
dc.subject: Linear algebra [en_US]
dc.subject: Machine learning [en_US]
dc.title: Active Learning: From Linear Classifiers to Overparameterized Neural Networks [en_US]
dc.type: Lecture [en_US]
dc.type: Video [en_US]
dc.contributor.corporatename: Georgia Institute of Technology. Machine Learning [en_US]
dc.contributor.corporatename: University of Wisconsin-Madison. College of Engineering [en_US]
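The abstract describes pool-based active learning: iteratively selecting the most informative unlabeled example for a human to label. As a minimal illustration, the sketch below uses classic uncertainty (margin) sampling with a linear classifier on synthetic data; this is a simpler query strategy than the representer-theorem approach discussed in the talk, and all names and data here are hypothetical.

```python
# Hypothetical sketch: pool-based active learning with uncertainty
# (margin) sampling for a linear classifier on synthetic data.
# This is NOT the representer-theorem method from the talk.
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.5, steps=500):
    """Fit logistic-regression weights by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

# Synthetic pool: two Gaussian blobs; labels come from a hidden "oracle"
# that plays the role of the human annotator.
X_pool = np.vstack([rng.normal(-1.5, 1, (100, 2)),
                    rng.normal(1.5, 1, (100, 2))])
X_pool = np.hstack([X_pool, np.ones((200, 1))])  # bias column
y_oracle = np.r_[np.zeros(100), np.ones(100)]

labeled = [0, 100]                 # start with one labeled point per class
unlabeled = [i for i in range(200) if i not in labeled]

for _ in range(10):                # query budget: 10 additional labels
    w = train_logreg(X_pool[labeled], y_oracle[labeled])
    # Uncertainty sampling: query the pool point closest to the boundary.
    margins = np.abs(X_pool[unlabeled] @ w)
    pick = unlabeled[int(np.argmin(margins))]
    labeled.append(pick)           # "human" labels the queried point
    unlabeled.remove(pick)

w = train_logreg(X_pool[labeled], y_oracle[labeled])
acc = np.mean((X_pool @ w > 0) == y_oracle)
print(f"accuracy on the full pool after {len(labeled)} labels: {acc:.2f}")
```

The key design point is the query rule: instead of labeling points at random, the learner spends each label on the example it is least certain about, which is why active learning can reach good accuracy from few labeled examples.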

