Few-shot Learning with Meta-Learning: Progress Made and Challenges Ahead
Much of the recent progress on many AI tasks has been enabled in part by the availability of large quantities of labeled data. Yet humans are able to learn concepts from as little as a handful of examples. Meta-learning is a promising framework for addressing the problem of generalizing from small amounts of data, known as few-shot learning. In meta-learning, the model is itself a learning algorithm: it takes as input a training set and outputs a classifier. For few-shot learning, it is (meta-)trained directly to produce classifiers with good generalization performance on problems with very little labeled data. In this talk, I'll present an overview of the recent research that has made exciting progress on this topic (including my own) and will discuss the challenges as well as the research opportunities that remain.
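To make the "model as learning algorithm" idea concrete, here is a minimal sketch of one few-shot episode: a function that takes a small labeled support set as input and returns a classifier. The nearest-class-centroid rule used here is an illustrative choice (in the spirit of prototypical networks), not the specific method of the talk; all names are hypothetical.

```python
import numpy as np

def make_classifier(support_x, support_y):
    """The 'model' is a learning algorithm: it maps a small labeled
    support set to a classifier (here, nearest class centroid)."""
    classes = np.unique(support_y)
    centroids = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )

    def classify(x):
        # Squared Euclidean distance from each query point to each centroid.
        d = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        return classes[d.argmin(axis=1)]

    return classify

# A 2-way, 2-shot episode: the tiny "training set" fed to the meta-learner.
support_x = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
support_y = np.array([0, 0, 1, 1])
clf = make_classifier(support_x, support_y)

query_x = np.array([[0.2, 0.5], [4.8, 5.5]])
print(clf(query_x))  # -> [0 1]
```

In a full meta-learning setup, the feature space (or the mapping into it) would itself be trained across many such episodes so that the resulting classifiers generalize well from few examples.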
Showing items related by title, author, creator and subject.
Mehta, Nishant A. (Georgia Institute of Technology, 2013-05-15) Given the "right" representation, learning is easy. This thesis studies representation learning and meta-learning, with a special focus on sparse representations. Meta-learning is fundamental to machine learning, and it ...
Berlind, Christopher (Georgia Institute of Technology, 2015-07-22) Traditional supervised machine learning algorithms are expected to have access to a large corpus of labeled examples, but the massive amount of data available in the modern world has made unlabeled data much easier to ...
Beier, Margaret E. (Georgia Institute of Technology, 2003-12-01)