Supersizing Self-Supervision: Learning Perception and Action Without Human Supervision
This talk will discuss how to learn representations for perception and action without any manual supervision. Gupta will discuss how ConvNets for vision can be trained in a completely unsupervised manner using auxiliary tasks. Specifically, he will demonstrate how spatial context in images and viewpoint changes in videos can be used to train visual representations. Gupta will briefly introduce NEIL (Never Ending Image Learner), a computer program that runs 24/7 to automatically build visual detectors and common-sense knowledge from web data. Finally, he will discuss how end-to-end learning for actions can be performed using self-supervision. He will also discuss scaling issues; e.g., will this self-supervised learning scale up to multiple tasks? How can multiple robots be used to scale up the learning? He will demonstrate how competition across multiple robots is significantly better than collaboration for tasks such as grasping.
- IRIM Seminar Series