Visual Search and Summarization
Widespread visual sensors and unprecedented connectivity have left us awash with visual data: online photo collections, home videos, news footage, medical images, and surveillance feeds. How can we efficiently browse image and video collections based on semantically meaningful criteria? How can we bring order to the data, beyond manually defined keyword tags? We are exploring these questions in our recent work on interactive visual search and summarization.

I will first present a novel form of interactive feedback for visual search, in which a user helps pinpoint the content of interest by making visual comparisons between their envisioned target and reference images. The approach relies on a powerful mid-level representation of interpretable relative attributes to connect the user's descriptions to the system's internal features. Whereas traditional feedback limits input to coarse binary relevance labels, the proposed "WhittleSearch" lets a user state precisely what about an image is relevant, leading to more rapid convergence to the desired content.

Turning to issues in video browsing, I will then present our work on automatic summarization of egocentric videos. Given a long video captured with a wearable camera, our method produces a short storyboard summary. Whereas existing summarization methods define sampling-based objectives (e.g., to maximize diversity in the output summary), we take a "story-driven" approach that predicts the high-level importance of objects and the influence they carry between subevents. We show this leads to substantially more accurate summaries, allowing a viewer to quickly understand the gist of a long video.

This is work done with Adriana Kovashka, Yong Jae Lee, Devi Parikh, and Lu Zheng.
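To make the relative-attribute feedback idea concrete, below is a minimal Python sketch of the kind of comparative filtering WhittleSearch enables: a user asserts that the envisioned target has more or less of an attribute than a reference image, and the system ranks database images by how many such constraints they satisfy. The attribute scores, image ids, and the `Comparison` structure are illustrative assumptions, not the system's actual implementation.

```python
# Sketch of relative-attribute feedback in the spirit of WhittleSearch.
# All data structures and scores here are illustrative assumptions.

from dataclasses import dataclass
from typing import Dict, List

# Each database image has a real-valued score per attribute
# (e.g., output of a learned ranking function for "formal" or "shiny").
AttributeScores = Dict[str, float]

@dataclass
class Comparison:
    """User feedback: the envisioned target is MORE/LESS `attribute` than `reference_id`."""
    attribute: str
    reference_id: str
    more: bool  # True -> target should score higher on this attribute than the reference

def rank_by_feedback(
    database: Dict[str, AttributeScores],
    feedback: List[Comparison],
) -> List[str]:
    """Order database images by how many relative-attribute constraints they satisfy."""
    def satisfied(scores: AttributeScores, c: Comparison) -> bool:
        ref_score = database[c.reference_id][c.attribute]
        return scores[c.attribute] > ref_score if c.more else scores[c.attribute] < ref_score

    def count(img_id: str) -> int:
        return sum(satisfied(database[img_id], c) for c in feedback)

    return sorted(database, key=count, reverse=True)

if __name__ == "__main__":
    db = {
        "img1": {"formal": 0.9, "shiny": 0.2},
        "img2": {"formal": 0.4, "shiny": 0.8},
        "img3": {"formal": 0.6, "shiny": 0.5},
    }
    fb = [Comparison("formal", reference_id="img2", more=True),
          Comparison("shiny", reference_id="img1", more=True)]
    print(rank_by_feedback(db, fb))  # images satisfying the most comparisons come first
```

The point of the sketch is the interaction model: each round of comparative feedback whittles the candidate set toward images consistent with all of the user's statements, rather than re-weighting features from binary relevant/irrelevant labels.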
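For the story-driven summarization piece, a toy dynamic program conveys the flavor of the objective: choose an ordered chain of subevents that balances predicted object importance against the influence between consecutive chosen subevents. The scores and the formulation below are hypothetical stand-ins, not the method presented in the talk.

```python
# Toy dynamic-programming sketch of "story-driven" keyframe selection:
# pick k subevents, in temporal order, trading off predicted importance
# against the influence between consecutive chosen subevents.
# Scores and formulation are illustrative assumptions.

from typing import List, Tuple

def select_storyboard(
    importance: List[float],       # importance[i]: predicted importance of subevent i
    influence: List[List[float]],  # influence[i][j]: how strongly subevent i leads to j (i < j)
    k: int,                        # number of storyboard frames to keep
) -> Tuple[float, List[int]]:
    n = len(importance)
    NEG = float("-inf")
    # best[m][j]: best score of a chain of m subevents ending at subevent j
    best = [[NEG] * n for _ in range(k + 1)]
    back = [[-1] * n for _ in range(k + 1)]
    for j in range(n):
        best[1][j] = importance[j]
    for m in range(2, k + 1):
        for j in range(n):
            for i in range(j):
                if best[m - 1][i] == NEG:
                    continue
                score = best[m - 1][i] + importance[j] + influence[i][j]
                if score > best[m][j]:
                    best[m][j] = score
                    back[m][j] = i
    # Recover the highest-scoring chain of length k.
    end = max(range(n), key=lambda j: best[k][j])
    chain = [end]
    for m in range(k, 1, -1):
        chain.append(back[m][chain[-1]])
    chain.reverse()
    return best[k][end], chain

if __name__ == "__main__":
    imp = [0.2, 0.9, 0.1, 0.7, 0.8]
    infl = [[0.0] * 5 for _ in range(5)]
    infl[1][3], infl[3][4], infl[1][4] = 0.6, 0.5, 0.1
    print(select_storyboard(imp, infl, k=3))  # e.g. (3.5, [1, 3, 4])
```

The contrast with sampling-based objectives is that the chain score rewards subevents that follow from one another, not just frames that are individually diverse or representative.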
- IRIM Seminar Series