Machine Learning for Video-Based Rendering

Title: Machine Learning for Video-Based Rendering
Author: Schodl, Arno; Essa, Irfan A.
Abstract: We recently introduced a new paradigm for computer animation, video textures, which allows us to generate novel animations from recorded video by replaying the video samples in a new order. Video sprites are a special type of video texture: instead of storing whole images, the object of interest is separated from the background, and the video samples are stored as a sequence of alpha-matted sprites with associated velocity information. They can be rendered anywhere on the screen to create a novel animation of the object. To create such an animation, we have to find a sequence of sprite samples that is both visually smooth and shows the desired motion. In this paper, we address both problems. To measure visual smoothness, we train a linear classifier that estimates the visual similarity between video samples. If the motion path is known in advance, we use a beam search algorithm to find a good sample sequence. We can also specify the motion interactively by precomputing a set of cost functions using Q-learning.
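
The sequencing step described in the abstract can be illustrated with a short sketch. The Python code below combines a learned linear dissimilarity score (standing in for the report's trained classifier) with a motion-control penalty, and uses beam search to pick a low-cost sequence of sprite samples. All function names, the per-sample feature/velocity layout, the cost weighting, and the weight vector are illustrative assumptions, not the report's implementation.

    import heapq
    import numpy as np

    def linear_dissimilarity(w, f_i, f_j):
        # Linear score over absolute feature differences; the weight vector w
        # plays the role of the trained linear classifier (assumed form).
        return float(np.dot(w, np.abs(f_i - f_j)))

    def beam_search_sequence(samples, w, target_path, beam_width=10, alpha=1.0):
        """Pick a low-cost sequence of sprite sample indices.

        samples     : list of dicts with 'features' and 'velocity' arrays (assumed layout)
        target_path : list of desired velocity vectors, one per output frame
        """
        # Each beam entry is (accumulated cost, list of chosen sample indices).
        beam = [(0.0, [i]) for i in range(len(samples))]
        beam = heapq.nsmallest(beam_width, beam, key=lambda e: e[0])

        for target_vel in target_path[1:]:
            candidates = []
            for cost, seq in beam:
                i = seq[-1]
                for j in range(len(samples)):
                    # Visual smoothness: how plausible is the cut from sample i to j?
                    smooth = linear_dissimilarity(w, samples[i]['features'],
                                                  samples[j]['features'])
                    # Motion control: how far is sample j's velocity from the target?
                    control = alpha * float(np.linalg.norm(samples[j]['velocity'] - target_vel))
                    candidates.append((cost + smooth + control, seq + [j]))
            # Keep only the beam_width best partial sequences.
            beam = heapq.nsmallest(beam_width, candidates, key=lambda e: e[0])

        return min(beam, key=lambda e: e[0])[1]

    # Toy usage with random stand-in data (not data from the report):
    rng = np.random.default_rng(0)
    samples = [{'features': rng.random(8), 'velocity': rng.random(2)} for _ in range(30)]
    w = rng.random(8)                       # stand-in for learned classifier weights
    path = [np.array([0.5, 0.5])] * 10      # constant desired velocity
    print(beam_search_sequence(samples, w, path))

The beam width trades search quality against computation; with a width of one the procedure degenerates to greedy selection. In the interactive setting described in the abstract, the known motion path is replaced by cost functions precomputed with Q-learning rather than searched at render time.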
Type: Technical Report
URI: http://hdl.handle.net/1853/3421
Date: 2000
Relation: GVU Technical Report; GIT-GVU-00-11
Publisher: Georgia Institute of Technology


Files in this item

00-11.pdf (170.9 KB, PDF)
