Motion Based Decompositing of Video

Title: Motion Based Decompositing of Video
Author: Brostow, Gabriel Julian; Essa, Irfan A.
Abstract: We present a method to decompose video sequences into layers that represent the relative depths of complex scenes. Our method combines spatial information with temporal occlusions to determine the relative depths of these layers. Spatial information is obtained through edge detection and a customized contour completion algorithm. Activity in a scene is used to extract temporal occlusion events, which are, in turn, used to classify objects as occluders or occludees. The path traversed by the moving objects determines the segmentation of the scene. Several examples of decompositing and compositing of video are shown. This approach can be applied in the pre-processing of sequences for compositing or tracking purposes and to determine the approximate 3D structure of a scene.
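The temporal-occlusion cue described in the abstract (objects classified as occluders or occludees when one passes in front of another) can be illustrated with a minimal sketch. This is a toy stand-in, not the authors' algorithm: it assumes each frame is a labeled segmentation mask, and it orders two labels by noting that the region whose visible area shrinks during an overlap is the occludee (farther layer).

```python
import numpy as np

def visible_areas(frames, labels):
    """Count visible pixels per label in each labeled frame."""
    return {lab: np.array([(f == lab).sum() for f in frames]) for lab in labels}

def order_by_occlusion(frames, a, b):
    """Classify which of two labels occludes the other.

    The label whose visible area drops during the sequence is the
    occludee (farther); the other is the occluder (nearer).  A toy
    illustration of the temporal-occlusion cue, not the paper's method.
    """
    areas = visible_areas(frames, (a, b))
    loss_a = areas[a].max() - areas[a].min()
    loss_b = areas[b].max() - areas[b].min()
    return (a, b) if loss_b > loss_a else (b, a)  # (nearer, farther)

# Synthetic sequence: label 2 (a moving 3x3 square) passes over label 1
# (a static 3x3 square), temporarily hiding it.
frames = []
for x in range(0, 8):
    f = np.zeros((10, 12), dtype=int)
    f[4:7, 5:8] = 1          # static object
    f[4:7, x:x + 3] = 2      # moving object drawn on top (the occluder)
    frames.append(f)

print(order_by_occlusion(frames, 1, 2))  # -> (2, 1): label 2 is nearer
```

In the paper's setting the segmentation itself comes from edge detection plus contour completion; here the masks are given so the depth-ordering step can stand alone.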
Type: Technical Report
Date: 1999
Relation: GVU Technical Report;GIT-GVU-99-31
Publisher: Georgia Institute of Technology
Subject: Vision; Occlusion tracking; Special effects

All materials in SMARTech are protected under U.S. Copyright Law and all rights are reserved, unless otherwise specifically indicated on or in the materials.

Files in this item

99-31.pdf (162.3 KB, PDF)
