Multiple Global Affine Motion Models Used in Video Coding

Title: Multiple Global Affine Motion Models Used in Video Coding
Author: Li, Xiaohuan
Abstract: In low bit rate scenarios, a hybrid video coder (e.g. AVC/H.264) tends to allocate a greater portion of bits to motion vectors while spending fewer bits on residual errors. Motivated by this observation, a coding scheme is proposed that combines non-normative global motion models with conventional local motion vectors, describing the motion of a frame by affine motion parameter sets obtained through motion segmentation of the luminance channel. The motion segmentation adapts the number of motion objects to the video content. Six-parameter affine model sets are derived by linear regression from the scalable block-based motion fields estimated by the existing MPEG encoder. When the number of motion objects exceeds a certain threshold, the global affine models are disabled; otherwise, the four scaling factors of each affine model are compressed by a vector quantizer designed with a dedicated cache memory for efficient searching and coding. The affine motion information is signaled as syntax in the slice header. The global motion information is used to compensate those macroblocks whose Lagrange cost is minimized by the AFFINE mode. The rate-distortion cost is computed with a modified Lagrange equation that accounts for the perceptual discrimination of human vision in different areas. Besides increasing coding efficiency, the global affine model offers two features that improve the quality of the compressed video: i) when a frame contains more than one slice, the global affine motion model enhances the error resilience of the video stream, because the affine motion parameters are duplicated in the headers of the different slices of the same frame; and ii) the global motion model predicts a frame by warping the entire reference frame, which helps reduce blocking artifacts in the compensated frame.
Type: Dissertation
URI: http://hdl.handle.net/1853/14631
Date: 2007-03-05
Publisher: Georgia Institute of Technology
Subject: Motion estimation
Vector quantizer
H.264
Perceptual PSNR
Affine motion model
Department: Electrical and Computer Engineering
Advisor: Committee Chair: Jackson, Joel; Committee Member: Anderson, David; Committee Member: Fritz, Hermann; Committee Member: Mersereau, Russell; Committee Member: Yezzi, Anthony
Degree: Ph.D.


Files in this item:
li_xiaohuan_200705_phd.pdf (1015 Kb, PDF)
