Uncertainty estimation of visual attention models using spatiotemporal analysis
In this dissertation, we analyze eye-tracking data and video content to discover general patterns of human visual attention that can be used to estimate the reliability, or confidence, of the video saliency maps employed in many video processing applications. We first analyze eye-fixation data and identify patterns, such as map consistency and scene motion, that are useful for uncertainty estimation. Based on this analysis, we introduce a procedure that estimates the correlation between the eye-fixation data of a given video using its corresponding optical flow map. We also use this eye-fixation correlation analysis to design an unsupervised video feature for uncertainty estimation based on local spatiotemporal neighborhoods. Finally, we combine the findings from the eye-fixation correlation study with the analysis of the unsupervised uncertainty-estimation feature in a data-driven approach, directly obtaining a multi-factor estimation model that is both computationally efficient and effective at estimating uncertainty in video saliency detection.
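The core idea of relating eye-fixation consistency to scene motion can be illustrated with a minimal sketch. This is not the dissertation's method; it is a simplified, hypothetical example that motion-compensates the previous frame's fixation (density) map with an optical flow field and scores reliability as the correlation with the current frame's fixation map. The function names (`warp_by_flow`, `fixation_consistency`) and the integer-valued flow representation are assumptions introduced here for illustration.

```python
# Hypothetical sketch: score the reliability of a saliency/fixation estimate by
# checking how well consecutive eye-fixation maps agree after compensating for
# scene motion with an optical flow field.
import numpy as np

def warp_by_flow(prev_map, flow):
    """Warp prev_map forward by an integer-valued optical flow field.

    flow[y, x] = (dy, dx) is the displacement of the pixel at (y, x).
    Mass that would leave the frame is simply dropped.
    """
    h, w = prev_map.shape
    warped = np.zeros_like(prev_map)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            ny, nx = y + int(dy), x + int(dx)
            if 0 <= ny < h and 0 <= nx < w:
                warped[ny, nx] += prev_map[y, x]
    return warped

def fixation_consistency(prev_map, curr_map, flow):
    """Pearson correlation between the flow-warped previous fixation map and
    the current one. A high value suggests viewers attend to the same moving
    content, i.e. the saliency estimate for this frame is more trustworthy."""
    warped = warp_by_flow(prev_map, flow).ravel()
    curr = curr_map.ravel()
    if warped.std() == 0 or curr.std() == 0:
        return 0.0  # degenerate (flat) map: no meaningful correlation
    return float(np.corrcoef(warped, curr)[0, 1])
```

For example, if the current fixation map is exactly the previous one shifted along the flow, the consistency score is 1.0; fixations landing somewhere unrelated to the motion-predicted location drive the score toward zero, flagging a less reliable frame.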