Flicker perception on digital videos
As digital videos become tremendously pervasive in daily life, accurate and automatic video quality assessment (VQA) tools are highly desirable for optimizing video processing systems so that they deliver more satisfactory quality of experience to the end user. One potentially important but poorly understood aspect of VQA model design is the effect of temporal visual masking on the visibility of temporal distortions. Notably, the mere presence of spatial or temporal distortions does not imply a corresponding degree of quality degradation, since visual masking can strongly reduce or completely eliminate the visibility of distortions. In this dissertation, I study flicker perception on digital videos. The contributions are fourfold: using human psychophysical studies, I identify two previously unknown mechanisms underlying the motion silencing phenomenon, a form of temporal visual masking; I develop a flicker video database to characterize flicker visibility on naturalistic videos; and I propose a perceptual flicker visibility index and a new VQA model that account for temporal flicker masking. First, by conducting psychophysical human experiments, I discover a lawful relationship between object motion and object changes when motion silencing occurs, expressed as a function of the objects' velocity, flicker frequency, and spacing. From spectral analysis of the visual stimuli, I develop a simple filter-based, spatiotemporal flicker detector model as a working hypothesis for motion silencing. The proposed model successfully captures the psychophysical data over a wide range of velocities and flicker frequencies. Second, I characterize the effects of eccentricity and spatiotemporal energy on motion silencing. From human psychophysics, I measure the amount of motion silencing as a function of eccentricity and find that the threshold velocity for motion silencing decreases approximately linearly with log eccentricity.
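The abstract does not specify the filter-based detector's form; as a rough illustration of the underlying idea, the sketch below (Python, with invented names and parameters) measures how much of a pixel's temporal luminance trace falls into a narrow band around a candidate flicker frequency, a purely temporal simplification of the spatiotemporal filtering the dissertation proposes.

```python
import numpy as np

def flicker_band_energy(trace, fps, flicker_hz, bandwidth=1.0):
    """Fraction of a temporal signal's energy in a narrow band around flicker_hz.

    A purely temporal stand-in for the spatiotemporal band-pass filters of the
    dissertation's detector model; names and parameters are illustrative only.
    """
    spectrum = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    band = (freqs >= flicker_hz - bandwidth) & (freqs <= flicker_hz + bandwidth)
    return spectrum[band].sum() / max(spectrum.sum(), 1e-12)

fps = 60.0
t = np.arange(0, 2.0, 1.0 / fps)                        # two seconds of frames
flickering = 0.5 + 0.25 * np.sin(2 * np.pi * 7.0 * t)   # 7 Hz luminance flicker
steady = np.full_like(t, 0.5)                           # constant luminance

ratio_flicker = flicker_band_energy(flickering, fps, flicker_hz=7.0)
ratio_steady = flicker_band_energy(steady, fps, flicker_hz=7.0)
```

A flickering trace concentrates its energy in the target band, while a steady trace contributes essentially none, which is the kind of band-limited signature a filter bank tuned to flicker frequencies would pick up.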
I propose a plausible explanation: as eccentricity increases, the combined motion-flicker signal falls outside the narrow spatiotemporal frequency response regions of the receptive fields, thereby reducing flicker visibility. Third, I investigate the influence of motion and eccentricity on the visibility of flicker distortions in naturalistic videos. I develop the LIVE Flicker Video Database through a series of human subjective studies designed to understand flicker distortions as a function of object motion, eccentricity, flicker frequency, and video quality. I describe the motion silencing effects on flicker distortions in naturalistic videos and propose a model of flicker visibility on naturalistic videos based on backward visual masking. Lastly, I propose a new VQA model, called Flicker Sensitive – MOtion-based Video Integrity Evaluation (FS-MOVIE), which accounts for temporal flicker masking. I augment the MOVIE Index by combining motion-tuned video integrity with a new perceptual flicker visibility index for natural videos. FS-MOVIE captures the local spectral signatures of flicker, predicts perceptually suppressed flicker, and evaluates video quality accordingly. FS-MOVIE not only substantially improves on the MOVIE Index, but is also highly competitive with top-performing VQA models tested on the LIVE VQA database.
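FS-MOVIE's actual formulation is not reproduced in this abstract; the fragment below is only a hypothetical sketch of the masking-aware principle it describes, namely that flicker co-located with strong motion silencing should contribute less to the predicted distortion. The function name, the maps, and the combination rule are all invented for illustration.

```python
import numpy as np

def masked_distortion_map(motion_error, flicker_map, silencing_map):
    """Hypothetical masking-aware pooling: flicker that coincides with strong
    motion silencing is perceptually suppressed, so it is down-weighted before
    being added to the motion-integrity error (all quantities are illustrative)."""
    visible_flicker = flicker_map * (1.0 - silencing_map)
    return motion_error + visible_flicker

motion_error = np.full((4, 4), 0.1)   # per-pixel motion-integrity error
flicker_map = np.full((4, 4), 0.5)    # raw flicker distortion strength

# Identical flicker, but the second region has strong motion (silencing 0.8).
static_region = masked_distortion_map(motion_error, flicker_map, np.zeros((4, 4)))
moving_region = masked_distortion_map(motion_error, flicker_map, np.full((4, 4), 0.8))
```

Under this toy rule the moving region receives a smaller distortion penalty than the static one for the same raw flicker, mirroring the temporal masking behavior the dissertation builds into its quality predictions.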