USC Andrew and Erna Viterbi School of Engineering
USC Signal and Image Processing Institute
USC Ming Hsieh Department of Electrical and Computer Engineering
University of Southern California

Technical Report USC-SIPI-424

“Experimental Design and Evaluation Methodology for Human-Centric Visual Quality Assessment”

by Yu-Chieh Lin

December 2015

This thesis presents an extensive study of human-centric visual quality assessment (VQA), organized around three major topics: 1) the design of a dataset for streaming video quality assessment, 2) the development of a new and effective video quality assessment index, and 3) the exploration of a new methodology for human visual quality assessment based on the notion of just-noticeable differences (JND).

For the first topic, we present in Chapter 3 a high-definition VQA dataset, called MCL-V, that captures two typical distortion types in streaming video services. It contains 12 source video clips and 96 distorted video clips with subjective assessment scores. The source clips are selected from a large pool of public-domain video sequences with representative and diversified content, and both distortion types are perceptually adjusted to distinguishable distortion levels. An improved pairwise comparison method is adopted for subjective evaluation to reduce evaluation time, and several VQA algorithms are evaluated against the MCL-V dataset.

For the second topic, we propose in Chapter 4 two objective indices for predicting subjective video quality: a fusion-based video quality assessment (FVQA) index and an ensemble-learning video quality assessment (EVQA) index. The FVQA index first classifies video sequences by their content complexity to reduce content diversity within each group, and then fuses several VQA methods into a final quality score, where the fusion coefficients are learned from training samples in the same group. Motivated by ensemble learning, the EVQA index extends FVQA further: it fuses multiple VQA methods with diverse and complementary merits so that the fused outcome outperforms that of any single method. The superior performance of EVQA is demonstrated by comparison with other video quality indices on several benchmark video quality datasets.

For the third topic, we propose in Chapter 5 a new human-centric methodology for visual quality assessment based on the JND notion. The JND is defined as the minimum difference between two visual stimuli that can be detected, and it has been used to enhance perceptual visual quality in the context of image/video compression. We first argue that the perceived quality of coded image/video is a staircase function with several discrete jump points determined by JNDs. We then present a novel bisection method for performing the JND test on JPEG-coded images. Finally, we construct a JND dataset, called MCL-JCI, that contains 50 source images, and we analyze the relationship between the source content and the number of distinguishable quality levels it supports. The impact of JND-based quality assessment on image/video coding is also discussed.
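To illustrate the aggregation step behind pairwise subjective evaluation, the following minimal Python sketch converts a matrix of pairwise preference counts into quality scale values using standard Thurstone Case V scaling. The improved pairwise procedure used for MCL-V is not detailed in this abstract, so the model choice, the `counts` data, and all variable names here are illustrative assumptions rather than the report's actual protocol.

```python
# A generic sketch of turning pairwise comparison counts into quality
# scale values via Thurstone Case V scaling. The count matrix is made-up
# illustration data, not results from the MCL-V study.
import numpy as np
from scipy.stats import norm

# counts[i, j] = number of viewers who judged clip i better than clip j.
counts = np.array([
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
], dtype=float)

totals = counts + counts.T                  # comparisons per pair
with np.errstate(divide="ignore", invalid="ignore"):
    p = np.where(totals > 0, counts / totals, 0.5)
p = np.clip(p, 0.02, 0.98)                  # avoid infinite z-scores
z = norm.ppf(p)                             # pairwise scale differences
scale = z.mean(axis=1)                      # Thurstone Case V estimate
print(scale)                                # higher = better quality
```

Compared with collecting absolute ratings for every clip, such pairwise designs trade a simpler per-trial judgment for more trials, which is why a time-saving comparison schedule matters.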
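The FVQA fusion step can be pictured as grouped linear regression. The sketch below is a minimal illustration under that reading, not the report's actual implementation: it fits one set of fusion weights per content-complexity group by least squares. The component metric names, the toy data, and the helpers `fit_fusion_weights` and `predict` are all hypothetical.

```python
# Minimal sketch of fusion-based quality prediction in the spirit of FVQA:
# clips are grouped by a content-complexity label, and a per-group linear
# fusion of several existing VQA scores is fit to subjective scores (MOS).
import numpy as np

# Hypothetical training data: each row holds scores from three component
# VQA methods (e.g., PSNR, SSIM, VIF) for one distorted clip.
scores = np.array([
    [32.1, 0.91, 0.72],
    [28.4, 0.85, 0.60],
    [35.0, 0.95, 0.81],
    [25.2, 0.78, 0.51],
])
mos = np.array([3.8, 3.0, 4.4, 2.3])        # subjective quality labels
groups = np.array([0, 1, 0, 1])             # content-complexity group ids

def fit_fusion_weights(scores, mos, groups):
    """Fit one linear fusion (weights plus bias) per content group."""
    weights = {}
    for g in np.unique(groups):
        mask = groups == g
        # Append a constant column so least squares also learns a bias.
        X = np.hstack([scores[mask], np.ones((mask.sum(), 1))])
        w, *_ = np.linalg.lstsq(X, mos[mask], rcond=None)
        weights[g] = w
    return weights

def predict(scores_row, group, weights):
    """Fuse component VQA scores into one predicted quality score."""
    x = np.append(scores_row, 1.0)
    return float(x @ weights[group])

weights = fit_fusion_weights(scores, mos, groups)
print(predict(np.array([30.0, 0.88, 0.65]), 0, weights))
```

The grouping reflects the design rationale stated above: fusion weights learned within a group of similar content complexity face less content diversity than a single global regression would.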
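The bisection JND test on JPEG-coded images can be sketched as a binary search over the JPEG quality factor, assuming viewer responses are monotone in quality (distinguishable below some threshold, indistinguishable above it). The Pillow-based encoder and the `is_distinguishable` callback below are illustrative stand-ins for the subjective procedure, not code from the report.

```python
# A bisection-style search for one JND point on the JPEG quality scale,
# given a callback that reports whether two encodings look different.
from io import BytesIO
from PIL import Image

def encode_jpeg(img, quality):
    """Encode a PIL image at the given JPEG quality, then decode it."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

def find_jnd(img, anchor_quality, is_distinguishable, lo=1):
    """Return the lowest quality whose encoding is still perceptually
    indistinguishable from the anchor encoding (one JND point)."""
    anchor = encode_jpeg(img, anchor_quality)
    hi = anchor_quality          # anchor vs. itself: indistinguishable
    # Assumption: quality `lo` is low enough to be clearly distinguishable.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_distinguishable(anchor, encode_jpeg(img, mid)):
            lo = mid             # boundary lies above mid
        else:
            hi = mid             # mid still looks like the anchor
    return hi
```

In a subjective test, `is_distinguishable` would be a viewer's forced-choice response. Repeating the search with the returned quality as the new anchor enumerates successive JND points, matching the staircase view of perceived quality described above.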

To download the report in PDF format, click here: USC-SIPI-424.pdf (2.5 MB)