Learning a Blind Measure of Perceptual Image Quality

Huixuan Tang, Neel Joshi, and Ashish Kapoor. In CVPR 2011.
Abstract
    It is often desirable to evaluate an image based on its quality. For many computer vision applications, a perceptually meaningful measure is the most relevant for evaluation; however, most commonly used measures do not map well to human judgements of image quality. A further complication of many existing image measures is that they require a reference image, which is often not available in practice. In this paper, we present a “blind” image quality measure, where potentially neither the ground-truth image nor the degradation process is known. Our method uses a set of novel low-level image features in a machine learning framework to learn a mapping from these features to subjective image quality scores. The image quality features stem from natural image measures and texture statistics. Experiments on a standard image quality benchmark dataset show that our method outperforms the current state of the art.
Publication
  • Tang, H., Joshi, N., and Kapoor, A., Learning a Blind Measure of Perceptual Image Quality, In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, 2011. (Poster). [paper(.pdf)] [poster(.pdf)] [supplementary material(.pdf)]
Performance evaluation on LIVE and TID2008
Our setting in the paper is different from (in fact, more difficult than) the common setting used in other learning-based image quality assessment methods. For fair comparison with existing and emerging methods, we share our scores and training/testing splits under the common 80/20 protocol, as well as scores on the TID2008 dataset.
  • New scores on LIVE (.mat)
    This new result uses an 80/20 training/testing split and achieves a Spearman correlation of 0.9056 (the correlation computation is sketched after this list). Our CVPR paper used 50/50 training/testing splits and therefore reports slightly lower performance.
  • Scores on TID2008 (.mat)
    This result was obtained from distorted images of all 17 distortion types. For each of the 1000 trials we ran, the model was trained on 1360 distorted images (from 20 original images) and tested on 272 distorted images (from 4 original images); the split-by-reference protocol is sketched after this list. The last 68 distorted images were excluded from the experiment because their reference image is not a natural image.
    Our LBIQ measure achieves a Spearman correlation of 0.7432. For reference, this is approximately the performance of VIF (ranked 3rd among many full-reference metrics), as reported on the TID2008 dataset website.
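
For concreteness, here is a minimal sketch of how the shared scores can be compared against subjective ratings via Spearman rank correlation. The file and variable names ('live_scores.mat', 'predicted', 'dmos') are hypothetical placeholders; inspect the shared .mat files for the actual names.

    # Minimal sketch: Spearman rank correlation between predicted and
    # subjective quality scores. File and variable names are hypothetical.
    from scipy.io import loadmat
    from scipy.stats import spearmanr

    data = loadmat('live_scores.mat')       # hypothetical filename
    predicted = data['predicted'].ravel()   # model quality scores
    subjective = data['dmos'].ravel()       # subjective (DMOS) ratings

    rho, _ = spearmanr(predicted, subjective)
    print('Spearman correlation: %.4f' % rho)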
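
The TID2008 protocol above splits by reference image rather than by distorted image, so that no distorted version of a test original appears in training. Below is a sketch of one such trial, assuming 68 distorted versions (17 distortion types at 4 levels) per natural reference image; this is an illustration of the split, not the training code itself.

    # Sketch of one TID2008 trial: the 24 natural reference images are
    # split 20/4, and every distorted image follows its reference, giving
    # 20 * 68 = 1360 training and 4 * 68 = 272 testing images.
    import numpy as np

    rng = np.random.default_rng(seed=0)     # fixed seed for illustration
    ref_of = np.repeat(np.arange(24), 68)   # reference id of each distorted image

    perm = rng.permutation(24)
    train_refs, test_refs = perm[:20], perm[20:]

    train_idx = np.flatnonzero(np.isin(ref_of, train_refs))
    test_idx = np.flatnonzero(np.isin(ref_of, test_refs))
    assert len(train_idx) == 1360 and len(test_idx) == 272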