Perceptual Image Quality
Introduction
- What is image quality assessment (IQA)?
- Are technical image quality and aesthetic image quality different?
- What are the popular types of image quality assessment: full-reference vs. no-reference?
- What are the popular benchmarks?
- How do people collect ground truth for perceptual image quality measurement?
- How do we convert subjective ratings into objective metrics?
- What are the computational challenges involved?
- Is personalization required?
- What are the important applications of perceptual image quality assessment?
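Two of the questions above — collecting ground truth and converting subjective ratings into objective metrics — can be sketched in code. Assuming ratings arrive as an images × observers array, the Mean Opinion Score (MOS) is the per-image mean, and a candidate objective metric is then validated by correlating its predictions against MOS: PLCC for linear agreement, SROCC for rank agreement. Function names here are illustrative, not from any particular library.

```python
import numpy as np
from scipy import stats

def ratings_to_mos(ratings):
    """Collapse an (images x observers) array of raw subjective ratings
    into per-image Mean Opinion Scores and their standard deviations."""
    ratings = np.asarray(ratings, dtype=float)
    return ratings.mean(axis=1), ratings.std(axis=1, ddof=1)

def evaluate_metric(predicted, mos):
    """Correlate an objective metric's predictions with subjective MOS.
    PLCC measures linear agreement, SROCC measures rank agreement."""
    plcc, _ = stats.pearsonr(predicted, mos)
    srocc, _ = stats.spearmanr(predicted, mos)
    return plcc, srocc
```

In practice, benchmark papers report both correlations because a metric can rank images correctly (high SROCC) while being nonlinearly related to MOS (lower PLCC).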
Important Datasets
*Under construction*
Dataset | Number of Images/Videos | Ratings Metadata | Number of Annotators | Annotator Instructions | License Information | Description and Creation Process |
---|---|---|---|---|---|---|
LIVE | 29 reference, 982 distorted | MOS from multiple observers; Mean and standard deviation | 101 | Conducted Mean Opinion Score (MOS) tests with paired comparisons | Not specified | Contains distorted images generated from reference images. Collected human-rated Mean Opinion Scores (MOS). |
TID2013 | 25 reference, 3000 distorted | MOS for various distortion types; Std. dev. and skewness | 8 - 25 | Gave explicit instructions for scoring distortions | Not specified | Reference and distorted images with various distortions. Collected MOS for each distortion type from observers. |
CSIQ | 30 reference, 866 distorted | MOS for different distortion types; Mean, median, and range | 6 - 25 | Performed ACR-based subjective quality evaluations | Not specified | Reference and distorted images from different sources. MOS collected for various distortion types. |
MCL-JCI | 150 reference, 1650 distorted | MOS from multiple observers; Std. dev. and skewness | 6 - 15 | Conducted absolute categorical judgment experiments | Not specified | Includes reference and distorted images with diverse distortions. Collected in a controlled lab environment. |
TID2008 | 25 reference, 1700 distorted | MOS for various distortion types; Std. dev. and skewness | 25 | Provided detailed guidelines and protocols for subjective tests | Not specified | Contains reference and distorted images with multiple distortion types. Subjective scores gathered from observers. |
KADID-10k | 81 reference, 10,125 distorted | MOS from diverse observers; Std. dev. and skewness | 229 | Performed paired comparison tests | Creative Commons BY-NC-SA 4.0 | Large-scale database with diverse distortions. MOS collected from a variety of observers. |
KoNViD-1k | 1,200 videos | MOS for video sequences; Std. dev. and skewness | 120 - 180 | Collected MOS using Single Stimulus Continuous Quality Evaluation | Not specified | Focuses on video quality. Contains video sequences with distortion types. Provides MOS for each sequence. |
KOSMO-1k | 1350 videos | MOS for video sequences; Std. dev. and skewness | 120 - 180 | Collected MOS using Single Stimulus Continuous Quality Evaluation | Not specified | Focuses on video quality. Contains video sequences with distortion types. Provides MOS for each sequence. |
EVA | | | | | Not specified | Explainable Visual Aesthetics: image aesthetics dataset. |
RPCD | | | | | Not specified | Reddit Photo Critique Dataset: images paired with textual photo critiques. |
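The benchmarks above are typically used to validate objective metrics against their MOS labels. The simplest full-reference metric is PSNR, which scores a distorted image purely by pixel-wise error against its reference; a minimal NumPy sketch (illustrative, not a library API):

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images.
    Higher is better; identical images yield infinity."""
    ref = np.asarray(reference, dtype=float)
    dist = np.asarray(distorted, dtype=float)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")  # images are identical
    return 10.0 * np.log10(max_val ** 2 / mse)
```

PSNR correlates only weakly with perceived quality on the datasets above, which is exactly why perceptual metrics (SSIM and its successors, learned metrics) and the MOS-based benchmarks in this table exist.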
References
- https://github.com/chaofengc/Awesome-Image-Quality-Assessment
- https://towardsdatascience.com/deep-image-quality-assessment-30ad71641fac
- https://github.com/idealo/image-quality-assessment
- https://github.com/weizhou-geek/Image-Quality-Assessment-Benchmark
- https://en.wikipedia.org/wiki/Image_quality