3260 papers • 126 benchmarks • 313 datasets
See No-Reference Image Quality Assessment (NR-IQA).
This work presents a systematic and scalable approach to creating KonIQ-10k, the largest IQA dataset to date, consisting of 10,073 quality-scored images, and proposes a novel deep learning model (KonCept512) that shows excellent generalization beyond the test set.
This work introduces the largest (by far) subjective picture quality database, containing about 40,000 real-world distorted pictures and 120,000 patches, and builds deep region-based architectures that learn to produce state-of-the-art global picture quality predictions as well as useful local picture quality maps.
This work uses prediction of distortion type and degree as an auxiliary task to learn features from an unlabeled image dataset containing a mixture of synthetic and realistic distortions and trains a deep Convolutional Neural Network using a contrastive pairwise objective to solve the auxiliary problem.
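The contrastive pairwise objective mentioned above can be illustrated with a minimal sketch. This uses the classic margin-based contrastive loss as a hypothetical stand-in for the paper's actual objective: embeddings of images sharing the same (pseudo-labeled) distortion type and degree are pulled together, while mismatched pairs are pushed at least a margin apart.

```python
import numpy as np

def contrastive_pairwise_loss(z1, z2, same, margin=1.0):
    """Margin-based contrastive pairwise loss (illustrative sketch).

    z1, z2 : embedding vectors of two images
    same   : True if both share distortion type/degree (the auxiliary label)
    """
    d = np.linalg.norm(z1 - z2)
    if same:
        # Matching pair: penalize any distance between the embeddings.
        return 0.5 * d ** 2
    # Non-matching pair: penalize only if closer than the margin.
    return 0.5 * max(0.0, margin - d) ** 2
```

Identical embeddings of a matching pair incur zero loss, while a non-matching pair contributes loss only when it sits inside the margin.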
This work introduces two novel quality-relevant auxiliary tasks at the batch and sample levels to enable test-time adaptation (TTA) for blind IQA, using a group contrastive loss and a relative rank loss at the sample level to make the model quality-aware and adapt it to the target data.
The proposed PQR method is shown not only to speed up the convergence of deep model training, but also to greatly improve the achievable level of quality prediction accuracy relative to scalar quality score regression methods.
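The contrast with scalar regression can be sketched as follows: instead of regressing a single mean opinion score (MOS), the target becomes a probability distribution over discrete quality levels. The Gaussian soft-binning scheme and all names here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def soft_quality_label(mos, levels=np.arange(1, 6), sigma=0.5):
    """Turn a scalar MOS into a probability vector over discrete quality
    levels via Gaussian soft-binning (hypothetical discretization)."""
    w = np.exp(-0.5 * ((levels - mos) / sigma) ** 2)
    return w / w.sum()

def cross_entropy(p_true, p_pred, eps=1e-12):
    """Distribution-matching loss used in place of squared scalar error."""
    return -np.sum(p_true * np.log(p_pred + eps))

# A MOS of 3.2 becomes a distribution peaked at level 3, so the
# network is trained against a richer target than one number.
label = soft_quality_label(3.2)
```

Training against the full distribution gives the network gradient signal about how confidently an image sits at each quality level, rather than a single scalar error.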
This paper proposes a novel no-reference image quality assessment method that significantly outperforms state-of-the-art methods on two realistic blur image databases and achieves comparable performance on two synthetic blur image databases.
Image content variation is a typical and challenging problem in no-reference image quality assessment (NR-IQA). This work pays special attention to the impact of image content variation on NR-IQA methods. To better analyze this impact, we focus on blur-dominated distortions to exclude the impacts of distortion-type variations. We empirically show that current NR-IQA methods are inconsistent with human visual perception when predicting the relative quality of image pairs with different image contents. Since the deep semantic features of pretrained image classification neural networks always contain discriminative image content information, we put forward a new NR-IQA method based on semantic feature aggregation (SFA) to alleviate the impact of image content variation. Specifically, instead of resizing the image, we first crop multiple overlapping patches over the entire distorted image to avoid introducing geometric deformations. Then, according to an adaptive layer selection procedure, we extract deep semantic features by leveraging the power of a pretrained image classification model for its inherent content-aware property. After that, the local patch features are aggregated using several statistical structures. Finally, a linear regression model is trained to map the aggregated global features to image quality scores. The proposed method, SFA, is compared with nine representative blur-specific NR-IQA methods, two general-purpose NR-IQA methods, and two extra full-reference IQA methods on Gaussian blur images (with and without Gaussian noise/JPEG compression) and realistic blur images from multiple databases, including LIVE, TID2008, TID2013, MLIVE1, MLIVE2, BID, and CLIVE. Experimental results show that SFA is superior to the state-of-the-art NR methods on all seven databases. It is also verified that deep semantic features play a crucial role in addressing image content variation, and this provides a new perspective for NR-IQA.
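The SFA pipeline described above (crop overlapping patches, extract per-patch features, aggregate with statistics, fit a linear regressor) can be sketched end to end. This is a toy sketch, not the paper's implementation: `patch_features` substitutes raw pixel statistics for the pretrained-CNN semantic features, and the images and quality scores are synthetic.

```python
import numpy as np

def crop_patches(img, size=8, stride=4):
    """Crop overlapping patches over the whole image, without resizing
    (avoids geometric deformation, as SFA advocates)."""
    H, W = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, H - size + 1, stride)
            for x in range(0, W - size + 1, stride)]

def patch_features(patch):
    """Stand-in for deep semantic features from a pretrained CNN
    (hypothetical: simple pixel statistics)."""
    return np.array([patch.mean(), patch.std()])

def aggregate(feats):
    """Aggregate local patch features with statistics (mean and std
    across patches) into one global feature vector."""
    F = np.stack(feats)
    return np.concatenate([F.mean(axis=0), F.std(axis=0)])

# Toy data: three images at increasing contrast, with toy quality scores.
rng = np.random.default_rng(0)
imgs = [rng.random((32, 32)) * s for s in (0.2, 0.5, 1.0)]
y = np.array([1.0, 3.0, 5.0])

X = np.stack([aggregate([patch_features(p) for p in crop_patches(im)])
              for im in imgs])
# Linear regression maps aggregated global features to quality scores.
Xb = np.hstack([X, np.ones((3, 1))])          # add bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)    # least-squares fit
```

With real data, the linear regressor is trained on the CNN-derived features of many scored images and then predicts a quality score for any new image via the same crop-extract-aggregate path.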
A new no-reference method for tone-mapped image quality assessment, based on multi-scale and multi-layer features extracted from a pre-trained deep convolutional neural network, that achieves better performance.
A BIQA model and an approach to training it on multiple IQA databases (of different distortion scenarios) simultaneously are developed, demonstrating that the model optimized by the proposed training strategy is effective in blindly assessing image quality both in the laboratory and in the wild, outperforming previous BIQA methods by a large margin.
A deep bilinear model for blind image quality assessment that works for both synthetically and authentically distorted images is proposed, achieving state-of-the-art performance on both synthetic and authentic IQA databases.