3260 papers • 126 benchmarks • 313 datasets
An Image Quality Assessment approach where no reference image information is available to the model. Sometimes referred to as Blind Image Quality Assessment (BIQA).
(Image credit: Papersgraph)
These leaderboards are used to track progress in No-Reference Image Quality Assessment.
Use these libraries to find No-Reference Image Quality Assessment models and implementations.
No subtasks available.
This work proposes a novel concept for measuring face quality with an arbitrary face recognition model; it avoids a dedicated training phase entirely and outperforms all baseline approaches by a large margin.
This work proposes a no-reference image quality assessment (NR-IQA) approach that learns from rankings (RankIQA), and demonstrates how this approach can be made significantly more efficient than traditional Siamese Networks by forward propagating a batch of images through a single network and backpropagating gradients derived from all pairs of images in the batch.
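The batch trick described above can be sketched as a pairwise hinge ranking loss computed over every ordered pair in a single batch of predicted scores, so one forward pass replaces many Siamese pairs. This is a minimal numpy sketch; the function name, margin value, and hinge form are assumptions, not the paper's exact objective.

```python
import numpy as np

def pairwise_ranking_loss(scores, ranks, margin=1.0):
    """Hinge ranking loss over all ordered pairs in one batch (a sketch).

    scores: predicted quality scores from a single forward pass.
    ranks:  known relative quality ordering (higher = better).
    In a deep-learning framework, gradients from every pair would flow
    back through the one shared network, instead of running a separate
    Siamese pair per training step.
    """
    scores = np.asarray(scores, dtype=float)
    ranks = np.asarray(ranks)
    # Pair (i, j) contributes when image i is ranked above image j.
    better = ranks[:, None] > ranks[None, :]
    diffs = scores[:, None] - scores[None, :]      # s_i - s_j for all pairs
    losses = np.maximum(0.0, margin - diffs)       # hinge per pair
    n_pairs = better.sum()
    return float((losses * better).sum() / max(n_pairs, 1))

# A batch whose predictions already respect the ranking by a wide margin
loss = pairwise_ranking_loss([3.0, 2.0, 1.0], ranks=[2, 1, 0], margin=0.5)
```

With a batch of size B, this covers up to B·(B−1)/2 pairs at the cost of one forward pass, which is the efficiency gain the snippet above describes.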
A simple but effective metric is proposed for predicting the quality of contrast-altered images, based on the observation that a high-contrast image is often more similar to its contrast-enhanced version than a low-contrast image is.
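The idea can be sketched as: enhance the image's contrast, then measure how similar the result is to the original. This is a hypothetical illustration only; the linear stretch operator, the `gain` parameter, and the use of Pearson correlation as the similarity measure are all assumptions, not the paper's actual formulation.

```python
import numpy as np

def contrast_quality(img, gain=1.5):
    """Score contrast by similarity between an image and a
    contrast-enhanced copy of itself (higher = better). Sketch only.

    img: 2-D grayscale array with values in [0, 1].
    """
    img = np.asarray(img, dtype=float)
    # Simple linear contrast stretch about the mean (an assumed enhancer).
    enhanced = np.clip((img - img.mean()) * gain + img.mean(), 0.0, 1.0)
    # Pearson correlation as the similarity measure (also an assumption).
    a = img - img.mean()
    b = enhanced - enhanced.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

A mid-range ramp image survives the stretch without clipping and scores near 1.0, while a perfectly flat (zero-contrast) image scores 0.0.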
Multi-dimension Attention Network for no-reference Image Quality Assessment (MANIQA) is proposed to improve the performance on GAN-based distortion images and outperforms state-of-the-art methods on four standard datasets by a large margin.
A Mixture of Experts approach that trains two separate encoders to learn high-level content and low-level image quality features in an unsupervised setting achieves state-of-the-art performance on multiple large-scale image quality assessment databases containing both real and synthetic distortions, demonstrating that deep neural networks can be trained without supervision to produce perceptually relevant representations.
This work uses prediction of distortion type and degree as an auxiliary task to learn features from an unlabeled image dataset containing a mixture of synthetic and realistic distortions and trains a deep Convolutional Neural Network using a contrastive pairwise objective to solve the auxiliary problem.
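A contrastive pairwise objective of the kind mentioned above can be sketched as follows: embeddings of two patches sharing the same distortion type and degree are pulled together, while mismatched pairs are pushed at least a margin apart. The exact loss used by the paper may differ; this is the classic contrastive formulation, given here only as an illustration.

```python
import numpy as np

def contrastive_pair_loss(z1, z2, same, margin=1.0):
    """Classic contrastive loss for one pair of embeddings (a sketch).

    z1, z2: embedding vectors of two image patches.
    same:   True if the patches share distortion type/degree.
    Similar pairs are penalized by squared distance; dissimilar pairs
    are penalized only when closer than `margin`.
    """
    d = np.linalg.norm(np.asarray(z1, dtype=float) - np.asarray(z2, dtype=float))
    return float(d ** 2 if same else max(0.0, margin - d) ** 2)
```

Summed over many such pairs, this objective lets the network learn distortion-sensitive features from unlabeled images, since distortion type and degree of a synthetically degraded image are known without human labels.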
A high-scale (8x) controlled experiment is presented which evaluates five recent DL techniques tailored for blind image SR: Adaptive Pseudo Augmentation (APA), Blind Image SR with Spatially Variant Degradations (BlindSR), Deep Alternating Network (DAN), FastGAN, and Mixture of Experts Super-Resolution (MoESR).
Image content variation is a typical and challenging problem in no-reference image-quality assessment (NR-IQA). This work pays special attention to the impact of image content variation on NR-IQA methods. To better analyze this impact, we focus on blur-dominated distortions to exclude the impacts of distortion-type variations. We empirically show that current NR-IQA methods are inconsistent with human visual perception when predicting the relative quality of image pairs with different image contents. In view of deep semantic features of pretrained image classification neural networks always containing discriminative image content information, we put forward a new NR-IQA method based on semantic feature aggregation (SFA) to alleviate the impact of image content variation. Specifically, instead of resizing the image, we first crop multiple overlapping patches over the entire distorted image to avoid introducing geometric deformations. Then, according to an adaptive layer selection procedure, we extract deep semantic features by leveraging the power of a pretrained image classification model for its inherent content-aware property. After that, the local patch features are aggregated using several statistical structures. Finally, a linear regression model is trained for mapping the aggregated global features to image-quality scores. The proposed method, SFA, is compared with nine representative blur-specific NR-IQA methods, two general-purpose NR-IQA methods, and two extra full-reference IQA methods on Gaussian blur images (with and without Gaussian noise/JPEG compression) and realistic blur images from multiple databases, including LIVE, TID2008, TID2013, MLIVE1, MLIVE2, BID, and CLIVE. Experimental results show that SFA is superior to the state-of-the-art NR methods on all seven databases. It is also verified that deep semantic features play a crucial role in addressing image content variation, and this provides a new perspective for NR-IQA.
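The SFA pipeline above (overlapping crops, per-patch deep features, statistical aggregation, linear regression) can be sketched end to end. The patch size, stride, and the mean/std aggregation below are illustrative stand-ins, and the toy per-patch features replace the deep semantic features a pretrained classifier would provide.

```python
import numpy as np

def crop_patches(img, size=32, stride=16):
    """Overlapping crops over the whole image: no resizing, so no
    geometric deformation is introduced (as in the SFA pipeline)."""
    h, w = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

def aggregate(features):
    """Aggregate per-patch feature vectors into one global descriptor;
    mean and standard deviation stand in for the paper's statistical
    structures."""
    f = np.asarray(features, dtype=float)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])

img = np.random.default_rng(0).random((96, 96))      # toy grayscale image
patches = crop_patches(img)
# Placeholder per-patch features; SFA would use deep semantic features
# from a pretrained image classification network instead.
toy_features = [[p.mean(), p.std()] for p in patches]
global_descriptor = aggregate(toy_features)
# A linear regression model (e.g. least squares) would then map
# global_descriptor to a quality score, trained on labeled data.
```

The final regression step is deliberately omitted here, since it is ordinary supervised least-squares fitting once the global descriptors are computed.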
This paper proposes a novel no-reference image quality assessment method that significantly outperforms state-of-the-art methods on two realistic blur image databases and achieves comparable performance on two synthetic blur image databases.
Adding a benchmark result helps the community track progress.