Toward a blind image quality evaluator in the wild by learning beyond human opinion scores
Published in: Pattern Recognition, Vol. 137, p. 109296
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.05.2023
Summary:
- A unified learning framework is proposed to learn computational opinion-free BIQA models from synthetically distorted images for BIQA in the wild, without using human scores. Agent-specific and agent-agnostic modules are designed to learn the agree-to-disagree information among FR-IQA annotators.
- A simple, easy-to-implement, yet effective UDA method, incorporating an adaptive weight loss and domain mix-up, is proposed to reduce the distributional shift between synthetically distorted images and the authentically distorted ones captured in the wild. To the best of our knowledge, this work is the first to exploit adversarial UDA for IQA.
- Extensive experiments on two large-scale realistic IQA datasets demonstrate that the proposed method achieves state-of-the-art performance when evaluated using both human opinion scores and the gMAD competition.
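The "domain mix-up" mentioned in the highlights can be sketched as a convex combination of a synthetic-domain sample and an authentic-domain sample. This is a minimal illustration under standard mix-up assumptions (Beta-distributed mix ratio); the function name `domain_mixup` and the flat-list image representation are illustrative, not the authors' implementation.

```python
import random

# Hypothetical sketch of domain mix-up between a synthetically distorted
# sample (source domain) and an authentically distorted one (target domain).
# Images are flattened to plain lists of pixel intensities for brevity.
def domain_mixup(x_syn, x_auth, alpha=0.4, rng=random):
    """Blend two samples with a Beta(alpha, alpha) mix ratio, as in
    standard mix-up; returns the mixed sample and the ratio used."""
    lam = rng.betavariate(alpha, alpha)
    mixed = [lam * s + (1.0 - lam) * a for s, a in zip(x_syn, x_auth)]
    return mixed, lam

# Usage: mix a synthetic patch with an authentic one.
mixed, lam = domain_mixup([0.1, 0.5, 0.9], [0.3, 0.3, 0.3])
```

Each mixed pixel lies between the corresponding synthetic and authentic values, so the blended sample interpolates between the two domains rather than belonging cleanly to either.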
Nowadays, most existing blind image quality assessment (BIQA) models in the wild rely heavily on human ratings, which are extraordinarily labor-expensive to collect. Here, we propose an opinion-free BIQA method that learns from multiple annotators to assess the perceptual quality of images captured in the wild. Specifically, we first synthesize distorted images from pristine counterparts. We then randomly assemble a set of image pairs from the synthetic images and use a group of IQA models to assign each pair a pseudo-binary label, indicating which image has higher quality, as the supervisory signal. Based on the newly established pseudo-labeled dataset, we train a deep neural network (DNN)-based BIQA model to rank perceptual quality, optimized for consistency with the binary rank labels. Since there exists a domain shift, e.g., distortion shift and content shift, between the synthetic and in-the-wild images, we alleviate this issue in two ways. First, the simulated distortions should resemble authentic distortions as closely as possible. Second, an unsupervised domain adaptation (UDA) module is further applied to encourage learning domain-invariant features across the two domains. Extensive experiments demonstrate the effectiveness of the proposed opinion-free BIQA model, which yields state-of-the-art performance in terms of correlation with human opinion scores as well as the gMAD competition. Our code is available at: https://github.com/wangzhihua520/OF_BIQA.
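The pseudo-binary labeling step described in the abstract can be sketched as a majority vote over a group of FR-IQA "annotators" scoring both images of a pair. The function name `pseudo_rank_label` and the score values below are illustrative assumptions, not the paper's actual labeling rule.

```python
# Hypothetical sketch of the pseudo-binary labeling step: several FR-IQA
# "annotators" each score two distorted images, and a majority vote decides
# which image in the pair is of higher perceptual quality.
def pseudo_rank_label(scores_a, scores_b):
    """Return 1 if most annotators rate image A above image B, else 0."""
    votes_for_a = sum(1 for sa, sb in zip(scores_a, scores_b) if sa > sb)
    return 1 if votes_for_a > len(scores_a) / 2 else 0

# Three FR-IQA models score the pair (A, B); two of the three favor A.
label = pseudo_rank_label([0.91, 0.85, 0.78], [0.70, 0.88, 0.60])  # -> 1
```

Aggregating several FR-IQA models this way is what lets the annotators "agree to disagree": a pair only receives a confident label when the models' rankings mostly concur.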
ISSN: 0031-3203; 1873-5142
DOI: 10.1016/j.patcog.2022.109296