Troubleshooting Blind Image Quality Models in the Wild
Format | Journal Article |
---|---|
Language | English |
Published | 14.05.2021 |
Summary: Recently, the group maximum differentiation competition (gMAD) has been used to improve blind image quality assessment (BIQA) models with the help of full-reference metrics. When applying this type of approach to troubleshoot "best-performing" BIQA models in the wild, we face a practical challenge: it is highly nontrivial to obtain stronger competing models for efficient failure-spotting. Inspired by recent findings that difficult samples for deep models may be exposed through network pruning, we construct a set of "self-competitors," as random ensembles of pruned versions of the target model to be improved. Diverse failures can then be efficiently identified via self-gMAD competition. Next, we fine-tune both the target and its pruned variants on the human-rated gMAD set. This allows all models to learn from their respective failures, preparing them for the next round of self-gMAD competition. Experimental results demonstrate that our method efficiently troubleshoots BIQA models in the wild with improved generalizability.
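The self-gMAD loop described in the summary (prune the target into random self-competitors, then surface image pairs on which target and ensemble disagree most) can be illustrated with a minimal toy sketch. Everything here is hypothetical: the scoring functions stand in for trained BIQA models, and the "pruned variants" are simulated by randomly perturbing the target's behavior rather than by actual network pruning.

```python
import random

def target_score(x):
    # Stand-in for the target BIQA model: a toy quality function.
    return 0.5 * x + 0.1 * x * x

def make_pruned_variant(seed):
    # Stand-in for a pruned copy of the target: same form,
    # randomly perturbed coefficients (not real network pruning).
    rng = random.Random(seed)
    a = 0.5 * rng.uniform(0.7, 1.3)
    b = 0.1 * rng.uniform(0.7, 1.3)
    return lambda x: a * x + b * x * x

def self_gmad_pairs(images, variants, top_k=2):
    """Rank image pairs by how much the random ensemble of pruned
    self-competitors disagrees with the target, in the spirit of a
    gMAD-style search for candidate failures."""
    def ensemble(x):
        return sum(v(x) for v in variants) / len(variants)

    pairs = []
    for i, xi in enumerate(images):
        for xj in images[i + 1:]:
            d_target = abs(target_score(xi) - target_score(xj))
            d_ens = abs(ensemble(xi) - ensemble(xj))
            # Large gap => target sees similar quality, ensemble does not:
            # a candidate pair for human rating and later fine-tuning.
            pairs.append((d_ens - d_target, xi, xj))
    pairs.sort(reverse=True)
    return pairs[:top_k]

variants = [make_pruned_variant(s) for s in range(8)]
failures = self_gmad_pairs([0.1 * k for k in range(10)], variants)
```

The returned pairs would then be human-rated, and both the target and its pruned variants fine-tuned on those ratings before the next round, per the summary. The disagreement measure and toy models above are illustrative choices, not the paper's actual formulation.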
DOI | 10.48550/arxiv.2105.06747 |