Blind image quality assessment by simulating the visual cortex


Bibliographic Details
Published in: The Visual Computer, Vol. 39, No. 10, pp. 4639-4656
Main Authors: Cai, Rongtai; Fang, Ming
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.10.2023

Summary: Objective assessment of image quality seeks to predict image quality without human observers. Given that the ultimate goal of a blind/no-reference image quality assessment (BIQA) algorithm is to provide a score consistent with subjective judgments, it makes sense to design an algorithm that resembles human behavior. Recently, a large number of image features have been introduced for image quality assessment. However, only a few of these features are generated by using the computational mechanisms of the visual cortex. In this paper, we propose bioinspired algorithms to extract image features for BIQA by simulating the visual cortex. We extract spatial features such as texture and energy from images by mimicking the retinal circuit. We extract spatial-frequency features from images by imitating the simple cells of the primary visual cortex. And we extract color features from images by employing the color-opponent mechanism of the biological vision system. Then, using the statistical features derived from these physiologically plausible features, we train a support vector regression model to predict image quality. The experimental results show that the proposed algorithm is more consistent with subjective evaluations than the comparison algorithms in predicting image quality.
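The summary describes a generic BIQA pipeline: filter images with physiologically inspired filters, summarize the responses with statistics, and regress quality scores with an SVR. A minimal sketch of that pipeline is below, assuming Gabor filters as the simple-cell model (the standard computational model of V1 simple cells; the paper's exact filter design and statistics are not specified here) and synthetic images and scores in place of a real IQA dataset:

```python
# Hypothetical sketch of a simple-cell-inspired BIQA pipeline, not the
# authors' implementation: Gabor filter bank -> response statistics -> SVR.
import numpy as np
from scipy.signal import convolve2d
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def gabor_kernel(size, theta, freq, sigma):
    """2-D Gabor kernel: Gaussian envelope times an oriented cosine carrier,
    a common model of a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * x_rot)

def spatial_frequency_features(img, n_orient=4, freq=0.2, sigma=2.5, size=11):
    """Filter the image at several orientations and summarize each response
    map by two simple statistics (mean absolute response, std)."""
    feats = []
    for k in range(n_orient):
        kern = gabor_kernel(size, np.pi * k / n_orient, freq, sigma)
        resp = convolve2d(img, kern, mode="same")
        feats += [np.abs(resp).mean(), resp.std()]
    return np.array(feats)

# Synthetic stand-ins for distorted images and subjective quality scores.
images = rng.random((12, 32, 32))
scores = rng.random(12) * 100.0

X = np.vstack([spatial_frequency_features(im) for im in images])
model = SVR(kernel="rbf").fit(X, scores)
preds = model.predict(X)
```

A full system in the paper's spirit would concatenate these spatial-frequency statistics with the retinal (texture/energy) and color-opponent features before regression; the SVR stage is unchanged.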
ISSN: 0178-2789; 1432-2315
DOI: 10.1007/s00371-022-02614-y