A Relation-Aware Network for Defocus Blur Detection


Bibliographic Details
Published in: 2023 7th Asian Conference on Artificial Intelligence Technology (ACAIT), pp. 66 - 74
Main Authors: Wang, Yi; Huang, Peiliang; Han, Longfei; Xu, Chenchu
Format: Conference Proceeding
Language: English
Published: IEEE, 10.11.2023

Summary: Defocus blur detection (DBD) is an important task in computer vision that aims to segment clear regions from images. Recently, deep learning based methods have made great progress on the defocus blur detection task thanks to their powerful learning capabilities. However, most existing methods directly predict clear regions without considering the complementary relationship between clear and blurred context information, which produces cluttered, low-confidence predictions in boundary areas. To address this challenge, we propose a relation-aware network for defocus blur detection. Specifically, we disentangle the complementary relationship at both the region level and the pixel level. At the region level, we introduce a separated attention mechanism to highlight the contrast between the clear and blurred areas of an image, where the normal attention helps distinguish the clear region and the reverse attention focuses on the blurred region. This two-stream separated attention module generates a segmentation mask with high confidence. Furthermore, we uncover the pixel-to-pixel relationship via connectivity contours in eight directions, which enhances the accuracy of contour detection. To evaluate the superiority of the proposed method, we conduct extensive experiments on two public benchmark datasets, CUHK and DUT. The experimental results demonstrate that our method achieves state-of-the-art performance.
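The abstract names two mechanisms without giving their formulations: a reverse attention stream that complements the normal attention, and eight-direction pixel connectivity. The sketch below is an illustrative guess at both, not the paper's actual implementation: it assumes reverse attention is the complement of a soft attention map (a common formulation in reverse-attention segmentation work), and that the connectivity target assigns each pixel eight binary channels indicating whether its neighbor in each direction shares the same clear/blur label. The function names are hypothetical.

```python
import numpy as np

def reverse_attention(att):
    """Complement of a soft attention map in [0, 1]: where normal attention
    highlights clear regions, this highlights blurred ones (assumed form)."""
    return 1.0 - att

def connectivity_labels(mask):
    """Eight-direction connectivity target for a binary clear/blur mask.

    Channel d is 1 at pixels whose neighbor in direction d (edge-padded at
    the image border) carries the same label, 0 where the label changes,
    so the zeros of each channel trace the region contour.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    h, w = mask.shape
    out = np.zeros((8, h, w), dtype=np.uint8)
    padded = np.pad(mask, 1, mode="edge")  # replicate labels at the border
    for d, (dy, dx) in enumerate(offsets):
        # Neighbor of pixel (i, j) in direction d, via a shifted view.
        shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        out[d] = (shifted == mask).astype(np.uint8)
    return out
```

For a mask with a clear upper half and blurred lower half, only the vertical and diagonal channels drop to 0 along the horizontal boundary, which is how the eight channels jointly encode the contour.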
DOI: 10.1109/ACAIT60137.2023.10528486