A context-aware progressive attention aggregation network for fabric defect detection


Bibliographic Details
Published in: Journal of Engineered Fibers and Fabrics, Vol. 18
Main Authors: Liu, Zhoufeng; Tian, Bo; Li, Chunlei; Li, Xiao; Wang, Kaihua
Format: Journal Article
Language: English
Published: London, England: SAGE Publications, 01.06.2023

Summary: Fabric defect detection plays a critical role in quality control for the textile manufacturing industry. Deep learning-based saliency models can quickly locate the regions that attract human attention within a complex background, and they have been successfully applied to fabric defect detection. However, most previous methods rely on multi-level feature aggregation while ignoring the complementary relationships among different features, which results in poor representation of tiny and slender defects. To remedy these issues, we propose a novel saliency-based fabric defect detection network that exploits the complementary information between different layers to enhance the feature representation ability and the discriminability of defects. Specifically, a multi-scale feature aggregation unit (MFAU) is proposed to effectively characterize multi-scale contextual features. In addition, a feature fusion refinement module (FFR), composed of an attention fusion unit (AFU) and an auxiliary refinement unit (ARU), is designed to exploit complementary information and further refine the input features, enhancing the discriminative ability of defect features. Finally, multi-level deep supervision (MDS) is adopted to guide the model toward more accurate saliency maps. Under different evaluation metrics, our proposed method outperforms most state-of-the-art methods on our developed fabric datasets.
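The abstract names the main components (MFAU, FFR with AFU and ARU, and MDS) but gives no implementation details. The following is a minimal PyTorch sketch of how such a pipeline could be wired together; the dilation rates, attention form, channel counts, and module internals are all assumptions for illustration, not the authors' actual design.

```python
# Illustrative sketch only: module internals are assumed, since the abstract
# does not specify them.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MFAU(nn.Module):
    """Multi-scale feature aggregation unit: parallel dilated convolutions
    capture context at several receptive fields and are then fused (assumed)."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        return F.relu(self.fuse(torch.cat([b(x) for b in self.branches], dim=1)))


class FFR(nn.Module):
    """Feature fusion refinement: an attention fusion unit (AFU) weights the
    low-level feature with the high-level one before fusion, and an auxiliary
    refinement unit (ARU) refines the result (both forms assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())  # AFU
        self.refine = nn.Sequential(                                               # ARU
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        fused = low * self.attn(high) + high       # attention-guided fusion
        return F.relu(fused + self.refine(fused))  # residual refinement


class SaliencyNet(nn.Module):
    """Top-down decoder over multi-level backbone features, with one saliency
    head per level for multi-level deep supervision (MDS)."""
    def __init__(self, channels=64, levels=4):
        super().__init__()
        self.mfaus = nn.ModuleList(MFAU(channels) for _ in range(levels))
        self.ffrs = nn.ModuleList(FFR(channels) for _ in range(levels - 1))
        self.heads = nn.ModuleList(nn.Conv2d(channels, 1, 1) for _ in range(levels))

    def forward(self, feats):
        # feats: backbone features ordered from shallow (high-res) to deep (low-res)
        feats = [m(f) for m, f in zip(self.mfaus, feats)]
        out = feats[-1]
        maps = [self.heads[-1](out)]
        for i in range(len(feats) - 2, -1, -1):
            out = self.ffrs[i](feats[i], out)
            maps.append(self.heads[i](out))
        return maps[::-1]  # one saliency map per level, finest first


if __name__ == "__main__":
    # Dummy multi-level features standing in for a backbone's outputs.
    feats = [torch.randn(1, 64, 64 // 2**i, 64 // 2**i) for i in range(4)]
    maps = SaliencyNet()(feats)
    gt = torch.rand(1, 1, 64, 64)
    # MDS: supervise every level's map against the (resized) ground truth.
    loss = sum(
        F.binary_cross_entropy_with_logits(
            m, F.interpolate(gt, size=m.shape[-2:], mode="bilinear",
                             align_corners=False))
        for m in maps)
    print(len(maps), loss.item())
```

In this sketch the deepest feature is decoded top-down, each FFR fuses it with the next shallower level, and every intermediate output gets its own supervised saliency head, which is one common way to realize the multi-level deep supervision the abstract describes.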
ISSN: 1558-9250
DOI: 10.1177/15589250231174612