Saliency Model Based on Discriminative Feature and Bi-directional Message Interaction for Fabric Defect Detection


Bibliographic Details
Published in: Digital TV and Wireless Multimedia Communication, pp. 181-193
Main Authors: Liu, Zhoufeng; Wang, Menghan; Li, Chunlei; Guo, Zhenduo; Wang, Jinjin
Format: Book Chapter
Language: English
Published: Singapore: Springer Singapore
Series: Communications in Computer and Information Science

Summary: Owing to the complexity of textured backgrounds and the diversity of defect types in fabric images, traditional fabric defect detection methods perform poorly. Recent advances in salient object detection benefit from Fully Convolutional Networks (FCNs) and achieve good performance. Because defects in a fabric image are salient against the textured background, a saliency model is well suited to fabric defect detection. In this paper, we propose a novel saliency model based on discriminative features and bi-directional message interaction for fabric defect detection. First, we design a multi-scale attention-guided feature extraction module, in which a multi-scale context-aware feature extraction block and a channel attention block respectively capture multi-scale contextual information and assign greater weight to the discriminative features that correspond to the correct defect scale. Then, a bi-directional message interaction module passes messages in both directions to strengthen the features at each resolution, further improving the usefulness of the extracted features. After the bi-directional message interaction module, a cross-level contrast feature extraction module, which emphasizes features with locally strong contrast along each resolution axis, predicts saliency maps. Finally, the predicted saliency maps are efficiently merged to produce the final prediction. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art approaches.
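The channel attention block described in the summary reweights feature channels so that the scales matching the defect receive larger weights. The abstract does not give the authors' exact formulation; the sketch below is a minimal, hypothetical squeeze-and-excitation-style channel attention in NumPy (the weight matrices `w1` and `w2` and the reduction ratio of 2 are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Illustrative squeeze-and-excitation-style channel attention.

    features: (C, H, W) feature map.
    w1: (C // r, C) and w2: (C, C // r) bottleneck projection matrices
        (stand-ins for learned parameters).
    Returns the feature map with each channel scaled by a weight in (0, 1).
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = features.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (ReLU) followed by a sigmoid gate -> (C,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))
    # Re-scale: broadcast the per-channel weights over H and W
    return features * s[:, None, None]

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
feats = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))
w2 = rng.standard_normal((C, C // 2))
out = channel_attention(feats, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the sigmoid gate lies in (0, 1), each channel is attenuated rather than amplified in absolute terms; channels whose pooled statistics the gate scores highly keep most of their magnitude, which is the sense in which "more discriminative" channels receive greater weight.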
Bibliography:This work was supported by NSFC (No. 61772576, 61379113, U1804157), Science and technology innovation talent project of Education Department of Henan Province (17HASTIT019), The Henan Science Fund for Distinguished Young Scholars (184100510002), Henan science and technology innovation team (CXTD2017091), IRTSTHN (18IRTSTHN013), Scientific research projects of colleges and universities in Henan Province (19A510027, 16A540003), Program for Interdisciplinary Direction Team in Zhongyuan University of Technology.
ISBN: 9811611939
9789811611933
ISSN: 1865-0929
1865-0937
DOI: 10.1007/978-981-16-1194-0_16