A novel visible and infrared image fusion method based on convolutional neural network for pig-body feature detection

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 81, No. 2, pp. 2757-2775
Main Author: Zhong, Zhen
Format: Journal Article
Language: English
Published: New York: Springer US, 2022 (Springer Nature B.V.)

Summary: Visible (VI) and infrared (IR) image fusion has been an active research topic in recent years because fused images support higher segmentation accuracy. However, traditional VI and IR fusion algorithms fail to preserve sufficient texture and edge detail in the fused image. To extract pig-body shape and temperature features more effectively, a new multi-source fusion algorithm for shape segmentation and temperature extraction, named MCNNFuse, is presented based on a convolutional neural network (CNN). First, the visible and infrared images are fused by a modified CNN fusion model. Then, the pig-body shape is extracted from the fusion result by Otsu thresholding and morphological operations. Finally, the pig-body temperature is extracted from the segmented shape. Experimental results show that the segmentation model based on the presented fusion method achieves a 1.883–7.170% higher average segmentation accuracy than prevalent traditional and previously published methods. Furthermore, it lays the groundwork for accurate measurement of pig-body temperature.
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-021-11675-5
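As an illustration of the segmentation and temperature-extraction steps outlined in the abstract, the following Python snippet shows how Otsu thresholding and morphological operations could be applied to a fused image, followed by averaging infrared intensities inside the resulting mask. It is a minimal sketch assuming OpenCV and NumPy; the file names, kernel size, and the use of mean IR intensity as a temperature proxy are illustrative assumptions, not the author's implementation.

    # Minimal sketch of the shape-segmentation and temperature-extraction steps
    # described in the abstract. File names and parameters are placeholders.
    import cv2
    import numpy as np

    # Load a (hypothetical) fused VI/IR image as an 8-bit grayscale array.
    fused = cv2.imread("fused_pig.png", cv2.IMREAD_GRAYSCALE)

    # Otsu thresholding: automatically selects the global threshold separating
    # the brighter pig-body region from the darker background.
    _, mask = cv2.threshold(fused, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological opening then closing to remove speckle noise and fill
    # small holes inside the pig-body region.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Temperature extraction: average the infrared intensities inside the
    # segmented region (assumes the IR frame is registered to the fused image).
    ir = cv2.imread("ir_pig.png", cv2.IMREAD_GRAYSCALE)
    body_temp_proxy = ir[mask > 0].mean() if np.any(mask > 0) else float("nan")
    print(f"Mean IR intensity over pig-body region: {body_temp_proxy:.2f}")

In this sketch the mean IR intensity stands in for a calibrated temperature; converting radiometric values to degrees would require the camera's calibration data, which the abstract does not describe.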