Infrared Image Enhancement Method of Substation Equipment Based on Self-Attention Cycle Generative Adversarial Network (SA-CycleGAN)

Bibliographic Details
Published in: Electronics (Basel), Vol. 13, no. 17, p. 3376
Main Authors: Wang, Yuanbin; Wu, Bingchao
Format: Journal Article
Language: English
Published: Basel, MDPI AG, 01.01.2024

Summary: During the acquisition of infrared images in substations, low-quality images with poor contrast, blurred details, and missing texture information frequently appear, which adversely affects subsequent advanced visual tasks. To address this issue, this paper proposes an infrared image enhancement algorithm for substation equipment based on a self-attention cycle generative adversarial network (SA-CycleGAN). The proposed algorithm incorporates a self-attention mechanism into the CycleGAN model’s transcoding network to improve the mapping ability of infrared image information, enhance image contrast, and reduce the number of model parameters. The addition of an efficient local attention mechanism (EAL) and a feature pyramid structure within the encoding network enhances the generator’s ability to extract features and texture information from small targets in infrared substation equipment images, effectively improving image details. In the discriminator part, the model’s performance is further enhanced by constructing a two-channel feature network. To accelerate the model’s convergence, the loss function of the original CycleGAN is optimized. Compared to several mainstream image enhancement algorithms, the proposed algorithm improves the quality of low-quality infrared images by an average of 10.91% in color degree, 18.89% in saturation, and 29.82% in feature similarity indices. Additionally, the number of parameters in the proposed algorithm is reduced by 37.89% compared to the original model. Finally, the effectiveness of the proposed method in improving recognition accuracy is validated with the CenterNet target recognition algorithm.
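The abstract does not reproduce the paper's architecture, but the general idea of a self-attention block inserted into a generator's feature path can be sketched. Below is a minimal NumPy illustration of SAGAN-style spatial self-attention with a residual connection; all names, shapes, and the `gamma` scaling are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv, gamma=0.1):
    """Spatial self-attention over a flattened (H*W, C) feature map.

    Each spatial position attends to every other position, so
    long-range context can sharpen low-contrast regions; the result
    is blended back via a residual connection scaled by gamma
    (a learnable scalar in trained models; fixed here for the sketch).
    """
    q = x @ Wq                                       # queries, (N, C')
    k = x @ Wk                                       # keys,    (N, C')
    v = x @ Wv                                       # values,  (N, C)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (N, N) attention map
    out = attn @ v                                   # context-aggregated features
    return x + gamma * out

# Toy 8x8 feature map with 16 channels, flattened to (64, 16).
rng = np.random.default_rng(0)
H, W, C, Cq = 8, 8, 16, 4
x = rng.standard_normal((H * W, C))
Wq = rng.standard_normal((C, Cq))
Wk = rng.standard_normal((C, Cq))
Wv = rng.standard_normal((C, C))
y = self_attention(x, Wq, Wk, Wv)
print(y.shape)  # attention preserves the feature-map shape: (64, 16)
```

The attention map is N×N for N spatial positions, which is why such blocks are typically placed on downsampled feature maps inside the generator rather than at full image resolution.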
ISSN:2079-9292
DOI:10.3390/electronics13173376