TGAN-AD: Transformer-Based GAN for Anomaly Detection of Time Series Data


Bibliographic Details
Published in: Applied Sciences, Vol. 12, No. 16, p. 8085
Main Authors: Xu, Liyan; Xu, Kang; Qin, Yinchuan; Li, Yixuan; Huang, Xingting; Lin, Zhicheng; Ye, Ning; Ji, Xuechun
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.08.2022
Summary: Anomaly detection on time series data has been successfully applied in power grid operation and maintenance, flow detection, fault diagnosis, and other areas. However, anomalies in time series often lack strict definitions and labels, and existing methods often suffer from rigid hypotheses, an inability to handle high-dimensional data, and highly time-consuming computation. Generative Adversarial Networks (GANs) can learn the distribution of normal data and detect anomalies by comparing reconstructed normal data with the original data. However, it is difficult for GANs to extract contextual information from time series data. In this paper, we propose a new method, Transformer-based GAN for Anomaly Detection of Time Series Data (TGAN-AD). TGAN-AD's Transformer-based generators extract contextual features of time series data to improve detection performance, and its discriminator also assists in identifying abnormal data. Anomaly scores are calculated through both the generator and the discriminator. We have conducted comprehensive experiments on three public datasets. Experimental results show that TGAN-AD outperforms state-of-the-art anomaly detection techniques, achieving the highest Recall and F1 values on all datasets. Our experiments also demonstrate the efficiency of the model and the optimal choice of hyperparameters.
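The abstract states that anomaly scores are computed from both the generator and the discriminator. A common way to combine the two signals in GAN-based detectors is a weighted sum of the generator's reconstruction error and the discriminator's output; the sketch below illustrates that idea only. The function name, the L2 reconstruction error, the weight `lam`, and the assumption that the discriminator emits a score in [0, 1] (higher = more anomalous) are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def anomaly_score(x, x_rec, d_score, lam=0.5):
    """Illustrative combined anomaly score (not the paper's exact formula).

    x       : original time-series windows, shape (n_windows, window_len)
    x_rec   : reconstructions produced by the generator, same shape
    d_score : discriminator score per window, assumed in [0, 1]
    lam     : weight trading off reconstruction error vs. discriminator score
    """
    # Reconstruction error: L2 distance between each window and its
    # reconstruction by the (Transformer-based) generator.
    rec_err = np.linalg.norm(x - x_rec, axis=-1)
    # Weighted combination of the two anomaly signals.
    return lam * rec_err + (1.0 - lam) * d_score

# Toy example: one well-reconstructed window and one poorly reconstructed one.
x = np.array([[0.1, 0.2, 0.1],
              [0.9, 1.5, 2.0]])
x_rec = np.array([[0.1, 0.2, 0.1],
                  [0.1, 0.2, 0.1]])
d = np.array([0.05, 0.9])
scores = anomaly_score(x, x_rec, d)
```

Windows that are both hard to reconstruct and flagged by the discriminator receive high scores; in practice a threshold on this score would separate normal from anomalous windows.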
ISSN: 2076-3417
DOI: 10.3390/app12168085