A Pre-Training Pruning Strategy for Enabling Lightweight Non-Intrusive Load Monitoring On Edge Devices


Bibliographic Details
Published in: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pp. 249-253
Main Authors: Athanasoulias, Sotirios; Sykiotis, Stavros; Temenos, Nikos; Doulamis, Anastasios; Doulamis, Nikolaos
Format: Conference Proceeding
Language: English
Published: IEEE, 14.04.2024

Summary: A novel pre-training Deep Neural Network (DNN) model compression strategy is presented within the context of Non-Intrusive Load Monitoring (NILM). Our approach leverages an iterative magnitude pruning technique based on L1 norms to identify an optimal DNN, compressed in terms of trainable parameters, that balances the model's computational complexity, expressed as the number of Multiply-and-Accumulate (MAC) operations, against its performance. As a result, a smaller DNN is used already in the training phase, reducing computational cost by up to 95% with negligible performance degradation, in contrast to existing NILM post-training compression schemes. Experimental results on the UK-DALE dataset demonstrate the proposed strategy's efficacy, isolating sub-networks that use only 5% of the original model's parameters while achieving performance comparable to the initial model. These features make the proposed strategy suitable for edge IoT deployment in real-world scenarios.
DOI:10.1109/ICASSPW62465.2024.10626463
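The core mechanism described in the summary, iterative magnitude pruning driven by L1 norms applied before the main training run, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example: the network architecture, input window length, per-round pruning fraction, and number of pruning rounds are assumptions chosen only to make the example self-contained, not the configuration used in the paper.

```python
# Minimal sketch of pre-training iterative L1-magnitude pruning (PyTorch).
# The model, window length, pruning fraction and round count are illustrative
# assumptions, not the authors' exact NILM setup.
import torch.nn as nn
import torch.nn.utils.prune as prune

WINDOW = 99  # assumed length of the aggregate-power input window

# Hypothetical seq2point-style NILM regressor, used only to make the sketch runnable.
model = nn.Sequential(
    nn.Conv1d(1, 30, kernel_size=10), nn.ReLU(),   # -> 30 x 90
    nn.Conv1d(30, 40, kernel_size=8), nn.ReLU(),   # -> 40 x 83
    nn.Flatten(),
    nn.Linear(40 * 83, 1),                         # appliance power estimate
)

prunable = [(m, "weight") for m in model.modules()
            if isinstance(m, (nn.Conv1d, nn.Linear))]

# Iteratively zero out the weights with the smallest L1 magnitude. Each round
# removes 20% of the weights that are still active, so after 14 rounds roughly
# 0.8**14 ~ 4.4% of the original trainable parameters survive.
for _ in range(14):
    for module, name in prunable:
        prune.l1_unstructured(module, name=name, amount=0.2)

# The pruning masks remain attached, so the subsequent (cheaper) training phase
# only updates the surviving sub-network; pruned weights stay at zero. After
# training, prune.remove(module, name) would bake the masks in for deployment.
```

In this reading, the key difference from post-training compression is that the sparse sub-network is selected first and only that sub-network is trained, which is where the reported reduction in training-phase computation comes from.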