A Pre-Training Pruning Strategy for Enabling Lightweight Non-Intrusive Load Monitoring On Edge Devices
Published in: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pp. 249-253
Format: Conference Proceeding
Language: English
Published: IEEE, 14.04.2024
Summary: A novel pre-training Deep Neural Network (DNN) model compression strategy for Non-Intrusive Load Monitoring (NILM) is presented. The approach leverages an iterative magnitude pruning technique based on L1 norms to identify an optimally compressed DNN, in terms of trainable parameters, that balances the model's computational complexity, measured as the number of Multiply-and-Accumulate (MAC) operations, against its performance. Because the smaller DNN is used already in the training phase, computational cost is reduced by up to 95% with negligible performance degradation, in contrast to existing NILM post-training compression schemes. Experimental results on the UK-DALE dataset demonstrate the proposed strategy's efficacy, isolating sub-networks that use only 5% of the original model's parameters while achieving performance comparable to the initial model. These features make the proposed strategy suitable for edge IoT deployment in real-world scenarios.
DOI: 10.1109/ICASSPW62465.2024.10626463
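The iterative L1-magnitude pruning schedule described in the summary can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the number of pruning rounds, and the geometric per-step keep ratio are assumptions, and the retraining interleaved between pruning rounds in the paper's pre-training strategy is omitted.

```python
import numpy as np

def iterative_magnitude_prune(weights, target_sparsity=0.95, steps=5):
    """Iteratively zero the smallest-magnitude (L1) weights.

    Illustrative sketch: at each of `steps` rounds, only a fraction of
    the surviving nonzero weights is kept (those with the largest
    absolute value), so that after all rounds the overall fraction of
    surviving weights is (1 - target_sparsity).
    """
    w = weights.astype(float).copy()
    # Geometric schedule: keep_per_step ** steps == 1 - target_sparsity.
    keep_per_step = (1.0 - target_sparsity) ** (1.0 / steps)
    for _ in range(steps):
        nonzero = np.flatnonzero(w)
        k = int(round(len(nonzero) * keep_per_step))
        if k == 0:
            w[:] = 0.0
            break
        # Rank surviving weights by |w| (ascending) and zero all but the top-k.
        order = np.argsort(np.abs(w[nonzero]))
        w[nonzero[order[:-k]]] = 0.0
    return w

# Example: prune a dense weight vector down to 5% of its parameters.
rng = np.random.default_rng(0)
dense = rng.normal(size=1000)
sparse = iterative_magnitude_prune(dense, target_sparsity=0.95, steps=5)
```

Because pure magnitude pruning without retraining is order-preserving, the survivors here coincide with the globally largest-magnitude weights; in the paper's setting, retraining between rounds is what lets the isolated sub-network recover the original model's performance.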