Title: Trimming Feature Extraction and Inference for MCU-based Edge NILM: a Systematic Approach
Author(s): Tabanelli E.; Brunelli D.; Acquaviva A.; Benini L.
Author(s) (Unibo):
Year: 2021
Journal:
Digital Object Identifier (DOI): http://dx.doi.org/10.1109/TII.2021.3078186
Abstract: Non-Intrusive Load Monitoring (NILM) enables the disaggregation of the global power consumption, measured by a single smart electrical meter, into appliance-level details. The state of the art is based on machine-learning methods and on the fusion of time- and frequency-domain features. Running compute-intensive, low-latency NILM on low-cost MCU-based meters is currently an open challenge. This paper addresses the optimization of the feature space and the reduction of computational and storage costs for state-of-the-art NILM algorithms on memory- and compute-limited MCUs. We compare four supervised learning techniques on different classification scenarios and characterize the implementation of the overall NILM pipeline. Experimental results demonstrate that optimizing the feature space enables edge-based NILM with 95.15% accuracy, a small drop compared to the most accurate technique (96.19%), while achieving up to a 5.45x speed-up and an 80.56% storage reduction. Furthermore, we show that low-latency NILM relying only on current measurements achieves 80% accuracy, allowing a cost reduction by removing the voltage sensors.
Final status date: 2022-02-25T22:05:33Z
Appears in publication types: 1.01 Journal article
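
As context for the abstract's mention of fusing time- and frequency-domain features extracted from current measurements, the following is a minimal C sketch of the kind of per-window feature extraction an MCU-based NILM front end might perform. It is not the paper's implementation: the window length, sampling rate, mains frequency, harmonic count, and feature set (RMS, peak, and low-order harmonic amplitudes computed with the Goertzel algorithm) are illustrative assumptions.

```c
/*
 * Illustrative sketch only (not the paper's code): per-window feature
 * extraction for MCU-based NILM. Window length, sampling rate, mains
 * frequency and harmonic count are assumptions chosen for the example.
 */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N_SAMPLES   256      /* samples per analysis window (assumed)   */
#define FS_HZ       6400.0f  /* sampling rate in Hz (assumed)           */
#define MAINS_HZ    50.0f    /* mains fundamental frequency (assumed)   */
#define N_HARMONICS 4        /* harmonic amplitudes kept as features    */

/* Goertzel filter: amplitude of a single frequency bin. Cheaper than a
 * full FFT when only a few harmonics are needed on a small MCU. */
static float goertzel_amp(const float *x, int n, float freq_hz, float fs_hz)
{
    int   k      = (int)(0.5f + (n * freq_hz) / fs_hz);
    float w      = 2.0f * (float)M_PI * (float)k / (float)n;
    float coeff  = 2.0f * cosf(w);
    float s_prev = 0.0f, s_prev2 = 0.0f;

    for (int i = 0; i < n; i++) {
        float s = x[i] + coeff * s_prev - s_prev2;
        s_prev2 = s_prev;
        s_prev  = s;
    }
    float power = s_prev * s_prev + s_prev2 * s_prev2
                - coeff * s_prev * s_prev2;
    return 2.0f * sqrtf(power) / (float)n;   /* approximate bin amplitude */
}

/* Build a feature vector [RMS, peak, H1..H4] from one window of current
 * samples; a supervised classifier would then consume this vector. */
static void extract_features(const float *i_window, float *feat)
{
    float sum_sq = 0.0f, peak = 0.0f;
    for (int i = 0; i < N_SAMPLES; i++) {
        float v = i_window[i];
        sum_sq += v * v;
        if (fabsf(v) > peak)
            peak = fabsf(v);
    }
    feat[0] = sqrtf(sum_sq / (float)N_SAMPLES);   /* time domain: RMS  */
    feat[1] = peak;                               /* time domain: peak */
    for (int h = 1; h <= N_HARMONICS; h++)        /* frequency domain  */
        feat[1 + h] = goertzel_amp(i_window, N_SAMPLES, h * MAINS_HZ, FS_HZ);
}

int main(void)
{
    /* Synthetic test window: 50 Hz fundamental plus a third harmonic. */
    float window[N_SAMPLES];
    for (int i = 0; i < N_SAMPLES; i++) {
        float t = (float)i / FS_HZ;
        window[i] = 1.0f * sinf(2.0f * (float)M_PI * MAINS_HZ * t)
                  + 0.3f * sinf(2.0f * (float)M_PI * 3.0f * MAINS_HZ * t);
    }

    float feat[2 + N_HARMONICS];
    extract_features(window, feat);
    for (int i = 0; i < 2 + N_HARMONICS; i++)
        printf("feature[%d] = %f\n", i, feat[i]);
    return 0;
}
```

The Goertzel algorithm is used here instead of a full FFT because only a handful of harmonic bins are needed, which keeps the per-window compute and memory footprint small on a microcontroller and matches the low-latency, current-only scenario mentioned in the abstract.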