

Trimming Feature Extraction and Inference for MCU-based Edge NILM: a Systematic Approach

Tabanelli E.; Brunelli D.; Acquaviva A.; Benini L.
2022

Abstract

Non-Intrusive Load Monitoring (NILM) enables the disaggregation of the global power consumption, measured from a single smart electrical meter, into appliance-level details. The State-of-the-Art (SoA) is based on machine learning methods and on the fusion of time- and frequency-domain features. Running compute-demanding, low-latency NILM on low-cost MCU-based meters is currently an open challenge. This paper addresses the optimization of the feature space and the reduction of the computational and storage costs of SoA NILM algorithms on memory- and compute-limited MCUs. We compare four supervised learning techniques on different classification scenarios and characterize the implementation of the overall NILM pipeline. Experimental results demonstrate that optimizing the feature space enables edge-based NILM with 95.15% accuracy, a small drop compared to the most accurate technique (96.19%), while achieving up to a 5.45x speed-up and an 80.56% storage reduction. Furthermore, we show that low-latency NILM relying only on current measurements achieves 80% accuracy, enabling cost reduction by removing voltage sensors.
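The abstract describes a pipeline that fuses time- and frequency-domain features extracted from meter waveforms before feeding a supervised classifier. The C sketch below illustrates, under assumptions not taken from the paper (window length, sampling rate, and the specific features are illustrative), how such features might be computed from a sampled current window on an MCU.

/*
 * Minimal sketch (not from the paper) of the kind of feature extraction an
 * MCU-based NILM pipeline can run on a sampled current window: a few
 * time-domain statistics plus low-order harmonic amplitudes. Window length,
 * sampling rate, and feature choice are illustrative assumptions.
 */
#include <math.h>
#include <stdio.h>

#define PI_F          3.14159265f
#define WINDOW_LEN    256        /* samples per analysis window (assumed)    */
#define SAMPLE_RATE   3200.0f    /* Hz: 64 samples per 50 Hz cycle (assumed) */
#define LINE_FREQ     50.0f      /* mains frequency (assumed)                */
#define NUM_HARMONICS 3          /* fundamental plus two harmonics (assumed) */

/* Time-domain features: RMS, peak, and crest factor of the current window. */
static void time_features(const float *i, int n, float *rms, float *peak, float *crest)
{
    float acc = 0.0f, pk = 0.0f;
    for (int k = 0; k < n; ++k) {
        acc += i[k] * i[k];
        float a = fabsf(i[k]);
        if (a > pk)
            pk = a;
    }
    *rms = sqrtf(acc / (float)n);
    *peak = pk;
    *crest = (*rms > 0.0f) ? pk / *rms : 0.0f;
}

/* Frequency-domain feature: amplitude of the h-th mains harmonic, estimated
 * with a single-bin correlation instead of a full FFT to keep MCU cost low. */
static float harmonic_amplitude(const float *i, int n, int h)
{
    float re = 0.0f, im = 0.0f;
    float w = 2.0f * PI_F * LINE_FREQ * (float)h / SAMPLE_RATE;
    for (int k = 0; k < n; ++k) {
        re += i[k] * cosf(w * (float)k);
        im -= i[k] * sinf(w * (float)k);
    }
    return 2.0f * sqrtf(re * re + im * im) / (float)n;
}

int main(void)
{
    /* Synthetic current window: 50 Hz fundamental plus a smaller 3rd harmonic. */
    float current[WINDOW_LEN];
    for (int k = 0; k < WINDOW_LEN; ++k) {
        float t = (float)k / SAMPLE_RATE;
        current[k] = 2.0f * sinf(2.0f * PI_F * LINE_FREQ * t)
                   + 0.5f * sinf(2.0f * PI_F * 3.0f * LINE_FREQ * t);
    }

    float rms, peak, crest;
    time_features(current, WINDOW_LEN, &rms, &peak, &crest);
    printf("RMS=%.3f  peak=%.3f  crest=%.3f\n", rms, peak, crest);

    for (int h = 1; h <= NUM_HARMONICS; ++h)
        printf("harmonic %d amplitude: %.3f\n", h, harmonic_amplitude(current, WINDOW_LEN, h));

    return 0;
}

The resulting feature vector would then feed a supervised classifier such as those compared in the paper; the classifier and the paper's actual feature set and extraction method are omitted here and described in the full text.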
Trimming Feature Extraction and Inference for MCU-based Edge NILM: a Systematic Approach / Tabanelli E.; Brunelli D.; Acquaviva A.; Benini L.. - In: IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS. - ISSN 1551-3203. - Electronic. - 18:2(2022), pp. 943-952. [10.1109/TII.2021.3078186]
Files in this record:
trimming feature extraction post print.pdf (open access)
Type: Postprint
License: free open-access license
Size: 1.34 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/869751
Citations
  • PMC: n/a
  • Scopus: 24
  • Web of Science: 19