Neural Architecture Search (NAS) is quickly becoming the go-to approach to optimize the structure of Deep Learning (DL) models for complex tasks such as Image Classification or Object Detection. However, many other relevant applications of DL, especially at the edge, are based on time-series processing and require models with unique features, for which NAS is less explored. This work focuses in particular on Temporal Convolutional Networks (TCNs), a convolutional model for time-series processing that has recently emerged as a promising alternative to more complex recurrent architectures. We propose the first NAS tool that explicitly targets the optimization of the most peculiar architectural parameters of TCNs, namely dilation, receptive field, and number of features in each layer. The proposed approach searches for networks that offer good trade-offs between accuracy and number of parameters/operations, enabling an efficient deployment on embedded platforms. Moreover, a fundamental feature of our approach is its lightweight search complexity, which makes it usable even with limited hardware resources. We test the proposed NAS on four real-world, edge-relevant tasks, involving audio and bio-signals: (i) PPG-based Heart-Rate Monitoring, (ii) ECG-based Arrhythmia Detection, (iii) sEMG-based Hand-Gesture Recognition, and (iv) Keyword Spotting. Results show that, starting from a single seed network, our method is capable of obtaining a rich collection of Pareto-optimal architectures, among which we obtain models with the same accuracy as the seed and 15.9-152× fewer parameters. Moreover, the NAS finds solutions that Pareto-dominate state-of-the-art hand-tuned models for 3 out of the 4 benchmarks, and are Pareto-optimal on the fourth (sEMG). Compared to three state-of-the-art NAS tools, ProxylessNAS, MorphNet and FBNetV2, our method explores a larger search space for TCNs (up to 10¹²×) and obtains superior solutions, while requiring low GPU memory and search time.
We deploy our NAS outputs on two distinct edge devices, the multicore GreenWaves Technologies GAP8 IoT processor and the single-core STMicroelectronics STM32H7 microcontroller. With respect to the state-of-the-art hand-tuned models, we reduce latency and energy by up to 5.5× and 3.8× on the two targets, respectively, without any accuracy loss.
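The three TCN hyperparameters targeted by the search (kernel size, dilation, and number of features per layer) jointly determine the network's receptive field: for stacked dilated causal convolutions, the receptive field grows by (k − 1)·d per layer. A minimal sketch of this standard relation (the helper name is hypothetical, not from the paper):

```python
def tcn_receptive_field(layers):
    """Receptive field, in time steps, of a stack of dilated causal
    convolutions. Each layer is a (kernel_size, dilation) pair.

    RF = 1 + sum over layers of (kernel_size - 1) * dilation
    """
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf


# Three layers with kernel 3 and exponentially growing dilations 1, 2, 4:
# RF = 1 + 2*1 + 2*2 + 2*4 = 15 time steps.
print(tcn_receptive_field([(3, 1), (3, 2), (3, 4)]))
```

This illustrates why exponentially increasing dilations are attractive for edge TCNs: the receptive field grows geometrically with depth while parameters grow only linearly, which is exactly the kind of trade-off the described NAS explores.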
Risso, M., Burrello, A., Conti, F., Lamberti, L., Chen, Y., Benini, L., et al. (2022). Lightweight Neural Architecture Search for Temporal Convolutional Networks at the Edge. IEEE Transactions on Computers, 744-758 [10.1109/TC.2022.3177955].
Lightweight Neural Architecture Search for Temporal Convolutional Networks at the Edge
Burrello, Alessio; Conti, Francesco; Lamberti, Lorenzo; Benini, Luca
2022
File: neural architecture search post print.pdf — Postprint, open access (free open-access license), 4.48 MB, Adobe PDF
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.