
YodaNN: An ultra-low power convolutional neural network accelerator based on binary weights / Andri, Renzo; Cavigelli, Lukas; Rossi, Davide; Benini, Luca. - PRINT. - 2016:(2016), pp. 236-241. (Paper presented at the 15th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2016, held in the USA in 2016) [10.1109/ISVLSI.2016.111].

YodaNN: An ultra-low power convolutional neural network accelerator based on binary weights

ROSSI, DAVIDE; BENINI, LUCA
2016

Abstract

Convolutional Neural Networks (CNNs) have revolutionized the world of image classification over the last few years, pushing computer vision close to, and even beyond, human accuracy. The computational effort of CNNs today requires power-hungry parallel processors and GP-GPUs. Recent efforts in designing CNN Application-Specific Integrated Circuits (ASICs) and accelerators for System-on-Chip (SoC) integration have achieved very promising results. Unfortunately, even these highly optimized engines are still above the power envelope imposed by mobile and deeply embedded applications and face hard limitations caused by CNN weight I/O and storage. On the algorithmic side, highly competitive classification accuracy can be achieved by properly training CNNs with binary weights. This novel algorithmic approach brings major optimization opportunities in the arithmetic core, by removing the need for expensive multiplications, as well as in the weight storage and I/O costs. In this work, we present a HW accelerator optimized for BinaryConnect CNNs that achieves 1510 GOp/s on a core area of only 1.33 MGE and with a power dissipation of 153 mW in UMC 65 nm technology at 1.2 V. Our accelerator outperforms the state of the art in terms of ASIC energy efficiency as well as area efficiency, with 61.2 TOp/s/W and 1135 GOp/s/MGE, respectively.
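The arithmetic simplification the abstract refers to — with binary (+1/-1) weights, every multiply-accumulate collapses into a sign-conditioned add or subtract — can be sketched in a few lines. This is a minimal illustrative NumPy model, not the accelerator's actual datapath; the function name and the optional scaling factor `alpha` are assumptions for the example:

```python
import numpy as np

def binary_weight_conv2d(x, w_sign, alpha=1.0):
    """2D convolution where the kernel holds only +1/-1 values.

    Instead of multiplying each activation by its weight, activations
    under +1 weights are added and those under -1 weights are
    subtracted, so no hardware multiplier is needed in the inner loop.
    """
    H, W = x.shape
    K = w_sign.shape[0]
    out = np.zeros((H - K + 1, W - K + 1))
    for i in range(H - K + 1):
        for j in range(W - K + 1):
            patch = x[i:i + K, j:j + K]
            # Sign-conditioned accumulation: add where weight is +1,
            # subtract where it is -1.
            acc = patch[w_sign > 0].sum() - patch[w_sign < 0].sum()
            out[i, j] = alpha * acc
    return out
```

The result matches an ordinary multiply-accumulate convolution with the same +1/-1 kernel, which is why binarized weights also shrink storage and I/O: each weight needs a single bit rather than a multi-bit fixed-point word.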
2016
Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI
pp. 236-241
Andri, Renzo; Cavigelli, Lukas; Rossi, Davide; Benini, Luca
Files in this record:
Attachments, if any, are not shown

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/572205

Warning! The displayed data has not been validated by the university.

Citations
  • Scopus: 145
  • Web of Science: 124