Manoni, S., Scheffler, P., Di Mauro, A., Zanatta, L., Acquaviva, A., Benini, L., et al. (2024). NARS: Neuromorphic Acceleration through Register-Streaming Extensions on RISC-V Cores. New York: Association for Computing Machinery. https://doi.org/10.1145/3637543.3652879
NARS: Neuromorphic Acceleration through Register-Streaming Extensions on RISC-V Cores
Di Mauro A.; Benini L.; Bartolini A.
2024
Abstract
Spiking Neural Networks (SNNs) have emerged as a promising bio-inspired solution to the need for low-latency, energy-efficient artificial intelligence systems. SNNs pose a challenge to traditional CPUs, GPUs, and neural network accelerators due to their inherent sparsity, spike-based communication between neurons, and complex activation functions. Many neuromorphic accelerators have been developed to handle this workload, but these systems are often designed solely to accelerate spiking networks, resulting in large area costs and a lack of flexibility. We address this problem by proposing a novel mapping methodology for Convolutional SNNs (S-CNNs) on a general-purpose open-source RISC-V core equipped with Indirection Streaming Semantic Registers, a lightweight ISA extension for accelerating sparse-dense linear algebra. NARS is the first work to map S-CNNs onto a classical sparse-dense algebra paradigm. On S-CNN microkernels with sparsity degrees representative of state-of-the-art S-CNNs, our methodology achieves speedups ranging from 4.33× to 10.23× over a dense baseline and from 1.12× to 2.66× over an optimized dense implementation.
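To make the sparse-dense formulation concrete, the following is a minimal C sketch, not taken from the paper and independent of the actual ISSR ISA extension, of the key observation behind it: because input activations in an SNN layer are binary spikes, a dense matrix-vector product collapses into an index-driven gather-accumulate over only the weight columns of the neurons that fired. All names and sizes (`spiking_fc_layer`, `N_IN`, `N_OUT`, the integrate-and-fire step) are illustrative assumptions; the paper targets convolutional kernels, while a fully-connected layer is shown here for brevity.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sizes for illustration only. */
#define N_IN  256   /* input neurons  */
#define N_OUT 128   /* output neurons */

/* One fully-connected spiking layer step expressed as sparse-dense algebra:
 * since input activations are binary spikes, W * x reduces to accumulating
 * the weight columns selected by the indices of the neurons that fired.
 * The index-driven gather over `weights` is written here as plain scalar C. */
void spiking_fc_layer(const float weights[N_OUT][N_IN],
                      const uint16_t spike_idx[], size_t n_spikes,
                      float membrane[N_OUT], float threshold,
                      uint8_t out_spikes[N_OUT])
{
    for (size_t s = 0; s < n_spikes; s++) {      /* iterate only over active inputs */
        uint16_t j = spike_idx[s];               /* indirection: index of a firing neuron */
        for (size_t i = 0; i < N_OUT; i++)
            membrane[i] += weights[i][j];        /* accumulate the selected weight column */
    }
    for (size_t i = 0; i < N_OUT; i++) {         /* simple integrate-and-fire activation */
        out_spikes[i] = membrane[i] >= threshold;
        if (out_spikes[i])
            membrane[i] = 0.0f;                  /* reset membrane potential on spike */
    }
}
```

The inner gather over `spike_idx` is the kind of sparse-dense access pattern that indirection streaming register extensions are designed to feed to the core without explicit per-element address computation; for convolutional layers the same reduction applies, with additional index arithmetic to select the receptive field of each output neuron.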
| File | Description | Type | License | Size | Format |
|---|---|---|---|---|---|
| NARS__Neuromorphic_Acceleration_through_Register_Streaming_Extensions_on_RISC_V_Cores__ACM_ (1).pdf (open access) | AAM | Postprint / Author's Accepted Manuscript (AAM), version accepted for publication after peer review | Creative Commons | 594.9 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


