
MiniFloat-NN and ExSdotp: An ISA Extension and a Modular Open Hardware Unit for Low-Precision Training on RISC-V Cores / Bertaccini, Luca; Paulin, Gianna; Fischer, Tim; Mach, Stefan; Benini, Luca. - ELECTRONIC. - (2022), pp. 1-8. (Paper presented at the 2022 IEEE 29th Symposium on Computer Arithmetic (ARITH), held in Lyon, France, 12-14 September 2022) [10.1109/ARITH54963.2022.00010].

MiniFloat-NN and ExSdotp: An ISA Extension and a Modular Open Hardware Unit for Low-Precision Training on RISC-V Cores

Bertaccini, Luca; Benini, Luca
2022

Abstract

Low-precision formats have recently driven major breakthroughs in neural network (NN) training and inference by reducing the memory footprint of the NN models and improving the energy efficiency of the underlying hardware architectures. Narrow integer data types have been vastly investigated for NN inference and have successfully been pushed to the extreme of ternary and binary representations. In contrast, most training-oriented platforms use at least 16-bit floating-point (FP) formats. Lower-precision data types such as 8-bit FP formats and mixed-precision techniques have only recently been explored in hardware implementations. We present MiniFloat-NN, a RISC-V instruction set architecture extension for low-precision NN training, providing support for two 8-bit and two 16-bit FP formats and expanding operations. The extension includes sum-of-dot-product instructions that accumulate the result in a larger format and three-term additions in two variations: expanding and non-expanding. We implement an ExSdotp unit to efficiently support both instruction types in hardware. The fused nature of the ExSdotp module prevents precision losses generated by the non-associativity of two consecutive FP additions while saving around 30% of the area and critical path compared to a cascade of two expanding fused multiply-add units. We replicate the ExSdotp module in a SIMD wrapper and integrate it into an open-source floating-point unit, which, coupled to an open-source RISC-V core, lays the foundation for future scalable architectures targeting low-precision and mixed-precision NN training. A cluster containing eight extended cores sharing a scratchpad memory, implemented in 12 nm FinFET technology, achieves up to 575 GFLOPS/W when computing FP8-to-FP16 GEMMs at 0.8 V, 1.26 GHz.
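To make the expanding sum-of-dot-product semantics concrete, the following minimal C sketch models the behavior described in the abstract: two narrow-format products and a wider addend combined with a single rounding (fused), versus a cascade of two FMAs that each round their own result. This is a behavioral sketch only, not the authors' RTL or ISA encoding; FP8/FP16 are emulated here with float/double, and all function names are illustrative assumptions.

/*
 * Behavioral sketch of an expanding sum-of-dot-product:
 *     d = a0*b0 + a1*b1 + c
 * with narrow multiplicands (e.g. FP8) and a wider addend/result (e.g. FP16).
 */
#include <math.h>
#include <stdio.h>

/* Fused flavor: products and addend are combined at higher precision and
 * rounded once to the destination format. */
static float exsdotp_fused(float a0, float b0, float a1, float b1, float c)
{
    double wide = (double)a0 * b0 + (double)a1 * b1 + (double)c;
    return (float)wide;                 /* single rounding */
}

/* Reference: a cascade of two FMAs, each rounding its own result. Because FP
 * addition is not associative, this can differ from the fused result. */
static float exsdotp_cascade(float a0, float b0, float a1, float b1, float c)
{
    float t = fmaf(a0, b0, c);          /* first rounding */
    return fmaf(a1, b1, t);             /* second rounding */
}

int main(void)
{
    /* Operands chosen so that the intermediate rounding matters. */
    float a0 = 1.0f, b0 = 1.0f, a1 = 0x1p-24f, b1 = 1.0f, c = 0x1p-24f;
    printf("fused:   %.9g\n", exsdotp_fused(a0, b0, a1, b1, c));
    printf("cascade: %.9g\n", exsdotp_cascade(a0, b0, a1, b1, c));
    return 0;
}

With these operands the cascade discards the two small contributions during intermediate rounding and returns 1.0, while the fused version keeps them and returns 1 + 2^-23, illustrating the precision loss the fused ExSdotp datapath is designed to avoid.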
2022
2022 IEEE 29th Symposium on Computer Arithmetic (ARITH)
pp. 1-8
Bertaccini, Luca; Paulin, Gianna; Fischer, Tim; Mach, Stefan; Benini, Luca


Use this identifier to cite or link to this document: https://hdl.handle.net/11585/956823

Citations
  • PMC: ND
  • Scopus: ND
  • Web of Science: 3