
Malenza, G., Garcia, A.M., Birke, R., Benini, L., Aldinucci, M. (2025). Analysis of Model Parallelism for AI Applications on a 64-core RV64 Server CPU. INTERNATIONAL JOURNAL OF PARALLEL PROGRAMMING, 53(4), .-. [10.1007/s10766-025-00802-6].

Analysis of Model Parallelism for AI Applications on a 64-core RV64 Server CPU

2025

Abstract

Massive data-parallel workloads, driven by inference on large ML models, are pushing hardware vendors to develop efficient and cost-effective multi-core server CPUs. The RISC-V architecture plays a prominent role due to its open, extensible, and energy-friendly ISA. Despite significant progress in recent years, finding efficient methods to run AI applications in parallel on new architectures so as to fully harness their performance remains a challenge. In this study, we investigate the impact of model parallelism on the inference of machine learning models on the SOPHON SG2042 SoC, the first server-grade CPU based on the RV64 ISA, composed of 64 cores arranged in a grid of 16 groups of 4 cores. Specifically, we aim to enhance performance through the better data locality that stems from splitting the model and assigning its parts to specific (groups of) cores, handling dependencies via pipelined execution. We orchestrate execution using FastFlow, a low-level programming framework designed for multithreaded streaming applications. By comparing the results against the standard multi-core inference approach based on data parallelism, and by analyzing the effects of different submodel-to-core mapping strategies, we aim to provide a comprehensive understanding of how the model-parallel approach can maximize efficiency and utilization of hardware resources. In our experiments, model parallelism improved performance by up to 8.4x over native PyTorch parallelism.
Malenza, Giulio; Garcia, Adriano Marques; Birke, Robert; Benini, Luca; Aldinucci, Marco

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11585/1039406

Warning! The displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 0