High Rate Packet Transmission via IP-over-InfiniBand Using Commodity Hardware

Carbone, Angelo; Galli, Domenico; Perazzini, Stefano; Vagnoni, Vincenzo Maria; Zangoli, Maria
2010

Abstract

Amongst link technologies, InfiniBand has gained wide acceptance in the framework of High Performance Computing (HPC), due to its high bandwidth and in particular to its low latency. Since InfiniBand is very flexible, supporting several kinds of messages, it is suitable, in principle, not only for HPC, but also for the data acquisition systems of High Energy Physics (HEP) experiments. In order to check the InfiniBand capabilities in the framework of on-line systems of HEP experiments, we performed measurements with point-to-point UDP data transfers over a 4-lane Double Data Rate InfiniBand connection, by means of the IPoIB (IP over InfiniBand) protocol stack, using Host Channel Adapter cards mounted on an 8-lane PCI-Express bus of commodity PCs, both as transmitters and receivers, thus measuring not only the capacity of the link itself, but also the effort required by the host CPUs, buses and operating systems. Using either the “Unreliable Datagram” or the “Reliable Connected” InfiniBand transfer modes, we measured the maximum achievable UDP data transfer throughput, the frame rate and the CPU loads of the sender/receiver processes and of the interrupt handlers as a function of the datagram size.
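The kind of measurement described in the abstract can be illustrated with a minimal sketch (not the authors' benchmark code): one process blasts fixed-size UDP datagrams at a receiver, which counts received bytes and frames to derive throughput and frame rate. The addresses, port, datagram size and packet count below are illustrative assumptions; in an IPoIB test the sockets would simply be bound to the addresses of the InfiniBand (e.g. ib0) interfaces instead of loopback.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # assumed endpoints (ib0 addresses in a real IPoIB test)
DGRAM_SIZE = 8192                 # UDP payload size, swept in the actual measurements
N_PACKETS = 10000

def receiver(results):
    """Count received datagrams until the stream goes quiet, then report rates."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(2.0)          # stop after 2 s of silence
    rx_bytes = rx_frames = 0
    t0 = None
    try:
        while True:
            data = sock.recv(65535)
            if t0 is None:
                t0 = time.perf_counter()   # clock starts at the first datagram
            rx_bytes += len(data)
            rx_frames += 1
    except socket.timeout:
        pass
    sock.close()
    elapsed = time.perf_counter() - t0 - 2.0   # subtract the final timeout wait
    elapsed = max(elapsed, 1e-9)               # guard against a near-zero interval
    results["frames"] = rx_frames
    results["frame_rate"] = rx_frames / elapsed            # frames per second
    results["throughput_mbps"] = 8 * rx_bytes / elapsed / 1e6  # payload Mbit/s

results = {}
rx_thread = threading.Thread(target=receiver, args=(results,))
rx_thread.start()
time.sleep(0.2)                   # let the receiver bind before sending

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"\x00" * DGRAM_SIZE
for _ in range(N_PACKETS):
    tx.sendto(payload, (HOST, PORT))
tx.close()
rx_thread.join()
print(results)
```

Since UDP is unreliable, the receiver may count fewer frames than were sent when its socket buffer overflows; repeating the sweep over a range of `DGRAM_SIZE` values yields throughput and frame-rate curves as a function of the datagram size, in the spirit of the measurements above.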
2010 17th IEEE-NPSS Real Time Conference - Conference Record, pp. 1–6
Bortolotti D.; Carbone A.; Galli D.; Lax I.; Marconi U.; Peco G.; Perazzini S.; Vagnoni V. M.; Zangoli M.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/90623
Warning: the data shown here has not been validated by the university.

Citations
  • Scopus 0