D. Bortolotti, A. Carbone, D. Galli, I. Lax, U. Marconi, G. Peco, et al. (2011). Comparison of UDP Transmission Performance Between IP-Over-InfiniBand and 10-Gigabit Ethernet. IEEE Transactions on Nuclear Science, 58(4), 1606–1612. doi: 10.1109/TNS.2011.2114368.
Comparison of UDP Transmission Performance Between IP-Over-InfiniBand and 10-Gigabit Ethernet
Carbone, Angelo; Galli, Domenico; Perazzini, Stefano; Vagnoni, Vincenzo Maria; Zangoli, Maria
2011
Abstract
Amongst link technologies, InfiniBand has gained wide acceptance in the framework of High Performance Computing (HPC), due to its high bandwidth and in particular to its low latency. Since InfiniBand is very flexible, supporting several kinds of messages, it is suitable, in principle, not only for HPC, but also for the data acquisition systems of High Energy Physics (HEP) experiments. In order to check the InfiniBand capabilities in the framework of on-line systems of HEP experiments, we performed measurements with point-to-point UDP data transfers over a 4-lane Double Data Rate InfiniBand connection, by means of the IPoIB (IP over InfiniBand) protocol stack, using Host Channel Adapter cards mounted on an 8-lane PCI-Express bus of commodity PCs both as transmitters and receivers, thus measuring not only the capacity of the link itself, but also the effort required by the host CPUs, buses and Operating Systems. Using either the "Unreliable Datagram" or the "Reliable Connected" InfiniBand transfer modes, we measured the maximum achievable UDP data transfer throughput, the frame rate and the CPU loads of the sender/receiver processes and of the interrupt handlers as a function of the datagram size. The performance of InfiniBand in UDP point-to-point data transfers is then compared with that obtained in analogous tests performed between the same PCs using a 10-Gigabit Ethernet link.
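The measurement described in the abstract is a point-to-point UDP throughput test, sweeping the datagram size and recording throughput and frame rate at the receiver. The sketch below is not the authors' benchmark code; it is a minimal illustration, under assumed placeholder values for the receiver address, port, payload size and test duration, of how such a sender/receiver pair can be set up over an IPoIB or 10-Gigabit Ethernet interface. The CPU-load measurements of processes and interrupt handlers reported in the paper are outside the scope of this sketch.

```python
import socket
import sys
import time

# Hypothetical parameters (not taken from the paper): adjust to the test setup.
HOST = "192.168.1.2"   # receiver address on the IPoIB or 10GbE interface
PORT = 5001
DATAGRAM_SIZE = 8192   # UDP payload size in bytes, swept across measurement points
DURATION = 10.0        # seconds per measurement point

def sender():
    """Send fixed-size UDP datagrams as fast as possible for DURATION seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * DATAGRAM_SIZE
    sent = 0
    start = time.time()
    while time.time() - start < DURATION:
        sock.sendto(payload, (HOST, PORT))
        sent += 1
    print(f"sent {sent} datagrams of {DATAGRAM_SIZE} bytes")

def receiver():
    """Count received datagrams and bytes, then report throughput and frame rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    sock.settimeout(2.0)           # stop counting after the sender goes quiet
    received, nbytes = 0, 0
    start = last = None
    try:
        while True:
            data, _ = sock.recvfrom(65535)
            now = time.time()
            if start is None:
                start = now
            last = now
            received += 1
            nbytes += len(data)
    except socket.timeout:
        pass
    if start is not None and last > start:
        elapsed = last - start
        print(f"{received} datagrams, "
              f"{8 * nbytes / elapsed / 1e9:.2f} Gbit/s, "
              f"{received / elapsed:.0f} frames/s")

if __name__ == "__main__":
    # Run "python udp_test.py recv" on the receiver host, then the sender with no argument.
    receiver() if len(sys.argv) > 1 and sys.argv[1] == "recv" else sender()
```

Repeating the run for a range of DATAGRAM_SIZE values reproduces, in spirit, the throughput and frame-rate curves as a function of datagram size discussed in the paper.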