D'Onofrio, A., Diotalevi, T., Gravili, F.G., Loffredo, S., Rossi, E., Simone, F.M., et al. (2025). Leveraging distributed resources through high throughput analysis platforms for enhancing HEP data analyses. EDP Sciences. doi:10.1051/epjconf/202533701035
Leveraging distributed resources through high throughput analysis platforms for enhancing HEP data analyses
Diotalevi, Tommaso
2025
Abstract
The analysis of data collected by the ATLAS and CMS experiments at CERN, ahead of the High-Luminosity LHC phase, requires flexible and dynamic access to large amounts of data, as well as an environment capable of dynamically accessing distributed resources. An interactive high-throughput platform, based on a parallel and geographically distributed back-end, has been developed within the framework of the “Italian Research Center for High Performance Computing, Big Data, and Quantum Computing” (ICSC), providing experiment-agnostic resources. Built on container technology and orchestrated via Kubernetes, the platform provides analysis tools through the Jupyter interface and the Dask scheduling system, hiding complexity from front-end users and exposing cloud resources flexibly. An overview of the technologies involved and results on benchmark use cases are presented, with suitable metrics to evaluate the preliminary performance of the workflow. Legacy analysis workflows are compared with the interactive, distributed approach on several metrics, from event throughput to resource consumption. The use cases include the search for direct pair production of supersymmetric particles and for dark matter in events with two opposite-charge leptons, jets, and missing transverse momentum using data collected by the ATLAS detector in Run 2, and searches for rare flavor decays at the CMS experiment in Run 3 using large datasets collected by high-rate dimuon triggers.
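As a minimal sketch of the interactive workflow the abstract describes, the snippet below shows a Jupyter-style session connecting a Dask client to the platform's distributed scheduler and fanning a per-chunk event-selection task out over the workers. The scheduler address and the process_chunk logic are illustrative assumptions for this sketch, not the platform's actual configuration.

    # Sketch: connect a Dask client to a distributed scheduler and map
    # an analysis task over data chunks, as in the Jupyter + Dask setup
    # described above. Scheduler endpoint and chunking are hypothetical.
    from dask.distributed import Client

    client = Client("tcp://dask-scheduler:8786")  # assumed scheduler endpoint

    def process_chunk(chunk_id):
        # Placeholder for per-chunk event selection (e.g. dilepton + MET
        # cuts); a real analysis would read a slice of the input dataset here.
        return {"chunk": chunk_id, "selected_events": chunk_id * 10}

    futures = client.map(process_chunk, range(100))  # one task per chunk
    results = client.gather(futures)                 # collect partial results
    total = sum(r["selected_events"] for r in results)
    print(f"Selected events across all chunks: {total}")

In this scheme the notebook user only sees the Client handle; where the workers actually run (local cluster, cloud, or geographically distributed sites) is decided by the Kubernetes-orchestrated back-end, which is what lets the platform expose heterogeneous resources transparently.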


