
Bringing the CMS distributed computing system into scalable operations / S. Belforte; A. Fanfani; I. Fisk; J. Flix; J.M. Hernandez; T. Kress; J. Letts; N. Magini; V. Miccio; A. Sciaba. - In: JOURNAL OF PHYSICS. CONFERENCE SERIES. - ISSN 1742-6596. - ELECTRONIC. - 219, 062015:(2010), pp. 1-11. (Paper presented at the 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP'09), held in Prague, Czech Republic, 21-27 March 2009) [10.1088/1742-6596/219/6/062015].

Bringing the CMS distributed computing system into scalable operations

FANFANI, ALESSANDRA;
2010

Abstract

Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows, and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in past years, with increasing scale and complexity, have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis, including the sites and the workload and data management tools; validating the distributed production system through functionality, reliability and scale tests; helping sites to commission, configure and optimize their networking and storage through scale-testing data transfers and data processing; and improving the efficiency of data access across the CMS computing system, from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing, as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers, aimed at stressing the experiment and Grid data and workload management systems; site commissioning procedures and tools to monitor and improve site availability and reliability; and activities targeted at the commissioning of the distributed production, user analysis and monitoring systems.
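The load-generation idea described in the abstract can be illustrated with a minimal sketch. All names below are hypothetical stand-ins: a real CMS load generator would drive the experiment's actual Grid submission chain rather than the simulated `submit_job` call used here, and the failure rate is a synthetic parameter for demonstration only.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a Grid job submission call; a real load
# generator would invoke the experiment's workload management client here.
def submit_job(job_id, failure_rate=0.1):
    time.sleep(0.001)  # simulate submission latency
    return (job_id, random.random() >= failure_rate)

def run_load_test(n_jobs=100, workers=8, failure_rate=0.1):
    """Submit n_jobs concurrently and tally successes vs. failures."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda i: submit_job(i, failure_rate),
                                range(n_jobs)))
    ok = sum(1 for _, success in results if success)
    return {"submitted": n_jobs, "succeeded": ok, "failed": n_jobs - ok}

# Drive a small burst of submissions and report aggregate statistics,
# the kind of success/failure tally a commissioning test would monitor.
stats = run_load_test(n_jobs=50, workers=4, failure_rate=0.0)
print(stats)
```

The point of such a harness is not the simulated submission itself but the sustained, concurrent pressure it puts on the submission and transfer services, so that bottlenecks surface before real production load arrives.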
2010, pp. 1-11
S. Belforte; A. Fanfani; I. Fisk; J. Flix; J.M. Hernandez; T. Kress; J. Letts; N. Magini; V. Miccio; A. Sciaba
Files in this item:
Attachments, if any, are not shown

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/100928
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science (ISI): 1