
LHCb Computing TDR / G. Avoni; G. Balbi; M. Bargiotti; A. Bertin; D. Bortolotti; M. Bruschi; A. Carbone; S. de Castro; P. Faccioli; L. Fabbri; D. Galli; B. Giacobbe; D. Gregori; F. Grimaldi; I. Lax; U. Marconi; I. Massa; G. Peco; M. Piccinini; N. Semprini Cesari; R. Spighi; V. Vagnoni; S. Vecchi; M. Villa; A. Vitale; A. Zoccoli; LHCB COLLABORATION. - STAMPA. - (2005).

LHCb Computing TDR

BALBI, GABRIELE; BERTIN, ANTONIO; CARBONE, ANGELO; DE CASTRO, STEFANO; FABBRI, LAURA; GALLI, DOMENICO; GREGORI, DANIELE; GRIMALDI, FILIPPO; MASSA, IGNAZIO GIACOMO; PICCININI, MAURIZIO; SEMPRINI CESARI, NICOLA; VILLA, MAURO; VITALE, ANTONIO; ZOCCOLI, ANTONIO
2005

Abstract

The Offline Computing system must allow the LHCb physicists to process the collected data (about 20 billion events a year) efficiently, to align and calibrate the sub-detectors accurately, and to select events of interest, as well as provide facilities for extracting physics results from the selected samples. The measurements targeted by LHCb require very high precision; hence systematic errors must be controlled to a very high degree. Amongst the 2 kHz of HLT-accepted events, a large fraction is dedicated to precise calibration and understanding of the detector and its capabilities. Each group of physicists working on specific decay modes of B-particles will handle only a limited number of events; hence they rely heavily on a full central processing chain from the raw data to highly refined, pre-selected reconstructed data. It is expected that individual analyses will deal with only a few million pre-selected events, while manipulation of larger datasets will be handled centrally by a production team. The Computing project is responsible for providing the software infrastructure for all data processing applications (from the L1 trigger to event selection and physics analysis). It is also in charge of coordinating the computing resources (processing and storage) and of providing all the tools needed to manage the large amounts of data and the large numbers of processing jobs. To develop the software efficiently, for example when developing L1 or HLT applications using simulated data, it is beneficial to implement a high level of standardisation in the underlying software infrastructure. Algorithms must be executable in very different contexts, from the Online Event Filter Farm to a physicist's laptop. The Core Software sub-project is in charge of providing this software infrastructure.
The large data volumes and computing-power requirements imply that data processing must be performed in a distributed manner, taking best advantage of all resources at the sites that make their capacity available to the collaboration. These resources (CPU and storage) are expected to be accessible through a standard set of services provided to all LHC experiments, as well as to the larger HEP community and beyond. The LHC Computing Grid (LCG) project [4] is expected to provide these resources. The LHCb Collaboration is fully committed to participating in the LCG, by using and contributing to its common software projects and by making full use of the LCG computing Grid infrastructure. LHCb is expected to benefit from the developments made inside the LCG or made available through it. In particular, the offline software uses the software developed by the LCG Applications Area, and the distributed computing system (data management and job handling) uses the Grid infrastructure deployed by the LCG as well as the baseline services it provides.
Year: 2005
Pages: 1-104
ISBN: 978-92-908-3248-7

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/6054