PAGE OUT OF DATE.
Computing at the USC HEP Group
Within the computing section of our group, we carry out research in areas related to the submission of scientific jobs and their execution in the shortest possible time and at the least possible expense for the user. To achieve this goal we use the software developed by the DIRAC Consortium, which was spawned from the LHCb DIRAC project in order to continue developing the software independently of the LHCb experiment. We employ the latest technologies in use at the main computing centers around the globe, and we adapt them not only to the needs of our group but also to the needs of society.
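As an illustration of this job-submission workflow, the sketch below submits a trivial job through the DIRAC Python API. It assumes an installed and configured DIRAC client with a valid grid proxy; the job name, executable, and arguments are placeholders.

```python
# Minimal DIRAC job-submission sketch -- assumes a configured DIRAC client
# and a valid grid proxy. Names and arguments are illustrative only.
from DIRAC.Core.Base import Script
Script.parseCommandLine()  # initialize the DIRAC configuration machinery

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("usc-hep-example")                           # placeholder name
job.setExecutable("/bin/echo", arguments="hello grid")   # trivial payload
job.setCPUTime(3600)                                     # requested CPU time (s)

result = Dirac().submitJob(job)
if result["OK"]:
    print("Submitted job with ID %s" % result["Value"])
else:
    print("Submission failed: %s" % result["Message"])
```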
Our research is centered on the seamless integration, from the point of view of a user or a system administrator, of different distributed-computing technologies: Cloud Computing, a very active topic in Computer Science which allows for the dynamic management of a cluster, and the Grid, developed for the analysis of the data produced by the experiments at the LHC accelerator. The main tools used for this purpose are DIRAC, which in the Cloud case takes care of scheduling virtual machines through CloudStack, and CernVM-FS, which provides, from a central repository, the software needed for the jobs to run. All this puts the Experimental Particle Physics Group of the Universidad de Santiago at the forefront of the development of techniques for executing scientific computations.
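A minimal sketch of what CernVM-FS looks like from a worker node, assuming a standard CVMFS client with the LHCb repository configured (`/cvmfs/lhcb.cern.ch` is the conventional mount point; everything else is illustrative):

```python
# Minimal sketch: verify that CernVM-FS serves the LHCb software repository.
# Assumes a standard cvmfs client; /cvmfs/lhcb.cern.ch is the conventional
# mount point for the LHCb repository.
import os

repo = "/cvmfs/lhcb.cern.ch"
try:
    # Accessing the directory triggers the autofs mount on a typical client.
    entries = os.listdir(repo)
    print("Repository mounted; top-level entries: %s" % entries[:5])
except OSError as exc:
    print("CVMFS repository unavailable (%s); check the client config" % exc)
```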
Our group also operates a Tier2 GRID center for the LHCb experiment, in cooperation with CESGA and the University of Barcelona. This Tier2 site is integrated in the GRID infrastructure for the CERN experiments at the LHC.
Tier2 GRID Infrastructure located at IGFAE
- 6× DELL PowerEdge SC1425, dual-processor: 2× Xeon (Pentium 4 based) @ 2.8 GHz, 2 GB RAM (control nodes)
- 2× DELL PowerEdge SC1425, dual-processor: 2× Xeon (Pentium 4 based) @ 3.2 GHz, 2 GB RAM (control nodes)
- 25× CATON Siamés 1235, dual quad-core: 2× Xeon 5335 (1333 MHz FSB) @ 2.0 GHz, 8 GB RAM
- 10× Supermicro A+ Server 2022TG-HTRF chassis with 4 servers each; each server dual 8-core: 2× Opteron 6128 @ 2.0 GHz, 16 GB RAM
- 22× DELL PowerEdge R420, dual 6-core: 2× Xeon E5-2420 @ 1.90 GHz, 16 GB RAM
- 2× DELL PowerEdge R430, dual 12-core: 2× Xeon E5-2680 v3 @ 2.5 GHz, 48 GB RAM
Tier3 Infrastructure located at IGFAE
- 5× DELL PowerEdge R410, dual quad-core: 2× Xeon E5404 @ 2.0 GHz, 20 GB RAM
- 2× DELL PowerEdge R430, dual 12-core: 2× Xeon E5-2680 v3 @ 2.5 GHz, 96 GB RAM
- 5× DELL PowerEdge R430, dual 14-core: 2× Xeon E5-2680 v4 @ 2.4 GHz, 128 GB RAM
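As a rough capacity summary, the short sketch below tallies the nominal core and RAM totals of the two inventories above (counts are taken directly from the lists; hyper-threading is not counted and control nodes are excluded from the Tier2 worker total).

```python
# Tally nominal capacity from the hardware inventories above.
# Tuples are (number_of_servers, cores_per_server, ram_gb_per_server).
tier2_workers = [
    (25, 2 * 4, 8),    # CATON Siamés 1235, 2x quad-core Xeon 5335
    (40, 2 * 8, 16),   # 10 Supermicro chassis x 4 servers, 2x Opteron 6128
    (22, 2 * 6, 16),   # DELL PowerEdge R420, 2x Xeon E5-2420
    (2, 2 * 12, 48),   # DELL PowerEdge R430, 2x Xeon E5-2680 v3
]
tier3 = [
    (5, 2 * 4, 20),    # DELL PowerEdge R410, 2x Xeon E5404
    (2, 2 * 12, 96),   # DELL PowerEdge R430, 2x Xeon E5-2680 v3
    (5, 2 * 14, 128),  # DELL PowerEdge R430, 2x Xeon E5-2680 v4
]

for name, nodes in (("Tier2 workers", tier2_workers), ("Tier3", tier3)):
    cores = sum(n * c for n, c, _ in nodes)
    ram = sum(n * r for n, _, r in nodes)
    print("%s: %d cores, %d GB RAM" % (name, cores, ram))
```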
LCG Documents and user guides
- https://edms.cern.ch/file/454439/LCG-2-UserGuide.html
- https://edms.cern.ch/file/454439/LCG-2-UserGuide.pdf
- https://edms.cern.ch/file/454439/LCG-2-UserGuide.ps
- http://www.infn.it/workload-grid/documents.html
- http://www.infn.it/workload-grid/docs/DataGrid-01-TEN-0118-1_2.pdf
- LCFGng Server Installation Guide V 2.0 (.pdf)
- http://lcgdeploy.cvs.cern.ch/cgi-bin/lcgdeploy.cgi/lcg-docs/LCG2UserGuide/LCG-2-Userguide.pdf
- http://grid-deployment.web.cern.ch/grid-deployment/gis/lcg-2_1_0/LCG2InstallNotes/LCG2InstallNotes.html
- http://grid-deployment.web.cern.ch/grid-deployment/gis/release-docs/MIG-index.html
- http://grid-deployment.web.cern.ch/grid-deployment/cgi-bin/index.cgi?var=documentation
- http://lcgdeploy.cvs.cern.ch/cgi-bin/lcgdeploy.cgi/lcg-docs/
Accounting, monitoring and book-keeping of the LHCb DC04 production
|              | Production                  | Test |
| ------------ | --------------------------- | ---- |
| Monitoring   | http://lhcbweb.pic.es/DIRAC |      |
| Accounting   |                             |      |
| Book-keeping |                             |      |