Atlas:SC4

An article from lcgwiki.
 
 
== Information from DDM monitoring ==

* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png
----
 
 
== VOBOX Configuration ==

* 4 processors 3 GHz
* 4 GB of memory (2 GB dedicated to SWAP)
* Daily, weekly and monthly monitoring of the VOBOX can be found [http://atlas-france.in2p3.fr/Activites/Informatique/OutilsCC/VO-cclcgatlas here]
 
== Disk space availability ==

* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=BEIJING-LCG2&visibility=SE BEIJING]
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=CPPM-LCG2&visibility=SE CPPM]
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&visibility=SE GRIF]
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&visibility=SE LPC]
== Daily news ==

* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]
* 20 June 2006: Mail from Miguel Branco (DDM responsible)

<span style="color:#663300;">Today we started deploying DQ2 on the remaining T1 sites (not all sites still available).<br/>
Attached is the result of a (nice) ramp-up, easily beating SC3's record (on the 1st day of export of SC4), peaking at ~270 MB/s. Each 'step' in the graph is an additional T1 being added to the export.<br/>
Dataset subscriptions are now slowing down and will resume tomorrow. Our DQ2 monitoring has been turned off and we expect to have it back tomorrow! Still a long way to go until we have a reasonable understanding of the limiting factors.</span>

[[Image:Atlas day-jpeg.jpg]]
 
 
 
* 22 June: General power cut at CERN at 2 pm.
 
 
 
* 24 June : Dataset T0.D.run000949.ESD transferred from Lyon to LAL and TOKYO. Transferring the same dataset to LAPP and LPC failed because these sites have the same domain name (*.in2p3.fr) as Lyon.
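The failure mode above can be illustrated with a toy check (a sketch with invented hostnames, not the actual SC4 site-matching code): a naive "same site" rule that compares everything after the first DNS label lumps any *.in2p3.fr storage element together with Lyon.

```python
# Hypothetical illustration of the domain-name pitfall described above.
# Hostnames are invented; this is not the real DQ2/FTS logic.
def same_site(host_a: str, host_b: str) -> bool:
    # naive rule: two hosts belong to "the same site" if everything
    # after the first DNS label matches
    return host_a.split(".", 1)[-1] == host_b.split(".", 1)[-1]

lyon_se = "ccsrm.in2p3.fr"   # invented Lyon SE hostname
lapp_se = "se01.in2p3.fr"    # invented LAPP SE hostname
print(same_site(lyon_se, lapp_se))  # True: LAPP wrongly treated as Lyon-local
```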
 
 
 
* 25 June : Almost no transfers from CERN to T1s during the weekend.
 
 
 
* 26 June : SC4 transfers restarted with working DDM monitoring. Successful transfers to LAL, SACLAY and TOKYO. Technical problem (domain name) for LAPP, LPNHE and LPC: under investigation. Contact BEIJING.
 
* 28 June :
** Problem of domain name solved for LAPP, LPC and LPNHE. First transfers to these sites have been done.
** Increased the number of LFC connections to 40 (advice from CERN-IT and DDM).
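As a generic sketch of what such a connection cap does (client-side analogue only; the actual LFC limit is a server daemon setting, not this code), a semaphore bounds how many catalog operations run at once:

```python
# Generic sketch, not LFC code: modelling a cap of 40 concurrent
# catalog connections with a semaphore.
import threading

MAX_CONNECTIONS = 40
slots = threading.BoundedSemaphore(MAX_CONNECTIONS)

def with_catalog_connection(work):
    """Run `work` while holding one of the MAX_CONNECTIONS slots."""
    with slots:  # blocks when all 40 slots are already in use
        return work()

print(with_catalog_connection(lambda: "lookup done"))
```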
 
<span style="color:#663300;"> AODs were transferred to all T2s associated to Lyon except BEIJING (looks like an FTS problem) </span>
 
  
** Transferred all AODs to LAL. Problems transferring AODs to LAPP (one dCache server crashed)

* 29 June :
** Transfer of AODs to TOKYO

* 1 July
** Test transfers from LYONDISK to T2s

[[Image:T1-T2-0107-3.jpg]]

* 5 July :
** First DDM transfer to BEIJING

* 6 July
** [[Image:StackdayLYON.png]]

* 17 July
** Simultaneous transfers to T2 sites including BEIJING
** [[Image:StackdayLYON-17-7.png]]

* 27 July
** Simultaneous transfers to the 7 T2 sites for more than 24 hours; it reached more than 25 MB/s
** [[Image:StackdayLYON-27-7.png]]

== Post-mortem DDM meeting ==

* [http://indico.cern.ch/conferenceDisplay.py?confId=4959 Presentation at CERN (M. Branco) showing the results of the SC4 tests] (1 August 2006)

== [[Atlas | Page principale du Twiki ATLAS]] ==

Latest revision as of 14:20, 1 October 2006

Welcome to the LCG-France Atlas SC4 page

Twiki page : SC4 ATLAS

Minutes of the ATLAS SC4 meeting at CERN on 9 June (S. Jézéquel, G. Rahal) (written in French)

* T0 Role (CERN)
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see T0 Twiki)
** Initiate T0->T1 transfers
** The FTS server sends files to Lyon, choosing between the 'TAPE' (RAW, 43.2 MB/s) and 'DISK' (ESD and AOD, 23+20 MB/s) areas
* T1 Role (CCIN2P3)
** Get files from T0 (dedicated dCache area: L. Schwarz)
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server
** Regularly clean up files
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...
** Clean up the files (?)
* Other roles
** ATLAS (S. Jézéquel, G. Rahal) : Initiate T1->T2 transfers
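The role split above can be sketched as a toy model (hypothetical code, not DQ2 itself): the T0 produces RAW/ESD/AOD files, the T1 keeps everything, and only the AODs are fanned out from the T1 to every T2.

```python
# Toy model of the T1->T2 step described above (illustrative only).
T2_SITES = ["BEIJING", "LAL", "LAPP", "LPC", "LPNHE", "SACLAY", "TOKYO"]

def t1_to_t2_plan(files):
    """Return {t2_site: [files to send]}: every T2 receives every AOD."""
    aods = [f for f in files if f.endswith(".AOD")]
    return {site: list(aods) for site in T2_SITES}

plan = t1_to_t2_plan(["run000949.RAW", "run000949.ESD", "run000949.AOD"])
print(len(plan), plan["TOKYO"])  # 7 sites, each receiving only the AOD
```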


== Information from FTS monitoring ==

* T1->T2 :
** 15 concurrent files and 10 streams for LYON-TOKYO
** 5 concurrent files and 5 streams for LYON-BEIJING (the SE is not powerful enough for 15/15)
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY), except LAPP (5 concurrent files and 1 stream)
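The product of concurrent files and streams per file bounds the parallel TCP streams each channel can open against its storage element, which is one way to compare the load on each site. The per-site channel names below are written out for illustration (the source groups the French T2s into one line):

```python
# FTS channel settings from the list above, as (concurrent files, streams).
channels = {
    "LYON-TOKYO":   (15, 10),
    "LYON-BEIJING": (5, 5),
    "LYON-LAL":     (10, 10),
    "LYON-LPNHE":   (10, 10),
    "LYON-LPC":     (10, 10),
    "LYON-SACLAY":  (10, 10),
    "LYON-LAPP":    (5, 1),
}
# upper bound on simultaneous TCP streams per channel
max_streams = {ch: nf * ns for ch, (nf, ns) in channels.items()}
print(max_streams["LYON-TOKYO"], max_streams["LYON-LAPP"])  # 150 5
```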

== Information from dCache monitoring (provided by Lyon) ==


