<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://lcg.in2p3.fr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Grahal</id>
	<title>lcgwiki - Contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://lcg.in2p3.fr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Grahal"/>
	<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/Special:Contributions/Grahal"/>
	<updated>2026-05-15T23:09:16Z</updated>
	<subtitle>Contributions</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2809</id>
		<title>Draft of the scientific programm</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2809"/>
		<updated>2007-02-22T15:20:17Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Etat des lieux pour le calcul au LHC en France */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Computing models draft, 31/01/2007&lt;br /&gt;
==== Objectives ====&lt;br /&gt;
LCG-France is holding its 2nd colloquium in Clermont-Ferrand on 14 and 15 March 2007. These two days are intended for all actors of the LHC computing grid (site administrators and users) at IN2P3 and Dapnia. Their aim is to provide a forum for exchange and communication on the ongoing actions, ideas and experience in setting up LHC computing within the LCG project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Scientific Programme, version 2 (9/2/07)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt;  14 March, morning (3h30): 9:00-10:45 and 11:15-13:00; break 10:45-11:15&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; 9:00-9:05 Welcome ====&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Ghita Rahal, CC-IN2P3; Dominique Pallin, LPC-Clermont &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*       09:05-09:15  - Overall grid infrastructure in France and associated resources, F. Malek&lt;br /&gt;
*       09:20-09:50  - Deployment of the French grid: T1, F. Hernandez&lt;br /&gt;
*       09:55-10:20  - Deployment of the French grid: T2 and T3, F. Chollet&lt;br /&gt;
&lt;br /&gt;
*       10:25-10:45 &amp;amp; 11:15-13:00 - Status of computing in the experiments:&lt;br /&gt;
***      The experiments&#039; computing models (overview)&lt;br /&gt;
***      What participation in setting up computing (in France)?&lt;br /&gt;
***      Match between infrastructure/resources and needs&lt;br /&gt;
***      Status of simulated-data production&lt;br /&gt;
***      Progress towards the goal of data taking at the end of 2007&lt;br /&gt;
***      Difficulties and points to improve…&lt;br /&gt;
*	10:25-10:45  - ALICE, Y. Schutz&lt;br /&gt;
*       10:45-11:15 ----------------- Break ---------------------------&lt;br /&gt;
*	11:15-11:35  - ATLAS, E. Lancon&lt;br /&gt;
*	11:40-12:00  - CMS, C. Charlot&lt;br /&gt;
*	12:05-12:25  - LHCb, A. Tsaregorodtsev&lt;br /&gt;
*	12:30-13:00  - Discussion / round table on the whole session&lt;br /&gt;
*** the French grid, its deployment and use, from the point of view of sites, LHC collaborations and users&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March, afternoon (3h): 14:45-16:15 and 16:45-18:15; break 16:15-16:45&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Management and operation of computing grids ====  &lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;Covering two complementary aspects: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- site actions -&amp;gt; their consequences for users&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- user actions -&amp;gt; their consequences for sites&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Christine Leroy, Dapnia-Saclay; Pierre Girard, engineer, CC-IN2P3&lt;br /&gt;
&lt;br /&gt;
*	how a grid site operates:&lt;br /&gt;
from middleware deployment to production management, including the procedures it must follow, such as declaring &amp;quot;scheduled downtime&amp;quot;, using the official monitoring tools (SAM, GSTAT, etc.) and their consequences (partial or complete removal from production, etc.), security, etc.&lt;br /&gt;
&lt;br /&gt;
*	grid job management:&lt;br /&gt;
from how jobs are submitted by users to how they are handled by sites (including tuning the rank formulas used to select a submission site, a site&#039;s lack of attractiveness for jobs or the opposite, management of job priorities, pilot jobs, executors, etc.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;A - Sub-session &amp;quot;Grid job management&amp;quot;&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this first sub-session, each VO is asked to clarify the following points:&lt;br /&gt;
&amp;lt;P&amp;gt;&lt;br /&gt;
&amp;lt;OL&amp;gt;&lt;br /&gt;
&amp;lt;LI&amp;gt;Method(s) for distributing jobs across the sites:&lt;br /&gt;
  &amp;lt;UL&amp;gt;&lt;br /&gt;
  &amp;lt;LI&amp;gt;is an RB used or not?&amp;lt;/LI&amp;gt;&lt;br /&gt;
  &amp;lt;LI&amp;gt;is the published information used or not?&amp;lt;/LI&amp;gt;&lt;br /&gt;
  &amp;lt;LI&amp;gt;criteria for selecting a site? etc.&amp;lt;/LI&amp;gt;&lt;br /&gt;
  &amp;lt;/UL&amp;gt;&lt;br /&gt;
&amp;lt;/LI&amp;gt;&lt;br /&gt;
&amp;lt;LI&amp;gt;Organisation of production&lt;br /&gt;
   &amp;lt;UL&amp;gt;&lt;br /&gt;
   &amp;lt;LI&amp;gt;who does what, who submits what, roles, priority matters…&amp;lt;/LI&amp;gt;&lt;br /&gt;
   &amp;lt;LI&amp;gt;a French production or not&amp;lt;/LI&amp;gt;&lt;br /&gt;
   &amp;lt;LI&amp;gt;A job monitoring system? If so, is it usable by the sites? What is its operating principle, etc.&amp;lt;/LI&amp;gt;&lt;br /&gt;
   &amp;lt;LI&amp;gt;proxy management&amp;lt;/LI&amp;gt;&lt;br /&gt;
   &amp;lt;LI&amp;gt;Software installation (and removal) method&amp;lt;/LI&amp;gt;&lt;br /&gt;
   &amp;lt;LI&amp;gt;how a new OS is validated and the information passed on&amp;lt;/LI&amp;gt;&lt;br /&gt;
&lt;br /&gt;
   &amp;lt;/UL&amp;gt;&lt;br /&gt;
&amp;lt;/LI&amp;gt;&lt;br /&gt;
&amp;lt;LI&amp;gt;Expected improvements and outlook for job management&amp;lt;/LI&amp;gt;&lt;br /&gt;
&amp;lt;/OL&amp;gt;&lt;br /&gt;
&amp;lt;/P&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  14:45 Introduction (5&#039;, Christine/Pierre)&lt;br /&gt;
  14:50 ALICE specifics (10&#039;, Artem?)&lt;br /&gt;
  15:00 ATLAS specifics (10&#039;, Jérôme Schwi? Stéphane? Karim?)&lt;br /&gt;
  15:10 CMS specifics (10&#039;, Claude? Artem?)&lt;br /&gt;
  15:20 LHCb specifics (10&#039;, Andrei or Sabine?)&lt;br /&gt;
  15:30 Discussion (45&#039;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;B - Sub-session &amp;quot;Site operations&amp;quot;&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;P&amp;gt;&lt;br /&gt;
  &amp;lt;B&amp;gt;16:45 &amp;quot;Overall grid operations&amp;quot; (20&#039; + 10&#039;, Hélène C.)&amp;lt;/B&amp;gt;&lt;br /&gt;
  &amp;lt;UL&amp;gt;&lt;br /&gt;
    &amp;lt;LI&amp;gt;Interaction between the operations-management tools and production&amp;lt;/LI&amp;gt;&lt;br /&gt;
    &amp;lt;LI&amp;gt;How users can access the information&amp;lt;/LI&amp;gt;&lt;br /&gt;
    &amp;lt;LI&amp;gt;The operations tools (GOC DB, SAM, GSTAT, MonALISA and the CIC Portal)&amp;lt;/LI&amp;gt;&lt;br /&gt;
    &amp;lt;LI&amp;gt;Accounting&amp;lt;/LI&amp;gt;&lt;br /&gt;
    &amp;lt;LI&amp;gt;How to measure a site&#039;s efficiency (CPU used / total CPU; resources available / resources unavailable; failed jobs / successful jobs)&amp;lt;/LI&amp;gt;&lt;br /&gt;
  &amp;lt;/UL&amp;gt;&lt;br /&gt;
&amp;lt;/P&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;P&amp;gt;&lt;br /&gt;
  &amp;lt;B&amp;gt;17:15 &amp;quot;Running a site&amp;quot;? (45&#039;, 3 speakers?)&amp;lt;/B&amp;gt;&lt;br /&gt;
  &amp;lt;OL&amp;gt;&lt;br /&gt;
    &amp;lt;LI&amp;gt;Job tracking (15&#039;, David Bouvet)&lt;br /&gt;
    &amp;lt;UL&amp;gt;&lt;br /&gt;
      &amp;lt;LI&amp;gt;Can a crashed job be traced to find out what resources it used and why it crashed?&amp;lt;/LI&amp;gt;&lt;br /&gt;
      &amp;lt;LI&amp;gt;Can we make sure that a job that ran successfully ran with the right libraries (do sites perform checks?)&amp;lt;/LI&amp;gt;&lt;br /&gt;
      &amp;lt;LI&amp;gt;Detection of jobs that crash instantly because of a problem on a WN?&amp;lt;/LI&amp;gt;&lt;br /&gt;
    &amp;lt;/UL&amp;gt;&lt;br /&gt;
    &amp;lt;/LI&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;LI&amp;gt;Failure management (15&#039;, ?)&lt;br /&gt;
    &amp;lt;UL&amp;gt;&lt;br /&gt;
       &amp;lt;LI&amp;gt;Local monitoring: do you have a way to check the &amp;quot;integrity&amp;quot; of the WNs? Disk space available on each WN&amp;lt;/LI&amp;gt;&lt;br /&gt;
    &amp;lt;/UL&amp;gt;&lt;br /&gt;
    &amp;lt;/LI&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;LI&amp;gt;Scheduling and tuning (15&#039;, Michel Jouvin)&lt;br /&gt;
    &amp;lt;UL&amp;gt;&lt;br /&gt;
      &amp;lt;LI&amp;gt;Can analysis jobs and simulation jobs be mixed on the WNs (to optimise I/O and CPU)?&amp;lt;/LI&amp;gt;&lt;br /&gt;
      &amp;lt;LI&amp;gt;A site&#039;s lack of attractiveness for jobs, or the opposite =&amp;gt; current status and middleware improvements in sight, if any&amp;lt;/LI&amp;gt;&lt;br /&gt;
      &amp;lt;LI&amp;gt;management of job priorities&amp;lt;/LI&amp;gt;&lt;br /&gt;
    &amp;lt;/UL&amp;gt;&lt;br /&gt;
    &amp;lt;/LI&amp;gt;&lt;br /&gt;
  &amp;lt;/OL&amp;gt;&lt;br /&gt;
&amp;lt;/P&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;P&amp;gt;&lt;br /&gt;
  &amp;lt;B&amp;gt;17:45 &amp;quot;Managing a site&#039;s infrastructure&amp;quot; (30&#039;, Pierre-Louis, Clermont)&amp;lt;/B&amp;gt;&lt;br /&gt;
&amp;lt;/P&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Setting up a site ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;The example of the LPC Clermont-Ferrand T2. Site visits possible in small groups during the two days&lt;br /&gt;
&lt;br /&gt;
*	Machine room (electrical supply, air conditioning, …)&lt;br /&gt;
*	Hardware choices&lt;br /&gt;
*	Security&lt;br /&gt;
*	Difficulties encountered&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March, morning (3h30): 9:00-10:45 and 11:15-13:00; break 10:45-11:15&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Grid data management ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of sites, LHC collaborations and users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Lionel Schwarz (CC), Stéphane Jézequel (LAPP, Atlas) &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	Bulk data transfers&lt;br /&gt;
The goal of this part is to understand the use of FTS and&lt;br /&gt;
of the file-movement tools (Phedex, DDM…): what works and&lt;br /&gt;
what does not. It would be useful to recall the path taken by the different&lt;br /&gt;
data sets and which data are under which site&#039;s responsibility.&lt;br /&gt;
Also, the organisation of the experiments&#039; data into T2-T3,&lt;br /&gt;
clouds, etc. Figures should be obtained for the expected&lt;br /&gt;
transfer rates between T1 and T2.&lt;br /&gt;
** Status of the sites&#039; network and software infrastructure (FTS, SRM) (5&#039;) - L. Schwarz&lt;br /&gt;
** Bulk transfers, ALICE (10&#039;)&lt;br /&gt;
** Bulk transfers, ATLAS (10&#039;)&lt;br /&gt;
** Bulk transfers, CMS (10&#039;)&lt;br /&gt;
** Bulk transfers, LHCb (10&#039;)&lt;br /&gt;
** Discussion (15&#039;)&lt;br /&gt;
&lt;br /&gt;
* Data access&lt;br /&gt;
This part addresses the problem of data access by&lt;br /&gt;
jobs. Which protocol is envisaged? Local or remote access?&lt;br /&gt;
Expected throughput, number of jobs? The problem of user data? Coexistence of&lt;br /&gt;
data transfers and analysis access. Data downloads…&lt;br /&gt;
** LHCb data-access model (10&#039;)&lt;br /&gt;
** CMS data-access model (10&#039;)&lt;br /&gt;
** ATLAS data-access model (10&#039;)&lt;br /&gt;
** ALICE data-access model (10&#039;)&lt;br /&gt;
** Discussion (30&#039;)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March, afternoon (2h40): 14:15-15:45 and 16:05-17:15; break 15:45-16:05&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Analysis centres ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of sites, LHC collaborations and users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Eric Lancon, Dapnia-Saclay; Claude Charlot, LLR; Frédéric Derue, LPNHE &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	setting up&lt;br /&gt;
*	scope&lt;br /&gt;
*	analysis software (Ganga, …)&lt;br /&gt;
*	Coordination with the T3s? Pooling of resources?&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Conclusions, F. Malek ====&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2722</id>
		<title>Draft of the scientific programm</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2722"/>
		<updated>2007-01-29T14:12:46Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Etat des lieux pour le calcul au LHC en France */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;D.P., F.M. 7/12/2006&lt;br /&gt;
==== Objectives ====&lt;br /&gt;
LCG-France is holding its 2nd colloquium in Clermont-Ferrand on 14 and 15 March 2007. These two days are intended for all actors of the LHC computing grid (site administrators and users) at IN2P3 and Dapnia. Their aim is to provide a forum for exchange and communication on the ongoing actions, ideas and experience in setting up LHC computing within the LCG project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Scientific Programme, version 1 (7/12/06)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt;  14 March, morning (3h): 9:30-11:00 and 11:30-13:00; break 11:00-11:30&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Ghita Rahal, CC-IN2P3; Dominique Pallin, LPC-Clermont &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	 9:30-10:00  - Overall grid infrastructure in France and associated resources&lt;br /&gt;
**20 min + 10 min discussion: Fayrouz Malek?&lt;br /&gt;
*       10:00-10:30  - Presentation of the sites (T1 and associated sites, T2 and T3) + associated foreign T2s&lt;br /&gt;
**20 min + 10 min discussion: Dominique Boutigny, Frédérique Chollet?&lt;br /&gt;
*	10:30-11:00  - The grid:&lt;br /&gt;
**20 min + 10 min discussion: Fabio Hernandez?&lt;br /&gt;
***Its evolution in view of the 2009-2010 LHC data&lt;br /&gt;
***Difficulties encountered&lt;br /&gt;
*       ----------------------------  Break ---------------------------&lt;br /&gt;
*       11:30-13:00 - Status of computing in the experiments:&lt;br /&gt;
**20 min per experiment + 10 min discussion: Eric Lançon, Schutz, Tsaregorodtsev, Claude Charlot?&lt;br /&gt;
***      The experiments&#039; computing models&lt;br /&gt;
***      Who is involved in setting up computing?&lt;br /&gt;
***      Match between infrastructure/resources and needs&lt;br /&gt;
***      Status of simulated-data production&lt;br /&gt;
***      Progress towards the goal of data taking at the end of 2007&lt;br /&gt;
***      Difficulties and points to improve…&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March, afternoon (3h): 14:45-16:15 and 16:45-18:15; break 16:15-16:45&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Management and operation of computing grids ====  &lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;Covering two complementary aspects: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- site actions -&amp;gt; their consequences for users&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- user actions -&amp;gt; their consequences for sites&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Christine Leroy, Dapnia-Saclay; Pierre Girard, engineer, CC-IN2P3&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	how a grid site operates:&lt;br /&gt;
from middleware deployment to production management, including the procedures it must follow, such as declaring &amp;quot;scheduled downtime&amp;quot;, using the official monitoring tools (SAM, GSTAT, etc.) and their consequences (partial or complete removal from production, etc.), security, etc.&lt;br /&gt;
&lt;br /&gt;
*	grid job management:&lt;br /&gt;
from how jobs are submitted by users to how they are handled by sites (including tuning the rank formulas used to select a submission site, a site&#039;s lack of attractiveness for jobs or the opposite, management of job priorities, pilot jobs, executors, etc.)&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Setting up a site ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;The example of the LPC Clermont-Ferrand T2. Site visits possible in small groups during the two days&lt;br /&gt;
&lt;br /&gt;
*	Machine room (electrical supply, air conditioning, …)&lt;br /&gt;
*	Hardware choices&lt;br /&gt;
*	Security&lt;br /&gt;
*	Difficulties encountered&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March, morning (3h30): 9:00-10:45 and 11:15-13:00; break 10:45-11:15&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Grid data management ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of sites, LHC collaborations and users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Lionel Schwarz, CC-IN2P3?; ALICE and LHCb representatives? &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	Networks, traffic&lt;br /&gt;
*	Data transfer: SRM, FTS, …&lt;br /&gt;
*	Storage?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March, afternoon (2h40): 14:15-15:45 and 16:05-17:15; break 15:45-16:05&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Analysis centres ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of sites, LHC collaborations and users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Eric Lancon, Dapnia-Saclay; Claude Charlot? &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	setting up&lt;br /&gt;
*	scope&lt;br /&gt;
*	analysis software (Ganga, …)&lt;br /&gt;
*	Coordination with the T3s? Pooling of resources?&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Research grid: Grid5000? ====&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2721</id>
		<title>Draft of the scientific programm</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2721"/>
		<updated>2007-01-29T14:11:49Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Etat des lieux pour le calcul au LHC en France */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;D.P., F.M. 7/12/2006&lt;br /&gt;
==== Objectives ====&lt;br /&gt;
LCG-France is holding its 2nd colloquium in Clermont-Ferrand on 14 and 15 March 2007. These two days are intended for all actors of the LHC computing grid (site administrators and users) at IN2P3 and Dapnia. Their aim is to provide a forum for exchange and communication on the ongoing actions, ideas and experience in setting up LHC computing within the LCG project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Scientific Programme, version 1 (7/12/06)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt;  14 March, morning (3h): 9:30-11:00 and 11:30-13:00; break 11:00-11:30&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Ghita Rahal, CC-IN2P3; Dominique Pallin, LPC-Clermont &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	 9:30-10:00  - Overall grid infrastructure in France and associated resources&lt;br /&gt;
**20 min + 10 min discussion: Fayrouz Malek?&lt;br /&gt;
*       10:00-10:30  - Presentation of the sites (T1 and associated sites, T2 and T3) + associated foreign T2s&lt;br /&gt;
**20 min + 10 min discussion: Dominique Boutigny, Frédérique Chollet?&lt;br /&gt;
*	10:30-11:00  - The grid:&lt;br /&gt;
**20 min + 10 min discussion: Fabio Hernandez?&lt;br /&gt;
***Its evolution in view of the 2009-2010 LHC data&lt;br /&gt;
***Difficulties encountered&lt;br /&gt;
*       ----------------------------  Break ---------------------------&lt;br /&gt;
*       11:30-12:50 - Status of computing in the experiments:&lt;br /&gt;
**20 min per experiment + 10 min discussion: Eric Lançon, Schutz, Tsaregorodtsev, Claude Charlot?&lt;br /&gt;
***      The experiments&#039; computing models&lt;br /&gt;
***      Who is involved in setting up computing?&lt;br /&gt;
***      Match between infrastructure/resources and needs&lt;br /&gt;
***      Status of simulated-data production&lt;br /&gt;
***      Progress towards the goal of data taking at the end of 2007&lt;br /&gt;
***      Difficulties and points to improve…&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March, afternoon (3h): 14:45-16:15 and 16:45-18:15; break 16:15-16:45&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Management and operation of computing grids ====  &lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;Covering two complementary aspects: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- site actions -&amp;gt; their consequences for users&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- user actions -&amp;gt; their consequences for sites&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Christine Leroy, Dapnia-Saclay; Pierre Girard, engineer, CC-IN2P3&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	how a grid site operates:&lt;br /&gt;
from middleware deployment to production management, including the procedures it must follow, such as declaring &amp;quot;scheduled downtime&amp;quot;, using the official monitoring tools (SAM, GSTAT, etc.) and their consequences (partial or complete removal from production, etc.), security, etc.&lt;br /&gt;
&lt;br /&gt;
*	grid job management:&lt;br /&gt;
from how jobs are submitted by users to how they are handled by sites (including tuning the rank formulas used to select a submission site, a site&#039;s lack of attractiveness for jobs or the opposite, management of job priorities, pilot jobs, executors, etc.)&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Setting up a site ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;The example of the LPC Clermont-Ferrand T2. Site visits possible in small groups during the two days&lt;br /&gt;
&lt;br /&gt;
*	Machine room (electrical supply, air conditioning, …)&lt;br /&gt;
*	Hardware choices&lt;br /&gt;
*	Security&lt;br /&gt;
*	Difficulties encountered&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March, morning (3h30): 9:00-10:45 and 11:15-13:00; break 10:45-11:15&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Grid data management ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of sites, LHC collaborations and users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Lionel Schwarz, CC-IN2P3?; ALICE and LHCb representatives? &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	Networks, traffic&lt;br /&gt;
*	Data transfer: SRM, FTS, …&lt;br /&gt;
*	Storage?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March, afternoon (2h40): 14:15-15:45 and 16:05-17:15; break 15:45-16:05&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Analysis centres ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of sites, LHC collaborations and users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Eric Lancon, Dapnia-Saclay; Claude Charlot? &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	setting up&lt;br /&gt;
*	scope&lt;br /&gt;
*	analysis software (Ganga, …)&lt;br /&gt;
*	Coordination with the T3s? Pooling of resources?&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Research grid: Grid5000? ====&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2720</id>
		<title>Draft of the scientific programm</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2720"/>
		<updated>2007-01-29T14:10:53Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Etat des lieux pour le calcul au LHC en France */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;D.P., F.M. 7/12/2006&lt;br /&gt;
==== Objectives ====&lt;br /&gt;
LCG-France is holding its 2nd colloquium in Clermont-Ferrand on 14 and 15 March 2007. These two days are intended for all actors of the LHC computing grid (site administrators and users) at IN2P3 and Dapnia. Their aim is to provide a forum for exchange and communication on the ongoing actions, ideas and experience in setting up LHC computing within the LCG project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Scientific Programme, version 1 (7/12/06)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt;  14 March, morning (3h): 9:30-11:00 and 11:30-13:00; break 11:00-11:30&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Ghita Rahal, CC-IN2P3; Dominique Pallin, LPC-Clermont &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	 9:30-10:00  - Overall grid infrastructure in France and associated resources&lt;br /&gt;
**20 min + 10 min discussion: Fayrouz Malek?&lt;br /&gt;
*       10:00-10:30  - Presentation of the sites (T1 and associated sites, T2 and T3) + associated foreign T2s&lt;br /&gt;
**20 min + 10 min discussion: Dominique Boutigny, Frédérique Chollet?&lt;br /&gt;
*	10:30-11:00  - The grid:&lt;br /&gt;
**20 min + 10 min discussion: Fabio Hernandez?&lt;br /&gt;
**Its evolution in view of the 2009-2010 LHC data&lt;br /&gt;
**Difficulties encountered&lt;br /&gt;
*       ----------------------------  Break ---------------------------&lt;br /&gt;
*       11:30-12:50 - Status of computing in the experiments:&lt;br /&gt;
**20 min per experiment + 10 min discussion: Eric Lançon, Schutz, Tsaregorodtsev, Claude Charlot?&lt;br /&gt;
**      The experiments&#039; computing models&lt;br /&gt;
**      Who is involved in setting up computing?&lt;br /&gt;
**      Match between infrastructure/resources and needs&lt;br /&gt;
**      Status of simulated-data production&lt;br /&gt;
**      Progress towards the goal of data taking at the end of 2007&lt;br /&gt;
**      Difficulties and points to improve…&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March, afternoon (3h): 14:45-16:15 and 16:45-18:15; break 16:15-16:45&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Management and operation of computing grids ====  &lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;Covering two complementary aspects: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- site actions -&amp;gt; their consequences for users&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- user actions -&amp;gt; their consequences for sites&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Christine Leroy, Dapnia-Saclay; Pierre Girard, engineer, CC-IN2P3&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	how a grid site operates:&lt;br /&gt;
from middleware deployment to production management, including the procedures it must follow, such as declaring &amp;quot;scheduled downtime&amp;quot;, using the official monitoring tools (SAM, GSTAT, etc.) and their consequences (partial or complete removal from production, etc.), security, etc.&lt;br /&gt;
&lt;br /&gt;
*	grid job management:&lt;br /&gt;
from how jobs are submitted by users to how they are handled by sites (including tuning the rank formulas used to select a submission site, a site&#039;s lack of attractiveness for jobs or the opposite, management of job priorities, pilot jobs, executors, etc.)&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Mise en place d’un site  ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;L’exemple du T2 du LPC Clermont-Ferrand. Visite du site possible durant les 2 jours par petits groupes&lt;br /&gt;
&lt;br /&gt;
*	Salle machine (réseau électrique, climatisation,…)&lt;br /&gt;
*	Choix matériel&lt;br /&gt;
*	Sécurité&lt;br /&gt;
*	Les difficultés rencontrées&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 mars matin (3h30)  9h-10h45 11H15-13h  pause 10h45-11h15&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Gestion des données grilles ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;Du point de vue des sites, des collaborations LHC et des utilisateurs&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination : Lionel Schwarz, CC-IN2P3 ? ;  Representant ALICE, LHCb? &#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*	Réseaux, trafic&lt;br /&gt;
*	Transfert des données  SRM, FTS,…&lt;br /&gt;
*	Stockage ?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 mars après-midi (2h40)  14h15-15h45  16H05-17h15  pause 15h45-16h05&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Les centres d’analyses ==== &lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;Du point de vue des sites, des collaborations LHC et des utilisateurs&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination : Eric Lancon,Dapnia-Saclay ; Claude Charlot ? &#039;&#039;&lt;br /&gt;
               &lt;br /&gt;
*	mise en place&lt;br /&gt;
*	les contours&lt;br /&gt;
*	les logiciels d’analyses (Ganga,..)&lt;br /&gt;
*	Coordination avec les T3s ? mise en commun de ressources ?&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Grille de Recherche : Grid5000 ? ====&lt;br /&gt;
&lt;br /&gt;
*&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2719</id>
		<title>Draft of the scientific programm</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2719"/>
		<updated>2007-01-29T14:07:23Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;D.P., F.M. 7/12/2006&lt;br /&gt;
==== Objectives ====&lt;br /&gt;
LCG-France is organising its 2nd colloquium in Clermont-Ferrand on 14 and 15 March 2007. These days are intended for all actors of the LHC computing grid (site managers and users) from IN2P3 and Dapnia. Their goal is to offer a forum for exchange and communication on the actions, ideas and experience under way in setting up LHC computing within the LCG project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Scientific Programme version 1 (7/12/06)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March morning (3h)  9h30-11h 11h30-13h  break 11h-11h30&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Ghita Rahal, CC-IN2P3; Dominique Pallin, LPC-Clermont&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* 9:30-10:00 (20 min + 10 min discussion) Fayrouz Malek?  Overall grid infrastructure in France and associated resources&lt;br /&gt;
* 10:00-10:30 (20 min + 10 min discussion) Dominique Boutigny, Frédérique Chollet?  Presentation of the sites (T1 and associated sites, T2 and T3) + associated foreign T2s&lt;br /&gt;
* 10:30-11:00 (20 min + 10 min) Fabio Hernandez?  The grid&lt;br /&gt;
** Its evolution in view of the 2009-2010 LHC data&lt;br /&gt;
** Difficulties encountered&lt;br /&gt;
* ----------------------------  Break ---------------------------&lt;br /&gt;
* 11:30-12:50 (20 min per experiment + 10 min discussion): (Eric Lançon, Schultz, Tzagarodiev, Claude Charlot?)&lt;br /&gt;
* Status of computing in the experiments:&lt;br /&gt;
** The experiments&#039; computing models&lt;br /&gt;
** Who is involved in setting up the computing?&lt;br /&gt;
** Match between infrastructure/resources and needs&lt;br /&gt;
** Status of simulated-data production&lt;br /&gt;
** Progress towards the goal of data taking at the end of 2007&lt;br /&gt;
** Difficulties, points to improve…&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March afternoon (3h)  14h45-16h15  16h45-18h15  break 16h15-16h45&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Management and operation of computing grids ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;Cover two complementary aspects:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- site actions -&amp;gt; consequences for users&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- user actions -&amp;gt; consequences for sites&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Christine Leroy, Dapnia-Saclay; Pierre Girard, Engineer, CC-IN2P3&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* how a grid site operates:&lt;br /&gt;
from middleware deployment to the management of its production, including the procedures it must follow, such as declaring &amp;quot;scheduled downtime&amp;quot;, the use of the official monitoring tools (SAM, GSTAT, etc.) and their consequences (partial or complete loss of production, etc.), security, etc.&lt;br /&gt;
&lt;br /&gt;
* grid job management:&lt;br /&gt;
from the way jobs are submitted by users to the way they are handled by sites (including tuning the rank formulas used to select a submission site, a site&#039;s lack of attractiveness for jobs or the opposite, management of job priorities, pilot jobs, executors, etc.)&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Setting up a site ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;The example of the LPC Clermont-Ferrand T2. Site visits possible in small groups during the two days&lt;br /&gt;
&lt;br /&gt;
* Machine room (electrical supply, air conditioning, …)&lt;br /&gt;
* Hardware choices&lt;br /&gt;
* Security&lt;br /&gt;
* Difficulties encountered&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March morning (3h30)  9h-10h45 11h15-13h  break 10h45-11h15&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Grid data management ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of the sites, the LHC collaborations and the users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Lionel Schwarz, CC-IN2P3?; representatives of ALICE, LHCb?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Networks, traffic&lt;br /&gt;
* Data transfer: SRM, FTS, …&lt;br /&gt;
* Storage?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March afternoon (2h40)  14h15-15h45  16h05-17h15  break 15h45-16h05&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Analysis centres ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of the sites, the LHC collaborations and the users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Eric Lancon, Dapnia-Saclay; Claude Charlot?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* setup&lt;br /&gt;
* scope&lt;br /&gt;
* analysis software (Ganga, …)&lt;br /&gt;
* Coordination with the T3s? Pooling of resources?&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Research grid: Grid5000? ====&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2718</id>
		<title>Draft of the scientific programm</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2718"/>
		<updated>2007-01-29T14:05:39Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;D.P., F.M. 7/12/2006&lt;br /&gt;
==== Objectives ====&lt;br /&gt;
LCG-France is organising its 2nd colloquium in Clermont-Ferrand on 14 and 15 March 2007. These days are intended for all actors of the LHC computing grid (site managers and users) from IN2P3 and Dapnia. Their goal is to offer a forum for exchange and communication on the actions, ideas and experience under way in setting up LHC computing within the LCG project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Scientific Programme version 1 (7/12/06)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March morning (3h)  9h30-11h 11h30-13h  break 11h-11h30&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Ghita Rahal, CC-IN2P3; Dominique Pallin, LPC-Clermont&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* 9:30-10:00 (20 min + 10 min discussion) Fayrouz Malek?  Overall grid infrastructure in France and associated resources&lt;br /&gt;
* 10:00-10:30 (20 min + 10 min discussion) Dominique Boutigny, Frédérique Chollet?  Presentation of the sites (T1 and associated sites, T2 and T3) + associated foreign T2s&lt;br /&gt;
* 10:30-11:00 (20 min + 10 min) Fabio Hernandez?  The grid and its evolution in view of the 2009-2010 LHC data?&lt;br /&gt;
* ----------------------------  Break ---------------------------&lt;br /&gt;
* 11:30-12:50 (20 min per experiment + 10 min discussion): (Eric Lançon, Schultz, Tzagarodiev, Claude Charlot?)&lt;br /&gt;
* Status of computing in the experiments:&lt;br /&gt;
** The experiments&#039; computing models&lt;br /&gt;
** Who is involved in setting up the computing?&lt;br /&gt;
** Match between infrastructure/resources and needs&lt;br /&gt;
** Status of simulated-data production&lt;br /&gt;
** Progress towards the goal of data taking at the end of 2007&lt;br /&gt;
** Difficulties, points to improve…&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March afternoon (3h)  14h45-16h15  16h45-18h15  break 16h15-16h45&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Management and operation of computing grids ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;Cover two complementary aspects:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- site actions -&amp;gt; consequences for users&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- user actions -&amp;gt; consequences for sites&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Christine Leroy, Dapnia-Saclay; Pierre Girard, Engineer, CC-IN2P3&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* how a grid site operates:&lt;br /&gt;
from middleware deployment to the management of its production, including the procedures it must follow, such as declaring &amp;quot;scheduled downtime&amp;quot;, the use of the official monitoring tools (SAM, GSTAT, etc.) and their consequences (partial or complete loss of production, etc.), security, etc.&lt;br /&gt;
&lt;br /&gt;
* grid job management:&lt;br /&gt;
from the way jobs are submitted by users to the way they are handled by sites (including tuning the rank formulas used to select a submission site, a site&#039;s lack of attractiveness for jobs or the opposite, management of job priorities, pilot jobs, executors, etc.)&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Setting up a site ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;The example of the LPC Clermont-Ferrand T2. Site visits possible in small groups during the two days&lt;br /&gt;
&lt;br /&gt;
* Machine room (electrical supply, air conditioning, …)&lt;br /&gt;
* Hardware choices&lt;br /&gt;
* Security&lt;br /&gt;
* Difficulties encountered&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March morning (3h30)  9h-10h45 11h15-13h  break 10h45-11h15&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Grid data management ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of the sites, the LHC collaborations and the users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Lionel Schwarz, CC-IN2P3?; representatives of ALICE, LHCb?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Networks, traffic&lt;br /&gt;
* Data transfer: SRM, FTS, …&lt;br /&gt;
* Storage?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March afternoon (2h40)  14h15-15h45  16h05-17h15  break 15h45-16h05&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Analysis centres ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of the sites, the LHC collaborations and the users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Eric Lancon, Dapnia-Saclay; Claude Charlot?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* setup&lt;br /&gt;
* scope&lt;br /&gt;
* analysis software (Ganga, …)&lt;br /&gt;
* Coordination with the T3s? Pooling of resources?&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Research grid: Grid5000? ====&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2717</id>
		<title>Draft of the scientific programm</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2717"/>
		<updated>2007-01-29T14:00:21Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;D.P., F.M. 7/12/2006&lt;br /&gt;
==== Objectives ====&lt;br /&gt;
LCG-France is organising its 2nd colloquium in Clermont-Ferrand on 14 and 15 March 2007. These days are intended for all actors of the LHC computing grid (site managers and users) from IN2P3 and Dapnia. Their goal is to offer a forum for exchange and communication on the actions, ideas and experience under way in setting up LHC computing within the LCG project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Scientific Programme version 1 (7/12/06)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March morning (3h)  9h30-11h 11h30-13h  break 11h-11h30&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Ghita Rahal, CC-IN2P3; Dominique Pallin, LPC-Clermont&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* 9:30-10:00 Fayrouz Malek?  Overall grid infrastructure in France and associated resources&lt;br /&gt;
* 10:00-10:30 Dominique Boutigny, Frédérique Chollet?  Presentation of the sites (T1 and associated sites, T2 and T3) + associated foreign T2s&lt;br /&gt;
* 10:30-11:00 Fabio Hernandez?  The grid and its evolution in view of the 2009-2010 LHC data?&lt;br /&gt;
* ----------------------------  Break ---------------------------&lt;br /&gt;
* 11:30-12:50: 20 min per experiment: (Eric Lançon, Schultz, Tz., Claude Ch.?)&lt;br /&gt;
* Status of computing in the experiments:&lt;br /&gt;
** The experiments&#039; computing models&lt;br /&gt;
** Who is involved in setting up the computing?&lt;br /&gt;
** Match between infrastructure/resources and needs&lt;br /&gt;
** Status of simulated-data production&lt;br /&gt;
** Progress towards the goal of data taking at the end of 2007&lt;br /&gt;
** Difficulties, points to improve…&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March afternoon (3h)  14h45-16h15  16h45-18h15  break 16h15-16h45&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Management and operation of computing grids ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;Cover two complementary aspects:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- site actions -&amp;gt; consequences for users&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- user actions -&amp;gt; consequences for sites&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Christine Leroy, Dapnia-Saclay; Pierre Girard, Engineer, CC-IN2P3&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* how a grid site operates:&lt;br /&gt;
from middleware deployment to the management of its production, including the procedures it must follow, such as declaring &amp;quot;scheduled downtime&amp;quot;, the use of the official monitoring tools (SAM, GSTAT, etc.) and their consequences (partial or complete loss of production, etc.), security, etc.&lt;br /&gt;
&lt;br /&gt;
* grid job management:&lt;br /&gt;
from the way jobs are submitted by users to the way they are handled by sites (including tuning the rank formulas used to select a submission site, a site&#039;s lack of attractiveness for jobs or the opposite, management of job priorities, pilot jobs, executors, etc.)&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Setting up a site ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;The example of the LPC Clermont-Ferrand T2. Site visits possible in small groups during the two days&lt;br /&gt;
&lt;br /&gt;
* Machine room (electrical supply, air conditioning, …)&lt;br /&gt;
* Hardware choices&lt;br /&gt;
* Security&lt;br /&gt;
* Difficulties encountered&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March morning (3h30)  9h-10h45 11h15-13h  break 10h45-11h15&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Grid data management ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of the sites, the LHC collaborations and the users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Lionel Schwarz, CC-IN2P3?; representatives of ALICE, LHCb?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Networks, traffic&lt;br /&gt;
* Data transfer: SRM, FTS, …&lt;br /&gt;
* Storage?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March afternoon (2h40)  14h15-15h45  16h05-17h15  break 15h45-16h05&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Analysis centres ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of the sites, the LHC collaborations and the users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Eric Lancon, Dapnia-Saclay; Claude Charlot?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* setup&lt;br /&gt;
* scope&lt;br /&gt;
* analysis software (Ganga, …)&lt;br /&gt;
* Coordination with the T3s? Pooling of resources?&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Research grid: Grid5000? ====&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2716</id>
		<title>Draft of the scientific programm</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Draft_of_the_scientific_programm&amp;diff=2716"/>
		<updated>2007-01-29T13:57:14Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;D.P., F.M. 7/12/2006&lt;br /&gt;
==== Objectives ====&lt;br /&gt;
LCG-France is organising its 2nd colloquium in Clermont-Ferrand on 14 and 15 March 2007. These days are intended for all actors of the LHC computing grid (site managers and users) from IN2P3 and Dapnia. Their goal is to offer a forum for exchange and communication on the actions, ideas and experience under way in setting up LHC computing within the LCG project.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Scientific Programme version 1 (7/12/06)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March morning (3h)  9h30-11h 11h30-13h  break 11h-11h30&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Status of LHC computing in France ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Ghita Rahal, CC-IN2P3; Dominique Pallin, LPC-Clermont&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* 9:30-10:00 Fayrouz?  Overall grid infrastructure in France and associated resources&lt;br /&gt;
* 10:00-10:30 Dominique B, Frederique C?  Presentation of the sites (T1 and associated sites, T2 and T3) + associated foreign T2s&lt;br /&gt;
* 10:30-11:00 The grid and its evolution in view of the 2009-2010 LHC data?&lt;br /&gt;
* ----------------------------  Break ---------------------------&lt;br /&gt;
* 11:30-12:50: 20 min per experiment: (Eric L., Sch., Tz., Claude Ch.?)&lt;br /&gt;
* Status of computing in the experiments:&lt;br /&gt;
** The experiments&#039; computing models&lt;br /&gt;
** Who is involved in setting up the computing?&lt;br /&gt;
** Match between infrastructure/resources and needs&lt;br /&gt;
** Status of simulated-data production&lt;br /&gt;
** Progress towards the goal of data taking at the end of 2007&lt;br /&gt;
** Difficulties, points to improve…&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 14 March afternoon (3h)  14h45-16h15  16h45-18h15  break 16h15-16h45&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Management and operation of computing grids ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;Cover two complementary aspects:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- site actions -&amp;gt; consequences for users&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;- user actions -&amp;gt; consequences for sites&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Christine Leroy, Dapnia-Saclay; Pierre Girard, Engineer, CC-IN2P3&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* how a grid site operates:&lt;br /&gt;
from middleware deployment to the management of its production, including the procedures it must follow, such as declaring &amp;quot;scheduled downtime&amp;quot;, the use of the official monitoring tools (SAM, GSTAT, etc.) and their consequences (partial or complete loss of production, etc.), security, etc.&lt;br /&gt;
&lt;br /&gt;
* grid job management:&lt;br /&gt;
from the way jobs are submitted by users to the way they are handled by sites (including tuning the rank formulas used to select a submission site, a site&#039;s lack of attractiveness for jobs or the opposite, management of job priorities, pilot jobs, executors, etc.)&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Setting up a site ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;The example of the LPC Clermont-Ferrand T2. Site visits possible in small groups during the two days&lt;br /&gt;
&lt;br /&gt;
* Machine room (electrical supply, air conditioning, …)&lt;br /&gt;
* Hardware choices&lt;br /&gt;
* Security&lt;br /&gt;
* Difficulties encountered&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March morning (3h30)  9h-10h45 11h15-13h  break 10h45-11h15&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt; Grid data management ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of the sites, the LHC collaborations and the users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Lionel Schwarz, CC-IN2P3?; representatives of ALICE, LHCb?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Networks, traffic&lt;br /&gt;
* Data transfer: SRM, FTS, …&lt;br /&gt;
* Storage?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#0000F0;&amp;quot;&amp;gt; 15 March afternoon (2h40)  14h15-15h45  16h05-17h15  break 15h45-16h05&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Analysis centres ====&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#990000;&amp;quot;&amp;gt;From the point of view of the sites, the LHC collaborations and the users&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&amp;lt;span style=&amp;quot;color:#006600;&amp;quot;&amp;gt;Coordination: Eric Lancon, Dapnia-Saclay; Claude Charlot?&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* setup&lt;br /&gt;
* scope&lt;br /&gt;
* analysis software (Ganga, …)&lt;br /&gt;
* Coordination with the T3s? Pooling of resources?&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;span style=&amp;quot;color:#FF0000;&amp;quot;&amp;gt;Research grid: Grid5000? ====&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4-Sept06&amp;diff=2283</id>
		<title>Atlas:SC4-Sept06</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4-Sept06&amp;diff=2283"/>
		<updated>2006-10-05T10:54:38Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* Logbook */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== DDM monitoring ==&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
* [[ Atlas:DDMmonitoring| DDM monitoring ]]&lt;br /&gt;
* [http://lcg2.in2p3.fr/wiki/index.php/Atlas#Resources_monitoring Other monitoring pages]&lt;br /&gt;
&lt;br /&gt;
== Logbook ==&lt;br /&gt;
* In LYONDISK, files older than 24 hours are deleted (but not their LFC entries). Since datasets are destroyed at CERN 24 hours after their creation, subscriptions to T2s are made only for datasets younger than 0.8 days.&lt;br /&gt;
* 5 October:&lt;br /&gt;
** Transfer problems LYON-&amp;gt;GRIF; investigating&lt;br /&gt;
* 4 October:&lt;br /&gt;
** Many transfers to Lyon finish with &#039;&#039;&#039;State from FTS: Failed; Retries: 4; Reason: Transfer failed. ERROR an end-of-file was reached&#039;&#039;&#039; (Problem solved by CERN in the afternoon)&lt;br /&gt;
** Good efficiency for transfers CERN-LYON&lt;br /&gt;
** Transfers T1-&amp;gt;T2 to all sites except CPPM and SACLAY (SE full)&lt;br /&gt;
** 15:51 Update the VOBOX to glite 3.0.4&lt;br /&gt;
* 3 October &lt;br /&gt;
** dcache for SC4 tests back in operation&lt;br /&gt;
** Start transfer from LYONDISK to T2s (try to follow online)&lt;br /&gt;
** Since datasets are destroyed at CERN after 24 hours, only datasets younger than 12 hours are transferred to T2s.&lt;br /&gt;
** Bad transfer rate to GRIF (FTS never finishes for the 3 ATLAS sites)&lt;br /&gt;
** SE in CPPM is full&lt;br /&gt;
** 20-30 MB/s rate for 4 sites &lt;br /&gt;
* 2 October :&lt;br /&gt;
** Problems with dcache in Lyon&lt;br /&gt;
* 30 September-1 October : &lt;br /&gt;
** T0-T1 transfers chaotic due to problems for DDM getting FTS messages (using the CERN FTS server); transfers are then killed.&lt;br /&gt;
* Status after one week :&lt;br /&gt;
** Transfer ran smoothly with the usual problem of LFC access speed&lt;br /&gt;
** No T1-T2 transfer&lt;br /&gt;
[[Image:LYONregion-20060925-SC4.png]]&lt;br /&gt;
* 24 September: Upgrade of DDM software. VOBOX activity much lower.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Atlas | ATLAS Twiki main page]]&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2082</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2082"/>
		<updated>2006-09-25T12:58:34Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* Daily news */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the ATLAS SC4 meeting at CERN on 9 June (S. Jézéquel, G. Rahal) (written in French) ]&lt;br /&gt;
&lt;br /&gt;
* T0 Role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD+AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dcache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel, G. Rahal): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (files validated by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
*T1-&amp;gt;T2 : &lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY) except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
*[http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dcache for SC4 areas]&lt;br /&gt;
* LYONDISK : 25 concurrent gridftp accesses maximum&lt;br /&gt;
* LYONTAPE : 10 concurrent gridftp accesses maximum&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors at 3 GHz&lt;br /&gt;
* 4 GB of memory (2 GB dedicated to swap)&lt;br /&gt;
* Daily, weekly and monthly monitoring of the VOBOX can be found [http://atlas-france.in2p3.fr/Activites/Informatique/OutilsCC/VO-cclcgatlas here]&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=BEIJING-LCG2&amp;amp;visibility=SE BEIJING]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=CPPM-LCG2&amp;amp;visibility=SE CPPM]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* SC4 transfers from LYON to T2s: &lt;br /&gt;
** Simultaneous transfers to the 7 T2 sites for more than 24 hours; the rate reached more than 25 MB/s&lt;br /&gt;
**[[Image:StackdayLYON-27-7.png]]&lt;br /&gt;
&lt;br /&gt;
== Post-mortem DDM meeting ==&lt;br /&gt;
* [http://indico.cern.ch/conferenceDisplay.py?confId=4959 Presentation at CERN (M. Branco) showing the results of the SC4 tests.] (1 August 2006)&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2081</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2081"/>
		<updated>2006-09-25T12:57:16Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* Post-mortem DDM meeting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the ATLAS SC4 meeting at CERN on 9 June (S. Jézéquel, G. Rahal) (written in French) ]&lt;br /&gt;
&lt;br /&gt;
* T0 Role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD+AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dcache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel, G. Rahal): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (files validated by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
*T1-&amp;gt;T2 : &lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY) except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
*[http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dcache for SC4 areas]&lt;br /&gt;
* LYONDISK : 25 concurrent gridftp accesses maximum&lt;br /&gt;
* LYONTAPE : 10 concurrent gridftp accesses maximum&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors at 3 GHz&lt;br /&gt;
* 4 GB of memory (2 GB dedicated to swap)&lt;br /&gt;
* Daily, weekly and monthly monitoring of the VOBOX can be found [http://atlas-france.in2p3.fr/Activites/Informatique/OutilsCC/VO-cclcgatlas here]&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=BEIJING-LCG2&amp;amp;visibility=SE BEIJING]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=CPPM-LCG2&amp;amp;visibility=SE CPPM]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (DDM responsible)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* SC4 transfers from LYON to T2s: &lt;br /&gt;
** Simultaneous transfers to the 7 T2 sites for more than 24 hours; the rate reached more than 25 MB/s&lt;br /&gt;
**[[Image:StackdayLYON-27-7.png]]&lt;br /&gt;
&lt;br /&gt;
== Post-mortem DDM meeting ==&lt;br /&gt;
* [http://indico.cern.ch/conferenceDisplay.py?confId=4959 Presentation at CERN (M. Branco) showing the results of the SC4 tests.] (1 August 2006)&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2080</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2080"/>
		<updated>2006-09-25T10:28:38Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* &amp;#039;&amp;#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&amp;#039;&amp;#039; */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the ATLAS SC4 meeting at CERN on 9 June (S. Jézéquel, G. Rahal) (written in French) ]&lt;br /&gt;
&lt;br /&gt;
* T0 Role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD+AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dcache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel, G. Rahal): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (files validated by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
*T1-&amp;gt;T2 : &lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY) except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
*[http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dcache for SC4 areas]&lt;br /&gt;
* LYONDISK : 25 concurrent gridftp accesses maximum&lt;br /&gt;
* LYONTAPE : 10 concurrent gridftp accesses maximum&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors at 3 GHz&lt;br /&gt;
* 4 GB of memory (2 GB dedicated to swap)&lt;br /&gt;
* Daily, weekly and monthly monitoring of the VOBOX can be found [http://atlas-france.in2p3.fr/Activites/Informatique/OutilsCC/VO-cclcgatlas here]&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=BEIJING-LCG2&amp;amp;visibility=SE BEIJING]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=CPPM-LCG2&amp;amp;visibility=SE CPPM]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (DDM responsible)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* SC4 transfers from LYON to T2s: &lt;br /&gt;
** Simultaneous transfers to the 7 T2 sites for more than 24 hours; the rate reached more than 25 MB/s&lt;br /&gt;
**[[Image:StackdayLYON-27-7.png]]&lt;br /&gt;
&lt;br /&gt;
== Post-mortem DDM meeting ==&lt;br /&gt;
* [http://indico.cern.ch/conferenceDisplay.py?confId=4959 Presentation at CERN] (1 August 2006)&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2079</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2079"/>
		<updated>2006-09-25T10:25:33Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* Daily news */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the ATLAS SC4 meeting at CERN on 9 June (S. Jézéquel, G. Rahal) (written in French) ]&lt;br /&gt;
&lt;br /&gt;
* T0 Role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD+AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dcache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (files validated by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
*T1-&amp;gt;T2 : &lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY) except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
*[http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dcache for SC4 areas]&lt;br /&gt;
* LYONDISK : 25 concurrent gridftp accesses maximum&lt;br /&gt;
* LYONTAPE : 10 concurrent gridftp accesses maximum&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors at 3 GHz&lt;br /&gt;
* 4 GB of memory (2 GB dedicated to swap)&lt;br /&gt;
* Daily, weekly and monthly monitoring of the VOBOX can be found [http://atlas-france.in2p3.fr/Activites/Informatique/OutilsCC/VO-cclcgatlas here]&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=BEIJING-LCG2&amp;amp;visibility=SE BEIJING]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=CPPM-LCG2&amp;amp;visibility=SE CPPM]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (DDM responsible)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* SC4 transfers from LYON to T2s: &lt;br /&gt;
** Simultaneous transfers to all 7 T2 sites sustained for more than 24 hours, reaching more than 25 MB/s&lt;br /&gt;
**[[Image:StackdayLYON-27-7.png]]&lt;br /&gt;
&lt;br /&gt;
== Post-mortem DDM meeting ==&lt;br /&gt;
* [http://indico.cern.ch/conferenceDisplay.py?confId=4959 Presentation at CERN] (1 August 2006)&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2075</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2075"/>
		<updated>2006-09-15T13:48:20Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* Information from DDM monitoring */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the ATLAS SC4 meeting at CERN on 9 June (S. Jézéquel, G. Rahal) (written in French)]&lt;br /&gt;
&lt;br /&gt;
* T0 Role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD+AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dCache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (validated files by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
*T1-&amp;gt;T2 : &lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY) except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
*[http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dcache for SC4 areas]&lt;br /&gt;
* LYONDISK : maximum of 25 concurrent gridftp accesses&lt;br /&gt;
* LYONTAPE : maximum of 10 concurrent gridftp accesses&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors at 3 GHz&lt;br /&gt;
* 4 GB of memory (2 GB dedicated to swap)&lt;br /&gt;
* Daily, weekly and monthly monitoring of the VOBOX can be found [http://atlas-france.in2p3.fr/Activites/Informatique/OutilsCC/VO-cclcgatlas here]&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=BEIJING-LCG2&amp;amp;visibility=SE BEIJING]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=CPPM-LCG2&amp;amp;visibility=SE CPPM]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (DDM responsible)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* 22 June: General power cut at CERN at 2 pm.&lt;br /&gt;
&lt;br /&gt;
* 24 June: Dataset T0.D.run000949.ESD transferred from Lyon to LAL and TOKYO. Transferring the same dataset to LAPP and LPC failed because these sites share the same domain name (*.in2p3.fr) as Lyon.&lt;br /&gt;
&lt;br /&gt;
* 25 June: Almost no transfers from CERN to T1s over the weekend.&lt;br /&gt;
&lt;br /&gt;
* 26 June: SC4 transfers restarted with working DDM monitoring. Successful transfers to LAL, SACLAY and TOKYO. Technical problem (domain name) for LAPP, LPNHE and LPC: under investigation. Contacted BEIJING.&lt;br /&gt;
&lt;br /&gt;
* 28 June:&lt;br /&gt;
** Domain name problem solved for LAPP, LPC and LPNHE; first transfers to these sites completed.&lt;br /&gt;
** Increased the number of LFC connections to 40 (on advice from CERN-IT and DDM).&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt; AODs were transferred to all T2s associated with Lyon except BEIJING (looks like an FTS problem) &amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
** All AODs transferred to LAL. Problems transferring AODs to LAPP (one dCache server crashed)&lt;br /&gt;
&lt;br /&gt;
* 29 June :&lt;br /&gt;
** Transfer of AODs to TOKYO&lt;br /&gt;
&lt;br /&gt;
* 1 July&lt;br /&gt;
** Test transfers from LYONDISK to T2s&lt;br /&gt;
[[Image:T1-T2-0107-3.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 5 July :&lt;br /&gt;
** First DDM transfer to BEIJING&lt;br /&gt;
&lt;br /&gt;
* 6 July&lt;br /&gt;
** [[Image:StackdayLYON.png]]&lt;br /&gt;
&lt;br /&gt;
* 17 July&lt;br /&gt;
** Simultaneous transfers to T2 sites, including BEIJING&lt;br /&gt;
**[[Image:StackdayLYON-17-7.png]]&lt;br /&gt;
&lt;br /&gt;
* 27 July&lt;br /&gt;
** Simultaneous transfers to all 7 T2 sites sustained for more than 24 hours, reaching more than 25 MB/s&lt;br /&gt;
**[[Image:StackdayLYON-27-7.png]]&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2074</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2074"/>
		<updated>2006-09-15T13:47:28Z</updated>

		<summary type="html">&lt;p&gt;Grahal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the ATLAS SC4 meeting at CERN on 9 June (S. Jézéquel, G. Rahal) (written in French)]&lt;br /&gt;
&lt;br /&gt;
* T0 Role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD+AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dCache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png] [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (validated files by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
*T1-&amp;gt;T2 : &lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY) except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
*[http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dcache for SC4 areas]&lt;br /&gt;
* LYONDISK : maximum of 25 concurrent gridftp accesses&lt;br /&gt;
* LYONTAPE : maximum of 10 concurrent gridftp accesses&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors at 3 GHz&lt;br /&gt;
* 4 GB of memory (2 GB dedicated to swap)&lt;br /&gt;
* Daily, weekly and monthly monitoring of the VOBOX can be found [http://atlas-france.in2p3.fr/Activites/Informatique/OutilsCC/VO-cclcgatlas here]&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=BEIJING-LCG2&amp;amp;visibility=SE BEIJING]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=CPPM-LCG2&amp;amp;visibility=SE CPPM]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (DDM responsible)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* 22 June: General power cut at CERN at 2 pm.&lt;br /&gt;
&lt;br /&gt;
* 24 June: Dataset T0.D.run000949.ESD transferred from Lyon to LAL and TOKYO. Transferring the same dataset to LAPP and LPC failed because these sites share the same domain name (*.in2p3.fr) as Lyon.&lt;br /&gt;
&lt;br /&gt;
* 25 June: Almost no transfers from CERN to T1s over the weekend.&lt;br /&gt;
&lt;br /&gt;
* 26 June: SC4 transfers restarted with working DDM monitoring. Successful transfers to LAL, SACLAY and TOKYO. Technical problem (domain name) for LAPP, LPNHE and LPC: under investigation. Contacted BEIJING.&lt;br /&gt;
&lt;br /&gt;
* 28 June:&lt;br /&gt;
** Domain name problem solved for LAPP, LPC and LPNHE; first transfers to these sites completed.&lt;br /&gt;
** Increased the number of LFC connections to 40 (on advice from CERN-IT and DDM).&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt; AODs were transferred to all T2s associated with Lyon except BEIJING (looks like an FTS problem) &amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
** All AODs transferred to LAL. Problems transferring AODs to LAPP (one dCache server crashed)&lt;br /&gt;
&lt;br /&gt;
* 29 June :&lt;br /&gt;
** Transfer of AODs to TOKYO&lt;br /&gt;
&lt;br /&gt;
* 1 July&lt;br /&gt;
** Test transfers from LYONDISK to T2s&lt;br /&gt;
[[Image:T1-T2-0107-3.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 5 July :&lt;br /&gt;
** First DDM transfer to BEIJING&lt;br /&gt;
&lt;br /&gt;
* 6 July&lt;br /&gt;
** [[Image:StackdayLYON.png]]&lt;br /&gt;
&lt;br /&gt;
* 17 July&lt;br /&gt;
** Simultaneous transfers to T2 sites, including BEIJING&lt;br /&gt;
**[[Image:StackdayLYON-17-7.png]]&lt;br /&gt;
&lt;br /&gt;
* 27 July&lt;br /&gt;
** Simultaneous transfers to all 7 T2 sites sustained for more than 24 hours, reaching more than 25 MB/s&lt;br /&gt;
**[[Image:StackdayLYON-27-7.png]]&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2033</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2033"/>
		<updated>2006-08-04T08:23:35Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* VOBOX Configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the ATLAS SC4 meeting at CERN on 9 June (S. Jézéquel, G. Rahal) (written in French)]&lt;br /&gt;
&lt;br /&gt;
* T0 Role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD+AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dCache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (validated files by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
*T1-&amp;gt;T2 : &lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY) except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
*[http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dcache for SC4 areas]&lt;br /&gt;
* LYONDISK : maximum of 25 concurrent gridftp accesses&lt;br /&gt;
* LYONTAPE : maximum of 10 concurrent gridftp accesses&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors at 3 GHz&lt;br /&gt;
* 4 GB of memory (2 GB dedicated to swap)&lt;br /&gt;
* Daily, weekly and monthly monitoring of the VOBOX can be found [http://atlas-france.in2p3.fr/Activites/Informatique/OutilsCC/VO-cclcgatlas here]&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (DDM responsible)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* 22 June: General power cut at CERN at 2 pm.&lt;br /&gt;
&lt;br /&gt;
* 24 June: Dataset T0.D.run000949.ESD transferred from Lyon to LAL and TOKYO. Transferring the same dataset to LAPP and LPC failed because these sites share the same domain name (*.in2p3.fr) as Lyon.&lt;br /&gt;
&lt;br /&gt;
* 25 June: Almost no transfers from CERN to T1s over the weekend.&lt;br /&gt;
&lt;br /&gt;
* 26 June: SC4 transfers restarted with working DDM monitoring. Successful transfers to LAL, SACLAY and TOKYO. Technical problem (domain name) for LAPP, LPNHE and LPC: under investigation. Contacted BEIJING.&lt;br /&gt;
&lt;br /&gt;
* 28 June:&lt;br /&gt;
** Domain name problem solved for LAPP, LPC and LPNHE; first transfers to these sites completed.&lt;br /&gt;
** Increased the number of LFC connections to 40 (on advice from CERN-IT and DDM).&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt; AODs were transferred to all T2s associated with Lyon except BEIJING (looks like an FTS problem) &amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
** All AODs transferred to LAL. Problems transferring AODs to LAPP (one dCache server crashed)&lt;br /&gt;
&lt;br /&gt;
* 29 June :&lt;br /&gt;
** Transfer of AODs to TOKYO&lt;br /&gt;
&lt;br /&gt;
* 1 July&lt;br /&gt;
** Test transfers from LYONDISK to T2s&lt;br /&gt;
[[Image:T1-T2-0107-3.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 5 July :&lt;br /&gt;
** First DDM transfer to BEIJING&lt;br /&gt;
&lt;br /&gt;
* 6 July&lt;br /&gt;
** [[Image:StackdayLYON.png]]&lt;br /&gt;
&lt;br /&gt;
* 17 July&lt;br /&gt;
** Simultaneous transfers to T2 sites, including BEIJING&lt;br /&gt;
**[[Image:StackdayLYON-17-7.png]]&lt;br /&gt;
&lt;br /&gt;
* 27 July&lt;br /&gt;
** Simultaneous transfers to all 7 T2 sites sustained for more than 24 hours, reaching more than 25 MB/s&lt;br /&gt;
**[[Image:StackdayLYON-27-7.png]]&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2032</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2032"/>
		<updated>2006-08-04T08:22:55Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* VOBOX Configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the SC4 ATLAS meeting at CERN, 9 June (S. Jézéquel, G. Rahal) (written in French) ]&lt;br /&gt;
&lt;br /&gt;
* T0 Role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD and AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dCache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (validated files by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
*T1-&amp;gt;T2 : &lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY) except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
*[http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dcache for SC4 areas]&lt;br /&gt;
* LYONDISK : 25 concurrent gridftp accesses maximum&lt;br /&gt;
* LYONTAPE : 10 concurrent gridftp accesses maximum&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors 3 GHz&lt;br /&gt;
* 4 GB of memory ( 2 GB dedicated to SWAP)&lt;br /&gt;
* Daily, weekly and monthly monitoring of the VOBOX can be found [http://atlas-france.in2p3.fr/Activites/Informatique/OutilsCC/VO-cclcgatlas here]&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (DDM responsible)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* 22 June: General power cut at CERN at 2 pm.&lt;br /&gt;
&lt;br /&gt;
* 24 June : Dataset T0.D.run000949.ESD transferred from Lyon to LAL and TOKYO. Transferring the same dataset to LAPP and LPC failed because these sites have the same domain name (*.in2p3.fr) as Lyon.&lt;br /&gt;
&lt;br /&gt;
* 25 June : Almost no transfers from CERN to T1s during the weekend.&lt;br /&gt;
&lt;br /&gt;
* 26 June : SC4 transfers restarted with working DDM monitoring. Successful transfers to LAL, SACLAY and TOKYO. Technical problem (domain name) for LAPP, LPNHE and LPC: under investigation. Contacting BEIJING.&lt;br /&gt;
&lt;br /&gt;
* 28 June : &lt;br /&gt;
** Domain-name problem solved for LAPP, LPC and LPNHE. First transfers to these sites completed.&lt;br /&gt;
** Increased the number of LFC connections to 40 (advice from CERN-IT and DDM).&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt; AODs were transferred to all T2s associated with Lyon except BEIJING (looks like an FTS problem) &amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
** Transferred all AODs to LAL. Problems transferring AODs to LAPP (one dCache server crashed)&lt;br /&gt;
&lt;br /&gt;
* 29 June :&lt;br /&gt;
** Transfer of AODs to TOKYO&lt;br /&gt;
&lt;br /&gt;
* 1 July&lt;br /&gt;
** Test transfers from LYONDISK to T2s&lt;br /&gt;
[[Image:T1-T2-0107-3.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 5 July :&lt;br /&gt;
** First DDM transfer to BEIJING&lt;br /&gt;
&lt;br /&gt;
* 6 July&lt;br /&gt;
** [[Image:StackdayLYON.png]]&lt;br /&gt;
&lt;br /&gt;
* 17 July&lt;br /&gt;
** Simultaneous transfers to T2 sites including BEIJING&lt;br /&gt;
** [[Image:StackdayLYON-17-7.png]]&lt;br /&gt;
&lt;br /&gt;
* 27 July&lt;br /&gt;
** Simultaneous transfers to the 7 T2 sites for more than 24 hours; the aggregate rate exceeded 25 MB/s&lt;br /&gt;
**[[Image:StackdayLYON-27-7.png]]&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2031</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2031"/>
		<updated>2006-08-04T08:20:21Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* VOBOX Configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the SC4 ATLAS meeting at CERN, 9 June (S. Jézéquel, G. Rahal) (written in French) ]&lt;br /&gt;
&lt;br /&gt;
* T0 Role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD and AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dCache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (validated files by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
*T1-&amp;gt;T2 : &lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY) except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
*[http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dcache for SC4 areas]&lt;br /&gt;
* LYONDISK : 25 concurrent gridftp accesses maximum&lt;br /&gt;
* LYONTAPE : 10 concurrent gridftp accesses maximum&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors 3 GHz&lt;br /&gt;
* 4 GB of memory ( 2 GB dedicated to SWAP)&lt;br /&gt;
* Daily, weekly and monthly monitoring of the VOBOX can be found here&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (DDM responsible)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* 22 June: General power cut at CERN at 2 pm.&lt;br /&gt;
&lt;br /&gt;
* 24 June : Dataset T0.D.run000949.ESD transferred from Lyon to LAL and TOKYO. Transferring the same dataset to LAPP and LPC failed because these sites have the same domain name (*.in2p3.fr) as Lyon.&lt;br /&gt;
&lt;br /&gt;
* 25 June : Almost no transfers from CERN to T1s during the weekend.&lt;br /&gt;
&lt;br /&gt;
* 26 June : SC4 transfers restarted with working DDM monitoring. Successful transfers to LAL, SACLAY and TOKYO. Technical problem (domain name) for LAPP, LPNHE and LPC: under investigation. Contacting BEIJING.&lt;br /&gt;
&lt;br /&gt;
* 28 June : &lt;br /&gt;
** Domain-name problem solved for LAPP, LPC and LPNHE. First transfers to these sites completed.&lt;br /&gt;
** Increased the number of LFC connections to 40 (advice from CERN-IT and DDM).&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt; AODs were transferred to all T2s associated with Lyon except BEIJING (looks like an FTS problem) &amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
** Transferred all AODs to LAL. Problems transferring AODs to LAPP (one dCache server crashed)&lt;br /&gt;
&lt;br /&gt;
* 29 June :&lt;br /&gt;
** Transfer of AODs to TOKYO&lt;br /&gt;
&lt;br /&gt;
* 1 July&lt;br /&gt;
** Test transfers from LYONDISK to T2s&lt;br /&gt;
[[Image:T1-T2-0107-3.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 5 July :&lt;br /&gt;
** First DDM transfer to BEIJING&lt;br /&gt;
&lt;br /&gt;
* 6 July&lt;br /&gt;
** [[Image:StackdayLYON.png]]&lt;br /&gt;
&lt;br /&gt;
* 17 July&lt;br /&gt;
** Simultaneous transfers to T2 sites including BEIJING&lt;br /&gt;
** [[Image:StackdayLYON-17-7.png]]&lt;br /&gt;
&lt;br /&gt;
* 27 July&lt;br /&gt;
** Simultaneous transfers to the 7 T2 sites for more than 24 hours; the aggregate rate exceeded 25 MB/s&lt;br /&gt;
**[[Image:StackdayLYON-27-7.png]]&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=File:StackdayLYON-27-7.png&amp;diff=2017</id>
		<title>File:StackdayLYON-27-7.png</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=File:StackdayLYON-27-7.png&amp;diff=2017"/>
		<updated>2006-07-27T09:24:22Z</updated>

		<summary type="html">&lt;p&gt;Grahal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2016</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=2016"/>
		<updated>2006-07-27T09:23:54Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* Daily news */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the SC4 ATLAS meeting at CERN, 9 June (S. Jézéquel, G. Rahal) (written in French) ]&lt;br /&gt;
&lt;br /&gt;
* T0 Role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD and AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dCache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (validated files by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
*T1-&amp;gt;T2 : &lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY) except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
*[http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dcache for SC4 areas]&lt;br /&gt;
* LYONDISK : 25 concurrent gridftp accesses maximum&lt;br /&gt;
* LYONTAPE : 10 concurrent gridftp accesses maximum&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors 3 GHz&lt;br /&gt;
* 4 GB of memory ( 2 GB dedicated to SWAP)&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management Daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (DDM responsible)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* 22 June: General power cut at CERN at 2 pm.&lt;br /&gt;
&lt;br /&gt;
* 24 June : Dataset T0.D.run000949.ESD transferred from Lyon to LAL and TOKYO. Transferring the same dataset to LAPP and LPC failed because these sites have the same domain name (*.in2p3.fr) as Lyon.&lt;br /&gt;
&lt;br /&gt;
* 25 June : Almost no transfers from CERN to T1s during the weekend.&lt;br /&gt;
&lt;br /&gt;
* 26 June : SC4 transfers restarted with working DDM monitoring. Successful transfers to LAL, SACLAY and TOKYO. Technical problem (domain name) for LAPP, LPNHE and LPC: under investigation. Contacting BEIJING.&lt;br /&gt;
&lt;br /&gt;
* 28 June : &lt;br /&gt;
** Domain-name problem solved for LAPP, LPC and LPNHE. First transfers to these sites completed.&lt;br /&gt;
** Increased the number of LFC connections to 40 (advice from CERN-IT and DDM).&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt; AODs were transferred to all T2s associated with Lyon except BEIJING (looks like an FTS problem) &amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
** Transferred all AODs to LAL. Problems transferring AODs to LAPP (one dCache server crashed)&lt;br /&gt;
&lt;br /&gt;
* 29 June :&lt;br /&gt;
** Transfer of AODs to TOKYO&lt;br /&gt;
&lt;br /&gt;
* 1 July&lt;br /&gt;
** Test transfers from LYONDISK to T2s&lt;br /&gt;
[[Image:T1-T2-0107-3.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 5 July :&lt;br /&gt;
** First DDM transfer to BEIJING&lt;br /&gt;
&lt;br /&gt;
* 6 July&lt;br /&gt;
** [[Image:StackdayLYON.png]]&lt;br /&gt;
&lt;br /&gt;
* 17 July&lt;br /&gt;
** Simultaneous transfers to T2 sites including BEIJING&lt;br /&gt;
** [[Image:StackdayLYON-17-7.png]]&lt;br /&gt;
&lt;br /&gt;
* 27 July&lt;br /&gt;
** Simultaneous transfers to the 7 T2 sites for more than 24 hours; the aggregate rate exceeded 25 MB/s&lt;br /&gt;
**[[Image:StackdayLYON-27-7.png]]&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=File:StackdayLYON-27-07-06.png&amp;diff=2015</id>
		<title>File:StackdayLYON-27-07-06.png</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=File:StackdayLYON-27-07-06.png&amp;diff=2015"/>
		<updated>2006-07-27T09:22:46Z</updated>

		<summary type="html">&lt;p&gt;Grahal: SC4 transfers to 7 T2s&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SC4 transfers to 7 T2s&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=1997</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=1997"/>
		<updated>2006-07-18T10:31:18Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* Daily news */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Compte rendu de la réunion SC4 ATLAS au CERN du 9 Juin (S.Jézéquel, G. Rahal) (written in french) ]&lt;br /&gt;
&lt;br /&gt;
* T0 Role(CERN)&lt;br /&gt;
** Produce dummy files with 1 to 2 GB size(RAW, ESD et AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** FTS server sents files to Lyon choosing between &#039;TAPE&#039; (RAW 43,2 Mo/s) or &#039;DISK&#039; (ESD,AOD 23+20 Mo/s) areas&lt;br /&gt;
* T1 Role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dcache area: L. Schwarz)&lt;br /&gt;
** Provides LFC (lfc-atlas.in2p3.fr) and FTS service (cclcgftsli01.in2p3.fr) (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 Mo/s) using Lyon FTS server&lt;br /&gt;
** Regurlarly cleanup files&lt;br /&gt;
* T2 Role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean-up the files  (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel) : Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (files validated by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* T1-&amp;gt;T2:&lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY), except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
* [http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dCache for SC4 areas]&lt;br /&gt;
* LYONDISK: 25 concurrent accesses maximum&lt;br /&gt;
* LYONTAPE: 10 concurrent accesses maximum&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors at 3 GHz&lt;br /&gt;
* 4 GB of memory (2 GB dedicated to swap)&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (in charge of DDM)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* 22 June: General power cut at CERN at 2 pm.&lt;br /&gt;
&lt;br /&gt;
* 24 June: Dataset T0.D.run000949.ESD transferred from Lyon to LAL and TOKYO. Transferring the same dataset to LAPP and LPC failed because these sites have the same domain name (*.in2p3.fr) as Lyon.&lt;br /&gt;
&lt;br /&gt;
* 25 June: Almost no transfers from CERN to the T1s over the weekend.&lt;br /&gt;
&lt;br /&gt;
* 26 June: SC4 transfers restarted with working DDM monitoring. Successful transfers to LAL, SACLAY and TOKYO. Technical problem (domain name) for LAPP, LPNHE and LPC: under investigation. Contact BEIJING.&lt;br /&gt;
&lt;br /&gt;
* 28 June:&lt;br /&gt;
** Domain-name problem solved for LAPP, LPC and LPNHE; first transfers to these sites have been completed.&lt;br /&gt;
** Increased the number of LFC connections to 40 (advice from CERN-IT and DDM).&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt; AODs were transferred to all T2s associated with Lyon except BEIJING (looks like an FTS problem) &amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
** Transferred all AODs to LAL. Problems transferring AODs to LAPP (one dCache server crashed)&lt;br /&gt;
&lt;br /&gt;
* 29 June:&lt;br /&gt;
** Transfer of AODs to TOKYO&lt;br /&gt;
&lt;br /&gt;
* 1 July:&lt;br /&gt;
** Test transfers from LYONDISK to T2s&lt;br /&gt;
[[Image:T1-T2-0107-3.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 5 July:&lt;br /&gt;
** First DDM transfer to BEIJING&lt;br /&gt;
&lt;br /&gt;
* 6 July:&lt;br /&gt;
** [[Image:StackdayLYON.png]]&lt;br /&gt;
&lt;br /&gt;
* 17 July:&lt;br /&gt;
** Simultaneous transfers to T2 sites including BEIJING&lt;br /&gt;
**[[Image:StackdayLYON-17-7.png]]&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=1996</id>
		<title>Atlas:SC4</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=Atlas:SC4&amp;diff=1996"/>
		<updated>2006-07-18T10:29:47Z</updated>

		<summary type="html">&lt;p&gt;Grahal: /* Daily news */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &#039;&#039;Bienvenue sur la page Atlas SC4 LCG-France &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;  Welcome to the LCG-France Atlas SC4 page&#039;&#039; ==&lt;br /&gt;
[https://uimon.cern.ch/twiki/bin/view/Atlas/ATLASServiceChallenges Twiki page : SC4 ATLAS]&lt;br /&gt;
&lt;br /&gt;
[http://lcg2.in2p3.fr/wiki/images/Sc4-juin06.txt Minutes of the ATLAS SC4 meeting at CERN on 9 June (S. Jézéquel, G. Rahal) (in French)]&lt;br /&gt;
&lt;br /&gt;
* T0 role (CERN)&lt;br /&gt;
** Produce dummy files of 1 to 2 GB (RAW, ESD and AOD) (see [https://uimon.cern.ch/twiki/bin/view/Atlas/AtlasTierZero T0 Twiki])&lt;br /&gt;
** Initiate T0-&amp;gt;T1 transfers&lt;br /&gt;
** The FTS server sends files to Lyon, choosing between the &#039;TAPE&#039; (RAW, 43.2 MB/s) and &#039;DISK&#039; (ESD, AOD, 23+20 MB/s) areas&lt;br /&gt;
* T1 role (CCIN2P3)&lt;br /&gt;
** Get files from T0 (dedicated dCache area: L. Schwarz)&lt;br /&gt;
** Provide the LFC (lfc-atlas.in2p3.fr) and FTS (cclcgftsli01.in2p3.fr) services (D. Bouvet)&lt;br /&gt;
** Send all AODs to each T2 (20 MB/s) using the Lyon FTS server&lt;br /&gt;
** Regularly clean up files&lt;br /&gt;
* T2 role (BEIJING, LAL, LAPP, LPC, LPNHE, SACLAY, TOKYO)&lt;br /&gt;
** Get files from T1 (Lyon). Files on the T2 are written in /home/atlas/sc4tier0/...&lt;br /&gt;
** Clean up the files (?)&lt;br /&gt;
* Other roles&lt;br /&gt;
** ATLAS (S. Jézéquel): Initiate T1-&amp;gt;T2 transfers&lt;br /&gt;
&lt;br /&gt;
== Information from DDM monitoring ==&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/ Main DDM monitoring page]&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4h.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stack4ht2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackday.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackdayt2.png&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweek.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekLYON.png&lt;br /&gt;
* http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/rrd/plots/stackweekt2.png  &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* Transfer rates for LYON and associated T2s (files validated by DDM):&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplots.php?site=LYON LYON]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=BEIJING BEIJING]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAL LAL]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LAPP LAPP]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPC LPC]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=LPNHE LPNHE]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=SACLAY SACLAY]&lt;br /&gt;
** [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/siteplotst2.php?site=TOKYO TOKYO]&lt;br /&gt;
&lt;br /&gt;
* [http://atldq02.cern.ch:8000/dq2/site_monitor/sites Detailed information on each site ]&lt;br /&gt;
&lt;br /&gt;
* [http://atlas-ddm-monitoring.web.cern.ch/atlas-ddm-monitoring/sitestates.php?site=LYON File transfer state in LYON ]&lt;br /&gt;
&lt;br /&gt;
== Information from FTS monitoring ==&lt;br /&gt;
&lt;br /&gt;
*[http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=atlas&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor=VO    Atlas FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* [http://gridview.cern.ch/GRIDVIEW/graphs.php?GraphName=IN2PCC&amp;amp;ThruputDataOption=&amp;amp;SrcSite=&amp;amp;DestSite=&amp;amp;DurationOption=&amp;amp;StartDay=&amp;amp;StartMonth=&amp;amp;StartYear=&amp;amp;EndDay=&amp;amp;EndMonth=&amp;amp;EndYear=&amp;amp;HostType=&amp;amp;GraphFor= CERN-&amp;gt;Lyon FTS transfer rate]&lt;br /&gt;
&lt;br /&gt;
* T1-&amp;gt;T2:&lt;br /&gt;
** 15 concurrent files and 10 streams for LYON-TOKYO&lt;br /&gt;
** 5 concurrent files and 5 streams for LYON-BEIJING (SE not powerful enough for 15/15)&lt;br /&gt;
** 10 concurrent files and 10 streams for LYON-French T2s (LAL, LPNHE, LPC, SACLAY), except for LAPP (5 concurrent files and 1 stream)&lt;br /&gt;
&lt;br /&gt;
== Information from dCache monitoring (provided by Lyon) ==&lt;br /&gt;
&lt;br /&gt;
* [http://cctools.in2p3.fr/dcache/transfers/atlas-sc4.html Monitoring of dCache for SC4 areas]&lt;br /&gt;
* LYONDISK: 25 concurrent accesses maximum&lt;br /&gt;
* LYONTAPE: 10 concurrent accesses maximum&lt;br /&gt;
&lt;br /&gt;
== VOBOX Configuration ==&lt;br /&gt;
&lt;br /&gt;
* 4 processors at 3 GHz&lt;br /&gt;
* 4 GB of memory (2 GB dedicated to swap)&lt;br /&gt;
&lt;br /&gt;
== Disk space availability ==&lt;br /&gt;
&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=GRIF&amp;amp;visibility=SE GRIF]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LPC&amp;amp;visibility=SE LPC]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-LAPP&amp;amp;visibility=SE LAPP]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=TOKYO-LCG2&amp;amp;visibility=SE TOKYO]&lt;br /&gt;
* [http://gridice2.cnaf.infn.it:50080/gridice/site/site_details.php?siteName=IN2P3-CC&amp;amp;visibility=SE LYON]&lt;br /&gt;
&lt;br /&gt;
== Daily news ==&lt;br /&gt;
&lt;br /&gt;
* [https://uimon.cern.ch/twiki/bin/view/Atlas/DDMSc4#Daily_log SC4 Data Management daily log]&lt;br /&gt;
&lt;br /&gt;
* 20 June 2006: Mail from Miguel Branco (in charge of DDM)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt;Today we started deploying DQ2 on the remaining T1 sites (not all&lt;br /&gt;
sites still available).&amp;lt;br/&amp;gt;&lt;br /&gt;
Attached is the result of a (nice) ramp up, easily beating SC3&#039;s&lt;br /&gt;
record (on the 1st day of export of SC4) peaking at ~ 270 MB/s. Each&lt;br /&gt;
&#039;step&#039; in the graph is an additional T1 being added to the export.&amp;lt;br/&amp;gt;&lt;br /&gt;
Dataset subscriptions are now slowing down and will resume tomorrow.&lt;br /&gt;
Our DQ2 monitoring has been turned off and we expect to have it back&lt;br /&gt;
tomorrow! Still a long way to go until we have a reasonable&lt;br /&gt;
understanding of the limiting factors..&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Atlas day-jpeg.jpg]]&lt;br /&gt;
&lt;br /&gt;
* 22 June: General power cut at CERN at 2 pm.&lt;br /&gt;
&lt;br /&gt;
* 24 June: Dataset T0.D.run000949.ESD transferred from Lyon to LAL and TOKYO. Transferring the same dataset to LAPP and LPC failed because these sites have the same domain name (*.in2p3.fr) as Lyon.&lt;br /&gt;
&lt;br /&gt;
* 25 June: Almost no transfers from CERN to the T1s over the weekend.&lt;br /&gt;
&lt;br /&gt;
* 26 June: SC4 transfers restarted with working DDM monitoring. Successful transfers to LAL, SACLAY and TOKYO. Technical problem (domain name) for LAPP, LPNHE and LPC: under investigation. Contact BEIJING.&lt;br /&gt;
&lt;br /&gt;
* 28 June:&lt;br /&gt;
** Domain-name problem solved for LAPP, LPC and LPNHE; first transfers to these sites have been completed.&lt;br /&gt;
** Increased the number of LFC connections to 40 (advice from CERN-IT and DDM).&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:#663300;&amp;quot;&amp;gt; AODs were transferred to all T2s associated with Lyon except BEIJING (looks like an FTS problem) &amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
** Transferred all AODs to LAL. Problems transferring AODs to LAPP (one dCache server crashed)&lt;br /&gt;
&lt;br /&gt;
* 29 June:&lt;br /&gt;
** Transfer of AODs to TOKYO&lt;br /&gt;
&lt;br /&gt;
* 1 July:&lt;br /&gt;
** Test transfers from LYONDISK to T2s&lt;br /&gt;
[[Image:T1-T2-0107-3.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* 5 July:&lt;br /&gt;
** First DDM transfer to BEIJING&lt;br /&gt;
&lt;br /&gt;
* 6 July:&lt;br /&gt;
** [[Image:StackdayLYON.png]]&lt;br /&gt;
&lt;br /&gt;
* 17 July:&lt;br /&gt;
**[[Image:StackdayLYON-17-7.png]]&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
	<entry>
		<id>https://lcg.in2p3.fr/index.php?title=File:StackdayLYON-17-7.png&amp;diff=1995</id>
		<title>File:StackdayLYON-17-7.png</title>
		<link rel="alternate" type="text/html" href="https://lcg.in2p3.fr/index.php?title=File:StackdayLYON-17-7.png&amp;diff=1995"/>
		<updated>2006-07-18T10:27:49Z</updated>

		<summary type="html">&lt;p&gt;Grahal: T1 - T2 transfers&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;T1 - T2 transfers&lt;/div&gt;</summary>
		<author><name>Grahal</name></author>
	</entry>
</feed>