Welcome to the LCG-France Atlas page
DOMA_FR project
DOMA_FR tests
- Global transfers from/to each site (same-site transfers, i.e. source = destination, excluded)
| Direction | LAPP | LPSC | CC |
|-----------|------|------|----|
| From      |      |      |    |
| To        |      |      |    |
- Data access monitoring as seen by the site WNs
| Destination of access  | LAPP | LPSC | CC |
|------------------------|------|------|----|
| Production download    |      |      |    |
| Production upload      |      |      |    |
| Production input       |      |      |    |
| Production output      |      |      |    |
| Analysis download      |      |      |    |
| Analysis direct access |      |      |    |
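A minimal sketch of how the two tables above could be filled from monitoring data, assuming a hypothetical flat record format (the `source`, `destination`, `access_type` and `bytes` field names are made up, not an actual dashboard schema):

```python
from collections import defaultdict

SITES = ["LAPP", "LPSC", "CC"]

# Hypothetical monitoring records; real ones would come from the transfer
# and access dashboards (field names here are assumptions).
records = [
    {"source": "LAPP", "destination": "CC",   "access_type": "production_output", "bytes": 2 * 10**9},
    {"source": "CC",   "destination": "LAPP", "access_type": "production_input",  "bytes": 5 * 10**9},
    {"source": "CC",   "destination": "LPSC", "access_type": "analysis_download", "bytes": 1 * 10**9},
]

transfers = defaultdict(int)   # (source, destination) -> bytes, same-site excluded
per_type = defaultdict(int)    # (access_type, destination) -> bytes
for r in records:
    if r["source"] != r["destination"]:
        transfers[(r["source"], r["destination"])] += r["bytes"]
    per_type[(r["access_type"], r["destination"])] += r["bytes"]

# Print the "from/to" matrix in the same layout as the first table above.
print("From\\To " + " ".join(f"{s:>6}" for s in SITES))
for src in SITES:
    print(f"{src:<7} " + " ".join(f"{transfers[(src, dst)] / 1e9:6.1f}" for dst in SITES))
```

The `per_type` dictionary, keyed by (access type, destination site), fills the second table in the same way.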
- Logbook and access configuration (Atlas:Logbook)
- Questions
- Enabling direct access creates much more network usage -> Is it useful for ATLAS (e.g. processing urgent requests faster)?
- Production_output can only be done to 1 site
- Issues
- Asynchronous transfers of input files go to the closest/fastest site relative to the input site instead of the read_lan0 to the WN (changing this would help reduce network occupancy between IN2P3-CC and LAPP/LPSC, since those transfers are smoothed by FTS) -> Request for the change sent on 21 September (at the PanDA level)
- Job brokering should take into account downtimes of remote SEs (issue seen with the IN2P3-CC downtime) -> Request sent by Rod (a sketch of such a downtime filter follows this list)
- The 10 Gb/s LAPP-CC link (used for all LAPP WAN transfers to any site) can get saturated if a huge number of jobs start at the same time (no smoothing by PanDA) -> No suggestion yet (a sketch of what such smoothing could look like also follows this list)
- Production_output can only be done to one site (more than one destination would be useful if the destination SE is in downtime while the local/remote WN is not)
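As an illustration of the downtime-aware brokering mentioned above (hypothetical data structures only, not the actual PanDA brokering code), candidate sites whose remote SE is in a declared downtime could be filtered out like this:

```python
from datetime import datetime, timezone

# Hypothetical downtime calendar: storage endpoint -> list of (start, end) in UTC.
# Endpoint names and dates are made up for the example.
DOWNTIMES = {
    "IN2P3-CC_DATADISK": [
        (datetime(2018, 9, 18, 6, 0, tzinfo=timezone.utc),
         datetime(2018, 9, 19, 18, 0, tzinfo=timezone.utc)),
    ],
}

def se_in_downtime(se, when):
    """True if the storage endpoint has a declared downtime covering 'when'."""
    return any(start <= when <= end for start, end in DOWNTIMES.get(se, []))

def brokerable_sites(candidates, when):
    """Keep only candidate sites whose remote SE is not in downtime at 'when'."""
    return [site for site, remote_se in candidates if not se_in_downtime(remote_se, when)]

# Each candidate site is paired with the remote SE its jobs would read from.
candidates = [("LAPP", "IN2P3-CC_DATADISK"), ("LPSC", "LPSC_DATADISK")]
print(brokerable_sites(candidates, datetime(2018, 9, 19, 12, 0, tzinfo=timezone.utc)))
# -> ['LPSC']: nothing is brokered against IN2P3-CC storage during its downtime
```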
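And as a sketch of what the missing smoothing could look like (purely illustrative numbers and logic, not a PanDA feature), job starts could be staggered so that concurrent stage-ins stay within an assumed bandwidth budget for the LAPP-CC link:

```python
def stagger_starts(n_jobs, per_job_gbps=1.0, link_budget_gbps=8.0, stagein_s=300):
    """Assign a start offset (seconds) to each job so that concurrent stage-ins
    stay under an assumed per-link bandwidth budget.

    All numbers are illustrative: every job is assumed to pull its input at
    per_job_gbps for stagein_s seconds over the shared LAPP-CC link.
    """
    max_concurrent = max(1, int(link_budget_gbps // per_job_gbps))
    # Release jobs in waves of max_concurrent, one stage-in duration apart.
    return [(i // max_concurrent) * stagein_s for i in range(n_jobs)]

offsets = stagger_starts(n_jobs=20)
print(offsets)  # first 8 jobs start at t=0 s, the next 8 at t=300 s, then t=600 s
```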
- Next steps
- Deploy Rucio 1.17 to use the protocol priorities defined in AGIS (bug in 1.16). Solving the bug means the most trusted protocol is used, which would help control the decrease of srm usage (a sketch of this priority-based selection follows this list)
- Monitor job efficiency vs RTT between SE and WN -> Identify when a cache does not have an impact (assuming no network bandwidth limitation; a sketch of this aggregation, together with the per-job-type transfer rate, follows this list)
- Understand the ATLAS job brokering algorithm
- Get the typical transfer rate per job type (Johannes)
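As an illustration of the priority-based protocol selection mentioned above (hypothetical protocol entries loosely modelled on AGIS-style per-activity priorities; this is not the Rucio implementation):

```python
# Hypothetical protocol entries. The assumption here is that a lower positive
# number means "more preferred" and 0 means "not usable for that activity".
PROTOCOLS = [
    {"scheme": "srm",   "priorities": {"read_wan": 2, "write_wan": 2}},
    {"scheme": "root",  "priorities": {"read_wan": 1, "write_wan": 0}},
    {"scheme": "https", "priorities": {"read_wan": 3, "write_wan": 1}},
]

def pick_protocol(protocols, activity):
    """Return the scheme with the best (lowest non-zero) priority for an activity."""
    usable = [p for p in protocols if p["priorities"].get(activity, 0) > 0]
    if not usable:
        return None
    return min(usable, key=lambda p: p["priorities"][activity])["scheme"]

print(pick_protocol(PROTOCOLS, "read_wan"))   # -> root
print(pick_protocol(PROTOCOLS, "write_wan"))  # -> https
```

Once the priorities are actually honoured, srm only gets picked where nothing better is defined, which is the controlled decrease of srm usage mentioned above.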
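And a minimal sketch of the two monitoring aggregations above (efficiency vs RTT, transfer rate per job type), with hypothetical job records whose field names and values are assumptions, not a dashboard schema:

```python
from collections import defaultdict

# Hypothetical job records for illustration only.
jobs = [
    {"type": "simulation", "cpu_s": 34000, "wall_s": 36000, "input_gb": 0.5,  "rtt_ms": 1},
    {"type": "pile",       "cpu_s": 20000, "wall_s": 32000, "input_gb": 40.0, "rtt_ms": 12},
    {"type": "analysis",   "cpu_s": 3000,  "wall_s": 6000,  "input_gb": 25.0, "rtt_ms": 12},
]

# CPU efficiency (cpu time / wall time) binned by RTT between SE and WN (5 ms bins).
eff_by_rtt = defaultdict(list)
for j in jobs:
    eff_by_rtt[5 * (j["rtt_ms"] // 5)].append(j["cpu_s"] / j["wall_s"])
for rtt_bin in sorted(eff_by_rtt):
    vals = eff_by_rtt[rtt_bin]
    print(f"RTT {rtt_bin}-{rtt_bin + 5} ms: mean efficiency {sum(vals) / len(vals):.2f}")

# Typical input transfer rate per job type (GB of input per hour of wall time).
totals = defaultdict(lambda: [0.0, 0.0])  # type -> [input GB, wall hours]
for j in jobs:
    totals[j["type"]][0] += j["input_gb"]
    totals[j["type"]][1] += j["wall_s"] / 3600
for jtype, (gb, hours) in totals.items():
    print(f"{jtype}: {gb / hours:.2f} GB/h of input")
```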