Atlas:Analysis ST 2009 Errors

30.01.09
== Errors follow-up ==
*http://atlas-ganga-storage.cern.ch/test_124/  DQ2_LOCAL (rfio)
*http://atlas-ganga-storage.cern.ch/test_125/  FILESTAGER (lcg_cp)
'''Note that ATLAS Production was ON on the FR-Cloud on January 29.'''
 
  
 
* IN2P3-LPC_MCDISK: f(w) - Errors due to the load induced by MC production running at that time. The ST test jobs (2 x 50 added jobs) were then aborted with an "Unspecified gridmanager error" logged by the WMS.

In fact, submission to the batch system was failing because the maximum total number of jobs (GlueCEPolicyMaxTotalJobs) had been reached. The WMS could probably check this value before submission, but apparently does not.

 Jan 29 23:54:46 clrlcgce03 lcgpbs:internal_ FAILED during submission
 to batch system lcgpbs(Maximum number of jobs already in queue)..

This limitation has since been removed by the site. The published limit can be compared with the current job count in the information system, as sketched below.
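As an illustrative aside (not part of the original report), the limit that triggered the rejection can be read from the CE's resource BDII with a standard LDAP query. The host name is taken from the log above; the fully qualified name, port and base DN are the usual GLUE 1.3 conventions and are assumptions here:

 # Sketch: read the advertised job limit and current total from the CE's resource BDII.
 # Port 2170 and the base DN are the conventional values (assumed, not confirmed for this site).
 ldapsearch -x -LLL -H ldap://clrlcgce03.in2p3.fr:2170 -b "mds-vo-name=resource,o=grid" \
     '(objectClass=GlueCE)' GlueCEUniqueID GlueCEPolicyMaxTotalJobs GlueCEStateTotalJobs

If GlueCEStateTotalJobs is already at GlueCEPolicyMaxTotalJobs, the WMS match-making should in principle filter the CE out before submission.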

* IN2P3-CPPM_MCDISK: The same problem as in the previous test. Jobs running forever with the error "send2dpm: DP000 - disk pool manager not running on marwn04.in2p3.fr". This happened for 13 jobs, which all started running at nearly the same time (Thu Jan 29 22:37:53) and went into error around Jan 30 00:21. The stdout and stderr of two of these jobs are available here:
http://marwww.in2p3.fr/~knoops/752629.marce01.in2p3.fr/
http://marwww.in2p3.fr/~knoops/752631.marce01.in2p3.fr/
The load on the local DPM server was around 9 at that time; a quick health check of the head node is sketched below.
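As a hedged aside (not from the original report), the "DP000 - disk pool manager not running" message usually means the dpm daemon on the head node could not be reached, which a heavily loaded server can also cause. A minimal check, assuming shell access to marwn04.in2p3.fr and a standard DPM install:

 # Sketch: verify the DPM daemons and the machine load on the head node.
 service dpm status           # init script name in standard DPM installs (assumed here)
 service dpnsdaemon status    # the DPM name server daemon
 uptime                       # a load average near 9 matches the observation above
 # Client-side probe that the name server answers (namespace path assumed for this site):
 dpns-ls /dpm/in2p3.fr/home/atlas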

* IN2P3-LAPP: during ST125 (30/01/09), jobs were still running after 2500 minutes, failing to connect to the LFC with the message:
 send2nsd: NS002 - send error : _Csec_recv_token: Received magic:30e1301 expecting ca03
 cannot connect to LFC

The service was up and running fine at that time. Is it due to an expired proxy? One way to test that hypothesis from inside a job is sketched below.
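A hedged sketch (not from the original report): a Csec token mismatch like the one above is consistent with a failed security handshake, and an expired proxy is one plausible cause. Assuming the job environment still points at its proxy via $X509_USER_PROXY:

 # Sketch: check the remaining proxy lifetime from inside the job environment.
 voms-proxy-info -timeleft      # seconds of validity left on the proxy; 0 = expired
 voms-proxy-info -actimeleft    # remaining lifetime of the VOMS attribute certificate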

* RO_07: ST jobs cannot store their output locally and always use the fail-over, storing output files at Lyon.

Is there still a configuration problem with the USERDISK space token on the tbit00.nipne.ro SE?
 lcg-cr --vo atlas -s ATLASUSERDISK  -t 2400 -d srm://tbit00.nipne.ro/dpm/.....
 dpm_getspacetoken: Unknown user space token description
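As an illustrative aside (assumptions flagged in the comments), the "Unknown user space token description" error means the DPM has no space reservation whose description matches ATLASUSERDISK. The tokens the SE actually publishes can be listed from its resource BDII, and on the server side the site admin would have to create the missing reservation:

 # Sketch: list the space token descriptions the SE publishes (GLUE 1.3 GlueVOInfoTag).
 # Port 2170 and the base DN are the conventional values (assumed for this SE).
 ldapsearch -x -LLL -H ldap://tbit00.nipne.ro:2170 -b "mds-vo-name=resource,o=grid" \
     '(objectClass=GlueVOInfo)' GlueVOInfoTag GlueVOInfoPath
 # Server side (DPM admin; size and lifetime are illustrative values only):
 dpm-reservespace --gspace 5T --lifetime Inf --group atlas --token_desc ATLASUSERDISK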