Minutes of NAF User Committee meeting from 13.1.2010
----------------------------------------------------

Present: Steve Aplin (ILC), Wolfgang Ehrenfeld (ATLAS), Kai Leffhalm (NAF),
         Angela Lucaci-Timoce (ILC), Andreas Nowack (CMS), Hartmut Stadie (CMS)
Excused: Jan Erik Sundermann (ATLAS), Andreas Haupt (IT), Yves Kemp (IT)

1. News from the chair:

   The NAF web documentation was partly rewritten and restructured. Everybody is
   welcome to have a look and give feedback.

2. Status report:

   The status report was given by Kai; see the agenda for the report. A few
   highlights from the discussion are listed below:

   - Lustre quota

     Quotas per user can be set per Lustre mount by the VO admins. This works for
     the Zeuthen Lustre instance and, after the January shutdown, also for the
     Hamburg Lustre instance. By default no quota is set. Even without a quota,
     the tools report the current usage per user, which was appreciated by all
     experiments (see the sketch after item 2 below).

     From the NAF side there will be no sub-groups, which would be needed for
     group quotas. Some of the experiments envision the use of group quotas. For
     example, CMS would like to use half of the Lustre space for users and half
     as a data cache; it is not clear how this can be achieved with user quotas
     alone and a growing number of users. ILC explicitly asked for group space
     for a few working groups. The solution anticipated by the NAF with user
     quotas is that one user becomes a power user with a larger quota and is then
     responsible for filling and administrating one group space. This might work
     for some time but is not really scalable. ATLAS has always asked for group
     quotas rather than user quotas, as ATLAS wants to split the Lustre space
     into two partitions with different file lifetimes. This is not feasible with
     user quotas, and ATLAS is not happy at all with the current development. One
     solution would be to partition the Lustre space by splitting up the Lustre
     instances; this would avoid creating sub-groups but is a rather rigid
     solution.

   - Lustre cleaning

     The NAF will not provide any tool for automatic clean-up of files in the
     Lustre space. ATLAS has asked for this many times and requests sudo rights
     to do the cleaning on its own.

   - AFS scratch space

     It is planned to give every user 10 GB of AFS scratch space (no backup),
     mounted into the home directory as ~/scratch. As soon as the NAF people find
     time, they will implement it for new accounts and migrate the old ones.

   - glite

     For SL5, glite 3.2 is installed and can be accessed via "ini glite32". glite
     3.2 is mainly a 64-bit build for SL5. ATLAS and CMS will still need glite
     3.1 for their Grid tools for some time. ILC will do some tests with the new
     glite version.

   - group profiles

     ATLAS asked explicitly for group profiles, as there are a few use cases
     which are not very practical with the ini setup. The NAF will check on the
     status of the implementation.

   - ATLAS CMT update

     The NAF provided for ATLAS an AFS patch from CERN which should give better
     performance for cmt. The improvement was seen by ATLAS, but due to the
     nature of the patch the NAF did not want to install it on more than one
     machine. Forcing all cmt users onto a single patched machine would usually
     give worse performance than spreading them over three unpatched machines,
     so this patch will not be used. Another option is installing the software on
     the local disk, but clearly not into /tmp. This will be followed up offline.
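   For illustration (not part of the meeting discussion): a minimal sketch of how
   the per-user usage reporting could look from a work group server. Only the
   standard Lustre client commands "lfs quota"/"lfs setquota" are assumed to be
   available; the mount point and the limit values are hypothetical placeholders.

      # Minimal sketch; Python is used here purely as a wrapper around the Lustre CLI.
      import getpass
      import subprocess

      LUSTRE_MOUNT = "/lustre/naf"   # hypothetical mount point, not taken from the minutes
      user = getpass.getuser()

      # Report current block/inode usage for the user; this also works when no
      # quota limit has been set, as noted in the Lustre quota item above.
      subprocess.run(["lfs", "quota", "-u", user, LUSTRE_MOUNT], check=True)

      # A VO admin would set a per-user limit roughly like this (limits in kB;
      # 524288000 kB is about 500 GB). Kept commented out here on purpose.
      # subprocess.run(["lfs", "setquota", "-u", user,
      #                 "-b", "524288000", "-B", "524288000", LUSTRE_MOUNT],
      #                check=True)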
3. SL5 migration of work group servers:

   During the January shutdown more of the work group servers should be migrated
   to SL5, and it was agreed that the login hosts should point to SL5. All
   experiments requested two SL5 work group servers and one SL4 work group
   server; for ATLAS this should be two SL4 work group servers.

4. Action items:

   Many action items were touched upon but not finally closed. The list will be
   updated at the next meeting.

5. AOB:

   k5log should be used instead of klog for creating an AFS token for external
   AFS cells; the restrictions that applied with klog no longer exist.

   ATLAS and CMS gave very positive feedback to the NAF for the period of the
   first LHC data in 2009. CMS was able to run over the data on the NAF batch
   system without any problems already a few hours after data taking. The
   turnaround time for ATLAS was not as good, but the data was analysed
   successfully at the NAF.