Spring HEPiX 2007
Main Auditorium, DESY Hamburg
Registration (Foyer)
DESY Hamburg, Notkestrasse 85, 22607 Hamburg, Germany
The registration desk at the DESY Main Auditorium will be open in the morning.
Keynotes (Main Auditorium)
Site Reports I (Main Auditorium)

4. LAPP site report - A French Tier3
LAPP (Laboratoire d'Annecy-le-Vieux de Physique des Particules) is a French IN2P3 laboratory involved in the LHC experiments (ATLAS, LHCb) as a Tier3. We will describe our computing resources (shared by local and grid users), storage resources, the services running at the site, monitoring, and "home-made" accounting tools.
Speakers: Mr Eric Fede (LAPP/IN2P3/CNRS), Mrs Muriel Gougerot (LAPP/IN2P3/CNRS)
10:45 Coffee Break (Foyer)
Site Reports II (Main Auditorium)
13:00 Lunch Break (Canteen)
Site Reports III (Main Auditorium)

12. ScotGrid - Site Report
We present a site report for the three UKI ScotGrid Tier-2 sites (Glasgow, Edinburgh, Durham), covering the status of the sites, availability, and operations. We will also cover distributed support and stress testing of both DPM and dCache.
Speaker: Mr Andrew Elwell (University of Glasgow)
15:30 Coffee Break (Foyer)
Site Reports IV (Main Auditorium)
Registration (Foyer)
Solutions and Architectures I (Main Auditorium)
20. Highly Available Central Services III (A Virtualization Approach)
Besides clustering and content-based routing, host virtualization is another way to enhance the availability of central services. The talk will give a short introduction to para-virtualization before focusing on the open-source Xen virtualization and the Sun Solaris container concept. Different aspects such as base features, automatic provisioning, file system support, and version dependencies will be shown. The benefits in the context of providing central services are easy service separation, enhanced availability, flexible resource usage and control, and simple provisioning.
Speaker: Mr Thomas Finnern (DESY)
21. A High Availability Central Content Management Server System
The computer system for the central content management system at DESY consists of load balancers, web servers, application servers, and database servers. The setup makes use of common software such as Apache, Zope, etc. This talk will give an overview of the setup and go into some details of the Apache and load-balancing configuration.
Speaker: Mr Carsten Germer (DESY)
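The abstract does not say which balancing strategy the DESY setup uses; purely as an illustration of the load-balancer tier in such an architecture, a round-robin dispatcher with a simple health filter could look like the sketch below (host names are hypothetical, not DESY's real servers):

```python
# Sketch only: the talk does not specify the algorithm; this illustrates
# plain round-robin dispatch with a health filter, as a generic load
# balancer in front of web/application servers might do.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, backends):
        self.backends = list(backends)
        self._ring = cycle(self.backends)
        self.healthy = set(self.backends)   # updated by health checks

    def mark_down(self, backend):
        self.healthy.discard(backend)

    def mark_up(self, backend):
        self.healthy.add(backend)

    def pick(self):
        # Skip unhealthy backends; give up after one full rotation.
        for _ in range(len(self.backends)):
            b = next(self._ring)
            if b in self.healthy:
                return b
        raise RuntimeError("no healthy backends")

# Hypothetical backend names:
lb = RoundRobinBalancer(["web1", "web2", "web3"])
lb.mark_down("web2")
print([lb.pick() for _ in range(4)])  # web2 is skipped
```

A real deployment would of course do this in the balancer appliance or in Apache itself rather than in application code; the sketch only shows the dispatch logic.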
10:45 Coffee Break (Foyer)
Solutions and Architectures II (Main Auditorium)
23. Support for Web Projects at DESY
DESY offers the research and administration groups full technical support for their web projects. The talk will show how the service is organized and give information on the Web-Office project, which started five years ago.
Speaker: Renate Roude (DESY)
24. Computing and Network Structure for Diamond
Diamond Light Source is a recently constructed synchrotron light source facility. Computing and networking must support both production and research: mostly production in the running of the synchrotron, mostly research in the experimental beamlines attached to it. This has required a dual structure and, especially for the beamline systems, careful attention to growth requirements. Existing plans call for aggregate experimental data rates approaching those of the LHC at CERN. Synchrotron computing is based on PowerPC- and ARM-based control and monitoring systems and powerful workstations running monitoring software; synchrotron networking is based on a multimode-fibre 1 Gb/s infrastructure with CAT6 1 Gb/s connections to leaf nodes. Experimental computing is based on industry-standard storage servers, clusters, and GNU/Linux; networking is based on a 10 Gb/s single-mode fibre infrastructure and 1 Gb/s CAT6 links to end nodes, but we will soon have 10 Gb/s links to servers on both single-mode fibre and CAT6 once 10GBASE-T products become available. Interesting challenges and research lie in the near future as detectors improve resolution and diffractometers improve sample positioning. A tomography experiment that produces data rates of 400 MB/s for a day is already being investigated.
Speaker: Mr Peter Grandi (Diamond Light Source Ltd.)
25. Oracle Database Services at the INFN-CNAF Tier-1
Most services of the Grid infrastructure require robust and efficient database backends providing high performance as well as fault-tolerance mechanisms and fast disaster-recovery procedures. To this end the Italian Tier-1, in collaboration with CERN and the other Tier-1s of the WLCG, has started a deployment and production phase of Oracle database services, which will be used e.g. to provide backends for file catalogues such as LFC, condition databases, storage resource managers, mass storage systems like CASTOR-2, etc. In this talk we give an overview of the service infrastructure at CNAF, describing how the various Oracle technologies (RAC, ASM, RMAN, Streams, Grid Control, etc.) are used. We also present the results of some specific tests carried out to validate and measure the performance of the system.
Speaker: Mr Gianluca Peco (INFN Bologna)
13:00 Lunch Break (Canteen)
BOF Meetings I (Main Auditorium)
Birds of a feather flock together. If you wish to discuss special topics with colleagues, partners, or experts, look at the pinboard in front of the registration desk. If your topic already has a BOF scheduled, join it. If not, set one up on your own by posting a note with your name and topic of choice to the pinboard. Ask one of the organizers about possible additional locations.
15:30 Coffee Break (Foyer)
BOF Meetings II (Main Auditorium)
Birds of a feather flock together. If you wish to discuss special topics with colleagues, partners, or experts, look at the pinboard in front of the registration desk. If your topic already has a BOF scheduled, join it. If not, set one up on your own by posting a note with your name and topic of choice to the pinboard. Ask one of the organizers about possible additional locations.
Registration (Foyer)
Storage and File Systems I (Main Auditorium)
27. AFS + Object Storage
In a common project between CERN, CASPUR, and RZG, an AFS extension to support object storage has been developed. The Object Storage Devices (OSDs) are loosely based on the SCSI T10 standard and use the mature AFS components: the rx interface to the network and the namei interface to the disks. The AFS fileserver got a new role as OSD metadata server, and a ubik database to store information about OSDs has been developed. The AFS client has been restructured to allow direct parallel access to OSDs. This technique allows files belonging to an AFS volume to be distributed over multiple OSDs and offers new features such as write replication of files and file striping. A legacy interface has also been implemented to allow any old AFS client access to data stored in OSDs. In March a stress test of the beta version took place at CERN with 120 clients and 8 servers, showing stability and the expected scalability and performance.
Speaker: Dr Hartmut Reuter (RZG)
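The striping mechanism described above can be pictured, in greatly simplified form, as placing fixed-size stripes of a file round-robin across several object stores. A toy sketch follows (in-memory dicts stand in for real OSDs; the actual AFS+OSD implementation, with its rx protocol, metadata server, and replication, is far more involved):

```python
# Toy illustration of file striping across object storage devices (OSDs):
# fixed-size stripes are written round-robin to N stores, and a read
# reassembles them in order. Stripe size and the dict-based "OSDs" are
# illustrative assumptions only.
STRIPE_SIZE = 4  # bytes here; real systems use KB/MB stripes

def stripe_write(data: bytes, osds: list, name: str) -> int:
    """Split data into stripes and place stripe i on OSD i % len(osds)."""
    stripes = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
    for i, chunk in enumerate(stripes):
        osds[i % len(osds)][(name, i)] = chunk
    return len(stripes)

def stripe_read(osds: list, name: str, nstripes: int) -> bytes:
    """Fetch the stripes back in order and reassemble the file."""
    return b"".join(osds[i % len(osds)][(name, i)] for i in range(nstripes))

osds = [{}, {}, {}]                      # three in-memory "OSDs"
n = stripe_write(b"hello object storage!", osds, "vol1/fileA")
assert stripe_read(osds, "vol1/fileA", n) == b"hello object storage!"
```

Because consecutive stripes live on different devices, a client that talks to all OSDs in parallel can aggregate their bandwidth, which is the point of the direct-access client restructuring mentioned in the abstract.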
10:50 Coffee Break (Foyer)
Storage and File Systems II (Main Auditorium)
29. Storage Evaluations at BNL
Several disk storage systems have been evaluated at the RHIC/USATLAS Computing Facility as part of an ongoing project to identify solutions capable of accommodating a large projected growth in storage demand over the next five years. Preference is given to lower-cost, high-density, commodity solutions using SATA and SAS drives. This talk will survey the testing methodology, configuration, and performance of a number of products evaluated thus far, including the Sun Fire X4500 (Thumper).
Speaker: Robert Petkus (Brookhaven National Laboratory)
13:00 Lunch Break (Canteen)
Storage and File Systems III (Main Auditorium)
15:30 Coffee Break (Foyer)
Storage and File Systems BOF (Main Auditorium)
Informal discussion on topics presented today.
38. Silent Corruptions
We report on the progress of the ongoing investigation into silent data corruptions at CERN. Over the last couple of months, CERN has been systematically collecting and analysing observations of data corruptions in the CERN computer centre. Current results and the toolset used in the investigations will be presented.
Speaker: Mr Peter Kelemen (CERN)
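The abstract does not describe CERN's toolset; in principle, though, silent-corruption scanning boils down to recording checksums of data at rest and re-verifying them later, flagging files that changed without a legitimate write. A minimal sketch (checksum choice and the in-memory "database" are assumptions for illustration):

```python
# Minimal sketch of silent-corruption detection: checksum files once,
# re-verify later, and report any file whose content changed. SHA-1 and
# the dict-based file system/database are illustrative assumptions, not
# CERN's actual tooling.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

def baseline(files: dict) -> dict:
    """Record a checksum for every file (path -> digest)."""
    return {path: checksum(data) for path, data in files.items()}

def verify(files: dict, db: dict) -> list:
    """Return paths whose current checksum no longer matches the baseline."""
    return [p for p, data in files.items() if checksum(data) != db.get(p)]

files = {"/data/run1.root": b"\x00" * 64, "/data/run2.root": b"\x01" * 64}
db = baseline(files)
files["/data/run2.root"] = b"\x01" * 32 + b"\xff" + b"\x01" * 31  # bit rot
print(verify(files, db))  # the corrupted file is flagged
```

In a production scanner the re-verification pass would run continuously over petabytes of disk, so I/O scheduling and checksum speed matter far more than in this sketch.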
Workshop Dinner (Museumshafen Oevelgoenne, D.E.S. Bergedorf)
Registration (Foyer)
Systems Management I (Main Auditorium)
40. GRIF: Management of a Distributed Site with Quattor
This talk will present GRIF's experience of managing a distributed site with Quattor and show how Quattor has been a critical tool for building a single, geographically distributed technical team.
Speaker: Mr Michel Jouvin (LAL / IN2P3)
41. Monitoring at GridKa Using Nagios
At GridKa the system monitoring tool Nagios is used to check the status of servers, worker nodes, storage systems, network components, services, and infrastructure (e.g. UPS and cooling). We'll present a brief summary of the setup and the hierarchical structure of the Nagios system at GridKa.
Speaker: Mr Axel Jäger (Forschungszentrum Karlsruhe)
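Hierarchical monitoring of this kind typically relies on host dependencies (the `parents` directive in Nagios), so that hosts behind a failed switch are reported as unreachable rather than down and alerts point at the real failure. A small Python model of that roll-up (host names and topology are invented for illustration, not GridKa's configuration):

```python
# Toy model of Nagios-style hierarchical status: each host may declare a
# parent (e.g. the switch in front of it). If a host fails its check but
# some ancestor is already down, it is classified UNREACHABLE instead of
# DOWN. Host names are hypothetical.
def classify(parents: dict, check_ok: dict) -> dict:
    def ancestor_down(host: str) -> bool:
        p = parents.get(host)
        while p is not None:
            if not check_ok[p]:
                return True
            p = parents.get(p)
        return False

    status = {}
    for host, ok in check_ok.items():
        if ok:
            status[host] = "UP"
        elif ancestor_down(host):
            status[host] = "UNREACHABLE"
        else:
            status[host] = "DOWN"
    return status

parents = {"wn001": "switch1", "wn002": "switch1", "switch1": "router"}
check_ok = {"router": True, "switch1": False, "wn001": False, "wn002": False}
print(classify(parents, check_ok))
# switch1 is DOWN; the worker nodes behind it are only UNREACHABLE
```

This distinction is what keeps a large farm from producing hundreds of alerts when a single network component fails.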
42. Cfengine - Stress Reduction for System Administrators
We will cover the use of Cfengine (http://www.cfengine.org) to fully manage a grid cluster and maintain the software and configuration of worker nodes, grid servers, and disk servers. We will demonstrate the ease of extending the cluster to new hosts and classes of hosts, together with the simplicity of maintaining the grid software, e.g. an R-GMA bugfix.
Speaker: Mr Andrew Elwell (University of Glasgow)
43. Virtualization Users Workshop Report
This talk reports on the virtualization users workshop held earlier this year at DESY. HEP use cases and applications of virtualization on the worker node became the focus of the discussions. To provide some illustration, dCache's usage of virtualization will be summarized.
Speaker: Mr Owen Synge (DESY)
10:45 Coffee Break (Foyer)
Systems Management II (Main Auditorium)
44. HEPiX/WLCG System Management Working Group: An Update
The System Management Working Group (SMWG), made up of system administrators from HEPiX and grid sites, has been set up to address the fabric management problems that HEP sites may have. The group is open, and its goal is not to implement new tools but to share what is already in use at sites according to existing best practices. Some sites are already publicly sharing their tools and sensors, and other sites write very good documentation and share it. The aim is to extend this to a general practice in a more organised way, and to avoid the duplication of effort that occurs when system administrators solve mostly the same problems over and over. The result has been the creation of a web site (www.sysadmin.hep.ac.uk), based on GridSite, that hosts a Subversion repository for management and monitoring tools and a wiki. It works as a file-sharing system and a single entry point for documentation distributed at other sites. We will describe how the group is working and what has been achieved so far.
Speaker: Ms Alessandra Forti (University of Manchester)
45. Overview of the WLCG Grid Services Monitoring Working Group
This talk will summarise the work and experience to date of the WLCG Grid Services Monitoring Working Group, whose goal is, through better service monitoring, to improve the reliability and availability of the Grid. The talk will cover proposed standardizations for service metric gathering and grid monitoring data exchange, and the use of a Nagios-based prototype deployment for validation.
Speaker: Mr Ian Neilson (CERN)
46. Future Grid Deployment Strategy
The paper reflects on the experience gained from the deployment of the gLite middleware on the EGEE infrastructure and describes the changes required to meet the demanding requirements of the next few years. In particular, focus is given to the changes required in YAIM, the grid middleware configuration tool: the required granularity of releases, the changes necessary for a modular approach, and how a smooth transition can be achieved. In addition, most computing centres are upgrading hardware from 32-bit CPUs to 64-bit. The additional problems this creates are discussed, along with possible solutions so that full support can be given, and the resulting effect on YAIM and grid middleware in general is described. A further requirement is the need to support multiple platforms and to migrate to newer versions of the OS. The middleware evolution also contains similar transitions, such as upgrading VDT, which is at the core of the middleware. This paper contains information that will be of use to system administrators of computing centres running grid services.
Speaker: Mr Louis Poncet (CERN)
13:00 Lunch Break (Main Auditorium)
Scientific Linux (Main Auditorium)
15:30 Coffee Break (Foyer)
Miscellaneous (Main Auditorium)
51. Increasing Reliability Through System Testing and Failure Prediction
Building data centres from a very large number of components of finite reliability increases the probability of hardware failures, potentially leading to data corruption and unscheduled downtime. In addition, the typically extensive variation in hardware types increases the probability of similar errors due to software incompatibility. We report on the testing and verification methods and software used to check system integrity and decrease service downtime through early problem detection and prediction.
Speaker: Mr Andras Horvath (CERN)
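The abstract does not spell out the prediction methods; one common form of early detection is to track per-component error counters over time, for example a drive's SMART reallocated-sector count, and flag components whose counters grow before they fail outright. A hedged sketch (the threshold and attribute are illustrative assumptions, not CERN's criteria):

```python
# Hedged sketch of "early problem detection": compare successive readings
# of a per-drive error counter (e.g. SMART reallocated sector count) and
# flag any drive whose counter grew or crossed a threshold. Threshold and
# drive names are illustrative assumptions.
THRESHOLD = 50  # reallocated sectors considered "failing soon" (assumed)

def flag_suspect_drives(previous: dict, current: dict) -> list:
    """Return drives whose error counter grew or crossed the threshold."""
    suspects = []
    for drive, count in current.items():
        grew = count > previous.get(drive, 0)
        if grew or count >= THRESHOLD:
            suspects.append(drive)
    return sorted(suspects)

yesterday = {"sda": 0, "sdb": 3, "sdc": 60}
today     = {"sda": 0, "sdb": 7, "sdc": 60}
print(flag_suspect_drives(yesterday, today))  # sdb grew, sdc over threshold
```

Flagged drives can then be drained and replaced during scheduled maintenance instead of causing unscheduled downtime, which is the operational payoff described in the abstract.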
Benchmarks I (Main Auditorium)
53. Performance of Modern Processors with HEP Code
I compared the performance of several processors from Intel and AMD when running 32-bit and 64-bit code on SL4, using typical HEP code. I have also started looking at SPEC CPU2006 int and fp results on a quad-core Intel processor.
Speaker: Dr Michele Michelotto (INFN Padova)
54. CPU Benchmarking at GridKa - Update April 2007
I'll continue the discussion about CPU benchmarking. New topics are: experiences with new worker nodes at GridKa, quad-core measurements, differing levels of optimization, and first SPEC CPU2006 results.
Speaker: Manfred Alef (Forschungszentrum Karlsruhe)
55. Many-Core CPUs - Parallel Computing in HEP
Parallel computing in HEP is regarded as exotic and unnecessary (at best). I will talk about the recent BaBar D0 mixing results and how parallel computation helped. I will also illustrate that parallel computing will be the only way to take advantage of the upcoming "many-core" CPUs.
Speaker: Alf Wachsmann (SLAC)
56. Multi-core CPU Performance in High Energy Physics Applications
Multi-core CPUs are the standard way to make performance-efficient use of the additional on-chip area provided by advanced silicon technologies. Though this leads to a more fine-grained parallel approach at the programming level, for instance by introducing multithreading, it is also expected that trivially parallel applications like event processing in High Energy Physics can take advantage of these new technologies. In the talk the performance of dual- and quad-core systems is compared based on real HEP applications such as the ROOT stress benchmark and the ATLAS Athena framework. The goal of the tests was to investigate the ability of those systems to be integrated into large farm systems controlled by a queuing system. Besides the benchmark results from different compute servers, other relevant numbers such as the price/performance ratio and the ratio of electrical power consumption to performance are discussed. Additionally, a short view of the design of certain multi-core architectures is given, and possible bottlenecks in using such systems are addressed.
Speaker: Dr Peter Wegner (DESY)
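The farm-integration question above comes down to how event throughput scales as more concurrent worker processes share one multi-core box. A generic way to measure that is sketched below; the workload is a trivial CPU-bound stand-in, not the ROOT stress test or Athena jobs used in the talk:

```python
# Generic sketch of a multi-core throughput test: run the same CPU-bound
# "event processing" workload under 1, 2, 4 concurrent processes and
# report events/second. The per-event work (a linear congruential loop)
# is a stand-in for real reconstruction code.
import time
from multiprocessing import Pool

def process_event(seed: int) -> int:
    # Stand-in for per-event reconstruction work (pure CPU).
    x = seed
    for _ in range(20_000):
        x = (x * 1103515245 + 12345) % (2**31)
    return x

def throughput(nprocs: int, nevents: int = 200) -> float:
    """Events per second when nevents are farmed out to nprocs workers."""
    start = time.perf_counter()
    with Pool(nprocs) as pool:
        pool.map(process_event, range(nevents))
    return nevents / (time.perf_counter() - start)

if __name__ == "__main__":
    for n in (1, 2, 4):
        print(f"{n} processes: {throughput(n):7.1f} events/s")
```

On a memory-bound workload the scaling curve flattens well before the core count, which is exactly the kind of bottleneck the talk's comparison of dual- and quad-core servers is probing.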
10:45 Coffee Break (Foyer)
Benchmarks II (Main Auditorium)