The computing infrastructure for the LHC experiments at CERN was planned as a distributed system right from the beginning. In parallel to the construction of the experiments, the Worldwide LHC Computing Grid (WLCG) was commissioned in the early 2000s. After more than a decade of experience with globally distributed data management, the ATLAS experiment developed a new tool for this purpose: RUCIO. It has been designed to cope with the demands of the ATLAS experiment over the next 15 years. After its successful deployment for ATLAS, RUCIO has become a community tool that has been adopted by other HEP experiments and beyond.
The upcoming SKA Observatory is a world-class radio observatory headquartered in the UK. When the telescopes become operational (~2028), they will produce data at unprecedented rates from the two sites (in South Africa and Australia), with data reduction taking place through the central signal processors and science data processors to build the Observatory data products. Because the demands of SKAO are similar to those of the LHC experiments, RUCIO is being tested extensively as a core data management tool.
During the seminar, leading RUCIO developers from ATLAS will present the concepts of RUCIO and its use in particle physics. In a second contribution, Rohini Joshi and Rob Barnsley will present experiences from the prototype SKAO RUCIO deployment, focusing on data lifecycle use cases.
Rob Barnsley (SKAO), Martin Barisits (CERN), Thomas Beermann (DESY), Mario Lassnig (CERN), Rohini Joshi (SKAO)
ZOOM Meeting “PUNCHLunch seminar”:
Webinar ID: 919 1665 4877, passcode: 481572