Due to the rapidly evolving coronavirus situation, we have decided to move the Terascale meeting to a fully online format.
Welcome to the 14th Annual Helmholtz Alliance Workshop on "Physics at the Terascale"! As in previous years, the workshop will offer a rich programme of stimulating plenary talks and parallel sessions full of intense discussion on topics connected to the LHC, linear colliders, and Belle.
The meeting will be held fully online, with all sessions hosted via Zoom. Connection details will be made available to registered participants shortly before the meeting.
The following parallel sessions will be held:
Please contact the corresponding session convener if you would like to give a presentation in one of the parallel sessions.
Accurate simulation of the interaction of particles with the detector material is of utmost importance for the success of modern particle physics. Software libraries like GEANT4 already allow the modeling of physical processes inside detectors with high precision. The downside of this approach is its computational cost in terms of computing time and resources.
Recent developments in generative machine learning models seem to provide a promising alternative for faster yet accurate simulations. We present the steps taken in the development of a GraphGAN for the simulation of the CMS High Granularity Calorimeter (HGCal), which is being developed for the High-Luminosity upgrade of the LHC.
As a first result, we will show an energy regression using Graph Neural Networks.
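As an illustration of the kind of graph-based energy regression mentioned above, the following is a minimal sketch using PyTorch Geometric; the layer choices, hit features (position and energy) and network size are assumptions for illustration only, not the configuration used in the contribution.

```python
# Minimal sketch: regress the shower energy from a graph of calorimeter hits.
# The feature set (x, y, z, hit energy) and architecture are illustrative assumptions.
import torch
from torch.nn import Linear, ReLU, Sequential
from torch_geometric.nn import GCNConv, global_add_pool

class ShowerEnergyRegressor(torch.nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)   # message passing over the hit graph
        self.conv2 = GCNConv(hidden, hidden)
        self.head = Sequential(Linear(hidden, hidden), ReLU(), Linear(hidden, 1))

    def forward(self, x, edge_index, batch):
        # x: [n_hits, n_features], edge_index: [2, n_edges], batch: graph assignment per hit
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        h = global_add_pool(h, batch)              # sum-pool hits into one shower vector
        return self.head(h).squeeze(-1)            # predicted shower energy
```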
Detector simulation is a key cornerstone of modern high energy physics. Traditional simulation tools rely on Monte Carlo methods, which consume significant computational resources and are projected to be a major bottleneck at the high-luminosity stage of the LHC and for future colliders. Calorimeter shower simulation has been a focus of fast-simulation efforts, as it is particularly computationally intensive due to the large number of particle interactions with the detector material. Deep generative models hold promise as a potential solution, offering drastic reductions in compute times.
This contribution presents progress towards accurate simulation of particle showers in highly granular calorimeters in two directions. Firstly, initial progress on accurately simulating hadronic showers using a Wasserstein-GAN (WGAN) and a Bounded Information Bottleneck Autoencoder (BIB-AE) is demonstrated. The degree of fidelity achieved is compared before and after interfacing with a state-of-the-art pattern recognition algorithm - the Pandora Particle Flow Algorithm. Secondly, an ongoing study that seeks to extend the success of previous work, which demonstrated accurate simulation of electromagnetic showers, is presented. While the prior work focused on the specific case of a particle incident perpendicular to the calorimeter face, this study aims to additionally condition on the incident angle of the particle.
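To make the WGAN part of this programme concrete, the following is a minimal sketch of one critic update with gradient penalty; the network definitions, the flat shower representation and the hyper-parameters are illustrative assumptions and differ from the actual WGAN and BIB-AE setups.

```python
# Minimal sketch of one WGAN-GP critic update for calorimeter shower images.
import torch

def critic_step(critic, generator, real, opt_c, noise_dim=100, gp_weight=10.0):
    batch = real.size(0)
    fake = generator(torch.randn(batch, noise_dim)).detach()

    # Wasserstein estimate: the critic should score real showers high and fake ones low
    loss = critic(fake).mean() - critic(real).mean()

    # Gradient penalty on random interpolations between real and fake showers
    eps = torch.rand(batch, 1)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(mix).sum(), mix, create_graph=True)[0]
    loss = loss + gp_weight * ((grad.norm(2, dim=1) - 1) ** 2).mean()

    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
    return loss.item()
```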
Quantum Machine Learning is among the most promising applications on near-term quantum devices, which have the potential to tackle certain problems faster than traditional computers. Classical Machine Learning (ML) already plays a significant role in particle physics in speeding up detector simulations. Generative Adversarial Networks (GANs) have been shown to achieve a level of accuracy similar to Monte Carlo-based simulations while decreasing the computation time by orders of magnitude. In this research we go one step further and apply quantum computing to GAN-based detector simulations. Given the practical limitations of current quantum hardware in terms of number of qubits, connectivity, and coherence time, we perform initial tests with a simplified GAN model running on quantum simulators. The model is a classical-quantum hybrid ansatz. It consists of a quantum generator, defined as a parameterised circuit based on single- and two-qubit gates, and a classical discriminator network. Our initial qGAN prototype focuses on a one-dimensional toy distribution representing the energy deposited in a detector by a single particle. It employs three qubits and achieves high physics accuracy thanks to hyper-parameter optimisation. Furthermore, we study the influence of real hardware noise on the quantum GAN training. A second qGAN is developed to simulate 2D images with a 64-pixel resolution, representing the energy patterns in the detector. Different quantum ansätze are studied. We obtained the best results using a tree tensor network architecture with six qubits. Additionally, we discuss challenges and potential benefits of quantum computing as well as our plans for future development.
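For orientation, the following is a minimal sketch of what a three-qubit quantum generator can look like: a parameterised circuit whose measurement probabilities model a discretised one-dimensional energy distribution. PennyLane, the gate layout and the layer count are illustrative assumptions, not the framework or ansatz used in the contribution.

```python
# Minimal hybrid qGAN sketch: quantum generator as a parameterised 3-qubit circuit.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def generator(weights):
    # Layers of single-qubit rotations followed by entangling CNOTs
    for layer in weights:
        for w in range(n_qubits):
            qml.RY(layer[w], wires=w)
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    # The 2^3 = 8 computational-basis probabilities form the generated distribution
    return qml.probs(wires=range(n_qubits))

weights = np.array(np.random.uniform(0, np.pi, (2, n_qubits)), requires_grad=True)
generated = generator(weights)   # to be scored against data by a classical discriminator
```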
Highly precise simulations of elementary particle interactions and processes are fundamental to accurately reproduce and interpret the experimental results in High Energy Physics (HEP) detectors and to correctly reconstruct the particle flows. Today, detector simulations typically rely on Monte Carlo-based methods, which are extremely demanding in terms of computing resources. The need for simulated data at future experiments - such as those that will run at the High Luminosity Large Hadron Collider (HL-LHC) - is expected to increase by orders of magnitude, drastically increasing the computational challenge. This expectation motivates the research into alternative deep-learning-based simulation strategies. In this research we speed up HEP detector simulations for the specific case of calorimeters using Generative Adversarial Networks (GANs), by a factor of over 150 000 compared to standard Monte Carlo simulations. This is achieved by designing convolutional 2D network architectures that generate 3D images representing the detector volume. A detailed physics evaluation shows an accuracy similar to the Monte Carlo simulation.
Furthermore, we quantize the neural network from the float32 data format to a reduced-precision int8 format with the Intel Low Precision Optimization Tool (LPOT). This results in an additional 1.8x speed-up on modern Intel hardware while maintaining the physics accuracy. These excellent results consolidate the benefit of using GANs for future fast detector simulations.
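For readers unfamiliar with low-precision inference, the following sketch illustrates the affine float32-to-int8 mapping that post-training quantization tools such as LPOT apply to weights and activations; it shows only the underlying arithmetic and is not the LPOT API itself.

```python
# Generic illustration of symmetric float32 -> int8 quantization.
import numpy as np

def quantize_int8(x):
    # Map [-max|x|, +max|x|] onto the integer range [-127, 127]
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(weights)
error = np.abs(dequantize(q, scale) - weights).max()   # small quantization error
```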
The Pixel Vertex Detector (PXD) is the newest and most sensitive subdetector of the Belle II experiment. Data from the PXD and other sensors allow us to reconstruct particle tracks and decay vertices. The effect of background processes on track reconstruction is simulated by adding measured or simulated background hit patterns to the hits produced by simulated signal particles originating from the processes of interest. This model requires a large set of statistically independent PXD background noise samples to avoid a systematic bias of the reconstructed tracks; however, the fine-grained PXD data require a substantial amount of storage. As an efficient way of producing background information for fast simulation, we introduce an on-demand PXD background generator based on an Intra-Event Aware GAN (IEAGAN). The generator is conditioned on the PXD sensor number in order to produce sensor-dependent PXD images, approximating the concept of an "event" in the detector: the images belonging to one event share semantic and statistical features that are extremely hard for even state-of-the-art GANs to mimic. The IEAGAN model captures these dependencies by imposing a relational inductive bias over the batch dimension.
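As a rough illustration of the conditioning mechanism, the following is a minimal sketch of a generator that takes the sensor index through an embedding and emits one background image per sensor. The image size, layer widths and embedding dimension are illustrative assumptions; the actual IEAGAN additionally imposes the relational inductive bias across the images of one event described above.

```python
# Minimal sketch of a sensor-conditioned background image generator.
import torch
import torch.nn as nn

class SensorConditionedGenerator(nn.Module):
    def __init__(self, n_sensors=40, noise_dim=128, emb_dim=16, out_pixels=64 * 64):
        super().__init__()
        # out_pixels: downsampled sensor image, for illustration only
        self.sensor_emb = nn.Embedding(n_sensors, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + emb_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, out_pixels), nn.Sigmoid(),   # normalised pixel occupancy
        )

    def forward(self, noise, sensor_id):
        cond = self.sensor_emb(sensor_id)                # [batch, emb_dim]
        return self.net(torch.cat([noise, cond], dim=1))
```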
We present NPointFunctions, a tool developed to obtain the desired one-loop amplitudes for an arbitrary BSM model. The tool aims to be customizable, modular and extensible for additional process- or amplitude-dependent contributions. It relies on SARAH-generated output used with the FeynArts/FormCalc packages, interfaced in an appropriate way. Several lepton flavour violating (LFV) processes have already been implemented and applied in different models, for example the MRSSM. The resulting tool is an extension of FlexibleSUSY, a spectrum-generator generator.
Calorimeter simulation is the most computationally expensive part of the Monte Carlo generation of samples necessary for the analysis of experimental data at the Large Hadron Collider (LHC). The High-Luminosity upgrade of the LHC will require an even larger amount of such samples. We present a technique based on Discrete Variational Autoencoders (DVAEs) to simulate particle showers in electromagnetic calorimeters. We discuss how this work paves the way towards the exploration of quantum annealing processors as sampling devices for the generation of simulated High Energy Physics datasets.
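As a conceptual illustration, the following sketch shows a VAE with a binary (discrete) latent space, the kind of latent prior that a quantum annealer could later be used to sample from. Shower images are flattened vectors here, and the sizes and the straight-through Bernoulli relaxation are illustrative assumptions rather than the model presented in the contribution.

```python
# Conceptual sketch of a VAE with binary latent units.
import torch
import torch.nn as nn

class DiscreteVAE(nn.Module):
    def __init__(self, n_pixels=30 * 30, n_latent=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 256), nn.ReLU(),
                                     nn.Linear(256, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_pixels), nn.ReLU())

    def forward(self, x):
        logits = self.encoder(x)
        probs = torch.sigmoid(logits)
        # Straight-through Bernoulli: sample binary latents, keep gradients through probs
        z_hard = torch.bernoulli(probs)
        z = z_hard + probs - probs.detach()
        return self.decoder(z), probs
```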
The imaging capabilities of highly granular calorimeters allow the inner structure of hadronic showers to be studied in detail. Reconstructing the particle composition and properties of secondary showers in each hadronic cascade brings additional information that can be used in different applications. This contribution presents a graph-neural-network-based reconstruction of the electromagnetic component within a hadronic shower in the CALICE analog hadron calorimeter. Preliminary model performance, first results on the application to hadronic energy reconstruction, and prospects for segmenting distinct secondary particle components will be discussed.
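To illustrate the hit-level task, the following is a minimal sketch that builds a k-nearest-neighbour graph from hit positions and lets a small message-passing network score each hit as electromagnetic or hadronic. The graph construction, features and network are illustrative assumptions and not the model used in the contribution.

```python
# Minimal sketch of hit classification inside a hadronic shower.
import torch

def knn_edges(pos, k=8):
    # pos: [n_hits, 3]; connect every hit to its k nearest neighbours
    dist = torch.cdist(pos, pos)
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # drop the self-match
    src = torch.arange(pos.size(0)).repeat_interleave(k)
    return torch.stack([src, idx.reshape(-1)])             # [2, n_hits * k]

class HitClassifier(torch.nn.Module):
    def __init__(self, n_feat=4, hidden=32):
        super().__init__()
        self.msg = torch.nn.Linear(2 * n_feat, hidden)
        self.out = torch.nn.Linear(hidden, 1)

    def forward(self, x, edges):
        src, dst = edges
        # messages from each neighbour (dst) to the hit (src), summed per hit
        m = torch.relu(self.msg(torch.cat([x[src], x[dst]], dim=1)))
        agg = torch.zeros(x.size(0), m.size(1)).index_add_(0, src, m)
        return torch.sigmoid(self.out(agg)).squeeze(-1)    # P(hit is electromagnetic)
```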
In this talk I present a method to reconstruct the kinematics of neutral-current deep inelastic scattering (DIS) using a deep neural network (DNN). Unlike traditional methods, it exploits the full kinematic information of both the scattered electron and the hadronic final state, and it accounts for QED radiation by identifying events with radiated photons and event-level momentum imbalance. The method is studied with simulated events at HERA and the future Electron-Ion Collider (EIC). We show that the DNN method outperforms all traditional methods over the full phase space, improving resolution and reducing bias. The DNN-based reconstruction has the potential to extend the kinematic reach of future experiments at the EIC, and thus their discovery potential in polarized and nuclear DIS.
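Schematically, such a reconstruction can be set up as a small fully connected regression from combined electron and hadronic-final-state observables to the event kinematics; the feature list, targets and network size below are illustrative assumptions, not those of the presented method.

```python
# Minimal sketch of a DNN regression of DIS kinematics.
import torch.nn as nn

features = ["E_e", "theta_e", "Sigma_h", "pT_h", "pT_e_over_pT_h", "E_minus_pz"]  # assumed inputs

dis_regressor = nn.Sequential(
    nn.Linear(len(features), 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 2),          # outputs: (log x, log Q^2)
)
# Trained with an MSE loss against generator-level kinematics from simulation.
```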
Despite continuous efforts by the LHC physics program as well as other experiments to conduct searches for physics beyond the standard model, no evidence has been found so far. A major disadvantage of many current searches is their reliance on specific signal and background models. Since it is impossible to cover all possible models and phase space regions with a dedicated search, the development of model-independent methods, which can be directly trained on and applied to data, is necessary.
We propose a novel method for unsupervised anomaly detection, called CATHODE, which combines neural density estimation and classification. We present the first application of this method to the LHC Olympics 2020 R&D dataset and compare the performance of CATHODE, as well as its robustness against correlations in the input features, to previous state-of-the-art anomaly detectors based entirely on either density estimation or classification.
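The overall pipeline can be sketched as follows: learn the background density of auxiliary features from the sidebands, sample background-like events for the signal region, and train a classifier to separate data from those samples. CATHODE itself uses a conditional neural density estimator; the KernelDensity and HistGradientBoosting objects below are only simplified stand-ins for illustration.

```python
# Schematic, simplified CATHODE-like pipeline.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.ensemble import HistGradientBoostingClassifier

def cathode_like(x_sideband, x_signal_region, n_samples=100_000):
    # 1) density estimate of the background features from the sidebands
    kde = KernelDensity(bandwidth=0.1).fit(x_sideband)
    # 2) sample synthetic background-like events for the signal region
    x_bkg = kde.sample(n_samples)
    # 3) classifier: data in the signal region (label 1) vs. sampled background (label 0)
    X = np.vstack([x_signal_region, x_bkg])
    y = np.concatenate([np.ones(len(x_signal_region)), np.zeros(len(x_bkg))])
    clf = HistGradientBoostingClassifier().fit(X, y)
    return clf.predict_proba(x_signal_region)[:, 1]   # anomaly score per event
```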
The associated production of a bb̄ pair with a Higgs boson could provide an important probe of both the size and the phase of the bottom-quark Yukawa coupling, y_b. However, the signal is shrouded by several background processes, including the irreducible Zh, Z→bb̄ background. We show that the analysis of kinematic shapes provides a concrete prescription for separating the y_b-sensitive production modes from both the irreducible and the QCD-QED backgrounds using the bb̄γγ final state. We take a page from game theory and use Shapley values to make Boosted Decision Trees interpretable in terms of kinematic measurables, providing physics insight into the variations in the kinematic shapes of the different channels that make this separation possible. Adding interpretability to the machine learning algorithm opens up the black box and allows us to cherry-pick only those kinematic variables that matter most in the analysis. We resurrect the hope of constraining the size and, possibly, the phase of y_b using kinematic shape studies of bb̄h production with the full HL-LHC data and at the FCC-hh.
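In practice, Shapley-value attributions for tree ensembles can be obtained with standard tooling; the sketch below shows the typical pattern of training a BDT and ranking kinematic variables by their Shapley values. The variable names, the stand-in dataset and the use of XGBoost/shap are illustrative assumptions, not the analysis setup of the contribution.

```python
# Minimal sketch: rank kinematic variables of a BDT by Shapley values.
import numpy as np
import pandas as pd
import xgboost
import shap

variables = ["m_bb", "m_gamgam", "pT_bb", "pT_gamgam", "dR_bb", "dR_gamgam"]  # assumed observables

# Stand-in dataset; in practice these are simulated signal and background events
X = pd.DataFrame(np.random.rand(5000, len(variables)), columns=variables)
y = np.random.randint(0, 2, size=5000)

model = xgboost.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

# Shapley values attribute each BDT prediction to the individual kinematic variables
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)   # global ranking of the most informative variables
```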
Dynamically and opportunistically extending the compute resources of an HTC cluster (ATLAS-BFG) with compute resources of an HPC cluster (NEMO) allows the computational capacity to be increased based on user demand and resource availability, and as such leads to an efficient use of resources across the boundaries of clusters and disciplines. This is completely transparent to users, who are presented with the same software environment regardless of where their jobs are scheduled. This work illustrates how this is realized using the COBalD/TARDIS software. Furthermore, an overview of the monitoring setup is given and an outlook on a potential accounting system tailored to such use cases is presented.
A file caching setup to access ATLAS data was deployed for the NEMO HPC cluster in Freiburg, using an XRootD proxy server running in forwarding mode. This setup allows HEP workflows to run without the need to operate large local data storage. Several performance tests were carried out to measure any potential overhead caused by the caching setup with respect to direct, non-cached data access.
Ongoing and future HEP experiments with their growing data volumes and computing power requirements pose a constant challenge for the computing model and infrastructure.
Therefore, the HEP computing group of KIT is developing software and concepts in various projects to improve HEP computing further.
One of these developments is the transparent integration of opportunistic resources, increasing the pool of resources available from different providers.
Furthermore, coordinated caching concepts in a distributed system help ensure sufficient data rate and reduce network load.
Coordinated caches can also facilitate data management in a distributed system with automated copies and cleanups on demand.
In addition to traditional CPU resources, KIT provides GPUs through a batch system for both local groups and the Grid.
This talk will present the status and plans of our developments on opportunistic resources, distributed caching, and GPU usage via batch systems.
The lepton–proton collisions produced at the HERA collider represent a unique high energy physics data set. A number of years after the end of collisions, the data collected by the H1 experiment, as well as the simulated events and all software needed for reconstruction, simulation and data analysis were migrated into a preserved operational mode at DESY. A recent modernisation of the H1 software architecture has been performed, which will not only facilitate ongoing and future data analysis efforts with the new inclusion of modern analysis tools, but also ensure the long-term availability of the H1 data and associated software. The present status of the H1 software stack, the data, simulations and the currently supported computing platforms for data analysis activities are discussed.
After the long shutdown preparing the CMS detector for Run 3, the tracker alignment constants, namely the position, orientation, and curvature of each of the 15148 modules that compose the tracking system, need to be derived again with high precision in order to ensure good detector performance for physics analysis. This process constitutes a major computational challenge due to the enormous number of degrees of freedom involved. The latest public results of the CMS tracker alignment performance, corresponding to the very first alignment with cosmic rays derived after the work in the underground experimental cavern was finished, will be presented. The workflows, turnaround times, the so-called automated alignment, and the use of the CMS CERN Analysis Facilities (CAF) for the derivation of the alignment conditions will also be discussed.