Welcome to the 16th Annual Helmholtz Alliance Workshop on "Physics at the Terascale"! As in the past 16 years, the workshop will offer a rich programme of stimulating plenary talks and parallel sessions full of intense discussion on topics connected to the LHC, linear colliders, and Belle.
This meeting will be organised as an in-person meeting, with the option to connect remotely if needed. The number of sessions will depend on the number of registrations. A call for abstracts is open.
Connection details will be made available to registered participants shortly before the meeting.
Topics of the parallel sessions include:
The Zoom links appear only in the menu when you are registered for this event and logged in with the account that you used for the registration.
In person at the DESY main auditorium, for remote participation please use the following link:
https://desy.zoom.us/j/69167728936?pwd=d2krbzA0S1RhRG1oRzFPUkF5S1lrZz09
Meeting ID: 691 6772 8936
Passcode: HGF
Composite Higgs (CH) models offer an attractive means to solve the hierarchy problem and, at the same time, explain the flavour hierarchies observed in nature via the idea of partial compositeness (PC). In this talk, predictions for the fermion spectrum of a minimal UV realisation of PC are presented, considering each Standard-Model (SM) fermion to mix linearly with a bound state consisting of a new scalar and a new fermion, and taking into account the dynamical emergence of the composites. Employing the non-perturbative functional renormalisation group, the scaling of the relevant correlation functions is examined and the resulting SM-fermion masses are analysed. Finally, novel ideas to mitigate the residual tuning in CH models will be presented.
The initiation of a novel neutrino physics program at the Large Hadron Collider (LHC) and the proposed Forward Physics Facility (FPF) motivate studies of the discovery potential of these searches. This requires resolving degeneracies between new predictions and uncertainties in modeling neutrino production in the forward kinematic region. Based on a broad selection of existing predictions for the parent hadron spectra at FASER$\nu$ and the FPF, we parametrize the expected correlations in the spectra of neutrinos produced in their decays, and use a Fisher information approach to determine the highest achievable precision for their observation. This allows for constraining various physics processes within and beyond the Standard Model, including neutrino non-standard interactions. We also illustrate how combining multiple neutrino observables could lead to experimental confirmation of the enhanced-strangeness scenario proposed to resolve the cosmic-ray muon puzzle during LHC Run 3.
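The Fisher-information logic described above can be illustrated with a toy counting experiment. This is a minimal sketch only: the bin contents and signal shape below are invented for illustration and are not taken from the FASER$\nu$/FPF study.

```python
import numpy as np

def fisher_poisson(mu, dmu_dtheta):
    """Fisher information for independent Poisson-distributed bins.

    mu         : expected counts per bin at the nominal parameter value
    dmu_dtheta : derivative of each expectation w.r.t. the parameter theta
    """
    mu = np.asarray(mu, dtype=float)
    dmu = np.asarray(dmu_dtheta, dtype=float)
    # I(theta) = sum_i (dmu_i/dtheta)^2 / mu_i for Poisson counts
    return np.sum(dmu**2 / mu)

# Hypothetical neutrino-energy spectrum: 4 bins, with theta scaling a signal shape
mu_bins = np.array([100.0, 80.0, 40.0, 10.0])   # nominal expectations (made up)
signal_shape = np.array([1.0, 4.0, 6.0, 2.0])   # d(mu)/d(theta) per bin (made up)

info = fisher_poisson(mu_bins, signal_shape)
sigma_theta = 1.0 / np.sqrt(info)   # Cramér-Rao bound: highest achievable precision
```

The Cramér-Rao bound `sigma_theta` is the "highest achievable precision" in the sense used in the abstract; correlations between spectra would enter through off-diagonal terms of a multi-parameter Fisher matrix.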
The Higgs-gluon interaction is crucial for LHC phenomenology. To improve the constraints on the CP structure of this coupling, in this talk I will investigate Higgs production with two jets using machine learning. In particular, the CP sensitivity of the so far neglected phase space region that differs from the typical vector boson fusion-like kinematics is exploited. The presented results suggest that significant improvements in current experimental limits are possible. In the talk I also discuss the most relevant observables and how CP violation in the Higgs-gluon interaction can be disentangled from CP violation in the interaction between the Higgs boson and massive vector bosons. Assuming the absence of CP-violating Higgs interactions with coloured beyond-the-Standard-Model states, the projected limits on a CP-violating top-Yukawa coupling are stronger than more direct probes like top-associated Higgs production and limits from a global fit.
The CP structure of the Higgs boson is a fundamental Higgs property which has not yet been constrained with high precision.
CP violation in the Yukawa coupling between the Higgs boson and the top quark can be probed directly at the Large Hadron Collider by measuring top-quark-associated Higgs production. Multivariate analysis techniques are designed for a specific signal model and, therefore, complicate reinterpretations and statistical combinations between experiments.
With this motivation in mind, we propose in this work a CP-sensitive extension of the simplified template cross-section framework.
Considering multiple Higgs decay channels, we perform an in-depth comparison of CP-sensitive observables and combinations thereof. We present options to extend the existing binning in the transverse momentum of the Higgs boson by a second dimension, and a selection of candidate observables is presented as possible choices.
Recently, a concept for a Hybrid Asymmetric Linear Higgs Factory (HALHF) has been proposed, where a center-of-mass energy of 250 GeV is reached by colliding a plasma-wakefield accelerated electron beam of 500 GeV with a conventionally accelerated positron beam of about 30 GeV. While clearly facing R&D challenges, this concept bears the potential to be significantly cheaper than any other proposed Higgs Factory, comparable in cost e.g. to the EIC. The asymmetric design changes the requirements on the detector at such a facility, which needs to be adapted to forward-boosted event topologies as well as different distributions of beam-beam backgrounds. This contribution will give a first assessment of the impact of the accelerator design on the physics prospects in terms of some flagship measurements of Higgs factories, and how a detector would need to be adjusted from a typical symmetric Higgs factory design.
Real singlet models are simple extensions of the SM: they add a new Higgs-like scalar that transforms as a singlet under the SM gauge group. If this additional particle, labelled S, is heavier than twice the SM Higgs boson mass, it could play a role in resonance-enhanced pp$\rightarrow$hh production. We simulate samples for such scenarios for the inclusive process, for pp$\rightarrow$S$\rightarrow$hh alone, as well as for production without the intermediate resonance. By comparing the distributions of these three processes in different observables, such as the invariant mass or the final-state transverse momenta, while taking the finite width of the heavy scalar into account, we discuss the importance of interference effects for di-Higgs production.
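The interference effects discussed above can be sketched with a toy amplitude model: a Breit-Wigner resonance for the heavy scalar plus a flat continuum. All masses, widths, and couplings below are hypothetical placeholders, not values from the analysis.

```python
import numpy as np

def dihiggs_toy_rate(m_hh, m_s, gamma_s, c_res, c_cont):
    """Toy |A|^2 for pp -> hh: a Breit-Wigner resonant amplitude for a heavy
    scalar S (mass m_s, width gamma_s) plus a flat continuum amplitude c_cont.
    The cross term between the two is the interference contribution."""
    bw = c_res / (m_hh**2 - m_s**2 + 1j * m_s * gamma_s)
    total = np.abs(bw + c_cont) ** 2
    res_only = np.abs(bw) ** 2
    cont_only = abs(c_cont) ** 2
    interference = total - res_only - cont_only
    return total, interference

# Illustrative numbers only: S at 500 GeV with a 20 GeV width
_, interf_below = dihiggs_toy_rate(480.0, 500.0, 20.0, 1.0e4, 0.05)
_, interf_above = dihiggs_toy_rate(520.0, 500.0, 20.0, 1.0e4, 0.05)
```

In this toy the interference term flips sign across the resonance (destructive below, constructive above for these couplings), which is why the finite width and the interference with the non-resonant continuum distort the m(hh) lineshape.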
The measurement of the differential cross section of the charged-current Drell-Yan (ccDY) process in the decay $W\to\ell\nu$ is presented, where $\ell$ is an electron or muon. It is based on pp-collision data taken with the ATLAS detector during LHC Run 2 at a center-of-mass energy of $\sqrt{s}=13\,$TeV, corresponding to an integrated luminosity of $\mathcal{L}=140\,\text{fb}^{-1}$.
The cross section is measured differentially as a function of the transverse mass $m_T^W$ as well as double-differentially in $m_T^W$ and the pseudorapidity of the lepton with a focus on the high transverse mass region between $200\,$GeV and $2000\,$GeV.
A precise measurement of the ccDY process at high masses is performed for the first time and will allow for constraints on the parton distribution functions of the proton and on effective field theories in the future. An overview of the complete analysis will be given.
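The transverse mass used as the measurement variable above has the standard definition $m_T^W = \sqrt{2\,p_T^\ell\,p_T^\nu\,(1-\cos\Delta\phi)}$. A minimal numerical check, with made-up momenta:

```python
import numpy as np

def mt_w(pt_lep, pt_nu, dphi):
    """Transverse mass of the lepton-neutrino system:
    m_T^W = sqrt(2 * pT^lep * pT^nu * (1 - cos(dphi)))."""
    return np.sqrt(2.0 * pt_lep * pt_nu * (1.0 - np.cos(dphi)))

# A back-to-back lepton and neutrino with pT = 40 GeV each reconstruct
# m_T^W at the Jacobian-peak value 2 * 40 = 80 GeV, close to m_W:
m = mt_w(40.0, 40.0, np.pi)
```

The high-m_T^W tail between 200 and 2000 GeV probed in the analysis corresponds to far-off-shell W production.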
We present detailed studies of the production of a Z boson with two jets. The electroweak contribution to Z+2 jets is studied in detail, with emphasis on the vector-boson-fusion (VBF) channel. Based on phenomenological studies to identify the VBF signal, we show first results of a Run 2 analysis at detector level.
Jets are crucial for high-energy physics and are part of many analyses at the CMS experiment at the LHC. A well-calibrated jet energy resolution (JER) is mandatory for both measurements and searches to reach high precision. This talk presents the latest JER measurements at the CMS experiment for data collected during the LHC Run 2 and Run 3 data-taking periods. These results are obtained with a well-established method that exploits the transverse-momentum balance of the two most energetic jets per event. Furthermore, a novel technique based on the missing transverse momentum (MET) projection fraction is introduced that is more robust against the increasing number of additional proton-proton interactions (pileup), which heavily affect jets and MET.
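The dijet transverse-momentum balance method mentioned above can be sketched on a toy sample. This is an illustration of the principle only, with invented jet momenta and resolution, not the CMS calibration procedure in detail.

```python
import numpy as np

rng = np.random.default_rng(42)

def jer_from_dijet_asymmetry(pt1, pt2):
    """Estimate the relative jet energy resolution from the pT balance of the
    two leading jets: with A = (pt1 - pt2)/(pt1 + pt2) and both jets at the
    same true pT, sigma(pt)/pt = sqrt(2) * sigma(A)."""
    a = (pt1 - pt2) / (pt1 + pt2)
    return np.sqrt(2.0) * np.std(a)

# Toy sample: both jets have true pT = 100 GeV, smeared with 10 % resolution
true_pt, res = 100.0, 0.10
pt1 = rng.normal(true_pt, res * true_pt, 100_000)
pt2 = rng.normal(true_pt, res * true_pt, 100_000)

jer = jer_from_dijet_asymmetry(pt1, pt2)   # recovers roughly 0.10
```

The asymmetry cancels the unknown true pT to first order, which is why the method works on data without relying on simulation.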
The performance of the CMS tracker alignment during the ongoing Run 3 data-taking period is described. The results of the tracker alignment calibration performed with cosmic rays and collision tracks recorded at center-of-mass energies of 900 GeV and 13.6 TeV are presented. The performance of the high-granularity Prompt Calibration Loop alignment for automated online calibration, after a full calendar year since its deployment, is discussed, along with the usage of computing resources for both online and offline calibration. Finally, the prospects for the tracker alignment calibration towards the third year of data-taking in Run 3 are outlined.
In order to observe and measure rare processes in nature, a staggering amount of data needs to be produced and processed at particle colliders. With the advancement of the LHC towards Run 3 and HL-LHC, the flow of data as well as the complexity of the analyses will increase even more. In light of these challenges and the limited resources available, an efficient usage of computing power is critical for future analyses.
In order to analyze data in an efficient way, a new columnar analysis tool, columnflow, has been developed. In this presentation, an introduction to columnflow is given, including an overview of the workflow and some example use cases.
Future collider experiments, such as the upcoming high-luminosity phase of the LHC, are expected to be extremely data-rich. This creates a significant demand for innovative track-reconstruction techniques that reconstruct particle trajectories more efficiently. Specifically, LUXE (Laser Und XFEL Experiment) at DESY, a proposed experiment to investigate the transition into strong-field QED, presents an ideal platform for developing and testing novel techniques, due to the large range of generated positrons and detector occupancies of up to 100 hits/mm² in the initial phase.
To reconstruct positron tracks from the four-layered silicon pixel detector used in LUXE, we formulate the track reconstruction as a quadratic unconstrained binary optimisation (QUBO) problem. This formulation allows the problem to be solved with either a gate-based quantum computer or a quantum annealer. In this talk, the simulated performance of these methods is benchmarked against the classical track reconstruction technique of using a Combinatorial Kalman Filter. Additionally, the talk will include a discussion on the methodologies for transforming pattern recognition problems, like track reconstruction, into effective QUBO problems.
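The QUBO formulation above can be made concrete with a tiny example. In this hedged sketch, each binary variable marks a track candidate; the (invented) diagonal terms reward well-aligned candidates and the off-diagonal terms penalise pairs sharing a hit. The toy is solved by brute force, where a quantum annealer or gate-based device would handle large instances.

```python
import itertools
import numpy as np

# Toy QUBO matrix (upper-triangular convention); values are illustrative only:
# negative diagonal = good track candidate, positive off-diagonal = shared hit.
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -0.8],
])

def solve_qubo_bruteforce(Q):
    """Exhaustively minimise x^T Q x over binary vectors x."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = float(x @ Q @ x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

x_opt, e_opt = solve_qubo_bruteforce(Q)
# The minimiser keeps candidates 0 and 2, which do not conflict,
# and drops candidate 1, which shares hits with both.
```

Mapping a pattern-recognition problem to an effective QUBO amounts to choosing these coefficients so that the ground state encodes the correct track set.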
Particle track reconstruction plays a crucial role in the exploration of new physical phenomena, particularly when rare signal tracks are obscured by a significant background. In muon colliders where beam muons interacting with the detector produce secondary and tertiary background particles, track reconstruction can be computationally intensive due to the large number of detector hits. The formulation of the reconstruction task as quadratic unconstrained binary optimisation (QUBO) enables the use of quantum computers, which are believed to offer an advantage over classical computers in such optimisation scenarios.
The QUBO parameters are determined by combining spatial and temporal information from detector hits, resulting in a 4D quantum algorithm. To demonstrate the effectiveness of this approach, the quantum algorithm is used to reconstruct signal tracks from samples consisting of Monte Carlo simulated charged particles overlaid with background hits for a Muon Collider tracking detector. We will present the obtained reconstruction performance and discuss possible paths for further improvements.
In a wide range of high-energy particle physics analyses, machine learning methods have proven as powerful tools to enhance analysis sensitivity.
In the past years, various machine learning applications were also integrated in central CMS workflows, leading to great improvements in reconstruction and object identification efficiencies.
However, the continuation of successful deployments might be limited in the future due to memory and processing time constraints of more advanced models evaluated on central infrastructure.
A novel inference approach for models trained with TensorFlow, based on ahead-of-time (AOT) compilation, is presented. This approach offers a substantial reduction in memory footprint while preserving or even improving computational performance.
This talk outlines the strategies and limitations of this novel approach and presents the integration workflow for deploying AOT models in production.
In quantum mechanical processes such as proton-proton collisions at the LHC, perturbative calculations can introduce negative density terms as the perturbative series is extended to higher orders. Consequently, numerical sampling techniques (e.g. Monte Carlo) produce data points with either positive or negative weights. This is an issue for probabilistic machine-learning algorithms, since they only function under the positive-definite probability-density paradigm. This talk will give a short update on an ongoing project, funded by the Helmholtz Information & Data Science Academy, aimed at developing generic solutions for negatively weighted data in neural-network-based machine learning models by extending the regime of probabilistic machine-learning models to be negative-weight safe.
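The core difficulty can be demonstrated in a few lines. The toy "NLO-like" sample below is invented for illustration: a positively weighted bulk plus a small negatively weighted component mimicking subtraction terms. Weighted expectation values stay well-defined, but a binned weighted density can dip below zero, which is exactly what breaks likelihood-based models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample: bulk events at x ~ N(0,1) with weight +1, plus a broader
# negatively weighted component at x ~ N(0,2) (hypothetical numbers).
x_pos = rng.normal(0.0, 1.0, 50_000)
x_neg = rng.normal(0.0, 2.0, 5_000)
x = np.concatenate([x_pos, x_neg])
w = np.concatenate([np.ones_like(x_pos), -np.ones_like(x_neg)])

# Weighted moments remain well-defined even with signed weights ...
mean = np.sum(w * x) / np.sum(w)
var = np.sum(w * (x - mean) ** 2) / np.sum(w)

# ... but a finely binned weighted density can go negative in the tails,
# violating the positive-definite assumption of probabilistic models:
hist, _ = np.histogram(x, bins=200, range=(-8, 8), weights=w)
has_negative_bins = bool((hist < 0).any())
```

A "negative-weight safe" model must reproduce such signed densities without assuming a non-negative likelihood everywhere.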
The production of top quark pairs in association with a photon ($t\bar{t}\gamma$) is an important process to investigate the coupling between the photon and the top quark. Precise measurements of this coupling allow tests of the Standard Model (SM) and probes of new physics effects. Standard Model Effective Field Theory (SMEFT) models new physics phenomena beyond the SM via the introduction of higher-dimensional operators. In this talk, the measurement of the differential $t\bar{t}\gamma$ cross-section using $140\;$fb$^{-1}$ of data collected by the ATLAS detector in proton-proton collisions at $\sqrt{s}=13$ TeV and its interpretation in the context of SMEFT will be presented. The measurement is performed in the single-lepton and dilepton decay channels of the top quarks at particle level. The differential cross section as a function of the photon transverse momentum is used to set constraints on the electroweak dipole moments of the top quark.
A precision measurement of the top quark pair production cross-section at 13.0 TeV, based on 59.8 fb$^{-1}$ of the 2018 CMS data, is presented. A new method combining both the dilepton and lepton+jets channels is used to constrain the lepton and b-tag scale factors. The currently expected systematic uncertainty, excluding that of the luminosity, is 1.5%.
The t-channel production is the dominant process for single top quark and single top antiquark production at the LHC.
The presented analysis measures the total cross-sections for top-quark and top-antiquark production $\sigma(tq)$ and $\sigma(\bar{t}q)$ as well as the combined cross-section
$\sigma(tq+\bar{t}q)$ and the cross-section ratio $R_{t}=\sigma(tq)/\sigma(\bar{t}q)$.
The full Run 2 dataset recorded with the ATLAS detector in the years 2015-2018 is used.
The measurements of $\sigma(tq)$ and $\sigma(\bar{t}q)$ are interpreted in an effective field theory approach to constrain the strength of the four-fermion operator $O_{qQ}^{(1,3)}$.
The measured total cross-section is used to derive constraints on the CKM matrix elements involving top quarks.
In the years 2016-2018 the CMS Experiment at CERN's Large Hadron Collider (LHC) recorded a large amount of proton-proton collision data at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb$^{-1}$. With this large dataset, the associated production of top quarks with the Z boson has been measured precisely and differentially. However, background processes were always assumed to follow the expectations of the Standard Model (SM). In this measurement, for the first time, t$\bar{\mathrm{t}}$Z, tWZ and tZq are measured simultaneously and differentially. The measurement is therefore more sensitive to new physics and particularly suitable for effective field theory interpretations. Due to the large overlap between t$\bar{\mathrm{t}}$Z and tWZ, separating the two processes is extremely challenging both theoretically and experimentally; therefore their sum is reported in the results.
Zoom room for colloquium (Tuesday 16:00)
https://desy.zoom.us/j/99616528733
Meeting ID: 996 1652 8733
Passcode: 733220
A measurement of the Higgs boson production via vector boson fusion (VBF) and its decay into a bottom quark-antiquark pair is presented using proton-proton collision data recorded by the CMS experiment at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 91 fb$^{-1}$. Treating the gluon-gluon fusion process as a background and constraining its rate to the value expected in the standard model (SM) within uncertainties, the signal strength of the VBF process, defined as the ratio of the observed signal rate to that predicted by the SM, is measured to be $1.01^{+0.55}_{-0.46}$. The VBF signal is observed with a significance of 2.4 standard deviations relative to the background prediction, while the expected significance is 2.7 standard deviations.
The bottom anti-bottom Higgs boson decay channel of Higgs-associated top quark pair production offers direct access to measurements of the top Yukawa coupling and Higgs-$p_\mathrm{\kern0.1emT}$ differential cross-section, which are sensitive to potential new physics. To incorporate improvements such as developments in b-tagging and Monte Carlo simulation of the dominant $t\bar{t} + b\bar{b}$ background, a legacy analysis of the $t\bar{t}H(H \rightarrow b\bar{b})$ process with the full ATLAS Run 2 dataset of $\mathcal{L} = 140\:\mathrm{fb}^{-1}$ is currently ongoing.
This talk will provide insight into the analysis strategy with a special focus on recent improvements and validation of the transformers – an advanced deep learning architecture – developed in the analysis for event classification and Higgs-$p_\mathrm{\kern0.1emT}$ reconstruction. The developments herein consist especially of the inclusion of missing transverse energy in the model inputs, performance comparisons of competing reconstruction methods, and the optimisation of the region definitions obtained from the event classification networks.
This talk will summarise a measurement of VH differential cross sections in the $H\to b\bar{b}$ final state in the Simplified Template Cross-Section framework with CMS Run 2 data.
Measuring the Higgs self-coupling is a key target for future colliders and is enabled by double Higgs production. An important question is how the precision of this measurement improves with higher center-of-mass collision energy. In this work, we study the ZHH process at center-of-mass energies of 500, 550, and 600 GeV, simulated with the ILD detector concept from the International Linear Collider (ILC) using the DD4HEP toolkit. The accurate reconstruction of ZHH events under realistic detector conditions requires the use of advanced algorithms to fully utilize the initial-state kinematics, including e.g. kinematic fitting, matrix element-inferred likelihoods and jet clustering with graph neural networks (GNNs). This is the first study of the dependence of the self-coupling precision on the choice of center-of-mass energy and it demonstrates the importance of optimizing the center-of-mass energy for increased sensitivity on the self-coupling. The requirements that the Higgs self-coupling measurement puts on the choice of center-of-mass energy will be evaluated as this is important for shaping the landscape of future colliders such as ILC or Cool Copper Collider ($C^3$). It also highlights the reusability of the ILC detector concept and Key4HEP-based analyses for new collider concepts.
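One example of exploiting the known initial-state kinematics mentioned above is the recoil-mass technique at an e+e- collider: the system recoiling against a reconstructed Z can be inferred without measuring it directly. The energies below are illustrative placeholders, not results from the ZHH study.

```python
import numpy as np

def recoil_mass(sqrt_s, e_z, m_z):
    """Mass of the system recoiling against a reconstructed Z at an e+e-
    collider with known initial four-momentum (sqrt_s, 0, 0, 0):
    m_rec^2 = s - 2*sqrt_s*E_Z + m_Z^2."""
    m2 = sqrt_s**2 - 2.0 * sqrt_s * e_z + m_z**2
    return np.sqrt(m2)

# At sqrt(s) = 500 GeV, a Z with (hypothetical) energy E_Z = 170 GeV
# recoils against the HH system:
m_hh_rec = recoil_mass(500.0, 170.0, 91.19)
```

Kinematic fitting generalises this idea by enforcing four-momentum conservation as constraints on all reconstructed objects, improving the di-Higgs mass resolution under realistic detector conditions.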
Some say SUSY is dead because the LHC has not discovered it yet. But is this really true? It turns out that the story is more subtle: SUSY can be 'just around the corner' even if no signs of it have been found, and a closer look is needed to quantify the impact of LHC limits and their implications for future colliders. In this contribution, a study of the prospects for SUSY based on scanning the relevant parameter space of (weak-scale) SUSY parameters is presented.
I concentrate on the properties most relevant for evaluating the experimental prospects: mass differences, lifetimes and decay modes. The observations are then confronted with estimated experimental capabilities, including, importantly, the level of detail of the simulations these estimates are based upon.
I have mainly considered what can be expected from the LHC and HL-LHC, where it turns out that large swaths of SUSY parameter space will be quite hard to access. For e+e- colliders, on the other hand, the situation is simple: at such colliders, SUSY will be either discovered or excluded almost up to the kinematic limit.
The direct pair-production of the tau-lepton superpartner, the stau, is one of the most interesting channels in which to search for SUSY. First of all, the stau is with high probability the lightest of the scalar leptons. Secondly, the signature of stau pair-production signal events is one of the most difficult ones, yielding the 'worst' and thus most general scenario for the searches.
Future e+e- Higgs factories offer excellent facilities for SUSY searches. With respect to previous e+e- colliders, they increase the luminosity and centre-of-mass energy and improve the technologies, while, with respect to hadron colliders, they offer a cleaner environment, a known initial state and triggerless operation of the detectors.
In this contribution, the prospects for discovering stau-pair production at future e+e- colliders and the resulting detector requirements will be discussed.
For detector-level simulations, the study takes the ILD detector concept and ILC parameters at 500 GeV as an example. It includes all SM backgrounds as well as beam-induced backgrounds, as overlay-on-physics and, for the first time, overlay-only events, and considers the worst-case scenario for the stau mixing. It shows that, with the chosen accelerator and detector conditions, SUSY will be discovered if the NLSP mass is up to just a few GeV below the kinematic limit of the collider.
Based on these results, expectations for other centre-of-mass energies, luminosities, beam polarisations, beam backgrounds and detector conditions will be derived.
We investigate the discovery potential for long-lived particles produced in association with a top-antitop quark pair at the (High-Luminosity) LHC. Compared to inclusive searches for a displaced vertex, top-associated signals offer new trigger options and an extra handle to suppress background. We propose a search strategy for a displaced di-muon vertex decaying in the tracking chambers, calorimeter or the muon chambers, in addition to a reconstructed top-antitop pair. Such a signature is predicted in many models with new light scalars or pseudo-scalars, which generically couple more strongly to top quarks than to light quarks. For axion-like particles with masses above the di-muon threshold and below the $b\bar{b}$ threshold, we find that the (High-Luminosity) LHC can probe effective top-quark couplings as small as $c_{tt}/f_{a} = 0.03 (0.01)$ TeV$^{-1}$ and proper decay lengths as long as 10 (400) m, with data corresponding to an integrated luminosity of 150 fb$^{-1}$ (3 ab$^{-1}$).
In this talk I will present a summary of the phenomenology study of long-lived axion-like particles in $t\bar{t}$ events at the (High-Luminosity) LHC, and the on-going analysis to search for this signature in CMS.
Among the open questions of particle physics is the origin of neutrino masses. These masses can be explained by the "seesaw mechanism," which introduces Majorana neutrinos with masses on the TeV scale. This talk presents a search for such Majorana neutrinos produced in same-sign $WW$ scattering. The analysis uses 140 fb$^{-1}$ of $pp$ collision data collected between 2015 and 2018 by the ATLAS detector at the Large Hadron Collider. The analysis targets final states with exactly two same-sign muons and at least two hadronic jets with a large separation in rapidity. The main backgrounds are Standard Model same-sign $WW$ scattering and $WZ$ production. No significant excess over the Standard Model expectation is observed. The measurement results are interpreted in the phenomenological Type I seesaw model and the Weinberg operator model.
The LUXE experiment at DESY stands at the forefront of the investigation of strong-field quantum electrodynamics with high precision. The interaction between electrons or photons and a high-intensity laser generates new electrons, positrons, and photons. The phenomena under examination include non-linear Compton scattering. In this talk, I will explain how the photons produced in this process offer an avenue for exploring new physics through a beam-dump-type experiment.
We present a model-independent search for new particles decaying to top quark-antiquark pairs ($\text{t}\bar{\text{t}}$) using 138 $\text{fb}^{-1}$ of pp collision data at $\sqrt{s}=13$ TeV recorded with the CMS detector during LHC Run 2. The search targets both resonant and non-resonant signatures in the spectrum of the invariant mass $m_{\text{t}\bar{\text{t}}}$.
Focusing on lepton+jets final states, we use novel top-tagging techniques to identify the hadronic decay of highly Lorentz-boosted top quarks. We further employ a deep neural network for event classification. Reconstructed $m_{\text{t}\bar{\text{t}}}$-distributions are used to derive constraints on various physics models predicting new particles decaying to $\text{t}\bar{\text{t}}$, such as heavy resonances, Kaluza-Klein gluons, heavy Higgs bosons (including interference with the SM process), as well as non-resonant axion-like particles, extending the reach of earlier searches significantly.
Quantum entanglement is a fundamental prediction of quantum mechanics, and the experimental achievements with electrons and photons were recognised by the Nobel Prize in Physics 2022. At the LHC, quantum entanglement can be observed in top quarks, testing quantum mechanics at high energies. In this contribution, a sensitivity study for a possible measurement of quantum entanglement in top quark pair production in the lepton+jets final state is presented.
The angular separation between the decay products of the top quarks can act as an indicator of quantum entanglement when the two top quarks are produced near threshold. The two strongest spin analysers in this final state are the charged lepton and the down-type quark, which is accessed via $c$-tagging. The result is then compared to parton-level predictions using a calibration curve. The biggest challenge for this analysis, the parton-shower systematic uncertainty, obtained by comparing Powheg+Pythia 8 to Powheg+Herwig 7.13 predictions, is discussed.
The study is performed with ATLAS Monte Carlo simulations under Run 2 conditions.
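The angular-separation indicator mentioned above is commonly quantified through the entanglement marker $D = -3\langle\cos\varphi\rangle$, with $\varphi$ the angle between the two spin-analyser directions; $D < -1/3$ signals an entangled $t\bar{t}$ state near threshold. A minimal sketch with a toy sample (the chosen $D_\text{true}$ is invented, not a measured value):

```python
import numpy as np

rng = np.random.default_rng(1)

def entanglement_marker(cos_phi):
    """D = -3 * <cos(phi)>; D < -1/3 signals spin entanglement."""
    return -3.0 * np.mean(cos_phi)

# Toy events drawn from the normalised distribution p(c) = (1 - D_true*c)/2
# via accept-reject, for an entangled-like (hypothetical) D_true = -0.5:
D_true = -0.5
c = rng.uniform(-1.0, 1.0, 400_000)
u = rng.uniform(0.0, 1.0, 400_000)
# maximum of p(c) on [-1, 1] is (1 + |D_true|)/2 = 0.75
accepted = c[u < 0.5 * (1.0 - D_true * c) / 0.75]

D_est = entanglement_marker(accepted)   # recovers roughly -0.5
```

Since $\langle\cos\varphi\rangle = -D/3$ for this distribution, the estimator directly inverts the relation; at detector level the calibration curve mentioned above corrects for reconstruction effects.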
Precision top mass measurements at hadron colliders have been notoriously difficult. The fundamental challenge in the existing approaches lies in achieving simultaneously high top mass sensitivity and good theoretical control. Inspired by the use of standard candles in cosmology, we overcome this problem by showing that a single energy correlator-based observable can be constructed that reflects the characteristic angular scales associated with both the $W$-boson and top quark. This gives direct access to the dimensionless quantity $m_{t}/m_{W}$, from which $m_{t}$ can be extracted in a well-defined short-distance mass scheme as a function of the well-known $m_{W}$. A Monte-Carlo-based study is performed to demonstrate the properties of our observable and the statistical feasibility of its extraction from the Run 2 and 3 and High-Luminosity LHC data sets. The resulting $m_t$ has remarkably small uncertainties from hadronization effects and is insensitive to the underlying event and parton distribution functions. Our proposed observable provides a road map for a rich program to achieve a top mass determination at the LHC with record precision.
We present the first evidence for the production of a top quark in association with a W and a Z boson. The analysis employs proton-proton collision data collected in the years 2016-2018 at a center-of-mass energy of 13 TeV by the CMS Experiment, for a total integrated luminosity of 138 fb$^{-1}$.
We target the multilepton final state of the process, where we consider two charged leptons produced in the decay of the Z boson and a third lepton arising from the decay of either the top quark or the W boson.
The analysis defines various signal and control regions and makes use of several machine learning algorithms in order to achieve high sensitivity to the signal and discrimination of the background processes.
Finally, we measure the cross section for the production of the process to be 354 $\pm$ 54 (stat) $\pm$ 95 (syst) fb, corresponding to a statistical significance of 3.4 standard deviations.
The Higgs boson production in association with a top quark pair plays a key role for studying the Yukawa coupling between the Higgs boson and the top quark. The coupling can be determined by measuring the cross-section of the $t\bar{t}H$ production in various final states using the $140\,\text{fb}^{-1}$ ATLAS dataset at $\sqrt{s}=13\,\text{TeV}$. Multi-lepton final states are rare but pure since most backgrounds are significantly suppressed.
The non-resonant $t\bar{t}H\to4\ell$ process has a low production rate and receives contributions from Higgs decay modes such as $H\to WW^{*}$, $H\to\tau\tau$, and $H\to Z^{*}Z^{*}$. The dominant backgrounds arise from $t\bar{t}Z$, $ZZ$, and misidentified leptons from $t\bar{t}$ production. A multiclass dense neural network (DNN) is trained to separate signal events from these backgrounds and to define analysis regions. Additional fake-lepton regions are defined to estimate the normalisation of the most important fake contributions. Partially unblinded fit results for the signal strength are presented for the $4\ell$ channel, and the combined performance with other multi-lepton final states is also shown.