The European Physical Society Conference on High Energy Physics (EPS-HEP) is one of the major international conferences in the field. It has been held every second year since 1971 and is organized by the High Energy and Particle Physics Division of the European Physical Society. Previous conferences in this series were held in Hamburg (online), Ghent, Venice, Vienna, Stockholm, Grenoble, Krakow, Manchester, Lisbon, and Aachen.
The 2023 edition of the conference will be hosted jointly by DESY and Universität Hamburg. The conference will feature plenary, review and parallel sessions covering all major areas and developments in high energy and particle physics, astroparticle physics, neutrino physics and related fields.
We very much hope that this conference will bring scientists from our field together to discuss science and enjoy the excitement of an in-person meeting!
We encourage participation in person, but limited remote participation will also be possible.
Everyone who wishes to attend any of the sessions must register for the EPS-HEP2023 conference. Remote access to sessions will be possible using a key given to registered participants only. For remote participation the registration fee will be significantly reduced.
Participants in the conference are invited to submit abstracts for parallel-session talks and for posters. Please follow the instructions at Call for Abstracts to submit and manage your abstracts.
The proceedings of the EPS-HEP2023 conference will be published in PoS - Proceedings of Science, the open access online journal organised by SISSA, the International School for Advanced Studies based in Trieste.
Registration
  Opening: February 1, 2023
  End of early registration: May 30, 2023 (extended to June 26, 2023)
  End of late registration: August 1, 2023
Abstract submission
  Opening: March 8, 2023
  Closing: June 2, 2023
  Acceptance notification: June 20, 2023
Conveners:
Livia Conti (INFN)
Carlos Perez de los Heros (Uppsala University)
Martin Tluczykont (Universität Hamburg)
Gabrijela Zaharijas (UNG)
Contact: eps23-conveners-t01 @desy.de
NUSES is a new space mission project aiming to test innovative observational and technological approaches for the study of low-energy cosmic and gamma rays, high-energy astrophysical neutrinos, the Sun-Earth environment, space weather, and Magnetosphere-Ionosphere-Lithosphere Coupling (MILC). The satellite will host two experiments, named Terzina and Zirè. While Terzina will focus on space-based detection of extensive air showers induced by ultra-high-energy cosmic rays or neutrinos, Zirè will perform measurements of electrons, protons and light nuclei from a few MeV up to hundreds of MeV, also testing new tools for the detection of cosmic MeV photons. Monitoring of possible MILC signals will also be possible by extending the sensitivity to very low energy electrons with a dedicated Low Energy Module (LEM). Innovative technologies for space-based particle detectors, e.g. Silicon Photomultipliers (SiPMs) for the light readout system, will be adopted. In this work, a general overview of the scientific goals, the design activities, and the overall status of the NUSES mission will be presented and discussed.
Axionlike particles (ALPs) are predicted in many extensions of the Standard Model and are viable dark matter candidates. These particles could mix with photons in the presence of a magnetic field. Searching for the effects of ALP-photon mixing in gamma-ray observations of blazars has provided some of the strongest constraints on the ALP parameter space so far. For the first time, we perform a combined analysis of Fermi Large Area Telescope data on three bright flaring flat-spectrum radio quasars, with the blazar jets themselves as the dominant mixing region. We include a full treatment of photon-photon dispersion within the jet and account for the uncertainty in our B-field model by leaving the field strength free in the fit. Overall, we find no evidence for ALPs, but we are able to exclude photon-ALP couplings above $5\times10^{-12}\ \mathrm{GeV}^{-1}$ for ALP masses between 5 neV and 200 neV at 95% confidence. In this mass region, these are the strongest bounds on the photon-ALP coupling to date from gamma-ray observations.
The High Energy cosmic-Radiation Detection (HERD) facility, planned for launch in 2027, will be one of the scientific payloads on board the Chinese Space Station. HERD's primary scientific objectives cover several high-energy astrophysics topics, including the search for dark matter annihilation products, precise measurements of the cosmic electron (and positron) spectrum beyond 10 TeV, analysis of cosmic-ray spectra for various species up to the knee energy, and the monitoring and surveying of high-energy gamma rays. At the heart of HERD lies a 3-dimensional imaging calorimeter, surrounded by a fiber tracker, a plastic scintillator detector, and a silicon charge detector on five sides. To ensure calibration of TeV nuclei, a transition radiation detector is employed. Thanks to its design with five instrumented sides, HERD has an acceptance an order of magnitude greater than that of existing experiments. In this presentation, I will provide an overview of the recent progress made in the HERD project.
The IceCube Neutrino Observatory has detected neutrinos from various astrophysical sources with its 1 km$^3$ detector volume in the Antarctic ice. IceTop, the cosmic-ray detector on the surface of IceCube, consists of 81 pairs of ice-Cherenkov tanks. The rise in the measurement threshold due to accumulating snow inspired the next generation of South Pole detectors, comprising elevated scintillator panels and radio antennas controlled by a central DAQ system, referred to as the Surface Array Enhancement (SAE). The planned IceCube-Gen2 Surface Array is expected to be built on the same design. An initial study with the SAE prototype station has already been conducted. We briefly review the Enhancement as well as the deployment and calibration status of the upcoming stations of the planned 32-station array. The focus of this contribution is on the radio detection of extensive air showers. A preliminary proof of concept for the Xmax estimation with data from the 3 antennas of the prototype station was carried out. An extension of the method from previous analyses is also briefly discussed.
Baikal-GVD is a large neutrino telescope under construction in Lake Baikal. It is currently the largest neutrino telescope operating in the Northern Hemisphere. As of the 2023 winter expedition, the detector comprises a three-dimensional array of 3456 photosensitive units (optical modules). The design of the experiment allows data to be collected while the detector is still in its construction phase. In this contribution we review the design and the basic characteristics of the Baikal-GVD detector. Some preliminary results on diffuse neutrino flux measurements with the partially completed detector will be presented.
Conveners:
Summer Blot (DESY)
Pau Novella (IFIC)
Davide Sgalaberna (ETH)
Jessica Turner (Durham University)
Contact: eps23-conveners-t04 @desy.de
Borexino is a 280-ton liquid scintillator detector that took data from May 2007 to October 2021 at the Laboratori Nazionali del Gran Sasso in Italy. Thanks to the unprecedented radio-purity of the detector, real-time spectroscopic measurements of solar neutrinos from both the pp chain and the CNO fusion cycle of the Sun have been performed. Borexino also reported the first directional measurement of sub-MeV $^7$Be solar neutrinos using the Phase-I period (May 2007-May 2010) and a novel technique called Correlated and Integrated Directionality (CID). In this technique, the directional solar neutrino signal is discriminated from the isotropic background events by correlating the well-known position of the Sun with the direction of the early hits of each event, exploiting the sub-dominant Cherenkov photons emitted at early times. This angular distribution in data is fitted with the signal and background distributions from the Borexino Monte Carlo simulation to infer the number of solar neutrinos, after taking into account all the systematics. For the first time, we provide a CNO rate measurement without using an independent constraint on the $^{210}$Bi background rate, by exploiting the CID technique on the full Borexino detector live-time dataset. This talk will present the complete analysis strategy and the latest results on CNO solar neutrinos obtained using the CID technique in Borexino. In addition, we also present the most precise CNO measurement obtained by Borexino using a multivariate technique on the Phase-III dataset, as in the 2022 analysis, where the novel CID result is now applied as an additional constraint.
KATRIN is probing the effective electron anti-neutrino mass by a precise measurement of the tritium beta-decay spectrum near its kinematic endpoint. Based on the first two measurement campaigns a world-leading upper limit of 0.8 eV (90% CL) was placed. New operational conditions for an improved signal-to-background ratio, the steady reduction of systematic uncertainties and a substantial increase in statistics allow us to expand this reach. In this talk, I will present the status of the KATRIN experiment and provide an insight into the latest result.
The smallness of neutrino masses is one of the most intriguing puzzles in particle physics. One of the most natural ways to obtain suppressed masses is the construction of dimension-5 effective operators, the so-called Weinberg operators. In the presence of only the standard Higgs scalar doublet, these operators arise in the three usual seesaw models. Here we investigate the consequences, in this context, of adding new scalar Higgs multiplets. We take into account the possible UV completions of such models and the phenomenology due to the dimension-6 operators.
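For orientation (an order-of-magnitude estimate added here, not a number from the abstract): after electroweak symmetry breaking the dimension-5 Weinberg operator yields a neutrino mass suppressed by the heavy scale $\Lambda$, so an order-one coupling and $m_\nu \simeq 0.05$ eV point to the classic seesaw scale,

$$ m_\nu \simeq \frac{y^2 v^2}{\Lambda} \quad\Longrightarrow\quad \Lambda \simeq \frac{y^2 v^2}{m_\nu} \approx \frac{(174~\mathrm{GeV})^2}{0.05~\mathrm{eV}} \approx 6\times 10^{14}~\mathrm{GeV}, $$

with $v \approx 174$ GeV the Higgs vacuum expectation value and $y$ an effective Yukawa coupling.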
We discuss a TeV scale extension of the Standard Model in which a dark sector seeds neutrino mass generation radiatively within the linear seesaw mechanism. Since symmetry prevents tree-level contributions, tiny neutrino masses are generated at one-loop from spontaneous lepton number violation by the small vacuum expectation value of a Higgs triplet. The model can have sizeable rates for lepton flavour violating processes such as µ → eγ. We also comment on the implications for dark-matter and collider searches.
I will present some of the results obtained regarding the emergence of decoherence in neutrino oscillations. In our model all the particles, including the source and detector, are treated dynamically and evolved consistently with Quantum Field Theory; decoherence can emerge naturally given the time evolution of the initial state and the final state considered.
We have shown that some of the assumptions commonly used in the literature, such as the covariance of the wavepackets, are inconsistent. We found that a crucial ingredient for decoherence is the localization in space-time of the neutrino creation and detection: in Nature, such a measurement is usually carried out by environmental interactions; however, it could also be approximated by considering localized wavefunctions in the final state. On the other hand, if environmental interactions are not present (for example, if the decay happens in vacuum), the final positions of the daughter particles will not be measured, i.e. they will be described by plane waves instead: in this case the neutrino is not localized either, and decoherence does not arise.
A consequence of the time evolution is that a Gaussian wavepacket will gradually spread. I will show that such an effect could in principle affect decoherence; moreover, it would depend on the absolute mass scale of the neutrino, not on $\Delta m^2$, which could offer a possible way to probe this parameter by studying neutrino oscillations.
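As a rough numerical illustration of the last point (all values below are assumed for illustration, not taken from the talk), the longitudinal spread of a free relativistic wavepacket grows at a rate set by $\mathrm{d}v/\mathrm{d}p = m^2 c^6/E^3$, i.e. by the absolute mass rather than by $\Delta m^2$:

```python
import numpy as np

hbar = 1.055e-34          # J s
c    = 2.998e8            # m/s
eV   = 1.602e-19          # J

m    = 0.05 * eV / c**2   # assumed absolute neutrino mass, 0.05 eV/c^2 (kg)
E    = 1.0e6 * eV         # assumed neutrino energy, 1 MeV (J)
sig0 = 1.0e-12            # assumed initial wavepacket size (m)

sig_p = hbar / (2 * sig0)               # momentum spread of the initial Gaussian (kg m/s)
dv    = (m**2 * c**6 / E**3) * sig_p    # spread in group velocity: (dv/dp) * sigma_p
L     = 1.0e3                           # baseline (m)
t     = L / c                           # time of flight (ultra-relativistic)

sigma_L = np.sqrt(sig0**2 + (dv * t)**2)
print(f"wavepacket size after {L:.0f} m: {sigma_L:.2e} m (initial {sig0:.0e} m)")
```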
Conveners:
Laura Fabbietti (TU München)
Gunther Roland (MIT)
Jasmine Brewer (CERN)
Jeremi Niedziela (DESY)
Contact: eps23-conveners-t05 @desy.de
The physics of ultraperipheral ultrarelativistic heavy-ion collisions gives an excellent opportunity to study photon-photon interactions. Fast-moving charged particles (nuclei) are surrounded by an electromagnetic field that can be considered as a source of (almost real) photons. The photon flux scales as the square of the nuclear charge, so $^{208}$Pb has a considerable advantage over protons as far as the flux of photons is concerned.
Here we discuss possible future studies of photon-photon scattering using the planned ALICE 3 apparatus. ALICE 3 is planned as a next-generation heavy-ion detector for LHC Runs 5 and 6. The broad range of (pseudo)rapidities and lower cuts on transverse momenta make it necessary to consider not only the dominant box contributions but also other, not yet studied, subleading contributions, such as double-hadronic photon fluctuations, $t/u$-channel neutral pion exchange, or resonance excitation ($\gamma \gamma \to R$) and deexcitation ($R \to \gamma \gamma$). Here we include $R = \pi^0$, $\eta$, $\eta'$ contributions. The resonance contributions give intermediate photon transverse momenta; however, these contributions can be eliminated by imposing windows on the diphoton invariant mass. We study in detail the individual fermionic box contributions. The electron/positron boxes dominate at low diphoton invariant masses, $M_{\gamma \gamma} < 1$ GeV.
The $Pb Pb \to Pb Pb \gamma \gamma$ cross section is calculated within the equivalent photon approximation in impact parameter space. Several differential distributions will be presented and discussed. We predict a huge cross section for typical ALICE 3 cuts, a few orders of magnitude larger than for the current ATLAS or CMS experiments. We also consider the two-$\pi^0$ background, which can, in principle, be separated/eliminated in the new kinematical range of the ALICE 3 measurements by imposing dedicated cuts.
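A back-of-the-envelope sketch of the $Z^2$ flux argument mentioned above (a counting estimate only; it ignores nuclear form factors and the smaller maximum photon energy per nucleon available for ions):

```python
# Coherent photon flux of a nucleus scales as Z^2, so a two-photon (gamma-gamma)
# cross section in Pb-Pb gains roughly Z^4 relative to pp at the same photon energies.
Z_Pb = 82
print("single-photon flux gain vs. proton:", Z_Pb**2)   # 6724, ~ 6.7e3
print("two-photon (gamma-gamma) gain:     ", Z_Pb**4)   # 45212176, ~ 4.5e7
```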
Measurements of direct photons provide valuable information on the properties of the quark-gluon plasma because they are colour-neutral and created during all phases of the collision. Sources of photons include initial hard scatterings, Bremsstrahlung and the fragmentation process, jet-medium interactions, and radiation from the medium. Direct thermal photons, produced by the plasma, are sensitive to the collective flow at photon production time and an effective medium temperature. Furthermore, Bose-Einstein correlations can be used to study the space-time evolution of the medium created in heavy-ion collisions with Hanbury Brown and Twiss interferometry. Direct prompt photons produced in hadronic collisions have minimal event activity from the hard process, allowing the isolation method to suppress background photons. Isolated photon measurements in pp and p--Pb collisions can constrain NLO pQCD predictions. Hadrons correlated with isolated photons are a promising channel to study the energy loss in heavy-ion collisions and to constrain the $Q^{2}$ of the initial hard scattering, obtaining information on the amount of energy lost by the parton recoiling off the photon.
The ALICE experiment reconstructs photons both from conversions, using its excellent tracking capabilities, and directly in calorimeters. Combining these methods, ALICE can measure direct photons at mid-rapidity from a transverse momentum of 0.4 GeV/$c$. This talk presents ALICE measurements of direct-photon distributions using statistical (decay-photon subtraction, thermal photons) and isolation (prompt photons) methods in different collision systems and at different energies, as well as their correlations.
Photonuclear reactions are induced by the strong electromagnetic fields generated in ultrarelativistic heavy-ion collisions. These processes have been extensively studied in ultraperipheral collisions, in which the impact parameter is larger than twice the nuclear radius. In recent years, the observation of coherent J/$\psi$ photoproduction has been claimed in nucleus--nucleus (A--A) collisions with nuclear overlap, based on the measurement of an excess (with respect to hadroproduction expectations) in the very low transverse momentum ($p_{\rm T}$) J/$\psi$ yield. Such quarkonium measurements can help constrain the nuclear gluon distribution at low Bjorken-$x$ and high energy. In addition, they can shed light on the theory behind photon-induced reactions in A--A collisions with nuclear overlap, including possible interactions of the measured probes with the formed and fast expanding quark-gluon plasma. In order to confirm the photoproduction origin of the very low-$p_{\rm T}$ J/$\psi$ yield excess, the polarization measurement is a golden observable: the produced quarkonium is expected to keep the polarization of the incoming photon due to s-channel helicity conservation. ALICE can measure inclusive and exclusive quarkonium production down to zero transverse momentum, at forward rapidity ($2.5 < y < 4$) and at midrapidity ($|y| < 0.9$). In this contribution, we will report on the new preliminary measurement of the $y$-differential cross section and the first polarization analysis at the LHC of coherently photoproduced J/$\psi$ in peripheral Pb--Pb collisions. Both measurements are conducted at forward rapidity in the dimuon decay channel. These results will be discussed together with the recent results on coherent J/$\psi$ photoproduction as a function of centrality at both mid and forward rapidities. Comparisons with models will be shown when available.
One of the main challenges in nuclear physics is studying the structure of the atomic nucleus. Recently, it has been shown that high-energy heavy-ion collisions at RHIC and the LHC can complement low-energy experiments. Heavy-ion collisions provide a snapshot of the nuclear distribution at the time of collisions, offering a unique and precise probe of the nuclear structure.
This talk presents our latest developments in nuclear structure studies using the novel multi-particle correlations technique at relativistic energies. Specifically, we demonstrate how to precisely constrain the quadrupole deformation $\beta_2$ and the triaxial structure of $^{129}$Xe and showcase new opportunities to observe the $\alpha$-clustering structure of $^{16}$O using A Multi-Phase Transport model (AMPT). We propose a new multi-particle correlation algorithm that allows us to study genuine multi-particle correlations of the anisotropic flow, $v_{n}$, and the mean transverse momentum, $[p_{\rm T}]$. These new cumulants not only show better sensitivity to the nuclear structure than existing standard observables such as anisotropic flow and/or event-by-event fluctuations of $[p_{\rm T}]$, but they also help to pin down the uncertainty in the width of the nucleon and the neutron skin of $^{208}$Pb at the LHC. This approach can help resolve the current discrepancy between state-of-the-art ab-initio nuclear theory predictions and the recent determination from parity-violating asymmetries in polarised electron scattering by PREX. These latest developments have vast potential for heavy-ion data taking at the LHC. They will be a crucial component in building the bridge between low-energy nuclear physics at the MeV energy scale and high-energy heavy-ion physics at the TeV energy scale.
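For context, a minimal sketch of the simplest member of the cumulant family referred to above, the two-particle Q-cumulant estimate of $v_n$; the analysis itself relies on higher-order and mixed $v_n$-$[p_{\rm T}]$ correlators, which are not reproduced here, and the event sample below is a toy assumption:

```python
import numpy as np

def two_particle_correlator(phis, n=2):
    """Single-event 2-particle correlator <2> from the flow vector Q_n (self-correlations removed)."""
    M  = len(phis)
    Qn = np.sum(np.exp(1j * n * phis))            # flow vector Q_n
    return (np.abs(Qn)**2 - M) / (M * (M - 1))    # <2>

# toy events with an input v2 = 0.1 (illustrative only)
rng, corrs = np.random.default_rng(1), []
for _ in range(2000):
    M   = 500
    phi = rng.uniform(0, 2*np.pi, 4*M)
    # accept-reject sampling of dN/dphi ~ 1 + 2*v2*cos(2*phi)
    keep = rng.uniform(0, 1.2, 4*M) < 1 + 2*0.1*np.cos(2*phi)
    corrs.append(two_particle_correlator(phi[keep][:M]))

c2 = np.mean(corrs)                  # c_2{2}, event-averaged cumulant
print("v2{2} =", np.sqrt(c2))        # comes out close to the input 0.1
```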
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06 @desy.de
The elastic scattering of protons at 13 TeV is measured in a range of the protons' transverse momenta allowing access to the Coulomb-Nuclear-Interference region. The data were collected with dedicated special LHC optics with $\beta^* = 2.5$ km. The total cross section as well as the $\rho$-parameter, the ratio of the real to imaginary part of the forward elastic scattering amplitude, are measured and compared to various models and to results from other experiments. The measurement of exclusive production of pion pairs at the LHC using 7 TeV data is also presented. This represents the first use of proton tagging to measure an exclusive hadronic final state at the LHC. In addition, the analysis of the momentum difference between charged hadrons in pp, p-lead, and lead-lead collisions at various energies is performed in order to study the dynamics of hadron formation. The spectra of correlated hadron chains are explored and compared to predictions based on the quantized fragmentation of a three-dimensional QCD helix string. If ready, the measurement of charged-particle distributions using LHC data collected at a centre-of-mass energy of 13.6 TeV will also be shown.
Exclusive and diffractive physics measurements are important for a better understanding of the non-perturbative regime of QCD. Recent results from the CMS and TOTEM experiments using pp collisions at a center-of-mass energy of 13 TeV are presented in this talk.
Lepton-hadron collision studies at the Electron-Ion Collider (EIC) in the United States will provide unprecedented, high-accuracy insights into the internal structure of the atomic nucleus in the coming years. Good control of radiative corrections is necessary for the EIC to be fully exploited and for valuable information to be extracted from the various measurements. However, there is a significant gap to fill: there are no automated simulation tools relevant for the EIC that can incorporate next-to-leading-order (NLO) QCD radiative corrections.
This talk presents our implementation of photoproduction at fixed order in MadGraph5_aMC@NLO, a framework widely used for (N)LO calculations at the Large Hadron Collider (LHC). It applies to electron-hadron collisions, in which the quasi-real photon is emitted from the electron, as well as to proton-nucleus and nucleus-nucleus collisions. In addition, I will also present another extension of the MadGraph5_aMC@NLO framework towards asymmetric collisions, in order to provide predictions e.g. for proton-nucleus collisions.
We evaluate the cross section for diffractive bremsstrahlung of a single photon in the $pp \to pp \gamma$ reaction at high energies and at forward photon rapidities. Several differential distributions, for instance in ${\rm y}$, $k_{\perp}$ and $\omega$, the rapidity, the absolute value of the transverse momentum, and the energy of the photon, respectively, are presented. We compare the results of our ``exact'' model with two versions of soft-photon approximations, SPA1 and SPA2, in which the radiative amplitudes contain only the leading terms proportional to $\omega^{-1}$. SPA1, which does not have the correct energy-momentum relations, performs surprisingly well in the kinematic range considered. We also discuss azimuthal correlations between the outgoing particles. The azimuthal distributions are not isotropic and differ between our exact model and the SPAs. We also discuss the possibility of a measurement of two-photon bremsstrahlung in the $pp \to pp \gamma \gamma$ reaction. In our calculations we impose a cut on the relative energy loss ($0.02 < \xi_{i} < 0.1$, $i = 1,2$) of the protons where measurements by the ATLAS Forward Proton (AFP) detectors are possible. The AFP requirement for both diffractively scattered protons and one forward photon (measured at LHCf) reduces the cross section for $pp \to pp \gamma$ almost to zero. On the other hand, much less cross-section reduction occurs for $pp \to pp \gamma \gamma$ when the photons are emitted on opposite sides of the ATLAS interaction point and can be measured by the two different arms of LHCf. For the SPA1 ansatz we find $\sigma(pp \to pp \gamma \gamma) \simeq 0.03$~nb at $\sqrt{s} = 13$ TeV with the cuts $0.02 < \xi_{i} < 0.1$, $8.5 < {\rm y}_{3} < 9$, $-9 < {\rm y}_{4}< -8.5$. Our predictions can be verified by combined ATLAS and LHCf measurements. We also discuss the role of the $pp \to pp \pi^0$ background for single-photon production.
References: P. Lebiedowicz, O. Nachtmann, A. Szczurek, arXiv:2303.13979 [hep-ph].
We discuss the production of far-forward $D$ mesons/antimesons and neutrinos/antineutrinos from their semileptonic decays in proton-proton collisions at LHC energies. We include the gluon-gluon fusion $gg \to c\bar{c}$, the intrinsic charm (IC) $gc \to gc$, as well as the recombination $gq \to Dc$ partonic mechanisms. The calculations are performed within the $k_T$-factorization approach and the hybrid model using different unintegrated parton distribution functions (uPDFs) for gluons from the literature, as well as within the collinear factorization approach. We compare our results to the LHCb data for forward $D^{0}$-meson production at $\sqrt{s} = 13$ TeV for different rapidity bins in the interval $2 < y < 4.5$. The IC and recombination mechanisms give negligible contributions at the LHCb kinematics. Both mechanisms become crucial at larger rapidities, where they dominate over the standard charm production mechanisms; so far no experiments probe this region at high energies. We present uncertainty bands for both mechanisms. Somewhat reduced uncertainty bands will be available soon from fixed-target charm meson production experiments in $pA$ collisions. We also present energy distributions of forward electron, muon and tau neutrinos to be measured at the LHC by the currently operating FASER$\nu$ experiment, as well as by future experiments like FASER$\nu$2 or FLArE, proposed very recently by the Forward Physics Facility project.
Contributions of different mechanisms are shown separately. For all kinds of neutrinos (electron, muon, tau) the subleading contributions, i.e. the IC and/or the recombination, dominate over the light-meson (pion, kaon) contribution and the standard charm production contribution driven by gluon-gluon fusion for neutrino energies $E_{\nu} > 300$ GeV. For electron and muon neutrinos both mechanisms lead to similar production rates and their separation seems rather impossible. On the other hand, for the $\nu_{\tau} + {\bar \nu}_{\tau}$ neutrino flux the recombination contribution is further reduced, making a measurement of the IC contribution very attractive.
[1] R. Maciuła and A. Szczurek, Far-forward production of charm mesons and neutrinos at forward physics facilities at the LHC and the intrinsic charm in the proton, Phys. Rev. D 107, no.3, 034002 (2023).
Conveners:
Thomas Blake (University of Warwick)
Marzia Bordone (CERN)
Thibaud Humair (MPP)
Contact: eps23-conveners-t08 @desy.de
The entanglement in the neutral kaon pairs produced at the DA$\Phi$NE $\phi$-factory is a unique tool to test discrete symmetries. The exchange of in and out states required for a genuine test involving an antiunitary transformation implied by time-reversal is implemented exploiting the entanglement of ${K}^0\bar{K}{}^0$ pairs produced at a $\phi$-factory. We will present the final result of the first direct test of CPT and T symmetries in neutral kaon transitions performed at KLOE.
Novel quantum phenomena have recently been discussed [1] in association with a peculiar time correlation between entangled neutral kaons produced at a φ-factory: the past state of the first decayed kaon, when it was still entangled before its decay, is post-tagged by the result and the time of the future observation of the other kaon decay. This surprising “from future to past” effect is fully observable. Preliminary results obtained in the analysis of data collected by the KLOE experiment at the DAΦNE collider are presented, showing experimental evidence for this new effect.
[1] J. Bernabeu, A. Di Domenico, Phys. Rev. D 105 (11) (2022) 116004.
The NA62 experiment at CERN collected the world's largest dataset of charged kaon decays in 2016-2018, leading to the first measurement of the branching ratio of the ultra-rare $K^+ \rightarrow \pi^+ \nu \bar\nu$ decay, based on 20 candidates. In this talk the NA62 experiment reports recent results from analyses of $K^+ \rightarrow \pi^0 e^+ \nu \gamma$, $K^+ \rightarrow \pi^+ \mu^+ \mu^-$ and $K^+\rightarrow \pi^+ \gamma \gamma$ decays, using a data sample recorded in 2017--2018. The radiative kaon decay $K^+ \rightarrow \pi^0 e^+ \nu \gamma$ (Ke3g) is studied with a data sample of O(100k) Ke3g candidates with sub-percent background contaminations. Results with the most precise measurements of the Ke3g branching ratios and T-asymmetry are presented. The $K^+ \rightarrow \pi^+ \mu^+ \mu^-$ sample comprises about 27k signal events with negligible background contamination, and the presented analysis results include the most precise determination of the branching ratio and the form factor. The $K^+ \rightarrow \pi^+ \gamma \gamma$ sample contains about 4k signal events with $10\%$ background contamination, and the analysis improves the precision of the branching ratio measurement by a factor of 3 with respect to the previous measurements.
Rare kaon decays are among the most sensitive probes of both heavy and light new physics beyond the Standard Model description, thanks to the high precision of the Standard Model predictions, the availability of very large datasets, and the relatively simple decay topologies. The NA62 experiment at CERN is a multi-purpose high-intensity kaon decay experiment, and carries out a broad rare-decay and hidden-sector physics programme. Recent NA62 results on searches for violation of lepton flavour and lepton number in kaon decays, and searches for production of hidden-sector mediators in kaon decays, are presented. Future prospects of these searches are discussed.
We give a general prescription for the transformation between four-fermion effective operator bases via corrected Fierz identities at the one-loop level. The procedure has the major advantage of only relating physical operators between bases, eliminating the necessity for Fierz-evanescent operators, thereby reducing the number of operators which enter in higher-order EFT computations. Additionally, when performing basis changes using loop-corrected Fierz identities, the dependence on renormalization scheme factorizes between the two bases, implying that such transformations simultaneously change renormalization scheme along with the operator basis. We illustrate the utility of loop-corrected Fierz identities in flavor physics through several examples of BSM phenomenology.
Conveners:
Ilaria Brivio (Universita di Bologna)
Karsten Köneke (Universität Freiburg)
Matthias Schröder (Universität Hamburg)
Contact: eps23-conveners-t09 @desy.de
The mass of the Higgs boson is a fundamental parameter of the Standard Model which can be measured most precisely in its decays to four leptons and two photons, which benefit from excellent mass resolution. The total width of the Higgs boson is another important parameter for Higgs sector phenomenology. It is too small to be measured directly at the LHC, but indirect measurements can be performed using the off-shell Higgs boson production process in the ZZ and WW final states, as well as through interference effects in the diphoton decay channel.
This talk presents the most recent mass and width measurements by the ATLAS experiment using the full Run 2 dataset of pp collisions at the LHC collected at 13 TeV.
The Higgs boson mass, and its decay width, are fundamental properties of this particle. Here, we discuss the latest measurements of these properties, as well as their future prospects, with the CMS experiment.
While the Standard Model predicts that the Higgs boson is a CP-even scalar, CP-odd contributions to the Higgs boson interactions with vector bosons and fermions are presently not strongly constrained. A variety of Higgs boson production processes and decays can be used to study the CP nature of the Higgs boson interactions. This talk presents the most recent results of such analyses by the ATLAS experiment, based on pp collision data collected at 13 TeV.
We discuss recent results of Higgs boson measurements with the CMS experiment, where the Higgs boson has high transverse momentum and its decay products are merged. Several production modes and final state channels are presented.
We report progress on the inclusive hadroproduction of a Higgs+jet system at LHC and FCC collision energies. Kinematic sectors explored fall into the so-called semi-hard regime, where both the fixed-order and the high-energy dynamics come into play. We propose a novel version of a matching procedure aimed at combining NLO fixed-order computations, as obtained from POWHEG, with the NLL resummation of energy logarithms via JETHAD. We present preliminary analyses on assessing the weight of systematic uncertainties, such as the ones coming from: finite top- and bottom-quark masses, collinear parton densities, energy-scale variations. According to our knowledge, POWHEG+JETHAD represents a first and pioneering implementation of a matching in the context of the high-energy resummation at NLL and for rapidity-separated two-particle final states.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10 @desy.de
The Mu2e experiment, currently under construction at Fermilab, will search for neutrinoless $\mu \to e$ conversion in the field of an aluminum atom. A clear signature of this charged-lepton-flavor-violating two-body process is given by the monoenergetic conversion electron of 104.97 MeV produced in the final state.
An 8 GeV/c pulsed proton beam interacting with a tungsten target will produce pions that decay into muons; a set of superconducting magnets will guide the negative muon beam to a segmented aluminum target, where the stopped muons may eventually convert to electrons; a set of detectors will be used both to identify conversion electrons and to reject beam and cosmic backgrounds.
The experiment will need 3-5 years of data-taking to achieve a factor of $10^4$ improvement on the current best limit on the conversion rate.
After an introduction to the physics of Mu2e, we will report on the status of the different components of the experimental apparatus. The updated estimate of the experiment’s sensitivity and discovery potential will be presented.
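A quick cross-check of the 104.97 MeV figure quoted above, using leading-order textbook ingredients (the binding-energy and recoil estimates below are approximations added here, not Mu2e numbers):

```python
# E_ce ~ m_mu - B(muonic Al, 1s) - E_recoil
m_mu  = 105.658                 # MeV, muon mass
M_Al  = 26.98 * 931.494         # MeV, approximate 27Al mass
alpha = 1 / 137.036
Z     = 13

B_1s     = 0.5 * m_mu * (Z * alpha)**2   # point-nucleus, non-relativistic 1s binding, ~0.47 MeV
E_approx = m_mu - B_1s
E_recoil = E_approx**2 / (2 * M_Al)      # nuclear recoil, ~0.22 MeV
print(E_approx - E_recoil)               # ~104.96 MeV, close to the quoted 104.97 MeV
```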
A resonant structure has been observed at ATOMKI in the invariant mass of electron-positron pairs produced after excitation of nuclei such as $^8$Be and $^4$He by means of proton beams. Such a resonant structure can be interpreted as the production of a hypothetical particle (X17) with a mass of around 17 MeV.
The MEG-II experiment at the Paul Scherrer Institut, whose primary physics goal is the search for the charged-lepton-flavour-violating process $\mu$ $\rightarrow$ $ e \gamma$, is in a position to confirm and study this observation. MEG-II employs a proton source able to accelerate protons up to a kinetic energy of about 1 MeV. These protons are absorbed in a thin target, where they excite nuclear transitions that produce photons used to calibrate the xenon calorimeter of the MEG-II detector.
By using a dedicated 2 $\mu$m thick target containing lithium atoms, the $^7$Li(p,e$^+$e$^{-}$)$^8$Be process is being studied with a magnetic spectrometer that includes a cylindrical drift chamber and a system of fast scintillators. The aim is to reach a better invariant-mass resolution than previous experiments and to study the production of the X17 with a larger acceptance, and therefore to shed more light on the nature of this observation.
After a 2022 engineering run, a month-long data-taking campaign was conducted in February 2023. We report our first results on the search for and study of this hypothetical X17 particle.
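For illustration, the observable behind the X17 interpretation is the $e^+e^-$ invariant mass reconstructed from the measured lepton energies and opening angle; the kinematic values in the sketch below are purely illustrative, not ATOMKI or MEG-II data:

```python
import numpy as np

def ee_invariant_mass(E1, E2, theta, m_e=0.511):
    """Invariant mass (MeV) of an e+e- pair with lab energies E1, E2 (MeV) and opening angle theta (rad)."""
    p1 = np.sqrt(E1**2 - m_e**2)
    p2 = np.sqrt(E2**2 - m_e**2)
    return np.sqrt(2 * m_e**2 + 2 * (E1 * E2 - p1 * p2 * np.cos(theta)))

# a symmetric pair sharing ~18 MeV at a large opening angle gives a mass near 17 MeV
print(ee_invariant_mass(9.0, 9.0, np.radians(142)))   # ~17 MeV
```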
Charged lepton flavor violation (CLFV) is an unambiguous signature of new physics. In the Belle experiment, we study various CLFV signatures, including those with $\tau$ leptons in the final state. In this presentation, we report searches for CLFV in $\Upsilon(1S) \to \ell^{\pm}\ell^{\prime\mp}$ and $\chi_{bJ}(1P)\to \ell^{\pm}\ell^{\prime\mp}$ decays, where $\ell,\ell^\prime = e, \mu, \tau$, using $25~{\rm fb}^{-1}$ of $\Upsilon(2S)$ data. Using a sample of 772 million $B\bar{B}$ pairs, we search for CLFV in $B^+\to\tau^{\pm}\ell^{\mp}$ decays, where $\ell=e,\mu$. We also search for CLFV in $B^0_s\to\ell^{\pm}\tau^{\mp}$ decays, where $\ell=e,\mu$, using $121~{\rm fb}^{-1}$ of $\Upsilon(5S)$ data.
Electric dipole moments (EDMs) of elementary particles are powerful probes of CP-violating New Physics (NP). In the context of a general two-Higgs-doublet model (g2HDM), which due to the lack of any ad hoc discrete symmetry possesses complex extra Yukawa couplings that can help explain the baryon asymmetry of the Universe (BAU), we discuss the NP contributions to the EDMs of leptons and quarks. For leptons, while the electron EDM, given recent experimental improvements, continues to be the most sensitive probe of the extra Yukawa couplings, we show that there exist NP scenarios in which the muon EDM can be quite large and within the sensitivity of upcoming J-PARC and PSI experiments. We further present results for the (chromo-)EDMs of various quarks. In particular, we show that the neutron EDM, together with the electron EDM, can provide a crucial bound on the top-Yukawa-driven BAU explanation in the g2HDM. We also show results for the top-quark chromo-EDM, in light of a recent analysis from CMS.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10 @desy.de
We investigate the discovery potential for long-lived particles produced in association with a top-antitop quark pair at the (High-Luminosity) LHC. Compared to inclusive searches for a displaced vertex, top-associated signals offer new trigger options and an extra handle to suppress background. We propose a search strategy for a displaced di-muon vertex decaying in the tracking chambers, calorimeter or the muon chambers, in addition to a reconstructed top-antitop pair. Such a signature is predicted in many models with new light scalars or pseudo-scalars, which generically couple more strongly to top quarks than to light quarks. For axion-like particles with masses above the di-muon threshold and below the $b\bar{b}$ threshold, we find that the (High-Luminosity) LHC can probe effective top-quark couplings as small as $c_{tt}/f_a = 0.03~(0.01)~$TeV$^{-1}$ and proper decay lengths as long as $10~(400)$ m, with data corresponding to an integrated luminosity of 150 fb$^{-1}$ (3 ab$^{-1}$). In this talk I will present a summary of the analysis, including signal and background kinematics, the event selection, and predictions for LHC Run 2 and High-Luminosity LHC.
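As a generic illustration of how the proper decay lengths quoted above translate into a probability of decaying inside a detector volume (the boost and radii below are assumed round numbers, not the values used in the analysis):

```python
import numpy as np

def decay_probability(ctau, boost, r_in, r_out):
    """Probability that a particle with proper decay length ctau (m) and boost beta*gamma
    decays between radii r_in and r_out (m) from its production point."""
    lab_length = boost * ctau
    return np.exp(-r_in / lab_length) - np.exp(-r_out / lab_length)

# e.g. an ALP with a 10 m proper decay length and beta*gamma ~ 5, decaying between 1 m and 7 m
print(decay_probability(ctau=10.0, boost=5.0, r_in=1.0, r_out=7.0))   # ~0.11
```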
Some say SUSY is dead, because the LHC has not discovered it yet. But is this really true? It turns out that the story is more subtle. SUSY can be 'just around the corner', even if no signs of it have been found, and a closer look is needed to quantify the impact of LHC limits and their implications for future colliders. In this contribution, a study of prospects for SUSY based on scanning the relevant parameter space of (weak-scale) SUSY parameters is presented.

I concentrate on the properties most relevant for evaluating the experimental prospects: mass differences, lifetimes and decay modes. The observations are then confronted with estimated experimental capabilities, including - importantly - the detail of the simulation these estimates are based upon. I have mainly considered what can be expected from the LHC and HL-LHC, where it turns out that large swaths of SUSY parameter space will be quite hard to access. For e+e- colliders, on the other hand, the situation is simple: at such colliders, SUSY will be either discovered or excluded almost up to the kinematic limit.
Supersymmetric models with low electroweak fine-tuning are more prevalent on the string landscape than fine-tuned models. We assume a fertile patch of landscape vacua containing the minimal supersymmetric standard model (MSSM) as the low-energy EFT. Such models are characterized by light higgsinos in the mass range of a few hundred GeV, whilst top squarks lie in the 1-2.5 TeV range. Other sparticles are generally beyond current LHC reach. We evaluate the prospects for top-squark searches in such natural SUSY models at the HL-LHC.
The existence of the magnetic monopole has eluded physicists for centuries. The NOvA Far Detector (FD), used for neutrino oscillation searches, also has the ability to identify magnetic monopoles. With a surface area of 4,100 m$^2$ and a location near the Earth's surface, the 14 kt FD provides us with the unique opportunity to be sensitive to potential low-mass monopoles unable to penetrate underground experiments. We have designed a novel data-driven triggering scheme that continuously searches the FD's live data for monopole-like patterns. At the offline level, the largest challenge in reconstructing monopoles is to reduce the 148,000 Hz speed-of-light cosmic-ray background. In the absence of any signal events in a 95-day exposure of the FD, we set a limit on the monopole flux of $2 \times 10^{-14} \mathrm{cm}^{-2} \mathrm{s}^{-1} \mathrm{sr}^{-1}$ at 90% C.L. for monopole speeds $6 \times 10^{-4} < \beta < 5 \times 10^{-3}$ and masses greater than $5 \times 10^8$ GeV. In this talk, I will review the current monopole results and discuss the sensitivities of future searches using more than 8 years of collected FD data.
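An idealized order-of-magnitude check of the quoted flux limit for zero observed events (assuming full $2\pi$ acceptance and 100% efficiency, which the real analysis does not; the published limit is correspondingly higher):

```python
import numpy as np

N_up  = 2.44            # 90% C.L. (Feldman-Cousins) upper limit on the count for 0 observed events
area  = 4100 * 1e4      # detector surface area in cm^2 (4,100 m^2)
omega = 2 * np.pi       # assumed solid angle, sr
T     = 95 * 86400      # 95-day exposure in seconds

flux_limit = N_up / (area * omega * T)
print(f"{flux_limit:.1e} cm^-2 s^-1 sr^-1")   # ~1e-15, the right ballpark below the published 2e-14
```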
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12 @desy.de
The Pierre Auger Observatory was built to study ultra-high-energy cosmic rays. Its hybrid design allows one to observe the main features of extensive air showers with unprecedented precision. Its discoveries, however, have opened new questions about the nature of cosmic rays. One of the most intriguing is the discrepancy between the observed number of muons and the value expected from the most up-to-date hadronic interaction models. Therefore, the design of AugerPrime, the upgrade of the Pierre Auger Observatory, includes the installation of a new detection system, the Underground Muon Detector (UMD), to perform a direct measurement of the number and temporal distribution of muons in extensive air showers. This presentation will give an overview of the main characteristics of the Underground Muon Detector: the final design and deployment status, as well as the calibration and reconstruction processes. Furthermore, the first results obtained during the engineering array phase will be presented, showing the contribution of the UMD to solving the still open questions in cosmic-ray physics.
We present an estimation of the noise induced by scattered light inside the main arms of the Einstein Telescope (ET) gravitational wave detector. Both ET configurations for high- and low-frequency interferometers are considered. As it is already the case in the existing experiments, like LIGO and Virgo, optically coated baffles are used to mitigate and suppress the noise inside the vacuum tubes. We propose baffle layouts for ET and compute the remaining scattered light noise contribution to ET sensitivity. Virgo has introduced the novel concept of instrumented baffles, with the aim to implement active monitoring of the stray light distribution close to the main mirrors. We present the technology and the comparison of the data with simulations, and show their potential to monitor the performance of the mirrors, the presence of defects and point absorbers in the mirror substrates, and to assist in the pre-alignment of the arms.
The Any Light Particle Search II (ALPS II) is a Light-Shining-through-a-Wall experiment operating at DESY, Hamburg. Its goal is to probe the existence of Axions and Axion Like Particles (ALPs), possible candidates for dark matter. In the ALPS II region of interest, a rate of photons reconverting from Axions/ALPs on the order of $10^{-5}$ cps is expected. A first science run at lower sensitivity based on a heterodyne detection method was successfully started in May 2023. The design sensitivity is expected to be reached in 2024. A complementary science run is foreseen with a single photon detection scheme. This requires a sensor capable of measuring low-energy photons (1.165 eV) with high efficiency and a low dark count rate. We investigate a tungsten Transition Edge Sensor (TES) system as a photon-counting detector that promises to meet these requirements. This detector exploits the drastic change in its resistance caused by the absorption of a single photon when operated in its superconducting transition region at millikelvin temperatures. In order to achieve the required sensitivity, the implementation of the TES into the ALPS II experiment needs to be carefully optimized. In this work, we present the progress on measurements for the characterization of our system and data analysis for background reduction. Additionally, an overview of ongoing setup simulations will be given, which are an essential step toward a comprehensive understanding of our system.
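As a small cross-check of the quoted photon energy, assuming the 1064 nm infrared laser light used by ALPS II (the wavelength is our assumption here):

```python
h_c        = 1239.841   # eV*nm, hc conversion constant
wavelength = 1064.0     # nm, assumed ALPS II laser wavelength
print(h_c / wavelength) # ~1.165 eV, matching the photon energy quoted above
```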
Neutron spectroscopy is an invaluable tool for many scientific and industrial applications, including searches for dark matter. In deep-underground dark matter experiments, neutron-induced background, produced by cosmic-ray muons and natural radioactivity, may mimic a signal. However, neutron detection techniques are complex and, thus, measurements remain elusive. The use of $^3$He-based detectors - the most widely used technique to date - is not a viable solution, since $^3$He is scarce and expensive.
A promising alternative for fast-neutron spectroscopy is the use of a nitrogen-filled Spherical Proportional Counter. The neutron can be detected and its energy measured through the $^{14}$N(n,$\alpha$)$^{11}$B and $^{14}$N(n,p)$^{14}$C reactions, which for fast neutrons have cross sections comparable to the $^3$He(n,p)$^3$H reaction. Furthermore, the use of a light element, such as nitrogen, keeps the $\gamma$-ray efficiency low and enhances the signal-to-background ratio in mixed radiation environments. This constitutes a safe, inexpensive, effective and reliable alternative.
The latest developments in spherical proportional counter instrumentation, such as resistive multi-anode sensors for high-gain operation with high-charge collection efficiency and gas purifiers that minimize gas contaminants to negligible levels, which enable neutron detection with increased target mass, and thus higher efficiency, are presented. Measurements for fast and thermalised neutrons from an Am-Be source and from the University of Birmingham MC40 cyclotron are shown, and compared with simulations.
LaBr3:Ce crystals have been introduced for radiation imaging in medical physics, with photomultiplier or single-SiPM readout. An R&D programme was pursued using 1” LaBr3:Ce crystals to realize compact large-area detectors with SiPM-array readout, aiming at high light yields, good energy resolution, good detector linearity and fast time response for low-energy X-rays. A natural application was found in the FAMU project at the RIKEN-RAL muon facility, which aims at a precise measurement of the proton Zemach radius to solve the so-called ”proton radius puzzle” triggered by the recent measurement of the proton charge radius at PSI. The goal is the detection of characteristic X-rays around 130 keV. The drawbacks of these detectors in practical use are due both to the SiPMs' gain drift with temperature and to a worse timing compared to a conventional readout with photomultipliers (PMTs). The gain drift with temperature has been studied in the laboratory, inside a climatic chamber, with different SiPM arrays, including the new Hamamatsu S14161 MPPC array with enhanced sensitivity in the UV region for PET. Corrections for this effect have been studied, and an effective correction was found by developing a custom 8-channel NIM module based on CAEN A7585D chips, with temperature feedback. The effect was reduced by a factor of five in the temperature range 10-40 °C.
The poor timing characteristics of the detectors (especially the fall time), due to the large capacitance of the SiPM arrays used, were also studied and different solutions were implemented. With standard parallel ganging, typical rise times (fall times) of the order of 50 (300) ns are obtained. Long fall times are a problem in experiments such as FAMU, where a ”prompt” component must be separated from a ”delayed” one in the X-ray signal to be detected. A dedicated R&D effort was pursued to settle this problem, starting from the hybrid ganging of SiPM cells, to the development of a suitable zero-pole circuit with parallel ganging using an increased overvoltage, and finally to the development of ad-hoc electronics (1-4) to split the 1” SiPM array into 4 quadrants, thus reducing the detectors' capacitance. The aim was to improve the timing characteristics while keeping a good FWHM energy resolution. Reductions in fall time (rise time) of up to a factor 2-3 were obtained with no deterioration of the energy resolution. A FWHM energy resolution better than 3% (8%) at the $^{137}$Cs ($^{57}$Co) peak was obtained. These results compare well with the best results obtained with a PMT readout.
Conveners:
Alessia Bruni (INFN Bologna)
Marie-Lena Dieckmann (Universität Hamburg)
Gwenhaël Wilberts Dewasseige (UC Louvain)
Contact: eps23-conveners-t14@desy.de
Communicating science through mobile smartphone and tablet applications is one of the most efficient ways to reach a general public of diverse backgrounds and ages. The Higgsy project was created to celebrate the 10th anniversary of the discovery of the Higgs boson at CERN in 2022. This project introduces a mobile game to search for Higgs boson production in a generic particle physics detector. The MatterBricks mobile game, an augmented-reality project to learn about elementary particles, was created for a major national event in Belgium held in 2023. The talk will cover the main features of the two mobile applications and will give further prospects for reaching the general public through the mobile application development process.
This talk describes an outreach exposition centered around a replica of the Alpha Magnetic Spectrometer Payload Operation Control Room (AMS POCC) as a means to help people comprehend the continuous monitoring and control of space mission payloads by various control rooms on Earth. The exposition's added value stems from the AMS collaboration's monitoring software development, enabling individuals to access AMS telemetry data. This innovation emerged during the pandemic, granting AMS collaborators the ability to participate remotely in the day-to-day operations of the experiment due to restrictions on physical access to the CERN site, where the AMS POCC is located. The replica of the POCC, along with accompanying posters and videos, serves as an effective starting point for communicating the significance of fundamental research in the areas of space radiation and cosmic rays.
The talk will also feature the initial implementation of the exposition, which took place in May 2023 in Bologna, during an event organized in collaboration with the Moon Village Association Italian branch and the Marco Peroni studio. The outreach event encompasses the above-mentioned exposition titled "Far yet so close: Cosmic Ray Measurements in Space with the Alpha Magnetic Spectrometer (ams02.space) and Ground Control Operations of Space Missions," as well as the exposition "Radiation and Safety on the Moon: Active Shielding for Radiation Protection." These exhibitions shed light on how fundamental research addresses the challenge of shielding against cosmic rays in lunar exploration. The AMS Roma Sapienza research group and the Marco Peroni studio jointly undertake the collaboration activities presented at the event.
The event took place within the "Living in Space" Permanent Exhibition located at the premises of the Marco Peroni Engineering Studio. The event successfully bridges the gap between scientific knowledge and public understanding through this approach. It allows individuals from diverse backgrounds to learn about and appreciate the advancements made in space exploration and radiation protection. By making science relatable, accessible, and visually stimulating, the outreach event fosters curiosity, encourages active participation, and promotes scientific literacy among non-scientific audiences.
Promoting and sharing the audio-visual heritage of the history of Italian physics are the two main goals of La Mediateca INFN: the history of physics through videos, the new cultural project of INFN. The website is dedicated to a general audience, and in particular aimed at students of Italian schools and at university students and researchers. Today, it includes almost 200 videos totaling more than 70 hours of interviews, documentaries, newscasts, conferences and seminars, giving rise to a digital archive open to everyone to do research, gather information, explore, and re-trace paths, anecdotes and events in the history of physics.
To make La Mediateca known, especially among young students, a large in-person and online event focusing on the project was organized. The event was followed by over 600 classes, with almost 11 thousand high school students connecting from all over Italy. La Mediateca was also at the heart of a contest for high school students, called “Audioritratti di Scienza” (Science Audioportraits): over 500 students participated in the contest, submitting 130 original podcasts.
These initiatives were evaluated through two different tools: an assessment questionnaire filled in by the students who participated in the contest and the analysis of the numbers and behaviors of the users visiting the website La Mediateca INFN, from November 2022 until today.
During this talk, the main features of La Mediateca INFN will be presented. Furthermore, the reach of the website, the results of the questionnaire and the participation in the project's events will be discussed, to understand how the history of physics can be a hook to engage young students and bring them closer to physics and science.
ScienzaPerTutti(*), literally ScienceForAll, is the web portal dedicated to physics education and the popularization of science, curated by researchers of INFN, the Italian National Institute for Nuclear Physics. The contents are mainly addressed to high-school students and teachers and are designed to engage the audience with the main topics of modern research in particle and nuclear physics, theoretical physics and astroparticle physics. Its missions are to promote public awareness of science, to raise interest in the importance of discoveries along with their applications in everyday life, and to support the teaching and learning of modern physics using innovative methods.
The portal, created in 2002, has evolved through the years, adding different multimedia products such as didactic units, research materials, columns, infographics, videos, interviews, book reviews, and podcasts, and expanding its reach, which currently averages 3000 visits every day.
After an introduction to the leading sections of the ScienzaPerTutti web site, this contribution will present the development of the annual contest addressed to middle- and high-school students, which in 2022 arrived at its XVIII edition. Every year the contest is devoted to a different topic, and participants are asked to design and realize a multimedia product to share their work. In 2023 the contest was centered on the physics of sports: students had to choose a sport and describe the physics behind it. In particular, high-school students were also asked to imagine the same sport played on another planet or in out-of-the-ordinary conditions, in order to invent a new sport. 299 teams from 95 Italian schools applied for the 2023 competition, and we will report here on the submitted works and the outcomes.
(*) https://scienzapertutti.infn.it
The graphical program FeynGame is introduced, which allows didactic access to Feynman diagrams in a playful way. It offers didactic approaches for different levels of experience: from games involving simple clicking and drawing, to practicing the theory of fundamental interactions, to the mathematical representation of scattering amplitudes.
For the specialist, FeynGame may also represent a highly intuitive and flexible tool for drawing Feynman diagrams for publications, which can be adjusted to personal needs and taste in a very simple way.
Opening plenary of the EPS-HEP2023
The 2023 EPS High Energy and Particle Physics Prize is awarded to Cecilia Jarlskog for the discovery of an invariant measure of CP violation in both quark and lepton sectors; and to the members of the Daya Bay and RENO collaborations for the observation of short-baseline reactor electron-antineutrino disappearance, providing the first determination of the neutrino mixing angle Θ13, which paves the way for the detection of CP violation in the lepton sector.
The 2023 Giuseppe and Vanna Cocconi Prize is awarded to the SDSS/BOSS/eBOSS collaborations for their outstanding contributions to observational cosmology, including the development of the baryon acoustic oscillation measurement into a prime cosmological tool, using it to robustly probe the history of the expansion rate of the Universe back to 1/5th of its age providing crucial information on dark energy, the Hubble constant, and neutrino masses.
The 2023 Gribov Medal is awarded to Netta Engelhardt for her groundbreaking contributions to the understanding of quantum information in gravity and black hole physics.
The 2023 Young Experimental Physicist Prize of the High Energy and Particle Physics Division of the EPS is awarded to Valentina M. M. Cairo for her outstanding contributions to the ATLAS experiment: from the construction of the inner tracker, to the development of novel track and vertex reconstruction algorithms and to the searches for di-Higgs boson production.
The 2023 Outreach Prize of the High Energy and Particle Physics Division of the EPS is awarded to Jácome (Jay) Armas for his outstanding combination of activities on science communication, most notably for the 'Science & Cocktails' event series, revolving around science lectures which incorporate elements of the nightlife such as music/art performances and cocktail craftsmanship and reaching out to hundreds of thousands in five different cities world-wide.
Conveners:
Livia Conti (INFN)
Carlos Perez de los Heros (Uppsala University)
Martin Tluczykont (Universität Hamburg)
Gabrijela Zaharijas (UNG)
Contact: eps23-conveners-t01 @desy.de
The KM3NeT Collaboration is incrementally building a network of water-Cherenkov neutrino observatories in the Mediterranean Sea, consisting of two telescopes, named ARCA (Astroparticle Research with Cosmics in the Abyss) and ORCA (Oscillation Research with Cosmics in the Abyss), sharing the same detection technology. ARCA, located off the shore of Sicily, will in its completed shape be a cubic-kilometre-scale modular telescope made of 230 detection units, optimised for neutrino astronomy in the TeV-PeV energy range. ORCA, off the shore of Toulon, will be a 7-Mton modular telescope made of 115 detection units, focused on neutrino oscillations and the neutrino mass hierarchy, for neutrinos in the 1-100 GeV energy range. At the current time, ARCA consists of 21 detection units whereas ORCA has 15 already installed. Both telescopes have already been taking data for a few years, providing a good understanding of backgrounds as well as of the expected signals and hence of the scientific potential of KM3NeT. The technique for neutrino detection and measurement is reviewed, along with outlooks for the completion of the two telescopes and the expected performance for the detection of astrophysical neutrino sources, the measurement of neutrino oscillation parameters and the neutrino mass ordering. Contributions of KM3NeT to global efforts in multimessenger astronomy are also discussed. Early physics outputs of both telescopes are reported.
The ANTARES neutrino telescope was operational in the Mediterranean Sea from 2006 to 2022. The detector array, consisting of 12 lines with a total of 885 optical modules, was designed to detect high-energy neutrinos covering energies from a few tens of GeV up to the PeV range. Despite the relatively small size of the detector, the results obtained are relevant in the field of neutrino astronomy, due to the view of the Southern sky and the good angular resolution of the telescope. This presentation will give an overview of the legacy results of ANTARES, including searches for point sources, neutrinos from the galactic ridge, from dark matter annihilation, and from transients, as well as measurements of neutrino oscillations and limits on new physics.
The IceCube Neutrino Observatory is a cubic-kilometer scale neutrino detector at the South Pole. IceCube consists of over 5000 photosensors deployed on cables deep in the Antarctic ice. The sensors detect neutrinos via the Cherenkov light emitted by secondary particles produced in neutrino interactions.
With the measurement of the isotropic astrophysical neutrino flux in the TeV-PeV energy range, IceCube has opened a new window into the high-energy universe.
During the past few years, IceCube has detected deviations from isotropy with neutrino emission from the blazar TXS 0506+056 and the Seyfert galaxy NGC 1068. The neutrino emission spectra of the two objects differ substantially, hinting at differences in the underlying production mechanisms.
Adding to the complexity of the neutrino sky, IceCube has recently measured neutrino emission from the Galactic Plane, which offers valuable new information to the study of galactic cosmic ray production and transport.
In this contribution, we will present an overview of IceCube's results on the origin of galactic and extra-galactic neutrino emission.
Astrophysical hypotheses suggest the existence of neutrinos beyond the energy range currently reached by optical detectors (> 10 PeV). The observation of such particles by capturing the coherent emission of their interaction in ice, i.e. Askaryan radiation, is the aim of the Radio Neutrino Observatory in Greenland (RNO-G). Located at Summit Station, RNO-G represents the first neutrino detector oriented towards the Northern sky, and it will play a role in the future shaping of the larger IceCube-Gen2 Radio Array. The first installed stations of RNO-G are currently active and collecting data, while the full array will reach completion within the next years. The plan includes a grid of 35 radio stations, each designed to be low powered and autonomous. Learning from previous radio detectors, each station includes both shallow antennas, mainly for cosmic-ray identification, and in-ice deep antennas with a phased-array trigger for detection and reconstruction. We present the motivation, design and current status of the detector.
Continuous gravitational waves are long-duration gravitational-wave signals that still remain to be detected. These signals are expected to be produced by rapidly spinning non-axisymmetric neutron stars, and would provide valuable information on the physics of such compact objects; additionally, they would allow us to probe the galactic population of EM-dark neutron stars, whose properties may differ from those of the pulsar population observed through electromagnetic means. Other sources include the evaporation of boson clouds around spinning black holes, or binary systems of light compact objects such as planetary-mass black holes. In this talk, I give a brief overview of the continuous gravitational-wave search results produced by the LIGO-Virgo-KAGRA collaboration using data from their third observing run O3, and discuss prospects for the now ongoing fourth observing run O4.
The 5n-vector ensemble method is a statistical multiple test for the targeted search of continuous gravitational waves from an ensemble of known pulsars. This method can improve the detection probability by combining the results from individually undetectable pulsars if a few signals are near the detection threshold. In this presentation, I show the results of the 5n-vector ensemble method applied to the O3 data set from the LIGO and Virgo detectors and an ensemble of 223 known pulsars. I find no evidence for a signal from the ensemble and set a 95% credible upper limit on the mean ellipticity, assuming a common exponential distribution for the pulsars' ellipticities. Using two independent hierarchical Bayesian procedures, the upper limits on the mean ellipticity are $2.2 \times 10^{-9}$ and $1.2 \times 10^{-9}$ for the analyzed pulsars.
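For orientation, the ellipticity quoted above is conventionally defined in terms of the star's principal moments of inertia, and it sets the expected gravitational-wave amplitude of a rotating triaxial neutron star through the standard relation (a schematic form, assuming emission at twice the rotation frequency):

$$ \varepsilon \;=\; \frac{|I_{xx}-I_{yy}|}{I_{zz}}, \qquad h_0 \;=\; \frac{4\pi^2 G}{c^4}\,\frac{I_{zz}\,\varepsilon\, f_{\rm GW}^{2}}{d}, $$

where $f_{\rm GW}$ is the gravitational-wave frequency and $d$ the distance to the pulsar.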
The first observation of gravitational waves (GWs) with laser interferometers of the LIGO collaboration in 2015 was about 100 years after their prediction within general relativity. In this talk we focus on the detection of gravitational waves in a higher frequency regime with superconducting radio frequency (SCRF) cavities. This approach has already been considered as probes for GWs before laser interferometers were built and the operational spectrum reaches up to high GW frequencies above ∼10 kHz. Measurements in this frequency range could give possible hints to new physics beyond the standard model or insights into early universe phenomena.
The detection principle is based on the GW-induced transition between two electromagnetic eigenmodes of the SCRF cavity. We consider the interplay of the indirect coupling to the cavity boundaries and the direct coupling to the electromagnetic field described by the Gertsenshtein effect. We precisely analyse all contributing effects and derive in detail the coupling coefficients for the frequency range O(kHz-GHz).
Aiming at improving the description of GWs, the results are applied to the MAGO cavity prototype built at INFN in Genoa in 2005. Together with FNAL, Universität Hamburg and DESY are restarting research on this detector by characterizing its geometry and its mechanical and electromagnetic eigenmodes. The prototype parameters yield predictions for achievable sensitivities in the desired frequency range, which can be compared to possible GW-generating sources. Further improvements of the MAGO cavity prototype parameters indicate that the region of new physics is within reach.
Conveners:
Fady Bishara (DESY)
James Frost (University of Oxford)
Silvia Scorza (LPSC Grenoble)
Contact: eps23-conveners-t03 @desy.de
We consider threshold effects of thermal dark matter (DM) pairs (fermions and antifermions) interacting with a thermal bath of dark gauge fields in the early expanding universe. Such threshold effects include the processes of DM pairs annihilating into the dark gauge fields (light d.o.f.) as well as electric transitions between pairs forming a bound state or being unbound but still feeling non-perturbative long range interactions (Sommerfeld effect). We scrutinize the process of bound-state formation (bsf) and the inverse thermal break-up process (bsd), but also (de-)excitations, providing a thermal decay width due to the thermal bath. We compute the corresponding observables by exploiting effective-field-theory (EFT) techniques to separate the various scales (the mass of the particles M, the momenta Mv, the energies Mv^2, as well as thermal scales: the temperature T, the Debye mass m_D), which are intertwined in general. To do so we make use of the so-called non-relativistic EFT (NREFT) as well as potential non-relativistic EFT (pNREFT) at finite T. These processes play an important role for a quantitative treatment of the dynamics of the relevant d.o.f. at the thermal freeze-out regime and the corresponding observables appear in the relevant evolution equations, from which we eventually determine the relic energy density of DM.
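Schematically, the hierarchy of scales separated by the EFT construction described above can be summarized as follows (for a weakly coupled dark gauge sector with coupling $g$ and typical relative velocity $v$ of the heavy pair):

$$ M \;\gg\; Mv \;\gg\; Mv^{2}, \qquad T, \quad m_D \sim g\,T, $$

where the thermal scales $T$ and $m_D$ may lie anywhere relative to the non-relativistic scales near freeze-out, which is why the scales are referred to above as being intertwined in general.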
The null results of dark matter searches at experiments motivate us to look beyond the usual freeze-out mechanisms and work out the upper bound on the dark matter masses that could be probed at experiments. In this talk, we shall briefly overview different production mechanisms and the corresponding upper bounds on the dark matter mass in those scenarios. In addition, we shall focus on the exponential mechanism for dark matter production and list the differences between our approach and the ones in the literature.
Models of feebly-interacting Dark Matter (DM), potentially detectable in long-lived particle searches, have gained popularity due to the non-observation of DM in direct detection experiments. Unlike DM freeze-out, which occurs when the dark sector particles are non-relativistic, feebly-interacting DM is primarily produced at temperatures corresponding to the heaviest mass scale involved in the production process. Consequently, incorporating finite temperature corrections becomes essential for an accurate prediction of the relic density. However, current calculations are often performed at either zero temperature or rely on thermal masses to regulate infrared divergences. In our study, we utilize the Closed-Time-Path (CTP) formalism to compute the production rate of feebly-interacting DM associated with a gauge charged parent. We compare our results with the aforementioned approaches such as the insertion of thermal masses, zero temperature calculations and a recent calculation that interpolates between finite temperature results in the ultra-relativistic and non-relativistic regime. Furthermore, we discuss the applicability and feasibility of these different approaches for phenomenological studies.
Although the Standard Model is very successful, there are still open problems which it cannot explain, one of them being dark matter (DM). This has led to various Beyond Standard Model theories, of which Two Higgs Doublet Models are very popular, as they are one of the simplest extensions and lead to a rich phenomenology. Further extensions with a complex singlet lead to a natural DM candidate.
The aim of this work is to explore the dark sector in a Two Higgs Doublet Model extended by a complex scalar singlet, where the imaginary component of the singlet gives rise to a pseudo-scalar DM candidate. Both the doublets and the singlet obtain a vacuum expectation value (vev), where the singlet vev leads to additional mixing of the doublet and singlet scalar sectors. We examine the influence of the Higgs sector parameters on the DM relic density as well as on the direct and indirect detection cross sections. The results are then compared with constraints from experiments.
We investigate ways of identifying two kinds of dark matter component particles at high-energy colliders. The strategy is to notice and distinguish double peaks (humps) in some final-state observable. We carried out our analysis in various popular event topologies for dark matter searches, such as mono-X and n-leptons + n-jets final states along with missing energy/transverse momentum. It turns out that a lepton collider is suitable for such analyses. The observables best suited for this purpose have been identified, based on the event topology. The implications of beam polarization are also explored in detail. Lastly, a quantitative measure of the distinguishability of the two peaks has been established in terms of a few newly constructed variables.
The cold dark matter (CDM) candidate with weakly interacting massive particles can successfully explain the observed dark matter relic density on cosmic scales and the large-scale structure of the Universe. However, a number of observations at the satellite-galaxy scale seem to be inconsistent with CDM simulations. This is known as the small-scale problem of CDM. In recent years, it has been demonstrated that self-interacting dark matter (SIDM) with a light mediator offers a reasonable explanation for the small-scale problem. We adopt a simple SIDM model and focus on the effects of Sommerfeld enhancement. In this model, the dark matter candidate is a leptonic scalar particle with a light mediator. We have found favored regions of the parameter space, with proper masses and coupling strengths, generating a relic density that is consistent with the observed CDM relic density. Furthermore, this model satisfies the constraints from recent direct and indirect dark matter searches as well as from the effective number of neutrinos and the observed small-scale structure of the Universe. In addition, this model with the favored parameters can resolve the discrepancies between astrophysical observations and $N$-body simulations.
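As an illustration of the Sommerfeld enhancement central to this class of models, in the Coulomb limit of a light mediator the enhancement of the annihilation cross section takes the familiar textbook form (quoted here for orientation, not as the specific expression used in this work; conventions for $v$ differ between references):

$$ S(v) \;=\; \frac{\pi\alpha/v}{1 - e^{-\pi\alpha/v}}, $$

where $\alpha$ is the dark coupling strength and $v$ the velocity of the annihilating pair; for a mediator of finite mass the enhancement saturates at low velocities.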
Axion-like particles (ALPs) are leading candidates to explain the dark matter in the universe. Their production via the misalignment mechanism has been extensively studied for cosine potentials characteristic of pseudo-Nambu-Goldstone bosons. In this work we investigate ALPs with non-periodic potentials, which allow for large misalignment of the field from the minimum. As a result, the ALP can match the relic density of dark matter in a large part of the parameter space. Such potentials give rise to self-interactions which can trigger an exponential growth of fluctuations in the ALP field via parametric resonance, leading to the fragmentation of the field. We study these effects with both Floquet analysis and lattice simulations. Using the Press-Schechter formalism, we predict the halo mass function and halo spectrum arising from ALP dark matter. These halos can be dense enough to produce observable gravitational effects such as astrometric lensing, diffraction of gravitational wave signals from black hole mergers, photometric microlensing of highly magnified stars, perturbations of stars in the galactic disk or stellar streams. These effects would provide a probe of dark matter even if it does not couple to the Standard Model. They would not be observable for halos predicted for standard cold dark matter and for ALP dark matter in the standard misalignment mechanism. We determine the relevant regions of parameter space in the (ALP mass, decay constant)-plane and compare predictions in different axion fragmentation models.
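For reference, the misalignment dynamics discussed above is governed by the homogeneous equation of motion of the ALP field in the expanding universe (schematic form; for the non-periodic potentials studied here $V(\phi)$ replaces the usual cosine potential):

$$ \ddot{\phi} \;+\; 3H\dot{\phi} \;+\; V'(\phi) \;=\; 0, $$

and fragmentation corresponds to the exponential growth of fluctuations of the field around this homogeneous solution, driven by the self-interactions contained in $V'(\phi)$.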
Axion kinetic misalignment is a mechanism that may enhance the dark matter relic abundance found in models of QCD axions or axion-like particles by considering initial conditions with large kinetic energy. This is interesting because it motivates axion dark matter at lower decay constants, where the couplings to matter, including detectors, are stronger. I will give an introduction to this mechanism, discuss some of the phenomenology that arises from kinetic misalignment, and briefly discuss our recent work on how the mechanism can be realized.
Conveners:
Laura Fabbietti (TU München)
Gunther Roland (MIT)
Jasmine Brewer (CERN)
Jeremi Niedziela (DESY)
Contact: eps23-conveners-t05 @desy.de
Light-flavour hadrons represent the bulk of particles produced in high-energy hadronic collisions at the LHC.
Measuring their pseudorapidity dependence provides information on the partonic structure of the colliding hadrons. In particular, at LHC energies it is sensitive to non-linear QCD evolution in the initial state.
In addition, measurements of light-flavour hadron production in small collision systems at the LHC energies have shown the onset of phenomena (e.g. radial flow and long-range correlations) that resemble what is typically observed in nucleus-nucleus collisions and attributed to the formation of a deconfined system of quarks and gluons.
The improved detector commissioned during LS2 makes ALICE the perfect setup for these measurements.
In this talk, particle production mechanisms are explored by addressing the charged-particle pseudorapidity densities measured in pp and Pb−Pb collisions, presenting the results from Run 3 for the first time.
In addition, new results on identified light-flavour particle production measured in high-multiplicity triggered events will be shown. These will be interpreted in light of the first results from Run 3 on the identified particle production in pp collisions as a function of multiplicity, spanning from the lowest collision energy of $\sqrt{s}$ = 900 GeV to the highest collision energy ever achieved in the laboratory of $\sqrt{s}$ = 13.6 TeV.
The study of strange particle production in heavy-ion collisions plays an important role in understanding the dynamics of the strongly interacting system created in the collision. The enhanced production of strange hadrons in heavy-ion collisions relative to that in pp collisions is historically one of the signatures of the formation of the quark-gluon plasma. The study of strangeness production in small collision systems is also of great interest. One of the main challenges in hadron physics is the understanding of the origin of the increase of (multi)strange hadron yields relative to pion yields with increasing charged-particle multiplicity observed in pp and p-Pb collision systems, a feature that is reminiscent of the heavy-ion phenomenology.
In this talk, new results on the production of multiple strange hadrons in pp collisions are presented. In addition, recent measurements of the production of (multi)strange hadrons in small collision systems as a function of multiplicity and effective energy are shown. These results are discussed in the context of state-of-the-art phenomenological models.
Charmonia have long been recognized as a valuable probe of the nuclear matter in extreme conditions, such as the strongly interacting medium created in heavy-ion collisions and known as quark-gluon plasma (QGP). At LHC energies, the regeneration process due to the abundantly produced charm quarks, was found to considerably affect measured charmonium observables. Comprehensive production measurements of charmonia, including both ground and excited states, are crucial to discriminate among different regeneration scenarios assumed in theoretical calculations. Charmonia can also be sensitive to the initial state of the heavy-ion collision. In particular, their spin-alignment can be affected by the strong magnetic field generated in the early phase, as well as by the large angular momentum of the medium in non-central collisions. The determination of the component originating from beauty hadron decays, known as non-prompt charmonium, grants a direct insight into the nuclear modification factor of beauty hadrons, which is expected to be sensitive to the energy loss experienced by the ancestor beauty quarks inside the QGP. Furthermore, once it is subtracted from the inclusive charmonium production, it allows for a direct comparison with prompt charmonium models.
In this contribution, newly published results of inclusive J/$\psi$ production, including yields, average transverse momentum and nuclear modification factors, obtained at central and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, will be presented. At midrapidity, newly published measurements of prompt and non-prompt J/$\psi$ production will also be shown. Recently published results obtained at forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV will be discussed. These include, among others, the $\psi$(2S)-to-J/$\psi$ (double) ratio and the $\psi$(2S) nuclear modification factor, as well as the J/$\psi$ polarization with respect to a quantization axis orthogonal to the event-plane. Results will be compared to available model calculations.
Electromagnetic probes such as photons and dielectrons (e$^{+}$e$^{-}$ pairs) are a unique tool to study the space-time evolution of the hot and dense matter created in ultra-relativistic heavy-ion collisions. They are produced at all stages of the collision with negligible final-state interactions. At intermediate dielectron invariant mass ($m_{\rm ee} > 1$ GeV/$c^{2}$), thermal radiation from the quark-gluon plasma carries information about the early temperature of the medium. At LHC energies, it is however dominated by a large background from correlated heavy-flavour hadron decays. At smaller $m_{\rm ee}$, thermal radiation from the hot hadronic phase contributes to the dielectron spectrum via decays of $\rho$ mesons, whose spectral function is sensitive to chiral-symmetry restoration. Finally, at vanishing $m_{\rm ee}$, the real direct photon fraction can be extracted from the dielectron data. In pp collisions, such measurement in minimum bias events serves as a baseline and a fundamental test for perturbative QCD calculations, while studies in high charged-particle multiplicity events allow one to search for thermal radiation in small colliding systems. The latter show surprising phenomena similar to those observed in heavy-ion collisions.
In this talk, final ALICE results, using the full data sample collected during the LHC Run 2, will be presented. They include measurements of the dielectron and direct-photon production in central Pb--Pb at the centre-of-mass energy per nucleon pairs, $\sqrt{s_{\rm NN}}$, of 5.02 TeV, as well as of direct photons in minimum bias and high-multiplicity pp collisions at $\sqrt{s} = 13$ TeV. Finally, first results with the Run 3 pp data at $\sqrt{s} = 13.6$ TeV, using the upgraded ALICE detector to disentangle the different dielectron sources, will be reported.
Hypernuclei are bound states of nucleons and hyperons. The study of their properties, such as their lifetimes and binding energies, provides information on the hadronic interaction between hyperons and nucleons that is complementary to that obtained from correlation measurements. Precise modeling of this interaction is a fundamental input for the calculation of the equation of state of the high-density nuclear matter inside neutron stars. Moreover, measurements of their production rate in different collision systems are important to constrain (hyper)nuclei production models, such as the statistical hadronization model and baryon coalescence.
In this presentation, recent results on (anti)hypertriton production in small collision systems and the first-ever observations of (anti)hyperhydrogen-4 and (anti)hyperhelium-4 in Pb-Pb collisions are presented. These measurements pave the way for detailed investigations of the large charge symmetry breaking implied by the Λ binding energy difference in these hypernuclei. Moreover, differential measurements of their production yields will contribute to a better understanding of their production models. Recent results on hypertriton production, with high-precision measurements of its lifetime and binding energy in Pb-Pb collisions, will also be shown and discussed in the context of state-of-the-art theoretical models.
The investigation of the quark content of hadrons has been a major goal of non-perturbative strong interaction physics. In the last decade, several resonances in the mass range 1000-2000 MeV/$c^2$ have emerged that cannot be explained by the quark model. The internal structure of exotic resonances such as $\rm f_0$, $\rm f_1$, and $\rm f_2$ is currently unknown. Different scenarios are possible, ranging from two-quark and four-quark states to molecules, hybrid states, or glueballs. A modification of the measured yields of these exotic hadrons in AA and pA collisions as compared to pp collisions has been proposed as a tool to investigate their internal structure.
The excellent particle identification capabilities of the ALICE detector along with the large data sample collected in pp and p-Pb collisions provide an opportunity for multi-differential studies of such high-mass resonances. In this presentation, the first-ever measurement of $\rm f_1$ production in pp collisions and measurements of $\rm f_0$ and $\rm f_2$ production both in pp and p-Pb collisions will be presented. The measurements of their mass, width, and yields will be presented and their sensitivity to the internal structure of these exotic resonances will be discussed. These results will pave the way for future experimental investigations on the internal structure of other exotic hadrons.
Short-lived hadronic resonances are unique tools for studying the hadron-gas phase that is created in the late stages of relativistic heavy-ion collisions. Measurements of the yield ratios between resonances and the corresponding stable particles are sensitive to the competing rescattering and regeneration effects. These measurements in small collision systems, such as pp and p-Pb, are a powerful method to reveal a possible short-lived hadronic phase. In addition, resonance production in small systems is interesting for studying the onset of strangeness enhancement, collective effects, and the hadron production mechanism. On this front, the $\phi$ meson is particularly relevant since its yield is sensitive to different production models: no effect is expected from canonical strangeness suppression, but its production is expected to be enhanced in the rope-hadronization scenario.
In this presentation, recent measurements of hadronic resonances in different collision systems, going from pp to Pb-Pb collisions, are presented. These include transverse momentum spectra, yields, and yield ratios as a function of multiplicity and spherocity. The presented results are discussed in the context of state-of-the-art phenomenological models of hadron production. The resonance yields measured in Xe-Xe and Pb-Pb collisions are used as an experimental input in a partial chemical equilibrium-based thermal model to constrain the kinetic freeze-out temperature. This is a novel procedure that is independent of assumptions on the flow velocity profile and the freeze-out hypersurface.
We present a selection of very recent results by the CMS collaboration on heavy-ion physics at the LHC (CERN).
The center-of-mass energies available at modern accelerators, such as the Large Hadron Collider (LHC), and at future generation accelerators, such as the Electron-Ion Collider (EIC) and Future Circular Collider (FCC), offer us a unique opportunity to investigate hadronic matter under the most extreme conditions ever reached. One of the most intriguing phenomena of strong interaction is the so-called gluon saturation in nucleons and nuclei. In the saturation regime, the density of partons, per unit transverse area, in hadronic wavefunctions becomes very large leading to non-linear effects, that are described by the Balitsky-JIMWLK hierarchy of equations.
Pursuing the goal of obtaining accurate theoretical predictions to test the physics of saturation, we compute the cross-sections of diffractive single and double hadron photo- or electroproduction with large $p_T$, on a nucleon or a nucleus at next-to-leading logarithmic accuracy. We employ a hybrid formalism mixing collinear factorization and high energy small-$x$ factorization. This new class of processes provides an access to precision physics of gluon saturation dynamics, with very promising future phenomenological studies at the EIC, or, at the LHC in $p A$ and $A A$ scattering, using Ultra Peripheral Collisions (UPC).
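As a rough guide, the saturation regime referred to above sets in below the saturation scale, for which a commonly used parametrisation (quoted only for orientation) is

$$ Q_s^{2}(x,A) \;\sim\; A^{1/3}\, x^{-\lambda}, \qquad \lambda \simeq 0.2\text{--}0.3, $$

so that non-linear effects become stronger at small $x$ and for heavy nuclei, which is why ultra-peripheral $pA$ and $AA$ collisions and the EIC are natural environments for these studies.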
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06 @desy.de
The modeling of soft radiation in MC approaches and the inclusion of the intrinsic kT effect in a consistent and “simple” way is one of the successes of the Parton Branching (PB) TMD approach. In this approach, a consistent treatment of the parton shower evolution and the TMD evolution is carried out by the PB-TMD initial-state shower. In this talk, the azimuthal correlation, φ12, of high transverse momentum jets in pp collisions at √s = 13 TeV is studied by applying PB-TMD distributions and the PB-TMD initial-state shower to NLO calculations via MCatNLO. The Z+jet azimuthal correlation is also studied in the same kinematic regime. The different patterns of Z+jet and dijet azimuthal correlations can be used to search for potential factorization-breaking effects in the back-to-back region, which depend on the different color and spin structure of the final states and their interferences with the initial states. In order to investigate these effects experimentally, we propose to measure the ratio of the distributions in φ for Z+jet and multijet production at low and at high transverse momenta.
Multi-jet events in various kinematic regimes are the subject of wide-ranging studies in the LHC program and at future colliders. The merging of TMDs, parton showers and matrix elements is a delicate matter that is sensitive to the process and observable of interest. We present studies of the merging scale in the TMD merging framework, using the Cascade3 Monte Carlo generator. The merging scale separates hard and soft partonic emissions, and serves as an extension of the concept of the factorization scale which allows one to treat exclusive production channels. Differential jet rates of Z plus jet events at LHC energies have been investigated to determine the dependence of theoretical predictions on the merging scale as a function of the DY mass, including the case of high-mass DY, and to analyze the associated theoretical systematics.
At leading order in positron-proton collisions, a lepton scatters off a quark through virtual photon exchange, producing a quark jet and a scattered lepton in the final state. The total transverse momentum of the system is typically small; however, deviations from zero can be attributed to perturbative initial- and final-state radiation in the form of soft gluon radiation when the transverse momentum difference, $\vert\vec{P}_{\perp}\vert$, is much greater than the total transverse momentum of the system, $\vert\vec{q}_{\perp}\vert$. The soft gluon radiation comes only from the jet and should result in a measurable azimuthal asymmetry between $\vec{P}_{\perp}$ and $\vec{q}_{\perp}$. Quantifying the contribution of soft gluon radiation to this asymmetry serves as a novel test of perturbative QCD as well as an important background estimate for measurements of the lepton-jet imbalance that have recently garnered intense investigation. The measurement is performed on positron-proton collisions from HERA Run II recorded with the H1 detector. A new machine learning method is used to unfold eight observables simultaneously and unbinned. The final measurement, the azimuthal angular asymmetry, is then derived from these unfolded, unbinned observables. Results are compared with parton shower Monte Carlo predictions as well as with soft gluon radiation calculations from a Transverse Momentum Dependent (TMD) factorization framework.
https://www-h1.desy.de/h1/www/publications/htmlsplit/H1prelim-23-031.long.html
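In the conventions typically used for lepton-jet imbalance studies (stated here as an assumption, since the abstract does not spell them out), the two vectors entering the asymmetry are

$$ \vec{q}_{\perp} \;=\; \vec{p}_{\perp}^{\;\ell} + \vec{p}_{\perp}^{\;\rm jet}, \qquad \vec{P}_{\perp} \;=\; \tfrac{1}{2}\left(\vec{p}_{\perp}^{\;\ell} - \vec{p}_{\perp}^{\;\rm jet}\right), $$

and the azimuthal asymmetry is quantified through the angle $\phi$ between $\vec{q}_{\perp}$ and $\vec{P}_{\perp}$, typically via harmonics $\langle\cos(n\phi)\rangle$.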
We present novel analyses on accessing the 3D gluon content of the proton via spin-dependent TMD gluon densities, calculated through the spectator-model approach. Our formalism embodies a fit-based spectator-mass modulation function, suited to catch longitudinal-momentum effects in a wide kinematic range. Particular attention is paid to the time-reversal even Boer-Mulders and the time-reversal odd Sivers functions, whose accurate knowledge, needed to perform precise 3D analyses of nucleons, motivates synergies between LHC and EIC Communities.
We present a novel method for extracting the Collins-Soper kernel directly from the comparison of differential cross sections measured at different energies. Using this method, we analyze simulated data from the CASCADE event generator and extract the Collins-Soper kernel predicted by the Parton Branching method over a wide range of transverse distances. Using the method, we also solve a long-standing problem of comparing TMDs obtained from the PB and factorization approaches.
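The Collins-Soper kernel mentioned above governs the evolution of TMD distributions with the rapidity (energy) scale; schematically, in impact-parameter space,

$$ \frac{\partial \ln \tilde{F}(x, b_T; \mu, \zeta)}{\partial \ln\sqrt{\zeta}} \;=\; \tilde{K}(b_T; \mu), $$

so that the ratio of differential cross sections measured at two different energies is directly sensitive to $\tilde{K}(b_T;\mu)$, which is the basis of the extraction described here.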
The Transverse Momentum Dependent (TMD) Parton Branching (PB) method is a Monte Carlo (MC) framework for obtaining QCD high-energy collider predictions grounded in ideas originating from TMD factorization. It provides an evolution equation for the TMD parton distribution functions and allows one to use them within TMD MC generators.
In this work, we analyze the structure of the TMD PB Sudakov form factor. We discuss the logarithmic order of the low-$q_T$ resummation achieved so far by the PB method by comparing its Sudakov form factor to the Collins-Soper-Sterman (CSS) one, and we illustrate how the accuracy of PB can be increased by using the ideas of a physical (effective) coupling. By using appropriate integration limits in PB, we show how we can analytically identify a term analogous to the Collins-Soper (CS) kernel. We investigate the effects of different evolution scenarios on PB TMDs and integrated TMDs and on a numerical extraction of the CS kernel.
The Parton Branching (PB) method allows the determination of Transverse Momentum Dependent (TMD) parton densities, which cover the region from very small to large $k_T$. In the very small $k_T$ region, the contribution from the intrinsic motion of partons (intrinsic $k_T$) plays a role, but so do contributions of very soft gluons, which are resummed in the evolution equation. A detailed study shows the importance of very soft gluons (below a resolvable scale) for both the integrated and the TMD parton densities.
The PB TMD parton densities together with a NLO calculation for the hard process in the MC@NLO style are used to calculate the transverse momentum spectrum of Drell-Yan pairs over a wide mass range. The sensitivity to the intrinsic $k_T$-distribution is used to determine its free parameters. Starting from the PB-NLO-HERAI+II-2018 set2 TMD parton distributions, the width of the intrinsic $k_T$ -distribution is determined, resulting in a slightly larger width than in the default set.
The width of the intrinsic $k_T$-distribution is independent of the mass of the Drell-Yan pair and independent of the center-of-mass energy $\sqrt{s}$, in contrast to other approaches.
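For concreteness, the intrinsic transverse-momentum distribution in approaches of this kind is usually modelled as a Gaussian (the exact parametrisation used in the fit is defined in the corresponding PB publications),

$$ f(k_{T,0}) \;\propto\; \exp\!\left(-\,\frac{k_{T,0}^{2}}{\sigma^{2}}\right), $$

and the width $\sigma$ is the free parameter that is determined from the measured Drell-Yan transverse momentum spectra.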
QCD calculations for collider physics make use of perturbative solutions of renormalisation group equations (RGEs). RGE solutions can contribute significantly to systematic uncertainties of theoretical predictions for physical observables. We propose a method to express these systematic effects in terms of resummation scales, using techniques borrowed from soft-gluon resummation approaches. We discuss applications to collinear and Sudakov processes at hadron colliders, including deep-inelastic scattering (DIS) structure functions and the Drell-Yan (DY) vector-boson transverse momentum distribution.
The talk is based on work in progress in collaboration with V. Bertone and G. Bozzi and on the work published in Phys. Rev. D 105 (2022) 096003, "Perturbative hysteresis and emergent resummation scales".
Conveners:
Thomas Blake (University of Warwick)
Marzia Bordone (CERN)
Thibaud Humair (MPP)
Contact: eps23-conveners-t08 @desy.de
The Fermilab muon $g-2$ experiment was designed to measure the muon's anomalous magnetic moment $a_\mu=(g-2)/2$ to 140 parts per billion. The value of $a_\mu$ is proportional to the difference frequency $\omega_a = \omega_s - \omega_c$ between the muon's spin precession frequency and cyclotron frequency in the uniform magnetic field of the $g-2$ storage ring. The frequency $\omega_a$ is extracted from the time distribution of the muon-decay positrons recorded by 24 electromagnetic calorimeters positioned around the inner circumference of the storage ring. We will discuss the various approaches to the frequency determination, including the reconstruction and fitting of the time distributions and the procedures for handling the effects of gain changes, positron pileup and beam dynamics. We also discuss the data consistency checks and the strategy for averaging $\omega_a$ across the different analyses.
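For context, in a uniform magnetic field, and neglecting the electric-field and pitch corrections that are applied in the full analysis, the anomalous precession frequency is related to $a_\mu$ by the standard expression

$$ \omega_a \;=\; \omega_s - \omega_c \;\simeq\; a_\mu\, \frac{e B}{m_\mu}, $$

so that $\omega_a$, extracted from the positron time spectra, together with the measured magnetic field determines $a_\mu$.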
The Muon g-2 experiment at Fermilab measures the muon magnetic-moment anomaly, $a_\mu=(g-2)/2$, with the ultimate goal of 140 parts per billion (ppb) precision. This requires determining the absolute magnetic field, averaged over space and time, experienced by the muons, expressed as the nuclear magnetic resonance frequency of protons in a spherical pure water sample at a specified reference temperature. A chain of calibrations and measurements maps and tracks the magnetic field providing the muon-weighted average field with precision better than 60 ppb. This talk will present the principles, practical realizations, and innovations incorporated into the measurement and analysis of the magnetic field for the 2019-20 data sets.
The Muon g-2 experiment at Fermilab is making progress towards its physics goal of measuring the muon anomalous magnetic moment with the unprecedented precision of 140 parts per billion. In April 2021 the collaboration published the first measurement, based on the first year of data taking. The second result is based on the second and third years of data taking combined. In this talk, we discuss the corrections to the anomalous spin precession signal due to beam dynamics effects being used to determine the anomalous spin precession frequency for the second result.
During the last 15 years the Radio MontecarLow (“Radiative Corrections and Monte Carlo Generators for Low Energies”) Working Group, see www.lnf.infn.it/wg/sighad/, has been providing valuable support to the development of radiative corrections and Monte Carlo (MC) generators for low-energy e+e- data and tau-lepton decays. Its operation, which started in 2006, has continued until recent years, bringing together at 20 meetings both theorists and experimentalists, experts working in the field of e+e- physics and partly also the tau community, and produced the report “Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data”, S. Actis et al., Eur. Phys. J. C 66, 585-686 (2010) (https://arxiv.org/abs/0912.0749), which has more than 300 citations.
While the working group has been operating for more than 15 years without a formal basis for funding, parts of our program have recently been included as a Joint Research Initiative in the group application of the European hadron physics community, STRONG2020, to the European Union, with the specific goal of creating an annotated database for low-energy hadronic cross sections in e+e- collisions. The database will contain information about the reliability of the data sets, their systematic errors, and the treatment of radiative corrections. In parallel, the theory community is continuing its effort towards the realization of an MC generator with full NNLO corrections for low-energy e+e- annihilation into hadrons, which is of relevance for the precise determination of the leading hadronic contribution to the muon g-2. We will report on both these initiatives.
Systematic Operator Product Expansions can be applied to the hadronic light-by-light tensor in those kinematic regimes where there is a large external Euclidean momentum. In this talk it is reviewed how this can be applied to the different kinematic regimes entering the muon g-2 integral, shedding some light on the interplay of short-distance and long-distance contributions in the data-driven approach and constraining the regime from which the largest error in HLbL comes. This leads to a better theoretical control of the corresponding uncertainties. This talk is based on Phys. Lett. B 798 (2019) 134994, JHEP 10 (2020) 203, JHEP 04 (2021) 240 and especially on JHEP 02 (2023) 167 and work in progress.
Charged lepton flavor violation (CLFV) is forbidden in the Standard Model but possible in several new physics scenarios. Thus, the observation of CLFV would be a clear signature of new physics. We report the world-leading results on lepton-flavor-violating decays of the $\tau$ lepton into a light charged lepton ($\ell$ = e, $\mu$) and a vector meson, using the full data sample collected by Belle. In addition, we report searches for new physics in $\tau$ decays, especially decays involving a heavy neutral lepton. We also cover recent searches for tau decays into a scalar non-Standard-Model particle and a light charged lepton. The results are based on the data set collected by the Belle experiment at the KEKB asymmetric-energy $e^{+}e^{-}$ collider.
The CPT symmetry is one of the most fundamental symmetries in physics. Any violation of this symmetry would have profound implications for our understanding of the universe [1]. In this study, we report CPT symmetry tests in 3$\gamma$ decays of polarised $^3$S$_{1}$ positronium using the Jagiellonian Positron Emission Tomography (J-PET) device. The J-PET experiment allows sensitive and precise tests of CPT symmetry by measuring the angular correlation between the spin of ortho-positronium and the momenta of the annihilation photons emitted in its decay [2]; a schematic form of this correlation is given after the references below. The potential of J-PET in determining the full range of the expectation value of this correlation has already resulted in improving the precision of the CPT symmetry test to the 10$^{-4}$ level [3]. The accuracy of this previous measurement was limited by statistics only. The new test is based on increased statistics, owing to a modified experimental setup aimed at improving the detection efficiency and to the use of a different positronium production chamber. The high precision of this test would open the possibility to explore the limits of CPT symmetry validity in the charged leptonic sector.
[1] R. Lehnert, Symmetry 8(11), 114 (2016).
[2] P. Moskal et al., Acta Phys. Polon. B 47, 509 (2016).
[3] P. Moskal et al., Nat. Commun. 12, 5658 (2021).
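Schematically, the CPT-odd angular correlation exploited in these measurements is the expectation value of the operator built from the ortho-positronium spin $\vec{S}$ and the momenta $\vec{k}_1$, $\vec{k}_2$ of the two most energetic annihilation photons (notation as commonly used in the J-PET literature, quoted here for orientation):

$$ \mathcal{O}_{\rm CPT} \;=\; \vec{S}\cdot\left(\vec{k}_1 \times \vec{k}_2\right), $$

and a non-zero expectation value of this correlation in ortho-positronium decay would signal a violation of CPT symmetry.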
The MEG II experiment, which focuses on investigating Charged Lepton Flavour Violation in muon decays, completed the commissioning of all subdetectors in time for the 2021 run. Recently, it concluded the second year of data collection at the Paul Scherrer Institut (CH).
The experimental apparatus has been specifically designed to search for $\mu^+ \rightarrow e^+ \gamma$ decays, aiming for a significant improvement over the current sensitivity of $4.2 \times 10^{-13}$. This requires high-performance and lightweight detectors capable of handling the pileup effect.
In this presentation, I will provide a brief overview of the experimental techniques employed and share the lessons learned from operating the detectors in a high-rate environment. Subsequently, I will describe the analysis approach applied to the 2021 dataset. Although we collected only a few weeks of data in 2021, we anticipate achieving a sensitivity of $8.5 \times 10^{-13}$, although the analysis will be strongly limited by the available statistics. Currently, the data in the signal region are blinded, and we are preparing the likelihood analysis using the signal sidebands. The MEG experiment's current limit is twice as good as what we can achieve with the 2021 dataset, but it relies on the complete dataset spanning approximately four years. However, the 2021 data provide a unique opportunity to fine-tune the analysis procedure and address systematic uncertainties as the collaboration works towards improving the statistical precision.
Finally, I will present the data-taking plan for MEG II, which will continue until the end of 2026, to achieve a sensitivity of $6 \times 10^{-14}$. Additionally, I will touch upon some potential other searches being considered by the MEG II collaboration, extending beyond the $\mu^+ \rightarrow e^+ \gamma$ decay channel.
Conveners:
Ilaria Brivio (Universita di Bologna)
Karsten Köneke (Universität Freiburg)
Matthias Schröder (Universität Hamburg)
Contact: eps23-conveners-t09 @desy.de
In this talk, the latest results from the CMS experiment on inclusive and simplified template cross section measurements of the Higgs boson are discussed. We cover the latest measurements for the fermionic decay channels in this presentation. Measurements of the Higgs boson couplings in the fermionic Higgs boson decay channels are also presented.
Detailed measurements of Higgs boson properties and its interactions can be performed using its decays into fermions, providing a key window into the nature of the Yukawa interactions. This talk presents the latest measurements of the Higgs boson properties in various leptonic (ττ, μμ) and quark (bb,cc) decay channels by the ATLAS experiment, using the full Run 2 pp collision dataset collected at 13 TeV. They include in particular measurements within the Simplified Template Cross Section framework, and their interpretations in specific scenarios of physics beyond the Standard Model, as well as generic extensions in the context of Standard Model Effective Field Theories.
The study of Higgs boson production in association with one or two top quarks provides a key window into the properties of the two heaviest fundamental particles in the Standard Model, and in particular into their couplings. This talk presents measurements of tH and ttH production in pp collisions collected at 13 TeV with the ATLAS detector using the full Run 2 dataset of the LHC.
In this talk, the latest results from the CMS experiment on inclusive and simplified template cross section measurements of the Higgs boson are discussed. We cover the latest measurements for the bosonic decay channels in this presentation. Measurements of the Higgs boson couplings in the bosonic Higgs boson decay channels are also presented.
Higgs boson decays to bosons provide very detailed measurements of its properties and interactions, and shine light on the mechanism of electroweak symmetry breaking. This talk presents the latest measurements of the Higgs boson coupling properties performed by the ATLAS experiment in various bosonic decay channels (WW, ZZ and γγ) using the full Run 2 pp collision dataset collected at 13 TeV. Results on production mode cross sections, Simplified Template Cross Sections (STXS), and their interpretations are presented. Specific scenarios of physics beyond the Standard Model are tested, as well as generic extensions within the framework of the Standard Model Effective Field Theory.
We present an overview of the most recent differential and fiducial Higgs boson cross section measurements from CMS. A variety of Higgs boson final states are covered.
The Higgs boson decay to two W bosons provides the largest branching fraction among bosonic decays, and can be used to perform some of the most precise measurements of the Higgs boson production cross sections. This talk presents Higgs boson fiducial and differential cross section measurements by the ATLAS experiment in the WW decay channel, targeting both the gluon-gluon fusion and vector-boson fusion production modes, as well as complementary measurements in the ZZ and gamma-gamma final states. The results are based on pp collision data collected at 13 TeV and 13.6 TeV during Run 2 and Run 3 of the LHC.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10 @desy.de
We study the impact of three different BSM models on the charge asymmetry defined for the 2SS$\ell$ (with $\ell= e, \mu$) plus jets ($n_j\geq2$) final state at the LHC, at $\sqrt{s}=13$ TeV, where the main SM contribution is $t\bar{t}W$ production. We consider the impact of a heavy neutral scalar/pseudoscalar arising from a 2HDM; a simplified RPV MSSM model with electroweakino production (Higgsino- or wino-like); and an effective theory with dimension-6 four-quark operators. We propose measuring the charge asymmetries differentially with respect to different kinematic observables, and inclusively/exclusively in the number of b-tagged jets in the final state ($n_b\geq\{1, 2, 3\}$). We show that the 2HDM and the four-quark operator scenarios may be sensitive to the detection of new physics, even for an integrated luminosity of 139 fb$^{-1}$.
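For reference, a charge asymmetry of this type is commonly defined from the numbers of events with two positively and two negatively charged same-sign leptons (the precise, possibly differential, definition used in this analysis may differ):

$$ A_c \;=\; \frac{N(\ell^{+}\ell^{+}) - N(\ell^{-}\ell^{-})}{N(\ell^{+}\ell^{+}) + N(\ell^{-}\ell^{-})}\,. $$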
Several physics scenarios beyond the Standard Model predict the existence of new particles that can subsequently decay into a pair of Higgs bosons. These include pairs of SM-like Higgs bosons (HH) as well as asymmetric decays into two scalars of different masses (SH). This talk summarises ATLAS searches for resonant HH and SH production with LHC Run 2 data. Several final states are considered, arising from various combinations of Higgs boson decays.
An overview of the results of searches for massive new resonances by the CMS Collaboration is presented. The results include searches for resonances such as W' and Z' particles decaying to final states with top quarks as well as charged Higgs boson searches. The CMS search program covers a variety of final states targeting different new physics models including extended Higgs sectors. The results are based on the large dataset collected during Run 2 of the LHC at a centre-of-mass energy of 13 TeV.
In the Standard Model, one doublet of complex scalar fields is the minimal content of the Higgs sector in order to achieve spontaneous electroweak symmetry breaking. However, several theories beyond the Standard Model predict a non-minimal Higgs sector and introduce charged scalar fields that do not exist in the Standard Model. As a result, singly- and doubly-charged Higgs bosons would be a unique signature of new physics with a non-minimal Higgs sector. As such, they have been extensively searched for in the ATLAS experiment, using proton-proton collision data at 13 TeV during the LHC Run 2. In this presentation, a summary of the latest experimental results obtained in searches for both singly- and doubly-charged Higgs bosons are presented.
The discovery of the Higgs boson with the mass of about 125 GeV completed the particle content predicted by the Standard Model. Even though this model is well established and consistent with many measurements, it is not capable of solely explaining some observations. Many extensions of the Standard Model addressing such shortcomings introduce additional Higgs-like bosons. The current status of searches for additional low- and high-mass neutral Higgs bosons based on the full LHC Run 2 dataset of the ATLAS experiment at 13 TeV are presented.
Following the potential discovery of new heavy particles at the LHC or a future collider, it will be crucial to determine their properties and the nature of the underlying physics. Of particular interest is the possibility of Beyond-the-Standard-Model (BSM) scalar trilinear couplings.
In this talk, I will consider as a specific example the scalar top (stop) trilinear coupling parameter, which controls the stop-stop-Higgs interaction in the Minimal Supersymmetric Standard Model, and I will discuss possible strategies for its experimental determination. I will show that the best prospects for determining the stop trilinear coupling arise from its quantum effects entering the prediction for the mass of the SM-like Higgs boson, in comparison to the measured value. Importantly, the Higgs-boson mass exhibits a high sensitivity to the stop trilinear coupling even for heavy masses of the non-standard particles.
Next, I will review different renormalisation prescriptions for the stop trilinear coupling, and their impact in the context of Higgs-boson mass calculations. I will show that a mixed renormalisation scheme is preferred in view of the present level of accuracy of this calculation, and I will clarify the source of potentially large logarithms that cannot be resummed with standard renormalisation group methods.
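For orientation, in the simplest one-loop approximation the dominant stop contribution to the SM-like Higgs boson mass can be written schematically as (a standard textbook formula, with $X_t = A_t - \mu\cot\beta$ the stop mixing parameter containing the trilinear coupling $A_t$, $M_S$ the average stop mass scale, and $v$ the electroweak vacuum expectation value)

$$ \Delta m_h^{2} \;\simeq\; \frac{3\, m_t^{4}}{4\pi^{2} v^{2}}\left[\ln\frac{M_S^{2}}{m_t^{2}} \;+\; \frac{X_t^{2}}{M_S^{2}}\left(1 - \frac{X_t^{2}}{12\,M_S^{2}}\right)\right], $$

which illustrates the pronounced sensitivity of the predicted Higgs mass to the trilinear coupling exploited in this talk.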
Axion-like particles (ALPs) are gauge singlets under the Standard Model (SM) and appear in many well-motivated extensions of the SM. Since they arise as pseudo-Nambu-Goldstone bosons of an approximate shift symmetry, the masses of ALPs can naturally be much smaller than the energy scale of the underlying UV model, making them an attractive target for the Large Hadron Collider (LHC) and the future High-Luminosity LHC (HL-LHC). In this talk, we present a method for determining the nature of a possible signal in searches for ALPs produced via gluon fusion and decaying into top-antitop-quark ($t\bar{t}$) final states in proton-proton scattering at $\sqrt{s} = 13$ TeV. Such a signal has the potential to explain a local $3.5\, \sigma$ excess in resonant $t\bar{t}$ production at a mass scale of approximately $400$ GeV, observed by the CMS collaboration in LHC Run-II data. In particular, we investigate how ALP production can be distinguished from the production of pseudoscalar Higgs bosons as they arise in models featuring a second Higgs doublet, making use of the invariant $t\bar{t}$ mass distribution and angular correlations sensitive to $t\bar{t}$ spin correlation. Furthermore, comparisons to existing experimental bounds from the LHC are presented and discussed.
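The ALP couplings relevant for the gluon-fusion production and $t\bar{t}$ decay discussed above can be written schematically as (normalization conventions vary and are omitted here; $f_a$ denotes the ALP decay constant, and the Wilson coefficients $c_{\tilde{G}}$ and $c_t$ are model-dependent)

$$ \mathcal{L}_{\rm ALP} \;\supset\; c_{\tilde{G}}\,\frac{a}{f_a}\, G^{a}_{\mu\nu}\tilde{G}^{a,\mu\nu} \;+\; c_t\,\frac{\partial_\mu a}{f_a}\,\bar{t}\,\gamma^{\mu}\gamma_{5}\, t\,. $$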
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12 @desy.de
With the restart of the proton-proton collision program in 2022 (Run 3) at the Large Hadron Collider (LHC), the ATLAS detector aims to double the integrated luminosity accumulated during the ten previous years of operation. After this data-taking period the LHC will undergo an ambitious upgrade program to be able to deliver an instantaneous luminosity of $7.5\times 10^{34}$ cm$^{-2}$ s$^{-1}$, allowing the collection of more than 3 ab$^{-1}$ of data at $\sqrt{s}=$ 14 TeV. This unprecedented data sample will allow ATLAS to perform several precision measurements to constrain the Standard Model (SM) in yet unexplored phase space, in particular in the Higgs sector, which is only accessible at the LHC. To benefit from such a rich data sample, it is fundamental to upgrade the detector to cope with the challenging experimental conditions, which include huge levels of radiation and pile-up events. The ATLAS upgrade comprises a completely new all-silicon tracker with extended rapidity coverage that will replace the current inner tracker detector; a redesigned trigger and data acquisition system for the calorimeters and muon systems, allowing the implementation of a free-running readout system; and a new subsystem, the High Granularity Timing Detector, which will aid the track-vertex association in the forward region by incorporating timing information into the reconstructed tracks. A final ingredient, relevant to almost all measurements, is a precise determination of the delivered luminosity with systematic uncertainties below the percent level. This challenging task will be achieved by collecting information from several detector systems using different and complementary techniques. This presentation will describe the status of the ongoing ATLAS detector upgrade and the main results obtained with the prototypes, giving a concise, yet global, view of the whole upgrade project.
In the high-luminosity era of the Large Hadron Collider, the instantaneous luminosity is expected to reach unprecedented values, resulting in up to 200 proton-proton interactions in a typical bunch crossing. To cope with the resulting increase in occupancy, bandwidth and radiation damage, the ATLAS Inner Detector will be replaced by an all-silicon system, the Inner Tracker (ITk). The innermost part of the ITk will consist of a pixel detector, with an active area of about 13 m$^2$. To deal with the differing requirements in terms of radiation hardness, power dissipation and production yield, several silicon sensor technologies will be employed in the five barrel and endcap layers. Prototype modules assembled with RD53A readout chips have been built to evaluate their production rate, and irradiation campaigns were carried out to evaluate their thermal and electrical performance before and after irradiation. A new powering scheme – serial – will be employed in the ITk pixel detector, helping to reduce the material budget of the detector as well as the power dissipation. This contribution presents the status of the ITk-pixel project, focusing on the lessons learned and the biggest challenges towards production, from mechanical structures to sensors, and summarizes the latest results on closest-to-real demonstrators built using prototypes of modules, electrical services and cooling services.
The inner detector of the present ATLAS experiment has been designed and developed to function in the environment of the present Large Hadron Collider (LHC). At the ATLAS Phase-II Upgrade, the particle densities and radiation levels will exceed current levels by a factor of ten. The instantaneous luminosity is expected to reach unprecedented values, resulting in up to 200 proton-proton interactions in a typical bunch crossing. The new detectors must be faster and they need to be more highly segmented. The sensors used also need to be far more resistant to radiation, and they require much greater power delivery to the front-end systems. At the same time, they cannot introduce excess material which could undermine tracking performance. For those reasons, the inner tracker of the ATLAS detector was redesigned and will be rebuilt completely. The ATLAS Upgrade Inner Tracker (ITk) consists of several layers of silicon particle detectors. The innermost layers will be composed of silicon pixel sensors, and the outer layers will consist of silicon microstrip sensors. This contribution focuses on the strip region of the ITk. The central part of the strip tracker (barrel) will be composed of rectangular short (~2.5 cm) and long (~5 cm) strip sensors. The forward regions of the strip tracker (end-caps) consist of six disks per side, with trapezoidal-shaped sensors of various lengths and strip pitches. After the completion of final design reviews in key areas, such as sensors, modules, front-end electronics and ASICs, a large-scale prototyping program has been successfully completed in all areas. We present an overview of the Strip System and highlight the final design choices for sensors, module designs and ASICs. We will summarise results achieved during prototyping and the current status of pre-production and production of various detector components, with an emphasis on QA and QC procedures.
In June 2022 the data taking of the Belle II experiment was stopped for the Long Shutdown 1 (LS1), which is primarily required to install a new two-layer DEPFET detector (PXD) and upgrade components of the accelerator. The whole silicon tracker (VXD) will be extracted from Belle II, and the outer four-layer double-sided strip detector (SVD) will be split into its two halves to allow access for the PXD installation. A new VXD commissioning phase will then begin, so that the detector is ready to take data by the end of 2023. We describe the challenges and status of this VXD upgrade.
In addition, we report on the performance of the SVD, which has been operated since 2019. The high hit efficiency and the large signal-to-noise ratio are monitored via online data-quality plots.
The good cluster-position resolution is estimated using the unbiased residuals with respect to the tracks, showing reasonable agreement with expectations. A novel procedure to group SVD hits event-by-event, based on their time, has been developed. Using this grouping information during reconstruction significantly reduces the fake rate while preserving the tracking efficiency.
So far, in the layer closest to the interaction point, the SVD average occupancy has been less than 0.5%, which is well below the estimated limit for acceptable tracking performance. As the luminosity increases, higher machine backgrounds are expected, and the excellent hit-time information of the SVD can be exploited for background rejection. We have developed a method that uses the SVD hit time to estimate the collision time (event-T0) with a precision similar to that of the estimate based on the drift chamber. The execution time needed to compute the SVD event-T0 is three orders of magnitude smaller, allowing a faster online reconstruction, which is crucial in a high-luminosity regime. Furthermore, the front-end chip (APV25) is operated in “multi-peak” mode, which reads six samples. To reduce background occupancy, trigger dead time and data size, a 3/6-mixed acquisition mode, based on the timing precision of the trigger, has been successfully tested in physics runs.
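As a minimal illustration of the idea behind a hit-time-based event-T0 (an inverse-variance-weighted combination of cluster times; the actual Belle II algorithm also groups hits in time first, so this sketch is only indicative):

import numpy as np

def svd_event_t0(cluster_times_ns, cluster_sigmas_ns):
    # Illustrative only: combine SVD cluster times into an event-T0 estimate
    # as an inverse-variance-weighted mean, returning the value and its uncertainty.
    t = np.asarray(cluster_times_ns, dtype=float)
    w = 1.0 / np.asarray(cluster_sigmas_ns, dtype=float) ** 2
    t0 = np.sum(w * t) / np.sum(w)
    return t0, 1.0 / np.sqrt(np.sum(w))

# toy example: four clusters with ~3 ns single-cluster time resolution
print(svd_event_t0([1.2, -0.4, 0.8, 0.1], [3.0, 3.0, 3.0, 3.0]))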
Finally, concerning radiation damage, the SVD dose is estimated from the correlation of the SVD occupancy with the dose measured by the diamonds of the monitoring and beam-abort system. Although the sensor current and the strip noise have shown a moderate increase due to radiation, we do not expect the detector performance to be seriously degraded over the detector's expected lifespan.
With the emergence of advanced silicon sensor technologies such as LGADs, it is now possible to achieve exceptional time-measurement precision below 50 ps. As a result, the implementation of time-of-flight (TOF) particle identification for charged hadrons at future $e^{+}e^{-}$ Higgs-factory detectors has gained increasing attention. Other particle identification techniques require either a gaseous tracker with excellent dE/dx (or dN/dx) resolution, or a RICH, which introduces additional material in front of the calorimeter.
TOF measurements can be implemented either in the outer layers of the tracker or in the electromagnetic calorimeter, and are thus particularly interesting as a PID method for detector concepts based on all-silicon trackers and optimised for particle-flow reconstruction.
In this presentation, we will explore potential integration scenarios of TOF measurements in a future Higgs-factory detector, using the International Large Detector (ILD) as an example. We will focus on the two crucial ingredients of TOF particle identification: track-length reconstruction and the TOF measurement itself. The discussion will highlight the vital impact of precise track-length reconstruction and of various TOF measurement techniques, including recently developed machine-learning approaches. We will evaluate the performance in terms of kaon-pion and kaon-proton separation as a function of momentum, and discuss potential physics applications.
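For orientation, the standard kinematic relation behind TOF particle identification (not specific to ILD) gives the particle mass from the measured momentum $p$, track length $L$ and time of flight $t$:
\[
m = \frac{p}{c}\sqrt{\frac{c^{2}t^{2}}{L^{2}} - 1}\,,
\]
so separating kaons from pions at a few GeV over metre-scale flight paths requires time resolutions of a few tens of picoseconds together with a correspondingly precise track-length reconstruction.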
The increase of the particle flux (pile-up) at the HL-LHC, with instantaneous luminosities up to $L \simeq 7.5 \times 10^{34}$ cm$^{-2}$ s$^{-1}$, will have a severe impact on the ATLAS detector reconstruction and trigger performance. The end-cap and forward region, where the liquid-argon calorimeter has coarser granularity and the inner tracker has poorer momentum resolution, will be particularly affected. A High Granularity Timing Detector (HGTD) will be installed in front of the LAr end-cap calorimeters for pile-up mitigation and luminosity measurement. The HGTD is a novel detector introduced to augment the new all-silicon Inner Tracker in the pseudo-rapidity range from 2.4 to 4.0, adding the capability to measure charged-particle trajectories in time as well as space. Two silicon-sensor double-sided layers will provide precision timing information for minimum-ionising particles with a resolution as good as 30 ps per track in order to assign each particle to the correct vertex. Readout cells have a size of 1.3 mm × 1.3 mm, leading to a highly granular detector with 3.7 million channels. Low Gain Avalanche Detector (LGAD) technology has been chosen as it provides enough gain to reach the large signal-to-noise ratio needed. The requirements and overall specifications of the HGTD will be presented, as well as the technical design and the project status. The R&D effort carried out to study the sensors, the readout ASIC and the other components, supported by laboratory and test-beam results, will also be presented.
The LHCb experiment has been upgraded during the second long shutdown of the Large Hadron Collider at CERN, and the new detector is currently operating at the LHC. The Vertex Locator (VELO) is the detector surrounding the interaction region of the LHCb experiment and is responsible for reconstructing the proton-proton collision points (primary vertices) as well as the decay vertices of long-lived particles (secondary vertices).
The VELO is composed of 52 modules with hybrid pixel detector technology, operating at just 5.1 mm from the beams. The sensors consist of 200 μm thick n-on-p planar silicon sensors, read out via 3 VeloPix ASICs. The sensors are attached to a 500 μm thick silicon plate, which embeds 19 micro-channels for the circulation of the CO$_2$ evaporative cooling. The VELO operates in an extreme environment, which poses significant challenges to its operation. During the lifetime of the detector, the sensors are foreseen to accumulate an integrated fluence of up to 8×10$^{15}$ 1 MeV n$_{eq}$ cm$^{−2}$, roughly equivalent to a dose of 400 MRad. Moreover, due to the geometry of the detector, the sensors will face a highly non-uniform irradiation, with fluences in the hottest regions expected to vary by a factor of 400 within the same sensor. The highest-occupancy ASICs are expected to sustain a maximum pixel hit rate of 900 Mhit/s and an output data rate exceeding 15 Gbit/s. The design, operation and early results obtained during the first year of commissioning will be presented.
The high-luminosity upgrade of the LHC (HL-LHC) brings unprecedented requirements for real-time, precise bunch-by-bunch online luminosity measurement and beam-induced background monitoring. A key component of the CMS Beam Radiation Instrumentation and Luminosity (BRIL) system is a stand-alone luminometer, the Fast Beam Condition Monitor (FBCM), which is fully independent from the CMS central trigger and data acquisition services and able to operate at all times with an asynchronous readout. FBCM utilizes a dedicated front-end ASIC to amplify the signals from CO2-cooled silicon-pad sensors with a timing resolution of a few nanoseconds, also enabling the measurement of beam-induced background. FBCM uses a modular design with two half-disks of twelve modules at each end of CMS, with 4 service modules placed around the disk edge at a radius of reduced radiation fluence. The electronics system design adapts several components from the CMS Tracker for the power, control and readout functionalities. The dedicated FBCM ASIC contains 6 channels with 600 e$^-$ ENC and an adjustable shaping time to optimize the noise with regard to the sensor leakage current. Each channel outputs a single binary high-speed asynchronous signal carrying the Time-of-Arrival and Time-over-Threshold information. The chip output signal is sent via a radiation-hard gigabit transceiver and an optical link to the back-end electronics for analysis. This contribution reports on the design and the testing program for the FBCM detector.
Conveners:
Alessia Bruni (INFN Bologna)
Marie-Lena Dieckmann (Universität Hamburg)
Gwenhaël Wilberts Dewasseige (UC Louvain)
Contact: eps23-conveners-t14@desy.de
Since 1983 the Italian groups collaborating with Fermilab (US) have been running a 2-month summer training program for Master's students. While in the first year the program involved only 4 physics students, in the following years it was extended to engineering students. Many students have continued their collaboration with Fermilab through their Master's theses and PhDs.
The program has involved almost 600 Italian students from more than 20 Italian universities. Each intern is supervised by a Fermilab mentor responsible for the training program. Training programs have spanned Tevatron, CDF, CMS, Muon (g-2), Mu2e, SBN (MicroBooNE, SBND and ICARUS) and DUNE design and data analysis, the development of particle detectors, the design of electronic and accelerator components, the development of infrastructure and software for handling tera-scale data, quantum computing, and research on superconducting elements and accelerating cavities.
In 2015 the University of Pisa included the program within its own educational programs. Summer students are enrolled at the University of Pisa for the duration of the internship, and at the end of the internship they write summary reports on their achievements. After a positive evaluation by a University of Pisa examining board, interns are awarded 6 ECTS credits for their Diploma Supplement. In 2020 and 2021 the program was cancelled due to the public-health emergency, but in 2022 it was restarted and allowed a cohort of 21 students to be trained for nine weeks at Fermilab. We are now organizing the 2023 program.
REINFORCE (Research Infrastructures FOR Citizens in Europe) was a three-year EU SwafS project which engaged citizens in active collaboration with the scientists working at large research infrastructures across Europe. The overall aim was to bridge the gap between them and reinforce society's science capital. The citizen scientists had at their disposal data from four different “discovery demonstrators” hosted on the online Zooniverse platform.
The demonstrators asked for citizen contributions to frontier research in areas such as gravitational-wave astronomy, deep-sea neutrino telescopes, particle searches at CERN and cosmic rays. The task of the citizens was to help the scientists optimize the detectors and/or the reconstruction algorithms.
The focus of the talk will be on the demonstrator titled “Search for new particles at CERN”, where citizen scientists visually inspected events collected by the ATLAS detector at the LHC and searched for signatures of new particles. To make this possible, the demonstrator adopted a three-stage architecture. The first two stages used simulated data to train citizens, but also to allow for a quantitative assessment of their performance and a comparison with machine-learning algorithms. The third stage used real data, providing two research paths: (a) the study of Higgs boson decays to two photons, one of which could be converted to an electron-positron pair by interaction with detector material, and (b) the search for yet undiscovered long-lived particles, predicted by certain Beyond-the-Standard-Model theories.
The results of 360,000 classifications showed that citizen scientists can carry out complicated tasks responsibly, with a performance comparable to that of a purpose-built machine-based algorithm, and can identify interesting patterns or errors in the reconstruction of individual events. Moreover, the demonstrator showed that the statistical combination of user responses (user consensus) is quite a powerful tool that can be further considered and exploited in fundamental scientific research.
This demonstrator's approach to applying citizen science to high energy physics proved that users can contribute to the field, and it also identified areas where further study is necessary.
The International Particle Physics Outreach Group (IPPOG) is a network of scientists, science educators and communication specialists working across the globe in informal science education and public engagement for particle physics. The primary methodology adopted by IPPOG includes the direct participation of scientists active in current research with education and communication specialists, in order to effectively develop and share best practices in outreach. IPPOG member activities include the International Particle Physics Masterclass programme, the International Day of Women and Girls in Science, Worldwide Data Day, International Muon Week and International Cosmic Day organisation, and participation in activities including public talks, festivals, exhibitions, teacher training, student competitions and open days at local institutes. These independent activities, often carried out in a variety of languages for audiences with a variety of backgrounds, all serve to gain the public's trust and to improve worldwide understanding and support of science. We present our vision of IPPOG as a strategic pillar of particle physics, fundamental research and evidence-based decision-making around the world.
The war on Ukraine has significantly affected scientific cooperation and communication in particle physics, as well as in many other fields of scientific, cultural and educational exchange. Immediately after the start of the war in February 2022, many scientific institutions paused or banned scientific cooperation and exchange with Russian and Belarusian institutes and their scientists. Publications were put on hold or even banned if Russian scientists were on the author list.
The Science4Peace Forum was created in March 2022 as a consequence of these restrictions, as a completely independent forum for open communication among scientists, using independent Zoom rooms and web pages. The basic ideas of the S4P Forum are fully in line with the IUPAP policy of supporting and encouraging free scientific exchange. In the course of discussions on the consequences of the war on Ukraine, the S4P Forum organized a high-level panel discussion on "Sanctions in Science - One Year of Sanctions" [1].
The war on Ukraine has also enormously increased the risk of nuclear escalation. Together with 14 Nobel laureates and many scientists, the S4P Forum launched an appeal to subscribe to the "no first use" policy and urged governments to adhere to the Treaty on the Prohibition of Nuclear Weapons adopted by the United Nations [2].
The S4P Forum fully supports the ideas and activities originating from the International Year of Basic Sciences for Sustainable Development (IYBSSD) to open and to keep open discussion channels at all levels, using Science as a common language.
This presentation is submitted on behalf of the Science4Peace Forum.
The astrophysics, astronomy and high-energy-physics communities have been pioneers in establishing Virtual Research and Learning Communities (VRLCs) [1], generating productive international consortia in virtual research environments and training the new generation of scientists. These environments are key to improving accessibility and inclusion for students and researchers in developing countries. In this talk we will discuss one in particular: LA-CoNGA Physics (Latin American alliance for Capacity buildiNG in Advanced physics) [2].
LA-CoNGA physics aims to support the modernization of the university infrastructure and the pedagogical offer in advanced physics in four Latin American countries: Colombia, Ecuador, Peru and Venezuela. This virtual teaching and research network is composed of 3 partner universities in Europe and 8 in Latin America, high-level scientific partners (CEA, CERN, CNRS, DESY, ICTP), and several academic and industrial partners. The project is co-funded by the Education, Audiovisual and Culture Executive Agency (EACEA) of the European Commission.
Open Science education and Open Data are at the heart of our operations. In practice, LA-CoNGA Physics has created a set of common, inter-institutional postgraduate courses in advanced physics (high energy physics and complex systems), supported by the installation of interconnected instrumentation laboratories and an open e-learning platform. This program is inserted as a specialization in the physics master's programs of the 8 Latin American partners in Colombia, Ecuador, Peru and Venezuela. It is based on three pillars: courses in high-energy-physics theory/phenomenology, data science and instrumentation. The program is complemented by transversal activities such as seminars, citizen science projects and open science hackathons [3].
In the current context, VRLCs and e-learning platforms are contributing to solving challenges such as distance education during the COVID-19 pandemic and the internationalization of institutions in developing countries.
[1] http://www.oecd.org/sti/inno/international-distributed-research-infrastructures.pdf
[2] https://laconga.redclara.net/en/home/
[3] https://laconga.redclara.net/hackathon/
The ATLAS Collaboration has developed a number of highly successful programmes featuring educational content for schools and universities, as well as communication strategies to engage the broader public. The ATLAS Open Data project has successfully delivered open-access data, simulations, documentation and related resources for education and outreach use in High Energy Physics and related computer sciences based on data collected in proton–proton collisions at 8 TeV and 13 TeV. These resources have found substantial application worldwide in schools, universities and other public settings. Building on this success and in support of CERN’s Open Data Policy the ATLAS experiment plans to continue to release 13 TeV data for educational purposes and – for the first time – also for research purposes. The ATLAS Communication Programme prepares substantial web content through online press statements, briefings that explain topical result releases to the public, video content (interviews with analysers, tours and live Q&As), and social media engagement. We will summarise the landscape of the ATLAS Open Data project and discuss communication strategies, types of content, and the effect on target audiences, with best practices shared.
The “Congrès des deux infinis” (“Congress of the two infinities”) was a major education and outreach festival organized in autumn 2022 on Réunion Island, in parallel with the international conference EDSU2022, the “Fourth World Summit on Exploring the Dark Side of the Universe”. Over two weeks, dozens of events – public lectures, conferences in schools, university seminars, school contests, teacher training sessions, topical round tables, etc. – allowed more than 6,500 attendees (teachers, students, general audience) from all over the island to benefit from this uncommon concentration of French-speaking physics researchers.
This Congress, unique in many ways, did not come about by chance. It is the culmination of many activities organized over the past decade around the “infinitely small” and the “infinitely large” basic-science topics by a group of extremely motivated local teachers, far from mainland France – and thus with limited access to research institutes, scientists or even educational resources. This endeavor started with the participation of some of these teachers in the French Teacher Programme, organized yearly at CERN for 15 years, and then expanded with the help of CNRS/IN2P3 (the leading public research institute for these fields in France) and the Ministry of National Education.
After presenting the main achievements of these teachers over the years – characterized by the recurring use of the unique Réunion environment as a playground for educational activities – this parallel talk will describe in detail the contents of the “Congrès des deux infinis”, with a focus on the organizational difficulties that had to be overcome during the preparation phase and later, during the Congress itself, to give all participants a high-quality experience. To conclude, the experience gained along the way will be shared, for colleagues (researchers or teachers) who may be interested in organizing a similar event.
INFN Kids (*) is a science education project of the Italian National Institute for Nuclear Physics aimed at primary- and middle-school-age children. The initiative aims to raise children's curiosity about science, with a focus on physics, by illustrating the different research fields that INFN pursues, the development of technologies and their applications in everyday life, and by presenting the people who animate science. It brings together technicians and researchers from thirteen units and national labs to design and realize multimedia products, laboratory-based activities, comics, science demos and exhibits. The activities are conducted online and in person in schools, at science festivals and at INFN's sites.
The adopted methodologies and the didactic tools (lectures, interactive lessons, hands-on sessions, science games) involve children in the direct exploration of natural phenomena.
Given the manifold plan of activities, the project also addresses teachers and families, which has allowed us to expand and adapt the formats to meet the audience's requests.
Here we present an overview of the ongoing initiatives to share our experiences, and in particular we illustrate the comics centered on the characters Leo and Alice, who guide children in the investigation of the micro- and macro-world, and the laboratory-based activities designed to introduce kids to some fundamental concepts related to matter and its inner structure.
(*) https://web.infn.it/infn-kids/
The climate crisis and the degradation of the world's ecosystems require humanity to take immediate action. The international scientific community has a responsibility to limit the negative environmental impacts of basic research. The HECAP+ communities (High Energy Physics, Cosmology, Astroparticle Physics, and Hadron and Nuclear Physics) make use of common and similar experimental infrastructure, such as accelerators and observatories, and rely similarly on the processing of big data. Our communities therefore face similar challenges to improving the sustainability of our research. This document aims to reflect on the environmental impacts of our work practices and research infrastructure, to highlight best practice, to make recommendations for positive changes, and to identify the opportunities and challenges that such changes present for wider aspects of social responsibility.
Conveners:
Livia Conti (INFN)
Carlos Perez de los Heros (Uppsala University)
Martin Tluczykont (Universität Hamburg)
Gabrijela Zaharijas (UNG)
Contact: eps23-conveners-t01 @desy.de
The IceCube Neutrino Observatory has measured the high-energy astrophysical neutrino flux but has not yet detected prompt atmospheric neutrinos originating from charmed-meson decays. Understanding the prompt neutrino flux is crucial for improving models of high-energy hadronic interactions and advancing astrophysical neutrino measurements. We present a combined analysis of cascades and up-going tracks to explore the subdominant prompt neutrino flux. We propose a robust method for calculating upper limits on the prompt flux, taking into account the model dependence on the astrophysical and conventional atmospheric neutrino fluxes.
We investigate the kinematical regions that are important for producing prompt neutrinos in the atmosphere and in the forward region of the LHC, as probed by different experiments. We illustrate the results in terms of the center-of-mass nucleon-nucleon collision energies and rapidities of neutrinos and of the parent heavy-flavoured hadrons. We find overlap in only part of the kinematic space and we point out the physics requirements needed to appropriately describe the two regimes.
The contribution is based on W. Bai et al. [arXiv:2212.07865]
In recent years, the IceCube Neutrino Observatory has started to unravel the high-energy neutrino sky. The discoveries of TXS 0506+056 and NGC 1068 as neutrino emitters and of neutrino emission from the galactic plane hint at a zoo of possible neutrino sources. However, open questions regarding the production mechanisms remain that require a new generation of neutrino telescopes to answer.
The Pacific Ocean Neutrino Experiment (P-ONE) is a planned, next-generation neutrino telescope off the coast of Vancouver Island, where it will leverage deep-sea infrastructure provided by Ocean Networks Canada (ONC). Once completed, P-ONE aims for greatly improved resolutions compared to IceCube, complementing other next-generation telescopes, such as KM3NeT. The first detector line is currently under construction and targeted for deployment in 2024. In this contribution, I will present the status of the first detector line and prospects for the full detector array.
Dark compact objects, like primordial black holes, can span a large range of masses depending on their time and mechanism of formation. In particular, they can have subsolar masses and form binary systems with an inspiral phase that can last for long periods of time. Additionally, these signals have a slow increase in frequency and are therefore well suited to searches with continuous gravitational-wave methods. We present a new pipeline called COmpact Binary Inspiral (COBI), based on the Band Sampled Data (BSD) framework, which specifically targets these signals. We describe the method and propose a possible setup for a search on O4 LIGO-Virgo data. We characterize the pipeline performance in terms of sensitivity and computing cost of the search, corroborating the results with software injections in O3 data.
The current and upcoming generations of gravitational wave experiments represent an exciting step forward in terms of detector sensitivity and performance. Key upgrades at the LIGO, Virgo and KAGRA facilities will see the next observing run (O4) probe a spatial volume around four times larger than the previous run (O3), and design implementations for e.g. the Einstein Telescope, Cosmic Explorer and LISA experiments are taking shape to explore a wider frequency range and probe cosmic distances.
In this context, however, a number of imminent data analysis problems face the gravitational wave community. It will be crucial to develop tools and strategies to analyse (amongst other scenarios) signals that arrive coincidentally in detectors, longer signals in the presence of non-stationary noise or other shorter transients, as well as noisy, potentially correlated, coherent stochastic backgrounds. With these challenges in mind, we develop PEREGRINE, a new sequential simulation-based inference approach designed to study broad classes of gravitational-wave signals.
In this talk, I discuss the pressing need for flexible, simulation-efficient, targeted inference tools like PEREGRINE before demonstrating its accuracy and robustness through direct comparison with established likelihood-based methods. Specifically, we show that we are able to fully reconstruct the posterior distributions for every parameter of a spinning, precessing compact binary coalescence using one of the most physically detailed and computationally expensive waveform approximants (SEOBNRv4PHM). Crucially, we are able to do this using only 2% of the waveform evaluations required in e.g. nested-sampling approaches, highlighting our simulation efficiency as the state of the art in gravitational-wave data analysis.
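As a generic illustration of the simulation-based inference principle (not a description of PEREGRINE's specific network or training scheme), a classifier trained to distinguish jointly drawn pairs $(x,\theta)\sim p(x,\theta)$ from independently drawn pairs $(x,\theta)\sim p(x)\,p(\theta)$ has the optimal output
\[
d^{*}(x,\theta)=\frac{p(x,\theta)}{p(x,\theta)+p(x)\,p(\theta)}
\quad\Longrightarrow\quad
r(x,\theta)\equiv\frac{d^{*}}{1-d^{*}}=\frac{p(x\mid\theta)}{p(x)},
\qquad
p(\theta\mid x)=r(x,\theta)\,p(\theta),
\]
so the posterior is obtained without ever evaluating the likelihood explicitly; sequential schemes then concentrate new simulations in the regions of $\theta$ favoured by earlier rounds, which is the generic mechanism behind the reduction in waveform evaluations quoted above.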
Dust particles (diameter ≳ 0.5 µm) present inside the clean environments of ground-based interferometric gravitational-wave detectors can contribute significantly to light scattering, adding to the residual scattering originating from imperfections of high-quality optical components. Stray light, i.e. light that leaves the main optical beam, picks up phase noise by reflecting off mechanically noisy surfaces and couples back into the main beam, is suspected to contribute much of the unexplained excess noise observed in the mid-low frequency band. Dust particles can scatter light both when deposited on the optics and when crossing the beam as they move in space. Knowing the amount and size distribution of dust particles present in the different environments, we can predict the amount of scattered light they generate and elaborate mitigation strategies. We describe the dust monitoring system we have set up at Virgo to quantify the amount of dust that deposits both on in-air benches and in vacuum towers, serving both as a check of cleanliness procedures and as an alert system. We also describe work to estimate the effect of dust particles in the beam pipes of the future Einstein Telescope: this is fundamental for setting cleanliness requirements for the production and installation of the ~100 km of vacuum tubes of the interferometer's main arms. Finally, we describe an experimental facility to measure the particles deposited on witness samples and to measure the scattering properties of surfaces, both clean and contaminated by dust.
Conveners:
Fady Bishara (DESY)
James Frost (University of Oxford)
Silvia Scorza (LPSC Grenoble)
Contact: eps23-conveners-t03 @desy.de
DarkSide-50 is an experiment for the direct detection of dark matter at Laboratori Nazionali del Gran Sasso. It uses a dual-phase time projection chamber filled with low-radioactivity argon extracted from underground. Thanks to single-electron sensitivity and with an analysis based on the ionization signal alone, DarkSide-50 set the most stringent exclusion limit on WIMPs with masses of a few GeV/c$^2$. A recent analysis improves the existing exclusion limits for spin-independent WIMP-nucleon interactions in the [1.2, 3.6] GeV/c$^2$ mass range by a factor of 10. Thanks to the inclusion of the Migdal effect, the exclusion limits are extended down to a dark matter mass of 40 MeV/c$^2$. Furthermore, new constraints are set on the interactions of dark matter particles with electron final states, namely low-mass WIMPs interacting with electrons, galactic axions, dark photons, and sterile neutrinos.
XENONnT is the follow-up to the XENON1T experiment, aiming for the direct detection of dark matter in the form of weakly interacting massive particles (WIMPs) using a liquid xenon (LXe) time projection chamber (TPC). The detector, operated at Laboratori Nazionali del Gran Sasso (LNGS) in Italy, features a total LXe mass of 8.5 tonnes, of which 5.9 tonnes are active. XENONnT has completed its first science run and is currently taking data for the second science run. It has achieved an unprecedented purity both for electronegative contaminants, with an electron lifetime exceeding 10 ms thanks to a novel purification in the liquid phase, and for radioactive radon, with an activity of 1.72±0.03 $\mu$Bq/kg thanks to a novel radon distillation column.
This talk will present the latest results from the search for nuclear recoils induced by WIMPs using data from the first science run with an exposure of 1.1 tonne-year. In addition, results from other searches for non-standard interactions and new particles via their electronic interactions will be shown.
LUX-ZEPLIN (LZ) is a direct-detection dark matter experiment hosted in the Davis Campus of the Sanford Underground Research Facility in Lead, South Dakota. LZ's central detector is a dual-phase time projection chamber utilizing 7 active tonnes of liquid xenon (LXe) and is aided by an LXe "skin" detector and a liquid-scintillator-based outer detector to veto events inconsistent with dark matter particles. LZ recently reported its first results on spin-independent and spin-dependent interactions between nucleons and Weakly Interacting Massive Particles (WIMPs), with an exposure of 60 live days and a fiducial mass of 5.5 tonnes, setting world-leading exclusion limits on spin-independent WIMP-nucleon scattering for WIMP masses > 9 GeV/c$^2$.
This talk will provide an overview of the experiment and details of the recent LZ results, as well as projections for LZ’s full exposure consisting of 1000 live days.
Cryogenic Rare Event Search with Superconducting Thermometers (CRESST) is a direct-detection dark matter (DM) search experiment located at Laboratori Nazionali del Gran Sasso (LNGS) in Italy. The experiment employs cryogenic scintillating crystals to search for nuclear recoils from DM particles, and has repeatedly achieved thresholds below 100 eV in its third phase (CRESST-III) for a wide range of target materials including CaWO$_4$, LiAlO$_2$, Al$_2$O$_3$, and Si. The sensitivity to small energy depositions makes CRESST one of the leading experiments in the search for sub-GeV dark matter. A major challenge for all low-mass dark matter searches is the presence of an unknown event population at very low energies, called the low-energy excess (LEE). The scientific effort at CRESST in the latest run has been directed primarily towards understanding the origin of this excess; nevertheless, we also set new limits on low-mass DM. We report dark matter search results as well as updates on the understanding of the LEE from CRESST-III, and conclude the talk with our future plans.
With its increasing statistical significance, the DAMA/LIBRA annual modulation signal is a cause of tension in the field of dark matter direct detection. A standard dark matter explanation for this signal is highly incompatible with the null results of numerous other experiments. The COSINUS experiment aims at a model-independent cross-check of the DAMA/LIBRA signal claim.
For such a model-independent cross-check, the same detector material as used by DAMA has to be employed. Thus COSINUS will use NaI crystals operated as cryogenic scintillating calorimeters at millikelvin temperatures. Such a setup enables independent measurement of both temperature and scintillation light signals via transition edge sensors (TESs). The dual-channel readout allows particle discrimination on an event-by-event basis, as the amount of light produced depends on the particle type (light quenching).
The construction of COSINUS started in December 2021 at the LNGS underground laboratory in central Italy. We will report on the current status of the construction and results from our prototype detectors.
The NEWS-G collaboration is searching for light dark matter candidates using a novel gaseous detector concept, the spherical proportional counter. Access to the mass range from 0.05 to 10 GeV is enabled by the combination of a low energy threshold, light gaseous targets (H, He, Ne), and highly radio-pure detector construction. Initial NEWS-G results obtained with SEDINE, a 60 cm diameter spherical proportional counter operating at the LSM (France), excluded for the first time WIMP-like dark matter candidates down to masses of 0.5 GeV. First physics results using the commissioning data of S140, a 140 cm diameter spherical proportional counter constructed at the LSM using 4N copper with a 500 µm electroplated inner layer, will be presented, along with new developments in readout sensor technologies using resistive materials and multi-anode readout that enable its operation. The first physics campaign at SNOLAB (Canada) with this detector was recently completed. To suppress backgrounds and thereby enhance the sensitivity of future detectors, NEWS-G is developing novel electroforming techniques. The potential to achieve sensitivity reaching the neutrino floor in light dark matter searches with a next-generation detector, DarkSPHERE, will also be presented.
Conveners:
Summer Blot (DESY)
Pau Novella (IFIC)
Davide Sgalaberna (ETH)
Jessica Turner (Durham University)
Contact: eps23-conveners-t04 @desy.de
T2K is a long-baseline neutrino experiment which exploits neutrino and antineutrino beams produced at the Japan Proton Accelerator Research Complex (J-PARC) to provide world-leading measurements of the parameters governing neutrino oscillation. Neutrino oscillations are measured by comparing neutrino rates and spectra at a near detector complex, located at J-PARC, and at the water-Cherenkov far detector, Super-Kamiokande, located 295 km away.
The latest T2K results include multiple analysis improvements; in particular, a new sample is added at the far detector requiring the presence of a pion in muon-neutrino interactions. It is the first time that a pion sample is included in the study of neutrino disappearance at T2K and, for the first time, a sample with more than one Cherenkov ring is exploited in the T2K oscillation analysis, opening the way for future samples with charged- and neutral-pion tagging. The inclusion of such a sample ensures proper control of the oscillated spectrum over a larger neutrino-energy range and of subleading neutrino-interaction processes.
T2K is also engaged in a major effort to perform a joint fit with the Super-Kamiokande atmospheric-neutrino measurements and another joint fit with NOvA. Such combinations make it possible to lift the degeneracies between the measurement of the CP-violating phase $\delta_{CP}$ and the measurement of the ordering of the neutrino mass eigenstates. Results and prospects of these joint fits will be discussed.
The NOνA experiment is a long-baseline, off-axis neutrino experiment that aims to study the mixing behavior of neutrinos and antineutrinos using the Fermilab NuMI neutrino beam near Chicago, IL. The experiment collects data with two functionally identical detectors: the Near Detector, close to the neutrino production target at Fermilab, and the 14 kt Far Detector, 810 km away in Ash River, MN. Both detectors are tracking calorimeters filled with liquid scintillator which can detect and identify muon- and electron-neutrino interactions with high efficiency. The physics goals of NOvA are to observe the oscillation of muon (anti)neutrinos to electron (anti)neutrinos, to understand why matter dominates over antimatter in the universe, and to resolve the ordering of the neutrino masses. To that end, NOvA measures the electron neutrino and antineutrino appearance rates, as well as the muon neutrino and antineutrino disappearance rates. In this talk I will give an overview of NOvA and present the latest results combining both neutrino and antineutrino data.
In the current epoch of neutrino physics, many experiments are aiming for precision measurements of oscillation parameters. Thus, various new-physics scenarios which alter the neutrino oscillation probabilities in matter deserve careful investigation. Recent results from NOvA and T2K show a slight tension between their reported values of the CP-violating phase $\delta_{CP}$. Since the baseline of NOvA is much longer than that of T2K, neutral-current non-standard interactions (NSIs) of neutrinos with Earth matter during their propagation might play a crucial role in such a discrepancy. In this context, we study the effect of a vector leptoquark which induces non-standard neutrino interactions that modify the oscillation probabilities of neutrinos in matter. We show that such interactions provide a relatively large value of the NSI parameter $\varepsilon_{e \mu}$. Considering this NSI parameter, we successfully explain the recent discrepancy between the observed $\delta_{CP}$ results of T2K and NOvA. We also briefly discuss the implications of the $U_3$ leptoquark for the lepton-flavour-violating muon decay modes $\mu \to e \gamma$ and $\mu \to eee$.
We report on the latest measurement of atmospheric neutrino oscillation parameters using data from the IceCube Neutrino Observatory. The DeepCore array in the central region of IceCube enables the detection and reconstruction of atmospheric neutrinos at energies as low as $\sim5$ GeV. This energy threshold allows the measurement of muon neutrino disappearance over a wide range of baselines available for atmospheric neutrinos. The present analysis is performed using a new data sample of DeepCore, which includes significant improvements in data calibration, detector simulation, data processing, and a detailed treatment of systematic uncertainties. The observed relative fluxes of neutrino flavors as functions of their reconstructed energy and arrival directions allow us to measure the atmospheric mixing parameters, $\sin^2\theta_{23}$ and $\Delta m^2_{32}$. The resulting improvement in the precision measurement of both parameters with respect to our previous result makes this the most precise measurement of oscillation parameters using atmospheric neutrinos.
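For reference, the measured parameters enter the approximate two-flavour survival probability for atmospheric muon neutrinos,
\[
P(\nu_\mu \to \nu_\mu) \simeq 1 - \sin^2(2\theta_{23})\,
\sin^2\!\left(\frac{1.27\,\Delta m^2_{32}\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\]
so for baselines comparable to the Earth's diameter the first oscillation maximum falls at energies of roughly 25 GeV, well above DeepCore's $\sim5$ GeV threshold and thus within its reach.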
The Super-Kamiokande experiment (Super-K) is a water-Cherenkov detector in Japan. It collected atmospheric neutrino events in ultrapure water from 1996 to 2020, after which it was upgraded with the addition of gadolinium sulfate to the water. Tau neutrinos are not expected in the atmospheric neutrino flux below 10 GeV unless they appear from the oscillation of atmospheric muon neutrinos. Super-K is capable of detecting these oscillated tau neutrinos, which would be an unambiguous confirmation of neutrino oscillations. In the last published results from Super-K in 2018, with a 328 kt year exposure, the hypothesis of no tau neutrino appearance was rejected with 4.6 sigma significance. The current analysis uses all of the data collected in the pure-water phase, corresponding to 484 kt year. The statistics have been significantly increased by expanding the fiducial volume of the detector from 22.5 kt to 27.2 kt. In total, nearly 50% more events have been added to the analysis.
The Daya Bay reactor neutrino experiment is the first experiment that measured a non-zero value of the neutrino mixing angle $\theta_{13}$, in 2012. Antineutrinos from six 2.9 GW$_{\text{th}}$ reactors were detected in eight identically designed detectors deployed in two near and one far underground experimental halls. The near-far arrangement of antineutrino detectors over km-scale baselines allows a high-precision test of the three-neutrino oscillation framework. Daya Bay completed its physics data taking on Dec. 12, 2020. In this talk, I will show the measurement of $\theta_{13}$ and the mass-squared difference based on the Gd-capture-tagged sample in the complete dataset. Updated results of the H-capture-based oscillation analysis and the search for a light sterile neutrino will also be reported if ready.
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06 @desy.de
The HERAPDF2.0 ensemble of parton distribution functions (PDFs) was introduced in 2015. The final stage is presented: a next-to-next-to-leading-order (NNLO) analysis of the HERA data on inclusive deep inelastic ep scattering together with jet data as published by the H1 and ZEUS collaborations. A perturbative QCD fit, simultaneously of $\alpha_S(M_Z^2)$ and the PDFs, was performed with the result $\alpha_S(M_Z^2) = 0.1156 \pm 0.0011\,\mathrm{(exp)}\;^{+0.0001}_{-0.0002}\,\mathrm{(model+parameterisation)} \pm 0.0029\,\mathrm{(scale)}$. The PDF sets of HERAPDF2.0Jets NNLO were determined with separate fits using two fixed values of $\alpha_S(M_Z^2)$, 0.1155 and 0.118, the latter value having already been chosen for the published HERAPDF2.0 NNLO analysis based on HERA inclusive DIS data only. The different sets of PDFs are presented, evaluated and compared. The consistency of the PDFs determined with and without the jet data demonstrates the consistency of HERA inclusive and jet-production cross-section data. The inclusion of the jet data reduced the uncertainty on the gluon PDF. Predictions based on the PDFs of HERAPDF2.0Jets NNLO give an excellent description of the jet-production data used as input.
We compute the NNLO massive corrections for diphoton production in quantum chromodynamics (QCD). This process is very important as a test of perturbative QCD and as a background to the decay of a Higgs boson into two photons. We compute the master integrals semi-analytically via power series expansions, classifying the Feynman diagrams into different topologies and finding the canonical basis for the non-elliptic integrals. We present a study of the maximal cut for the non-planar topology, showing the elliptic curve defining the integral. We then present the matrix element, computed for the first time, in terms of form factors. Finally, we study the impact of our novel massive corrections on the phenomenology of the process for different observables.
The production of jets and prompt isolated photons at hadron colliders provides stringent tests of perturbative QCD. The latest measurements by the ATLAS experiment, using proton-proton collision data at $\sqrt{s}$ = 13 TeV, are presented. Prompt inclusive photon production is measured for two distinct photon isolation cones, R = 0.2 and 0.4, as well as for their ratio. The measurement is sensitive to the gluon parton distribution. Various measurements using dijet events are presented as well. Measurements of new event-shape jet observables, defined in terms of reference geometries with cylindrical and circular symmetries using the energy mover's distance, are discussed. In addition, measurements of variables probing the properties of the multijet energy flow and cross-section ratios of two- and three-jet production are highlighted. The measurements are compared to state-of-the-art NLO and NNLO predictions and used to determine the strong coupling constant.
Singular elements associated with QCD factorization in the collinear limit are key ingredients for high-precision calculations in particle physics. They govern the collinear behaviour of scattering amplitudes, as well as the perturbative energy evolution of PDFs and FFs. In this talk, we explain the computation of multiple collinear and higher-order QCD splittings with massive partons. Our results may be highly relevant for the consistent introduction of mass effects in the subtraction formalism and in PDF/FF evolution.
One of the main obstacles to the calculation of next-to-next-to-leading order (NNLO) corrections in QCD is the presence of infrared singularities. Together with Raoul Röntsch, Kirill Melnikov and other collaborators, I am developing a more general approach to the nested soft-collinear subtraction method to address this problem for the production of an arbitrary final state at hadron colliders. In this presentation, I will discuss results for the process $P+P \to V + n$ gluons at NNLO, demonstrating the analytic cancellation of poles and presenting finite remainders of integrated subtraction terms, and will outline how the method can be completely generalized.
A precise measurement of the luminosity is a crucial input for many ATLAS physics analyses, and represents the leading uncertainty for W, Z and top cross-section measurements. The first ATLAS luminosity determination in Run-3 of the LHC, for the dataset recorded in 2022 at a center-of-mass energy of 13.6 TeV, follows the procedure developed in Run-2 of the LHC. It is based on van der Meer scans during dedicated running periods each year to set the absolute scale, and an extrapolation to physics running conditions using complementary measurements from the ATLAS tracker and calorimeter subsystems. The presentation discusses the procedure of the ATLAS luminosity measurement, as well as the results obtained for the 2022 pp dataset.
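For context, the absolute calibration from a van der Meer scan rests on the standard relation for the luminosity delivered by $n_b$ colliding bunch pairs with populations $N_1$ and $N_2$,
\[
\mathcal{L} = \frac{n_b\, f_r\, N_1 N_2}{2\pi\, \Sigma_x\, \Sigma_y}\,,
\]
where $f_r$ is the LHC revolution frequency and $\Sigma_x$, $\Sigma_y$ are the convolved beam sizes extracted from the scan curves; the luminometer calibration obtained this way is then transferred to physics running conditions.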
The associated production of vector bosons V (W, Z or gamma) and jets originating from heavy-flavour (c or b) quarks is a dominant background for many SM and Higgs boson measurements and for searches for new physics beyond the SM. The study of events with a vector boson accompanied by heavy-flavour jets is crucial to test theoretical predictions in perturbative QCD up to NNLO, and provides a key benchmark for Monte Carlo generators. The newest differential cross sections of V + c/b-jet production, measured with the CMS detector at 8 and 13 TeV as a function of several kinematic observables, will be presented, with special attention to pQCD and EW aspects of their production, PDF constraints and the modelling of the heavy-flavour content of the proton.
The focus of this session is on top-quark precision measurements and theory calculations.
Conveners:
Gauthier Durieux (CERN)
Abideh Jafari (DESY)
Narei Lorenzo Martinez (LAPP, Annecy)
Contact: eps23-conveners-t07 @desy.de
A large fraction of the top quarks produced at the LHC emerges from electroweak interactions, via so-called t-channel single-top production. Predictions for this process can be used, for instance, to constrain the CKM matrix element $V_{tb}$ and to probe possible anomalous couplings in the tWb vertex. QCD corrections to t-channel single-top production are known up to NNLO in the factorisable approximation, namely neglecting the cross-talk between the different quark lines. In this contribution we report on the recent calculation of the QCD non-factorisable corrections to t-channel single-top production and stress the importance of these corrections in light of the increasing accuracy of theoretical predictions for this process. We present results for the total cross section and for selected observables relevant to proton-proton collisions at the LHC and the FCC.
We compare double-differential normalized production cross-sections for top-antitop + X hadroproduction at NNLO QCD accuracy, as obtained through a customized version of the MATRIX framework interfaced to PineAPPL, with recent data by the ATLAS and the CMS collaborations.
We take into account theory uncertainties due to scale variation and we see how predictions vary as a function of parton distribution function (PDF) choice and top-quark mass value, considering different state-of-the-art PDF fits with their uncertainties.
Notwithstanding the overall reasonably good agreement, we observe discrepancies at the level of a few sigma between data and theoretical predictions in some kinematical regions, which can be alleviated by refitting the top-quark mass values and/or the PDFs and/or $\alpha_s(M_Z)$, considering the correlations between these three quantities.
In a standalone fit of the top-quark mass, we notice that, for all considered PDF fits used as input, some datasets point towards top-quark mass values about two sigma lower than those emerging from fits to other datasets, suggesting a possible tension between experimental measurements using different decay channels and/or the need for better estimates of the uncertainties on the latter.
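Schematically (a generic illustration of such a refit, not the authors' specific implementation), the extraction minimises a correlated $\chi^2$ of the form
\[
\chi^{2}(m_t,\alpha_s,\{f\}) \;=\; \sum_{j,k}\bigl[d_j - t_j(m_t,\alpha_s,\{f\})\bigr]\,(C^{-1})_{jk}\,\bigl[d_k - t_k(m_t,\alpha_s,\{f\})\bigr]\,,
\]
where $d_j$ are the measured normalized double-differential cross-sections, $t_j$ the corresponding NNLO predictions and $C$ the covariance matrix of experimental and theory uncertainties; the correlations among $m_t$, $\alpha_s(M_Z)$ and the PDFs $\{f\}$ enter through their simultaneous variation in $t_j$.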
The comparison of theory predictions and experimental measurements is one of the main roads for discovering physics beyond the Standard Model. The tremendous amount of data that has been and will be further collected at the LHC already demands a high level of precision from the theory predictions.
In this talk I will focus on $t\bar{t}Z$ production, whose phenomenological interest is well established. The intricate resonance structure and the high multiplicity of the final state make obtaining theory results for this process extremely challenging. I will present how we took another step forward in predicting this process at high accuracy by computing, for the first time, the complete set of fully off-shell QCD and EW corrections.
Conveners:
Thomas Blake (University of Warwick)
Marzia Bordone (CERN)
Thibaud Humair (MPP)
Contact: eps23-conveners-t08 @desy.de
The tree-level determination of the CKM angle gamma is a standard-candle measurement of CP violation in the Standard Model. The latest LHCb results from time-integrated measurements of CP violation using beauty-to-open-charm decays are presented. These include updates of previous Run 1 measurements using the full LHCb Run 1+2 data sample, and the latest LHCb gamma and charm-mixing combination.
This talk will summarize the latest results on branching fractions and CP violation in the B->DX family of decays.
In this work, we investigate the time-dependent angular analysis of the $B_s^0 \rightarrow \phi \phi$ decay to search for new-physics signals via CP-violating observables. We work with a new-physics Hamiltonian containing both left- and right-handed chromomagnetic dipole operators. The hierarchy of the helicity amplitudes in this model gives us a new scheme of experimental search, different from the ones LHCb has used in its analysis. To illustrate this new scheme, we perform a sensitivity study using two pseudo-datasets generated from LHCb's measured values. We find the sensitivity to the CP-violating observables to be of the order of $5-7\%$ with the current LHCb statistics. Moreover, we show that Belle (II)'s $B^0_d \rightarrow \phi K_S$ and LHCb's $B_s^0 \rightarrow \phi \phi$ measurements could be combined within our model to determine the chirality of the new physics.
The study of CP violation in charmless B decays is of great interest as penguin and tree-level topologies contribute to the decay amplitudes with comparable strengths. The former topologies may be sensitive to new particles appearing in the loops as virtual contributions. However, the interpretation of physics quantities in terms of CKM parameters is not trivial due to strong-interaction effects between quarks. Amplitude analyses over the phase space of multibody charmless B decays allow the extraction of relevant information to refine the models describing the dynamics of the strong interaction. In this presentation the most recent analyses of multibody charmless B decays at LHCb are presented.
The investigation of $B$-meson decays into charmed and charmless hadronic final states is a keystone of the Belle II program. It offers theoretically reliable and experimentally precise constraints on CKM unitarity, it is sensitive to effects from non-SM physics, and it furthers knowledge about uncharted $b \to c$ hadronic transitions. Recent results on branching ratios and direct CP-violating asymmetries of $B \to K \pi$ decays are presented that lead to world-leading tests of the SM based on the $K \pi$ isospin sum rule. First observations of new $B \to D^{(*)}KK_S$ decays and new results from combined analyses of Belle and Belle II data to determine the CKM angle $\phi_3$ (or $\gamma$) are also presented.
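For reference (a definition added here, not part of the original abstract), the $K\pi$ isospin sum rule mentioned above is commonly written as a combination of the four $B \to K\pi$ CP asymmetries weighted by branching-fraction and lifetime ratios, vanishing to a good approximation in the SM:
$$ I_{K\pi} = \mathcal{A}_{K^+\pi^-} + \mathcal{A}_{K^0\pi^+}\,\frac{\mathcal{B}(K^0\pi^+)}{\mathcal{B}(K^+\pi^-)}\,\frac{\tau_{B^0}}{\tau_{B^+}} - 2\,\mathcal{A}_{K^+\pi^0}\,\frac{\mathcal{B}(K^+\pi^0)}{\mathcal{B}(K^+\pi^-)}\,\frac{\tau_{B^0}}{\tau_{B^+}} - 2\,\mathcal{A}_{K^0\pi^0}\,\frac{\mathcal{B}(K^0\pi^0)}{\mathcal{B}(K^+\pi^-)} \approx 0 . $$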
Nowadays little is known about the dynamics behind the decays of heavy-flavoured particles into final states with baryons. The description of these decays is very challenging from the theoretical point of view, and more experimental results are essential to shed light on the particular features of these decays, such as the enhancement close to the p-pbar threshold in multibody decays or the suppression of two-body final states. The most recent LHCb results in the search for charmless baryonic decays of beauty hadrons are discussed in this presentation.
Time-dependent measurements of CP violation are chief goals of the Belle II physics program. Comparison between penguin-dominated $b \to q\bar qs$ and tree-dominated $b \to c \bar cs$ results allows for stringent tests of CKM unitarity that are sensitive to non-SM physics. This talk presents recent Belle II results on $B^0 \to K_S \pi^0$, $B^0 \to K_S K_S K_S$, and $B^0 \to \phi K_S$ decays.
Conveners:
Ilaria Brivio (Universita di Bologna)
Karsten Köneke (Universität Freiburg)
Matthias Schröder (Universität Hamburg)
Contact: eps23-conveners-t09 @desy.de
Simplified template cross-sections provide a detailed description of the properties of Higgs boson production at the LHC. They are most precisely determined in the combination of the measurements performed in the different Higgs boson decay channels. This talk presents these combined measurements, as well as their interpretations in the context of specific scenarios of physics beyond the Standard Model and of generic extensions within the framework of the Standard Model Effective Field Theory. A combination of measurements of the branching fraction of Higgs boson decays into invisible particles is also presented and interpreted as constraints on the cross section of WIMP dark matter interactions with nucleons.
In the decade since the discovery of the Higgs boson, its properties have been measured in detail, and appear to be consistent with the expectation of the Higgs boson in the SM. However, anomalous contributions to the Higgs boson couplings are not excluded. In this talk we review the most recent results from the CMS experiment on anomalous Higgs boson couplings. Interpretations of such anomalous interactions within an effective field theory framework are also discussed.
We analyse the sensitivity to beyond-the-Standard-Model effects of hadron collider processes involving the interaction of two electroweak and two Higgs bosons, VVHH, with V being either a W or a Z boson.
We examine current experimental results by the CMS collaboration in the context of a dimension-8 extension of the Standard Model in an effective-field-theory formalism. We show that constraints from vector-boson-fusion Higgs pair production on operators that modify the Standard Model VVHH interactions are already comparable with, or more stringent than, those quoted in the analysis of vector-boson-scattering final states. We study the modifications of such constraints when introducing unitarity bounds, and investigate the potential of new experimental final states, such as ZHH associated production. Finally, we show perspectives for the high-luminosity phase of the LHC.
Precision measurements of diboson production at the LHC are an important probe of the limits of the Standard Model. The gluon-fusion channel of this process offers a connection between the Higgs and top sectors. We study in a systematic way gluon-induced diboson production in the Standard Model Effective Field Theory. We compute the helicity amplitudes of double Higgs, double $Z/W$ and associated $ZH$ production at one loop and with up to one insertion of a dimension-6 operator. We study their high-energy limit and identify which operators in each channel lead to growth with energy for different helicity configurations. We perform a phenomenological study of associated $ZH$ production, including both quark and gluon initial states. Our analysis uses the channels in which the Higgs decays to b quarks and the $Z$ decays leptonically. To maximise our sensitivity to New Physics, we consider both the resolved and boosted Higgs regimes and employ a binning in $p_T$. We show that for some top operators the gluon-induced channel can offer competitive sensitivity to constraints obtained from top quark production processes.
We discuss rare Higgs boson production and decay channel searches with the CMS experiment. A particular focus of this talk are searches for very rare Higgs boson decays to a neutral light meson or quarkonium and a photon or Z boson, whose standard model branching fraction predictions are in the range 10^{-4}-10^{-9}. Such searches can help constrain Yukawa couplings to light and charm quarks. Other rare Higgs boson production and decay channels, such as H->Zgamma, will also be discussed.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10 @desy.de
Many theories beyond the Standard Model predict new phenomena giving rise to multijet final states. These jets could originate from the decay of a heavy resonance into SM quarks or gluons, or from more complicated decay chains involving additional resonances that decay e.g. into leptons. Also of interest are resonant and non-resonant hadronic final states with jets originating from a dark sector, giving rise to a diverse phenomenology depending on the interactions between the dark sector and SM particles. This talk presents the latest 13 TeV ATLAS results.
Many new physics models and Standard Model extensions, such as additional symmetries and forces, compositeness, extra dimensions, extended Higgs sectors, supersymmetry, dark sectors and dark matter particles, are expected to manifest themselves in final states with hadronic jets. This talk will present recent searches for new phenomena in such final states using the full Run II luminosity of 138 fb-1 collected with the CMS detector at the CERN LHC. Prospects for Run III will also be presented.
The role of the Parton Distribution Functions (PDFs) is crucial not only in the precise determination of the SM parameters, but also in the interpretation of new physics searches at the LHC. In this talk we show the potential of global PDF analyses to inadvertently ‘fit away’ signs of new physics, by identifying specific scenarios in which the PDFs may completely absorb such signs, thus biasing theoretical predictions. At the same time, we discuss several strategies to single out and disentangle such effects.
We study the influence of theoretical systematic uncertainties due to the quark density on LHC experimental searches for Z'-bosons.
Using an approach originally proposed in the context of the ABMP16 PDF set for the high-x behaviour of the quark density, we present results on differential cross-section and forward-backward asymmetry observables commonly used to study Z' signals in dilepton channels.
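For context (a definition added here), the forward-backward asymmetry used in such dilepton analyses is conventionally defined from the lepton decay angle $\theta^*$, usually taken in the Collins-Soper frame, as
$$ A_{FB} = \frac{\sigma(\cos\theta^* > 0) - \sigma(\cos\theta^* < 0)}{\sigma(\cos\theta^* > 0) + \sigma(\cos\theta^* < 0)} , $$
typically measured in bins of dilepton invariant mass and rapidity.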
The Large Hadron-electron Collider and the Future Circular Collider in electron-hadron mode [1] will make possible the study of DIS in the TeV regime, providing electron-proton collisions with per-nucleon instantaneous luminosities of $10^{34}$ cm$^{−2}$s$^{−1}$. We review the possibilities for detection of physics beyond the SM in these experiments, focusing on feebly interacting particles like heavy neutrinos or dark photons, on anomalous gauge couplings, and on theories with heavy resonances like leptoquarks, or with contact interactions. We will emphasise the complementarity of searches at the LHeC (FCC-eh) with those at the respective hadronic colliders, the HL-LHC and the FCC-hh, and at $e^+e^-$ Higgs factories.
[1] LHeC Collaboration and FCC-he Study Group: P. Agostini et al., J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12 @desy.de
The International Large Detector (ILD) is a detector designed primarily for the International Linear Collider (ILC), a high-luminosity linear electron-positron collider with an initial center-of-mass energy of 250 GeV, extendable to 1 TeV.
The ILD concept is based on particle flow for overall event reconstruction, which requires outstanding detector capabilities including superb tracking, very precise detection of secondary vertices and high-granularity calorimetry. In the past years ILD has been working with groups building and testing technological prototypes of the key sub-detector technologies, scalable to the full ILD size, studying their integration into a coherent detector, benchmarking the ILD performance and preparing for an optimization of the overall ILD size and costing. The current status has been made public in the ILD Interim Design Report (IDR, 2020), which is of interest for any future e+e– collider detector. A particular strength of the ILD concept is the integration of a well-developed detector design, based on well-understood prototypes, with a mature and available suite of simulation and reconstruction tools, which allows detailed and reliable studies to be performed. ILD, as a general-purpose detector optimized for high-precision science at an e+e- collider, can also serve as an excellent basis to compare the science reach and detector challenges of different collider options. ILD is actively exploring possible synergies with other Higgs/EW factory options. In this talk we will report on the state of the ILD detector concept, present recent results, and discuss selected examples of studies of an ILD detector at colliders other than the ILC.
The Circular Electron Positron Collider (CEPC) has been proposed in the last few years as a Higgs and high-luminosity Z factory. The conceptual design of the updated detector includes a tracking system whose main tracking device is a Time Projection Chamber (TPC) with high spatial resolution (about 100 μm) in a very large 3D volume. The tracking system must meet demanding precision requirements but, unlike at the International Linear Collider (ILC), cannot rely on power pulsing, which leads to additional constraints on the detector specifications, especially when the machine operates at the high-luminosity Z pole (Tera-Z). The TPC technology requires a longitudinal time resolution of about 100 ns, and the physics goals demand particle identification (PID) with very good separation power, for which cluster counting is being considered. Simulation studies and the achievable PID resolution show the potential of the TPC technology to extend to Tera-Z operation at a future e+e- collider.
In this talk, the feasibility and status of a high-precision TPC as the main tracking detector for an e+e- collider will be presented, and simulation results for the pad and pixelated TPC readout options will be given. Compared with the pad readout, the pixelated readout option provides better spatial resolution for single electrons, very high detection efficiency, excellent tracking and good dE/dx performance. A small prototype TPC with a drift length of 500 mm has been developed; the gaseous chamber, the 20 kV field cage, the low-power front-end electronics and the DAQ have been commissioned, and several studies have been completed. Results on the spatial resolution, the gas gain, the track reconstruction and PID will be reported.
A large, worldwide community of physicists is working to realize an exceptional physics program of energy-frontier electron-positron collisions with the International Linear Collider (ILC) and other collider projects (summarized and evaluated in https://arXiv.org/abs/2208.06030). The International Large Detector (ILD) is one of the proposed detector concepts for the next e+e- collider. The ILD tracking system consists of a Si vertex detector, forward tracking disks, a large-volume Time Projection Chamber (TPC) and silicon tracking detectors inside and outside of the TPC, all embedded in a 3.5 T solenoidal field. An extensive research and development program for a TPC has been carried out within the framework of the LCTPC collaboration. A Large Prototype TPC in a 1 T magnetic field, which can accommodate up to seven identical Micro-Pattern Gaseous Detector (MPGD) readout modules of the TPC design being studied, has been built as a demonstrator at the 5 GeV electron test beam at DESY. Three MPGD concepts are being developed for the TPC: Gas Electron Multiplier, Micromegas and Pixel, also known as GridPix (a Micromegas integrated on a Timepix chip). Successful test-beam campaigns with the different technologies have been carried out during the last decade. Fundamental parameters such as transverse and longitudinal spatial resolution and drift velocity have been measured. In parallel, a new gating device based on large-aperture GEMs has been researched and successfully developed. Recent R&D also led to a design of a Micromegas module with a 3D-printed monolithic cooling plate and two-phase CO2 cooling.
In this talk, we will review the track-reconstruction performance results and summarize the next steps towards the TPC construction for the ILD detector. The TPC with pad (pixel) readout electronics is designed to have about $10^{6}$ pads ($10^{9}$ pixels) per endcap for continuous tracking, giving a momentum resolution of δ(1/pT) ≈ 1×10$^{-4}$/GeV/c (≈ 0.8×10$^{-4}$/GeV/c with 60% coverage) for the TPC alone, and a dE/dx resolution of about 5% (4%). The momentum resolution including all tracking subdetectors is about 2×10$^{-5}$/GeV/c.
During the upcoming years of the High Luminosity Large Hadron Collider (HL-LHC) program, the CMS Muon spectrometer will face challenging conditions. The existing detectors, which consist of Drift Tubes (DT), Resistive Plate Chambers (RPC), and Cathode Strip Chambers (CSC), as well as recently installed Gas Electron Multiplier (GEM) stations, will need to sustain an instantaneous luminosity of up to 5-7 × 10^34 cm−2 s−1, resulting in increased pile-up, and about 10 times the originally expected LHC integrated luminosity. To cope with the high rate environment and maintain good performance, additional GEM stations and improved RPC (iRPC) detectors will be installed in the innermost region of the forward muon spectrometer. To test the effects of these challenging conditions a series of accelerated irradiation studies have been performed for all the muon systems, mainly at the CERN Gamma Irradiation Facility (GIF++), and also with specific X-ray sources. Furthermore, since RPCs and CSCs use gases with a global warming potential (GWP), ongoing efforts are being made to find new eco-friendly gas mixtures, as part of the CERN-wide program to phase out fluorinated greenhouse gases. This report presents the status of the CMS Muon system longevity studies, along with actions taken to reduce detector aging and minimize greenhouse gas consumption.
A key focus of the physics program at the LHC is the study of head-on proton-proton collisions. However, an important class of physics can be studied for cases where the protons narrowly miss one another and remain intact. In such cases, the electromagnetic fields surrounding the protons can interact producing high-energy photon-photon collisions. Alternatively, interactions mediated by the strong force can also result in intact forward scattered protons, providing probes of quantum chromodynamics (QCD). In order to aid identification and provide unique information about these rare interactions, instrumentation to detect and measure protons scattered through very small angles is installed in the beam pipe far downstream of the interaction point. We describe the ATLAS Forward Proton ‘Roman Pot’ Detectors (AFP and ALFA), including their performance and status. The physics interest, as well as the newest results on diffractive interactions, are also discussed.
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12 @desy.de
The effective design of instruments that rely on the interaction of radiation with matter for their operation is a complex task. Furthermore, the underlying physics processes are intrinsically stochastic in nature and open a vast space of possible choices for the physical characteristics of the instrument. While even large scale detectors such as e.g. at the LHC are built using surrogates for the ultimate physics objective, the MODE Collaboration (an acronym for Machine-learning Optimized Design of Experiments) aims at developing tools also based on deep learning techniques to achieve end-to-end optimization of the design of instruments via a fully differentiable pipeline capable of exploring the Pareto-optimal frontier of the utility function for future particle collider experiments and related detectors. The construction of such a differentiable model requires inclusion of information-extraction procedures, including data colle ction, detector response, pattern recognition, and other existing constraints such as cost. This talk will give an introduction to the goals of the newly founded MODE collaboration and highlight some of the already existing ingredients.
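As a purely illustrative sketch of the underlying idea (all functions below are toy assumptions, not MODE software): a design parameter is optimized by descending the gradient of a differentiable surrogate of the utility, which trades a resolution-like term against a cost penalty.

```python
# Illustrative sketch only (toy surrogates, not MODE code): end-to-end
# optimization of one detector design parameter via gradient descent
# on a differentiable utility combining resolution and cost.

def resolution(thickness):
    # toy surrogate: resolution improves with absorber thickness, saturating
    return 0.1 + 1.0 / (1.0 + thickness)

def cost(thickness):
    # toy surrogate: cost grows linearly with the material budget
    return 0.05 * thickness

def utility(thickness, lam=1.0):
    # objective to minimise: resolution plus a cost penalty (Pareto trade-off)
    return resolution(thickness) + lam * cost(thickness)

def grad(f, x, eps=1e-6):
    # numerical gradient standing in for automatic differentiation
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

x = 1.0                       # initial design value
for _ in range(200):          # plain gradient descent over the design space
    x -= 0.5 * grad(utility, x)

print(f"optimised thickness: {x:.3f}, utility: {utility(x):.3f}")
```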
Novel technologies emerging from the second quantum revolution enable us to identify, control and manipulate individual quanta with unprecedented precision. One important area is the rapidly evolving new paradigm of quantum computing, which has the potential to revolutionize computing by operating on completely different principles. Expectations are high, as quantum computers have already solved complex problems that cannot be solved with classical computers.
A very important new branch is quantum machine learning (QML), which lies at the intersection of quantum computing and machine learning. QML combines classical Machine Learning with topics concerning Quantum Algorithms and Architectures. Many studies address hybrid quantum-classical approaches, but full quantum approaches are also investigated. The ultimate goal is to find the so-called quantum advantage, where quantum models outperform classical algorithms in terms of runtime or even solve problems that are intractable for classical computers.
However, in the current NISQ era (Noisy Intermediate-Scale Quantum computing), where noise in quantum computing challenges the accuracy of computations and the small number of qubits limits the size of the problems that can be solved, it is difficult to achieve quantum advantage. Nevertheless, machine learning can be robust to noise and makes it possible to deal with the limited resources of present-day quantum computers.
In this talk, quantum machine learning will be introduced and explained with examples. Challenges and possible transfer to practical applications will be discussed.
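To make the hybrid quantum-classical training loop concrete, the following toy sketch (an assumed single-qubit model, no quantum SDK or hardware) fits one rotation parameter with the parameter-shift rule; it only illustrates the concept and is not taken from the talk.

```python
# Toy sketch: a one-qubit "variational" model f(x; theta) = <Z> after
# RY(x) RY(theta)|0>, trained by gradient descent with the parameter-shift rule.
import numpy as np

def expval_z(x, theta):
    # for RY rotations acting on |0>, the Z expectation value is cos(x + theta)
    return np.cos(x + theta)

def parameter_shift_grad(x, theta):
    # exact gradient of <Z> with respect to theta via the parameter-shift rule
    return 0.5 * (expval_z(x, theta + np.pi / 2) - expval_z(x, theta - np.pi / 2))

rng = np.random.default_rng(1)
xs = rng.uniform(0, np.pi, 50)
target = np.cos(xs + 0.7)          # data generated with the "true" theta = 0.7

theta = 0.0
for _ in range(500):               # classical optimiser driving the quantum model
    residual = expval_z(xs, theta) - target
    grad = np.mean(2 * residual * parameter_shift_grad(xs, theta))
    theta -= 0.1 * grad

print(f"learned theta: {theta:.3f} (true value 0.7)")
```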
This work introduces a comprehensive framework and discussion on the measurement of scientific understanding in agents, encompassing both humans and machine learning models. The focus is on artificial understanding, particularly investigating the extent to which machines, such as Large Language Models (LLMs), can exhibit scientific understanding. The presentation centers around fundamental physics, specifically particle physics, providing illustrative examples within this domain.
The study builds upon a philosophy of science perspective on scientific understanding, which is expanded to encompass a framework for assessing understanding in agents more broadly. The framework emphasizes three fundamental aspects of understanding: knowledge acquisition, explanatory capacity, and the ability to draw counterfactual inferences. Furthermore, the capabilities of LLMs to comprehend the intricacies of particle physics are examined and discussed.
Through this interdisciplinary exploration, the talk sheds light on the nature of scientific understanding in agents, bridging the gap between philosophical accounts and the potential of advanced machine learning models. The insights gained contribute to the ongoing dialogue on the boundaries of artificial understanding and its relevance in scientific research, particularly in the context of particle physics.
The work is based on https://arxiv.org/abs/2304.10327 and subsequent work.
While simulation plays a crucial role in high energy physics, it also consumes a significant fraction of the available computational resources, with these computing pressures being set to increase drastically for the upcoming high luminosity phase of the LHC and for future colliders. At the same time, the significantly higher granularity present in future detectors increases the physical accuracy required of a surrogate simulator. Machine learning methods based on deep generative models hold promise to provide a computationally efficient solution, while retaining a high degree of physical fidelity.
Significant strides have already been taken towards developing these models for the generation of particle showers in highly granular calorimeters, the subdetector which constitutes the most computationally intensive part of a detector simulation. However, to apply these models to a general detector simulation, methods must be developed to cope with particles incident at various points and under varying angles in the detector. This contribution will address steps taken to tackle the challenges faced when applying these simulators in more general scenarios, as well as the effects on physics observables after interfacing with reconstruction algorithms. In particular, results achieved with bounded information bottleneck and normalising flow architectures based on regular grid geometries will be discussed. Combined with progress on integrating these surrogate simulators into existing full simulation chains, these developments bring an application to benchmark physics analyses closer.
Deep learning methods are becoming key to the data analysis of particle physics experiments. One clear example is the improvement of neutrino detection using neural networks. Current neutrino experiments are leveraging these techniques, which have been shown to outperform standard tools in several domains, such as identifying neutrino interactions or reconstructing the kinematics of single particles. In this talk, I will show various deep-learning algorithms used in the context of voxelised neutrino detectors. I will present how to design and use advanced deep-learning techniques for tasks such as fitting particle trajectories and understanding the particles involved in the vertex activity. All these methods report promising results and are crucial for improving the reconstruction of the interacting-particle kinematics and enhancing the sensitivity to future physics measurements.
FASER, the ForwArd Search ExpeRiment, is an LHC experiment located 480 m downstream of the ATLAS interaction point, along the beam collision axis. FASER was designed, constructed, installed, and commissioned during 2019-2022 and has been taking physics data since the start of LHC Run 3 in July 2022. This talk will present the status of the experiment, including detector design, detector performance, and first physics results from Run 3 data. Special focus will be drawn on signatures of new physics, i.e. searches for new light and very weakly-interacting particles such as dark photons.
Many extensions of the Standard Model with Dark Matter candidates predict new long-lived particles (LLPs). The LHC provides an unprecedented possibility to search for such LLPs produced at the electroweak scale and above. The ANUBIS concept foresees instrumenting the ceiling and service shafts above the ATLAS experiment with tracking stations in order to search for LLPs with decay lengths of O(10 m) and above. After a brief review of the ANUBIS sensitivity, this contribution will discuss data taking with the first complete prototype detector module, proANUBIS, in the ATLAS experimental cavern in 2023.
We will present the operational status of the milliQan Run 3 detector, which was installed during the 2022-23 YETS and is presently being commissioned. We will also present initial results from data obtained with Run 3 LHC collisions.
The NA62 experiment at CERN took data in 2016–2018 with the main goal of measuring the $K^+ \rightarrow \pi^+ \nu \bar\nu$ decay. We report on the search for visible decays of exotic mediators from data taken in "beam-dump" mode with the NA62 experiment. The NA62 experiment can be run as a "beam-dump experiment" by removing the Kaon production target and moving the upstream collimators into a "closed" position. More than $10^{17}$ protons on target have been collected in this way during a week-long data-taking campaign by the NA62 experiment. We report on new results from analysis of this data, with a particular emphasis on Dark Photon and Axion-like particle Models.
The parameter space for Weakly Interacting Massive Particles as a possible explanation of Dark Matter is shrinking more and more. This has triggered new attempts to create dark matter at accelerators. This alternative approach represents an innovative and open-minded way to broaden this research field over a wider range of energies with high-sensitivity detectors [1].
The Positron Annihilation into Dark Matter Experiment (PADME), ongoing at the Laboratori Nazionali di Frascati of INFN, fits into this panorama. PADME was conceived to search for a Dark Photon signal [2] by studying the missing-mass spectrum of single-photon final states resulting from positron annihilations with the electrons of a fixed target. In fact, the PADME approach allows searches for any new particle produced in e$^+$ e$^-$ collisions through a virtual off-shell photon, such as long-lived Axion-Like Particles (ALPs), proto-phobic X bosons, or a Dark Higgs.
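Schematically (an illustration added here, not part of the abstract), the missing-mass variable exploited in this approach is
$$ M_{\rm miss}^2 = \left(P_{e^+} + P_{e^-} - P_{\gamma}\right)^2 , $$
where $P_{e^+}$ is the four-momentum of the beam positron, $P_{e^-}$ that of the target electron at rest, and $P_{\gamma}$ that of the detected photon; a new particle of mass $m_X$ would appear as a peak at $M_{\rm miss}^2 = m_X^2$.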
After the detector commissioning and the beam-line optimization, the PADME collaboration collected about 5×10$^{12}$ positrons on target at 430 MeV in 2020. A fraction of these data has been used to measure the cross-section of the process e$^+$ e$^-$→γγ(γ) at √s=20 MeV with a precision of 5% [3]. This is the first measurement ever performed at this energy in which both final-state photons are detected, and it allows stringent limits to be set on processes beyond the Standard Model.
PADME also has the unique opportunity to confirm or disprove the particle nature of the X17 anomaly observed in the ATOMKI nuclear-physics experiments studying the de-excitation of some light nuclei [4]. The PADME 2022 data taking was conducted with this aim. About 10$^{10}$ positrons were stopped on the target for each of the 47 beam-energy values in the range 262 - 298 MeV. This precise energy scan is intended to study the reaction e$^+$ e$^-$→X17→e$^+$ e$^-$.
The talk will give an overview of the scientific program of the experiment and of the ongoing data analyses.
References
[1] P. Agrawal et al., Eur. Phys. J. C 81 (2021) 11, 1015.
[2] P. Albicocco et al., JINST 17 (2022) 08, P08032.
[3] F. Bossi et al., Phys. Rev. D 107 (2023) 1, 012008.
[4] L. Darmé et al., Phys. Rev. D 106 (2022) 11, 115036.
We present the most recent $BABAR$ searches for reactions that could simultaneously explain the presence of dark matter and the matter-antimatter asymmetry in the universe. This scenario predicts $B$-meson decays into an ordinary-matter baryon and a dark-sector anti-baryon $\psi_D$ with branching fractions accessible at the $B$ factories. The results are based on the full data set of about 430 $\text{fb}^{-1}$ collected at the $\Upsilon(4S)$ resonance by the $BABAR$ detector at the PEP-II collider.
We search, in particular, for decays like $B^{0}\to\psi_{D} {\cal B}$, where $\cal{B}$ is a baryon (proton, $\Lambda$, or $\Lambda_c$). The hadronic recoil method is applied, with one of the $B$ mesons from the $\Upsilon(4S)$ decay fully reconstructed, while only one baryon is present on the signal $B$-meson side. The missing mass of the signal $B$ meson is taken as the mass of the dark particle $\psi_{D}$. Stringent upper limits on the decay branching fractions are derived for $\psi_D$ masses between 1.0 and 4.2 GeV/c$^2$.
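In this recoil technique the dark-particle mass is obtained, schematically (notation added here for illustration), from the missing four-momentum on the signal side,
$$ m_{\psi_D}^2 = \left(p_{e^+e^-} - p_{B_{\rm tag}} - p_{\mathcal{B}}\right)^2 , $$
with $p_{e^+e^-}$ the initial-state four-momentum, $p_{B_{\rm tag}}$ that of the fully reconstructed tag $B$, and $p_{\mathcal{B}}$ that of the reconstructed baryon.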
In a class of theories, dark matter is explained by postulating the existence of a 'dark sector', which interacts gravitationally with ordinary matter. If this dark sector contains a U(1) symmetry and a corresponding 'dark' photon ($A_{D}$), it is natural to expect that this particle will kinetically mix with the ordinary photon, and hence become a 'portal' through which the dark sector can be studied. The strength of the mixing is given by a mixing parameter $\epsilon$. This same parameter governs both the production and the decay of the $A_{D}$ back to SM particles, and for values of $\epsilon$ not already excluded, the signal would be a quite small and quite narrow resonance: if $\epsilon$ is large enough to yield a detectable signal, the decay width will be smaller than the detector resolution, yet large enough that the decay back to SM particles is prompt. For masses of the dark photon above the reach of Belle II, future high-energy e+e- colliders are ideal for searches for such a signal, due to the low and well-known backgrounds, and the excellent momentum resolution and equally excellent track-finding efficiency of the detectors at such colliders.
This contribution will discuss a study of how the limit on the mixing parameter depends on the mass of the $A_{D}$, using the $A_{D}\rightarrow\mu^{+}\mu^{-}$ decay mode in the presence of Standard Model background, based on fully simulated signal and background events in the ILD detector at the ILC Higgs factory. In addition, a more general discussion of the capabilities expected for generic detectors at e+e- colliders operating at other energies will be given.
Conveners:
Summer Blot (DESY)
Pau Novella (IFIC)
Davide Sgalaberna (ETH)
Jessica Turner (Durham University)
Contact: eps23-conveners-t04 @desy.de
The near detector of T2K (ND280) is undergoing a major upgrade. A new scintillator tracker, named superFGD, with fine granularity and 3D-reconstruction capabilities has been assembled at J-PARC. The new Time Projection Chambers are under construction, based on the innovative resistive Micromegas technology and a field cage made of extremely thin composite walls. New scintillator panels with precise timing capability have been built to allow precise Time of Flight measurements.
The detector is currently in the assembly phase, following a detailed characterization effort during detector production. The results of multiple tests of the detectors with charged-particle beams, a neutron beam, cosmics and X-rays will be presented. Among these results are the first measurement of the neutron cross-section with the superFGD and the first detailed characterization of the charge spreading in resistive Micromegas detectors.
Thanks to such innovative technologies, the upgrade of ND280 will open a new way to look at neutrino interactions, with a significant improvement in phase-space acceptance and resolution and an enhanced purity in the exclusive channels involving low-momentum protons, pions and neutrons. Sensitivity results and prospects for the physics capabilities will also be shown.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment for oscillation physics and proton decay studies. The primary physics goals of the DUNE experiment are to perform neutrino oscillation studies, search for proton decay, detect supernova burst neutrinos, make solar neutrino measurements and carry out BSM searches. The liquid argon prototype detectors at CERN (ProtoDUNE), which have operated for over two years, are a test-bed for DUNE's far detectors and inform the construction and operation of the first two, and possibly subsequent, 17-kt DUNE far-detector LArTPC modules. Here we introduce the DUNE and ProtoDUNE experiments and their physics goals, and discuss recent progress and results.
The ESSnuSB project aims to measure leptonic CP violation at the second neutrino oscillation maximum using an intense neutrino beam, which will be produced by the powerful ESS proton linear accelerator. The first phase of the project was successfully concluded with the production of the Conceptual Design Report, in which it was shown that this next-to-next generation neutrino oscillation experiment has the potential to start the precision era in the field of leptonic CP violation measurements.
ESSnuSB+ is a continuation of this study, focusing on neutrino interaction cross-section measurements in the low neutrino energy region, on exploring the sensitivity of the experimental set-up to additional physics scenarios, and on the civil engineering of the near and far detector sites. It foresees an intermediate step in the ESSnuSB construction phase in which a number of additional facilities will be built: a 1/4-power prototype of the ESSnuSB neutrino production target system, a low-energy muon storage ring and a low-energy monitored neutrino beam facility; a common near neutrino detector for the muon ring and the monitored beam will be designed, and a study of the effect of Gd doping of the ESSnuSB water Cherenkov detectors will be performed.
This talk will give an overview of the ESSnuSB and the ESSnuSB+ projects and their intended place in the landscape of leptonic CP violation measurements.
The Jiangmen Underground Neutrino Observatory (JUNO) is a 20-kiloton multi-purpose liquid scintillator detector under construction in a 700-meter underground laboratory in China. With its excellent energy resolution, sizeable fiducial volume, and remarkable background control, JUNO presents unique prospects to explore many important topics in neutrino and astroparticle physics.
By measuring the oscillation of reactor electron antineutrinos, JUNO can determine the neutrino mass ordering (NMO) and measure several oscillation parameters with sub-percent precision. Additionally, atmospheric neutrino measurements provide independent data for oscillation physics, consequently enhancing JUNO’s NMO sensitivity.
Besides oscillation measurements, JUNO has substantial potential for addressing a wide range of non-oscillation physics, such as detecting solar neutrinos, geo-neutrinos, supernova neutrinos, and diffuse supernova neutrino background, as well as searching for proton decay and other new physics beyond the Standard Model.
This talk presents JUNO’s physics potential in various domains.
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose experiment, which is under construction in South China. Thanks to the 20 ktons of ultra-pure liquid scintillator (LS), JUNO will be able to perform innovative and groundbreaking measurements like the determination of neutrino mass ordering (NMO). Beyond NMO, JUNO will measure the three neutrino oscillation parameters with a sub-percent precision. Furthermore, the JUNO experiment is expected to have important physics reach with solar neutrinos, supernova neutrinos, geoneutrinos and atmospheric neutrinos.
The experiment is being constructed in a 700 m underground laboratory, located about 53 km from both the Taishan and Yangjiang nuclear power plants. The JUNO central detector (CD) will be equipped with 17612 20-inch photomultiplier tubes (PMTs) and 25600 3-inch PMTs. The central detector will be surrounded by a water tank that will provide passive shielding from radioactive decays and serve as a water Cherenkov detector to tag cosmic muons. Additionally, a plastic scintillator detector is located above the central detector to veto cosmic muons from the top.
The JUNO CD is expected to have an energy resolution better than 3% at 1 MeV and an absolute energy-scale uncertainty better than 1% over the whole reactor antineutrino energy range.
The detector construction is expected to be completed in 2023. In this talk, I will present the detector design and the installation status of the various JUNO subsystems.
LiquidO is a new neutrino detection technology which uses an opaque liquid scintillator with a very short scattering length and an intermediate absorption length. Reducing the scattering length down to the scale of millimetres causes the light to be confined within a few cm of its creation point. To extract the light, a lattice of wavelength-shifting fibres runs through the scintillator. This technology provides high-resolution imaging that enables highly efficient identification of individual particles event-by-event down to the MeV scale and therefore offers a wide range of applications in particle physics. Additionally, the exploitation of an opaque medium gives LiquidO a natural affinity for using dopants at unprecedented levels. The principles of the technique have been demonstrated with two small prototypes. The next step will be the construction of a 5-ton demonstrator and its operation at the Chooz nuclear power plant within the scope of an Innovation programme (EIC-Pathfinder project - AntiMatter-OTech) for monitoring nuclear reactor activity. The CLOUD collaboration plans to exploit the fundamental science programme associated with this project. The CLOUD collaboration includes 13 institutions in 10 countries.
Supernova (SN) explosions provide a perfect environment to produce, and therefore test, hypothetical particles. SN1987A made it possible to set a number of constraints on FIP parameters using the energy-loss argument, and further development of neutrino detectors extends those possibilities. I will discuss how SN-produced FIPs may create detectable signatures that significantly extend the testable regions of FIP parameters compared to those provided by energy-loss arguments. In particular, for HNLs in the mass range between $\sim 150$ MeV and $500$ MeV, this can potentially close a gap in the testable parameter space between the expected SHiP sensitivity and BBN constraints.
Conveners:
Laura Fabbietti (TU München)
Gunther Roland (MIT)
Jasmine Brewer (CERN)
Jeremi Niedziela (DESY)
Contact: eps23-conveners-t05 @desy.de
This talk presents the latest ATLAS measurements of collective phenomena in various collision systems, including pp collisions at 13 TeV, Xe+Xe collisions at 5.44 TeV, and Pb+Pb collisions at 5.02 TeV. These include measurements of vn-[pT] correlations in pp, Xe+Xe, and Pb+Pb, which carry important information about the initial-state geometry of the Quark-Gluon Plasma, provide insight into which effects can be observed without invoking hydrodynamic modeling, and can potentially shed light on any quadrupole deformation of the Xe nucleus. This talk will also present measurements of flow decorrelations differential in rapidity, probing the longitudinal structure of the colliding system, and a study of the sensitivity of collective behavior in pp collisions to the presence of jets, which seeks to distinguish the role that semi-hard processes play in the origin of these phenomena.
Studies have yielded strong evidence that a deconfined state of quarks and gluons, the quark--gluon plasma, is created in heavy-ion collisions. This hot and dense matter exhibits almost zero friction and a strong collective behavior. An unexpected collective behavior has also been observed in small collision systems. In this talk, the origin of collectivity in small collision systems is addressed by confronting PYTHIA8 and EPOS4 models using measurements of azimuthal correlations for inclusive and identified particles. In particular, anisotropic flow coefficients measured using two- and four-particle correlations with various pseudorapidity gaps, per-trigger yields, and balance functions are reported in pp collisions at $\sqrt{s}=13.6$ TeV and p--Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV. The results are compared with the available experimental data.
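As a reminder of the notation (added here for convenience), the flow coefficients obtained from two- and four-particle azimuthal correlations are commonly defined as
$$ v_n\{2\} = \sqrt{\langle\langle \cos n(\varphi_1-\varphi_2) \rangle\rangle}, \qquad v_n\{4\} = \left( 2\,\langle\langle \cos n(\varphi_1-\varphi_2) \rangle\rangle^{2} - \langle\langle \cos n(\varphi_1+\varphi_2-\varphi_3-\varphi_4) \rangle\rangle \right)^{1/4} , $$
where the double brackets denote averages over particles and events, and pseudorapidity gaps are used to suppress non-flow correlations.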
Event classifiers based either on the charged-particle multiplicity or on event topologies, such as spherocity and the Underlying Event, have been extensively used in proton-proton (pp) collisions by the ALICE Collaboration at the LHC. These event classifiers became very useful tools after the observation of fluid-like behavior, for example radial and anisotropic flow, in high-multiplicity pp collisions. Furthermore, studies as a function of the charged-particle multiplicity in the forward V0 ALICE detector allowed for the discovery of strangeness enhancement in high-multiplicity pp collisions. However, one drawback of the multiplicity-based event classifiers is that requiring a high charged-particle multiplicity biases the sample towards hard processes like multijet final states. These biases blur the effects of multi-parton interactions (MPI) and make it difficult to pin down the origins of fluid-like effects.
This contribution explores the use of a new event classifier, the charged-particle flattenicity, defined in ALICE using the charged-particle multiplicity estimated in 2.8 < $\eta$ < 5.1 and −3.7 < $\eta$ < −1.7 intervals. New final results on the production of pions, kaons, protons, and unidentified charged particles at midrapidity (|$\eta$| < 0.8) as a function of flattenicity in pp collisions at $\sqrt{s}$ = 13 TeV will be discussed. It will be shown how flattenicity can be used to select events more sensitive to MPI and less sensitive to final state hard processes. All the results are compared with predictions from QCD-inspired Monte Carlo event generators such as PYTHIA and EPOS. Finally, a preliminary study using the flattenicity estimator using Run 3 data will be shown.
Recent measurements of high-multiplicity pp collisions at LHC energies have revealed that these systems exhibit features similar to the quark-gluon plasma, such as the presence of radial and elliptic flow and strangeness enhancement, traditionally believed to be achievable only in heavy nucleus-nucleus collisions at high energy. To pinpoint the origin of these phenomena and to put all collision systems on an equal footing, along with the charged-particle multiplicity, several event-shape observables such as the transverse activity classifier and transverse spherocity have lately been used extensively in experiments as well as on the phenomenological front.
In this contribution, we will summarise our phenomenological explorations [1-6] and compare them with experimental results from the LHC to draw conclusions from these studies. We observe that the event-shape observables successfully differentiate events based on soft and hard physics; however, obtaining these observables presents experimental challenges due to detector biases. In such a scenario, we propose to use machine-learning methods for the determination of such observables in a dense environment like heavy-ion collisions. We will also provide a future outlook in view of Run 3 at the LHC.
The contribution would be based on our recent publications:
1. Phys. Rev. D107 (2023) 7, 074011
2. Phys. Rev. D107 (2023) 7, 076012
3. Phys. Rev. D103 (2021) 9, 094031
4. Sci. Rep. 12 (2022) 1, 3917
5. Eur. Phys. J. C82 (2022) 6, 524
6. J. Phys. G48 (2021) 4, 045104
Hard probes such as heavy quarks (charm and beauty) and jets are valuable tools for investigating the properties of the quark-gluon plasma (QGP) formed in ultra-relativistic heavy-ion collisions. In particular, measurements of the nuclear modification factor $R_{\rm AA}$ of these probes allow us to characterise the in-medium energy loss of heavy quarks, light quarks and gluons while traversing the QGP, and to shed light on the jet-quenching phenomenology. Information on the heavy-quark diffusion and degree of participation in the medium collective motion can be obtained by measuring the elliptic-flow coefficient $v_2$ of heavy-flavour particles. Similarly, measurements of the correlation of jet yields with the event-plane orientation allow us to study the path-length dependence of jet energy loss due to quenching. Complementary insights into heavy-quark fragmentation and energy redistribution in the QGP can be obtained by measuring angular correlations involving heavy-flavour particles.
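For reference (a definition added here), the nuclear modification factor used in these measurements is
$$ R_{\rm AA} = \frac{1}{\langle T_{\rm AA} \rangle}\,\frac{{\rm d}N_{\rm AA}/{\rm d}p_{\rm T}}{{\rm d}\sigma_{pp}/{\rm d}p_{\rm T}} , $$
where $\langle T_{\rm AA} \rangle$ is the average nuclear overlap function; values below unity at high $p_{\rm T}$ indicate in-medium energy loss.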
In this contribution, the newly published results on the non-prompt $v_2$ coefficient of ${\rm D}^0$ mesons in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV will be shown and compared to measurements of prompt D-meson $v_2$ in the same system. The recent final results of the heavy-flavour decay electron $R_{\rm AA}$ in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV will also be reported, together with measurements of prompt and non-prompt D mesons and $\Lambda_{\rm c}^+$ baryons. New results of angular correlations of heavy-flavour decay electrons with charged particles in the same collision system will also be discussed.
Measurements of the inclusive charged-particle jet yield in central Pb--Pb collisions, with the large uncorrelated background mitigated using a novel event mixing technique, will also be reported. In addition to explorations of the low-$p_{\rm T}$ frontier, the inclusive charged-particle jet $v_2$ in semi-central Pb--Pb collisions will be shown, quantifying the yield dependence relative to the event-plane orientation and probing the path-length dependence of jet energy loss. More differential measurements of this azimuthal dependence, obtained by using event-shape engineering to select specific event topologies, and the jet substructure observable $R_{\rm g}$ to select specific jet topologies, will be discussed. Such measurements improve our understanding of how jet suppression depends on both medium and jet properties.
Jets are excellent probes for studying the deconfined matter formed in heavy-ion collisions. This talk presents new observables to study how jets interact with the QGP. First, we introduce a new infrared- and collinear-safe measurement of the jet energy flow within jets reconstructed with different resolution parameters $R$. Changing the jet $R$ varies the relative contribution of competing energy-loss effects. Second, the measurement of jets recoiling from a trigger hadron (hadron+jet) provides unique probes of medium-induced modification of jet production. Jet deflection via multiple soft scatterings with the medium constituents may broaden the azimuthal correlation between the trigger hadron and the recoiling jet. In addition, the tail of this azimuthal correlation may be sensitive to single hard Molière scatterings off quasi-particles in the medium. The $R$-dependence of the recoil jet yield probes jet energy loss and intra-jet broadening. Finally, in inclusive jet populations, the principal axis of energy flow in the plane transverse to the jet axis examines the correlation of particles outside the jet cone with the energy of the jet. All three results may be sensitive to wake effects due to jet-medium energy transfer at low $p_\mathrm{T}$.
This talk presents measurements of the semi-inclusive distribution of charged-particle jets recoiling from a trigger hadron in pp and Pb--Pb collisions. We observe that the jet yield at low $p_\mathrm{T}$ and at large azimuthal angle between the trigger hadron and the jet is significantly enhanced in Pb--Pb collisions with respect to pp collisions, which we interpret through comparisons to model calculations. In addition, the first measurements of energy flow between jets of different radii and of correlations of tracks with the principal direction of energy flow in the plane transverse to the jet will be presented.
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06 @desy.de
I will review the MSHT20 parton distribution functions and focus on our recent paper within the MSHT collaboration on the inclusion of theoretical uncertainties and higher order (N3LO) terms into the MSHT PDFs, producing the MSHT20aN3LO (approximate N3LO) set. This represents the first global analysis of parton distribution functions (PDFs) at approximate N3LO as well as simultaneously the inclusion of theoretical uncertainties into the MSHT PDFs from missing higher order terms beyond NNLO. I will review the formalism, higher orders and theoretical uncertainties included, and their effects on both the fit quality and PDFs before examining indicative N3LO predictions.
The radiation pattern within high-energy quark and gluon jets (jet substructure) is used extensively as a precision probe of the strong force, as well as an environment for optimizing event generators for nearly all tasks in high-energy particle and nuclear physics. While there have been major advances in studying jet substructure at hadron colliders, the precision achievable in collisions involving electrons is superior, as most of the complications present at hadron colliders are absent. Therefore, jets produced in deep inelastic scattering events and recorded by the H1 detector are analyzed. This measurement is unbinned and multi-dimensional, making use of machine learning to correct for detector effects. Results are presented after unfolding the data to particle level for events in the fiducial volume of momentum transfer $Q^2>150$ GeV$^2$, inelasticity $0.2< y < 0.7$, jet transverse momentum $p_{T,jet}>10$ GeV, and jet pseudorapidity $-1<\eta_{jet}<2.5$. The jet substructure is analyzed in the form of generalized angularities and is presented in bins of $Q^2$ and $y$. All of the available object information in the events is used to achieve the best precision through the use of graph neural networks. Training these networks was enabled by the new Perlmutter supercomputer at Berkeley Lab, which has a large number of Graphics Processing Units (GPUs). The data are compared with a broad variety of predictions to illustrate the versatility of the results for downstream analyses.
arxiv:2303.13620, submitted to PLB
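As an aside, a minimal sketch of the kind of classifier-based, unbinned reweighting that underlies such machine-learning unfolding methods (toy Gaussian samples; this is not the H1 analysis code) could look as follows: a classifier trained to separate simulation from data yields per-event weights w = p/(1-p) that pull the simulation towards the data without binning the observable.

```python
# Toy sketch of one classifier-based reweighting step (not the H1/graph-NN code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
sim = rng.normal(0.0, 1.0, size=(10000, 1))   # toy "simulated" observable
data = rng.normal(0.3, 1.1, size=(10000, 1))  # toy "measured" observable

X = np.vstack([sim, data])
y = np.concatenate([np.zeros(len(sim)), np.ones(len(data))])  # 0 = sim, 1 = data

clf = LogisticRegression().fit(X, y)
p = clf.predict_proba(sim)[:, 1]              # P(data | x) for each sim event
weights = p / (1.0 - p)                       # likelihood-ratio reweighting

print("reweighted sim mean:", np.average(sim[:, 0], weights=weights))
print("data mean:          ", data.mean())
```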
We investigate the impact on the sea-quark PDFs of the recently released FNAL-E906 (SeaQuest) data on the ratio of proton-deuteron to proton-proton DY production cross-sections. We find that they have constraining power on the light-quark sea isospin asymmetry (dbar-ubar)(x) and on the (dbar/ubar)(x) ratio at large longitudinal momentum fractions x, that they are particularly relevant in the interval 0.25 < x < 0.45, and that their constraints turn out to be compatible with those from DY data in collider experiments (Tevatron and Large Hadron Collider) and in older fixed-target experiments by the FNAL-E605 and FNAL-E866 collaborations. We study the impact of nuclear corrections due to the deuteron target, finding them to be within 1% in most of the kinematic region covered by SeaQuest. We perform a new proton PDF fit including the SeaQuest data, using the ABMP16 methodology, and we compare with other PDF fits that do or do not yet include these data.
On the basis of S. Alekhin et al. [arXiv:2306.01918]
We present recent updates in the xFitter software framework for global fits of parton distribution functions (PDFs) in high-energy physics. Our focus is on investigating the sensitivity to Z boson couplings using the forward-backward asymmetry in Drell-Yan production. By utilizing an effective approach and simulated data, we assess the accuracy of these couplings, specifically considering the full LHC data sample. Furthermore, we compare our results with predictions for future colliders, providing insights into their potential impact on understanding Z boson interactions.
The production of events containing at least two jets (dijet events) has one of the largest cross sections at the LHC, with QCD predictions directly sensitive to the strong coupling constant. Dijet cross-section measurements from ATLAS and CMS at centre-of-mass energies of 7, 8 and 13 TeV are exploited for the determination of the strong coupling constant, using state-of-the-art next-to-next-to-leading-order QCD predictions from NNLOJET which include subleading colour contributions. These are interfaced to the grid frameworks APPLgrid and fastNLO. The large kinematic range of the dijet data allows for a comprehensive test of the renormalisation-scale dependence of QCD.
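For illustration only, the extraction logic can be sketched as a chi-square scan over alpha_s(M_Z) hypotheses against measured cross sections (all numbers below are invented placeholders, not NNLOJET predictions or ATLAS/CMS data).

```python
# Toy sketch of a chi2 scan to extract a coupling from cross-section data.
import numpy as np

alphas_grid = np.array([0.112, 0.116, 0.120])        # hypotheses for alpha_s(M_Z)
theory = np.array([                                  # toy predictions per bin
    [95.0, 100.0, 105.0],
    [47.0,  50.0,  53.0],
    [18.0,  20.0,  22.0],
])
data = np.array([101.0, 50.5, 20.4])                 # toy measurements
sigma = np.array([3.0, 1.5, 0.8])                    # toy total uncertainties

def chi2(a_s):
    # linear interpolation of each bin's prediction in alpha_s
    pred = np.array([np.interp(a_s, alphas_grid, t) for t in theory])
    return np.sum(((data - pred) / sigma) ** 2)

scan = np.linspace(0.112, 0.120, 401)
chi2_vals = np.array([chi2(a) for a in scan])
best = scan[np.argmin(chi2_vals)]
# 68% interval from the usual Delta(chi2) = 1 criterion
inside = scan[chi2_vals <= chi2_vals.min() + 1.0]
print(f"alpha_s(MZ) = {best:.4f}  (+{inside.max()-best:.4f} / -{best-inside.min():.4f})")
```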
Measurements of individual electroweak bosons at hadron colliders provide stringent tests of perturbative QCD and improve the modelling of backgrounds to many BSM searches. We present the measurement of the production of a W boson in association with D+ and D*+ mesons. This precision measurement provides information about the strange content of the proton and is compared to NLO theoretical calculations. Also presented is the production of Z bosons in association with b-tagged large-radius jets. The result highlights issues with the modelling of additional hadronic activity and provides a distinction between the flavour-number schemes used in theoretical predictions. Finally, differential measurements of W and Z production with large missing transverse momentum in association with jets are discussed and compared to state-of-the-art QCD theoretical predictions. The production rate of Z+jet events with large missing transverse momentum is used to measure the decay width of the Z boson decaying to neutrinos.
The study of the associated production of vector bosons and jets constitutes an excellent testing ground for state-of-the-art pQCD predictions and for understanding the EW aspects of their production. The newest results on the differential cross sections of vector bosons produced in association with jets at 13 TeV centre-of-mass energy will be presented. Differential distributions as a function of a broad range of kinematic observables are measured and compared with theoretical predictions up to NNLO. Final states with a vector boson and jets can also be used to study electroweak-initiated processes, such as the vector-boson-fusion production of a photon, Z or W boson accompanied by a pair of energetic jets with large invariant mass, and they can provide a powerful test of the EW emission of bosons.
Measurements of jet production in proton-proton collisions at the LHC are crucial for precise tests of QCD, improve the understanding of the proton structure, and are important tools for searches for physics beyond the standard model. We present the most recent set of jet measurements performed using CMS data, from which measurements of the strong coupling constant and PDF constraints are derived via QCD fits. An interpretation within the standard model effective field theory is also presented.
The focus of the session is on top-quark precision measurements and theory calculations.
Conveners:
Gauthier Durieux (CERN)
Abideh Jafari (DESY)
Narei Lorenzo Martinez (LAPP, Annecy)
Contact: eps23-conveners-t07 @desy.de
Duration: 15'+5'
Current measurements of the top mass have achieved a precision of less than 500 MeV. However, these measurements, relying on Monte Carlo Simulations, are affected by the top mass interpretation problem, introducing a theory uncertainty of $\mathcal{O}$(1 GeV). To address this challenge, accurate first principles calculations in short distance schemes are needed, allowing direct comparison with unfolded LHC data. This talk presents two complementary observables, the soft drop jet mass (Phys. Rev. D 100, 074021) and the 3-point energy correlator (Phys. Rev. D 107, 114002), where precise hadron-level predictions for the top mass can be achieved. I will review recent advancements in these approaches, including a new NNLL prediction for the soft drop jet mass in top quark jets that incorporates first principles treatment of hadronization corrections. Additionally, I will present an improved calibration of the Monte Carlo top mass parameter in collaboration with ATLAS, using the new theory input.
Duration: 15'+5'
The precise measurement of the properties of the top quark is among the most important goals of the LHC. The signature of top quarks can only be measured through their decay products, which are almost exclusively a W boson and a b quark, and unbiased measurements of the top-quark pair production process are therefore performed in the final state of two W bosons and two b quarks (WWbb). However, the WWbb final state has further contributions from single-top production and even from channels without intermediate top quarks. At next-to-leading order in QCD, these channels interfere and can no longer be calculated separately, and since the top quarks can be off their mass shell, finite-width effects also become important.
In this contribution, we exploit a measurement of the WWbb final state in the di-lepton decay channel from ATLAS at 13 TeV, together with a next-to-leading order QCD prediction supplemented with a parton shower in the Powheg-Box-Res framework (denoted "bb4l"), for a determination of the top-quark mass and its width. We evaluate the impact of using the fully off-shell calculations and study the correlation between the top-quark mass and width. For the inference, we make use of a novel analytic parameter estimation ansatz, the Linear Template Fit, which will also be introduced briefly.
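For readers unfamiliar with the Linear Template Fit mentioned above, the following minimal sketch (with entirely hypothetical template and pseudo-data numbers, not the bb4l analysis inputs) illustrates the basic idea: each bin of an observable is linearised in the fit parameter using templates generated at a few reference values, and the parameter is then obtained analytically by weighted least squares.

```python
import numpy as np

# Toy illustration of a linear template fit (not the actual bb4l analysis code).
# Assumed inputs: templates t_ij of an observable in bins i, generated at
# reference parameter values m_j, plus measured data d_i with covariance V.

m_ref = np.array([170.0, 172.5, 175.0])            # hypothetical reference mass points [GeV]
templates = np.array([[120., 100.,  80.],          # bin 1 yields at each reference point
                      [ 60.,  75.,  90.],          # bin 2
                      [ 20.,  25.,  30.]])         # bin 3
data = np.array([105.0, 71.0, 24.0])               # pseudo-data
V = np.diag([10.0**2, 8.0**2, 5.0**2])             # measurement covariance (stat. only)

# Linearise each bin: t_i(m) ~ a_i + b_i * m, via a least-squares fit over the reference points.
X = np.vstack([np.ones_like(m_ref), m_ref]).T      # design matrix for the linearisation
coeff, *_ = np.linalg.lstsq(X, templates.T, rcond=None)
a, b = coeff                                       # intercepts and slopes per bin

# Analytic weighted-least-squares estimate of the parameter:
# minimise (d - a - b*m)^T V^{-1} (d - a - b*m), which has a closed-form solution for m.
Vinv = np.linalg.inv(V)
m_hat = (b @ Vinv @ (data - a)) / (b @ Vinv @ b)
sigma_m = 1.0 / np.sqrt(b @ Vinv @ b)

print(f"fitted parameter: {m_hat:.2f} +/- {sigma_m:.2f} GeV")
```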
Duration: 15'+5'
The simulation of processes involving heavy unstable particles, like the top quark, holds significant importance in LHC physics. In this contribution, we address the exclusive simulation of top-quark pair production with dileptonic decays, including the non-resonant diagrams, interferences, and off-shell effects arising from the finite top-quark width. Our simulations, utilizing the mg5_aMC@NLO program, achieve next-to-leading order accuracy in QCD and are matched to parton showers through the MC@NLO method. We present phenomenological results with direct relevance to the 13 TeV LHC. We benchmark the impact of the off-shell effects on representative distributions relevant for top-mass extractions, and compare our simulation to lower-accuracy simulations and to data.
Duration: 15'+5'
With the help of the pole approximation, observables with polarised intermediate resonances can be calculated. Gauge-boson-pair production represents a particularly interesting class of processes to study polarisation. The definition of polarised signals at amplitude level has enabled successful phenomenological studies of leptonically decaying vector bosons. The natural step forward from this is the investigation of bosons decaying into hadrons. In this talk I discuss the NLO QCD predictions for the production of a polarised ZW$^+$ pair, where the W$^+$ boson decays hadronically and the Z boson leptonically. Of particular interest are observables that are well suited for the discrimination amongst different polarisation states of both weak bosons. In addition, I analyse the significant impact of NLO QCD corrections on differential distributions.
Conveners:
Thomas Blake (University of Warwick)
Marzia Bordone (CERN)
Thibaud Humair (MPP)
Contact: eps23-conveners-t08 @desy.de
Measuring the mixing phases of the B0 and Bs mesons is very important to validate the CP violation paradigm of the Standard Model and to search for new physics beyond it. Golden modes to measure these quantities are those governed by tree-level $b\to c\bar{c}q$ transitions, which allow precise and theoretically clean determinations to be performed. In addition, measuring the mixing phases with modes receiving major contributions from penguin diagrams opens the possibility of revealing new physics appearing in the loops. In this presentation we show the most recent time-dependent measurements of the B0 and Bs mixing phases at LHCb.
Flavour physics represents a unique test bench for the Standard Model (SM). New analyses performed at the LHC experiments and new results coming from Belle II are bringing unprecedented insights into CKM metrology and new results for rare decays.
The CKM picture provides very precise SM predictions through global analyses. We present here the results of the latest global SM analysis performed by the UTfit collaboration, including the most up-to-date inputs from experiments, lattice QCD and phenomenological calculations as of Summer 2023.
Ref: https://arxiv.org/abs/2212.03894
The usual unitarity triangles of either the $3\times 3$ CKM quark flavor mixing matrix or the $3\times 3$ PMNS lepton flavor mixing matrix are not fully rephasing-invariant, although their areas are all equal to half of the corresponding Jarlskog invariant of CP violation. Here we propose the novel "rescaled unitarity triangles" (RUTs), whose sides are completely rephasing-invariant and whose heights are all equal to the Jarlskog invariant, to reconstruct the CKM and PMNS matrices. In particular, we find that these RUT quantities appear directly in the probabilities of neutrino oscillations or in the rates of B-meson decays, and that they satisfy an interesting Pythagoras-like theorem. The latter is therefore very useful for describing CP violation and testing the consistency of the CKM and PMNS unitarities in a more straightforward way.
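For reference, the rephasing-invariant quantity underlying the discussion above is the Jarlskog invariant; in one standard CKM convention it can be written as follows (textbook definitions, not formulas taken from the contribution itself):

```latex
% Jarlskog invariant for the CKM matrix V, and the area of any of the
% six unitarity triangles, which is the relation quoted in the abstract.
\begin{equation}
  J = \operatorname{Im}\!\left( V_{us}\, V_{cb}\, V_{ub}^{*}\, V_{cs}^{*} \right),
  \qquad
  \mathrm{Area}(\triangle) = \frac{J}{2}.
\end{equation}
```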
Charm physics, involving a heavy up-type quark, offers a pathway to search for new particles and couplings beyond the Standard Model complementary to that of B physics. A program based on precision measurements of charm lifetimes is now underway at Belle II, and benefits from the detector's outstanding vertexing performance and low-background environment. Recent results from measurements of D_s meson and Omega_c baryon lifetimes are presented. In addition, a novel algorithm to identify the flavor of neutral charm mesons is presented that effectively doubles the sample size for many measurements of CP violation and flavor mixing.
A systematic treatment of electromagnetic and strong corrections to semi-leptonic decays is needed in order to obtain a precise determination of phenomenological parameters of the Standard Model (SM), such as CKM matrix elements. In the presence of QED, the matrix element associated with the effective semi-leptonic operator on the lattice has to be renormalised, thus requiring a matching to the continuum results.
To this end, in collaboration with Dr. M. Gorbahn, Dr. S. Jäger and Mr. E. van der Merwe, we calculated the corresponding perturbative matching coefficients up to $O(\alpha\alpha_s)$.
In our work, we emphasise the importance of appropriate choices of renormalisation conditions on the lattice and show how these impact the resulting perturbative matching. In particular, we have found that the renormalisation conditions as defined and used in the literature thus far lead to extraneous and unnecessary QCD contributions, which manifest as an artificial dependence on the lattice matching scale.
We suggest improvements to rectify this problem and present the complete expression for the Leading-Log (LL) and Next-to-Leading-Log (NLL) strong corrections to the electromagnetic contributions of the low-scale Wilson coefficient.
Additional steps, including the matching to the full SM at the electroweak scale and the 3-loop anomalous dimensions of the semi-leptonic operator necessary to achieve the NLL result, will also be discussed.
We present updated predictions of $R(D^{(*)})$ using a modified power-counting within the heavy quark effective theory that results in a highly constrained set of second-order power corrections in the heavy quark expansion, compared to the nominal expansion. We analyze new experimental data to determine all $B \to D^{(*)}$ form factors within and beyond the Standard Model at $\mathcal{O}(\alpha_s, \alpha_s/m_{c,b}, 1/m_{c,b}^2)$. We further present additional tests of the residual chiral expansion using baryonic decays to predict $R(\Lambda_c)$ and determinations of the CKM matrix element $|V_{cb}|$.
Semileptonic $B$-meson decays allow for determining the magnitudes of the CKM matrix parameters $|V_{cb}|$ and $|V_{ub}|$, two fundamental parameters of the SM flavor sector. Belle II analyses use both exclusive decays, such as $B \to D^*\ell \nu$ and $B \to \pi \ell \nu$, and inclusive $X_c \ell \nu$ or $X_u \ell \nu$ final states restricted in phase space. The low-background collision environment, along with the possibility of partially or fully reconstructing one of the two $B$ mesons in the event, offers precisions on $|V_{cb}|$ and $|V_{ub}|$ approaching those of the world-best results.
Conveners:
Ilaria Brivio (Universita di Bologna)
Karsten Köneke (Universität Freiburg)
Matthias Schröder (Universität Hamburg)
Contact: eps23-conveners-t09 @desy.de
Rare Higgs boson production and decay modes provide a crucial probe of Higgs boson properties and of potential deviations from the predictions of the Standard Model. In this presentation, the latest results of measurements of rare Higgs boson production and decay modes by the ATLAS experiment are shown, using the large data samples collected in pp collisions at 13 TeV during Run 2 of the LHC.
We present searches for exotic decays of the Higgs boson with the CMS experiment. Searches where the 125 GeV Higgs boson decays into two low-mass scalars are discussed. We also present searches for the 125 GeV Higgs boson decaying to a Z boson and a low-mass scalar. Searches for invisibly decaying Higgs bosons are also covered.
Searches for additional Higgs bosons at masses above 60 GeV with the CMS experiment are presented. A variety of final states, such as decays into pairs of photons or pairs of tau leptons, are discussed. We also cover searches for additional Higgs bosons which decay into pairs of 125 GeV Higgs bosons, or a 125 GeV Higgs boson and another particle.
New narrow resonances are a generic signature of models of new phenomena beyond the Standard Model. The clean signatures of final states composed of two photons or a Z boson and a photon provide sensitivity to a wide class of such signals, in particular axion-like particles (ALPs) and Higgs-like scalar particles. The results of several such searches by the ATLAS experiment are presented for masses both above and below that of the Higgs boson, using the full Run 2 pp collision dataset collected at 13 TeV.
The trilinear Higgs coupling $\lambda_{hhh}$ is a crucial tool to probe the structure of the Higgs potential and to search for possible effects of physics beyond the Standard Model (SM). Focusing on the Two-Higgs-Doublet Model as a concrete example, I will discuss the calculation of the leading two-loop corrections to $\lambda_{hhh}$, and show that this coupling can be significantly enhanced with respect to its SM prediction in certain regions of parameter space. Taking into account all relevant corrections up to the two-loop level, I will show that the current experimental bounds on $\lambda_{hhh}$ already rule out significant parts of the parameter space that would otherwise be unconstrained. Finally, I will present a benchmark scenario illustrating the interpretation of the current results and future measurement prospects on $\lambda_{hhh}$. Recent results from direct searches of BSM scalars (such as ATLAS-CONF-23-034), and their implications for $\lambda_{hhh}$, will also be discussed in this context.
Most of the current experimental searches for charged Higgs bosons at the Large Hadron Collider (LHC) concentrate on the $tb$ and $\tau\nu$ decay channels. In this study, we analyze instead the feasibility of the bosonic decay channel $W^{\pm (*)} h$, with the charged gauge boson being either on-shell or off-shell and $h$ being a neutral light Higgs boson. Focusing on the Two-Higgs Doublet Model (2HDM), we consider the associated production of a charged Higgs with such a light neutral one, $pp\to H^\pm h$, at the LHC, followed by the aforementioned charged Higgs boson decay, which leads to various signatures. We specifically study the $W^{\pm (*)}+ 4b/4\gamma$ final states and provide several Benchmark Points (BPs) for Monte Carlo (MC) analysis. We show that these signals could well be found at the LHC with a centre-of-mass energy of 14 TeV and an integrated luminosity of 300 $\rm{fb}^{-1}$.
The new version of SusHi is introduced. It features a unified input for SM and BSM parameters for higher-order total cross sections for Higgs production in gluon fusion, heavy-quark annihilation, as well as Higgs-Strahlung. Like previous versions of SusHi, it provides links to codes like 2HDMC and FeynHiggs, but can also process standard SLHA output of spectrum generators like SoftSusy and SPheno.
Supersymmetric models with radiatively-driven naturalness (RNS) enjoy low electroweak fine-tuning whilst respecting LHC search limits on gluinos and top squarks and allowing for $m_h\simeq 125$ GeV. While the heavier Higgs bosons $H,\ A$ may have TeV-scale masses, the SUSY conserving $\mu$ parameter must lie in the few hundred GeV range. Thus, in natural SUSY models there should occur large heavy Higgs boson branching fractions to electroweakinos, with Higgs boson decays to higgsino plus gaugino dominating when they are kinematically accessible. These SUSY decays can open up new avenues for discovery.
We investigate the prospects of discovering heavy neutral Higgs bosons $H$ and $A$ decaying into light plus heavy chargino pairs which can yield a four isolated lepton plus missing transverse energy signature at the LHC and at a future 100~TeV $pp$ collider. We find that discovery of heavy Higgs decay to electroweakinos via its $4\ell$ decay mode is very difficult at HL-LHC. For FCC-hh or SPPC, we study the $H,\ A \to $ SUSY reaction along with dominant physics backgrounds from the Standard Model and devise suitable selection requirements to extract a clean signal for FCC-hh or SPPC with $\sqrt{s}=100$ TeV, assuming an integrated luminosity of 15 $ab^{-1}$. We find that while a conventional cut-and-count analysis yields a signal statistical significance greater than $5\sigma$ for $m_{A,H}\sim 1.1-1.65$ TeV, a boosted-decision-tree analysis allows for heavy Higgs signal discovery at FCC-hh or SPPC for $m_{A,H}\sim 1-2$ TeV.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10 @desy.de
Precision studies of the properties of the Higgs and gauge bosons may provide a unique window for the discovery of new physics at the LHC. New phenomena can in particular be revealed in the search for lepton-flavor-violating or exotic decays of the Higgs bosons, as well as in their possible couplings to hidden-sector states that do not interact under Standard Model gauge transformations. This talk presents recent searches by the ATLAS experiment for decays of the Higgs bosons to new particles, using collision data at sqrt(s) = 13 TeV collected during the LHC Run 2.
Searches for resonances in multi-boson final states (VVV, VV, VH, HH and HY, where V = W, Z and Y is a new scalar particle) with the CMS detector are presented. The results are based on the large dataset collected during Run 2 of the LHC at a centre-of-mass energy of 13 TeV. The analyses are optimised for high sensitivity over a large range in resonance mass. Many of the relevant backgrounds are estimated using data-driven techniques, and the results are interpreted under various beyond-the-Standard-Model scenarios.
Leptoquarks are hypothetical particles that appear in many theoretical extensions of the Standard Model. They are predicted to mediate interactions between quarks and leptons, bridging the gap between the two fundamental classes of particles. Other theoretical models, such as supersymmetry, introduce a link between bosons and fermions, also predicting additional particles such as stops. Both extensions offer a compelling avenue for exploring new physics beyond the Standard Model and have the potential to explain a variety of experimental observations. Decays of supersymmetric particles and leptoquark decays to neutrinos lead to a characteristic shared signature of missing energy, which allows searches to be easily interpreted in both models. The ATLAS experiment at the Large Hadron Collider is conducting a comprehensive program of searches for leptoquarks and supersymmetric particles, targeting interactions with particles from all three generations. This talk will present the most recent results from the ATLAS collaboration's searches for leptoquarks and stops in a range of experimental signatures, including flavour-diagonal and cross-generational final states.
Many theories beyond the Standard Model predict new phenomena, such as heavy vector or scalar resonances, vector-like quarks, and leptoquarks, in final states containing bottom or top quarks. Such final states offer great potential to reduce the Standard Model background, although with significant challenges in reconstructing and identifying the decay products and modelling the remaining background. The recent 13 TeV pp results, along with the associated improvements in identification techniques, will be reported.
We present results of searches for new heavy fermions at CMS. The results include searches for third-generation quark and lepton partners with vector-like properties. The results are based on the large dataset collected during Run 2 of the LHC at a centre-of-mass energy of 13 TeV. We search for these particles over a wide range of masses using several categories of reconstructed objects, from multi-leptonic to fully hadronic final states.
Many theories beyond the Standard Model predict new phenomena, such as Z', W' bosons, KK gravitons, or heavy leptons, in final states with isolated, high-pT leptons (e/mu/tau) or photons. Searches for new physics with such signatures, produced either resonantly or non-resonantly, are performed using the ATLAS experiment at the LHC. This includes a novel search that exploits the lepton-charge asymmetry in events with an electron and muon pair. The most recent 13 TeV pp results will be reported.
Many new physics models, such as the Sequential Standard Model, Grand Unified Theories, models of extra dimensions, or models with e.g. leptoquarks or vector-like leptons, predict heavy mediators at the TeV energy scale. We present recent results of such searches in leptonic and photonic final states obtained using data recorded by the CMS experiment in Run 2 of the LHC.
Conveners:
Alessandra Gnecchi (INFN, Milan)
Craig Lawrie (DESY)
Alexander Westphal (DESY)
Contact: eps23-conveners-t11 @desy.de
I will summarise recent progress in the formulation of flavour mixing and oscillations in pseudo-Hermitian quantum theories with non-Hermitian mass mixing matrices [arXiv: 2302.11666]. Such non-Hermitian quantum theories are made viable by the existence of a discrete anti-linear symmetry of the Hamiltonian, which ensures that single-particle states have real energies. I will describe the self-consistent construction of oscillation and survival probabilities that are consistent with positivity and unitarity, and highlight features of these pseudo-Hermitian flavour oscillations that are unique compared to their Hermitian counterparts.
One of the most severe bottlenecks to reaching high-precision predictions in QFT is the calculation of multiloop multileg Feynman integrals. Several new strategies have been proposed in recent years, allowing impressive results with deep implications in particle physics. Still, the efficiency of such techniques decreases drastically when many loops and legs are included. In this talk, we explore the implementation of quantum algorithms to optimize the integrands of scattering amplitudes. We rely on the manifestly causal loop-tree duality, which translates loop integrals into phase-space integrals and avoids the spurious singularities due to non-causal effects. We then build a Hamiltonian encoding the causal-compatible contributions and minimize it using a Variational Quantum Eigensolver. Our very promising results point towards a potential speed-up in achieving a more numerically stable representation of Feynman integrals by using quantum computers.
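As a rough illustration of the variational step described above, the following toy sketch (a generic two-qubit Hamiltonian and a simple rotation ansatz, simulated as state vectors; this is not the causal loop-tree-duality Hamiltonian of the contribution) minimises the energy expectation value with a classical optimiser, which is the core loop of a Variational Quantum Eigensolver.

```python
import numpy as np
from scipy.optimize import minimize

# Toy VQE on a 2-qubit Hamiltonian (statevector simulation).
# The Hamiltonian below is an arbitrary Hermitian example, NOT the
# causal-configuration Hamiltonian used in the talk.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = np.kron(Z, Z) + 0.5 * np.kron(X, I2) + 0.5 * np.kron(I2, X)

def ansatz(theta):
    """Simple product ansatz: an independent Y-rotation on each qubit."""
    def ry(t):
        return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2),  np.cos(t / 2)]])
    psi0 = np.array([1., 0.])                       # |0> on each qubit
    return np.kron(ry(theta[0]) @ psi0, ry(theta[1]) @ psi0)

def energy(theta):
    """Expectation value <psi(theta)| H |psi(theta)> to be minimised classically."""
    psi = ansatz(theta)
    return float(np.real(psi.conj() @ H @ psi))

result = minimize(energy, x0=[0.1, 0.1], method="Nelder-Mead")
exact = np.linalg.eigvalsh(H)[0]
print(f"VQE estimate: {result.fun:.4f}   exact ground-state energy: {exact:.4f}")
```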
Recently there has been intense research activity on the interplay between symmetries and entanglement, exploiting the block-diagonal structure of the reduced density matrix (RDM) in each charge sector. The goal of this talk is to study how the presence of a global U(1) charge affects the modular flow, a central object in the algebraic description of quantum field theory. Roughly speaking, the modular flow is a generalized time evolution induced by the RDM of a given spatial region. I will discuss the symmetry resolution of the modular flow and the modular correlation function of U(1)-invariant operators. I will provide a consistent definition of the symmetry-resolved modular flow for a local algebra of operators associated with a sector of fixed charge. I will also discuss the symmetry-resolved modular correlation functions, showing that they satisfy the KMS condition in each symmetry sector. In order to complement this analysis with an example, I will provide a toolkit for computing the symmetry-resolved modular correlation function of the charge density operator in free fermionic theories. I will show that, in a 1+1-dimensional free massless Dirac field theory, this quantity is independent of the charge sector at leading order in the ultraviolet cutoff expansion. This feature can be regarded as an equipartition of the modular correlation function.
Entropy is the most innovative concept in thermodynamics. However, it seems that entropy has been defined and computed conveniently in each context, and that a unified definition of entropy for general relativistic field theory has not been established.
Recently, the author and collaborators have proposed a unified method to construct an entropy current and entropy density as a conserved current and a conserved charge density, respectively, for a general field theory defined on a general curved spacetime with a covariantly conserved energy-momentum tensor, even without global symmetry. An important consequence of the proposal is that the entropy computed by the proposed method for a couple of classic gravitational systems satisfies both the local Euler relation and the first law of thermodynamics non-perturbatively with respect to the Newton constant. Other important aspects will also be reported within the allotted time.
Traditionally, scalar $\phi^4$ theory in four dimensions is thought to be quantum trivial in the continuum. This tradition is apparently well grounded both in physics arguments and mathematical proofs. Digging into the proofs one finds that they do not actually cover all physically meaningful situations, in particular the case of multi-component fields and non-polynomial action. In this work, I study multi-component scalar field theories in four dimensions in the continuum and show that they do evade the apparently foregone conclusion of triviality. Instead, one finds a non-trivial interacting theory that has two phases, bound states and non-trivial scattering amplitudes in the limit of many components. This has potentially broad implications, both for the foundations of quantum field theory as well as for the experimentally accessible Higgs sector of the Standard Model.
The anomalous (odd intrinsic parity) Lagrangian in mesonic Chiral Perturbation Theory is determined to next-to-next-to-leading order ($p^8$) thereby completing the order $p^8$ Lagrangian [1810.06834]. The number of independent operators and the operator basis will be discussed for a general number $N_f$ of light quark flavours as well as for the physical cases $N_f=2,3$. The explicit construction of the Lagrangian agrees with the number of operators derived using the Hilbert series [2009.01239].
The AdS/CFT correspondence is a powerful tool for studying quantum gravity and strongly coupled quantum field theories. One of its simplest predictions is that the on-shell action of type IIB supergravity on $AdS_5 \times S^5$ is a non-zero number fixed by the boundary data, despite being zero in the standard formulation of supergravity. This apparent paradox was recently resolved by Kurlyand and Tseytlin, who showed that one needs to add suitable boundary terms to the supergravity action to make it consistent with AdS/CFT. In this talk, I will revisit this issue from the perspective of Sen's formalism for type IIB supergravity, which incorporates the self-dual five-form field strength in a manifestly covariant way. I will demonstrate that Sen's formalism also naturally leads to a specific boundary term reproducing the AdS/CFT prediction. However, the boundary term in Sen's formalism, as I will argue, also can be viewed as a candidate for the complete boundary term of the entire type IIB string theory in $AdS_5 \times S^5$. This result can provide robust evidence for the strongest version of the AdS/CFT conjecture. This is also an interesting result since the general problem of constructing appropriate boundary terms for the spacetime actions of string theory is poorly understood.
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12 @desy.de
The ALICE Collaboration proposes a completely new apparatus, ALICE 3, for the LHC Runs 5 and 6 (arXiv:2211.02491). The detector consists of a large pixel-based tracking system covering eight units of pseudorapidity, complemented by multiple systems for particle identification, including silicon time-of-flight layers, a ring-imaging Cherenkov detector, a muon identification system, and an electromagnetic calorimeter. A track pointing resolution of better than 10 $\mu$m for $p_{\rm T}$>200 MeV/c can be achieved by placing the vertex detector on a retractable structure inside the beam pipe. ALICE 3 will, on the one hand, enable novel studies of the quark-gluon plasma and, on the other hand, open up important physics opportunities in other areas of QCD and beyond. The main new studies in the QGP sector focus on low-$p_{\rm T}$ heavy-flavour production, including beauty hadrons, multi-charm baryons and charm-charm correlations, as well as on precise multi-differential measurements of dielectron emission to probe the mechanism of chiral-symmetry restoration and the time-evolution of the QGP temperature. Besides QGP studies, ALICE 3 can uniquely contribute to hadronic physics, with femtoscopic studies of the interaction potentials between charm mesons and searches for nuclei with charm, and to fundamental physics, with tests of the Low theorem for ultra-soft photon emission. The presentation will cover the detector concept, the physics performance, and the status of novel sensor R&D.
The proposed Circular Electron Positron Collider (CEPC) imposes new challenges for the vertex detector in terms of pixel size and material budget. A Monolithic Active Pixel Sensor (MAPS) prototype, TaichuPix, based on a data-driven structure and a column-drain readout architecture, has been implemented to achieve high spatial resolution and fast readout. In December 2022, a beam test setup with six layers of TaichuPix-3 chips was tested at the DESY TB21 beamline. The offline analysis results indicate that the spatial resolution can reach 5 $\mu$m and the detection efficiency is better than 98%. The baseline vertex detector was designed with a 6-ladder structure with double-sided TaichuPix-3 chips. Another beam test was performed in April 2023 to verify the performance of the vertex detector prototype. The team recorded sufficient valid data during the beam test and the offline analysis is in progress.
The ALICE collaboration is planning to install the next upgrade of the Inner Tracking System (ITS3) during the LHC Long Shutdown 3 (2026-2028). The aim of this upgrade is to reduce the material budget of the three innermost layers from 0.3% of a radiation length $X_0$ to 0.05% $X_0$ per layer, essentially reducing it to the silicon contribution only. In order to achieve this, the layers of the current detector will be replaced with truly cylindrical layers made of wafer-scale, thin and flexible stitched CMOS pixel sensors.
These sensors, made using a 65 nm CMOS process and thinned down to less than 50 $\mu$m, will be flexible enough to form cylindrical shapes which will be installed at a distance of respectively 18, 24, and 30 mm from the interaction point.
In order to produce wafer-scale sensors, a process called stitching is used to combine small reticles and build sensors up to 300 mm in length on a single wafer. This allows full coverage of half of a cylindrical layer with only one module, for a total of 6 modules for the whole detector.
Thanks to the self-supporting property of these modules, the mechanical supports can be reduced to carbon foam spacers only. Furthermore, the power consumption of the sensors will stay below 20 mW/cm$^2$, enabling the use of forced air cooling. This upgrade will provide unprecedentedly good tracking and vertexing capabilities, improving the pointing resolution by a factor of 2 with respect to the current detector.
This contribution will provide an overview of the ALICE ITS3 detector and of the R&D achievements, including: the validation of the 65 nm technology for particle tracking and radiation hardness, the achievements in terms of bending and flexibility, the integration of wafer-scale silicon detectors, and the first production of stitched sensor prototypes.
The LHCb detector is set to undergo a significant upgrade during the upcoming long shutdown 4 of the LHC. This upgrade will result in a nearly tenfold increase in instantaneous luminosity, reaching $1.5 \times 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$, with the integrated luminosity expected to rise from $50\,\mathrm{fb}^{-1}$ to $300\,\mathrm{fb}^{-1}$. To effectively handle the elevated track densities, the downstream tracking stations will employ silicon pixel sensors in the inner region, where particle fluences are highest. The MightyPix ASIC is a monolithic HV-CMOS sensor based on the HV-MAPS families MuPix and ATLASPix, specifically designed to meet the requirements of LHCb. The Mighty Tracker silicon detector, covering an extensive active area of $18\,\mathrm{m}^2$, will comprise over $2 \times 10^9$ pixels. The first iteration of the chip, along with its features and design, will be presented. Notable recent advances in the mechanical and electronic design of the silicon modules will also be shown. Progress on prototyping developments, which focus on simulation, verification and FPGA emulation work, will be outlined. The latest beam test campaigns have yielded valuable insights into the radiation performance of precursor chips of the MightyPix. Noteworthy highlights will be presented, accompanied by the plans in place to maximise the chip's performance.
Signal reduction is the most important radiation-damage effect on the performance of silicon tracking detectors in ATLAS. Adjusting the sensor bias voltage and detection threshold can help mitigate the effects, but it is important to have simulated data that reproduce the evolution of performance with the accumulated luminosity, and hence fluence. The ATLAS collaboration has developed and implemented an algorithm that reproduces the signal loss and the changes in Lorentz angle due to radiation damage. This algorithm is now the default for Run 3 simulated events. In this talk the algorithm will be briefly presented and results compared to the first Run 3 collision data. For the high-luminosity phase of the LHC (HL-LHC) a faster algorithm is necessary, since the increase in collision, event, track and hit rates imposes stringent constraints on the computing resources that can be allocated for this purpose. The philosophy of the new algorithm will be presented, and the strategy for implementing it and the needed ingredients will be discussed.
The new CMS MIP Timing Detector (MTD) will provide precision timing information for charged particles, with hermetic coverage up to a pseudo-rapidity of |η|=3. This upgrade will mitigate the effects of pile-up expected at the High-Luminosity LHC, while bringing new and unique capabilities to the CMS detector. The endcap regions of CMS will be instrumented with two disks of silicon devices with excellent time resolution, covering a pseudorapidity range from about 1.6 to 3.0. This Endcap Timing Layer (ETL) will utilize low-gain avalanche diode (LGAD) sensors to detect the time-of-arrival of charged particles with precision of around 30 ps. The use of timing and tracking together will give CMS excellent association of tracks to vertices even when the vertices are very close together in space, recovering the Phase-1 quality of event reconstruction. We will present an overview of the ETL design and report on the status of the developments of the full-size systems for ETL, including results with the first full-size ETROC ASIC integrated with prototype front-end electronics and data acquisition software.
For the High Luminosity era of the LHC, the accelerator will undergo a major upgrade to significantly increase the deliverable luminosity with respect to the current one. To withstand the harsh experimental conditions in terms of pileup and radiation at the HL-LHC and maintain the current excellent performance, substantial upgrades of the experiments are ongoing. In particular, the CMS upgrade will include a novel timing layer, the MIP Timing Detector (MTD), designed to measure the time of arrival of charged particles with a resolution of about 30-60 ps. The MTD will equip both the barrel and the endcap part of CMS. The sensor technology chosen for the central part of the MTD is based on LYSO:Ce scintillating crystals read out by silicon photomultipliers. In this talk we will present an overview of the Barrel Timing Layer (BTL) design and describe the optimization of the sensors. Prototype sensors were tested both in the laboratory and at test beams, showing a time resolution performance compliant with the design goal. These results represent an important reference for the detector validation, which will shortly lead the CMS MTD collaboration to the assembly phase of the BTL.
The ATLAS innermost detector layer will undergo a broad range of upgrades for the HL-LHC phase. To cope with the new detector design and the large set of modules to be integrated into the ITk, a demonstrator-based project is being conducted at the SR1 facility at CERN to test and integrate a large number of pixel modules equipped with RD53a electronics.
To mimic the ITk detector, a demonstrator project for the outer barrel (OB) section is ongoing with 34 modules. The RD53a modules go through several production and operation stages before being loaded onto the OB demonstrator and finally integrated with real on-detector services for a full system test of multiple modules. Additionally, to monitor module performance from the reception stage to the final system test on the demonstrator, electrical scans ranging from the front-end readout chip to the sensor level are carried out, together with an X-ray scan to find any open bumps at each production step. Furthermore, the module performance is compared at the different production stages to allow a better understanding of any undesired trend of performance degradation.
A comprehensive study will be presented, tracking the main module performance features and applying a newly developed tool to identify, categorize and locate different pixel defects, in order to better understand any degradation foreseen in the large production phase for the ITk modules and to define a detailed quality control (QC) scheme.
In addition, anticipating the overall testing stages makes it possible to quantify the production yield based on module performance. Using the module QC tool, a combined analysis of the electrical scans of the pixel detector circuit, starting from the digital front-end part and moving towards the sensor, is carried out. If a pixel defect is found at any stage, it is recorded by the tool and counted for the individual module and later for the total number of pixel channels in the OB demonstrator project.
This approach is used to tackle the difficulties expected in module and stave ratings once ITk production starts. The most likely pixel defects are thus anticipated, based on classifying the origin of each pixel defect failure with the QC tool. Moreover, to enable a deeper understanding of the expected difficulties, the pixel matrices of the 34 modules are stacked to identify any specific geographical pixel region containing a large number of pixel defects.
In summary, this work studies in depth the performance of pixel quad modules using RD53a front-end electronics, considering the implications of the different testing stages during integration in the OB construction. The methodology applied here can be extended or adapted in the future to the final module production, allowing quick and systematic identification of defects in module production and on-stave integration.
We have all heard the story: classical physics supposedly predicts that the intensity of radiation emitted by a blackbody diverges in the ultraviolet, and that is why Planck introduced the quanta of light.
This story, however, is a myth fabricated long after the historical events. The term "ultraviolet catastrophe" was not coined until 1911, and its origin dates back to 1905, five years after Planck's work.
In this presentation we will learn about the true course of events that led Planck to quantize the energy of the atomic oscillators in a blackbody.
The desire to understand matter and its interactions is as old as human civilization. With the scientific revolution, philosophical considerations turned into physical and mathematical theories, which developed hand in hand with the technical progress of experimental physics. Modern particle accelerators grant unprecedented insights into the fundamental building blocks of matter, which also lead to new findings about the beginning of our universe. One of the most pressing questions of contemporary physics, the origin of mass, is directly connected to the Higgs particle and has been accessible to experimental investigation since its discovery in 2012. The talk describes, in an accessible way, current research at the world's largest particle accelerator, the Large Hadron Collider at the European research centre CERN.
CMS searches for dark matter, including those with dark-portal interactions, are presented. Various topologies and kinematic variables are explored. In this talk, we focus on the recent results obtained using the full Run 2 dataset collected at the LHC.
The presence of a non-baryonic Dark Matter (DM) component in the Universe is inferred from the observation of its gravitational interaction. If Dark Matter interacts weakly with the Standard Model (SM) it could be produced at the LHC. The ATLAS Collaboration has developed a broad search program for DM candidates in final states with large missing transverse momentum produced in association with other SM particles (light and heavy quarks, photons, Z and H bosons, as well as additional heavy scalar particles) and searches where the Higgs boson provides a portal to Dark Matter, leading to invisible Higgs decays. The results of recent searches on 13 TeV pp data from the LHC, their interplay and interpretation will be presented.
We consider an axion-like particle decaying invisibly at Belle II, proposing a nearly background-free search in the $e^+e^-+\text{invisible}$ channel. This search leverages dedicated kinematic variables, whose behaviour and performance we test under a simplified, yet realistic, treatment of detector effects. We find that at the Belle II experiment the $e^+e^-+\text{invisible}$ channel has the potential to be as sensitive as the mono-$\gamma$ channel over the full ALP mass range that can be probed by Belle II, and can significantly improve the bounds expected for O(GeV) ALP masses. This demonstrates that new searches based on high-signal-purity channels can give comparable or better bounds than searches based on more traditional large-background final states. We explore the implications of the expected reach of our proposal for dark matter freeze-out through ALP-mediated annihilations.
Belle II has unique reach for a broad class of models that postulate the existence of dark matter particles with MeV to GeV masses. This talk presents recent world-leading physics results from Belle II searches for dark $Z'$ decays as well as for long-lived (pseudo)scalars in $B$ decays.
In the Standard Model, the introduction of a singlet complex scalar field that acquires a vacuum expectation value may give rise to a cosmologically stable pseudo-Nambu-Goldstone boson (pNGB): a good dark matter (DM) candidate with novel features at the phenomenological level, such as the reduction of the direct-detection signal. This work extends the scenario by including a second cosmologically stable particle: a fermion singlet. The pNGB and the new fermion can be regarded as DM candidates simultaneously, interacting with the Standard Model through a Higgs portal via two non-degenerate Higgs bosons. We explore the thermal freeze-out of this scenario, with special emphasis on the increasing yield of the pNGB before it completely freezes out (recently called bouncing DM). We test the model against collider, relic-abundance and direct-detection constraints, and we explore the consequences of the bouncing yield for present-day indirect-detection observables.
Hidden sectors are ubiquitous in supergravity theories, in strings and in branes. Well motivated models such as the Stueckelberg hidden sector model could provide a candidate for dark matter. In such models, the hidden sector communicates with the visible sector via the exchange of a dark photon (dark Z′) while dark matter is constituted of Dirac fermions in the hidden sector. Using data from collider searches and precision measurements of SM processes as well as the most recent limits from dark matter direct and indirect detection experiments, we perform a comprehensive scan over a wide range of the Z′ mass and set exclusion bounds on the parameter space from sub-GeV to several TeV. We then discuss the discovery potential of an O(TeV) scale Z′ at HL-LHC and the ability of future forward detectors to probe very weakly interacting sub-GeV Z′ bosons. Our analysis shows that the parameter space in which a Z′ can decay to hidden sector dark matter is severely constrained whereas limits become much weaker for a Z′ with no dark decays. The analysis also favors a self-thermalized dark sector which is necessary to satisfy the dark matter relic density.
Conveners:
Livia Conti (INFN)
Carlos Perez de los Heros (Uppsala University)
Martin Tluczykont (Universität Hamburg)
Gabrijela Zaharijas (UNG)
Contact: eps23-conveners-t01 @desy.de
The LIGO-Virgo detections made so far have neglected the realistic astrophysical environment in which the compact binaries live. Gravitational wave emission will be affected by the source surroundings, and the environmental imprints should be observable as a dephasing of the emitted signal with respect to the vacuum scenario.
We present a first investigation of environmental effects for the events in the first gravitational wave catalog by LIGO-Virgo. In particular, we focus on accretion, dynamical friction and gravitational pull, adding corrections at -4.5 and -5.5 Post-Newtonian order to the GW phase relative to the vacuum quadrupole emission. We also give an estimate of the 90% bounds on the densities surrounding the sources.
The current catalog of gravitational waves (GWs) from binary black hole (BBH) mergers allows refined tests to be conducted probing the validity of general relativity (GR) against alternative predictions. It has been proposed that black holes (BHs) may have exotic characteristics making them different from GR BHs. Such exotic compact objects (ECOs) would radiate repeated GW pulses of widely uncertain morphology (echoes) in the post-merger phase, whose detection would also help to infer the fundamental properties of ECOs.
I will present a method for detecting echoes and inferring their main observables, if any, that is agnostic to the properties of these GW pulses. The methodology is implemented in a dedicated version of coherent WaveBurst (cWB), an unmodelled GW transient search algorithm developed within the LIGO Scientific Collaboration (LSC) and Virgo Collaboration and widely used on LIGO-Virgo-KAGRA data.
We will discuss the results from the loudest BBH detections in LIGO-Virgo open data (O1, O2, and O3). In particular, we will present the first quantitative upper limits on the amplitude of echo-like signals.
Star clusters provide a dynamical formation channel for binary black holes (BBHs). In these dense systems, BBH mergers are driven by gravitational wave emission and by binary-single encounters with other objects in the environment. The talk will focus on the gravitational wave (GW) signals generated by close encounters between a BBH and a third black hole, highlighting the various outcomes that can arise from these interactions. To estimate the GW spectrum of these signals, numerical simulations were performed using the N-body code ARWV (Chassonnery et al. 2019; Chassonnery & Capuzzo-Dolcetta 2021), with stellar-mass black holes serving as input masses. The talk will also consider the potential for these burst signals to fall within the sensitivity band of current and future ground-based detectors, depending on the parameters involved.
Gravitational-wave (GW) observations provide unique information about compact objects and, as detector sensitivity increases, new astrophysical populations could emerge. Close hyperbolic encounters are one such example: black holes and neutron stars in dense clusters are expected to have unbound orbits, which manifest as GW burst signals in the frequency band of current detectors.
In this talk, we present the search for GWs from hyperbolic encounters in the second half of the third Advanced LIGO-Virgo observing run (O3b). We perform a model-informed search with the Coherent WaveBurst algorithm enhanced by machine learning, exploiting for the first time 3-PN-accurate hyperbolic approximants. The main new result of this search is the assessment of the observable sensitivity volume achieved in O3 and the resulting constraint on the rate density of such sources.
We derive limits on the intrinsic charm (IC) content of the proton, considering various theoretical models for IC, using as a basis the results on the upper limit of prompt neutrino fluxes from the IceCube collaboration. We work under the hypothesis that both the standard heavy-flavour production and decay mechanism, mainly driven by gluon-gluon partonic interactions in pQCD, and the one involving IC contribute to the total prompt neutrino flux reaching the IceCube detector. We show how QCD uncertainties on the standard calculation affect our limits on IC.
Based on work in progress and on S. Ostapchenko et al., PRD 107 (2023) 023014.
The IceCube Neutrino Observatory is a cubic-kilometer ice Cherenkov detector located at the geographic South Pole. Thousands of photomultipliers embedded in the deep glacial ice have been used to successfully detect and reconstruct astrophysical neutrino interactions over the last decade. This rich data set has provided evidence for several astrophysical neutrino sources, demonstrating that neutrinos are viable messengers to study these extreme environments. To further improve our abilities to observe these neutrinos, two extensions to the IceCube Neutrino Observatory are planned or under construction. In 2025/26 the IceCube Upgrade will be installed, which will deploy several new photosensor technologies currently under R&D to improve Cherenkov detection. It will also include an array of calibration devices that will be used to reduce systematic uncertainties for the 10+ years of data already collected. In a next step, the planned IceCube-Gen2 facility will significantly expand on the current sensitivity and observable energy range by increasing the instrumented volume, both in the ice and on the surface, and by employing the radio detection technique. In this contribution, I will describe the scientific motivation, the novel technology developed, as well as the overall status of the projects.
Conveners:
Summer Blot (DESY)
Pau Novella (IFIC)
Davide Sgalaberna (ETH)
Jessica Turner (Durham University)
Contact: eps23-conveners-t04 @desy.de
We present a method to verify the Mikheyev-Smirnov-Wolfenstein (MSW) effect during the propagation of SN neutrinos from the SN core to the Earth. The non-MSW scenario to be distinguished from the MSW one assumes incoherent flavor transition probabilities for neutrino propagation in vacuum. Our approach involves studying the time evolution of neutrino event rates in liquid-argon, liquid-scintillator and water-Cherenkov detectors. Using currently available simulations for SN neutrino emissions, the time evolution of $\nu_e{\rm Ar}$ event rates and $\bar{\nu}_e$ inverse beta-decay event rates, and the corresponding cumulative event fractions, are calculated up to t=100 ms in the DUNE, JUNO and Hyper-Kamiokande detectors, respectively. It is shown that the area under the cumulative time distribution curve from t=0 to t=100 ms in each detector, and the ratios of these areas, can be used to discriminate between different flavor transition scenarios of SN neutrinos.
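A minimal sketch of the proposed discriminant, under simplifying assumptions (a made-up event-rate time profile rather than the simulated SN fluxes folded with the detector responses used by the authors), shows how the cumulative event fraction up to t = 100 ms and the area under it would be computed:

```python
import numpy as np

# Toy illustration: cumulative event fraction and its integrated "area" up to 100 ms.
# rate(t) is a hypothetical event-rate profile [events/ms]; in the actual study it
# comes from SN simulations folded with the DUNE/JUNO/Hyper-K detector responses.
t = np.linspace(0.0, 100.0, 1001)                                 # time in ms
rate = 50.0 * (1.0 - np.exp(-t / 10.0)) * np.exp(-t / 80.0)       # made-up rise-and-fall shape

cumulative = np.cumsum(rate) * (t[1] - t[0])                      # integrated events up to t
fraction = cumulative / cumulative[-1]                            # cumulative event fraction in [0, 1]
area = np.trapz(fraction, t)                                      # area under the cumulative curve [ms]

print(f"area under the cumulative fraction (0-100 ms): {area:.1f} ms")
```

The same quantity would be evaluated per detector, and the ratios of the resulting areas then serve as the scenario-discriminating observable described above.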
FASER, the ForwArd Search ExpeRiment, is an LHC experiment located 480 m downstream of the ATLAS interaction point, along the beam collision axis. One main physics goal of FASER and its sub-detector FASERnu is to detect and study TeV-energy neutrinos, the most energetic neutrinos ever detected from a human-made source. FASER has been taking data since the start of LHC Run 3 in July 2022. This talk will present the status of the experiment with a special focus on the first neutrino physics results from Run 3 data.
SND@LHC is a compact and stand-alone experiment to perform measurements with neutrinos produced at the LHC in a hitherto unexplored pseudo-rapidity region of 7.2 < 𝜂 < 8.6, complementary to all the other experiments at the LHC. The experiment is located 480 m downstream of IP1 in the unused TI18 tunnel. The detector is composed of a hybrid system based on a 800 kg target mass of tungsten plates, interleaved with emulsion and electronic trackers, followed downstream by a calorimeter and a muon system. The configuration allows efficiently distinguishing between all three neutrino flavours, opening a unique opportunity to probe physics of heavy flavour production at the LHC in the region that is not accessible to ATLAS, CMS and LHCb. This region is of particular interest also for future circular colliders and for predictions of very high-energy atmospheric neutrinos. The physics programme includes studies of charm production, and lepton universality tests in the neutral sector. The detector concept is also well suited to searching for Feebly Interacting Particles via signatures of scattering in the detector target. The first phase aims at operating the detector throughout LHC Run 3 to collect a total of 250 fb−1. The experiment was recently installed in the TI18 tunnel at CERN and has collected its first data in 2022. A new era of collider neutrino physics has started.
Muon-neutrino charged-current pion production on different nuclei in the Delta(1232) resonance region is an important interaction process for accelerator-based neutrino oscillation experiments. Here we present new high-statistics differential cross-section measurements of pi-plus and pi-zero production on scintillator, carbon, water, iron, and lead targets recorded by the MINERvA experiment using a wide-band energy (<E_nu> ~ 6 GeV) numu beam. These results include the first measurements of incoherent pion production cross-section ratios between various nuclei, and they are directly sensitive to nuclear effects. The data indicate the need for a low-Q^2 suppression that current generators cannot reproduce.
Precise knowledge of how neutrinos interact with matter is essential for measuring neutrino oscillations in long-baseline experiments. At the T2K experiment, the near detector complex measures neutrino interactions to constrain cross section models for oscillation studies and characterises the beam flux. In addition, the near detector complex provides a separate platform for performing neutrino-nucleon cross section measurements. The composition and design of one of the near detectors, ND280, allows for a large variety of cross section measurements on different targets to be performed.
The most recent cross section measurements from the ND280 detector, together with an overview of the T2K measurement strategy adopted to reduce the model dependence, will be presented. With increasing statistics, dedicated efforts are devoted to investigating rare or poorly studied interaction channels, including electron-neutrino, kaon and neutral-current interactions. In this talk, the latest measurements of pion production will be shown. This includes measurements of transverse pion kinematics and an improved analysis of the coherent pion production cross section, which makes use of an anti-neutrino sample for the first time.
Long-baseline (LBL) neutrino oscillation experiments search for Charge-Parity (CP) violation in the leptonic sector by precisely measuring the $\nu_\mu\to\nu_e$ and $\overline{\nu}_\mu\to\overline{\nu}_e$ appearance probabilities.
One of the dominant systematic uncertainties on the measurements of CP violation comes from our modeling of the $\nu_e/\overline{\nu}_e$ cross-section ratio, which is subject to a range of uncertainties related to poorly-constrained nuclear physics processes.
Whilst tight constraints on the $\nu_\mu/\overline{\nu}_\mu$ cross-section can be achieved using LBL experiments' near detector data, the lepton mass differences mean that the extrapolation to the $\nu_e/\overline{\nu}_e$ case is not trivial.
Currently running LBL experiments reach a sensitivity to exclude the CP-conserving hypothesis of about three standard deviations for a relatively large range of $\delta_{CP}$ values; hence, a more accurate evaluation of the $\nu_e/\overline{\nu}_e$-related uncertainties becomes increasingly crucial.
Following up on work by Nikolakopoulos et al., we present an analysis quantifying the potential for mis-modeling of the $\nu_\mu/\nu_e, \ \overline{\nu}_\mu / \overline{\nu}_e$ and $\nu_e/\overline{\nu}_e$ cross sections due to nuclear effects as a model spread in the full kinematic phase space for CCQE interactions.
This impact is then propagated to simulated experimental configurations based on the Hyper-K and ESS$\nu$SB experiments.
Significant differences between the theoretical models are found, which largely lie in regions of phase space that contribute only a small portion of the flux integrated cross sections.
Overall, a systematic uncertainty on the oscillated flux-averaged $\nu_e/\overline{\nu}_e$ cross section of $\sim 2$\% and $\sim4$\% is found for the simulated Hyper-K and ESS$\nu$SB experiments respectively.
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06 @desy.de
$J/\psi-$pair production at the LHC is currently the best tool available to probe gluon transverse momentum distributions (TMDs) which are very poorly known today. Data from LHCb at low transverse momentum are already available and more are expected soon from CMS and LHCb. Such data in the collider mode will allow one to probe the evolution of the unpolarised-gluon TMDs and to measure, for the first time, the distribution of the linearly polarised gluon in unpolarised protons. In addition, data in the fixed-target mode will give us some handle to measure the momentum-fraction dependence of the TMDs.
In this talk, I will present improved results obtained for the LHC in collider mode ($\sqrt{s}$=13 TeV) up to NLL accuracy in TMD factorisation, and first results for LHCb in the fixed-target mode ($\sqrt{s}$=115 GeV).
After showing a comparison with the existing LHCb data, I will discuss predictions of transverse-momentum distributions at different invariant masses that could be measured by LHCb and CMS in the collider mode. I will then present predictions for the azimuthal modulations of the cross section that arise from linearly polarised gluons.
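Schematically, and only as a reminder of the generic structure expected in TMD factorisation (with $F_0$, $F_2$, $F_4$ denoting coefficients that depend on the pair's transverse momentum and invariant mass; this is not a formula quoted from the talk), the azimuthal dependence generated by linearly polarised gluons takes the form:

```latex
% Schematic azimuthal structure of the di-J/psi cross section in TMD factorisation,
% where phi_CS is the Collins-Soper azimuthal angle; the cos(2 phi) and cos(4 phi)
% modulations are fed by the linearly polarised gluon distribution h_1^{perp g}.
\begin{equation}
  \frac{\mathrm{d}\sigma}{\mathrm{d}\phi_{\mathrm{CS}}}
  \;\propto\;
  F_{0}
  \;+\; F_{2}\,\cos 2\phi_{\mathrm{CS}}
  \;+\; F_{4}\,\cos 4\phi_{\mathrm{CS}} .
\end{equation}
```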
In this talk, I will discuss the impact of one-loop QCD corrections [1] to the differential distributions of $J/\psi$ and $\Upsilon$ mesons produced in inclusive $\gamma \gamma$ collisions for the kinematical conditions of LEP and future high-energy $e^+e^-$ facilities. Firstly, I will focus on the pure QED processes $\gamma + \gamma \to Q\bar{Q}(^3S^{[1]}_1) +\gamma$, which only receive virtual corrections. Then I will discuss QCD corrections to the single-resolved contribution, whose high-energy behavior is known to be perturbatively unstable [2]. I will finally discuss the former process as a contribution to the exclusive production of $J/\psi+\gamma$ in ultra-peripheral heavy-ion collisions at the LHC.
We calculate the total cross section and transverse momentum distributions for the production of the enigmatic $\chi_{c1}(3872)$ (or X(3872)) (see [1]) assuming different scenarios: $c \bar c$ state and $D^{0*} {\bar D}^0 + D^0 {\bar D}^{0*}$ molecule.
The derivative of the $c \bar c$ wave function needed in the first scenario is taken from potential-model $c \bar c$ calculations. Compared to earlier calculations for the molecular state, we include not only single parton scattering (SPS) but also double parton scattering (DPS) contributions. The latter gives a smaller contribution than the SPS one.
The upper limit for the DPS production of $\chi_{c1}(3872)$ is much below the CMS data. We compare the results of our calculations with existing experimental data from the CMS, ATLAS and LHCb collaborations. Reasonable cross sections can be obtained in either the $c \bar c$ or the molecular $D {\bar D}^*$ scenario for $X(3872)$, provided one takes into account both directly produced $D^0, \bar D^0$ and $D^0, \bar D^0$ from the decay of $D^*$. However, arguments related to the lifetime of the $D^*$ suggest that the latter component is not active. With these reservations, a hybrid scenario is also not excluded.
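For orientation, the standard DPS "pocket formula" underlying such estimates is sketched below; the cross-section values are purely illustrative placeholders and are not the inputs of the calculation.
# DPS "pocket formula" sketch with illustrative placeholder values (not the paper's inputs).
sigma_1   = 0.5      # mb, hypothetical cross section for producing a D0 in the fiducial region
sigma_2   = 0.3      # mb, hypothetical cross section for producing a D*0bar in the same region
sigma_eff = 20.0     # mb, typical effective DPS cross section from multi-parton-interaction studies
sigma_dps = sigma_1 * sigma_2 / sigma_eff   # mb, estimate for producing both in one event
print(f"sigma_DPS ~ {sigma_dps*1e3:.1f} ub")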
We propose to study the structure of the enigmatic $\chi_{c1}(3872)$ axial-vector meson through its $\gamma^* \gamma \chi_{c1}(3872)$ transition form factor (see [2]). We derive a light-front wave function representation of the form factor for the lowest $c \bar c$ Fock state. We find that the reduced width of the state is well within the current experimental bound recently published by the Belle collaboration. This strongly suggests a crucial role of the $c \bar c$ Fock state in the photon-induced production. Our results for the $Q^2$ dependence can be tested by future single-tagged $e^+e^-$ experiments, giving further insights into the short-distance structure of this meson.
[1] A. Cisek, W. Schaefer and A. Szczurek,
``Structure and production mechanism of the enigmatic $X(3872)$ in high-energy hadronic reactions'',
Eur. Phys. J. C 82 (2022) 1062.
[2] I. Babiarz, R. Pasechnik, W. Schaefer and A. Szczurek,
``Probing the structure of $\chi_{c1}(3872)$ with photon transition form factors'',
Phys. Rev. D 107 (2023) L071503, arXiv:2303.09175.
Recent CMS results on the production of open heavy-flavor hadrons and quarkonia in pp collisions are discussed. The measurements are performed with data collected in pp collisions at $\sqrt{s}$ = 13 TeV between 2016 and 2018.
The study of quarkonium production in proton-proton collisions involves both the perturbative and non-perturbative regimes of QCD, providing an excellent probe of quantum chromodynamics. Its mechanism is widely studied but not yet fully understood. The associated production of quarkonia is not only useful to probe the quarkonium production puzzle, but also helpful to reveal the double parton scattering process, which is of great interest to the community and still awaits further research both theoretically and experimentally. Associated quarkonium production is also considered an ideal way to probe the transverse-momentum-dependent parton distribution functions of gluons inside the proton, leading towards a more comprehensive knowledge of the proton structure. In this talk, the latest results on associated quarkonium production from LHCb will be presented.
Studying heavy-flavor mesons and baryons in hadronic collisions provides unique access to the properties of heavy-quark hadronisation in the presence of large partonic densities, where new mechanisms of hadron formation beyond in-vacuum fragmentation can emerge. It also tests calculations of perturbative QCD and explores the role of cold nuclear matter effects. Examining heavy-flavor production in different collision systems and event multiplicities reveals insights into multi-parton interactions and the emergence of strongly interacting media in high-multiplicity pp and p--Pb collisions. In particular, recent measurements in pp collisions showed an increase in the baryon-to-meson ratio with respect to $\mathrm{e^+e^-}$ collisions, suggesting that the fragmentation of charm is not universal across different collision systems. The measurements of charm baryons in pp and p--Pb collisions provide insights into the hadronisation mechanisms at much larger charged-particle densities. They also provide constraints on the effects on the particle spectra of heavy-flavour hadrons generated by the presence of collective effects.
In this talk, we will present a selection of the latest charm and beauty production measurements in pp and p--Pb collisions with ALICE, which can shed light on the modification of the heavy-quark hadronisation mechanisms. We will discuss more precise measurements of prompt and non-prompt D mesons in pp collisions at $\sqrt{s}$ = 13 TeV, enabling a quantitative comparison of the hadronisation properties of beauty and charm mesons. The latest measurements of prompt charm-strange baryon ($\mathrm{\Xi^{0}_{c}}$) and non-prompt charm baryon ($\mathrm{\Lambda^{+}_c}$) production and the corresponding baryon-to-meson ratios in hadronic collisions will be presented. New measurements of the $\Omega^0_\mathrm{c}$ from semileptonic decays and recent measurements of the resonant $\mathrm{D^+_{s1}}$ and $\mathrm{D_{s2}^{*+}}$ states will be discussed. Finally, the status and prospects for the reconstruction of charm mesons and baryons in LHC Run 3 data, using the upgraded ALICE apparatus, will be shown for the first time.
The exclusive photoproduction reactions γp→J/ψ(1S)p and γp→ψ(2S)p have been studied at an ep centre-of-mass energy of 318 GeV with the ZEUS detector at HERA using an integrated luminosity of 373 pb−1. The measurement has been made in the kinematic range 30 < W < 180 GeV, Q2 < 1 GeV2, |t| < 1 GeV2, where W is the photon--proton centre-of-mass energy, Q2 is the photon virtuality and t is the squared four-momentum transfer at the proton vertex. The decay channels used were J/ψ(1S)→μ+μ−, ψ(2S)→μ+μ− and ψ(2S)→J/ψ(1S)π+π− with subsequent decay J/ψ(1S)→μ+μ−. The ratio of the production cross sections R=σψ(2S)/σJ/ψ(1S) has been measured as a function of W and t and compared to previous data in photoproduction and deep inelastic scattering, and with predictions of QCD-inspired models of exclusive vector-meson production.
The focus of the session is on top-quark precision measurements and theory calculations.
Conveners:
Gauthier Durieux (CERN)
Abideh Jafari (DESY)
Narei Lorenzo Martinez (LAPP, Annecy)
Contact: eps23-conveners-t07 @desy.de
Duration: 15'+5'
The LHCb experiment covers the forward region of proton-proton collisions and can study W and Z bosons in a phase space complementary to ATLAS and CMS. Measurements of W and Z boson production cross sections in this region are unique and important tests of the Standard Model.
Thanks to the excellent detector performance, fundamental parameters of the Standard Model can be precisely measured by studying the properties of the electroweak bosons.
Moreover, the collected W and Z boson events, measured with or without the associated production of hadronic jets, can be used to probe the proton structure in a phase-space region not accessible by other LHC experiments.
In this talk, an overview of the wide LHCb measurement program with electroweak bosons will be presented, and prospects with future detector upgrades will be discussed.
Duration: 15'+5'
This paper presents a comprehensive study of currently available measurements of the mass of the W boson. The study uses results from the hadron colliders Tevatron and LHC, performed by the CDF, D0, ATLAS and LHCb experiments, and includes the combined result from LEP-2. As the measurements were performed at different moments in time, different assumptions for the modelling of W-boson production and decay were employed, as well as different fits of the parton distribution functions of the proton (PDFs). The measurements are corrected to a common modelling reference and to the same PDFs, and subsequently combined accounting for PDF correlations in a quantitative way. The compatibility of the results is discussed and a set of combinations is presented.
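As a minimal sketch of the kind of correlated combination described above (BLUE-style weights), the snippet below uses illustrative placeholder masses, uncertainties and correlation, not the actual inputs of the combination.
import numpy as np

# Correlated least-squares (BLUE) combination of two mW measurements; numbers are illustrative only.
m     = np.array([80.370, 80.430])         # GeV, two hypothetical measurements
sigma = np.array([0.019, 0.016])           # GeV, their total uncertainties
rho   = 0.3                                # assumed correlation (e.g. common PDF uncertainty)
V = np.array([[sigma[0]**2,           rho*sigma[0]*sigma[1]],
              [rho*sigma[0]*sigma[1], sigma[1]**2          ]])
Vinv = np.linalg.inv(V)
w = Vinv.sum(axis=1) / Vinv.sum()          # BLUE weights (sum to 1)
m_comb   = w @ m
err_comb = np.sqrt(1.0 / Vinv.sum())
print(f"combined mW = {m_comb:.3f} +- {err_comb:.3f} GeV")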
Duration: 15'+5'
Measurements of neutral current Drell-Yan production at large invariant dilepton masses can be used to test the energy scale dependence (running) of the electroweak mixing angle.
In this work, we make use of a novel implementation of the full next-to-leading order electroweak radiative corrections to the Drell-Yan process using the $\overline{\mathrm{MS}}$ renormalization scheme for the electroweak mixing angle. The potential of future analyses using proton-proton collisions at $\sqrt{s}=13.6$ TeV in the Run 3 and High-Luminosity phases of the LHC is explored. In this way, the Standard Model predictions for the $\overline{\mathrm{MS}}$ running at TeV scales can be probed.
Duration: 15'+5'
In this talk we will review the most recent global analysis of electroweak data, in the Standard Model and beyond, as obtained in the HEPfit framework. Based on arXiv:2204.04204, this analysis includes the most recent measurements of the W-boson mass (CDF and ATLAS) and of the top-quark mass (CMS). Moreover, we will present preliminary results of a global fit of the SMEFT that extends the set of observables considered to also include Higgs-boson and top-quark observables, as well as several improvements in the treatment of the SMEFT formalism within HEPfit.
Duration: 15'+5'
We present results from the global electroweak fit to precision measurements of the Standard Model (SM). The fit uses the latest experimental results as well as up-to-date theoretical calculations for observables on the Z pole and the W boson mass, yielding precise SM predictions for the effective weak mixing angle and the masses of the W and Higgs bosons, as well as the top quark. We report constraints on coefficients of the SM effective field theory (SMEFT), obtained from electroweak precision data. We present correlations between the SMEFT coefficients, evaluated at next-to-leading order for the precision observables entering the fit, and the free parameters of the SM.
Conveners:
Thomas Blake (University of Warwick)
Marzia Bordone (CERN)
Thibaud Humair (MPP)
Contact: eps23-conveners-t08 @desy.de
We report the measurement of the partial branching fraction of $B \to X_u \ell \bar \nu_\ell$ relative to $B \to X_c \ell \bar \nu_\ell$ using the complete Belle data set of 711 fb${}^{-1}$. We reconstruct collision events in which one B meson is fully reconstructed in hadronic modes using the full-event-interpretation algorithm developed for Belle II. This allows for the reconstruction of the hadronic $X_u$ system associated with the semileptonic $b \to u \ell \bar\nu_\ell$ decay. We further perform the first simultaneous determination of the absolute value of the CKM matrix element $V_{ub}$ by leveraging both inclusive and exclusive decays with the complete Belle data set using a similar technique. To disentangle exclusive $B \to \pi \ell \bar \nu_\ell$ decays from other inclusive $B \to X_u \ell \bar \nu_\ell$ events and backgrounds, we employ a two-dimensional fit that utilizes the charged pion multiplicity in the $X_u$ system and the four-momentum transfer $q^2$ between the $B$ and the $X_u$ system. Both results provide new insights into the tension between the inclusive and exclusive determinations of $V_{ub}$.
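The two-dimensional fit can be illustrated with a toy binned template fit in (pion multiplicity, $q^2$); the templates and yields below are invented for illustration and do not correspond to the actual analysis.
import numpy as np

# Toy 2D template fit in (charged-pion multiplicity, q^2); all templates and yields are invented.
rng = np.random.default_rng(1)
n_mult, n_q2 = 4, 5
sig = rng.random((n_mult, n_q2)); sig /= sig.sum()   # normalised signal-like template
bkg = rng.random((n_mult, n_q2)); bkg /= bkg.sum()   # normalised background-like template
true_yields = np.array([1200.0, 5400.0])
data = rng.poisson(true_yields[0]*sig + true_yields[1]*bkg)
A = np.stack([sig.ravel(), bkg.ravel()], axis=1)     # flatten the 2D bins into one axis
fitted, *_ = np.linalg.lstsq(A, data.ravel(), rcond=None)
print(fitted)                                        # yields close to [1200, 5400]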
Semileptonic $b$-hadron decays proceed via charged-current interactions and provide powerful probes for testing the Standard Model and for searching for New Physics effects. The advantages of studying such decays include the large branching fractions and reliable calculations of the hadronic matrix elements. In this contribution, LHCb measurements of CKM parameters and tests of new physics will be presented.
We present a new measurement of the lepton-flavor-universality ratios $R(D^{(*)})$ utilizing the entire Belle data set, which corresponds to an integrated luminosity of 711 fb${}^{-1}$. The analysis employs hadronic tag-side reconstruction, leveraging the capabilities of the full-event-interpretation algorithm developed for Belle II. This results in a significant efficiency improvement of approximately a factor of two with respect to the previously utilized tag-side reconstruction method. The analysis relies on reconstructing the missing mass squared of the neutrinos originating from leptonic tau decays ($\tau \to \ell \bar \nu_\ell \, \nu_\tau$), as well as the unassigned neutral energy registered in the Belle calorimeter. By exploiting these two observables, we separate $B \to D^{(*)} \tau \bar \nu_\tau$ decays from background processes and from the normalization mode $B \to D^{(*)} \ell \bar \nu_\ell$ with $\ell = e, \mu$ using a two-dimensional fit. One of the key backgrounds of this analysis is decays involving higher charm resonances ($D^{**}$), whose branching fractions and decay dynamics are poorly known. The talk also covers related measurements from Belle, such as the first observation of the $B \to D_1 \ell \bar \nu_\ell$ decay with $D_1 \to D \pi \pi$ and new measurements of $B \to D^{(*)} \pi \ell \bar \nu_\ell$ and $B \to D^{(*)} \pi \pi \ell \bar \nu_\ell$ branching fractions. These results provide further insights into the difference between the inclusive $B \to X_c \ell \bar \nu_\ell$ branching fraction and the sum of exclusive contributions from $D$, $D^*$ and $D^{**}$ states.
The observed rate of semitauonic $B$ decays has been consistently above expectations since these decays were first measured. Significant differences between the forward-backward asymmetry in $B \to D^*e \nu$ and $B \to D^* \mu \nu$ have also been reported. Belle II data are well suited to probe these anomalies. The low-background collision environment, along with the possibility of partially or fully reconstructing one of the two $B$ mesons in the event, offers high precision. This talk presents recent Belle II results on lepton-flavor universality tests.
In the last ~10 years, several discrepancies between flavor observables and SM predictions, at the level of several standard deviations, have been found by the Belle, BaBar, and LHCb collaborations. In this talk, results from the CMS experiment on these flavor anomalies are presented. The results are based on 13 TeV pp collision data collected during 2016-2018.
Conveners:
Ilaria Brivio (Universita di Bologna)
Karsten Köneke (Universität Freiburg)
Matthias Schröder (Universität Hamburg)
Contact: eps23-conveners-t09 @desy.de
The investigation of the trilinear self-coupling of the discovered Higgs boson is one of the main goals of particle physics in the near future.
We provide predictions for this coupling, expressed in terms of the coupling modifier $\kappa_\lambda$, incorporating one-loop corrections within arbitrary renormalizable QFTs.
The presented framework (implemented in the public code ${\tt anyH3}$) allows a wide class of pre- and user-defined renormalization conditions to be applied, while the calculation of all required one-, two- and three-point functions is performed in an automated way.
In this talk I motivate precision predictions for $\kappa_\lambda$ in the context of di-Higgs production and the need for their automation.
The basic ingredients of a generic $\kappa_\lambda$ calculation at the one-loop order as well as the features of the resulting computer program and applications to specific models are discussed.
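For reference, the coupling modifier discussed here is conventionally defined relative to the SM tree-level trilinear coupling (with $v \simeq 246$ GeV):
\[
\kappa_\lambda \;\equiv\; \frac{\lambda_{hhh}}{\lambda_{hhh}^{\rm SM}},
\qquad
\lambda_{hhh}^{\rm SM,\,tree} = \frac{3\, m_h^2}{v},
\]
so that one-loop corrections in a given model enter through the predicted $\lambda_{hhh}$ in the numerator.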
We consider the next-to-leading-order electroweak corrections to Higgs boson pair production in gluon fusion. This requires the computation of two-loop four-point amplitudes with massive internal particles such as top quarks, Higgs and gauge bosons. We perform analytic calculations both in the high-energy and in the large top-quark mass limits. In particular, we show that our high-energy expansion can yield precise results even for $p_T$ above approximately 120 GeV. The technical challenges are described and results for the form factors are presented.
Gaining information about the shape of the Higgs potential is one of the main goals of particle physics for the coming years. The trilinear Higgs self-interaction can be directly probed via Higgs pair production, which at the LHC happens dominantly through gluon fusion. In models with extended Higgs sectors, the trilinear self-coupling of the detected Higgs boson can deviate largely from the prediction of the Standard Model (SM), even in scenarios where all couplings to gauge bosons and fermions are very close to their SM values. Furthermore, triple Higgs couplings involving additional Higgs bosons can have an important impact on the pair production process of the SM-like Higgs boson. We study the phenomenological implications for Higgs pair production in the framework of the Two-Higgs-Doublet Model (THDM), which predicts five physical Higgs bosons. We analyze the potential sensitivity to both the SM-like trilinear Higgs self-coupling and the BSM triple Higgs coupling involving a resonantly produced CP-even Higgs boson. In particular, we focus on the theoretical prediction of two observable quantities: the total Higgs pair production cross section and the invariant mass distribution of the two Higgs bosons in the final state. We show that the inclusion of loop corrections to the trilinear Higgs couplings is crucial in this context. Finally, we discuss the applicability of resonant and non-resonant Higgs pair production experimental limits for testing the predictions of extended Higgs sectors.
In the Standard Model, the ground state of the Higgs field is not found at zero but instead corresponds to one of the degenerate solutions minimising the Higgs potential. In turn, this spontaneous electroweak symmetry breaking provides a mechanism for the mass generation of nearly all fundamental particles. The Standard Model makes a definite prediction for the Higgs boson self-coupling and thereby the shape of the Higgs potential. Experimentally, both can be probed through the production of Higgs boson pairs (HH), a rare process that presently receives a lot of attention at the LHC. In this talk, the latest HH searches by the ATLAS experiment are reported, with emphasis on the results obtained with the full LHC Run 2 dataset at 13 TeV. Non-resonant HH search results are interpreted both in terms of sensitivity to the Standard Model and as limits on the Higgs boson self-coupling and the quartic VVHH coupling. Further, HH searches can be exploited to put constraints on the Wilson coefficients of Effective Field Theories. The Higgs boson self-coupling can also be constrained by exploiting higher-order electroweak corrections to single Higgs boson production. A combined measurement of both results yields the overall highest precision, and reduces model dependence by allowing for the simultaneous determination of the single Higgs boson couplings. Results for this combined measurement are also presented. Finally, extrapolations of recent HH results towards the High-Luminosity LHC upgrade are also discussed.
The Higgs boson is one of the main interests of the particle physics community. To strive for its full characterization, a new era of precise measurements has begun and will continue for the next two to five decades, as foreseen by recent updates of the European Strategy for Particle Physics. Of utmost importance is the exploration of the particle’s interaction with itself. In this talk, the latest status of CMS measurements of the Higgs boson self-coupling will be presented, covering both direct Higgs boson pair searches and indirect single-Higgs-boson interpretations, as well as the EFT interpretations of these results. Moreover, the latest projections of the same measurements for Run 3 and the HL-LHC will be discussed.
Experimental information on the trilinear Higgs boson self-coupling $\kappa_3$ and the quartic self-coupling $\kappa_4$ will be crucial for gaining insight into the shape of the Higgs potential and the nature of the electroweak phase transition. While Higgs pair production processes provide access to $\kappa_3$, triple Higgs production processes, despite their small cross sections, will provide valuable complementary information on $\kappa_3$ and first experimental constraints on $\kappa_4$. In this work, we consider triple Higgs production at the HL-LHC, employing efficient Graph Neural Network methodologies to maximise the statistical yield. We show that it will be possible to establish bounds on the variation of both couplings from the HL-LHC analyses that significantly go beyond the constraints from perturbative unitarity. We also discuss the prospects for the analysis of triple Higgs production at future high-energy lepton colliders operating at the TeV scale.
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12 @desy.de
Recently, a concept for a Hybrid Asymmetric Linear Higgs Factory (HALHF) has been proposed, where a center-of-mass energy of 250 GeV is reached by colliding a plasma-wakefield accelerated electron beam of 500 GeV with a conventionally accelerated positron beam of about 30 GeV. While clearly facing R&D challenges, this concept bears the potential to be significantly cheaper than any other proposed Higgs Factory, comparable in cost e.g. to the EIC. The asymmetric design changes the requirements on the detector at such a facility, which needs to be adapted to forward-boosted event topologies as well as different distributions of beam-beam backgrounds. This contribution will give a first assessment of the impact of the accelerator design on the physics prospects in terms of some flagship measurements of Higgs factories, and how a detector would need to be adjusted from a typical symmetric Higgs factory design.
The Forward Calorimeter (FoCal) in ALICE, which is planned to take data in Run 4, covers a pseudorapidity interval of 3.4 < $\eta$ < 5.8 for probing non-linear QCD dynamics in an unexplored kinematic region at the LHC. In its electromagnetic section, layers of high-granularity monolithic Si pixel sensors are alternated with Si pad layers to sample the longitudinal development of electromagnetic showers, a design that allows for the reconstruction of neutral mesons with high efficiency. Its hadronic section is made of towers constructed by grouping copper capillary tubes filled with scintillating fibres, read out by SiPMs. During 2021 and 2022, various ever-improving prototypes of the calorimeter were installed at the CERN test-beam facilities to evaluate their performance and compare it to simulations.
In the talk, we report on the most recent results of these campaigns and outline the impact on the design of the detector.
The upgraded CERN LHC for high luminosity (HL-LHC) will deliver unprecedented instantaneous luminosities to the detectors which, together with an average of up to 200 simultaneous interactions per bunch crossing, require major upgrades of the CMS electromagnetic calorimeter (ECAL). While a new detector will be installed in the endcap regions, the ECAL barrel's lead tungstate crystals and photodetectors are expected to sustain the new conditions. However, a completely new and upgraded readout and trigger electronics system will need to be installed to cope with the challenging HL-LHC conditions. Each of the 61,200 ECAL barrel crystals will be read out by two custom ASICs providing signal amplification with two gains, an ADC with a 160 MHz sampling rate, and lossless data compression for the transmission of all channel data to the off-detector electronics. Trigger primitive generation with updated reconstruction algorithms, as well as data acquisition, will be implemented on powerful FPGA processor boards. The upgrade of the ECAL electronics will make it possible to maintain the excellent energy resolution of the detector and, in addition, will greatly improve the time resolution for electrons and photons above 10 GeV, down to a few tens of picoseconds. This talk presents the design and status of the individual components of the upgraded ECAL barrel detector, and results of energy and time resolution measurements with a full readout-chain prototype system in recent test-beam campaigns at the CERN SPS.
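The dual-gain readout principle can be sketched as follows; the gain factors, ADC range and units below are hypothetical and do not describe the actual ASIC.
# Illustrative dual-gain digitisation; gain values and ADC range are hypothetical.
ADC_MAX = 4095                 # assumed 12-bit ADC code range
GAINS = (10.0, 1.0)            # prefer high gain, fall back to low gain on saturation

def digitise(amplitude_mv):
    """Return (ADC code, gain used) for a given pulse amplitude in mV."""
    for gain in GAINS:
        code = int(round(amplitude_mv * gain))
        if code <= ADC_MAX:
            return code, gain
    return ADC_MAX, GAINS[-1]  # saturated even in the lowest gain

def reconstruct(code, gain):
    """Invert the digitisation to recover the amplitude in mV."""
    return code / gain

for amp in (25.0, 700.0):
    code, gain = digitise(amp)
    print(amp, code, gain, reconstruct(code, gain))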
A new era of hadron collisions will start around 2029 with the High-Luminosity LHC, which will allow ten times more data to be collected than during the previous 10 years of LHC operation. This will be achieved through a higher instantaneous luminosity, at the price of a higher number of collisions per bunch crossing. In order to withstand the high expected radiation doses and the harsher data-taking conditions, the ATLAS Liquid Argon Calorimeter readout electronics will be upgraded. The electronic readout chain is composed of four main components.
1) New front-end boards will allow the calorimeter’s ionisation signal to be amplified, shaped, and digitised on two gains over a dynamic range of 16 bits with 11-bit precision. Noise below the Minimum Ionising Particle (MIP) level, i.e. below 120 nA for a 45 ns peaking time, and a maximum non-linearity of two per mille are required. Custom preamplifiers and shapers are being developed to meet these requirements using 65 nm and 130 nm CMOS technologies. They shall remain stable under irradiation up to 1.4 kGy (TID) and 4.1x10^13 n_eq/cm^2 (NIEL). Two competing preamp-shaper ASICs were developed, and the better one, “ALFE”, has been chosen. The test results of the latest version of this ASIC will be presented. A new ADC chip, “COLUTA”, is also being designed. A production test setup is being prepared, and integration tests of the different components (including lpGBT links developed by CERN) on a 32-channel front-end board are ongoing; results of this integration will be shown.
2) New calibration boards will allow the precise calibration of all 182468 channels of the calorimeter over a 16-bit dynamic range. A non-linearity of one per mille and a non-uniformity between channels of 0.25%, with a pulse rise time smaller than 1 ns, shall be achieved. In addition, the custom calibration ASICs shall be stable under irradiation at the same levels as the preamp-shaper and ADC chips. The HV SOI CMOS XFAB 180 nm technology is used for the pulser ASIC, “CLAROC”, while the TSMC 130 nm technology is used for the DAC part, “LADOC”. The latest versions of these two ASICs, which recently passed the production readiness review, will be presented together with their respective performance.
3) New ATCA-compliant signal-processing boards (“LASP”) will receive the detector data at 40 MHz, where FPGAs connected through lpGBT high-speed links will perform energy and time reconstruction. In total, the off-detector electronics receive 345 Tbps of data via 33000 links at 10 Gbps. For the first time, online machine-learning techniques are being considered for use in these FPGAs. A subset of the original data is sent with low latency to the hardware trigger system, while the full data are buffered until the reception of trigger-accept signals. The latest development status of the board as well as of the firmware will be shown.
4) A new timing and control system, “LATS”, will synchronise the aforementioned components. Its current design status will also be shown.
The Mu2e experiment at Fermilab will search for the charged-lepton flavour-violating conversion of negative muons into electrons in the Coulomb field of an Al nucleus, aiming to reach a single-event sensitivity of about 3x10^{-17}, four orders of magnitude beyond the current best limit.
The conversion electron has a clear monoenergetic signature at 104.967 MeV, slightly below the muon rest mass, and will be identified by a complementary measurement carried out by a high-resolution tracker and an electromagnetic calorimeter (EMC). The calorimeter is composed of 1348 pure CsI crystals, each read out by two custom UV-extended SiPMs, arranged in two annular disks. The EMC should achieve better than 10% energy resolution and 500 ps timing resolution for 100 MeV electrons, and maintain extremely high levels of reliability and stability in a harsh operating environment with high vacuum, a 1 T magnetic field and radiation exposures up to 100 krad and 10^{12} n_{1MeVeq}/cm^{2}.
The calorimeter technological choice, along with the design of the custom front-end electronics, cooling and mechanical systems were validated through an electron beam test on a large-scale 51-crystals prototype (Module-0) and extensive test campaigns that characterised and verified the performance of crystals, photodetectors, analogue and digital electronics. This included hardware stress tests and irradiation campaigns with neutrons, protons, and photons.
The production and QC phases of all calorimeter components are completed, apart from the digital electronics, which has been revised to improve its resilience to SEU and SEL. A series of vertical slice tests with the final electronics is being carried out on the Module-0 at LNF, along with the implementation and validation of the calibration procedures. The first disk was assembled in 2022, while the second disk is under assembly at the time of writing. The status of construction will be summarised, along with plans for commissioning and first calibration of the fully assembled disks.
Progress in high-energy physics has been closely tied to the development of high-performance electromagnetic calorimeters. Recent experiments have demonstrated the possibility of significantly accelerating the electromagnetic shower development inside the scintillating crystals typically used to build homogeneous calorimeters when the incident beam is aligned with a crystallographic axis. In particular, an effective reduction of the radiation length was measured when an ultrarelativistic electron or photon beam is aligned with one of the main axes of a high-Z scintillator crystal.
Here we propose a new type of ultra-compact, ultra-fast e.m. calorimeter based on oriented ultrafast PWO (PWO-UF) crystals. An oriented-crystal calorimeter will open the way to applications at the maximum energies achievable in current and future experiments. Such applications range from forward calorimeters and compact beam dumps for the search for light dark matter to source-pointing space-borne γ-ray telescopes, in all cases decreasing the size and the cost of the calorimeter needed to fully contain e.m. showers initiated by GeV to TeV particles.
Conveners:
Tatiana Pieloni (École Polytechnique Fédérale de Lausanne)
Mauro Migliorati (Universita di Roma)
Marc Wenskat (Universität Hamburg)
Contact: eps23-conveners-t13 @desy.de
The High-Luminosity LHC project aims to increase the integrated luminosity of the LHC by an order of magnitude and enable its operation until the early 2040s. This presentation will provide an overview of the current status of the HL-LHC project. By mid-2023, several achievements related to the HL-LHC can be reported, ranging from the completion of the civil engineering to the successful demonstration of the new Nb3Sn magnet technology for the triplet magnets.
By the end of the nominal LHC operation period (end of 2025), radiation dose levels in the focusing triplet quadrupole magnets next to the main experiments are expected to exceed 10 MGy for integrated luminosities above 300 fb-1. Such radiation levels cause most epoxy and insulation materials used for magnet coil construction to become brittle and lose their mechanical strength, and are expected to result in the loss of mechanical integrity or degradation of the electrical insulation system for integrated luminosities between 400 fb-1 and 500 fb-1. Preparing the LHC machine for the nominal integrated-luminosity target of 3000 fb-1 therefore requires not only replacing the current triplet magnets with new, more radiation-tolerant magnets with larger apertures, but also additional technology developments in several areas. The HL-LHC project is thus not only an upgrade of the LHC machine, but also a technology driver that develops technologies that will enable future accelerator projects such as the FCC and EIC.
The uncertainties affecting the integrated absolute luminosity recorded by the experimental detectors in pp collisions during LHC Run 2 lie in the 1-2% range. They typically fall into three categories: van-der-Meer (vdM) calibration biases, instrumental non-linearities that affect the transfer of the vdM calibration to the high-luminosity physics regime, and long-term stability of the luminometers. In a recent update by the ATLAS Collaboration, the vdM-calibration uncertainty in pp collisions at sqrt(s) = 13 TeV, which slightly dominates the other two categories, reached an absolute accuracy better than 1%, a performance unmatched at any hadron collider since the CERN ISR. Controlling systematic uncertainties to such a level requires an in-depth understanding of multiple beam-instrumentation or accelerator-physics effects. The most challenging problems arise (i) from non-linear correlations in the transverse-density distributions of the colliding bunches (also known as non-factorization), and (ii) from separation-dependent, beam-beam-induced distortions of the beam orbits and of the bunch shapes. A broad overview of the above issues and of their mitigation will be illustrated by studies selected from recently released luminosity-performance results.
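For orientation, in its simplest factorisable form the vdM-calibrated luminosity for colliding bunch pairs is
\[
\mathcal{L} \;=\; \frac{f_{\rm rev}\, n_b\, N_1 N_2}{2\pi\, \Sigma_x \Sigma_y},
\]
where $f_{\rm rev}$ is the revolution frequency, $n_b$ the number of colliding bunch pairs, $N_{1,2}$ the bunch populations, and $\Sigma_{x,y}$ the convolved beam sizes extracted from the horizontal and vertical scan curves; the non-factorization and beam-beam effects discussed above enter as corrections to this expression.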
The beam–beam interaction between the two circulating beams has been studied since the era of particle colliders started. This electromagnetic interaction occurs during collisions and can result in a significant bias on the measured luminosity. Numerical models have been developed to study the beam-beam-induced bias on the Large Hadron Collider (LHC) luminosity measurements during van der Meer scans. They were further extended to reproduce the nominal operation configuration and to study biases in more demanding conditions, with beam-train-dependent structures and the extreme beam and machine parameters at the interaction points. In this report we compare results from a dedicated experiment performed at the LHC to those obtained with the numerical model. In addition, preliminary observations of the beam-beam impact on luminosity in physics operation are also discussed. The final aim of this study is to quantify the beam-beam bias to luminosity-based observables in hadron colliders, first focusing on obtaining a precise luminosity calibration and then investigating an independent way to measure the detector-specific non-linearities and overall stability during the nominal data-taking period.
Different scenarios for fixed-target physics research are being considered at the Large Hadron Collider (LHC) as part of the Physics Beyond Colliders (PBC) study at CERN. In the so-called double-crystal setup, a first bent crystal channels a fraction of the LHC multi-turn halo and steers it onto an in-vacuum target. This is followed by a second crystal with a bending angle of several mrad. This allows measuring the precession of polarized particles produced by the interaction between the 7 TeV protons and the target nuclei, using a vertex detector and a spectrometer. This implementation offers interesting prospects to measure electric and magnetic dipole moments of short-lived charmed baryons, like the $\Lambda_c^+$, that decay over too short distances to observe precession with conventional magnets. The complexity of this setup and the deployment of this technique with the high-intensity beams at the LHC are challenging. Therefore, a proof-of-principle layout has been conceived to address experimentally the key challenges of this proposal, as input to a future experiment. This contribution presents the challenges, status and plans for an imminent deployment at the LHC.
A circular muon collider, thanks to the strong suppression of synchrotron radiation (muons are about 207 times heavier than electrons), can provide high-energy (multi-TeV) collisions of fundamental (point-like) particles, thereby extending the energy frontier of lepton-antilepton machines. A unique feature of muon colliders is the significant increase of the luminosity per beam power with beam energy. Muon colliders have been studied in the past by several initiatives. Recently, a new International Muon Collider Collaboration (IMCC) was formed and is working mainly on the development of a 10 TeV center-of-mass-energy muon collider complex. Such a machine is expected to require less power than CLIC at 3 TeV and to provide a physics reach similar to the 100 TeV proton version of the FCC with a more compact collider ring. A muon accelerator complex is expected to be a cost-effective facility, provided numerous technical challenges can be overcome. These challenges are mainly driven by the short lifetime of muons; among the crucial ones are the need for rapid cooling and acceleration of the beams with power- and cost-efficient solutions, the use of high-field magnets, the minimisation of the beam-induced background, and keeping the maximum radiation doses for people due to neutrinos reaching the Earth's surface at negligible levels. The progress of the main studies on the 10 TeV muon accelerator complex will be presented on behalf of the IMCC.
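The severity of the muon-lifetime challenge can be illustrated with a short back-of-the-envelope calculation; the beam energy of 5 TeV per beam (for a 10 TeV centre-of-mass collider) is chosen here only as an example.
# Lab-frame lifetime and decay length of a 5 TeV muon (illustrative back-of-the-envelope numbers).
m_mu  = 0.1056583755      # GeV, muon mass
tau_0 = 2.1969811e-6      # s, muon proper lifetime
c     = 2.99792458e8      # m/s
E     = 5000.0            # GeV, energy per beam for a 10 TeV centre-of-mass collider
gamma = E / m_mu
tau_lab = gamma * tau_0                 # time-dilated lifetime in the lab frame
decay_length = c * gamma * tau_0        # mean decay length
print(f"gamma ~ {gamma:.0f}, lab lifetime ~ {tau_lab*1e3:.0f} ms, decay length ~ {decay_length/1e3:.0f} km")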
Multi-TeV muon collisions are one of the most promising means to perform Standard Model high-precision physics measurements and to search for new physics. The design of the interaction region, and therefore of the Machine-Detector Interface, depends on the center-of-mass energy; dedicated studies and optimizations are therefore needed. In order to achieve the desired luminosity, high beam intensities, of the order of 10^12 muons per bunch, are needed. This generates a high flux of secondary and tertiary particles that reach both the machine elements and the detector region. The current strategy adopted by the International Muon Collider Collaboration to manage the flux of these particles is based on dedicated absorbers along the beamline and in the interaction region, as was done in the past by the Muon Accelerator Program in the US.
This contribution will describe the interaction region configuration at two center-of-mass energies, 3 and 10 TeV, focusing on the absorber design. The characteristics of the beam-induced background after the introduction of the absorbers, and its main effects on the detector performance, will also be presented.
The costs of operating big science facilities, for example accelerators, are very large, and depend critically on the price primarily of electricity, but also of water and other utilities. This means that facilities must be energy and resource efficient. Facilities should at the same time also be environmentally sustainable. Finally, in these times of very high energy prices, all efforts must be made to keep the cost of operations at a reasonable level.
In fact, these targets pull in the same direction.
Efficient use includes not only the actual electrical efficiency of the existing equipment; it also means optimizing the whole facility for the use of all energy. This will also help limit the operations cost.
At ESS, a goal was to incorporate sustainability elements already at the design stage [1, 2]. ESS today sells its high-temperature cooling water to provide the local heating grid with energy, and provisions exist to also use lower-temperature water for heating purposes. The contribution will exemplify the trade-offs and considerations that were made, and what could have been implemented in a better way.
A lot more can be done, and given the huge electrical power use, almost any measure to save power will pay off quickly. The contribution will go over some of the possibilities and combinations that exist: energy storage, solar panels, novel DC/DC converters to power equipment directly from solar panels, bio-fueled gas turbines, and energy brokerage (buying electric power on term contracts to limit market exposure).
ESS is also an active participant in various EU-programs that target sustainability, energy innovation, and flexible use of power [3, 4]. Many accelerator sites have large areas where for example solar panels or energy storage facilities can be installed. This kind of sustainability is part of the future for ESS.
References
1. T. Parker (ed), ESS Energy Design Report, ESS-0001761 (2013)
2. S. Peggs (ed), ESS Technical Design Report, ESS-2013-001 (2013)
3. iFAST, Horizon 2020 proposal No 101004730 (2020)
4. FlexRICAN, Horizon 2023 proposal No 101131516 (2023)
Conveners:
Maciej Bilicki (Center for Theoretical Physics, Warsaw)
Hyungjin Kim (DESY)
Fabrizio Rompineve (CERN)
Contact: eps23-conveners-t02 @desy.de
Monopoles are inevitable predictions of grand unified theories. They are produced during phase transitions in the early universe, but mechanisms like the Schwinger effect in strong magnetic fields could also give relevant contributions to the monopole number density. I will show that from the detection of intergalactic magnetic fields of primordial origin we can infer additional bounds on the magnetic monopole flux. I will also discuss the implications of these bounds for minicharged monopoles, for magnetic black holes and for monopole pair production in primordial magnetic fields.
We investigate the imprints of new long-range forces mediated by a new light scalar acting solely on dark matter. Dark fifth forces in general will modify the background evolution as well as the growth of density fluctuations. At the linear level, constraints are derived from the CMB together with a full-shape analysis of the power spectrum as measured by BOSS. At the non-linear level, the presence of fifth forces induces violations of the equivalence principle in cosmological correlators. This is encoded in the breaking of consistency relations at tree level for the bispectrum, which could be directly tested with future galaxy surveys. Combining this information with the full-shape power spectrum at one loop leads to an unprecedented sensitivity to dark fifth forces.
Q-balls are non-topological solitons that coherently rotate in field space. We show that these coherent rotations can induce superradiance for scattering waves, thanks to the fact that the scattering involves two coupled modes. Despite the conservation of the particle number in the scattering, the mismatch between the frequencies of the two modes allows for the enhancement of the energy and angular momentum of incident waves. When the Q-ball spins in real space, additional rotational superradiance is also possible, which can further boost the enhancements. We identify the criteria for the energy and angular momentum superradiance to occur.
The Two-Higgs-Doublet-Standard Model-Axion-Seesaw-Higgs-Portal inflation (2hdSMASH) model, consisting of two Higgs doublets, a Standard Model (SM) singlet complex scalar and three SM-singlet right-handed neutrinos, can accommodate axion dark matter and neutrino masses, and can address inflation. We report on an investigation of the inflationary aspects of 2hdSMASH and its subsequent impact on low-energy phenomenology. In particular, we identify inflationary directions for which the parameter values required for successful inflation do not violate perturbative unitarity and boundedness-from-below conditions. By analyzing the renormalization-group flow of the parameters we identify the necessary and sufficient constraints for running all parameters perturbatively and maintaining stability from the electroweak to the Planck scale. We determine typical benchmark points satisfying theoretical and experimental constraints which can potentially be probed by future colliders.
In several models of beyond-the-Standard-Model physics, discrete symmetries play an important role. For instance, in order to avoid flavor-changing neutral currents, a discrete Z2 symmetry is imposed on Two-Higgs-Doublet Models (2HDMs). This can lead to the formation of domain walls as the Z2 symmetry gets spontaneously broken during electroweak symmetry breaking in the early universe.
Due to this simultaneous spontaneous breaking of both the discrete symmetry and the electroweak symmetry, the vacuum manifold has the structure of two disconnected 3-spheres, and the resulting domain walls can exhibit a variety of special effects in contrast to standard domain walls. In this talk I will focus on some of these effects, such as CP- and/or charge-violating vacua localized inside the domain walls.
I will also discuss the scattering of Standard Model fermions off such domain walls, where, for example, a top quark can be transmitted through or reflected off the wall as a bottom quark.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Ilaria Brivio (Universita di Bologna)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Karsten Köneke (Universität Freiburg)
Matthias Schröder (Universität Hamburg)
Contact: eps23-conveners-t09 @desy.de or eps23-conveners-t10 @desy.de
In this study, we investigate the constraints imposed on the Doublet Left-Right Symmetric Model (DLRSM) by the latest experimental data on the Higgs boson. While most previous studies have assumed small values for the ratios $r=\kappa_2/\kappa_1$ and $w=v_L/\kappa_1$, we consider the most general scalar potential and explore the constraints on $r$ and $w$. Through our analysis, we calculate the masses of the CP-even scalars and their couplings to W and Z gauge bosons and third generation quarks. Our results show that there is no lower bound on either $r$ or $w$, but equating the mass of the lightest CP-even scalar to 125 GeV yields an upper limit of $w$ < 6.7. Additionally, we find that the perturbativity of the Yukawa coupling of the quarks to the Higgs bidoublet sets the upper bounds $r$ < 0.8 and $w$ < 3.5. Our analysis of the Yukawa coupling of the bottom quark to the lightest CP-even scalar strongly disfavors values of $r$ and $w$ < 0.1 and indicates a preference for values of $w$ ~ O(1). Our findings provide important insights into the validity of the DLRSM in the current theoretical and experimental framework.
The CMS collaboration has recently reported the final Run 2 results of the low-mass Higgs-boson search in the di-photon channel. The new results show an excess of events at a mass of about 95 GeV with a local significance of $2.9\sigma$, confirming a previously reported excess at about the same mass and similar significance based on the first-year Run 2 plus Run 1 data. In this work, we discuss the diphoton excess and show that it can be interpreted as the lightest Higgs boson in the Two-Higgs doublet model that is extended by a complex singlet (S2HDM) of Yukawa types II and IV. We show that the second-lightest Higgs boson is in good agreement with the current LHC Higgs-boson measurements of the state at 125 GeV, and that the full scalar sector is compatible with all theoretical and experimental constraints. Furthermore, we discuss the diphoton excess in conjunction with an excess in the $b \bar b$ final state observed at LEP and an excess observed by CMS in the ditau final state, which were found at comparable masses with local significances of about $2\sigma$ and $3\sigma$, respectively. We find that the
$b \bar b$ excess can be well described together with the diphoton excess in both types of the S2HDM. However, the ditau excess can only be accommodated at the level of $1\sigma$ in type IV.
Though the Standard Model (SM) provides a very elegant description of interactions among fundamental particles, there is ample evidence to believe that there exists physics beyond the Standard Model. In particular, extending the scalar sector is well motivated by vacuum stability, the electroweak phase transition and various other considerations. Among such extensions, the two-Higgs-doublet model (2HDM) is the simplest one that preserves the electroweak $\rho$-parameter. Although flavour-changing neutral currents (FCNC) are usually avoided in these models by imposing an additional $Z_2$ symmetry, they are still subject to several severe phenomenological constraints. Conversely, in the aligned two-Higgs-doublet model (A2HDM), FCNCs are avoided by choosing the same flavour structure for the Yukawa couplings of the two scalar doublets, and the phenomenological constraints on this model are also weaker. Here, we present a global fit of the A2HDM using the package HEPfit, which performs a Bayesian analysis of the parameter space of this model using stability and perturbativity bounds, results for various flavour and electroweak precision observables, and constraints from Higgs searches at the LHC, in order to investigate how much room the current experimental data leave for this model.
I briefly review the Benchmark Planes in the Two-Real-Singlet Model (TRSM), a model that extends the Standard Model (SM) scalar sector by two real singlets that obey a Z2 x Z2' symmetry. In this model, all fields acquire a vacuum expectation value, such that the model contains in total three CP-even neutral scalars that can interact with each other. All interactions with SM-like particles are inherited from the SM-like doublet via mixing. I remind the readers of the previously proposed benchmark planes, and briefly discuss possible production at future Higgs factories, as well as regions in a more generic scan of the model. For these, I also discuss the use of the W-boson mass as a precision observable to determine allowed/excluded regions in the model's parameter space. This work builds on a whitepaper submitted to the Snowmass process.
Various extensions of the Standard Model predict the existence of additional Higgs bosons. If these additional Higgs bosons are sufficiently heavy, an important search channel is the di-top final state. In this channel, interference contributions between the signal and the corresponding QCD background process are expected to be important. If more than one heavy Higgs boson is present, besides the signal-background interference effects associated with each Higgs boson, important signal-signal interference effects are also possible. We perform a comprehensive model-independent analysis of the various interference contributions within a simplified model framework considering two heavy Higgs bosons that can mix with each other, taking into account large resonance-type effects arising from loop-level mixing between the scalars. The interference effects are studied both analytically at parton level and with Monte Carlo simulations of the di-top production process at the LHC. We demonstrate that signatures can emerge from these searches that may be unexpected or difficult to interpret.
Searches for dark matter produced via scalar resonances in final states consisting of Standard Model (SM) particles and missing transverse momentum are of high relevance at the LHC. Motivated by dark-matter portal models, most existing searches are optimized for unbalanced decay topologies for which the missing momentum recoils against the visible SM particles. In this work, we show that existing searches are also sensitive to a wider class of models, which we characterize by a recently presented simplified model framework. We point out that searches for models with a balanced decay topology can be further improved with more dedicated analysis strategies. For this study, we investigate the feasibility of a new search for bottom-quark associated neutral Higgs production with a $b \bar b Z + p^{\text{miss}}_{T}$ final state and perform a detailed collider analysis. Our projected results in the different simplified model topologies investigated here can be easily reinterpreted in a wide range of models of physics beyond the SM, which we explicitly demonstrate for the example of the Two-Higgs-Doublet model with an additional pseudoscalar Higgs boson.
Conveners:
Maciej Bilicki (Center for Theoretical Physics, Warsaw)
Hyungjin Kim (DESY)
Fabrizio Rompineve (CERN)
Contact: eps23-conveners-t02 @desy.de
Weak gravitational lensing - small distortions of photon paths due to the large-scale structure of the Universe - is an emerging cosmological probe, also known as "cosmic shear". I will present recent results of today's main cosmic shear surveys, with a focus on the Kilo-Degree Survey (KiDS) in which I take part. I will in particular discuss the current status of the so-called "$S_8$ tension". I will also mention some other applications of weak lensing in KiDS, which include smaller-scale investigations of galaxies and their dark matter haloes. Using KiDS as an example, I will point to some challenges that current and forthcoming weak lensing surveys are facing.
We revisit the framework of axion-like inflation, considering a warm inflation scenario in which the inflaton couples to the topological charge density of non-Abelian gauge bosons whose self-interactions result in a rapidly thermalizing heat bath. Including both dispersive (mass) and absorptive (friction) effects, we find that the system remains in a weak regime of warm inflation (thermal friction smaller than the Hubble rate) for phenomenologically viable parameters. We derive an interpolating formula for vacuum and thermal production of tensor perturbations in generic warm inflation scenarios, and find that the perturbations exhibit a model-independent f^3 frequency shape in the LISA window, with a coefficient that measures the maximal shear viscosity of the thermal epoch.
The idea of searching for gravitational waves using cavities in strong magnetic fields has recently received significant attention. In particular, cavities with rather small volumes that are currently used to search for axion-like particles are discussed in this context. We propose here a novel experimental scheme enabling the search for gravitational waves with MHz frequencies and above, which could be caused for example by primordial black hole mergers. The scheme is based on synchronous measurements of cavity signals from several devices operating in magnetic fields at distant locations. Although signatures of gravitational waves may be present as an identifiable signal in a single cavity, it is highly challenging to distinguish them from noise. By analyzing the correlation between signals from multiple, geographically separated cavities, it is not only possible to substantially increase the signal-to-noise ratio, but also to investigate the nature and the source of those gravitational wave signatures. In the context of this talk, a first demonstration experiment with one superconducting cavity has been conducted, which is the basis of the proposed data-analysis approaches. The prospects of GravNet (Global Network of Cavities to Search for Gravitational Waves) are outlined in this talk.
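The gain from correlating distant cavities can be illustrated with a toy calculation in which a weak common tone, buried in independent noise in two simulated readouts, survives averaging of their cross spectrum; all sampling parameters, frequencies and amplitudes below are hypothetical and unrelated to the actual setup.
import numpy as np

# Toy cross-correlation of two simulated cavity readouts; all parameters are hypothetical.
rng = np.random.default_rng(0)
fs, seg, nseg = 8192, 1024, 200                         # sampling rate [Hz], segment length, segments
t = np.arange(nseg * seg) / fs
common = 0.05 * np.sin(2 * np.pi * 3000.0 * t)          # weak common "signal" tone at 3 kHz
x1 = common + rng.normal(0.0, 1.0, t.size)              # cavity 1: independent noise
x2 = common + rng.normal(0.0, 1.0, t.size)              # cavity 2: independent noise
cross = np.zeros(seg // 2 + 1)
for k in range(nseg):                                   # average the cross spectra over segments
    s = slice(k * seg, (k + 1) * seg)
    cross += (np.fft.rfft(x1[s]) * np.conj(np.fft.rfft(x2[s]))).real
freqs = np.fft.rfftfreq(seg, 1.0 / fs)
print(f"cross-spectrum peak at {freqs[np.argmax(cross)]:.0f} Hz")   # recovers ~3000 Hz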
As gravitational waves (GW) probe the strong-field regime of gravity, they are an important tool for testing gravitational models. This requires an accurate description of the gravitational waveforms in modified gravity theories. In this work we focus on scalar Gauss-Bonnet gravity (sGB), a promising extension of General Relativity (GR), to include finite-size effects in the modelling of the inspiral of a black hole (BH) binary. sGB adds to the Einstein-Hilbert action a topological-invariant quadratic curvature term coupled to a scalar field, leading to the possibility of black hole solutions with non-zero scalar hair. We find that the scalar-induced tidal corrections related to the scalar Love number contribute at the same PN order, and scale in the same way with distance and frequency, as the sGB correction to the gravitational wave (GW) phase. Finally, we investigate the dependence of the sGB correction and the tidally induced correction on the physical properties of the binary, and find that the tidal effects dominate over the sGB corrections for large separations of the black holes.
Models of freeze-in Dark Matter (DM) have emerged as a compelling explanation for the absence of a signal in direct detection experiments. In these models, DM is generated through the decay of a feebly coupled parent particle. If the parent carries a gauged charge, it can potentially be detected in long-lived particle (LLP) searches. Moreover, in this framework, DM production predominantly occurs at temperatures around the mass of the parent particle. Therefore, the phase of inflationary reheating becomes crucial in determining the relic density when the reheating temperature is comparable to the parent's mass. We investigate scenarios of bosonic and fermionic reheating with power-law potentials and point out the implications for interpreting collider limits. Additionally, we emphasize that the interplay with cosmological constraints on the reheating temperature, which depend on the specific inflationary model, can provide a probe of the freeze-in parameter space complementary to collider searches. This interplay could also offer valuable insights into the dynamics of inflationary reheating in case of an LLP signal.
The Ptolemy experiment aims at the detection of the cosmic neutrino background, which is produced approximately one second after the Big Bang, according to the Standard Cosmology. Due to the extremely low energy of these neutrinos, a reliable experimental detection can be achieved through neutrino captures on beta-unstable nuclides without the need for a specific energy threshold. Among the various isotopes available, tritium implanted on a carbon-based nanostructure appears to be a promising candidate in terms of both cross-section and low-endpoint energy.
The Ptolemy collaboration intends to combine a solid-state tritium source with a novel compact electromagnetic filter, which relies on the dynamic transverse momentum cancellation concept. This filter will be applied in conjunction with an event-based preliminary radio-frequency preselection. The measurement of neutrino mass and the search for light sterile neutrinos are additional outcomes that arise from the Ptolemy experiment's physics potential, even when utilizing smaller or intermediate-scale detectors. To finalize the conceptualization of the detector, a demonstrator prototype is being assembled and tested at LNGS (Laboratori Nazionali del Gran Sasso) in the coming months. This prototype aims at addressing the challenging aspects of the Ptolemy experiment.
Conveners:
Fady Bishara (DESY)
James Frost (University of Oxford)
Silvia Scorza (LPSC Grenoble)
Contact: eps23-conveners-t03 @desy.de
The DEAP-3600 detector is a single-phase direct-detection Dark Matter (DM) experiment located 2 km underground at SNOLAB in Sudbury, Canada. The detector consists of 3279 kg of liquid argon (LAr) contained in a spherical acrylic vessel. It was specifically designed for the direct detection of dark matter candidates known as Weakly-Interacting Massive Particles (WIMPs). Radioisotope surface activity is a major source of background in DM experiments, and most experiments use a fiducial volume to remove these events, which reduces signal acceptance.
In our analysis, instead of relying only on position reconstruction algorithms and a strict fiducial volume to remove surface background events, we designed a new veto algorithm as a dedicated tool to identify surface events. This approach will enable us to tag and veto surface events and expand the fiducial volume, increasing the signal acceptance and improving the upper limits we can set on the WIMP-nucleus cross-section.
DarkSide-20k is a next-generation dual-phase Liquid Argon Time Projection Chamber (LAr TPC), currently under construction at the Gran Sasso National Laboratory (LNGS) in Italy. The 20 t fiducial liquid argon mass will probe WIMP-nucleon interactions, with sensitivity to cross sections of $10^{-48}$ cm$^2$ for a WIMP mass of 0.1 TeV/$c^2$, considering the exposure goal of 200 tonne-years.
DarkSide-20k is designed to be a nearly "instrumental background-free" experiment, meaning that fewer than 0.1 background events are expected in the WIMP search region during the planned exposure. To achieve this goal, the TPC is surrounded by an inner (neutron) and an outer (muon) veto, while low-radioactivity underground argon (depleted in $^{39}$Ar) is used as the medium of the inner detector (TPC and inner veto). Additionally, an extensive campaign of radio assays is performed to ensure the radiopurity of the materials used. Both the TPC and the veto systems are instrumented with novel cryogenic Silicon Photomultipliers (SiPMs), capable of resolving single photoelectrons and providing the required spatial and timing resolution.
This contribution will provide an overview of the DarkSide-20k experimental program, including the physics potential. The construction status of the DarkSide-20k detector will be reported with a focus on the photo-detector system construction and testing procedures.
The nature of dark matter is still a mystery in physics and the detection of particle dark matter has eluded experiments for decades. DARWIN is a next-generation liquid-xenon-based experiment that plans to reach a dark matter sensitivity limited by the cosmic neutrino background. With a proposed active target of 40 t of liquid xenon, ultra-low radioactive background, and keV-level threshold, the physics programme of DARWIN extends beyond searches for dark matter to other rare-event searches and neutrino physics. The project might be realised within the context of the newly-formed XLZD consortium. In this talk, I describe the baseline experimental design of the DARWIN observatory, its science programme, its current status, and the ongoing R&D efforts of the project.
There is a compelling physics case for a large, xenon-based underground detector devoted to dark matter and other rare-event searches. A two-phase time projection chamber as inner detector allows for a good energy resolution, a three-dimensional position determination of the interaction site and particle discrimination. To study challenges related to the construction and operation of a multi-tonne scale detector, we have designed and constructed a vertical, full-scale demonstrator for the DARWIN experiment at the University of Zurich. Here we present first results from a several-month run with 343 kg of xenon, including electron drift-lifetime and transport measurements with a 53 cm tall purity monitor immersed in the cryogenic liquid. After 88 days of continuous purification, the electron lifetime reached a value of 664(23) microseconds. We measured the drift velocity of electrons for electric fields in the range (25--75) V/cm and found values consistent with previous measurements. We also calculated the longitudinal diffusion constant of the electron cloud in the same field range and compared it with previous data, as well as with predictions from an empirical model.
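For context, in a purity monitor of this type the electron lifetime is typically extracted from the attenuation of the drifting charge between cathode and anode (a standard relation, quoted here for illustration only): $Q_A = Q_C\,e^{-t_d/\tau_e}$, i.e. $\tau_e = t_d/\ln(Q_C/Q_A)$, where $Q_C$ and $Q_A$ are the charges measured at the cathode and anode and $t_d$ is the electron drift time across the monitor.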
Despite decades of experimental efforts, the direct detection of a dark matter (DM) signal has remained elusive. Leading experiments typically have sensitivity to DM candidates in the mass range from 10 GeV to O(1 TeV); a sensitive detection method to probe the sub-GeV mass range is therefore highly motivated. The TESSERACT collaboration aims to use two fully defined sensor technologies (SPICE and HeRALD) to explore the mass range down to 10 MeV. All target materials will be read out using common Transition Edge Sensor (TES) technology. TESSERACT will have sensitivity to DM candidates interacting through nuclear recoils (NR) and electron recoils (ER). SPICE will utilize polar crystals such as GaAs, with the best sensitivity to DM mediated by a dark photon. HeRALD will be equipped with a superfluid $^4$He target, providing a light but dense target sensitive to low-energy nuclear recoils. In this presentation, we will discuss the current planning phase of TESSERACT and the progress made in realizing this goal, in addition to sharing details about the potential physics reach of this project.
The SABRE (Sodium iodide with Active Background REjection) experiment aims to detect an annual rate modulation from dark matter interactions in ultra-high-purity NaI(Tl) crystals, in order to provide a model-independent test of the signal observed by DAMA/LIBRA. It is made up of two separate detectors: SABRE South, located at the Stawell Underground Physics Laboratory (SUPL) in regional Victoria, Australia, and SABRE North, at the Laboratori Nazionali del Gran Sasso (LNGS).
SABRE South is designed to disentangle seasonal or site-related effects from the dark matter-like modulated signal by using an active veto and muon detection system. Ultra-high-purity NaI(Tl) crystals are immersed in a linear alkyl benzene (LAB) based liquid scintillator veto, further surrounded by passive steel and polyethylene shielding and a plastic scintillator muon veto. Significant work has been undertaken to understand and mitigate the backgrounds, taking into account radiation from detector materials due to both intrinsic and cosmogenically activated processes, and to understand the performance of both the crystal and veto systems.
SUPL is a newly built facility located 1024 m underground (~2900m water equivalent) within the Stawell Gold Mine and its construction was completed in mid-2022. It will house rare event physics searches, including the SABRE dark matter experiment, as well as measurement facilities to support low background physics experiments and applications such as radiobiology and quantum computing. The SABRE South commissioning is expected to occur this year.
This talk will report on the design of SUPL and the construction and commissioning of SABRE South.
Conveners:
Summer Blot (DESY)
Pau Novella (IFIC)
Davide Sgalaberna (ETH)
Jessica Turner (Durham University)
Contact: eps23-conveners-t04 @desy.de
The MicroBooNE liquid argon time projection chamber (LArTPC) experiment operated in the Fermilab Booster Neutrino Beam (BNB) and Neutrinos at the Main Injector (NuMI) beam from 2015 to 2021. Among the major physics goals of the experiment is a detailed investigation of neutrino-nucleus interactions. MicroBooNE currently possesses the world's largest neutrino-argon scattering data set, with 8 published measurements, and more than 30 ongoing analyses are studying a wide variety of interaction modes. This talk provides an overview of MicroBooNE's neutrino cross-section physics program, including investigations of exclusive pion final states and rare processes, novel cross-section extraction methods, and measurements with both muon and electron neutrinos from the BNB and NuMI beamlines.
ProtoDUNE Single-Phase is a 700-ton liquid argon detector operated in the CERN Neutrino Platform from 2018 to 2020. It is part of the Deep Underground Neutrino Experiment (DUNE), a long-baseline neutrino oscillation experiment with a 40 kt liquid argon far detector to be built at the Sanford Underground Research Facility and a near detector, with both argon and non-argon detector technologies, to be hosted at the Fermi National Accelerator Laboratory. A critical uncertainty for the DUNE neutrino oscillation program is that on final-state interactions, either reaction or elastic, of various hadrons on argon, since the scattering of neutrino-induced hadrons off argon biases the hadron's measured energy. It can also prevent algorithms from identifying the hadron's particle type. Protons, kaons, and pions from the beam are especially important for the DUNE neutrino program as they represent common final-state particles in neutrino interactions off a nucleus. Therefore, ProtoDUNE is analyzing the test beam data to measure cross sections of pions, protons, and kaons on argon, aiming to tune parameters that model charged-particle scattering off argon. This talk will discuss the data-taking program of ProtoDUNE and give an overview of the status and results of the measurements of pion, proton, and kaon cross sections on argon. It will conclude with a brief overview of how these measurements can be used for future liquid argon neutrino detectors.
Neutrinos are unique tools to probe new physics scenarios such as non-standard interactions (NSIs) of neutrinos with matter. The coupling of neutrinos to a scalar field gives rise to a new interaction known as scalar NSI. Unlike the vector NSI case, which contributes to the usual matter potential, the scalar NSI appears as a correction to the neutrino mass term. In this work, we perform a phenomenological study of neutrino oscillations in the presence of scalar NSI and of its impact on the determination of the neutrino mass ordering at the JUNO experiment. We find that in the presence of scalar NSI the survival probabilities $P_{ee}$ and $\bar{P}_{ee}$ depend on $\delta_{CP}$ and on the octant of $\theta_{23}$ even in vacuum, which would not be the case if scalar NSI were absent from the Hamiltonian. We explore the role of the diagonal scalar NSI parameters $\eta_{ee}$, $\eta_{\mu\mu}$, and $\eta_{\tau\tau}$, and note that $\eta_{ee}$ significantly affects the mass-ordering determination at JUNO. Constraints on the diagonal scalar NSI elements are also obtained in this work.
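For reference, scalar NSI is commonly introduced as a correction to the mass term in the oscillation Hamiltonian; one widely used schematic parametrization (conventions and details may differ from those adopted in this work) is $H \simeq \frac{1}{2E}\,(M+\delta M)(M+\delta M)^{\dagger} + V_{\rm CC}$, with $\delta M \equiv \sqrt{|\Delta m^2_{31}|}\,(\eta_{\alpha\beta})$ for $\alpha,\beta = e,\mu,\tau$, so that the dimensionless parameters $\eta_{\alpha\beta}$ directly modify the effective mass matrix rather than the matter potential.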
The DsTau experiment at the CERN SPS has been proposed to measure the inclusive differential cross section of $D_s$ production, with the consecutive decay to a tau lepton, in p-A interactions. A precise measurement of the tau neutrino cross section would enable searches for new physics effects, such as testing lepton universality (LU) of the Standard Model in neutrino interactions. The detector is based on nuclear emulsion, providing sub-micron spatial resolution for the detection of short-length decays with small “kink” angles. It is therefore very suitable for searching for the peculiar decay topology (“double kink”) of $D_s \to \tau \to X$. In 2022, the second physics run of the experiment was performed successfully. In this talk we discuss the physics potential of the experiment and present the analysis results of the pilot-run data and the near-future plans.
A rich cross-section and "beyond the Standard Model" (BSM) search programme will be served by the intense $\nu_e$ and $\nu_\mu$ beams that will be provided by the neutrinos from stored muons (nuSTORM) facility. Exceptional precision in cross section measurement and exquisite sensitivity in BSM searches are afforded at nuSTORM by the precise knowledge of the flavour composition and energy distribution of the neutrino beams. These unique features are complemented by the ability to tune the mean energy of the beams and use this freedom to analyse the data using synthetic beams of limited energy spread.
The precision that nuSTORM will provide is critical to the elucidation of neutrino-nucleus scattering dynamics. Especially appealing are the prospects for new, precise direct or indirect measurements of neutrino scattering cross sections on single nucleons. Such measurements will be a priceless input to the development of event generators and provide valuable information about hadron structure in the axial sector. The sensitivity of which nuSTORM is capable will allow exquisitely sensitive searches for short-baseline flavour transitions, covering topics such as light sterile neutrinos, non-standard interactions, and non-unitarity of the neutrino mixing matrix. In synergy with the goals of the neutrino-scattering program, BSM searches will also profit from measurements of exclusive final states. This would allow BSM neutrino interactions to be probed by means of precise measurements of neutrino-electron scattering, as well as by searching for exotic final states, such as dileptons or single-photon signatures. We will describe the status of the development of the nuSTORM facility and the simulation of its performance. Illustrative examples of the precision and sensitivity that can be achieved will be presented. The implementation of nuSTORM as part of a Muon Collider "demonstrator facility" will also be discussed.
Monitored neutrino beams represent a powerful and cost-effective tool to suppress cross-section-related systematics for the full exploitation of data collected in long-baseline oscillation projects like DUNE and Hyper-Kamiokande. In recent years the NP06/ENUBET project has demonstrated that the systematic uncertainties on the neutrino flux can be suppressed to 1% in an accelerator-based facility where charged leptons produced in kaon and pion decays are monitored in an instrumented decay tunnel. In this talk, we will present the final results of this successful R&D programme. The collaboration is now working to provide the full implementation of such a facility at CERN in order to perform high-precision cross-section measurements at the GeV scale, exploiting the ProtoDUNEs as neutrino detectors. This contribution will present the final design of the ENUBET beamline, which allows the collection of $\sim$10$^4$ $\nu_e$ and $\sim$6$\times$10$^5$ $\nu_{\mu}$ charged-current interactions on a 500 ton LAr detector in about 2 years of data taking. The experimental setup for high-purity identification of charged leptons in the tunnel instrumentation will be described, together with the framework for the assessment of the final systematics budget on the neutrino fluxes, which employs an extended likelihood fit of a model where the hadro-production, beamline-geometry and detector-related uncertainties are parametrized by nuisance parameters. We will also present the results of a test-beam exposure at the CERN PS of the Demonstrator: a fully instrumented 1.65 m long section of the ENUBET instrumented decay tunnel. Finally, the physics potential of the ENUBET beam with ProtoDUNE-SP and plans for its implementation in the CERN North Area will be discussed.
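As a toy illustration of a fit of this type (a minimal Python sketch, not ENUBET code; the templates, bin contents and the 10% prior width are invented for the example), a binned Poisson likelihood with one Gaussian-constrained nuisance parameter can be minimized as follows:

import numpy as np
from scipy.optimize import minimize

# Toy binned extended-likelihood fit with a Gaussian-constrained nuisance
# parameter scaling the background normalization (all numbers illustrative).
rng = np.random.default_rng(0)
sig_t = np.array([20., 50., 80., 50., 20.])    # hypothetical signal template
bkg_t = np.array([30., 25., 20., 15., 10.])    # hypothetical background template
data = rng.poisson(1.2 * sig_t + 0.9 * bkg_t)  # pseudo-data

def nll(pars):
    mu, theta = pars                                       # signal strength, nuisance
    expected = mu * sig_t + (1.0 + 0.1 * theta) * bkg_t    # 10% prior width on bkg norm
    expected = np.clip(expected, 1e-9, None)
    poisson = np.sum(expected - data * np.log(expected))   # -log Poisson (constants dropped)
    return poisson + 0.5 * theta**2                        # unit-Gaussian constraint

fit = minimize(nll, x0=[1.0, 0.0], method="Nelder-Mead")
print("fitted signal strength = %.2f, nuisance pull = %.2f" % tuple(fit.x))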
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06 @desy.de
KLOE and KLOE-2 collected the largest dataset at an electron-positron collider operating at the $\phi$ resonance peak ($\sim$ 8 fb$^{-1}$),
corresponding to the production of about 24 billion $\phi$ mesons, yielding about 8 billion pairs of neutral K mesons and 300 million $\eta$ mesons.
A wide hadron-physics program, covering rare meson decays, $\gamma\gamma$ interactions, and dark forces, is being pursued by the KLOE-2 Collaboration.
The $\eta$ decay into $\pi^0 \gamma \gamma$ is a test bench for various models and effective theories, like VMD (Vector Meson Dominance) or ChPT (Chiral Perturbation Theory), which predict branching ratios (BR) far from the experimental value. KLOE-2 performed a new precise measurement of this BR, using its highly pure $\eta$ sample produced in the $\phi \to \eta \gamma$ process.
KLOE-2 is currently probing a complementary model to the U boson or "dark photon", where the dark force mediator is a hypothetical leptophobic B boson that could show up in the $\phi \to \eta B\to \eta \pi^0\gamma\,, \eta \to \gamma \gamma$ channel. The preliminary upper limit on the dark $\alpha_{\rm B}$ coupling constant will be shown.
The KLOE-2 High Energy Tagger detectors allow the investigation of $\pi^0$ production in $\gamma \gamma$ scattering by tagging final-state leptons from $e^+e^- \to \gamma^{\ast}\gamma^{\ast}e^+e^-\to \pi^0 e^+e^-$ in coincidence with the $\pi^0$ in the barrel calorimeter. A preliminary measurement of the $\gamma^{\ast}\gamma^{\ast} \to \pi^0$ event yield, obtained using single-tagged events, will be reported.
Finally, searches for the doubly suppressed $\phi\rightarrow \eta\, \pi^+ \pi^-$ decay and the conversion decay $\phi\rightarrow \eta\, \mu^+ \mu^-$ are being performed at KLOE-2 with both $\eta \to \gamma \gamma$ and $\eta \to 3\pi^0$. Clear signals are seen for the first time.
The first observation of hypertriton and antihypertriton at the LHCb experiment is reported. The dataset used consists of pp collisions at √s = 13 TeV, collected between 2016 and 2018, and corresponds to an integrated luminosity of L = 5.5/fb. The hypertriton candidates are reconstructed via the two-body decay into helium-3 and a charged pion. The helium nuclei are identified with a technique new to the LHCb experiment, using ionization losses in the LHCb VELO and ST silicon sensors and timing information in the LHCb OT drift tubes. A total of 10^5 prompt helium and antihelium nuclei are identified with negligible background contamination, and 10^2 hypertriton candidates are found, allowing for a rich program of precise measurements of QCD and astrophysical interest to be performed on the available data.
Quarkonium production in high-energy hadronic collisions is sensitive to both perturbative and non-perturbative aspects of quantum chromodynamics (QCD) calculations. Indeed, the production of the heavy-quark pair is described by perturbative QCD while the formation of the bound state is a non-perturbative process, treated in different ways by available theoretical models. Quarkonium polarization measurements also provide stringent tests of theoretical approaches, as this observable strongly depends on the quarkonium production mechanism at play. Another way to provide constraints on quarkonium production mechanisms is by looking at the production of J/$\psi$ inside jets. For instance, quarkonium production in a parton shower predicts that J/$\psi$ mesons are rarely produced in isolation, contrary to expectations from direct quarkonium production via parton-parton scattering. The measurement of quarkonium production and polarization in pp collisions can also provide a reference for investigating the fate of charmonium in the quark-gluon plasma formed in nucleus-nucleus collisions. ALICE can measure inclusive quarkonium production down to zero transverse momentum ($p_{\rm T}$), at forward rapidity (2.5 <$y$< 4) and midrapidity (|$y$|< 0.9). Prompt and non-prompt charmonium separation is performed at midrapidity down to low $p_{\rm T}$. In this contribution, we will report on recent quarkonium results in pp collisions at $\sqrt{s}$ = 13 TeV, including $\Upsilon$(nS) cross section measurements and $\Upsilon$(1S) polarization at forward rapidity, as well as the prompt and non-prompt J/$\psi$ production in jets at midrapidity. The status of new J/$\psi$ and $\psi$(2S) polarization analyses at $\sqrt{s}$ = 13 TeV and forward rapidity will be shown. Finally, the status of new ongoing quarkonium production analyses using the Run 3 data at $\sqrt{s}$ = 13.6 TeV will be shown at both mid and forward rapidities. Results will be compared with available models.
The $p_T$-integrated cross section of inclusive hadro- and photoproduction of heavy quarkonia, when computed up to NLO in Collinear Factorisation (CF), shows a perturbative instability at high hadronic or photon-hadron collision energies: the cross section can turn negative for reasonable factorisation/renormalisation scale choices. We solve this problem by resummation of the subset of LLA higher-order corrections $\sim \alpha_s^n \ln^{n-1}(\hat{s}/M^2)$, where $\hat{s}$ is the partonic centre-of-mass energy squared, using the High-Energy Factorisation (HEF) formalism. We use the doubly-logarithmic approximation (DLA) for the resummation factors of HEF, $\sim \alpha_s^n \left[ \ln(\hat{s}/M^2)\ln(q_T/\mu_F) \right]^{n-1}$, for consistency with the NLO DGLAP evolution of the PDFs. The DLA HEF result is then matched with the full NLO CF calculation to provide a uniformly accurate description at low and high collision energies.
Phenomenological results for the $\eta_c$ total and rapidity-differential cross sections, as well as for inclusive $J/\psi$ photoproduction, will be presented. The calculation of loop corrections to the coefficient functions of HEF, which is necessary to go beyond the DLA, will also be discussed.
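Schematically, one common way to combine a resummed and a fixed-order result (the exact matching prescription adopted in this work may differ) is a subtractive matching of the form $\sigma^{\rm matched} = \sigma^{\rm NLO}_{\rm CF} + \sigma^{\rm DLA}_{\rm HEF} - \left.\sigma^{\rm DLA}_{\rm HEF}\right|_{\text{expanded to NLO}}$, where the last term removes the logarithm already contained in the NLO CF result, so that the matched cross section reduces to NLO CF at moderate energies and to the resummed HEF prediction for $\hat{s} \gg M^2$.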
The latest CMS results on spectroscopy and properties of beauty mesons and baryons are presented. The results are obtained with the data collected by the CMS experiment in proton-proton collisions at sqrt(s)=13 TeV.
The B->DDX and other related final states provide a bountiful arena for performing spectroscopy studies. This talk covers the latest results in this area from amplitude analyses and direct searches.
Measurements of neutral meson production with ALICE provide a precise determination of the production cross section over a wide range of transverse momentum across all collision systems available at the LHC. The measurements combine results from several reconstruction techniques, including the use of two different calorimeters and the reconstruction of conversion photons via their e$^{+}$e$^{-}$ pairs. In pp collisions such measurements are used to constrain the parton distribution functions (PDF) and the fragmentation functions (FF) while comparisons to the p--Pb collision system allow the study of cold nuclear matter effects over a large range in transverse momentum. In Pb--Pb collisions the $p_{\text{T}}$ spectra can be used to study energy loss mechanisms in the QGP and provide vital input for direct-photon analyses. Recent results from the LHC show similar observations for high-multiplicity pp and p--Pb collisions to those in heavy-ion collisions. Measured identified particle spectra in collisions with high charged-particle multiplicities give further insight into the hadron chemistry in such events. Moreover, the correlation of neutral mesons and jets measured in pp collisions provides direct access to the FF.
In this talk, detailed measurements of neutral pion, eta, and omega mesons in several multiplicity classes in pp collisions at $\sqrt{s}$ = 13 TeV and in pp and p--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV will be presented, including $R_{\rm pA}$. Furthermore, the measurement of $\pi^{0}$ and $\eta$ hadrons inside of jets and the resulting FFs as a function of jet and meson momentum in pp collisions will be shown. Finally, the status of the Run 3 analyses, together with future perspectives, will be discussed.
The focus of the session is on top-quark precision measurements and theory calculations.
Conveners:
Gauthier Durieux (CERN)
Abideh Jafari (DESY)
Narei Lorenzo Martinez (LAPP, Annecy)
Contact: eps23-conveners-t07 @desy.de
The ATLAS experiment has performed extensive searches for rare Standard Model processes involving top quarks. In this contribution two recent highlights of this programme are presented. Top-quark pair production in association with a W boson is a difficult process to calculate and model and is one of the leading sources of same-sign and multi-lepton events. To improve our understanding of this process, a new inclusive and differential measurement in events with 2 or 3 leptons was performed, as well as measurements of the ratio of ttW events with a positively and a negatively charged W boson. The result confirms the slight tension observed in previous measurements. The four-top production process, with a cross section of approximately 12 fb, is nearly one order of magnitude rarer still. A re-analysis of the Run 2 dataset is performed in the same-sign and multi-lepton channel, with several improvements in the event selection, the data-driven background estimate and the final discriminant. The cross-section measurement of 23 +/- fb is presented, as well as bounds on the top-quark Yukawa coupling and on EFT operator coefficients affecting four-top production.
The top quark is hypothesized in many BSM models to have enhanced, non-standard or rare interactions with other SM or BSM particles. This presentation covers the latest CMS direct results in this regard, including tests of lepton flavor violation and baryon number violation as well as FCNC searches.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10 @desy.de
The physics program of the Higgs factory will focus on measurements of the 125 GeV Higgs boson, with the Higgs-strahlung process being the dominant production channel at 250 GeV. However, the production of additional light scalars is still not excluded by the existing experimental data, provided their coupling to the gauge bosons is sufficiently suppressed. The fermion couplings of such a scalar could also be very different from the SM predictions, leading to non-standard decay patterns. The presented study considers the sensitivity of future Higgs-factory experiments to the direct observation of new light scalar production for scalar masses in the range from 50 GeV to 120 GeV.
The electron-positron stage of the Future Circular Collider (FCC-ee) is a precision-frontier factory for Higgs, electroweak, flavour, top-quark, and QCD physics. It is designed to operate in a 91-km circular tunnel built at CERN and will serve as the first step towards O(100 TeV) proton-proton collisions. In addition to an essential Higgs program, the FCC-ee offers unique and powerful opportunities to answer fundamental open questions and explore unknown physics beyond the Standard Model. Direct searches for long-lived particles, and indirect probes of new physics sensitive to scales of several tens of TeV, will be particularly fertile in the high-luminosity Z run, where $8×10^{12}$ Z bosons are expected. The large data samples of Higgs bosons, W bosons, and top quarks in very clean experimental conditions will offer additional opportunities for discoveries at other collision energies. Three concrete physics cases with promising signatures at FCC-ee will be discussed: heavy neutral leptons (HNLs), axion-like particles (ALPs), and exotic decays of the Higgs boson. These three well-motivated cases call for out-of-the-box optimization of experimental conditions and analysis techniques, which could lead to improvements in other searches for new physics.
Although the LHC experiments have searched for and excluded many proposed new particles up to masses close to 1 TeV, there are many scenarios that are difficult to address at a hadron collider. This talk will review a number of these scenarios and present the expectations for searches at an electron-positron collider such as the International Linear Collider.
The cases discussed include SUSY in strongly or moderately compressed models, heavy neutrinos, heavy vector bosons coupling to the s-channel in e+e- annihilation, and new scalars.
Future e$^+$e$^-$ colliders, thanks to their clean environment and triggerless operation, offer a unique opportunity to search for long-lived particles (LLPs) at sub-TeV energies. This contribution considers the promising prospects for LLP searches offered by the International Large Detector (ILD), with a Time Projection Chamber (TPC) as the core of its tracking system, providing almost continuous tracking. The ILD has been developed as a detector concept for the ILC; however, studies directed towards understanding ILD performance at other collider concepts are ongoing.
Based on the full detector simulation, we study the possibility of reconstructing decays of both light and heavy LLPs at the ILD. For the heavy, $\mathcal{O}$(100 GeV) LLPs, we consider a challenging scenario with a small mass splitting between the LLP and the dark matter candidate, resulting in only a very soft displaced track pair in the final state, not pointing to the interaction point. We account for the soft beam-induced background (from measurable e$^+$e$^-$ pairs and $\gamma\gamma\to$ hadrons processes), expected to give the dominant background contribution due to its very high cross section, and show possible means of its reduction. As the opposite extreme scenario, we consider the production of a light, $\mathcal{O}$(1 GeV) pseudo-scalar LLP, which decays to two highly boosted and almost collinear displaced tracks.
We also present the corresponding results for an alternative ILD design, where the TPC is replaced by a silicon tracker modified from the Compact Linear Collider detector (CLICdet) design.
Muon colliders offer enormous potential for the exploration of the particle physics frontier, representing a unique possibility of combining the high centre-of-mass energy and luminosity of hadron colliders with the very precise measurements of lepton machines. They provide an unprecedented physics reach, from Standard Model (SM) processes to new physics beyond the SM. The contribution will give a general overview of the latter topic, ranging from supersymmetry and dark matter to muon-specific opportunities such as the study of the g-2 and B anomalies. In particular, the sensitivity of a 3 TeV Muon Collider to a dark SUSY model, a hidden sector coupled to the SM via a supersymmetric neutralino portal and characterized by multiple muons in the final state, will be discussed, along with some considerations on muon reconstruction performance.
Exotic beyond the Standard Model signatures, such as long-lived particles or high-mass resonances, are prime examples of the physics potential of a high-energy muon collider. These experimental signatures impose significant constraints on the detector design and requirements on the event reconstruction techniques employed to analyse the data.
For example: dedicated track reconstruction techniques for ionising particles that disappear as they traverse the detector, dedicated data-paths for late-decaying states, or high-granularity calorimetry to aid with the reconstruction of highly-boosted objects.
This talk will highlight some of the experimental challenges that arise when targeting these exotic signatures at a muon collider, and present the development work which is being done to make them possible as well as the expected reach.
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12 @desy.de
A Muon Collider is being proposed as a next-generation facility. Such a collider would have unique advantages, since it combines clean events, as in electron-positron colliders, with high collision energies, as in hadron colliders, thanks to negligible beam radiation losses. The beam-induced background, produced by muon decays in the beams and subsequent interactions, reaches the interaction region and the detectors and presents unique features and challenges with respect to other machines. In this talk the R&D activities for the design of the Muon Collider detector will be presented. In particular, the development of the tracking system, the calorimeter system and the muon detector will be discussed.
Results of detailed simulation studies of the detector in the muon collider environment and of experimental tests on prototypes based on the most promising technologies will be shown.
The electromagnetic calorimeter (ECAL) of the CMS experiment at the CERN LHC, thanks to its excellent energy resolution, is crucial for many physics analyses, ranging from Higgs measurements to new physics searches involving very-high-mass resonances. A precise calibration of the detector and of all its individual channels is essential to achieve the best possible resolution for electron and photon energy measurements, as well as for the measurement of the electromagnetic component of jets and of the contribution to energy sums used to obtain information about particles that leave no signal in the detectors, such as neutrinos. To ensure the stability of the energy response over time, a laser monitoring system is employed to measure radiation-induced changes in the detector hardware and compensate for them in the reconstruction. This talk will summarize the techniques used for the ECAL energy and time calibrations with the laser system and the full Run 2 (2015-2018) dataset, and will present the ultimate ECAL performance achieved for the legacy reprocessing of the Run 2 data.
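For reference, the transparency corrections derived from the laser data are commonly expressed as a power law relating the channel response to scintillation light, $S$, to its response to the injected laser light, $R$: $S(t)/S(0) = \left(R(t)/R(0)\right)^{\alpha}$, with a crystal-dependent exponent $\alpha$ (typically around 1.5 for the barrel crystals); the value quoted here is only indicative.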
The aim of the LHCb Upgrade II is to operate at a luminosity of 1.5 x 10$^{34}$ cm$^{-2}$ s$^{-1}$ to collect a data set of 300 fb$^{-1}$. The required substantial modifications of the current LHCb electromagnetic calorimeter due to high radiation doses in the central region and increased particle densities are referred to as PicoCal. A consolidation of the ECAL already during LS3 will reduce the occupancy and mitigate substantial ageing effects in the central region after Run 3.
Several scintillating sampling ECAL technologies are currently being investigated in an ongoing R&D campaign: Spaghetti Calorimeter (SpaCal) with garnet scintillating crystals and tungsten absorber, SpaCal with scintillating plastic fibres and tungsten or lead absorber, and Shashlik with polystyrene tiles, lead absorber and fast WLS fibres.
Timing capabilities with tens of picoseconds precision for neutral electromagnetic particles and increased granularity with denser absorber in the central region are needed for pile-up mitigation. Time resolutions of better than 20 ps at high energy were observed in test beam measurements of prototype SpaCal and Shashlik modules. Energy resolutions with sampling contributions of about 10%/sqrt(E) in line with the requirements were observed. The presentation will also cover results from detailed simulations to optimise the design and physics performance of the PicoCal.
Calorimetry at the High-Luminosity LHC (HL-LHC) faces two enormous challenges, particularly in the forward direction: radiation tolerance and unprecedented in-time event pileup. To meet these challenges, the CMS Collaboration is preparing to replace its current endcap calorimeters for the HL-LHC era with a high-granularity calorimeter (HGCAL), featuring a previously unrealized transverse and longitudinal segmentation, for both the electromagnetic and hadronic compartments, with 5D information (space-time-energy) read out. The proposed design uses silicon sensors for the electromagnetic section and for the high-irradiation regions (with fluences above 10¹⁴ neq/cm²) of the hadronic section, while in the low-irradiation regions of the hadronic section plastic scintillator tiles equipped with on-tile silicon photomultipliers (SiPMs) are used. The full HGCAL will have approximately 6 million silicon sensor channels and about 240 thousand channels of scintillator tiles. This will facilitate particle-flow-type calorimetry, where the fine structure of showers can be measured and used to enhance particle identification, energy resolution and pileup rejection. In this talk we present the ideas behind the HGCAL, the current status of the project, the lessons learnt, in particular from beam tests and from the design and operation of vertical test systems, and the challenges that lie ahead.
The MUonE experiment proposes a novel approach to determine the leading hadronic contribution to the muon g-2 from a precise measurement of the differential cross section of $\mu e$ elastic scattering, achievable by scattering the CERN SPS muon beam off the atomic electrons of a light target.
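For context, the measured shape of the elastic cross section gives access to the hadronic contribution to the running of $\alpha$ in the space-like region, from which the leading hadronic contribution to the muon anomaly follows via the well-known integral (quoted here for illustration): $a_\mu^{\rm HLO} = \frac{\alpha}{\pi}\int_0^1 dx\,(1-x)\,\Delta\alpha_{\rm had}[t(x)]$, with $t(x) = x^2 m_\mu^2/(x-1) < 0$.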
The detector layout is modular, consisting of an array of identical tracking stations, each one made of a light target and silicon strip planes, followed by an electromagnetic calorimeter made of PbWO$_4$ crystals with APD readout, placed after the last station, and a muon filter. The scattered particles are tracked without any magnetic field, and the event kinematics can be constrained over a large phase-space region from the expected correlation of the outgoing particle angles. The ambiguity affecting a specific region, where the electron and the muon emerge with similar deflection angles, can be resolved by identifying the electron track as the one whose extrapolation matches the calorimeter cluster, or the muon track by associating it with hits in the muon filter. The role of the calorimeter will be important for background estimation and reduction, and for assessing systematic errors, providing useful redundancy and allowing for alternative selections.
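As a toy illustration of this angular correlation (a minimal Python sketch, not MUonE code; the 160 GeV beam energy is an assumed nominal value, and radiative effects are ignored), elastic two-body kinematics on an electron at rest fixes the muon angle for every electron angle:

import numpy as np

m_mu, m_e = 0.1056584, 0.000510999   # masses in GeV
E_beam = 160.0                        # assumed incoming muon energy in GeV
p_beam = np.sqrt(E_beam**2 - m_mu**2)

def muon_angle(theta_e):
    """Muon scattering angle [rad] corresponding to an electron angle [rad]."""
    c = np.cos(theta_e)
    # electron total energy from elastic kinematics on a target at rest
    num = (E_beam + m_e)**2 + (E_beam**2 - m_mu**2) * c**2
    den = (E_beam + m_e)**2 - (E_beam**2 - m_mu**2) * c**2
    E_e = m_e * num / den
    p_e = np.sqrt(E_e**2 - m_e**2)
    # transverse and longitudinal momentum balance give the muon angle
    return np.arctan2(p_e * np.sin(theta_e), p_beam - p_e * c)

for th_e in (5.0, 10.0, 20.0, 30.0):   # electron angles in mrad
    print("theta_e = %5.1f mrad  ->  theta_mu = %.3f mrad" % (th_e, muon_angle(th_e * 1e-3) * 1e3))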
Beam tests are carried out at CERN with a prototype calorimeter to determine its calibration with both high-energy (20-150 GeV) and low-energy (1-10 GeV) electrons. In late summer a pilot run is scheduled with up to three tracking stations and the calorimeter integrated within a common triggerless readout system. The main motivations for the MUonE calorimeter are discussed, and the status and first performance results will be presented.
The DUNE experiment, currently under construction in the US, has a broad physics program that spans from oscillation physics at the GeV scale to the observation of solar neutrinos in few-MeV events. This program leverages the unprecedented resolution and imaging capability of the liquid argon TPC. LArTPCs are dense, fully active detectors that allow for a 3D real-time reconstruction of the events, achieved by means of the collection of drifted electrons from ionization. In addition to electrons, LArTPCs produce large quantities of VUV photons, which will be fully exploited in DUNE thanks to its Photon Detection System (PDS). The light collected by the PDS will be of paramount importance to measure the event timing and the vertical trajectory of charged particles for non-beam events, and will improve significantly the overall energy resolution of DUNE, especially at low energies, allowing it to unlock its full scientific potential. The last few years marked important steps in the development of the PDS. Thanks to an intense R&D effort conducted at the two ProtoDUNE detectors at CERN, the PDS technology for DUNE has been optimized and validated for the DUNE physics. This talk will illustrate the concept of the DUNE PDS, its development and use in ProtoDUNE, as well as its role in achieving the physics goals of DUNE.
Conveners:
Tatiana Pieloni (École Polytechnique Fédérale de Lausanne)
Mauro Migliorati (Universita di Roma)
Marc Wenskat (Universität Hamburg)
Contact: eps23-conveners-t13 @desy.de
Plasma wakefield acceleration is a promising technology to reduce the size of particle accelerators. Use of high energy protons to drive wakefields in plasma has been demonstrated during Run 1 of the AWAKE programme at CERN. Protons of energy 400 GeV drove wakefields that accelerated electrons to 2 GeV in under 10 m of plasma. The AWAKE collaboration is now embarking on Run 2 with the main aims to demonstrate stable accelerating gradients of 0.5–1 GV/m, to accelerate bunches of electrons with high beam quality, and develop plasma sources scalable to 100s of metres and beyond. By the end of Run 2, the AWAKE scheme should be able to provide electron beams for particle physics experiments and several possible experiments have already been evaluated. This talk summarises the AWAKE Run 2 programme as well as the possible application of the AWAKE scheme to novel particle physics experiments.
LhARA, the ‘Laser-hybrid Accelerator for Radiobiological Applications’, will be a novel, uniquely flexible facility dedicated to the study of radiobiology. LhARA will use a high-power pulsed laser to generate a short burst of protons or light ions. These will be captured using strong-focusing electron-plasma (Gabor) lenses. Acceleration using a fixed-field alternating-gradient accelerator will deliver proton beams with energies up to 127 MeV and ion beams, such as C$^{6+}$, with energies up to 33.4 MeV/nucleon. The laser-hybrid source allows high instantaneous dose rates of up to 10$^9$ Gy/s to be delivered in short (10–40 ns) pulses.
The laser-hybrid approach will allow the exploration of the vast “terra incognita” of the mechanisms by which the biological response to radiation is modulated by the beam’s characteristics. The technologies to be demonstrated in LhARA have the potential to allow particle-beam therapy to be delivered in completely new regimens, providing a variety of ion species in a range of spatial configurations and exploiting ultra-high dose rates.
This contribution describes the status of the LhARA project in the context of the Ion Therapy Research Facility.
The construction of an electron-positron collider "Higgs factory" has been stalled for a decade, not because of feasibility but because of the cost of conventional radio-frequency (RF) acceleration. Plasma-wakefield acceleration promises to alleviate this problem via a significant cost reduction based on its orders-of-magnitude higher accelerating gradients. However, plasma-based acceleration of positrons is much more difficult than that of electrons. We propose a collider scheme that avoids positron acceleration in plasma, using a mixture of beam-driven plasma-wakefield acceleration to high energy for the electrons and conventional RF acceleration to low energy for the positrons. We emphasise the benefits of asymmetric energies, asymmetric bunch charges and asymmetric transverse emittances. The implications for luminosity and experimentation at such an asymmetric facility are explored and found to be comparable to conventional facilities; the cost is found to be much lower.
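For orientation, for head-on ultra-relativistic beams the centre-of-mass energy depends only on the product of the two beam energies, $\sqrt{s} \simeq 2\sqrt{E_{e^-}E_{e^+}}$; purely as an illustration, $E_{e^-} = 500$ GeV and $E_{e^+} \simeq 31$ GeV give $\sqrt{s} \approx 250$ GeV, with the collision products boosted along the electron direction by $\gamma_{\rm cm} = (E_{e^-}+E_{e^+})/\sqrt{s} \approx 2.1$.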
Beam-driven plasma-wakefield acceleration (PWFA) is a promising technology for future accelerator facilities, where the high electric field gradients could shrink the size, reduce the cost and/or provide the highest beam energies. Successful experimental results in recent decades have demonstrated the feasibility of high-gradient acceleration in plasma. However, to meet the demands of current conventional accelerator users in terms of luminosity and brightness, there are more milestones to reach. Preservation of beam quality, high overall energy-transfer efficiency, and high-average-power operation are the three major research pillars of the PWFA experiment FLASHForward at DESY. In this submission an overview of the facility and recent results -- per-mille-level energy-spread preservation, a high energy-transfer efficiency of 42% from the wake to the accelerating bunch, and the in-principle operation of plasma accelerators at O(10 MHz) inter-bunch repetition rates -- are presented.
A key ambition expressed in the European Strategy for Particle Physics has been that “the energy efficiency of present and future accelerators […] is and should remain an area requiring constant attention”. Accordingly, “a detailed plan for the [ …] saving and re-use of energy should be part of the approval process for any major project”. The Energy Recovery Linac (ERL) developments directly address this requirement. ERLs are a potentially revolutionary accelerator concept based on recycling or recovering the kinetic energy of a “used” particle beam to accelerate a newly injected particle beam from a high-brightness source. The ERL concept thereby enables high-power beams while greatly reducing the power consumption and avoiding high-power beam dumps with associated radiation protection and activation issues. Conceptual ERL-based designs for future colliders have recently been made for e+e- Higgs factories and for TeV-energy ep/eA colliders. ERL concepts have also been considered for a future muon collider and for reaching the EIC luminosity targets at BNL. Documented in a detailed report [2], the implementation of ERLs for HEP applications promises a luminosity increase by one or more orders of magnitude at a power consumption comparable to classic non-ERL based solutions, a key step for the future of high-energy physics and its sustainability. To coordinate the European ERL efforts, an Accelerator R&D Roadmap has been endorsed by CERN Council [1]. It entails three major interrelated elements for advancing the development of ERLs: i) embed the developments in a global collaborative and competitive effort, ii) innovative technology developments, and iii) reach the 100 mA electron current target in new facilities. Two European ERL facilities designed for operation this decade serve as anchors of the Roadmap-related activities: bERLinPro (Berlin, Germany) with the goal to operate a single-pass 100 mA, 1.3 GHz facility, and PERLE (hosted by IJCLab Orsay, France) as the first multi-turn, high-power, 802 MHz facility with novel physics applications.
[1] “European Strategy for Particle Physics - Accelerator R&D Roadmap”, CERN Yellow Rep. Monogr. 1 (2022) 1-270, arXiv:2201.07895
[2] “The Development of Energy Recovery Linacs”, arXiv:2207.02095, to appear in JINST
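As a rough illustration of the power scale involved in energy recovery (numbers purely illustrative, not taken from the documents above): a continuous 100 mA beam at 1 GeV carries $P_{\rm beam} = U \cdot I = 1\,\mathrm{GV} \times 0.1\,\mathrm{A} = 100$ MW, which in a conventional linac would have to be supplied by the RF system and absorbed by the beam dump; in an ERL the decelerating beam returns most of this power to the RF fields, so the grid power is driven mainly by the injector, residual losses and cryogenics.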
In this presentation I plan to discuss the potential offered by Energy-Recovery Linacs (ERLs) and particle recycling for boosting luminosity in high-energy electron-positron and lepton-hadron colliders. I will start by presenting several proposed ERL-based colliders and comparing them with the more traditional, but better developed, concepts of FCC-ee, ILC and CLIC. ERL-based colliders promise not only significantly higher luminosity, but also higher energy efficiency, measured in units of luminosity divided by the consumed AC power. Recycling the collided particles and recuperating them in a damping ring removes the enormous ILC/CLIC appetite for fresh positrons and offers high degrees of polarization in the colliding beams.
The presentation will cover similarities and distinctions between linear and re-circulating ERL concepts, with a focus on their costs, energy efficiency and energy reach. Two examples of ERL-based Higgs factories located in the LHC and FCC tunnels will be compared with two concepts of linear ERL colliders.
The status of ERLs worldwide will be briefly reviewed and the technical challenges facing this promising accelerator technology will be discussed. I will finish the talk with a discussion of possible technical breakthroughs which could make ERL technology more affordable and more attractive.
The Berlin Energy Recovery LINAC Project (bERLinPro), a 100-mA, 50-MeV ERL design, was originally conceived to demonstrate ERL technology for use in a future high-brilliance light source at HZB. This endeavor was officially ended in 2020. However, the full infrastructure for ERL operation, including cryogenics and high-power RF, the UHV vacuum system and complete beam transport, is installed, and SRF systems for a photoinjector and booster are currently being assembled. bERLinPro now serves as a general accelerator test facility to explore a broadening set of technologies and applications, from SRF photoinjectors and high-power SRF-based energy recovery to ultrafast electron diffraction (UED) that takes advantage of the unique properties of the SRF-based injector. To reflect this expanded focus the facility is now named SEALab: “Superconducting rf Electron Accelerator Laboratory.” Presently, the commissioning of the ca. 10-mA SRF photoinjector is under way, followed by the assembly of the booster module. In the future, plans are to explore the 10-100 mA range. In this contribution, an overview of lessons learned so far, the status of the machine, the coming commissioning steps and an outlook to midterm and future applications will be given. This includes the potential to use bERLinPro/SEALab to explore technologies and concepts that are key elements of the European Accelerator Roadmap for particle physics, e.g. pathways towards a more sustainable large-scale science driver.
Conveners:
Alessia Bruni (INFN Bologna)
Marie-Lena Dieckmann (Universität Hamburg)
Gwenhaël Wilberts Dewasseige (UC Louvain)
Contact: eps23-conveners-t14@desy.de
As members of the Virgo Collaboration – co-authoring observational results together with the LIGO Scientific Collaboration and the KAGRA Collaboration – we became aware of biased citation practices that exclude Virgo (and KAGRA) from achievements that collectively belong to the wider LIGO/Virgo/KAGRA Collaboration. Here we frame these practices in the context of Merton’s “Matthew effect” applied to scientific collaborations rather than individual scientists, we quantify the occurrence of this cognitive bias, describe the positive actions that we undertook to correct it in the scientific community, and report the reactions to our efforts.
Engaging young children in physics research is particularly challenging but offers unique educational potential. Outreach and engagement at an early age has been demonstrated to be key to increasing awareness of physics and to increasing diversity in the field in later years. We present a variety of activities and resources developed by the ATLAS Collaboration targeting this demographic, including the ATLAS Baby Book, ATLAS Colouring Books, ATLAS Activity Sheets, Lego models and more. The ATLAS Collaboration also provides a range of educational opportunities appropriate for classroom engagement. Here we present details of the exhibits/features and use of the new ATLAS Visitors Centre at CERN – the most visited external visitor site at CERN – as well as an overview of the highly-successful Virtual Visit programme bringing CERN and the ATLAS experiment to tens of thousands of people. We also present an overview of ATLAS contributions to international Masterclasses and showcase resources such as ‘Cheat Sheets’ and ‘Fact Sheets’, which are intended to cover key topics of the work done by the ATLAS Collaboration and the physics behind the experiment for a broad audience of all ages and levels of experience.
Interest of youth in STEM, and especially in physics and engineering studies, is declining, even though a new generation of specialists is needed to ensure the continuation of cutting-edge research, which is essential for innovation, economic progress and sustainable development. New pathways have to be found to inspire more young talents to become physicists and STEM specialists. Putting these disciplines into a real-life context, including sustainable development, provides a powerful tool to foster students' interest and appreciation.
The pilot project Youth@STEM4SF (Youth at STEM for Sustainable Future), launched in Switzerland in May 2023 with the support of the Swiss Physical Society (SPS), the Swiss Academy of Natural Sciences (SCNAT) and the foundation “education21”, presents physics and STEM in the context of real life and sustainable development and integrates innovative applied R&D from physics-based industry into an innovative high-school programme. It aims to engage young talents, especially girls, in physics and other basic-science studies and to teach future society leaders about the value of science in our lives and for a sustainable future. The impact in terms of change of interest and attitude was measured, and the first results are more than encouraging.
The citizens of the future were extended a unique invitation to witness High Energy Physics firsthand. A group of 130 individuals, aged between 16 and 30, eagerly embarked on a journey to explore 17 INFN sites and engage in conversations with our esteemed scientists across Italy. As they delved into our laboratories, collaborations, and day-to-day operations, a talented director meticulously captured their awe-inspiring experiences and unanticipated discoveries within this previously unknown realm.
The resulting documentary encapsulates the profound emotions that both the visitors and researchers experienced throughout these captivating events. This poignant film is currently being showcased across various Italian cities, granting citizens the opportunity to immerse themselves in the world of high energy physics. Through the lens of these young, visionary individuals—the citizens of the future—they can embark on a journey that unravels the mysteries behind the scenes of high energy physics, not just in Italy but also across the globe.
With several thousand members from more than 200 institutes and over 50 countries, the CMS Collaboration is inherently a diverse and unique scientific environment. The CMS Diversity & Inclusion Office aims to foster further diversity among our collaboration, create an inclusive environment, and ensure equitable access to opportunities, resources, and recognition for all members. The goal, since its creation in 2017, is to cultivate a welcoming and positive working environment for all, so that we can ensure the continued productivity and success of our collaboration. We present here the efforts of the CMS D&I Office in fulfilling its mandate. These efforts include implementing a code of conduct, raising awareness about D&I matters within the collaboration, and facilitating outreach and communication outside of the collaboration about D&I. An update on the recommendations from the Implementation Team on Diversity and Inclusion of CMS will be also presented.
The ATLAS Collaboration consists of more than 5000 members from over 100 different countries. Regional, age and gender demographics of the collaboration are presented, including the time evolution over the lifetime of the experiment. In particular, the relative fraction of women is discussed, including their share of contributions, recognition and positions of responsibility, and how these depend on other demographic measures.
Introduction to GENERA
Conveners:
Fady Bishara (DESY)
James Frost (University of Oxford)
Silvia Scorza (LPSC Grenoble)
Contact: eps23-conveners-t03 @desy.de
The HIBEAM/NNBAR program at the ESS will perform a high-sensitivity search for neutron oscillations, with the potential to shed light on physics beyond the Standard Model. The program comprises two distinct phases: HIBEAM and NNBAR. HIBEAM will focus on the search for neutron-sterile neutron ($n$ - $n'$) oscillations, offering an opportunity to explore the physics of the dark sector, while the NNBAR experiment will search for neutron-antineutron ($n$ - $\bar{n}$) oscillations. The observation of neutron-antineutron oscillations, which violate the conservation of baryon number B by two units, would be a significant breakthrough. If such a process can occur in nature, it can reshape our understanding of baryogenesis in the early Universe.
NNBAR can improve on the previous free-neutron search conducted at the ILL by a factor of 1000, taking advantage of the intense beam of cold neutrons from the world’s most powerful spallation neutron source, the ESS, currently being built in Lund, Sweden. To achieve the proposed sensitivity, NNBAR will be equipped with a state-of-the-art annihilation detector composed of a tracking system, an electromagnetic calorimeter and a cosmic veto. Moreover, NNBAR will use highly efficient magnetic shielding, novel neutron reflectors and a new moderator for the ESS optimized to maximize the intensity of cold neutrons. A design study of the critical components needed to realize NNBAR is currently being conducted within the HighNESS project, financed by the European Framework for Research and Innovation Horizon 2020. In this talk, I will discuss these critical aspects, which will be part of the Conceptual Design Report of the experiment to be published this fall.
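For context, the sensitivity gain quoted above can be read off the standard figure of merit for free-neutron searches: for quasi-free neutrons observed for a time $t \ll \tau_{n\bar n}$ the oscillation probability is $P_{n\to\bar n}(t) \simeq (t/\tau_{n\bar n})^2$, so the sensitivity scales as $N_n\,t^2$, where $N_n$ is the number of neutrons reaching the annihilation target; the ESS cold-neutron intensity and the long flight path increase both factors relative to the ILL experiment.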
The Migdal in Galactic Dark mAtter expLoration (MIGDAL) experiment aims to make the first direct and unambiguous observation of the Migdal effect from fast neutron scattering using intense DT and DD generators, allowing the effect to be investigated over a wide range of nuclear recoil energies.
The experiment uses an Optical Time Projection Chamber equipped with a stack of two glass-GEMs operating in a CF4-based gas mixture at 50 Torr, with light and charge readout provided by a CMOS camera, a photomultiplier tube, and an anode segmented into 120 Indium-Tin-Oxide strips, allowing precise three-dimensional reconstruction of the ionisation tracks from electron and nuclear recoils.
We will present preliminary results from the experiment’s commissioning using fast neutrons from the D-D generator at the Rutherford Appleton Laboratory's Neutron Irradiation Laboratory for Electronics (NILE).
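As a purely illustrative aside, the sketch below shows the generic optical-TPC recipe behind the three-dimensional reconstruction described above: the camera supplies the (x, y) projection of a track, while the drift time of the collected charge supplies the third coordinate. The drift velocity, timing reference, hit values and helper function are assumptions chosen for the example, not MIGDAL parameters.

```python
import numpy as np

# Illustrative sketch only: generic optical-TPC 3D reconstruction.
# The camera image gives the (x, y) projection of the ionisation track,
# while the drift time of the collected charge gives z = v_drift * (t - t0).
V_DRIFT_CM_PER_US = 1.3   # assumed drift velocity, not a MIGDAL value
T0_US = 0.0               # assumed timing reference

def hit_to_3d(x_cm, y_cm, t_us):
    """Combine one camera hit (x, y) with its drift time into a 3D point."""
    z_cm = V_DRIFT_CM_PER_US * (t_us - T0_US)
    return np.array([x_cm, y_cm, z_cm])

# Toy track: ten hits along a straight line with increasing drift time.
track = np.array([hit_to_3d(0.10 * i, 0.05 * i, 2.0 + 0.10 * i) for i in range(10)])
print(track.shape)  # (10, 3)
```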
The proposed LUXE experiment (LASER Und XFEL Experiment) at DESY, Hamburg, using the electron beam from the European XFEL, aims to probe QED in the non-perturbative regime created in collisions between high-intensity laser pulses and high-energy electron or photon beams. This setup also provides a unique opportunity to probe physics beyond the Standard Model. In this talk we show that, by leveraging the large photon flux generated at LUXE, one can probe axion-like particles (ALPs) up to a mass of 350 MeV and with a photon coupling of $3 \times 10^{-6}$ GeV$^{-1}$. This reach is comparable to the background-free projection from NA62. In addition, we will discuss other probes of new physics such as the ALP-electron coupling.
The viable dark matter (DM) candidate mass range spans 90 orders of magnitude. The natural scenario where DM originates from thermal contact with familiar matter in the early Universe dramatically restricts this window to ~MeV to ~100 TeV. Considerable experimental efforts have sought Weakly Interacting Massive Particles in the upper end of this range (few GeV to several TeV), while the region from ~MeV to ~GeV is largely unexplored. This region must be a priority, as several traits converge here: tantalising hints of physics beyond the Standard Model appear in this lower range, the masses of most stable particles of ordinary matter lie here, and the thermal origin for DM works in a simple, predictive way. Thermal-origin DM implies a light DM-ordinary matter interaction, and therefore a production mechanism in accelerator-based experiments. The most sensitive way to probe sub-GeV DM, assuming the interaction is not electron-phobic, is to search for this production using a primary electron beam to manufacture DM in fixed-target collisions. The Light Dark Matter eXperiment (LDMX) is a planned electron-beam fixed-target missing-momentum experiment with unique sensitivity to sub-GeV DM, as well as a broad range of additional signatures. This contribution provides an overview of its theoretical motivation, main experimental challenges and their solutions, and projected sensitivities relative to the landscape of other experiments.
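To make the missing-momentum signature concrete, the following minimal sketch contrasts the known incoming beam-electron momentum with a measured recoil electron; a large imbalance is the signal-like topology. The beam energy, recoil values and variable names are hypothetical illustrations of the kinematic idea, not the actual LDMX selection.

```python
import numpy as np

# Minimal sketch of the missing-momentum variable used in electron-beam
# fixed-target searches. All numbers below are hypothetical.
p_beam   = np.array([0.0, 0.0, 4000.0])    # MeV, assumed beam electron along z
p_recoil = np.array([15.0, -8.0, 950.0])   # MeV, measured recoil electron

p_miss = p_beam - p_recoil                 # momentum carried by invisible states
print("missing momentum [MeV]:", p_miss)
print("|p_miss| = %.1f MeV" % np.linalg.norm(p_miss))
```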
The elusive Dark Matter (DM), proposed due to its gravitational interaction with ordinary matter, supposedly makes up ∼25% of our universe. Various models aim to explain the origin and properties of dark matter, many of them proposing beyond-Standard-Model particles to make up most of the DM in our universe. The ALPS II (Any Light Particle Search II) light-shining-through-walls experiment will use Transition Edge Sensors (TESs) to detect low-energy single photons originating from axion (ALP)-photon conversion at rates as low as $10^{-5}$ s$^{-1}$.
Even beyond ALPS II, these superconducting microcalorimeters, operated at cryogenic temperatures, could help search for further particle DM candidates. Much of the work to ensure the viability of the TES detector for use in ALPS II, such as calibrating the detector and mitigating external sources of backgrounds, also enables the use of the TES for an independent direct DM search. For this purpose, the superconducting sensor, sensitive to sub-eV energy depositions, can be used as a simultaneous target and sensor for DM-electron scattering for sub-MeV DM. Hence, direct DM searches with TESs could explore parameter space as yet inaccessible to nucleon-scattering experiments.
The Any Light Particle Search II (ALPS II) is a Light Shining through a Wall experiment at DESY in Hamburg, which will hunt for axions and axion-like particles in the sub-meV mass range with an axion-photon-photon coupling $g_{a \gamma \gamma} > 2 \times 10^{-11}\ \rm{GeV^{-1}}$, improving the sensitivity by a factor of $10^3$ compared to its predecessors. For this purpose, a high-power laser will be directed through a long string of superconducting dipole magnets and a mode-matched optical cavity, where some photons can convert into a beam of axion-like particles. Next, the axion beam will pass through a light-tight barrier and then another strong magnetic field and mode-matched optical cavity, where some of the axion-like particles can convert back into photons and be detected.
The ALPS II experiment is starting the initial phase of data taking, in which it will employ a heterodyne detection method (HET). Design sensitivity is expected to be reached in the second quarter of 2023 when about 2 photons/day are expected behind the wall.
To confirm the results obtained with the HET, independent measurements using superconducting Transition Edge Sensors (TES) will be conducted subsequently.
In this work, we will describe the ALPS II experiment and the first results of the collaboration.
The MAgnetized Disc And Mirror Axion eXperiment (MADMAX) is designed to search for dark matter axions in the mass range around 100 µeV, which was previously inaccessible to other experiments. This mass range is favored by models in which the PQ symmetry is broken after inflation. The required sensitivity is reached in MADMAX by applying the dielectric haloscope approach, exploiting axion-to-photon conversion at dielectric surfaces within a strong magnetic field. For MADMAX a system of up to 80 movable dielectric discs of more than 1 m diameter, the so-called booster, inside an approximately 9 T magnetic field is foreseen. The experiment will be located at DESY Hamburg in Germany and is currently entering its prototyping phase.
Among the crucial steps on the path towards the MADMAX prototype is the understanding and calibration of the booster and its behavior, which is currently pursued using small-scale closed systems. Vast progress has been made here, culminating in an axion-like-particle search with such a closed system utilizing the MORPURGO magnet at CERN, which will also host the MADMAX prototype in the future. In addition, methods to study the electric field in an open system are being developed and will allow for calibrating and aligning an open booster. Along with these activities, the preparations for commissioning of the MADMAX prototype, including e.g. the construction of the prototype booster and the MADMAX prototype cryostat, are advancing.
In this contribution, first results from the small-scale closed booster system, including the ALP search performed at CERN in 2023, will be shown, along with results of extensive studies of various aspects of the prototype and full-scale booster, such as measurements of the electric field shape. Together with these results guiding the path towards the MADMAX (prototype) experiment, an outlook will be given on the time schedule for the MADMAX prototype, including its operation and the planned ALP search at CERN, as well as on ongoing developments such as future low-noise receivers.
The BREAD Collaboration is establishing a program of broadband searches for terahertz axion dark matter. Its hallmark is a cylindrical metal barrel converting axions to photons, focused by a parabolic reflector onto low-noise photosensors. Practically, this novel dish-antenna geometry enables enclosure inside standard cryostats and high-field solenoidal magnets. BREAD plans to open multiple decades of unexplored coupling sensitivity across the meV to eV mass range that has long eluded resonant-cavity haloscopes. We present the BREAD conceptual design and science program towards a large-scale experiment, proposed in Phys. Rev. Lett. 128 (2022) 131801, together with recent experimental R&D progress towards the dark photon pilot experiments planned at Fermilab.
We report the first search for the Sagittarius tidal stream of axion dark matter around 4.55 µeV using CAPP-12TB haloscope data acquired in March of 2022.
Our result excludes the Sagittarius tidal stream of Dine-Fischler-Srednicki-Zhitnitskii and Kim-Shifman-Vainshtein-Zakharov axion dark matter with densities of $\rho_a \gtrsim 0.184$ and $\gtrsim 0.025$ GeV/cm$^3$, respectively, over a mass range from 4.51 to 4.59 µeV at a 90% confidence level.
We present the design, status and first results of a detector to search for axions and axion-like particles in the galactic halo using laser interferometry. The detector is sensitive to the polarisation rotation of linearly polarised light induced by an axion field in the mass range from $10^{-16}$ eV up to $10^{-8}$ eV, and is likely to significantly surpass the CAST limit. Currently, we search for axion masses around $2$ neV with sensitivities of $10^{-10}$ GeV$^{-1}$ at unprecedented resonant intensities of $4.5$ MW/cm$^{2}$. The inclusion of squeezed states of light will increase our sensitivity further. Our experiment has the potential to be scaled up further to a multi-kilometre-long detector, and to then set constraints on the axion-photon coupling coefficient of $10^{-18}$ GeV$^{-1}$ for axion masses of $10^{-16}$ eV, or detect a signal.
Conveners:
Summer Blot (DESY)
Pau Novella (IFIC)
Davide Sgalaberna (ETH)
Jessica Turner (Durham University)
Contact: eps23-conveners-t04 @desy.de
This talk presents the latest results of the reactor antineutrino flux and spectrum measurement at Daya Bay. The antineutrinos were generated by six 2.9 GW$_{\rm th}$ nuclear reactors and detected by eight antineutrino detectors deployed in two near (560 m and 600 m flux-weighted baselines) and one far (1640 m flux-weighted baseline) underground experimental halls. From December 2011 to December 2020, Daya Bay collected the largest sample of inverse beta decay (IBD) candidates to date. The ratio of measured to predicted antineutrino flux was found to be 0.953±0.014(exp.) for the Huber-Mueller (HM) model. The measured antineutrino IBD yield was (5.89±0.07)$\times 10^{-43}$ cm$^{2}$/fission, and its normalized variation with fuel evolution was (-0.300±0.024)$\times 10^{-43}$ cm$^{2}$/fission. The HM model predictions for these two measurements are rejected at 3.6$\sigma$ and 3.0$\sigma$, respectively, while the SM2018 model prediction is consistent with both results. Daya Bay also measured the prompt energy spectrum of antineutrinos in the detector, which cannot be described by either the HM or the SM2018 model; the discrepancies between data and models are 25$\sigma$ and 27$\sigma$, respectively. We also found that altering the predicted antineutrino spectrum from $^{239}$Pu fission does not improve the agreement with the measurement for either model. The individual antineutrino spectra of the two dominant isotopes, $^{235}$U and $^{239}$Pu, were extracted and unfolded to antineutrino energy spectra. To further improve this precision, Daya Bay and PROSPECT jointly determined the $^{235}$U and $^{239}$Pu antineutrino spectra.
The KArlsruhe TRItium Neutrino (KATRIN) experiment is designed to determine the mass of the electron antineutrino by kinematic measurements of tritium beta-decay with a target sensitivity of 0.2 eV$/c^2$ (90% C.L.). In 2022, KATRIN reported the most stringent limit on the neutrino mass, $m_\nu < 0.8$ eV$/c^2$ (90% C.L.), based on data acquired during the first two science runs in 2019. Along with the neutrino mass determination, the precise measurement of the beta-decay spectrum near the kinematic endpoint allows KATRIN to search for nonstandard physics such as sterile neutrinos with masses in the eV range.
In this talk we discuss the first results of the KATRIN searches for a sterile neutrino in the 3+1 model scenario, the analysis framework for the new datasets of the first five science runs and relevant systematic effects. The talk concludes with an outlook on KATRIN sensitivity to the eV-scale sterile neutrino from the new dataset in comparison to the results of other sterile neutrino searches.
The MicroBooNE experiment utilizes an 85-ton active volume liquid argon time projection chamber (LArTPC) neutrino detector. It can distinguish between photon- and electron-induced electromagnetic showers and can select charged-current electron neutrino and muon neutrino events with exceptional performance. In this talk, we will present results on MicroBooNE's investigation of the MiniBooNE Low Energy Excess and of the short-baseline neutrino anomalies more generally. We will present the initial findings from MicroBooNE's search for sterile neutrinos in a 3+1 model, utilizing Fermilab's Booster Neutrino Beam (BNB). We will explore the impact of the degeneracy caused by the cancellation of $\nu_e$ appearance and disappearance. Additionally, we will demonstrate how combining data from the BNB and Neutrinos at the Main Injector (NuMI) beams, which have substantially different $\nu_e/\nu_\mu$ ratios, can break this degeneracy. Moreover, we will show MicroBooNE's search for neutrino-induced single-photon production and the latest developments in the search for single photons.
The latest results of the DANSS experiment are presented. The plastic scintillator detector is located under the 3.1 GW industrial reactor core of the Kalinin Nuclear Power Plant, and its main purpose is the search for short-baseline neutrino oscillations. The inverse beta decay reaction is used for antineutrino detection. The data are collected at three distances – 10.9, 11.9 and 12.9 meters from the reactor core center. The total number of antineutrino events has reached 7M, with about 1.5M new events from the last year. New limits on the sterile neutrino oscillation parameters are presented. The evolution of the antineutrino counting rate and spectrum over the reactor campaign will also be discussed. A model-dependent analysis of the data, including the absolute antineutrino flux, excludes nearly the whole region of sterile neutrino parameters preferred by the recent BEST results, as well as the best-fit point of the Neutrino-4 experiment. A study of the cosmic muon flux variations at the detector depth of 50 m.w.e., caused by temperature and barometric effects, is also presented.
The status of the upcoming DANSS modernization will also be presented. This upgrade will improve the DANSS energy resolution and increase the sensitive volume, which will allow coverage of an even larger region of sterile neutrino parameters. In that case, the sensitivity of the DANSS detector will allow a model-independent check of the latest BEST and Neutrino-4 results.
The Short-Baseline Near Detector (SBND) will be one of three Liquid Argon Time Projection Chamber (LArTPC) neutrino detectors positioned along the axis of the Booster Neutrino Beam (BNB) at Fermilab, as part of the Short-Baseline Neutrino (SBN) Program. The detector is anticipated to begin operation later this year. SBND is characterized by superb imaging capabilities and will record over a million neutrino interactions per year. Thanks to its unique combination of measurement resolution and statistics, SBND will carry out a rich program of neutrino interaction measurements and novel searches for physics beyond the Standard Model (BSM). It will enable the potential of the overall SBN sterile neutrino program by performing a precise characterization of the unoscillated event rate, and constraining BNB flux and neutrino-argon cross-section systematic uncertainties. In this talk, the physics reach, current status, and future prospects of SBND are discussed.
The ICARUS collaboration has employed the 760-ton T600 detector in a successful three-year physics run at the underground LNGS laboratory, performing a sensitive search for LSND-like anomalous $\nu_e$ appearance in the CNGS beam, contributing to the constraints on the allowed neutrino oscillation parameters to a narrow region around 1 eV$^2$. After a significant overhaul at CERN, the T600 detector has been installed at Fermilab. Following the cryogenic commissioning, in 2020 ICARUS started its operation collecting the first neutrino events from the Booster Neutrino Beam (BNB) and the Neutrinos at the Main Injector (NuMI) beam off-axis, which were used to test the ICARUS event selection, reconstruction and analysis algorithms. ICARUS completed its commissioning phase in June 2022, moving then to data taking for neutrino oscillation physics, aiming at first to either confirm or refute the claim by Neutrino-4 short-baseline reactor experiment. ICARUS will also perform measurements of neutrino cross sections with the NuMI beam and several Beyond Standard Model searches. After the first year of operations, ICARUS will jointly search for evidence of sterile neutrinos with the Short-Baseline Near Detector (SBND), within the Short-Baseline Neutrino (SBN) program. In this presentation, preliminary technical results from the data with the BNB and NuMI beams are presented both in terms of performance of all ICARUS subsystems and its capability to select and reconstruct neutrino events.
We present an updated and improved global fit analysis of current flavor and electroweak precision observables to derive bounds on unitarity deviations of the leptonic mixing matrix and on the mixing of heavy neutrinos with the active flavours, motivated by the latest experimental updates on key observables such as $V_{ud}$, the Z invisible width and the W mass.
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06 @desy.de
The quark model predicts exotic hadrons beyond the conventional quark-antiquark mesons and three-quark baryons. Exotic candidates have been observed since the early 2000s, and several exotic states have been discovered since then. LHCb has reported on tetraquark candidates such as the X(3872), the discovery of pentaquark resonances in 2015, and the first doubly charmed tetraquark. Many theoretical approaches, including hadronic molecules and tightly bound tetra- and penta-quarks, aim to describe the nature and properties (mass/quantum numbers) of these states, also predicting that these exotic candidates may be part of a larger multiplet of exotic states. The discovery of further exotic hadrons and the measurement of their properties will help to scrutinize these theoretical models and determine the internal structure of these states. LHCb is in a unique position to study a wide range of decay modes for multiple b-hadron species. The latest results of these studies from LHCb are presented along with prospects for the Run 3 data.
Belle II offers unique possibilities for the discovery and interpretation of exotic multiquark states to probe the fundamentals of QCD. This talk presents recent results on searches for the hidden bottom transition between $\Upsilon(10750)$ and $\chi_{bJ}$, and measurements of the energy dependence of the $e^+e^- \to B^{(*)} \bar B ^{(*)}$ cross section.
The spectroscopy of charmonium-like states, together with the spectroscopy of charmed and strange baryons, is discussed. It is a good testing ground for theories of strong interactions, including QCD in both the perturbative and non-perturbative regimes, LQCD, potential models and phenomenological models [1, 2, 3]. An understanding of the baryon spectrum is one of the primary goals of non-perturbative QCD. In the nucleon sector, where most of the experimental information is available, the agreement with quark model predictions is astonishingly small, and the situation is even worse in the strange and charmed baryon sectors. Experiments with antiproton-proton annihilation and proton-proton (proton-nuclei) collisions are well suited for a comprehensive spectroscopy program, in particular the spectroscopy of charmonium-like states and flavour baryons. Charmed and strange baryons can be produced abundantly in both processes, and their properties can be studied in detail [1, 2, 3].
For this purpose, an elaborate analysis of the charmonium and exotic spectra, together with the spectra of charmed and strange baryons, is given. Recent experimental data from different collaborations (BaBar, Belle, BES, LHCb, …) are analyzed. Special attention is given to the recently discovered XYZ particles, and attempts at their possible interpretation are considered [4 - 7]. The results of a physics simulation are obtained. Some of these states can be interpreted as higher-lying charmonium and tetraquarks with hidden charm [5, 6, 7] and strangeness [8, 9]. It has been shown that charged/neutral tetraquarks must have neutral/charged partners with mass values that differ by a few MeV. This hypothesis coincides with the one proposed by Maiani and Polosa [10] and still needs confirmation. Many heavy baryons with charm and strangeness are expected to exist, but much more data on different decay modes are needed before firmer conclusions can be made. These data can be derived directly from experiments using a high-quality antiproton beam with $\sqrt{s_{p\bar{p}}}$ up to 5.5 GeV planned at FAIR and proton-proton (proton-nuclei) collisions with $\sqrt{s_{pN}}$ up to 26 GeV planned at NICA.
References
[1] W. Erni et al., arXiv:0903.3905 [hep-ex] (2009).
[2] N. Brambilla et al., Eur. Phys. J. C 71 (2011) 1534.
[3] J. Beringer et al., Review of Particle Physics, Phys. Rev. D 86 (2012).
[4] M.Yu. Barabanov, A.S. Vodopyanov, Phys. Part. Nucl. Lett. 8, no. 10 (2011) 1069.
[5] M.Yu. Barabanov, A.S. Vodopyanov, S.L. Olsen, Phys. At. Nucl. 77, no. 1 (2014) 126.
[6] M.Yu. Barabanov, A.S. Vodopyanov, S.L. Olsen, Phys. Scr. T166 (2015) 014019.
[7] M.Yu. Barabanov, A.S. Vodopyanov, S.L. Olsen, A.I. Zinchenko, Phys. At. Nucl. 79, no. 1 (2016) 126.
[8] R. Aaij et al., Phys. Rev. D 95 (2017) 012002.
[9] R. Aaij et al., Phys. Rev. Lett. 118 (2017) 022003.
[10] L. Maiani, F. Piccinini, A.D. Polosa, V. Riquer, Phys. Rev. Lett. 99 (2007) 182003.
Collisions of small systems show signatures suggestive of collective flow associated with QGP formation in heavy-ion collisions. Jet quenching is also a consequence of QGP formation, but no significant evidence of it in small systems has been found to date. Measuring or constraining the magnitude of jet quenching in small systems is essential to determine the limits of QGP formation. The ALICE collaboration presents a broad search for jet quenching in minimum bias (MB) and high multiplicity (HM) pp and p-Pb collisions, based on several observables: the semi-inclusive acoplanarity distribution of jets recoiling from a high-$p_{\text{T}}$ hadron (h+jet) and intra-jet measurements. Marked broadening of the h+jet acoplanarity distribution is observed in HM compared to MB events, which could arise from jet quenching. Both data and PYTHIA simulations suggest that this broadening arises from a generic bias of the HM selection towards multi-jet final states. We also report the average charged-particle multiplicity and jet fragmentation functions in HM and MB-selected populations in pp and p-Pb collisions, whose differences are qualitatively described by PYTHIA. Finally, to disentangle jet fragmentation and hadronisation effects, the transverse momentum ($j_{\rm T}$) distributions of charged-particle jet constituents are measured for several $z$ ranges in pp and p-Pb collisions. No significant difference is observed within the measurement uncertainties. We discuss the implications of these results for the understanding of collective effects and jet quenching in small systems.
In recent years, evidence of collective effects has been observed in small collision systems at the LHC; however, their precise origin remains unknown.
In this presentation, we will discuss new measurements of anisotropic flow observables (flow harmonic coefficients, flow vector correlation and decorrelation, nonlinear flow response) in pp and p-Pb collisions with ALICE. The highlights include the measurements of flow vector decorrelation in $p_{\rm T}$ and $\eta$, and of the flow harmonic magnitudes of identified particles in different multiplicity intervals. Non-flow contributions are significantly suppressed by the use of forward detectors, allowing a large pseudorapidity separation of the correlated particles up to $|\Delta\eta| \sim 8$. Comprehensive comparisons to model calculations provide tight constraints on the fluctuating initial conditions, allowing us to explore how anisotropic flow develops from the initial geometry through the dynamic evolution of the created system in pp and p-Pb collisions, and to investigate the role of the quark-coalescence mechanism.
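For orientation, the snippet below sketches the textbook two-particle estimator of an elliptic-flow coefficient with a pseudorapidity gap used to suppress non-flow correlations. It is a generic illustration under simple assumptions (plain pair loops, a fixed gap of two units, toy random events), not the ALICE analysis code.

```python
import numpy as np

def mean_cos2dphi(phis, etas, eta_gap=2.0):
    """Per-event <cos 2(phi_i - phi_j)>, using only pairs with |eta_i - eta_j|
    above eta_gap to suppress short-range non-flow (jets, resonances)."""
    total, npairs = 0.0, 0
    for i in range(len(phis)):
        for j in range(i + 1, len(phis)):
            if abs(etas[i] - etas[j]) > eta_gap:
                total += np.cos(2.0 * (phis[i] - phis[j]))
                npairs += 1
    return total / npairs if npairs else np.nan

# Toy usage: average over events, then v2{2} = sqrt(<<cos 2*dphi>>).
rng = np.random.default_rng(0)
events = [(rng.uniform(0, 2 * np.pi, 200), rng.uniform(-4, 4, 200)) for _ in range(20)]
c2 = np.nanmean([mean_cos2dphi(p, e) for p, e in events])
v2 = np.sqrt(c2) if c2 > 0 else 0.0
print(f"<<2>> = {c2:.4f}, v2(2) ~ {v2:.3f}")
```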
Femtoscopy is a technique used to measure the space-time dimensions of the hot and dense matter created in high-energy collisions via particles with low relative momentum, which correlate due to quantum-statistics effects and/or final-state interactions. It allows investigating the dynamics of the medium emitting the correlated particles and exploring the hadronic interactions among the produced particles.
In this talk, recent experimental results obtained by the ALICE Collaboration at the LHC are presented. The dynamics of the particle-emitting source is discussed employing a 3D analysis of kaon correlations in p-Pb collisions at 5.02 TeV. This study disfavors models which predict strong collective effects in p-A collisions, and at the same time there is an indication that the matter created in small collision systems and in very peripheral collisions of large systems evolves similarly. Femtoscopy has also allowed demonstrating the existence of a common emission source for all baryons in small collision systems and investigating the low-energy scattering properties between particle pairs, which were previously difficult or impossible to access experimentally. In particular, the recent ALICE results on p-Λ and Λ-K correlations, obtained in high-multiplicity pp collisions at 13 TeV, allowed a great improvement in the construction of realistic nuclear equations of state and provide a new gateway towards studying exotic bound states. In addition, the measurement of three-hadron correlation functions for p-p-p and p-p-Λ hints at the existence of genuine three-body effects. Finally, the measured two-body p-d correlation function shows that an effective two-body approach fails and that the three-nucleon dynamics has to be included to properly model the data.
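For context, the correlation functions referred to above are conventionally built in experimental analyses as the ratio of same-event to mixed-event pair distributions in the relative momentum $k^*$; the schematic form below is the standard textbook construction rather than the exact ALICE definition:
$C(k^*) = \mathcal{N}\, A(k^*)/B(k^*)$, with $C(k^*) \to 1$ for large $k^*$,
where $A(k^*)$ is the $k^*$ distribution of pairs from the same event, $B(k^*)$ is the corresponding distribution from mixed events (which carries no femtoscopic correlation), and $\mathcal{N}$ is a normalization fixed in a region where correlations are expected to vanish.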
This study presents measurements of dijet and neutral pion production in high-energy nuclear collisions using the LHCb detector. The measurements provide essential insights into the parton distribution functions, nuclear structure, and particle production dynamics within the framework of Quantum Chromodynamics (QCD). The nuclear modification factors for neutral pions produced in proton-lead collisions at a center-of-mass energy of 8.16 TeV show a significant suppression in the forward region and the first evidence of enhancement in the backward region. The inclusive $b\bar{b}$- and $c\bar{c}$-dijet production cross-sections in the forward region of pp collisions, measured at a center-of-mass energy of 13 TeV, are found to be in agreement with theoretical predictions at next-to-leading order. The measurements of dijet and neutral pion production provide complementary information on the PDFs and nuclear structure, which are important for understanding QCD dynamics in high-energy nuclear collisions and for further improving the theoretical models.
Fragmentation functions (FFs) are universal non-perturbative objects that model hadronization in a general class of processes. They are mainly extracted from experimental data, hence constraining the parameters of the corresponding fits is crucial for achieving reliable results. As expected, the production of lighter hadrons is favoured with respect to heavier ones, thus we would like to exploit the precise knowledge of pion FFs to constrain the shape of kaon (or heavier-hadron) FFs. In this talk, we show how imposing specific cuts on photon-hadron production leads to relations between the u-started FFs. To do so, we exploit the reconstruction of momentum fractions in terms of experimentally accessible quantities and introduce NLO QCD + LO QED corrections to reduce the theoretical uncertainties.
The focus of the session is on top quarks precision measurements and theory calculations.
Conveners:
Gauthier Durieux (CERN)
Abideh Jafari (DESY)
Narei Lorenzo Martinez (LAPP, Annecy)
Contact: eps23-conveners-t07 @desy.de
Duration: 15'+5'
Effective Field Theories (EFTs) provide a framework for capturing the effects of yet unseen heavy degrees of freedom in a model-independent manner. However, constructing a complete and minimal set of operators, especially at higher mass dimensions, is challenging.
In this talk, we present $\texttt{AutoEFT}$, an implementation of an algorithm that systematically handles redundancies among operators due to equations of motion, integration-by-parts identities, and repeated fields. This algorithm enables the construction of on-shell bases for a broad range of EFTs. Additionally, it facilitates the exploration of various aspects within this field, such as investigating higher mass dimensions or the relationship between different operator bases.
$\texttt{AutoEFT}$ can be applied to phenomenologically relevant theories like the Standard Model and its extensions, including new light particles or additional symmetry groups.
Duration: 15'+5'
The top quark sector provides valuable constraints on SMEFT operators. This talk will focus on top quark measurements directly targeting constraints on EFT, as well as interpretations of differential or inclusive measurements in the EFT framework.
Duration: 15'+5'
The Standard Model effective field theory (SMEFT) provides a general framework to describe beyond-Standard-Model physics, expected to be valid up to a certain higher energy scale $\Lambda$. Exploring the signatures of such a generalised theory is demanding and challenging. To do so, we concentrate on modifications of the top quark Yukawa coupling, an important avenue for studying electroweak symmetry breaking. With this goal, we consider $pp \to tHq$ production at the LHC. Identifying the relevant dimension-6 EFT operators connected with this process, we determine the accessible range of the corresponding Wilson coefficients, constrained by the latest LHC measurements that are directly sensitive to these operators. We develop a strategy for constraining these operators and obtain best-fit values. Since these EFT operators modify the vertices, their effects can be observed in various kinematic distributions, in particular in the tails; signatures of these operators are expected to appear as an excess of events at the high end of certain kinematic variables. We discuss and devise a strategy to look for the signatures of the considered set of operators at the LHC with high-luminosity options of 300 fb$^{-1}$ and 3000 fb$^{-1}$. Bin-wise significances are also presented. We will also show new results that are not included in the version submitted to arXiv (2210.05503); the work is currently under review in JHEP.
Duration: 12'+3'
We assess the impact of top quark production at the LHC on global analyses of parton distributions (PDFs) and of Wilson coefficients in the SMEFT, both separately and in the framework of a joint interpretation. We consider the broadest top quark dataset to date containing all available measurements based on the full Run II luminosity. First, we determine the constraints that this dataset provides on the large-x gluon PDF and study its consistency with other gluon-sensitive measurements. Second, we carry out a SMEFT interpretation of the same dataset using state-of-the-art SM and EFT theory calculations, resulting in bounds on 25 Wilson coefficients modifying top quark interactions. Subsequently, we integrate the two analyses within the SIMUnet approach to realise a simultaneous determination of the SMEFT PDFs and the EFT coefficients and identify regions in the parameter space where their interplay is most phenomenologically relevant. We also demonstrate how to separate eventual BSM signals from QCD effects in the interpretation of top quark measurements at the LHC.
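As a schematic of how a bound on an individual Wilson coefficient emerges from such data, the toy scan below confronts one hypothetical cross-section measurement with a quadratic EFT parametrisation and reads off an approximate 95% CL interval. The numbers, the single-coefficient treatment and the neglect of PDF correlations are illustrative assumptions only; they bear no relation to the actual 25-coefficient, PDF-correlated fit described above.

```python
import numpy as np

# Toy one-parameter EFT bound (all values hypothetical):
#   sigma(c) = sigma_SM * (1 + a*c + b*c^2)
sigma_sm, a, b = 830.0, 0.05, 0.004        # pb; assumed linear and quadratic terms
sigma_meas, sigma_err = 815.0, 25.0        # hypothetical measurement and uncertainty (pb)

c = np.linspace(-15.0, 15.0, 3001)
chi2 = ((sigma_sm * (1 + a * c + b * c**2) - sigma_meas) / sigma_err) ** 2

c_best = c[np.argmin(chi2)]
allowed = c[chi2 - chi2.min() < 3.84]      # Delta(chi2) < 3.84 ~ 95% CL for 1 d.o.f.
print(f"best fit c = {c_best:.2f}; 95% CL interval ~ [{allowed.min():.2f}, {allowed.max():.2f}]")
```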
Duration: 12'+3'
Recasting phenomenological Lagrangians in terms of SM effective field theory (SMEFT) provides a valuable means of connecting potential BSM physics at momenta well above the electroweak scale to experimental signatures at lower energies. In this work we jointly fit the Wilson coefficients of SMEFT operators as well as the PDFs using jet and top quark pair data, obtaining self-consistent constraints to possible BSM physics effects. Global fits are boosted with machine-learning techniques in the form of neural networks to ensure efficient scans of the full PDF+SMEFT parameter space. We focus on several operators relevant for top-quark pair and jet production at hadron colliders and obtain constraints on the Wilson coefficients. We find mild correlations between the extracted Wilson coefficients, PDFs, and other QCD parameters, and see indications that these correlations may become more prominent in future analyses based on data of higher precision.
Duration: 15'+5'
We present a global analysis of Beauty, Top, Z, and Drell-Yan measurements within the framework of the Standard Model effective field theory (SMEFT). We use the minimal flavor violation (MFV) assumption and perform a combined analysis of up to 14 Wilson coefficients. We demonstrate that the combination of measurements from different sectors yields stronger constraints on the SMEFT coefficients than the individual fits and emphasize how synergies in the global approach allow probing scales as high as 18 TeV. Based on the constraints on the Wilson coefficients obtained in our analysis, we predict the dineutrino branching ratios ${\cal{B}}(B^0 \to K^{* 0} \nu \bar\nu)$ and ${\cal{B}}(B^+ \to K^+ \nu \bar\nu)$ and discuss how future measurements of these observables, when included in the global fit, could provide new information in the search for physics beyond the Standard Model.
Duration: 15'+5'
Top quarks, and heavy quarks in general, are likely messengers to new physics. The scrutiny of these particles' properties must be complemented by the measurement of electroweak $q\bar{q}$ production at high energies, in particular for the top quark. Projects such as the International Linear Collider will offer the extremely favorable, low-background environment of $e^+e^-$ annihilation together with high energy reach.
This talk will review the opportunities for precision measurements of the top quark (and other heavy quark) properties at the International Linear Collider. These include the archival measurement of the top quark mass, the search for beyond-Standard-Model contributions to the top quark electroweak form factors, the search for CP violation in the top quark couplings, as well as the experimental challenges behind all of this.
Duration: 15'+5'
As the heaviest particle of the Standard Model, with a mass close to the electroweak scale, the top quark is an interesting candidate to look for hints of new physics. The electroweak couplings of the top quark are especially relevant in many extensions of the Standard Model. Thanks to the data from the Large Hadron Collider, these couplings are currently being studied with high precision in different processes. In order to analyse whether there is still room for new physics in the electroweak couplings of the top quark, we perform a global fit to these couplings. Following the Standard Model Effective Field Theory formalism, we constrain the Wilson coefficients of the dimension-six operators that affect the top quark electroweak couplings. In this work we consider, for the first time, the QCD corrections at NLO for most of the processes included. Furthermore, we have included recently measured processes, such as $tZq$ and $t\gamma q$, and the first differential measurements in $t\bar{t}Z$ and $t\bar{t}\gamma$ production. A special effort is made to understand the uncertainties due to the truncation of the EFT expansion and due to the poorly known correlations among measurements. The results of the fit to LHC Run 2 data are compared to prospects for future Higgs/electroweak/top factory lepton colliders. We present bounds on the relevant operator coefficients based on current data and on future prospects. Work based on JHEP02(2022)032, arXiv:2205.02140 and arXiv:2206.08326.
Conveners:
Thomas Blake (University of Warwick)
Marzia Bordone (CERN)
Thibaud Humair (MPP)
Contact: eps23-conveners-t08 @desy.de
The rare $B \to K^{(*)} \bar{\ell} \ell$ decays exhibit a long-standing tension with Standard Model (SM) predictions, which can be attributed to a lepton-universal short-distance $b \to s \bar{\ell} \ell$ interaction. We present two novel methods to disentangle this effect from long-distance dynamics: one based on the determination of the inclusive $b \to s \bar{\ell} \ell$ rate at high dilepton invariant mass ($q^2\geq 15~{\rm GeV}^2$), the other based on the analysis of the $q^2$ spectrum of the exclusive mode $B \to K \bar{\ell} \ell$ (in the entire $q^2$ range).
Using the first method, we show that the SM prediction for the inclusive $b \to s \bar{\ell} \ell$ rate at high dilepton invariant mass is in good agreement with the result obtained summing the SM predictions for one- and two-body modes ($K$, $K^*$, $K\pi$). This observation allows us to perform a direct comparison of the inclusive $b \to s \bar{\ell} \ell$ rate with data. This comparison shows a significant deficit ($\sim 2\sigma$) in the data, fully compatible with the deficit observed at low-$q^2$ on the exclusive modes. This provides independent evidence of an anomalous $b \to s \bar{\ell} \ell$ short-distance interaction, free from uncertainties on the hadronic form factors.
To test the short-distance nature of this effect we use a second method, where we analyze the exclusive $B \to K \bar{\ell} \ell$ differential branching ratio data in the entire $q^2$ region. Here, after using a dispersive parametrization of the narrow charmonium resonances, we extract the non-SM contribution to the universal Wilson coefficient $C_9$ for every bin in $q^2$. The $q^2$-independence of the result, and its compatibility with the inclusive determination, provide a consistency check of the short-distance nature of this effect.
The idea of partial compositeness (PC) in Composite Higgs models offers an attractive means to explain the flavor hierarchies observed in nature. In this talk, predictions of a minimal UV realization of PC, considering each Standard-Model (SM) fermion to mix linearly with a bound state consisting of a new scalar and a new fermion, are presented, taking into account the dynamical emergence of the composites. Employing the non-perturbative functional renormalization group, the scaling of the relevant correlation functions is examined and the resulting SM-fermion mass spectrum is analyzed.
Rare B-hadron decays mediated by $b \to s \ell\ell$ transitions provide a sensitive test of Lepton Flavour Universality (LFU), a symmetry of the Standard Model by which the couplings of the electroweak gauge bosons to leptons are flavour universal. Extensions of the SM do not necessarily preserve this symmetry and may give sizable contributions to these processes. Precise measurements of LFU ratios are, therefore, an extremely sensitive probe for New Physics. Likewise, breaking of LFU can result in lepton-flavour-violating decays of the form $b \to s \ell \ell'$. This talk summarizes recent measurements of Lepton Flavour Universality at LHCb, as well as searches for lepton-flavour-violating decays.
Rare B decays at LHCb, such as those mediated via $b \to s\mu\mu$ transitions, offer sensitivity to heavy New Physics particles. Over the last decade, a pattern of tensions between measured $b \to s\mu\mu$ decays and their Standard Model predictions have emerged. These tensions appear in both branching fraction ratios and angular analyses. This talk will give an overview of recent $b \to s\mu\mu$ measurements, performed with data collected by the LHCb experiment, and outline future prospects.
Recent CMS results on rare decays involving the FCNC transition $b \to s \ell\ell$ are reported. The analyses are based on proton-proton collision data collected at $\sqrt{s} = 13$ TeV.
Decays of B mesons that proceed through radiative penguin amplitudes probe a large class of generic non-SM models for which Belle II has unique reach. We present recent results from an inclusive $b \to s \gamma$ analysis and a $B \to \rho \gamma$ analysis. In addition, we report results on $b \to s \ell^+ \ell^-$ decays of B mesons, which proceed through electroweak penguin amplitudes and offer multiple probes of non-SM physics.
The baryon asymmetry of the Universe (BAU) provides unambiguous evidence for the need for New Physics (NP). In this context, a general two-Higgs-doublet model (g2HDM) without $Z_2$ symmetry is appealing because it can explain the BAU via electroweak baryogenesis (EWBG), while the scenario can be tested in direct searches as well as in low-energy precision measurements of flavor observables. We discuss the electric dipole moment (EDM) of the electron and $b \to s\gamma$ as probes of EWBG scenarios. Consistency with the current electron EDM bound demands cancellation between different NP contributions, hinting at NP extra Yukawa couplings with a hierarchical structure similar to the Standard Model. We point out that $b \to s\gamma$ provides an independent crucial bound on EWBG, as the corresponding NP effects are chirally enhanced. In particular, we show that projected improvements in measurements of $b \to s\gamma$ CP asymmetries at Belle II, in conjunction with the electron EDM, will provide a key probe of the parameter space of the extra top and bottom Yukawa couplings.
We analyze the implications of current $b \to s \ell \ell$ ($\ell=e, \, \mu$) measurements on several $B \to K^* \tau^+ \tau^-$ observables under the assumption that the possible new physics can have both universal as well as nonuniversal couplings to leptons. For these new physics solutions, we intend to identify observables with large deviations from the Standard Model (SM) predictions as well as to discriminate between various new physics scenarios. For this we consider the $B \to K^* \tau^+ \tau^-$ branching fraction, the $K^*$ longitudinal fraction, the tau forward-backward asymmetry and the optimized angular observables. Further, we construct the $\tau - \mu$ lepton-flavor differences ($Q_{\tau \mu}$) between these tau observables and their muonic counterparts in $B \to K^* \mu^+ \mu^-$ decay along with the lepton-flavor ratios ($R_{\tau \mu}$) of all of these observables. We find that the current data allows for deviations ranging from 25% up to an order of magnitude from the SM value in a number of observables. A precise measurement of these observables can also discriminate between a number of new physics solutions.
Conveners:
Ilaria Brivio (Universita di Bologna)
Karsten Köneke (Universität Freiburg)
Matthias Schröder (Universität Hamburg)
Contact: eps23-conveners-t09 @desy.de
Higgs physics with ILC
speaker TBA
A technologically mature accelerator and detector design and a well-understood physics program make the ILC a realistic option for the realization of a future Higgs factory. Energy-staged data collection, the employment of beam polarization, and the capability to reach TeV center-of-mass energies enable unique sensitivity to New Physics deviations from Standard Model predictions in the Higgs sector and beyond. Coupling precisions of the order of 1% or better are necessary to pin down a concrete New Physics model. The measurement of the Higgs self-coupling, a shaping parameter of the Higgs potential, will benefit from the accessibility of high-energy scales (500 GeV and above). The flexibility to operate from the Z-pole up to the TeV scale enables the CP properties of the Higgs boson to be probed in numerous production and decay vertices. These and other ILC measurements will be highlighted in this talk.
The Higgs mechanism is a central part of the Standard Model which has not yet been fully established experimentally without the measurement of the Higgs self-coupling. Future linear $e^+e^-$ colliders are able to access center-of-mass energies of 500 GeV and beyond and can therefore probe the Higgs self-coupling directly through the measurement of double Higgs production. A new analysis of the capability to measure the Higgs self-coupling at ILC500 is ongoing and has identified aspects concerning the reconstruction techniques to fully exploit the detector potential, which are expected to improve precision reach and will be presented in this contribution. Additionally, the requirements that the Higgs self-coupling measurement puts on the choice of center-of-mass energy will be evaluated as this is important for shaping the landscape of future colliders such as ILC or C$^3$.
We study a possible CP-violation effect in the Higgs to Z boson coupling at a future $e^+e^-$ collider. We find that the azimuthal angular distribution of the muon produced in $e^+e^- \rightarrow HZ \rightarrow H\mu^-\mu^+$ can be sensitive to such a CP-violation effect when initial transversely polarized beams are applied. Based on this angular distribution, we construct a CP-sensitive asymmetry and obtain it from a Whizard simulation. By comparing the SM prediction with the $2\sigma$ range of this asymmetry, we estimate the discovery limit of the CP-odd coupling in the HZZ interaction, which is $\tilde{c}_{AZZ}\in[-0.1, 0.1]$ for $500~\mathrm{fb}^{-1}$ and $\tilde{c}_{AZZ}\in[-0.05, 0.05]$ for $2000~\mathrm{fb}^{-1}$.
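A generic counting asymmetry of the kind referred to above can be written, purely for illustration (the specific CP-odd observable used in the study is the authors' own construction), as
$A_{CP} = \dfrac{\sigma(\mathcal{O} > 0) - \sigma(\mathcal{O} < 0)}{\sigma(\mathcal{O} > 0) + \sigma(\mathcal{O} < 0)}$,
where $\mathcal{O}$ is a CP-odd function of the muon azimuthal angle measured with respect to the transverse beam polarization; a deviation of this asymmetry from its SM expectation would point to a nonzero CP-odd coupling such as $\tilde{c}_{AZZ}$.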
Although studies of the tensor structure of the Higgs boson interactions with vector bosons and fermions at the CMS and ATLAS experiments have established that the $J^{PC}$ quantum numbers of the Higgs boson should be $0^{++}$, a small CP-violating admixture in the Higgs sector (i.e. less than a 10% contribution of the CP-odd state) cannot be excluded with the current experimental precision. We review the possibility of measuring the CP-violating mixing angle between the scalar and pseudoscalar states of an extended Higgs sector at the 1 TeV ILC with the ILD detector.
The European Strategy for Particle Physics has identified an e$^+$e$^-$ Higgs factory as its top priority for a post-LHC collider, as a first step towards a future hadron collider at the energy frontier. Precision measurements and searches for new phenomena in the Higgs sector are among the most important goals in particle physics. Electron-positron collisions at the CERN Future Circular Collider (FCC-ee) at $\sqrt{s}=240$, 365, and 125 GeV will provide the ultimate precision on model-independent Higgs boson couplings, mass, total width, and CP parameters, as well as searches for exotic and invisible decays. Very high energy proton-proton collisions (up to 100 TeV) at the FCC-hh will produce a data sample of $10^{10}$ scalar bosons allowing further studying the Higgs self-coupling and rare ($H\to \gamma\gamma, \mu\mu, 4\ell$,...) decays. There is a remarkable complementarity of the FCC-ee and FCC-hh colliders, which in combination offer the best possible overall study of the Higgs boson properties.
The hadron collider phase of the Future Circular Collider (FCC-hh) is a proton-proton collider operating at a center-of-mass energy of 100 TeV. It is one of the most ambitious projects planned for the rest of this century and offers ample opportunities in the hunt for new physics, both through its direct detection reach as well as through indirect evidence from precision measurements.
Extracting a precision measurement of the Higgs self-coupling from the Higgs pair production cross-section will play a key role in our understanding of electroweak symmetry breaking, as the self-coupling gives insight into the nature of the Higgs potential. With the large dataset of 30 $\text{ab}^{-1}$ which is envisioned to be collected during the FCC-hh runtime the Higgs self-coupling will be determined down to the percent level.
This talk presents recent studies of di-Higgs measurements in various final states (e.g. $bb\gamma\gamma$, $bb\tau\tau$, $bbWW$) and their combination. More challenging final states, such as $bbllE_{T}^{\text{miss}}$ are explored for their potential at the FCC-hh for the first time. Updates to the parametrization of detector scenarios for a project so far ahead in the future are discussed as well.
The Large Hadron-electron Collider and the Future Circular Collider in electron-hadron mode [1] will make possible the study of DIS in the TeV regime providing electron-proton collisions with instantaneous luminosities of $10^{34}$ cm$^{−2}$s$^{−1}$. With a charged current cross section around 200 (1000) fb at the LHeC (FCC-eh), Higgs bosons will be produced abundantly. We examine the opportunities for studying several of its couplings, particularly $H\to b\bar b$, $H\to c\bar c$, $H\to WW$, and Higgs to invisible. We also discuss the possibilities to measure anomalous Higgs couplings, and the implications of precise parton densities measured in DIS on Higgs physics. We finally address the complementarity in measuring Higgs couplings between the LHeC and the FCC-he and the respective hadronic colliders, the HL-LHC and the FCC-hh, and $e^+e^-$ Higgs factories, but will also emphasise the gain in accuracy achievable by combining results between those colliders.
[1] LHeC Collaboration and FCC-he Study Group: P. Agostini et al., J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
Muon collisions at multi-TeV center-of-mass energies provide an ideal environment for studying Higgs boson properties. At these energies the high production rates will allow precise measurements of its couplings to fermions and bosons. Moreover, in such collisions it will be possible to study the Higgs potential by measuring the double Higgs production cross section and determining the trilinear self-coupling. The studies proposed for this contribution have been performed with a detailed simulation of the signal and physics background samples and by evaluating the effects of the beam-induced background on the detector performance.
A global fit of the expected results on all Higgs boson measurements will be also presented to demonstrate the impact of the muon collider program.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10 @desy.de
We present the results from searches for SUSY signatures produced via the electroweak interaction. All searches use proton-proton collision data at a centre-of-mass energy of 13 TeV, recorded with the CMS detector during Run II of the LHC operations. The analyzed data correspond to an integrated luminosity of 137/fb. The results are interpreted within simplified models of electroweakino or slepton pair production and they are consistent with expectations from the Standard Model. We also present a combination of these results, which provides a more comprehensive coverage of the model parameter space than the individual searches and adds sensitivity in the compressed mass parameter regions.
The direct production of electroweak SUSY particles, including sleptons, charginos, and neutralinos, is a particularly interesting area with connections to dark matter and the naturalness of the Higgs mass. The small production cross-sections and challenging experimental signatures lead to difficult searches. This talk will highlight the most recent results of searches performed by the ATLAS experiment for supersymmetric particles produced via electroweak processes, including analyses targeting small mass splittings between SUSY particles.
Supersymmetry (SUSY) provides elegant solutions to several problems in the Standard Model, and searches for SUSY particles are an important component of the LHC physics program. Naturalness arguments favour supersymmetric partners of the gluons and third-generation quarks with masses light enough to be produced at the LHC. This talk will present the latest results of searches conducted by the ATLAS experiment which target gluino and squark production, including stop and sbottom production, in a variety of decay modes within R-parity-conserving (RPC) SUSY.
There have been many developments in the identification of hadronically decaying heavy objects through large-size jets, and of tau leptons, using machine learning techniques. These techniques have revolutionized searches for supersymmetry at the LHC. In this talk, recent searches for supersymmetry using heavy-object tagging and tau leptons will be presented. The results are obtained from the proton-proton collision data at a center-of-mass energy of 13 TeV collected during LHC Run 2.
The latest results of searches for supersymmetry in photonic final states with the CMS experiment will be presented. The analyses are based on the full dataset of proton-proton collisions collected during the Run 2 of the LHC at a center-of-mass energy of 13 TeV. The results are interpreted in models including the stealth SUSY models and gauge mediated supersymmetry breaking models.
Supersymmetry (SUSY) provides elegant solutions to several problems in the Standard Model, and searches for SUSY particles are an important component of the LHC physics program. With increasing mass bounds on MSSM scenarios, other non-minimal variations of supersymmetry become increasingly interesting. This talk will present the latest results of searches conducted by the ATLAS experiment targeting strong and electroweak production in R-parity-violating models, as well as stop production in non-minimal-flavour-violating models.
Results from the CMS experiment are presented for supersymmetry searches targeting so-called compressed spectra, with small mass splittings between the different supersymmetric partners. Such a spectrum presents unique experimental challenges. This talk describes the new techniques utilized by CMS to address such difficult scenarios. The searches use proton-proton collision data at the center of mass energy of 13 TeV collected during the LHC Run 2.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10 @desy.de
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) features a sophisticated two-level triggering system composed of the Level-1 (L1) trigger, instrumented by custom-design hardware boards, and the High-Level Trigger (HLT), a software-based trigger that uses the complete event information and full detector resolution. The CMS L1 trigger relies on separate calorimeter and muon trigger systems that provide jet, e/γ, τ, and muon candidates along with calculations of energy sums to the Global Trigger, where selections are made based on the candidate kinematics. During its second run of operation, the L1 trigger hardware was entirely upgraded to handle proton-proton collisions at a center-of-mass energy of 13.6 TeV with a peak instantaneous luminosity of $2.2 \cdot 10^{34}$ cm$^{-2}$s$^{-1}$, more than double the design luminosity of the machine. For Run 3 of the LHC, an optimized and expanded Level-1 and HLT trigger physics menu has been developed to meet the requirements of the ambitious CMS physics program. A wide range of measurements and searches will profit from the new features and strategies implemented in the trigger system. Dedicated variables and non-standard trigger techniques targeting long-lived particle searches and other unconventional physics signatures have been developed. Moreover, the implementation of new kinematic computations at the trigger level will improve b-physics measurements and resonance searches. This talk will present these new features along with their performance measured in Run 3 of the LHC.
Various theories beyond the Standard Model predict new, long-lived particles with unique signatures which are difficult to reconstruct and for which estimating the background rates is also a challenge. Signatures from displaced and/or delayed decays anywhere from the inner detector to the muon spectrometer, as well as those of new particles with fractional or multiple values of the charge of the electron or high mass stable charged particles are all examples of experimentally demanding signatures. The talk will focus on the most recent results using 13 TeV pp collision data collected by the ATLAS detector.
Many models beyond the Standard Model predict new particles with long lifetimes, such that the position of their decay is measurably displaced from their production vertex, as well as particles giving rise to other non-conventional signatures. We present recent results of searches for long-lived particles and other non-conventional signatures obtained using data recorded by the CMS experiment during Run 2 of the LHC. Prospects for Run 3 will also be presented.
The High Luminosity LHC will be a tremendous opportunity to search for long-lived particles (LLPs) from an extended hidden/dark sector, feebly connected to the known SM sector. Such LLP searches will require special detectors that are shielded against SM backgrounds, and are therefore displaced from the proton-proton collision point. The CODEX-b detector, planned to be placed behind a thick shielding wall inside the LHCb cavern, around 25m from the LHCb interaction point, provides an opportunistic solution. On the journey to the construction of the full detector, a demonstrator (CODEX-𝛽) is foreseen for installation and operation during LHC Run 3. This talk will discuss the general principles of the CODEX-b detector, its prospects, and the progress being made to install the demonstrator unit.
SHADOWS (Search for Hidden And Dark Objects With the SPS) is a proposed proton beam-dump experiment for the search for a large variety of feebly interacting particles (FIPs) at the CERN SPS. It will exploit the potential for searches and discoveries at the intensity frontier offered by the upgrade of the ECN3 beam line.
SHADOWS will be located off-axis, which allows the optimisation of the S/B ratio, and will collect data from up to 5x10$^{19}$ protons of 400 GeV on target in 4 years of operation. The conceived detector, with a transverse size of 2.5x2.5 m$^2$ and a length of 34 m, offers excellent tracking and timing performance for the identification and reconstruction of most of the visible final states of FIP decays.
SHADOWS will allow the exploration of a large parameter-space region of many FIPs, such as light dark scalars, axion-like particles and heavy neutral leptons, with masses ranging between 0.1 and 10 GeV. It may be complemented by a dedicated neutrino experiment (NaNu@SHADOWS) to further extend the physics reach, in particular for the study of tau-neutrino events.
After a general introduction on the current status of FIP searches, the talk will describe the SHADOWS detector concept and will focus on the physics programme of the experiment.
Hidden sectors can help explain many important hints for new physics, but the large variety of viable models is a challenge for the model-independent interpretation of experimental searches for light hidden particles. Standard techniques such as simplified models or effective field theories (EFTs) typically envision minimalist hidden sectors with only a single new particle, and it is not always clear how to map the resulting constraints onto realistic Standard Model (SM) extensions. Extending the EFT approach, we present techniques published in 2105.06477 and 2203.02229 that help interface these minimalist EFTs with realistic hidden sectors, and show how to streamline the computation of hidden particle production rates by factorizing them into i) a model-independent SM contribution, and ii) an observable-independent hidden-sector contribution.
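Schematically, and purely as an illustration of the factorization described above (the notation here is ours and is not taken from the cited papers), the production rate of a hidden-sector state $X$ can be written as
$\sigma(\mathrm{SM} \to X + \mathrm{SM}) \simeq \sum_{\mathcal{O}} S_{\mathcal{O}}(\text{kinematics}) \times c_{\mathcal{O}}\, H_{\mathcal{O}}(m_X, g_X)$,
where the sum runs over the effective operators $\mathcal{O}$ connecting the two sectors, $S_{\mathcal{O}}$ is the model-independent SM piece depending only on the observable, and $c_{\mathcal{O}} H_{\mathcal{O}}$ encodes the hidden-sector couplings and masses independently of the observable.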
The idea that new physics could take the form of feebly interacting particles (FIPs) - particles with a mass below the electroweak scale, but which may have evaded detection due to their tiny couplings or very long lifetimes - has gained a lot of traction in the last decade, and numerous experiments have been proposed to search for such particles. It is important, and now very timely, to consistently compare the potential of these experiments for exploring the parameter space of various well-motivated FIPs. In the talk, I address this pressing issue by presenting an open-source tool to estimate the sensitivity of many experiments - located at Fermilab or at the CERN SPS, LHC, and FCC-hh - to various models of FIPs in a unified way: the Mathematica-based code SensCalc.
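As a rough illustration of the type of estimate such sensitivity tools perform (this sketch is ours; it does not reproduce the SensCalc interface, its geometry handling, or its model libraries), the expected number of visible FIP decays is often approximated as the production yield times the probability to decay inside the fiducial volume times the reconstruction acceptance:

    import numpy as np

    def expected_fip_events(n_pot, yield_per_pot, ctau_m, gamma_beta,
                            z_min_m, z_max_m, acceptance):
        # n_pot: protons on target; yield_per_pot: FIPs produced per proton (model dependent)
        # ctau_m: proper decay length c*tau [m]; gamma_beta: average boost of the produced FIPs
        # z_min_m, z_max_m: start/end of the decay volume along the beam axis [m]
        # acceptance: geometric times reconstruction efficiency for the decay products
        lam = gamma_beta * ctau_m                                   # lab-frame decay length
        p_decay = np.exp(-z_min_m / lam) - np.exp(-z_max_m / lam)  # decay inside the volume
        return n_pot * yield_per_pot * p_decay * acceptance

    # purely hypothetical numbers, loosely inspired by an SPS beam-dump geometry
    print(expected_fip_events(n_pot=5e19, yield_per_pot=1e-10, ctau_m=50.0,
                              gamma_beta=20.0, z_min_m=10.0, z_max_m=44.0,
                              acceptance=0.1))

Scanning such an estimate over the FIP mass and coupling, which control both the yield and the lifetime, is what produces the familiar sensitivity contours in the FIP parameter space.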
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12 @desy.de
Quantum entanglement is one of the fundamental correlations between particles that has not yet been confirmed with high-energy photons. Quantum electrodynamics (QED) predicts that annihilation photons produced by the decay of the singlet state of positronium (Ps) atoms are entangled in their polarization [1]. Since these photons have an energy of 511 keV, no polarizer is available to measure their polarization directly. However, the direction of photons scattered off electrons via the Compton process depends strongly on the polarization of the incident photon [2]. Therefore, Compton scattering can be used as a polarization analyser. The polarization direction of the photon can be defined as the cross product of the momentum vectors of the incident and scattered photon [3]. By measuring the polarization of annihilation photons originating from Ps decays, their polarization correlation can be determined. Measuring this correlation for the first time will address the open question of quantum entanglement of high-energy photons [4]. It also has applications in PET imaging [5,6], where the quality of image reconstruction depends directly on the selection of pure annihilation photons.
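For reference (a standard textbook result, quoted here for orientation rather than taken from the J-PET analysis), the Klein-Nishina cross section for a linearly polarized incident photon reads
$\frac{d\sigma}{d\Omega} = \frac{r_e^2}{2}\left(\frac{E'}{E}\right)^2\left(\frac{E'}{E} + \frac{E}{E'} - 2\sin^2\theta\cos^2\phi\right)$,
where $\theta$ is the Compton scattering angle and $\phi$ the angle between the scattering plane and the polarization direction of the incident photon; the $\cos^2\phi$ term is what makes the scattering direction usable as a polarization analyser.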
J-PET is a multi-strip detector based on plastic scintillators that has the potential to measure such a correlation over the whole phase space [7,8]. Photons interact with plastic mainly via Compton scattering, which makes J-PET particularly suitable for this type of study. In this presentation, the key features of the J-PET detector, the analysis scheme and the preliminary results of the study will be presented.
References:
[1] H. S. Snyder, S. Pasternack and J. Hornbostel, Phys. Rev. 73, 440 (1948)
[2] O. Klein and Y. Nishina, Z. Physik 52, 853 (1929)
[3] P. Moskal et al., Acta Phys. Polon. B 47, 509 (2016)
[4] B. C. Hiesmayr and P. Moskal, Sci. Rep. 7, 15349 (2017)
[5] D. P. Watts et al., Nature Commun. 12, 2646 (2021)
[6] S. D. Bass et al., Rev. Mod. Phys. 95, 021002 (2023)
[7] S. Niedzwiecki et al., Acta Phys. Polon. B 48, 1567 (2017)
[8] P. Moskal et al., Eur. Phys. J. C 78, 970 (2018)
In the high-luminosity phase of the LHC (HL-LHC) the collider will operate at an instantaneous luminosity of $1.5\times10^{34}$ cm$^{-2}$s$^{-1}$. This poses stringent requirements on the capabilities of the subdetectors due to the increased particle multiplicity and hit occupancy. The Upgrade II LHCb RICH (Ring-Imaging Cherenkov) subsystem, in particular, will require improvements in spatial and time resolution to maintain good particle-identification performance in the HL-LHC environment.
To address these requirements, an improvement of the readout electronics is planned during the Long Shutdown 3 (LS3) phase, from 2026 to 2029. The goal is to provide hit timestamps with an accuracy of order 100 ps from Run 4 (2029-2032) onwards, in parallel with the development of novel sensors capable of sub-100 ps time resolution for Run 5 (2035-2038). The LS3 enhancements foresee the use of the FastRICH, a 65-nm CMOS chip under development by ICCUB-CERN. In my talk, a prototype opto-electronic chain with fast-timing Cherenkov photon hit detection is presented as a proof-of-principle for the future RICH detectors. This readout is equipped with the FastIC, which has a wide input-signal dynamic range similar to that of the FastRICH.
In order to evaluate the time resolution of the prototype photo-detection chain equipped with the FastIC chip, beam test campaigns were conducted in 2021 and 2022 at the CERN SPS charged-particle beam facility with 180 GeV/c protons and pions. The beam test setup involved a borosilicate lens placed upstream of the sensors in order to generate Cherenkov photons and focus the Cherenkov ring onto the sensor plane. The sensors used in the tests included the 1-inch and 2-inch Multi-Anode Photomultipliers currently used in the LHCb RICH, as well as a Silicon Photomultiplier matrix. Each sensor was accompanied by a dedicated front-end board and a digital board (DB) for signal extraction. The DB incorporated a custom Time-to-Digital Converter implemented in an FPGA with an average bin width of 150 ps. The trigger for the setup was provided by a crossed pair of scintillators combined with a Micro-Channel Plate Photomultiplier, which served as the time reference for the system.
To assess the time resolution of the prototype, specific time reconstruction algorithms were developed. The results show that the estimated Single Photon Time Resolution of the 1-inch and 2-inch MaPMTs, considering all sources of jitter, is compatible with the expected MaPMT transit-time-spread of approximately 150 ps. Additionally, the measurements obtained from the beam test were validated through dedicated laser measurements conducted in the laboratory.
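As a reminder of how such single-photon time resolution figures are usually decomposed (standard error propagation, not a detail taken from this specific analysis), the independent jitter contributions add in quadrature,
$\sigma_{\rm meas}^2 \simeq \sigma_{\rm TTS}^2 + \sigma_{\rm FE}^2 + \sigma_{\rm TDC}^2 + \sigma_{\rm ref}^2$, with $\sigma_{\rm TDC} = {\rm LSB}/\sqrt{12} \approx 150~{\rm ps}/\sqrt{12} \approx 43~{\rm ps}$,
so a 150 ps TDC bin contributes only about 43 ps of quantization jitter and the measured resolution remains dominated by the MaPMT transit-time spread.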
To further validate the beam test conditions and their effects on time resolution, a comprehensive GEANT4 simulation was performed. This allowed for a detailed study of the beam effects on the time resolution and a comparison between the expected and observed photon yields.
In this talk the setup for a prototype photodetector module for future fast-timing RICH detectors is presented, with a focus on the final results coming from the timing analysis and on the corrections applied to account for the time-walk effect.
Research in non-perturbative QED in strong-field backgrounds has gained interest in recent years, due to advances in high-intensity laser technologies that make extreme fields accessible in the laboratory. One key signature of strong-field QED is non-linear Compton scattering in collisions between a relativistic electron beam and a high-intensity laser pulse. In the vicinity of strong fields, the electron gains a larger effective mass, which leads to a laser-intensity-dependent shift of the kinematic Compton edge and the appearance of higher-order harmonics in the energy spectrum. One of the challenges of measuring the Compton energy spectrum in laser-electron-beam collisions is the enormous flux of outgoing Compton-scattered electrons and photons, ranging from $10^3$ to $10^9$ particles per collision. We present a combined detector system for high-rate Compton electron detection in the context of the planned LUXE experiment, consisting of a spatially segmented gas-filled Cherenkov detector and a scintillator screen imaged by an optical camera system. The detectors are placed in a forward dipole spectrometer to resolve the electron energy spectrum. Finally, we discuss techniques to reconstruct the non-linear Compton electron energy spectrum from the high-rate electron detection system and to extract the features of non-perturbative QED from the spectrum.
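For orientation (a textbook strong-field QED relation rather than a result of this work), the intensity-dependent effective mass is commonly written as $m_* = m_e\sqrt{1+\xi^2}$, with $\xi$ the (appropriately averaged) dimensionless laser intensity parameter; as $\xi$ grows, the first Compton edge shifts away from its linear-Compton position and higher harmonics, corresponding to the absorption of several laser photons, appear in the electron and photon spectra.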
SiPMs are the baseline photodetector technology for the dual-radiator Ring-Imaging Cherenkov (dRICH) detector of the EPIC experiment at the future Electron-Ion Collider (EIC). SiPMs offer significant advantages, being cheap, highly efficient and insensitive to the high magnetic field (~ 1.5 T) expected at the location of the sensors in the experiment. However, they are not radiation tolerant, and one has to test whether the increase in Dark Count Rate (DCR) can be mitigated to maintain single-photon performance with current SiPM technology in a moderately hostile (< 10$^{11}$ 1-MeV n$_{\rm eq}$/cm$^{2}$) radiation environment. Several options are available to keep the DCR at an acceptable rate (below ~ 100 kHz/mm$^{2}$): reducing the SiPM operating temperature and recovering the radiation damage with high-temperature annealing cycles. Moreover, by utilising high-precision TDC electronics, the timing information can be used to effectively reduce the background due to DCR.
In this talk we present the current status of the R&D and the results of studies performed on significant samples of commercial and prototype SiPM sensors. The devices have undergone proton irradiation in two campaigns aimed at studying the device performance with increasing NIEL doses up to 10$^{11}$ 1-MeV n$_{\rm eq}$/cm$^{2}$, the device recovery with long high-temperature annealing cycles, and the reproducibility of the performance in repeated irradiation-annealing cycles. The use of Joule annealing as a potential way to perform high-temperature annealing in situ was also explored. In October 2022 the sensors were mounted inside the dRICH detector prototype and successfully tested with particle beams at the CERN PS accelerator. The results are obtained with a complete chain of front-end and readout electronics based on the first 32-channel prototypes of the ALCOR chip, a new ASIC designed for SiPM readout.
A high-performance muon detector system is crucial to realise the physics goals of the CMS experiment at the LHC. The CMS muon spectrometer, consisting of different detector technologies across different pseudorapidity (η) regions, demonstrated efficient tracking and triggering of muons during Run 1 and Run 2 of LHC operations. The legacy CMS muon detector system consists of drift tube (DT) chambers in the barrel and cathode strip chambers (CSC) in the endcap regions, complemented by Resistive Plate Chambers (RPC) in both the barrel and endcap. During the Long Shutdown (LS) 2 period, Gas Electron Multiplier (GEM) chambers were added in the first station of the forward regions to enhance the redundancy of the muon system while maintaining the precision of the muon momentum resolution at the Level-1 trigger. Several muon system upgrades are planned to withstand the challenging conditions of increased instantaneous luminosity and higher pileup expected during high-luminosity LHC (HL-LHC) operation. The upgrades targeting the detector electronics have been completed for the CSCs during LS2, while those planned for the DTs will be implemented during LS3. Another GEM-based station (GE2/1) as well as new-generation RPC (iRPC) stations (RE3/1 and RE4/1) will be instrumented in the high-η regions. Various detector demonstrators corresponding to these upgrades were installed in the CMS detector during LS2 to test new detection technologies and readout systems during Run 3, with the aim of refining and optimising data-taking parameters and testing the detector components before the final production and installation for future LHC upgrades. The performance studies for all four muon sub-detector systems, carried out using the first dataset collected at a collision energy of 13.6 TeV in 2022, are reported in this presentation. Furthermore, the operational stability and performance evaluation of the DT, GEM and iRPC demonstrators are also presented.
Given the High Luminosity phase of the Large Hadron Collider (HL-LHC), which is expected to deliver an instantaneous luminosity five times higher than the present value, the muon spectrometer of the CMS experiment will undergo specific upgrades targeting both the electronics and the detectors to cope with the new challenging data-taking conditions and to improve the present tracking and triggering capabilities. In the current CMS muon system, different detector technologies have been chosen to optimize its performance: drift tubes (DT) and resistive plate chambers (RPC) are installed in the barrel, complemented by the two endcaps hosting cathode strip chambers (CSC) and RPCs. The upgrade of the electronics will target the present system, based on the DT, the CSC, and the RPC. Most of the CSC electronics upgrade was completed in Long Shutdown 2 (LS2). The electronics upgrade of the DT is planned for LS3, and currently different prototypes of the new on-board electronics for the DT (OBDT) are being tested and validated in CMS in slice-test demonstrators. The detector upgrades concern the deployment of new stations in the endcaps, where the background rate is expected to be higher. These upgrades are based on triple Gas Electron Multiplier (GEM) and improved Resistive Plate Chamber (iRPC) technology, featuring improved time and spatial resolution and enhanced rate capability. During LS2 the GE1/1 station, based on GEM technology, was installed in the endcap region, covering the pseudorapidity range 1.55 < |η| < 2.18. Additionally, the installation of two further GEM stations (GE2/1 and ME0) is foreseen to improve the muon reconstruction in the endcaps and to extend the coverage of the muon system up to |η| ~ 2.8. The presentation will give an overview of the muon spectrometer upgrades, describing the electronics developments for the DT and CSC, and it will provide an overview of the new stations based on triple-GEM (GE2/1, ME0) and iRPC detectors (RE3/1 and RE4/1) that will be installed before Long Shutdown 3.
Particle detectors made of plastic scintillator with three-dimensional granularity and sub-ns time resolution are capable of simultaneous particle tracking and calorimetry. However, large-scale detectors with fine granularity require great effort in manufacturing and assembly, which can be prohibitive: time consuming, expensive and hard to control with the desired precision. The 3DET collaboration recently developed an additive manufacturing technology enabling the large-scale production of optically isolated 3D-segmented scintillating detectors.
In this talk we present the measurement results on the first fully 3D-printed scintillating particle detector prototype. A novel technique was developed to 3D-print both the scintillator and the white reflector, together with mm-diameter holes with sub-0.1 mm tolerance hosting the WLS fibers for the readout, without the need for any post-processing. A light yield comparable to that of conventional plastic scintillator detectors and a light crosstalk of a few percent were obtained. We will present the first comparison of the optical performance between standard scintillator and the 3D-printed prototypes. The possibility of adopting inorganic scintillator was also studied. This work is a milestone, demonstrating for the first time that the 3D printing of particle detectors is getting ready for future particle physics experiments.
Plastic scintillators are widely used in particle physics experiments. Additive manufacturing techniques allow the production of parts with free shapes and, depending on the application, direct integration with other detector components. This opens up new possibilities for the development of, for example, trigger and veto systems or 3D-segmented detectors like high-granularity calorimeters utilizing structured scintillators with diffuse reflective subdivisions. ARBURG Plastic Freeforming (APF) devices feature the processing of several different granulates at the same time including in-line drying, melting points up to 350°C and high-frequency droplet discharging. The usage of granulates to 3D-print plastic scintillators has the advantage that original materials produced without plasticizers or polymerization starters can be used. However, it must be investigated whether the materials degrade under the high process temperatures to which they are exposed. Achieving high transparency and surface quality are further challenges, as with other techniques. Using the APF process, we have 3D-printed scintillator samples made from granulate based on polystyrene. We have used both commercial granulate with POPOP and p-terphenyl wavelength-shifting additives as well as self-made granulate with PPO and bis-MSB. With these samples we have performed several measurements to evaluate their performance with regard to transparency, fluorescence behavior, decay time and light-yield. We present the results by comparison with reference scintillators and polymethylmethacrylate samples.
Conveners:
Tatiana Pieloni (École Polytechnique Fédérale de Lausanne)
Mauro Migliorati (Universita di Roma)
Marc Wenskat (Universität Hamburg)
Contact: eps23-conveners-t13 @desy.de
SuperKEKB is a double-ring collider consisting of a 7-GeV electron ring (HER) and a 4-GeV positron ring (LER) with a circumference of approximately 3 km, constructed by reusing the KEKB tunnel. To further increase the peak luminosity, a "nano-beam scheme" with a large crossing angle is adopted: electrons and positrons collide at a large horizontal crossing angle while maintaining a bunch length about the same as that of KEKB. As a result, the actual collision area can be considerably shorter than the bunch length. This mitigates the hourglass effect that results from colliding over the entire length of the bunch and allows a strong vertical squeeze to increase the luminosity.
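In the usual description of the nano-beam scheme (a standard relation, quoted here only for illustration), the effective length of the overlap region is set by the horizontal beam size $\sigma_x^*$ and the half crossing angle $\phi$ rather than by the bunch length, $L_{\rm eff} \simeq \sigma_x^*/\phi \ll \sigma_z$, so the vertical beta function $\beta_y^*$ can be squeezed to the millimetre scale without being limited by the hourglass effect.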
SuperKEKB was commissioned for 4 months in 2022 and has now entered Long Shutdown 1 (LS1) for approximately 15 months, during which various upgrades are underway. The beam currents were increased gradually, and maxima of 1145 mA and 1460 mA were stored in the HER and LER, respectively. A peak luminosity of $4.7\times10^{34}$ cm$^{-2}$s$^{-1}$ with a vertical beta function ($\beta_y^*$) of 1 mm at the collision point was achieved in June 2022, breaking the previous year's SuperKEKB world record. The recent progress will be presented, and the problems and issues to be overcome will be discussed, along with the upgrade work during LS1 for further improvement of the luminosity performance.
The Circular Electron Positron Collider (CEPC) was proposed by Chinese scientists in September 2012, shortly after the discovery of the Higgs boson at the LHC in July 2012. The CEPC would enable precision studies of the Higgs boson and facilitate the search for new physics beyond the Standard Model. The accelerator design and the technology R&D of the CEPC have been evolving since the launch of the project, balancing machine performance, mastery of the key technologies, and cost and power consumption. The Conceptual Design Report (CDR) was released in 2018 and the Technical Design Report (TDR) will be released in 2023. In this report, we summarize the status of the CEPC accelerator design as well as the development of the key technologies.
The Future Circular Collider electron-positron (FCC-ee) is a proposed high-energy lepton collider that aims to reach unprecedented precision in the measurements of fundamental particles. However, several beam-related processes produce particles in the Machine-Detector Interface (MDI) region, which can adversely affect the measurements' accuracy. This contribution presents a study of the beam-induced backgrounds at the FCC-ee.
The study uses the turnkey software Key4hep to estimate the occupancy levels induced by beam-beam interactions, beam losses due to failure scenarios, and Synchrotron Radiation (SR) in the CLIC-Like Detector (CLD). Dedicated software packages are used to produce the primary particles for each of these processes: GuineaPig++ for the beam-beam interactions, Xsuite for the beam losses from particle transport, and BDSIM for the SR photons.
The goal of a next-generation e+e- collider is to carry out percent-level precision measurements of the Higgs boson properties that are not accessible at the LHC and HL-LHC. In this talk we will present the study of a new concept for a high-gradient, high-power accelerator with beam characteristics suitable for studying the Higgs boson, the Cool Copper Collider (C3), with the goal of significantly reducing capital and operating costs. C3 is based on the latest advances in rf accelerator technology and utilizes optimized cavity geometries, novel rf distribution and operation at cryogenic temperatures to allow the linear accelerator to achieve high accelerating gradients while maintaining overall system efficiency. We will present the latest demonstrated performance of prototype accelerators and highlight the future development path for C3.
The International Linear Collider (ILC) and the Compact Linear Collider (CLIC) are well-developed, mature designs for a next-generation high-energy electron-positron collider exploring the Higgs-boson, top-quark and beyond-Standard-Model sectors. An overview and the status of each collider project will be given, including the design, key technologies, accelerator systems, energy-staging strategies, and the most recent cost and power estimates. An overview of the ongoing development strategy for each project over the next 4-5 years will also be given.
Details of the ILC Technology Network, CLIC X-band technology applications, and sustainability will be reported in complementary talks at the conference.
In 2023, the ILC entered a new stage. A new budget was approved to facilitate the development of accelerator technology, focusing on three topics that are important for the ILC: SRF, nano-beams and particle sources. To maximize the output of these activities, the ILC Technology Network has been formed. In this presentation, the latest status of the project regarding accelerator development will be reviewed.
The CLIC study has developed compact, high-gradient and energy-efficient acceleration units as building blocks for a high-energy linear collider for high-energy physics. Many of these components are now available in industry. These properties promise cost-effective solutions for small linear accelerators in a variety of applications. The CLIC study has actively promoted and supported such spin-off developments from the beginning. The applications include beam manipulation and diagnostics in research linacs such as FEL light sources, compact Compton-scattering x-ray sources, medical linacs for cancer treatment and compact neutron sources for material investigations. The presentation will introduce the X-band technologies developed and discuss examples of their use in some of these applications.
Positron sources are challenging for all high-energy lepton colliders due to the high luminosity, stability and polarization requirements imposed by physics. In this talk an overview will be given of the different concepts of positron sources. The focus will be on the currently most mature designs, for instance the undulator-based positron source foreseen for the ILC baseline design and its adaptation to other future collider designs such as the HALHF concept. An overview will be given of current R&D topics within the ITN initiative, e.g. the focusing system, the target loads and the polarization, as well as their relevance for achieving the envisaged physics goals.
To ensure high luminosity for high energy physics experiments at the International Linear Collider (ILC), a source of sufficient amounts of positrons is required. One approach for this is to produce positrons by generating electron-positron pairs from high energy photons impinging on a high-Z target material and capturing these particles with a magnetic focusing device. This beam-optics element matches the phase space of the high-divergence, large-energy-spread positron beam into the acceptance of the downstream accelerating section’s beam optics. As commonly used matching devices show unstable focusing for the required pulse duration or insufficient positron capture efficiencies (yields), a pulsed solenoid was proposed and shown in simulation to outperform other proposed devices. In this contribution we discuss the current status of the design of this pulsed solenoid with respect to positron capture efficiency, cooling, mechanical stability, and other critical performance aspects.
Cosmic Neutrinos, Direct Neutrino Mass Measurement, and Neutrinoless Double Beta Decay.
Conveners:
Livia Conti (INFN)
Carlos Perez de los Heros (Uppsala University)
Martin Tluczykont (Universität Hamburg)
Gabrijela Zaharijas (UNG)
Contact: eps23-conveners-t01 @desy.de
The accretion of dark matter around a black hole could lead to the formation of a surrounding halo. Such a dark-matter-dressed black hole can leave characteristic imprints in gamma-ray and gravitational-wave observations, which can be used to explore the nature of dark matter.
Non-linear memory is one of the most intriguing predictions of general relativity: the passage of gravitational waves leaves spacetime permanently deformed. A GW signal from, for example, a binary black hole merger can be thought of as having two parts: the oscillatory part, known as the "chirp", and a much fainter non-oscillatory (DC-like) part, which is the non-linear memory. Non-linear memory is produced by all sources of GWs and has the peculiarity that, even if the oscillatory part of the source lies at high frequency, the non-linear memory will be present at low frequency. This property of non-linear memory is what we focus on exploiting here.
There are several cases where we can use memory as a resource for current and future ground-based detectors. I will show examples of how the non-linear memory can be used to probe seemingly inaccessible sources of GWs, such as ultra-low-mass compact binary mergers, where the oscillatory part lies outside the reach of any current detector and only the non-linear memory could be detected if these sources exist. Another example is the matter effects from binary neutron stars and black hole-neutron star binaries, which are at high frequency but whose non-linear memory is accessible. I will also discuss the post-merger neutron star memory and the prospects for its detection, and briefly describe how memory can help in constraining binary black hole parameters.
Domain walls (DW) are two-dimensional topological defects predicted by several theories beyond the Standard Model. They are expected to arise from the breaking of a discrete symmetry in the early universe. The motion and the eventual annihilation of these objects are expected to generate a stochastic background of gravitational waves (GW) that could in principle be probed by ground-based GW detectors. We perform a search for a novel signature of this stochastic GW background in the data from the first three observing runs of LIGO and Virgo. First, we implement the search for an agnostic double-peaked model, followed by a phenomenological approach, in order to place constraints on the parameters characterizing the DWs. Finally, detection prospects for third-generation detectors are discussed.
Gravitational wave detection is a powerful tool that provides us with new ways to understand the universe. However, certain parameters, such as inclination and distance, are degenerate. This limitation hinders our ability to accurately measure other important factors like precession. Breaking the degeneracy between inclination and distance can also give us new insights into formation channels and cosmology. The memory effect, a unique characteristic of gravitational waves, can aid in breaking this degeneracy, especially in symmetric mass systems. In this work, we conducted a Parameter Estimation study to investigate the memory effect and its potential to enhance our interpretation of gravitational wave signals.
By means of the code HARM_COOL, which solves the equations of conservative relativistic magnetohydrodynamics, we developed a new scheme for the simulation of the system formed after a compact binary merger. Our code works with a tabulated equation of state of dense matter, accounts for neutrino leakage, and follows the mass outflows via a tracer-particle method.
I will discuss the numerical scheme and compare several recovery methods that have been included in our code. I will also show results of a numerical simulation of the post-merger system formed after the coalescence of binary neutron stars, or of a neutron star and a stellar-mass black hole. The r-process nucleosynthesis in the ejected material may lead to an electromagnetic signal, observed as a kilonova.
Ultra-low-mass primordial black holes (PBHs), which may briefly dominate the energy density of the universe but completely evaporate before big bang nucleosynthesis (BBN), may lead to interesting observable signatures. We propose a novel test of this scenario by detecting its characteristic doubly peaked gravitational wave (GW) spectrum in future GW observatories. Here the first-order adiabatic perturbation from inflation and the isocurvature perturbation due to the PBH distribution source tensor perturbations at second order and lead to two peaks in the induced stochastic GW background (ISGWB). These resonant peaks are generated at the beginning of standard radiation domination in the presence of a prior PBH-dominated era. We explore the possibility of probing a class of baryogenesis models wherein the emission of massive unstable particles from PBH evaporation and their subsequent decay contribute to the matter-antimatter asymmetry. We then include spinning PBHs and consider the emission of light relativistic dark-sector particles, which contribute to the dark radiation (DR), and of massive stable dark-sector particles, thereby accounting for the dark matter (DM) component of the universe. The ISGWB can be used to probe the non-thermal production of these heavy DM particles, which is not accessible in laboratory searches. For the case of DR, we find a novel complementarity between measurements of ΔNeff from these emitted particles and the ISGWB from PBH domination. Our results indicate that the ISGWB has a weak dependence on the initial PBH spin. However, for gravitons as the DR particles, the initial PBH spin plays a significant role, and only above a critical value of the initial spin parameter a∗, which depends only on the initial PBH mass, can the graviton emission be probed in CMB experiments.
In the second part of the talk we will discuss how such a PBH-dominated era can be probed using gravitational waves emitted by local and global cosmic strings. In addition to the step-like suppression of the GW spectrum, we propose a novel feature -- a knee in the step -- which provides information on the duration of the PBH-dominated era. Detecting GWs from cosmic strings with detectors like LISA, ET, or BBO would set constraints on PBHs with masses between $10^6$ and $10^9$ g for local strings with tension $G\mu = 10^{-11}$, and PBH masses between $10^4$ and $10^9$ g for global strings with symmetry-breaking scale $\eta = 10^{15}~\mathrm{GeV}$.
The talk is based on Refs:
https://arxiv.org/abs/2304.04793
https://arxiv.org/abs/2205.06260
https://arxiv.org/abs/2212.00775
Conveners:
Fady Bishara (DESY)
James Frost (University of Oxford)
Silvia Scorza (LPSC Grenoble)
Contact: eps23-conveners-t03 @desy.de
The presence of dark matter can explain several observations in the universe, but its nature is still unknown, making the study of dark matter a rapidly evolving field in which new techniques and methods are applied all the time. The measurement of the direction of WIMP-induced nuclear recoils is a challenging strategy to extend dark matter searches beyond the neutrino floor and provide an unambiguous signature of the detection of Galactic dark matter. The sensitivity of gas detectors is limited by the small achievable detector mass, which prevents them from reaching the neutrino floor. NEWSdm is an innovative directional experiment proposal based on a solid target made of newly developed nuclear emulsions, with read-out systems achieving a position accuracy of 10 nanometers. The nuclear emulsion technology is the most promising technique with nanometric resolution to disentangle the dark matter signal from the neutrino background. In this talk we discuss the experiment design, its physics potential and the near-future plans. After the submission of a Letter of Intent, a new facility for emulsion handling was constructed in the Gran Sasso underground laboratory. A Conceptual Design Report is in preparation and will be submitted in 2023.
The LUX-ZEPLIN (LZ) dark matter search experiment is a dual-phase xenon time projection chamber operating at the Sanford Underground Research Facility in Lead, South Dakota, USA. It comprises 10 tonnes of liquid xenon, outfitted with photomultiplier tubes in both the central and the self-shielding regions, enclosed within an active gadolinium-loaded liquid scintillator veto, all submerged in an ultra-pure water tank veto system. LZ has completed its first science run, collecting data from an exposure of 60 live-days and delivering world-leading sensitivity to searches for Weakly Interacting Massive Particles (WIMPs). This talk will provide an overview of LZ's search utilising a model-agnostic Effective Field Theory (EFT) framework that describes several possible dark matter interactions with nucleons. We highlight the key backgrounds, data-analysis techniques, and signal models relevant to this study and present the results from this search.
In this talk, we present two gauge models for light dark matter: one with an exotic positively charged lepton and the other a variant with right-handed neutrinos. The scalar self-interacting dark matter candidates are stable without imposing a new symmetry and should be weakly interacting. We study the impact of the self-interacting light dark matter on the formation of the dark halo, the observational properties of neutron stars, and its effect on the gravitational wave signal. We also present a search by the LIGO-Virgo-KAGRA collaborations for ultralight dark matter using cross-correlation and excess power methods for the O3 observing run.
Dark matter is believed to account for 85$\%$ of the matter in the Universe. The leading dark matter candidate is the WIMP (weakly interacting massive particle). Light dark matter refers to candidates with a mass of less than 1 GeV; the concept was developed in order to explain the 511 keV $\gamma$-ray emission from the galactic bulge observed by the INTEGRAL satellite. There are many candidates for light DM, spanning a wide range of potential masses and couplings to the visible sector. Probing the vast parameter space of light dark matter requires a correspondingly broad experimental program, which includes neutrino fixed-target experiments. NO$\nu$A is a high-luminosity, long-baseline, fixed-target accelerator neutrino experiment at Fermilab that can provide a potentially interesting probe for signatures of DM scattering off electrons in its near detector. We aim to search for MeV-scale dark matter particles that might be generated within the NuMI beam and produce detectable electron-scattering signals in the NO$\nu$A near detector. In this talk, we present our analysis of single-electron events using a simulated sample and show the sensitivity of the NO$\nu$A experiment.
Dark matter is a mysterious and elusive form of matter in our Universe, of which we can only measure the gravitational effects. According to the most accredited theoretical models, dark matter particles in our galaxy might annihilate and produce Standard Model particle-antiparticle pairs which, traveling through the galaxy, can reach the Earth and be detected by space-borne experiments such as AMS-02 or GAPS. Nucleus-antinucleus pairs are promising probes for indirect dark matter detection due to their rare production rate in inelastic collisions between cosmic rays and the interstellar medium.
In this talk, recent ALICE measurements of (anti)nuclei production in pp collisions and of antinuclei inelastic cross section will be presented. These results are used to constrain the flux of secondary antinuclei from cosmic ray interactions by modeling their production rate and absorption term in the propagation equation. The current precision of the antinuclei inelastic cross section measurements is expected to be improved in Run 3 using a dedicated experimental setup installed in ALICE whose expected performance will be shown.
Conveners:
Summer Blot (DESY)
Pau Novella (IFIC)
Davide Sgalaberna (ETH)
Jessica Turner (Durham University)
Contact: eps23-conveners-t04 @desy.de
The main goal of the GERmanium Detector Array (GERDA) experiment at the Laboratori Nazionali del Gran Sasso (LNGS, Italy) is the search for the lepton-number-violating neutrinoless double-beta ($0\nu\beta\beta$) decay of 76Ge. The discovery of such a phenomenon would have significant implications in cosmology and particle physics, revealing the Majorana nature of neutrinos. The main feature of the GERDA design is the operation of an array of bare germanium diodes enriched in 76Ge in an active liquid argon shield. The Phase II physics run (December 2015 - November 2019) reached an unprecedentedly low background index of $5.2 \times 10^{-4}$ counts/(keV kg yr) in the signal region, collecting an exposure of 103.7 kg yr while operating in a background-free regime. No signal was observed after a total exposure of 127.2 kg yr for a combined analysis of Phase I (November 2011 - September 2013) and Phase II data. A lower bound on the half-life of $0\nu\beta\beta$ decay in 76Ge was set at $T_{1/2} > 1.8 \times 10^{26}$ yr (90% C.L.), which coincides with the median expectation under the no-signal hypothesis.
This contribution will review the GERDA experiment design and its final results on searches for 76Ge double-beta decays both with and without neutrino emission, as well as recent results on searches for tri-nucleon decay of 76Ge and for new exotic physics.
The Cryogenic Underground Observatory for Rare Events (CUORE) is the first bolometric experiment searching for $0\nu\beta\beta$ decay that has been able to reach the one-tonne mass scale. The detector, located at LNGS in Italy, consists of an array of 988 TeO2 crystals arranged in a compact cylindrical structure of 19 towers. CUORE began its first physics data run in 2017 at a base temperature of about 10 mK, and in April 2021 released its third result of the search for $0\nu\beta\beta$, corresponding to a tonne-year of TeO2 exposure. This is the largest amount of data ever acquired with a solid-state detector and the most sensitive measurement of $0\nu\beta\beta$ decay in 130Te ever conducted, with a median exclusion sensitivity of $2.8\times10^{25}$ yr. We find no evidence of $0\nu\beta\beta$ decay and set a lower bound of $2.2\times10^{25}$ yr at 90% credibility on the 130Te half-life for this process. In this talk, we present the current status of the CUORE search for $0\nu\beta\beta$ with the updated statistics of one tonne-year. We also give an update of the CUORE background model and the measurement of the 130Te $2\nu\beta\beta$ decay half-life, a study performed using an exposure of 300.7 kg⋅yr.
The NEXT experiment searches for the neutrinoless double beta decay of Xe-136 using a series of detectors based on the high-pressure xenon gas time projection chamber (HPXeTPC) technology. The previous stage of this family of detectors was NEXT-White, the first radiopure detector of the NEXT series, with 5 kg of xenon. Its goals were a detailed assessment of the backgrounds for Xe-136 double beta decay searches, the measurement of the Xe-136 $2\nu\beta\beta$ half-life and the characterisation of the detector performance at energies close to the Xe-136 decay energy. Since its decommissioning in 2021, NEXT has entered its current stage, the construction of the NEXT-100 detector. NEXT-100 will hold up to 80 kg of xenon and is expected to start running by the beginning of 2024. This detector will perform NEXT's first sensitive neutrinoless double beta decay search in Xe-136. Both NEXT-White and NEXT-100 are hosted by the Laboratorio Subterráneo de Canfranc, located in the Spanish Pyrenees. R&D has also started for next-generation NEXT detectors beyond NEXT-100, which may enable for the first time the detection of the daughter 136Ba++ ion produced in the Xe-136 decay. In this talk we will discuss the latest results of the experiment from NEXT-White, including NEXT's first search for $0\nu\beta\beta$ decay, the status of NEXT-100 construction, and R&D prospects towards future tonne-scale detectors.
We present an analysis of neutrinoless double beta decay (DBD) mediated by non-interfering exchange of light and heavy neutrinos, in the context of current calculations of nuclear matrix elements (NME) in different nuclear models.
We derive joint upper bounds on the light and heavy contributions to the Majorana effective mass through an updated combination of the latest data from the following experiments and isotopes: GERDA and MAJORANA (Ge), KamLAND-Zen and EXO (Xe), and CUORE (Te), for different choices of NME. We then consider three ton-scale projects which might provide possible DBD evidence at the >3σ level in the allowed parameter space: LEGEND (Ge), nEXO (Xe) and CUPID (Mo). The combinations of possible DBD signals mediated by light, heavy, and light+heavy neutrinos are studied for different choices of NME, showing the conditions under which the underlying mechanism(s) can or cannot be identified. In particular, the role of NME ratios in different isotopes is elucidated through appropriate graphical representations. By using different "true" and "test" NME sets as a proxy for NME uncertainties, significant bias effects may emerge, confusing the identification of light vs heavy neutrino contributions to DBD signals. These results provide further motivation for more accurate NME calculations and for multi-isotope DBD searches at the ton scale.
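For context (the standard light-neutrino-exchange relation, quoted only to fix notation; the heavy-neutrino contribution enters through an analogous term with its own NME), the 0νββ half-life is related to the effective Majorana mass by
$\left[T_{1/2}^{0\nu}\right]^{-1} = G_{0\nu}\,|M_{0\nu}|^2\,(m_{\beta\beta}/m_e)^2$, with $m_{\beta\beta} = \left|\sum_i U_{ei}^2\, m_i\right|$,
where $G_{0\nu}$ is the phase-space factor and $M_{0\nu}$ the nuclear matrix element, so NME uncertainties propagate directly into the extracted $m_{\beta\beta}$.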
Observation of neutrinoless double-beta ($0\nu\beta\beta$) decay, a process forbidden in the Standard Model, would demonstrate lepton number violation and provide key insights into the matter-antimatter asymmetry of the Universe and the Majorana nature of the neutrino. It is a challenging quest that requires experimental conditions ensuring little to no background, superb energy resolution, and high masses of the studied isotopes. The Large Enriched Germanium Experiment for Neutrinoless double-beta Decay (LEGEND) is designed to provide such experimental conditions, and will operate in a virtually background-free regime, aiming at an unambiguous discovery of $0\nu\beta\beta$ decay of $^{76}$Ge.
The first stage of the experiment, LEGEND-200, utilizes up to 200 kg of $^{76}$Ge-enriched detectors, and is currently in operation. The second stage, LEGEND-1000, will employ 1000 kg of enriched germanium, and is projected to reach sensitivity to the half-life of $0\nu\beta\beta$ decay in excess of $10^{28}$ years, a big leap compared to previously achievable limits.
LEGEND-200 is located at LNGS, Italy, inheriting the existing GERDA facility. The commissioning of LEGEND-200 was completed in October 2022 with the deployment of 140 kg of enriched high-purity $^{76}$Ge detectors, and physics data taking started in March 2023. In this talk I will summarize the status of the experiment and its current and future milestones. The talk is presented on behalf of the LEGEND collaboration.
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06 @desy.de
We discuss the preliminary results of the new global nCTEQ23 nuclear PDF analysis, combining a number of our previous analyses into one consistent framework with updates to the underlying theoretical treatment as well as the addition of newly available data. In particular, nCTEQ23 will be the first nCTEQ release containing neutrino DIS data in a consistent manner together with JLab high-x DIS data and new LHC p-Pb data. These additions will improve the data-driven description of nuclear PDFs in new regions, such as the gluon at very low x and the nuclear strange-quark PDF.
A first measurement of the 1-jettiness event shape observable in neutral-current deep-inelastic electron-proton scattering is presented. The 1-jettiness observable $\tau_{1b}$ is defined such that it is equivalent to the thrust observable defined in the Breit frame. The data were taken in the years 2003 to 2007 with the H1 detector at the HERA ep collider at a center-of-mass energy of 319 GeV and correspond to an integrated luminosity of 351.6 pb$^{−1}$. The triple-differential cross sections are presented as a function of the 1-jettiness $\tau_{1b}$, the event virtuality $Q^2$ and the inelasticity $y$ in the kinematic region $Q^2>150$ GeV$^2$. The data have sensitivity to the parton distribution functions of the proton, the strong coupling constant and to resummation and hadronisation effects. The data are compared to selected predictions.
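A commonly used definition of the 1-jettiness in DIS (quoted here for orientation; the precise conventions of the measurement are those of the H1 publication) is
$\tau_1 = \frac{2}{Q^2}\sum_{i}\min\left(q_B\cdot p_i,\; q_J\cdot p_i\right)$,
where the sum runs over the hadronic final-state momenta $p_i$ and $q_B$, $q_J$ are beam and jet reference vectors; $\tau_1 \to 0$ for a perfect one-jet (Born-like) configuration, and with a suitable choice of the jet axis $\tau_{1b}$ coincides with thrust in the Breit frame.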
Paper in preparation, will be ready for EPS2023
The Future Circular Collider (FCC) is a post-LHC project aiming at direct and indirect searches for physics beyond the SM in a new 91-km tunnel at CERN. In addition, the FCC-ee offers unique possibilities for high-precision studies of the strong interaction in the clean environment provided by e$^+$e$^-$ collisions, thanks to its broad span of center-of-mass energies, ranging from the Z pole to the top-pair threshold, and its huge integrated luminosities yielding $O(5\times 10^{12})$ and $O(2\times10^8)$ jets from Z and W boson decays respectively, $O(2\times 10^5)$ pure gluon jets from Higgs boson decays, as well as $O(2\times10^6)$ top quarks. In this contribution, we will summarize the impact that the FCC-ee will have on our improved knowledge of the strong force including: (i) QCD coupling determinations with permil uncertainties, (ii) ultraprecise studies of parton radiation and jet properties (light-quark/heavy-quark/gluon discrimination, jet substructure, etc.); and (iii) accurate scrutiny of nonperturbative QCD phenomena (color reconnection, hadronization, final-state hadron interactions,...).
The Large Hadron-electron Collider and the Future Circular Collider in electron-hadron mode [1] will make possible the study of DIS in the TeV regime providing electron-proton (nucleus) collisions with per nucleon instantaneous luminosities around $10^{34}$ ($10^{33}$) cm$^{−2}$s$^{−1}$. In this talk we review the opportunities that these proposals offer for the determination of the partonic structure of protons and nuclei in view of the recent findings at the LHC and the future possibilities at the LHC and the EIC. The complete unfolding of all parton species in a single experiment with high precision in an extended kinematic domain would be possible already in a first stage of the machine with modest integrated luminosity. We also present the determination of the strong coupling constant with per mille accuracy using inclusive and jet DIS data.
[1] LHeC Collaboration and FCC-he Study Group: P. Agostini et al., J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
The measurement of exclusive $e^+e^-$ to hadrons processes is a significant part of the physics program of $BABAR$ experiment, aimed to improve the calculation of the hadronic contribution to the muon g−2 and to study the intermediate dynamics of the processes. We present the most recent studies performed on the full data set of about 470 $\text{fb}^{-1}$ collected at the PEP-II $e^+e^-$ collider at a center-of-mass energy of about 10.6 GeV.
In particular, we report the results on $e^+e^- \to \pi^+\pi^-\pi^0$. From the fit to the measured $3\pi$ mass spectrum we determine the products $\Gamma(V\to e^+e^-)\cal{B}(V\to 3\pi)$ for the $\omega$ and $\phi$ resonances, as well as $\cal{B}(\rho\to 3\pi)$. The latter isospin-breaking decay is observed with $6\sigma$ significance. The measured $e^+e^- \to \pi^+\pi^-\pi^0$ cross section is used to calculate the leading-order hadronic contribution to the muon magnetic anomaly from this exclusive final state with improved accuracy.
We show also new results on the study of $e^+e^- \to 2 K 3\pi$ processes, in an energy range from production threshold up to about 4 GeV. For each process, the cross section is measured as a function of the invariant mass of the hadronic final state. The production of several intermediate final states is also measured, allowing for the search for new decay modes of recently discovered resonances.
We present FMNLO, a framework to combine general-purpose Monte Carlo generators and fragmentation functions (FFs). It is based on a hybrid scheme of the phase-space slicing method and the local subtraction method, and is accurate to next-to-leading order (NLO) in QCD. The new framework has been interfaced to MG5_aMC@NLO and made publicly available in this work. We demonstrate its unique ability by giving theoretical predictions of various fragmentation measurements at the LHC, followed by comparisons with the data. With the help of interpolation techniques, FMNLO allows for fast calculation of fragmentation processes for a large number of different FFs, which makes it a promising tool for future fits of FFs. As an example, we perform an NLO fit of parton fragmentation functions to unidentified charged hadrons using measurements at the LHC. We find the ATLAS data from inclusive dijet production show a strong constraining power. Notable disparities are found between our gluon FF and those of BKK, DSS, and NNFF, indicating the necessity of additional constraints and data for the gluon fragmentation function.
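Schematically (the standard collinear factorization that such frameworks implement; the notation is generic rather than FMNLO-specific), the cross section for producing a hadron $h$ is
$d\sigma^{h} = \sum_{i} \int_0^1 dz\; d\hat{\sigma}^{i}(\mu_R,\mu_F,\mu_D)\; D_i^{h}(z,\mu_D)$,
where $d\hat{\sigma}^{i}$ is the perturbative cross section for producing parton $i$, computed here at NLO, and $D_i^{h}(z,\mu_D)$ is the fragmentation function evaluated at the fragmentation scale $\mu_D$; it is the FF factor that the fit constrains.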
Fragmentation functions are one of the key components of the factorisation theorem used to calculate heavy-flavour hadron production cross-sections. Due to their non-perturbative nature, fragmentation functions are typically constrained in the clean environments of $\mathrm{e^{+}e^{-}}$ and $\mathrm{e^{-}p}$ collisions.
Recent measurements of charm-hadron spectra and of the ratios of charmed-hadron abundances in pp collisions have questioned the universality of fragmentation functions across leptonic and hadronic collision systems. In this talk, we present measurements of differential observables that consider the surrounding hadronic density in addition to the heavy-flavour hadron itself. These measurements allow for a closer connection to the charm fragmentation functions and stronger constraints on the properties of hadronization in hadronic collisions.
We report the final measurement of the fraction of the longitudinal jet momentum carried by $\rm{D}^{0}$ and $\rm{D}_{s}^{+}$ mesons as well as $\Lambda_{c}^{+}$ baryons. We also report the final results on correlations between heavy-flavour decay electrons and charged particles, from small to large systems. The comparison of azimuthal correlations of $\Lambda_{c}^{+}$ baryons with charged particles and of D mesons with charged particles in pp collisions will also be shown. The latter provides quantitative access to the angular profile, $p_{\rm T}$ and multiplicity distributions of the jets produced by heavy-quark fragmentation.
Properties of partonic fragmentation in QCD depend on the parton flavours in $1\rightarrow2$ splitting processes in parton showers, due to the different Casimir factors of quarks and gluons and to the different masses of light- and heavy-flavour quarks. Heavy-flavour jets provide a unique experimental tool to probe these flavour dependencies, particularly at low and intermediate transverse momenta where mass effects are significant. The ALICE detector has excellent particle tracking and PID performance to tag jets with reconstructed heavy-flavour hadrons. These capabilities are essential for jet substructure studies, as they remove significant contamination from heavy-flavour hadron decay products and allow us to trace the quark flavour through the splitting tree. We report a series of heavy-flavour jet substructure measurements tagged with a reconstructed $\rm{D}^{0}$ meson. These include first measurements of the jet-axes differences between different recombination and grooming schemes, and of the jet angularities, where the angular exponent can tune the sensitivity to mass and Casimir effects. Additionally, the groomed momentum fraction and opening angle of the first splitting are reported, which link to fundamental ingredients of the splitting functions. Finally, to comprehensively study the perturbative and non-perturbative aspects of jet structure, we measured the energy-energy correlators (EEC), which emphasize the angular structure of the energy flow within jets. Defined as the energy-weighted cross section of particle pairs inside jets, the EECs as a function of pair distance show a distinct separation of the perturbative from the non-perturbative regime, revealing parton-flavour-dependent dynamics of jet formation as well as the confinement of the partons into hadrons. Comparisons with flavour-untagged jets probe flavour dependencies from the charm-quark mass and the high-purity quark nature of the $\rm{D}^{0}$-tagged jet sample. Further comparisons to different MC generators will assess the role of these flavour dependencies in other parton shower prescriptions.
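For reference (a generic definition of the in-jet energy-energy correlator; the binning and normalization conventions of the actual analysis may differ), the EEC as a function of the pair angular distance $\Delta R$ can be written as
$\text{EEC}(\Delta R) = \sum_{\text{jets}} \sum_{i \neq j \in \text{jet}} \frac{p_{\mathrm{T},i}\, p_{\mathrm{T},j}}{p_{\mathrm{T,jet}}^2}\, \delta\!\left(\Delta R - \Delta R_{ij}\right)$,
i.e. each particle pair inside a jet contributes with a weight given by the product of its transverse momenta, so soft radiation is suppressed and the distribution is dominated by the energetic core of the jet.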
The focus of the session is on top-quark precision measurements and theory calculations.
Conveners:
Gauthier Durieux (CERN)
Abideh Jafari (DESY)
Narei Lorenzo Martinez (LAPP, Annecy)
Contact: eps23-conveners-t07 @desy.de
Duration: 14'+4'
The Large Hadron-electron Collider and the Future Circular Collider in electron-hadron mode [1] will make possible the study of DIS in the TeV regime, providing electron-proton collisions with instantaneous luminosities of $10^{34}$ cm$^{-2}$s$^{-1}$. In this talk we will review the opportunities for measuring standard and anomalous top quark couplings, both to lighter quarks and to gauge bosons, flavour changing and conserving, through single top quark and $t\bar t$ production. We will discuss the studies in inclusive DIS of different EW parameters like the effective mixing angle and the gauge boson masses, and the weak neutral and charged current couplings of the gauge bosons. We will also review the possibilities in direct $W$ and $Z$ production, and analyse the implications of a precise determination of parton densities at the LHeC or FCC-eh on EW measurements at hadronic colliders. Special emphasis is given to possibilities to empower $pp$ and $e^+e^-$ physics at the LHC and FCC.
[1] LHeC Collaboration and FCC-he Study Group: P. Agostini et al., J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
Duration: 14'+4'
The Future Circular Collider (FCC) is a post-LHC project aiming at direct and indirect searches for physics beyond the SM in a new 91 km tunnel at CERN. The FCC with electron-positron beams (FCC-ee) will provide measurements of the Z and W boson couplings and masses 1--3 orders of magnitude better than the present state of the art. With the run around the Z pole, where the integrated luminosity is expected to be about six orders of magnitude larger than at LEP, the Z boson mass and width, the $Z \to b\bar{b}$ partial width, and the forward-backward asymmetries for leptons and quarks will be measured with ppm-scale precision. As a result, the effective weak mixing angle and the electromagnetic coupling at the Z pole can be extracted with $O(10^{-5})$ relative uncertainties. Similarly, the $2\times10^8$ W-boson pairs expected close to threshold will deliver precision determinations of the W boson mass and width at the level of a few hundred keV. This new level of experimental accuracy requires a proactive study of accelerator operation and detector design beyond anything achieved at colliders so far. Such studies have begun and welcome new ideas and participants. Via electroweak loop corrections or mixing of new physics with the SM particles, the indirect discovery potential for new weakly interacting particles extends up to energy scales of around 50 TeV, or down to couplings of $10^{-11}$.
Duration: 14'+4'
The bound state of two tau leptons, ditauonium $\mathcal{T}$, is the heaviest and also the most compact QED atom, and remains unobserved to date. There are several motivations for its study, such as precisely extracting properties of the tau lepton, carrying out novel tests of the SM and its basic CPT symmetries, and searching for BSM effects impacting the tau lepton. We will first discuss the spectroscopic properties of ditauonium, including energy levels and decay channels. A systematic survey of search strategies at present and future lepton and hadron colliders will then be provided. The spin-triplet state, ortho-ditauonium, can be observed at a future super tau-charm factory (STCF) via $e^+e^-\to\mathcal{T}_1\to\mu^+\mu^-$, where a threshold scan with monochromatized beams can also provide a very precise extraction of the tau lepton mass with $\mathcal{O}(25$ keV) uncertainty. Observing pp $\to \mathcal{T}_1(\mu^+\mu^-)+X$ is possible at the LHC by identifying its displaced vertex with good control of the combinatorial dimuon background. The spin-singlet state, para-ditauonium, will be observable in photon-photon collisions at the FCC-ee via $\gamma\gamma\to\mathcal{T}_0\to\gamma\gamma$.
References: arXiv:2202.02316, arXiv:2204.07269, arXiv:2302.07365.
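For orientation only, and not taken from the abstract above, the statement about compactness can be quantified with leading-order Coulomb estimates for a hydrogen-like system of reduced mass $\mu = m_\tau/2$ (neglecting the finite tau lifetime): the ground-state binding energy is $E_1 \simeq \mu\,\alpha^2/2 = m_\tau\,\alpha^2/4 \approx 23.7$ keV and the Bohr radius is $a_0 \simeq 1/(\mu\,\alpha) = 2/(m_\tau\,\alpha) \approx 30.4$ fm, far smaller than for positronium or muonium.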
Duration: 14'+4'
Future Higgs Factories will allow the precise study of $e^+e^-\rightarrow q\bar{q}$ with $q=s,c,b,t$ interactions at different energies, from the Z-pole to high energies never reached before.
In this contribution, we will discuss the experimental prospects for the measurement of differential observables in $e^+e^-\rightarrow b\bar{b}$ and $e^+e^-\rightarrow c\bar{c}$ processes at high energies, 250 and 500 GeV, with polarised beams, using full simulation samples and the reconstruction chain from the ILD concept group.
These processes call for superb primary and secondary vertex measurements, a high tracking efficiency to correctly measure the vertex charge and excellent hadron identification capabilities using $dE/dx$. This latter aspect will be discussed in detail together with its implementation within the standard flavour tagging tools developed for ILD (LCFIPlus). In addition, prospects associated with potential improvements of the $dE/dx$ reconstruction using cluster counting techniques will also be discussed. Finally, we will briefly discuss the discovery potential for BSM scenarios such as Randall-Sundrum models with warped extra dimensions, profiting from measurements of $b/c$-quark-related observables at different beam energies and polarisations.
Duration: 14'+4'
The LUXE experiment (Laser Und XFEL Experiment) is an experiment in planning at DESY Hamburg using the electron beam of the European XFEL. LUXE is intended to study collisions between a high-intensity optical laser pulse and 16.5 GeV electrons from the XFEL electron beam, as well as collisions between the laser pulse and high-energy secondary photons. This will elucidate quantum electrodynamics (QED) at the strong-field frontier, where the electromagnetic field of the laser is above the Schwinger limit. In this regime, QED is non-perturbative. This manifests itself in the creation of physical electron-positron pairs from the QED vacuum, similar to Hawking radiation from black holes. LUXE intends to measure the positron production rate in an unprecedented laser intensity regime. The experiment has received a stage 1 critical approval (CD1) from the DESY management and is finalising its technical design report (TDR). It is expected to start running in 2026. An overview of the LUXE experimental setup and its challenges and progress will be given, along with a discussion of the expected physics reach in the context of testing QED in the non-perturbative regime.
Duration: 12'+3'
One of the most intriguing indications of a theory beyond the Standard Model is the well-known discrepancy between the theoretical prediction of the muon anomalous magnetic moment $a_\mu = (g-2)_\mu/2$ and its experimental value. Another possible explanation for this inconsistency is an incorrect evaluation of the Hadronic Leading Order (HLO) contribution $a_\mu^{HLO}$, which is also the main source of the theoretical uncertainty. Furthermore, there are several tensions between the results obtained with the time-like approach based on $e^+e^-\to hadrons$ data and lattice simulations. In this context, the recently proposed MUonE experiment aims at providing a novel determination of $a_\mu^{HLO}$ through the study of elastic muon-electron scattering. The high-precision measurement of the differential cross section allows the determination of $a_\mu^{HLO}$ from the running of the electromagnetic coupling $\alpha(t)$ in the space-like momentum region.
In this contribution, we will present the status of the theoretical calculations that are required to achieve a competitive measurement of $a_\mu^{HLO}$ with the MUonE experiment. In particular, the precision goal of 10 ppm on the differential cross section requires at least a QED Next-to-Next-to-Leading Order (NNLO) computation of muon-electron scattering with full mass dependence. The results for the NNLO photonic and leptonic corrections, both virtual and real, will be presented. Their implementation in Monte Carlo tools for data analysis will also be discussed.
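For orientation, the space-like determination rests on the standard master integral (a textbook relation, not specific to this talk): $a_\mu^{\rm HLO} = \frac{\alpha}{\pi}\int_0^1 dx\,(1-x)\,\Delta\alpha_{\rm had}[t(x)]$ with $t(x) = -\frac{x^2 m_\mu^2}{1-x} < 0$, so that a measurement of the hadronic running $\Delta\alpha_{\rm had}(t)$ at space-like momentum transfer translates directly into $a_\mu^{\rm HLO}$.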
Duration: 12'+3'
The MUonE experiment aims to measure the differential cross section of $\mu e$ elastic scattering using the CERN SPS muon beam, with mean energy of 160 GeV, on atomic electrons of a low-Z target. Thanks to the intense M2 beam, with an in-spill intensity of $5 \times 10^7$ muons/s, a precise measurement of the scattering angles allows the extraction of the running QED coupling and, via a novel approach, the leading hadronic contribution to the muon anomalous magnetic moment. This determination will be completely independent of the usual data-driven method, based on measurements of the R-ratio in $e^+e^-$ annihilations, and could clarify the discrepancy between these results and the theory-based estimates from recent lattice calculations.
The MUonE challenge resides in the control of theoretical and experimental uncertainties to an unprecedented level of precision for a scattering experiment. A pilot run is scheduled for this summer, which will operate for the first time multiple tracking stations and a prototype electromagnetic calorimeter connected to a triggerless readout system. The status and plans of the experiment will be presented.
Conveners:
Thomas Blake (University of Warwick)
Marzia Bordone (CERN)
Thibaud Humair (MPP)
Contact: eps23-conveners-t08 @desy.de
Physics of the up-type flavour sector offers unique possibilities for testing the Standard Model (SM) compared to the down-type flavour sector. Here, I discuss SM and New Physics (NP) contributions to the rare $D$-meson decays $D^0 \to \pi^+ \pi^- \ell^+ \ell^-$. In particular, I discuss the effect of including the lightest scalar isoscalar resonance, the $f_0 (500)$, in the SM picture. Besides contributing at an observable level to the total branching ratio, the $f_0 (500)$ resonance manifests itself through interference terms with the vector resonances, for instance at high invariant mass of the leptonic pair in distinct angular observables. Recent data from LHCb optimize the sensitivity to $P$-wave contributions; I point out that a different definition of the observables used therein is sensitive to the $S$-wave. Finally, I study which observables are sensitive to the interference of the $S$-wave with generic NP contributions in semi-leptonic four-fermion operators.
We update our analysis of D meson mixing including the combined analysis of B and D decays. We derive constraints on absorptive and dispersive CP violation by combining all available data. We also provide posterior distributions for observable parameters appearing in D physics, as well as for the CKM phase gamma.
Measuring the total charm cross section is important for the comparison to theoretical predictions with the highest precision available today, which are completely known up to NNLO in perturbative QCD for the total inclusive charm cross section in pp collisions, while only NLO+NLL predictions are available for differential and fiducial cross sections. All total charm cross-section measurements obtained at the LHC so far are either invalidated by not yet accounting for the recent observations of the nonuniversality of charm fragmentation, or fiducially limited to the central rapidity range only, or to some other fiducial range, such that no NNLO comparison is possible. Combining the published information from all LHC experiments on the measured charm fiducial cross sections with all available charm fragmentation nonuniversality measurements, the first measurement of the inclusive total charm cross section in pp collisions at 5 TeV that accounts for charm fragmentation nonuniversality is obtained and compared to the corresponding NNLO QCD prediction. To achieve this, the remaining unmeasured phase-space regions are interpolated or extrapolated by constraining the uncertainties of the shape of the available perturbative NLO+NLL QCD predictions in a novel data-driven way including nonuniversality corrections, such that these uncertainties are considerably reduced without the need to refer to any particular nonperturbative model for charm fragmentation nonuniversality. The size of the effect of ignoring charm fragmentation nonuniversality in previous inclusive total charm cross-section evaluations is also quantified.
We propose new methods to determine the $\Upsilon(4S)\to B^+B^-$ and $\Upsilon(4S)\to B^0\overline B^0$ decay rates. These rates and their ratio are some of the limiting uncertainties in absolute branching fraction measurements, and thereby for a variety of applications, such as flavor symmetry relations. The new methods we propose are based in one case on exploiting the $\Upsilon(5S)$ data sets, in the other case on the different average number of charged tracks in $B^\pm$ and $B^0$ decays. We estimate future sensitivities using these methods and discuss possible measurements of $f_d / f_u$ at the LHC.
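As a purely schematic illustration of the multiplicity-based method (the actual analysis in the paper may differ in its ingredients), the average charged multiplicity in an $\Upsilon(4S)$ sample can be decomposed as $\langle N_{\rm ch}\rangle = f^{+-}\,\mu_{+-} + (1-f^{+-})\,\mu_{00}$, giving $f^{+-} = (\langle N_{\rm ch}\rangle - \mu_{00})/(\mu_{+-} - \mu_{00})$, where $f^{+-}$ denotes the $\Upsilon(4S)\to B^+B^-$ fraction and $\mu_{+-}$, $\mu_{00}$ the average numbers of charged tracks in $B^+B^-$ and $B^0\overline B^0$ events; the sensitivity of such an extraction grows with the difference $\mu_{+-}-\mu_{00}$.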
The Future Circular Collider (FCC) is a post-LHC project aiming at direct and indirect searches for physics beyond the SM in a new 91 km tunnel at CERN. The abundant production of beauty and charm hadrons in the $8\times 10^{12}$ Z boson decays expected in e+e- collisions at FCC-ee offers outstanding opportunities in flavour physics, with b and c hadron samples that exceed those available at Belle II by a factor of 20 and are complementary to the LHC heavy-flavour programme. A wide range of measurements will be possible in heavy-flavour spectroscopy, rare decays of heavy-flavoured particles and CP-violation studies, which will benefit from the low-background experimental environment, the high Lorentz boost, and the availability of the full spectrum of hadron species. The tau-pair sample produced in the Tera-Z phase will be 3 times larger than at Belle II and, thanks to more favourable experimental conditions (better tau-hadron separation, better separation of the tau hemispheres, higher-momentum tracks), it will be possible to significantly improve the determinations of the tau-lepton properties -- lifetime, leptonic and hadronic widths, and mass -- allowing for important tests of lepton universality. Furthermore, it will be possible to extend the searches for lepton-flavour-violating tau decays, and, via the measurement of the tau polarisation, FCC-ee can access a precise determination of the neutral-current couplings of electrons and taus. These measurements present strong experimental challenges in exploiting statistical uncertainties of $O(10^{-5})$ as far as possible, setting strict detector requirements. This contribution will present an overview of the broad potential of the FCC-ee flavour physics programme, as well as some preliminary results from recent analyses.
We analyze the implications of current $b \to s \ell \ell$ ($\ell=e, \, \mu$) measurements on several $B \to K^* \tau^+ \tau^-$ observables under the assumption that the possible new physics can have both universal as well as nonuniversal couplings to leptons. For these new physics solutions, we intend to identify observables with large deviations from the Standard Model (SM) predictions as well as to discriminate between various new physics scenarios. For this we consider the $B \to K^* \tau^+ \tau^-$ branching fraction, the $K^*$ longitudinal fraction, the tau forward-backward asymmetry and the optimized angular observables. Further, we construct the $\tau - \mu$ lepton-flavor differences ($Q_{\tau \mu}$) between these tau observables and their muonic counterparts in $B \to K^* \mu^+ \mu^-$ decay along with the lepton-flavor ratios ($R_{\tau \mu}$) of all of these observables. We find that the current data allows for deviations ranging from 25% up to an order of magnitude from the SM value in a number of observables. A precise measurement of these observables can also discriminate between a number of new physics solutions.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10 @desy.de
This talk presents a model independent search for an additional heavy, mostly sterile, neutral lepton (HNL) which is capable of mixing with the Standard Model tau neutrino with a mixing strength of $|U_{\tau 4}|^{2}$, corresponding to the square of the extended Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix element. HNLs are hypothetical particles predicted by many beyond-Standard-Model theories. HNLs can explain oscillation anomalies as well as the baryon asymmetry in the universe through leptogenesis, and can also provide dark matter candidates. We search for HNL production in the decays of the tau lepton analyzing a data set from the $BABAR$ experiment, with a total integrated luminosity of 424 fb$^{-1}$. A kinematic approach is taken and no assumptions are made regarding the model behind the origins of the HNL, its lifetime or decay modes. A binned likelihood technique is utilized and HNLs of mass $100
The smallness of neutrino masses, which together with neutrino oscillations could be pointing to physics beyond the Standard Model, can be naturally accommodated by the so-called "seesaw" mechanism, in which new Heavy Neutral Majorana Leptons (HNLs) are postulated. Several models with HNLs exist that incorporate the seesaw mechanism, sometimes also providing a DM candidate or a possible explanation for the baryon asymmetry. This talk presents searches for HNLs interpreted in such models, using both prompt and long-lived signatures in CMS with the full Run 2 dataset collected at the LHC.
Neutrinos are the most elusive particles known. Heavier sterile neutrinos mixing with the Standard Model partners might solve the mystery of the baryon asymmetry of the universe and take part in the mass generation mechanism for the light neutrinos. Future lepton colliders, including e+e− Higgs factories, as well as multi-TeV electron and muon machines, will provide the farthest search reach for such neutrinos in the mass range from above the Z pole into the multi-TeV regime. In our contribution, we will discuss the future lepton collider search potential for such particles in their prompt decays. We will also present a new approach to use kinematic variables to constrain the nature of heavy neutrinos, probing their Majorana or Dirac character. Finally, we will discuss the complementarity in the flavor-mixing parameter space between the two types of lepton colliders.
MicroBooNE is an 85-tonne active mass liquid argon time projection chamber (LArTPC) at Fermilab. With an excellent calorimetric, spatial and energy resolution, the detector was exposed to two neutrino beams between 2015 and 2020. These characteristics make MicroBooNE a powerful detector not just to explore neutrino physics, but also for Beyond the Standard Model (BSM) physics. Recently, MicroBooNE has published a search for heavy neutral leptons and Higgs portal scalars from kaon decays. In addition, MicroBooNE has developed tools for a neutron-antineutron oscillation search for the upcoming Deep Underground Neutrino Experiment (DUNE). This talk will explore MicroBooNE’s capabilities for BSM physics and highlight its most recent results.
The Short-Baseline Near Detector (SBND) is a 112-ton liquid argon time projection chamber (LArTPC) detector located 110 m downstream of the Booster Neutrino Beam target at Fermilab. As the near detector of the Short-Baseline Neutrino Program, SBND is especially sensitive to any new particles produced in the beam. In addition to the excellent spatial and energy resolution of the LArTPC technology, SBND features photon detection and cosmic-ray tagger systems achieving ns time resolution. In this talk, we will review SBND's capabilities and prospects for searches for Beyond Standard Model physics such as heavy neutral leptons, sub-GeV dark matter, and dark neutrinos.
Conveners:
Alessandra Gnecchi (INFN, Milan)
Craig Lawrie (DESY)
Alexander Westphal (DESY)
Contact: eps23-conveners-t11 @desy.de
The RG equation for the effective potential in the leading-log (LL) approximation is constructed, valid for an arbitrary scalar field theory in four dimensions, including the non-renormalizable case. In the renormalizable case this equation reduces to the usual RG equation with the one-loop beta function.
The solution to this equation sums up the leading $\log\phi$ contributions to all orders of perturbation theory. In general, this is a second-order nonlinear partial differential equation, but in some cases it can be reduced to an ordinary one. For particular examples, this equation is solved numerically and the LL effective potential is constructed. As illustrations we consider two examples: a power-like potential and the cosmological potential $\tan^2\phi$. In physically interesting cases the derived equation opens the possibility to study properties of the effective potential: the presence of additional minima, spontaneous symmetry breaking, stability of the ground state, etc.
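As a reminder of the renormalizable benchmark mentioned above (a standard one-loop result, shown only for orientation, with the normalisation $V=\lambda\phi^4/4!$): the RG equation and its leading-log solution read $\frac{d\lambda}{d\log\mu} = \frac{3\lambda^2}{16\pi^2}$ and $V_{\rm LL}(\phi) = \frac{\phi^4}{4!}\,\frac{\lambda}{1 - \frac{3\lambda}{16\pi^2}\log(\phi/\mu)}$, which resums the leading powers of $\lambda\log\phi$; the equation presented in this talk generalises this structure to arbitrary, including non-renormalizable, scalar potentials.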
We present new soliton solutions in a class of four-dimensional supergravity theories. For special values of the parameters, the solutions can be embedded in the gauged maximal N=8 theory and uplifted to the higher-dimensional D=11 theory. We also find BPS soliton configurations preserving a certain fraction of supersymmetry.
Solitons play a special role in classical physics as well as in quantum and string theory, revealing a richer structure of the full non-perturbative regime. This class of exact solutions can be obtained from a double Wick rotation of a black hole configuration, the new solutions describing a regular spacetime devoid of horizons.
In non-supersymmetric AdS gravity, solitons play a fundamental role as they can be treated as ground states for suitable field theories. The negative mass of the AdS soliton has a natural interpretation as the Casimir energy of the gauge theory living on the conformal boundary. In a non-susy version of the AdS/CFT conjecture, this would indicate that the soliton is the lowest-energy solution with the chosen boundary conditions, leading to a new kind of positive-energy conjecture.
Finally, BPS gravitational solitons preserving a certain fraction of supersymmetry can be found, providing a privileged framework for studying the evolution of the system: the resulting dynamical equations are typically first order, as compared to the standard second-order equations of motion.
I will present a new computer program, feyntrop, which uses the tropical Monte Carlo approach to evaluate Feynman integrals numerically.
In order to apply this approach for physical kinematics, we introduce a new parametric representation of Feynman integrals that implements the causal iε prescription concretely while retaining projective invariance. feyntrop can efficiently evaluate dimensionally regulated, quasi-finite Feynman integrals, with not too exceptional kinematics in the physical region, with a relatively large number of propagators and with arbitrarily many kinematic scales. I will provide the necessary mathematical background and discuss many explicit examples of evaluated Feynman integrals.
We study the one-loop renormalisation of 4d SU(N) Yang-Mills theory with M adjoint-representation scalar multiplets. We calculate the coupled one-loop renormalization group flows for this theory by developing an algebraic description, which we find to be characterised by a non-associative algebra of marginal couplings. The 4d one-loop beta function of the gauge coupling $g^2$ vanishes for the case M = 22, which is intriguing for string theory. There are real fixed flows (fixed points of $\lambda/g^2$) only for $M \geq 406$, rendering one-loop fixed points of the gauge coupling and scalar couplings incompatible.
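The quoted value M = 22 can be checked against the standard one-loop coefficient, under the assumption (made here for illustration, consistent with the abstract) that the multiplets are real adjoint scalars: $b_0 = \frac{11}{3}\,C_A - \frac{1}{6}\,M\,T(\mathrm{adj}) = N\left(\frac{11}{3} - \frac{M}{6}\right)$, which vanishes for $M=22$ independently of $N$, since $C_A = T(\mathrm{adj}) = N$.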
We discuss a new classical action that enables efficient computation of gluonic tree amplitudes but does not contain any three-point vertices. This new formulation is obtained via a canonical transformation of the light-cone Yang-Mills action, with the field transformations based on Wilson line functionals. In addition to MHV vertices, the action also contains N$^k$MHV vertices, where $1 \leq k \leq n-4$ and n is the number of external legs. We computed tree-level amplitudes with up to 8 gluons and found agreement with standard results. In order to systematically develop quantum corrections to this new action, we first studied the one-loop effective action for the MHV action, where we were able to demonstrate that there are no missing loop contributions when treating the quantum corrections this way. Although successful, the effective action still uses the Yang-Mills vertices in the loop. To overcome this and make the MHV vertices explicit in the loop, we derived the one-loop effective action via a different approach. We now extend this to obtain loop amplitudes using our new action. The presentation is based on a manuscript in preparation; https://doi.org/10.1007/JHEP11(2022)132 ; and https://doi.org/10.1007/JHEP07(2021)187.
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12 @desy.de
The international Electron Proton Ion Collider (ePIC) experiment collaboration has formed to design and construct the first detector to be ready at the beginning of operation of the Electron-Ion Collider (EIC), a new collider to be built at Brookhaven National Laboratory. This new facility aims to understand the properties of nuclear matter and its emergence from the underlying partonic structure and dynamics of quarks and gluons, thanks to its high luminosity (10$^{33}$ – 10$^{34}$ cm$^{-2}$s$^{-1}$) and polarized beams with $\sqrt{s_{ep}}$ = 28 – 140 GeV.
Almost a quarter of a century after the shutdown of the HERA collider, this new microscope for nuclear matter will start operations. By measuring inclusive and semi-inclusive DIS and exclusive processes in electron-proton/ion collisions, the emergence of nuclear properties can be studied by precisely imaging gluons and quarks inside protons and nuclei, such as their distributions in space and momentum, the role of quarks and gluons in building the nucleon spin and the properties of gluons in nuclei at high energies, including the possibility to probe the gluon saturation regime.
The planned measurements demand challenging detector capabilities in terms of hermeticity, energy resolution, tracking precision, particle identification and required pseudorapidity coverage. The ePIC detector will be located at the IP6 interaction region of the EIC accelerator and it is planned to be completely built by mid-2031 to start commissioning with cosmics and early accelerator operations. The ePIC planned physics performance as well as the main choices for its detector sub-systems will be described, emphasizing the novel aspects in terms of technologies being used in this new high-energy physics experiment. The design status of ePIC and future path toward project approval and construction will be discussed.
The Large Hadron-electron Collider and the Future Circular Collider in electron-hadron mode [1] will make possible the study of DIS in the TeV regime, providing electron-proton (nucleus) collisions with per-nucleon instantaneous luminosities around $10^{34}$ ($10^{33}$) cm$^{-2}$s$^{-1}$ by colliding a 50-60 GeV electron beam from an energy-recovery linac with the LHC/FCC hadron beams, concurrently with other experiments for hadron-hadron collisions. Here we describe the current detector design for such experiments [1,2] and the key developments needed, included in the 2021 ECFA detector research and development roadmap [3], particularly concerning the machine-detector interface, large-acceptance tracking and calorimetry, and the technological choices to be taken in order to fulfil the demands of their physics programmes.
[1] LHeC Collaboration and FCC-he Study Group: P. Agostini et al., J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
[2] K.D. J. Andre et al., Eur. Phys. J. C 82 (2022) 1, 40, e-Print: 2201.02436 [hep-ex].
[3] ECFA Detector R&D Roadmap Process Group, report CERN-ESU-017, http://cds.cern.ch/record/2784893, 10.17181/CERN.XDPL.W2EX.
The CMS experiment at CERN uses a two-stage trigger system to filter and store events of physics importance: a hardware-based Level 1 (L1) trigger that uses fast electronics (based on FPGAs and ASICs) to process data in a pipelined fashion at 40 MHz with an output rate of around 100 kHz, and a software-based High-Level Trigger (HLT) running on computer farms with an average output rate of around 1.5 kHz. Many novel trigger algorithms, coupled with technological developments such as heterogeneous computing on GPUs, were developed to cope with the increased centre-of-mass energy, instantaneous luminosity and physics needs of Run 3. This talk summarises the performance of the CMS HLT during the first year of Run 3.
The High-Luminosity LHC will open an unprecedented window on the weak-scale nature of the universe, providing high-precision measurements of the standard model as well as searches for new physics beyond the standard model. Such precision measurements and searches require information-rich datasets with a statistical power that matches the high luminosity provided by the Phase-2 upgrade of the LHC. Efficiently collecting those datasets will be a challenging task, given the harsh environment of 200 proton-proton interactions per LHC bunch crossing. For this purpose, CMS is designing an efficient data-processing hardware trigger (Level-1) that will include tracking information and high-granularity calorimeter information. Trigger data analysis will be performed through sophisticated algorithms such as particle-flow reconstruction, including widespread use of machine learning. The current conceptual system design is expected to take full advantage of advances in FPGA and link technologies over the coming years, providing a high-performance, low-latency computing platform for large throughput and sophisticated data correlation across diverse sources. The expected impact on the physics reach of the experiment will be summarized in this presentation and illustrated with selected benchmark channels.
During the second Long Shutdown (LS2) of the LHC, three new detectors were installed in ALICE, implementing continuous data readout with online reconstruction and data compression to benefit from the increased luminosity of the LHC beam during Runs 3 and 4. The interaction rate in ALICE has been gradually increasing; the goal is to reach 50 kHz for Pb-Pb and up to 1 MHz for pp collisions. One of the new ALICE detectors is the Fast Interaction Trigger (FIT). Its main functionality includes generating minimum-latency interaction triggers (<425 ns), luminosity monitoring with online feedback to the LHC, precision collision-time measurement with a resolution better than 40 ps, determination of the centrality and event plane for heavy-ion collisions, and tagging of diffractive and ultra-peripheral events. The FIT detector consists of three subsystems: two fast Cherenkov arrays with 2 cm thick quartz radiators coupled to modified MCP-PMT photosensors (FT0), a large-area scintillator disc (FV0) implementing a novel light collection system, and a Forward Diffractive Detector (FDD). FDD comprises two plastic scintillator arrays with fast wavelength-shifting bars, optical fibre bundles and PMTs. The FDD arrays are located about 20 m from the interaction point, on opposite sides. A brief description of the detector and its functionalities will be given, together with the commissioning and its performance status.
The tracking performance of the ATLAS detector relies critically on its 4-layer Pixel Detector. As the detector component closest to the interaction point, it is subjected to a significant amount of radiation over its lifetime. At the start of the LHC proton-proton collision Run 3 in 2022, the innermost layer, the IBL, consisting of planar and 3D pixel sensors, had received an integrated fluence of approximately $\Phi = 1 \times 10^{15}$ 1 MeV n$_{\rm eq}$/cm$^2$. The ATLAS collaboration is continually evaluating the impact of radiation on the Pixel Detector. In this talk the key status and performance metrics of the ATLAS Pixel Detector are summarised, and the operational experience and requirements to ensure optimum data quality and data-taking efficiency will be described, with special emphasis on the radiation-damage experience. A quantitative analysis of charge collection, $dE/dx$, occupancy reduction with integrated luminosity, underdepletion effects and effects of annealing will be presented and discussed, as well as the operational issues and mitigation techniques adopted for LHC Run 3.
The LUXE experiment aims at studying high-field QED in electron-laser and photon-laser interactions, with the 16.5 GeV electron beam of the European XFEL and a laser beam with power of up to 350 TW. The experiment will measure the spectra of electrons, positrons and photons in expected ranges of $10^{-3}$ to $10^9$ per 1 Hz bunch crossing, depending on the laser power and focus. These measurements have to be performed in the presence of a high background of low-energy particles. To meet these challenges, the experiment will use, for the high-rate electron and photon fluxes, Cherenkov radiation detectors, scintillator screens, sapphire sensors, as well as lead-glass monitors for backscattering off the beam dump. A four-layer silicon-pixel tracker and a compact electromagnetic tungsten calorimeter will be used to measure the positron spectra. The layout of the experiment and the expected performance under the harsh radiation conditions will be presented. The experiment has received stage 1 critical approval (CD1) from the DESY management and is in the process of preparing its technical design report (TDR). It is expected to start running in 2026.
The Scintillating Fibre (SciFi) tracker is the new LHCb tracking detector located downstream of the magnet; it was installed last year and has been under commissioning using early LHC collision data from 2022 and 2023. The detector is built from scintillating fibres with a diameter of 250 µm. Scintillation light from the fibres is recorded with arrays of state-of-the-art multi-channel silicon photomultipliers (SiPMs). A custom ASIC is used to digitise the SiPM signals, and subsequent digital electronics performs clustering and data compression before the data is sent via optical links to the DAQ system. Before commissioning with beam, the internal clock of the front-end electronics (FEE) is adjusted per data link to achieve low error rates in the data transmission. The FEE time phase is then calibrated with respect to beam interactions so that detector signals are captured with the correct phase and in the correct bunch cycle. The master clock and control phase are scanned with a granularity of about 0.78 ns, and the baseline time (t0) per data link is defined from these scan samples. A time offset, determined using dedicated SciFi simulation, is applied to t0 for best efficiency, since the particle arrival times from proton-proton collisions differ between data links. The t0 has been monitored over long time scales, showing good time stability of the SciFi electronics. The detector position is measured and monitored by a camera system, which provides the first detector position information; the most precise alignment is obtained with a software algorithm that uses charged-particle trajectories. The position of each half-layer, module and mat is parametrised by three translations and three rotations, and these alignment parameters play an important role in track reconstruction; the detector position resolution in track reconstruction improves significantly after alignment. This talk will give an overview of the SciFi detector, the experience gained during commissioning, and its preliminary performance.
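As a rough illustration of the per-link time-alignment logic described above (a toy sketch only; the function names, gate width and numbers are hypothetical and not taken from the SciFi software), the procedure amounts to scanning the clock phase in steps of about 0.78 ns, picking the phase that maximises the in-time fraction of signals, and adding a simulation-derived offset:

    import numpy as np

    STEP_NS = 0.78          # scan granularity quoted in the talk
    N_STEPS = 32            # hypothetical number of scan points per data link

    def efficiency(phase_ns, hit_times_ns, gate_ns=6.25):
        """Fraction of hits captured in the sampling gate for a given clock phase."""
        folded = (hit_times_ns - phase_ns) % 25.0   # fold into one 25 ns bunch cycle
        return np.mean(folded < gate_ns)

    def scan_t0(hit_times_ns, sim_offset_ns=0.0):
        """Pick the phase with the highest in-time fraction, then add a simulation-derived offset."""
        phases = np.arange(N_STEPS) * STEP_NS
        effs = [efficiency(p, hit_times_ns) for p in phases]
        return phases[int(np.argmax(effs))] + sim_offset_ns

    # toy usage: hits clustered around 10 ns within the bunch cycle
    rng = np.random.default_rng(1)
    toy_hits = 10.0 + rng.normal(0.0, 1.5, size=10_000)
    print("chosen t0 [ns]:", scan_t0(toy_hits, sim_offset_ns=1.2))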
High intensity beams provide a significant challenge to DAQ systems, in particular when reading out many sensors. The MUonE experiment has been conducting beam tests using the M2 muon beam at CERN, with in-spill intensity of $5 \times 10^7$ muons/s, using silicon strip sensors with a bandwidth of 5 Gb/s per module. A pilot run is scheduled for late summer, which will incorporate several such modules arranged in three tracking stations and a prototype electromagnetic calorimeter connected to a triggerless readout system. Limits on processing and data storage will necessitate online event selection to be implemented in hardware, on state-of-the-art AMD-Xilinx UltraScale+ FPGAs.
The status and plans of the MUonE DAQ operation will be presented, outlining a general purpose platform for online event selection, from simple occupancy cuts, to track reconstruction, vertexing and particle identification using low-latency machine learning.
LUXE (Laser Und XFEL Experiment) is a new experiment in planning in Hamburg, which will study Quantum Electrodynamics at the strong-field frontier. LUXE intends to measure the positron production rate in this unprecedented regime by using, among others, a silicon tracking detector. The large number of expected positrons traversing the sensitive detector layers results in an extremely challenging combinatorial problem, which can become computationally expensive for classical computers. We investigate the potential future use of gate-based quantum computers for pattern recognition in track reconstruction, based on a quadratic unconstrained binary optimisation and a quantum graph neural network. The talk will introduce these approaches and compare their performance with a classical track reconstruction algorithm.
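As a minimal sketch of the QUBO formulation mentioned above (a toy example; the actual LUXE variables, weights and solvers differ), each candidate track segment becomes a binary variable, the diagonal rewards selecting good segments, off-diagonal terms reward geometrically compatible pairs and penalise pairs sharing a hit, and the lowest-energy bit string gives the selected candidates. Small instances like this can be validated by brute force before being mapped to quantum hardware:

    import itertools
    import numpy as np

    def qubo_energy(x, Q):
        """Energy of bit string x for QUBO matrix Q (linear terms on the diagonal)."""
        x = np.asarray(x)
        return float(x @ Q @ x)

    def solve_brute_force(Q):
        """Enumerate all bit strings and return the minimum-energy assignment."""
        n = Q.shape[0]
        best = min(itertools.product([0, 1], repeat=n), key=lambda x: qubo_energy(x, Q))
        return np.array(best), qubo_energy(best, Q)

    # toy instance with 4 candidate segments:
    # diagonal  = reward for selecting a segment (negative = favoured)
    # off-diag  = -1 for compatible pairs, +5 penalty for pairs sharing a hit (conflict)
    Q = np.array([
        [-1.0, -1.0,  0.0,  5.0],
        [ 0.0, -1.0, -1.0,  0.0],
        [ 0.0,  0.0, -1.0,  5.0],
        [ 0.0,  0.0,  0.0, -1.0],
    ])

    selection, energy = solve_brute_force(Q)
    print("selected segments:", selection, "energy:", energy)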
The Quantum Angle Generator (QAG) is a new quantum machine learning model designed to produce precise images on current Noisy Intermediate-Scale Quantum (NISQ) devices. The QAG model uses variational quantum circuits as its core, and multiple circuit architectures are evaluated. With the addition of the MERA up-sampling architecture, the QAG model achieves exceptional results, which are analysed and evaluated in detail. To the best of our knowledge, this is the first quantum model to achieve such accurate outcomes.
This study explores the QAG model's noise robustness through an extensive quantum noise study. The results indicate that the model, when trained on a quantum device, can learn the hardware noise behaviour and produce excellent outcomes. When simulated quantum hardware noise is included, the model's results remain stable up to approximately 1.5% noise during inference and almost 3% during training. However, running the model trained without noise on real quantum hardware leads to a decrease in accuracy. If the model is instead trained on hardware, it learns the underlying noise behaviour and achieves the same precision as on the noisy simulator. Additionally, the training showed that the model can recover even from significant hardware calibration changes during training, with up to 8% noise.
This work demonstrates the QAG model's ability to learn hardware noise behavior and deliver accurate results in the presence of realistic noise levels expected in real-world quantum hardware. The QAG model is utilized on simulated calorimeter shower images, which are significant in high-energy physics simulations used to determine particle energies and identify unknown particles at CERN's Large Hadron Collider.
Particle track reconstruction plays a crucial role in the exploration of new physical phenomena, particularly when rare signal tracks are obscured by a significant background. At muon colliders, where beam muons interacting with the detector produce secondary and tertiary background particles, track reconstruction can be computationally intensive due to the large number of detector hits. Formulating the reconstruction task as a Quadratic Unconstrained Binary Optimisation (QUBO) enables the use of quantum computers, which are believed to offer an advantage over classical computers in such optimisation scenarios.
Timing information provided by the tracker is a key element in suppressing the large background at muon colliders. The QUBO parameters are determined by combining spatial and temporal information from detector hits, resulting in a 4D quantum algorithm.
To demonstrate the effectiveness of this approach, the quantum algorithm is used to reconstruct signal tracks from samples consisting of Monte Carlo simulated charged particles overlaid with background hits. I will present the obtained reconstruction performance and discuss possible paths for further improvements.
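Schematically (an illustrative form, not the exact coefficients used in this work), the quantity minimised is the usual QUBO objective $O(x) = \sum_i a_i\,x_i + \sum_{i<j} b_{ij}\,x_i x_j$ with $x_i \in \{0,1\}$, where each $x_i$ flags a candidate hit doublet or triplet, $a_i$ encodes its intrinsic quality, and $b_{ij}$ combines the spatial alignment of a pair with the consistency of its hit times, while conflicting (hit-sharing) pairs receive positive penalties. Folding the timing compatibility into $b_{ij}$ is what makes the optimisation effectively four-dimensional.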