The European Physical Society Conference on High Energy Physics (EPS-HEP) is one of the major international conferences in the field. It has reviewed the state of high energy physics every two years since 1971 and is organized by the High Energy and Particle Physics Division of the European Physical Society. Previous conferences in this series were held in Hamburg (online), Ghent, Venice, Vienna, Stockholm, Grenoble, Krakow, Manchester, Lisbon, and Aachen.
The 2023 European Physical Society Conference on High Energy Physics will be hosted jointly by DESY and Universität Hamburg. The conference will feature plenary, review and parallel sessions covering all major areas and developments in high energy and particle physics, astroparticle physics, neutrino physics and related fields.
We very much hope that this conference will be an event that brings scientists from our field together in person to discuss science and enjoy the excitement of an in-person conference!
We encourage participation in person, but limited remote participation will also be possible.
Everyone who wants to attend any of the sessions is asked to register for the EPS-HEP2023 conference. Remote access to sessions will be possible using a key given to registered participants only. For remote participation the registration fee will be significantly reduced.
Participants in the conference are invited to submit abstracts for parallel session talks and for posters. Please follow the instructions at Call for Abstracts to submit and manage your abstracts.
The proceedings of the EPS-HEP2023 conference will be published in PoS - Proceedings of Science, the open access online journal organised by SISSA, the International School for Advanced Studies based in Trieste.
Registration
- Opening: February 1, 2023
- End of early registration: May 30, 2023 (extended to June 26, 2023)
- End of late registration: August 1, 2023

Abstract submission
- Opening: March 8, 2023
- Closing: June 2, 2023
- Acceptance notification: June 20, 2023
Conveners:
Livia Conti (INFN)
Carlos Perez de los Heros (Uppsala University)
Martin Tluczykont (Universität Hamburg)
Gabrijela Zaharijas (UNG)
Contact: eps23-conveners-t01@desy.de
NUSES is a new space mission project aiming to test innovative observational and technological approaches to the study of low energy cosmic and gamma rays, high energy astrophysical neutrinos, the Sun-Earth environment, space weather and Magnetosphere-Ionosphere-Lithosphere Coupling (MILC). The satellite will host two experiments, named Terzina and Zirè. While Terzina will focus on space-based detection of extensive air showers induced by ultra-high-energy cosmic rays or neutrinos, Zirè will perform measurements of electrons, protons and light nuclei from a few MeV up to hundreds of MeV, also testing new tools for the detection of cosmic MeV photons. Monitoring of possible MILC signals will also be possible by extending the sensitivity to very low energy electrons with a dedicated Low Energy Module (LEM). Innovative technologies for space-based particle detectors, e.g. exploiting Silicon Photomultipliers (SiPMs) for the light readout system, will be adopted. In this work, a general overview of the scientific goals, the design activities, and the overall status of the NUSES mission will be presented and discussed.
Axionlike particles (ALPs) are predicted in many extensions of the Standard Model and are viable dark matter candidates. These particles could mix with photons in the presence of a magnetic field. Searching for the effects of ALP-photon mixing in gamma-ray observations of blazars has provided some of the strongest constraints on the ALP parameter space so far. For the first time, we perform a combined analysis of Fermi Large Area Telescope data on three bright flaring flat-spectrum radio quasars, with the blazar jets themselves as the dominant mixing region. We include a full treatment of photon-photon dispersion within the jet and account for the uncertainty in our B-field model by leaving the field strength free in the fit. Overall, we find no evidence for ALPs, but we are able to exclude photon-ALP couplings $g_{a\gamma} > 5\times10^{-12}\,\mathrm{GeV}^{-1}$ for ALP masses between 5 neV and 200 neV at 95% confidence. In this mass region, these are the strongest bounds on the photon-ALP coupling to date from gamma-ray observations.
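As background to the coupling quoted above (an editorial note, not part of the submitted abstract), ALP-photon mixing is conventionally described by the interaction term

```latex
% Standard ALP-photon interaction, in the conventions common to gamma-ray ALP searches
\mathcal{L}_{a\gamma} \;=\; -\frac{1}{4}\, g_{a\gamma}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu}
\;=\; g_{a\gamma}\, a\, \vec{E}\cdot\vec{B},
```

so that in an external magnetic field the ALP field $a$ couples linearly to the photon, inducing $a \leftrightarrow \gamma$ oscillations along the line of sight; the excluded couplings quoted in the abstract refer to $g_{a\gamma}$.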
The High Energy cosmic-Radiation Detection (HERD) facility, planned for launch in 2027, is one of the scientific payloads on board the Chinese Space Station. HERD's primary scientific objectives cover several high energy astrophysics topics, including the search for dark matter annihilation products, precise measurements of the cosmic electron (and positron) spectrum beyond 10 TeV, analysis of cosmic ray spectra for various species up to the knee energy, and the monitoring and surveying of high-energy gamma rays. At the heart of HERD lies a 3-dimensional imaging calorimeter, surrounded on five sides by a fiber tracker, a plastic scintillator detector, and a silicon charge detector. To ensure calibration of TeV nuclei, a transition radiation detector is employed. Thanks to its design with five instrumented sides, HERD has an acceptance an order of magnitude greater than that of existing experiments. In this presentation, I will provide an overview of the recent progress made in the HERD project.
The IceCube Neutrino Observatory has detected neutrinos from various astrophysical sources with its 1 km$^3$ detector volume in Antarctic ice. IceTop, the cosmic-ray detector on the surface of IceCube, consists of 81 pairs of ice-Cherenkov tanks. The rise in measurement threshold due to accumulating snow inspired the next generation of South Pole detectors, comprising elevated scintillator panels and radio antennas controlled by a central DAQ system and referred to as the Surface Array Enhancement (SAE). The planned IceCube-Gen2 Surface Array is expected to be built on the same design. An initial study with the SAE prototype station has already been conducted. We briefly review the Enhancement, as well as the deployment and calibration status of the upcoming stations of the planned array of 32 stations. The focus of this contribution is the radio detection of extensive air showers. A preliminary proof of concept for the $X_{\rm max}$ estimation with data from the 3 antennas of the prototype station was carried out. An extension of the method from previous analyses is also briefly discussed.
Baikal-GVD is a large neutrino telescope under construction in Lake Baikal. It is currently the largest neutrino telescope operating in the Northern Hemisphere. Following the winter expedition of 2023, the detector comprises a three-dimensional array of 3456 photo-sensitive units (optical modules). The design of the experiment allows data collection even during the construction phase. In this contribution we review the design and the basic characteristics of the Baikal-GVD detector. Some preliminary results on diffuse neutrino flux measurements with the partially completed detector will be presented.
Conveners:
Summer Blot (DESY)
Pau Novella (IFIC)
Davide Sgalaberna (ETH)
Jessica Turner (Durham University)
Contact: eps23-conveners-t04@desy.de
Borexino is a 280-ton liquid scintillator detector that took data from May 2007 to October 2021 at the Laboratori Nazionali del Gran Sasso in Italy. Thanks to the unprecedented radio-purity of the detector, real-time spectroscopic measurements of solar neutrinos from both the pp chain and the CNO fusion cycle of the Sun have been performed. Borexino also reported the first directional measurement of sub-MeV $^7$Be solar neutrinos using Phase-I data (May 2007-May 2010) and a novel technique called Correlated and Integrated Directionality (CID). In this technique, the directional solar neutrino signal is discriminated from the isotropic background events by correlating the well-known position of the Sun with the direction of the early hits of each event, exploiting the sub-dominant Cherenkov photons emitted at early times. This angular distribution in data is fitted with the signal and background distributions from the Borexino Monte Carlo simulation to infer the number of solar neutrinos, after taking all systematics into account. For the first time, we provide a CNO rate measurement without an independent constraint on the $^{210}$Bi background rate, by exploiting the CID technique on the full Borexino live-time dataset. This talk will present the complete analysis strategy and the latest results on CNO solar neutrinos obtained using the CID technique in Borexino. In addition, we also present the most precise CNO measurement obtained by Borexino using a multivariate technique on the Phase-III dataset as in the 2022 analysis, where the novel CID result is now applied as an additional constraint.
KATRIN is probing the effective electron anti-neutrino mass by a precise measurement of the tritium beta-decay spectrum near its kinematic endpoint. Based on the first two measurement campaigns a world-leading upper limit of 0.8 eV (90% CL) was placed. New operational conditions for an improved signal-to-background ratio, the steady reduction of systematic uncertainties and a substantial increase in statistics allow us to expand this reach. In this talk, I will present the status of the KATRIN experiment and provide an insight into the latest result.
The smallness of neutrino masses is one of the most intriguing puzzles in particle physics. One of the most natural ways to introduce suppressed neutrino masses is the construction of dimension-5 effective operators, known as Weinberg operators. In the presence of only the standard Higgs scalar doublet, these operators arise in the three usual seesaw models. In this work, we investigate the consequences of adding new scalar Higgs multiplets in this context. We take into account the possible UV completions of such models and the phenomenology due to the dimension-6 operators.
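For reference (an editorial aside; conventions may differ from the authors' by factors of order one), the dimension-5 Weinberg operator and the suppressed neutrino mass it generates after electroweak symmetry breaking read

```latex
\mathcal{O}_5 \;=\; \frac{c_{\alpha\beta}}{\Lambda}
\left(\overline{L^{c}_\alpha}\,\tilde{H}^{*}\right)\!\left(\tilde{H}^{\dagger} L_\beta\right)
\quad\xrightarrow{\ \langle H^{0}\rangle \,=\, v/\sqrt{2}\ }\quad
(m_\nu)_{\alpha\beta} \;\sim\; c_{\alpha\beta}\,\frac{v^{2}}{\Lambda},
```

with $v \simeq 246$ GeV, so a heavy new-physics scale $\Lambda$ naturally yields sub-eV neutrino masses; the three standard seesaw models correspond to the three possible tree-level UV completions of this operator.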
We discuss a TeV scale extension of the Standard Model in which a dark sector seeds neutrino mass generation radiatively within the linear seesaw mechanism. Since symmetry prevents tree-level contributions, tiny neutrino masses are generated at one-loop from spontaneous lepton number violation by the small vacuum expectation value of a Higgs triplet. The model can have sizeable rates for lepton flavour violating processes such as µ → eγ. We also comment on the implications for dark-matter and collider searches.
I will present some results obtained regarding the emergence of decoherence in neutrino oscillations. In our model all particles, including the source and the detector, are treated dynamically and evolved consistently within Quantum Field Theory; decoherence can emerge naturally given the time evolution of the initial state and the final state considered.
We have shown that some of the assumptions commonly used in the literature, such as the covariance of the wavepackets, are inconsistent. We found that a crucial ingredient for decoherence is the localization in space-time of the neutrino creation and detection: in Nature, such a measurement is usually carried out by environmental interactions; however, it could also be approximated by considering localized wavefunctions in the final state. On the other hand, if the environmental interactions are not present (for example, if the decay happens in vacuum), the final positions of the daughter particles will not be measured, i.e. they will be described by plane waves instead: in this case the neutrino is not localized either, and decoherence does not arise.
A consequence of the time evolution is that a Gaussian wavepacket will gradually spread: I will show that such an effect could in principle affect decoherence; moreover, it would depend on the absolute mass scale of the neutrino rather than on $\Delta m^2$, which could offer a possible way to probe this parameter through neutrino oscillation studies.
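As a textbook illustration of the spreading effect mentioned above (an editorial note; this is the non-relativistic case, whereas the relativistic neutrino involves corrections in $m^2/E^2$), a free Gaussian wavepacket of initial width $\sigma_x(0)$ spreads as

```latex
\sigma_x(t) \;=\; \sigma_x(0)\,\sqrt{1+\left(\frac{t}{\tau}\right)^{2}},
\qquad
\tau \;=\; \frac{2\,m\,\sigma_x(0)^{2}}{\hbar},
```

so the spreading timescale $\tau$ depends on the absolute mass $m$ rather than on a mass-squared difference, in line with the sensitivity argued for in the abstract.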
Conveners:
Laura Fabbietti (TU München)
Gunther Roland (MIT)
Jasmine Brewer (CERN)
Jeremi Niedziela (DESY)
Contact: eps23-conveners-t05@desy.de
The physics of ultraperipheral ultrarelativistic heavy-ion collisions gives an excellent opportunity to study photon-photon interactions. Fast-moving charged particles (nuclei) are surrounded by an electromagnetic field that can be considered as a source of (almost real) photons. The photon flux scales as the square of the nuclear charge, so $^{208}$Pb has a considerable advantage over protons as far as the photon flux is concerned.
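Schematically (an editorial aside, with $\gamma$ the Lorentz factor and $b_{\min}$ the minimal impact parameter), the equivalent-photon flux behaves in the leading-logarithm approximation as

```latex
N(\omega) \;\simeq\; \frac{2\, Z^{2} \alpha}{\pi\,\omega}\,
\ln\!\frac{\gamma}{\omega\, b_{\min}},
```

so for $^{208}$Pb the factor $Z^{2} = 82^{2} \approx 6.7\times 10^{3}$ provides the large enhancement over proton beams noted above.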
Here we discuss possible future studies of photon-photon scattering using the planned ALICE 3 apparatus. ALICE 3 is planned as a next-generation heavy-ion detector for LHC Runs 5 and 6. The broad range of (pseudo)rapidities and the lower cuts on transverse momenta make it necessary to consider not only the dominant box contributions but also other, not yet studied, subleading contributions, such as double-hadronic photon fluctuations, $t/u$-channel neutral pion exchange, and resonance excitation ($\gamma \gamma \to R$) and deexcitation ($R \to \gamma \gamma$). Here we include the $R = \pi^0$, $\eta$, $\eta'$ contributions. The resonance contributions give intermediate photon transverse momenta; however, these contributions can be eliminated by imposing windows on the diphoton invariant mass. We study in detail the individual fermionic box contributions. The electron/positron boxes dominate at low diphoton invariant masses, $M_{\gamma \gamma} < 1$ GeV.
The $Pb Pb \to Pb Pb \gamma \gamma$ cross section is calculated within equivalent photon approximation in the impact parameter space. Several differential distributions will be presented and discussed. We predict a huge cross section for typical ALICE 3 cuts, a few orders of magnitude larger than for current ATLAS or CMS experiments. We also consider the two-$\pi^0$ background, which can, in principle, be separated/eliminated at the new kinematical range for the ALICE-3 measurements by imposing dedicated cuts.
Measurements of direct photons provide valuable information on the properties of the quark-gluon plasma because they are colour-neutral and created during all phases of the collision. Sources of photons include initial hard scatterings, Bremsstrahlung and the fragmentation process, jet-medium interactions, and radiation from the medium. Direct thermal photons, produced by the plasma, are sensitive to the collective flow at photon production time and an effective medium temperature. Furthermore, Bose-Einstein correlations can be used to study the space-time evolution of the medium created in heavy-ion collisions with Hanbury Brown and Twiss interferometry. Direct prompt photons produced in hadronic collisions have minimal event activity from the hard process, allowing the isolation method to suppress background photons. Isolated photon measurements in pp and p--Pb collisions can constrain NLO pQCD predictions. Hadrons correlated with isolated photons are a promising channel to study the energy loss in heavy-ion collisions and to constrain the $Q^{2}$ of the initial hard scattering, obtaining information on the amount of energy lost by the parton recoiling off the photon.
The ALICE experiment reconstructs photons via conversions, using its excellent tracking capabilities, and directly in calorimeters. Combining these methods, ALICE can measure direct photons at mid-rapidity with transverse momentum from 0.4 GeV/$c$. This talk presents ALICE measurements of direct-photon distributions using statistical (decay-photon subtraction, thermal photons) and isolation (prompt photons) methods in different collision systems and energies, together with their correlations.
Photonuclear reactions are induced by the strong electromagnetic field generated by ultrarelativistic heavy-ion collisions. These processes have been extensively studied in ultraperipheral collisions, in which the impact parameter is larger than twice the nuclear radius. In recent years, the observation of coherent J/$\psi$ photoproduction has been claimed in nucleus--nucleus (A--A) collisions with nuclear overlap, based on the measurement of an excess (with respect to hadroproduction expectations) in the very low transverse momentum ($p_{\rm T}$) J/$\psi$ yield. Such quarkonium measurements can help constrain the nuclear gluon distribution at low Bjorken-$x$ and high energy. In addition, they can shed light on the theory behind photon-induced reactions in A--A collisions with nuclear overlap, including possible interactions of the measured probes with the formed and fast expanding quark-gluon plasma. In order to confirm the photoproduction origin of the very low-$p_{\rm T}$ J/$\psi$ yield excess, polarization is a golden observable: the produced quarkonium is expected to retain the polarization of the incoming photon due to s-channel helicity conservation. ALICE can measure inclusive and exclusive quarkonium production down to zero transverse momentum, at forward rapidity (2.5 <$y$< 4) and midrapidity (|$y$|< 0.9). In this contribution, we will report on the new preliminary measurement of the $y$-differential cross section and the first polarization analysis at the LHC of coherently photoproduced J/$\psi$ in peripheral Pb--Pb collisions. Both measurements are conducted at forward rapidity in the dimuon decay channel. These results will be discussed together with the recent results on coherent J/$\psi$ photoproduction as a function of centrality at both mid and forward rapidities. Comparison with models will be shown where available.
One of the main challenges in nuclear physics is studying the structure of the atomic nucleus. Recently, it has been shown that high-energy heavy-ion collisions at RHIC and the LHC can complement low-energy experiments. Heavy-ion collisions provide a snapshot of the nuclear distribution at the time of collisions, offering a unique and precise probe of the nuclear structure.
This talk presents our latest developments in nuclear structure studies using the novel multi-particle correlations technique at relativistic energies. Specifically, we demonstrate how to precisely constrain the quadrupole deformation $\beta_2$ and triaxial structure of $^{129}$Xe and showcase new opportunities to observe the $\alpha$-clustering structure of $^{16}$O using A Multi-Phase Transport model (AMPT). We propose a new multi-particle correlation algorithm that allows us to study genuine multi-particle correlations of the anisotropic flow, $v_{n}$, and the mean transverse momentum, $[p_{\rm T}]$. These new cumulants not only show better sensitivity to the nuclear structure than existing standard observables, such as anisotropic flow and event-by-event fluctuations of $[p_{\rm T}]$, but they also help to pin down the uncertainty in the width of the nucleon and the neutron skin of $^{208}$Pb at the LHC. This approach can help resolve the current discrepancy between state-of-the-art ab initio nuclear theory predictions and the recent determination from parity-violating asymmetries in polarised electron scattering by PREX. These latest developments have vast potential for heavy-ion data taking at the LHC. They will be a crucial component of the bridge between low-energy nuclear physics at the MeV energy scale and high-energy heavy-ion physics at the TeV energy scale.
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06@desy.de
The elastic scattering of protons at 13 TeV is measured in a range of the protons' transverse momenta allowing access to the Coulomb-Nuclear Interference region. The data were collected using dedicated special LHC optics with $\beta^* = 2.5$ km. The total cross section as well as the $\rho$-parameter, the ratio of the real to imaginary part of the forward elastic scattering amplitude, are measured and compared to various models and to results from other experiments. The measurement of exclusive production of pion pairs at the LHC using 7 TeV data is also presented. This represents the first use of proton tagging to measure an exclusive hadronic final state at the LHC. In addition, the analysis of the momentum difference between charged hadrons in pp, p-lead, and lead-lead collisions at various energies is performed in order to study the dynamics of hadron formation. The spectra of correlated hadron chains are explored and compared to predictions based on the quantized fragmentation of a three-dimensional QCD helix string. If ready, the measurement of charged particle distributions using LHC data collected at a centre-of-mass energy of 13.6 TeV will also be shown.
Exclusive and diffractive physics measurements are important for better understanding of the non-perturbative regime of QCD. Recent results from the CMS and TOTEM experiments using pp collisions at a center-of-mass energy of 13 TeV are presented in this talk.
Lepton-hadron collision studies at the Electron-Ion Collider (EIC) in the United States will provide unprecedented, high accuracy insights into the internal structure of the atomic nucleus in the coming years. Good control of radiative corrections is necessary for the EIC to be fully exploited and for valuable information to be extracted from various measurements. However, there is a significant gap to fill: there are no automated simulation tools relevant for the EIC that can incorporate next-to-leading order (NLO) QCD radiative corrections.
This talk presents our implementation of photoproduction at fixed order in MadGraph5_aMC@NLO, a framework widely used for (N)LO calculations at the Large Hadron Collider (LHC). It applies to electron-hadron collisions, in which the quasi-real photon comes from the electron, as well as to proton-nucleus and nucleus-nucleus collisions. In addition, I will present another extension of the MadGraph5_aMC@NLO framework towards asymmetric collisions, in order to provide predictions e.g. for proton-nucleus collisions.
We evaluate the cross section for diffractive bremsstrahlung of a single photon in the $pp \to pp \gamma$ reaction at high energies and at forward photon rapidities. Several differential distributions, for instance in ${\rm y}$, $k_{\perp}$ and $\omega$ (the rapidity, the absolute value of the transverse momentum, and the energy of the photon, respectively), are presented. We compare the results of our ``exact'' model with two versions of soft-photon approximations, SPA1 and SPA2, in which the radiative amplitudes contain only the leading terms proportional to $\omega^{-1}$. SPA1, which does not have the correct energy-momentum relations, performs surprisingly well in the kinematic range considered. We also discuss azimuthal correlations between the outgoing particles. The azimuthal distributions are not isotropic and differ between our exact model and the SPAs. We also discuss the possibility of a measurement of two-photon bremsstrahlung in the $pp \to pp \gamma \gamma$ reaction. In our calculations we impose a cut on the relative energy loss ($0.02 < \xi_{i} < 0.1$, $i = 1,2$) of the protons where measurements by the ATLAS Forward Proton (AFP) detectors are possible. Requiring both diffractively scattered protons in the AFP and one forward photon (measured at LHCf) reduces the cross section for $pp \to pp \gamma$ almost to zero. On the other hand, much less cross-section reduction occurs for $pp \to pp \gamma \gamma$ when the photons are emitted on opposite sides of the ATLAS interaction point and can be measured by the two different arms of LHCf. For the SPA1 ansatz we find $\sigma(pp \to pp \gamma \gamma) \simeq 0.03$~nb at $\sqrt{s} = 13$ TeV with the cuts $0.02 < \xi_{i} < 0.1$, $8.5 < {\rm y}_{3} < 9$, $-9 < {\rm y}_{4}< -8.5$. Our predictions can be verified by combined ATLAS and LHCf measurements. We also discuss the role of the $pp \to pp \pi^0$ background for single photon production.
References: P. Lebiedowicz, O. Nachtmann, A. Szczurek, arXiv:2303.13979 [hep-ph].
We discuss production of far-forward $D$ mesons/antimesons and neutrinos/antineutrinos from their semileptonic decays in proton-proton collisions at LHC energies. We include the gluon-gluon fusion $gg \to c\bar{c}$, the intrinsic charm (IC) $gc \to gc$, and the recombination $gq \to Dc$ partonic mechanisms. The calculations are performed within the $k_T$-factorization approach and the hybrid model using different unintegrated parton distribution functions (uPDFs) for gluons from the literature, as well as within the collinear factorization approach. We compare our results to the LHCb data for forward $D^{0}$-meson production at $\sqrt{s} = 13$ TeV for different rapidity bins in the interval $2 < y < 4.5$. The IC and recombination mechanisms give negligible contributions at the LHCb kinematics. Both mechanisms become crucial at larger rapidities, where they dominate over the standard charm production mechanisms. At high energies there are so far no experiments probing this region. We present uncertainty bands for both mechanisms. Somewhat reduced uncertainty bands will be available soon from fixed-target charm meson production experiments in $pA$ collisions. We also present energy distributions for forward electron, muon and tau neutrinos to be measured at the LHC by the currently operating FASER$\nu$ experiment, as well as by future experiments like FASER$\nu2$ or FLArE, proposed very recently by the Forward Physics Facility project.
Contributions of the different mechanisms are shown separately. For all kinds of neutrinos (electron, muon, tau) the subleading contributions, i.e. the IC and/or the recombination, dominate over light meson (pion, kaon) decays and the standard gluon-fusion charm production contribution for neutrino energies $E_{\nu} > 300$ GeV. For electron and muon neutrinos both mechanisms lead to similar production rates, and their separation seems rather impossible. On the other hand, for the $\nu_{\tau} + {\bar \nu}_{\tau}$ neutrino flux the recombination contribution is further reduced, making the measurement of the IC contribution very attractive.
[1] R. Maciuła and A. Szczurek, Far-forward production of charm mesons and neutrinos at forward physics facilities at the LHC and the intrinsic charm in the proton, Phys. Rev. D 107, no.3, 034002 (2023).
Conveners:
Thomas Blake (University of Warwick)
Marzia Bordone (CERN)
Thibaud Humair (MPP)
Contact: eps23-conveners-t08@desy.de
The entanglement of the neutral kaon pairs produced at the DA$\Phi$NE $\phi$-factory is a unique tool to test discrete symmetries. The exchange of in and out states required for a genuine test of an antiunitary transformation, as implied by time reversal, is implemented by exploiting the entanglement of ${K}^0\bar{K}{}^0$ pairs produced at a $\phi$-factory. We will present the final result of the first direct test of the CPT and T symmetries in neutral kaon transitions performed at KLOE.
Novel quantum phenomena have recently been discussed [1] in association with a peculiar time correlation between entangled neutral kaons produced at a φ-factory: the past state of the first decayed kaon, when it was still entangled before its decay, is post-tagged by the result and the time of a future observation of the other kaon's decay. This surprising “from future to past” effect is fully observable. Preliminary results obtained from the analysis of data collected by the KLOE experiment at the DAΦNE collider are presented, showing experimental evidence for this new effect.
[1] J. Bernabeu, A. Di Domenico, Phys. Rev. D 105 (11) (2022) 116004.
The NA62 experiment at CERN collected the world's largest dataset of charged kaon decays in 2016-2018, leading to the first measurement of the branching ratio of the ultra-rare $K^+ \rightarrow \pi^+ \nu \bar\nu$ decay, based on 20 candidates. In this talk the NA62 experiment reports recent results from analyses of $K^+ \rightarrow \pi^0 e^+ \nu \gamma$, $K^+ \rightarrow \pi^+ \mu^+ \mu^-$ and $K^+\rightarrow \pi^+ \gamma \gamma$ decays, using a data sample recorded in 2017--2018. The radiative kaon decay $K^+ \rightarrow \pi^0 e^+ \nu \gamma$ (Ke3g) is studied with a data sample of O(100k) Ke3g candidates with sub-percent background contaminations. Results with the most precise measurements of the Ke3g branching ratios and T-asymmetry are presented. The $K^+ \rightarrow \pi^+ \mu^+ \mu^-$ sample comprises about 27k signal events with negligible background contamination, and the presented analysis results include the most precise determination of the branching ratio and the form factor. The $K^+ \rightarrow \pi^+ \gamma \gamma$ sample contains about 4k signal events with $10\%$ background contamination, and the analysis improves the precision of the branching ratio measurement by a factor of 3 with respect to the previous measurements.
Rare kaon decays are among the most sensitive probes of both heavy and light new physics beyond the Standard Model description thanks to high precision of the Standard Model predictions, availability of very large datasets, and the relatively simple decay topologies. The NA62 experiment at CERN is a multi-purpose high-intensity kaon decay experiment, and carries out a broad rare-decay and hidden-sector physics programme. Recent NA62 results on searches for violation of lepton flavour and lepton number in kaon decays, and searches for production of hidden-sector mediators in kaon decays, are presented. Future prospects of these searches are discussed.
We give a general prescription for the transformation between four-fermion effective operator bases via corrected Fierz identities at the one-loop level. The procedure has the major advantage of only relating physical operators between bases, eliminating the necessity for Fierz-evanescent operators, thereby reducing the number of operators which enter in higher-order EFT computations. Additionally, when performing basis changes using loop-corrected Fierz identities, the dependence on renormalization scheme factorizes between the two bases, implying that such transformations simultaneously change renormalization scheme along with the operator basis. We illustrate the utility of loop-corrected Fierz identities in flavor physics through several examples of BSM phenomenology.
Conveners:
Ilaria Brivio (Universita di Bologna)
Karsten Köneke (Universität Freiburg)
Matthias Schröder (Universität Hamburg)
Contact: eps23-conveners-t09@desy.de
The mass of the Higgs boson is a fundamental parameter of the Standard Model which can be measured most precisely in its decays to four leptons and two photons, which benefit from excellent mass resolution. The total width of the Higgs boson is another important parameter for Higgs sector phenomenology. It is too small to be measured directly at the LHC, but indirect measurements can be performed using the off-shell Higgs boson production process in the ZZ and WW final states, as well as through interference effects in the diphoton decay channel.
This talk presents the most recent mass and width measurements by the ATLAS experiment using the full Run 2 dataset of pp collisions at the LHC collected at 13 TeV.
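Schematically (an editorial aside on the off-shell method referenced above), the width determination rests on the different dependence of the on- and off-shell rates on $\Gamma_H$:

```latex
\sigma^{\text{on-shell}} \;\propto\; \frac{g_{p}^{2}\, g_{d}^{2}}{\Gamma_H},
\qquad
\sigma^{\text{off-shell}} \;\propto\; g_{p}^{2}\, g_{d}^{2},
```

where $g_{p}$ and $g_{d}$ denote the production and decay couplings; assuming equal on- and off-shell couplings, the ratio of the two measured rates determines $\Gamma_H$.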
The Higgs boson mass and decay width are fundamental properties of this particle. Here, we discuss the latest measurements of these properties, as well as their future prospects, with the CMS experiment.
While the Standard Model predicts that the Higgs boson is a CP-even scalar, CP-odd contributions to the Higgs boson interactions with vector bosons and fermions are presently not strongly constrained. A variety of Higgs boson production processes and decays can be used to study the CP nature of the Higgs boson interactions. This talk presents the most recent results of such analyses by the ATLAS experiment, based on pp collision data collected at 13 TeV.
We discuss recent results of Higgs boson measurements with the CMS experiment, where the Higgs boson has high transverse momentum and its decay products are merged. Several production modes and final state channels are presented.
We report progress on the inclusive hadroproduction of a Higgs+jet system at LHC and FCC collision energies. The kinematic sectors explored fall into the so-called semi-hard regime, where both fixed-order and high-energy dynamics come into play. We propose a novel version of a matching procedure aimed at combining NLO fixed-order computations, as obtained from POWHEG, with the NLL resummation of energy logarithms via JETHAD. We present preliminary analyses assessing the weight of systematic uncertainties, such as those coming from finite top- and bottom-quark masses, collinear parton densities, and energy-scale variations. To our knowledge, POWHEG+JETHAD represents the first implementation of such a matching in the context of high-energy resummation at NLL for rapidity-separated two-particle final states.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10@desy.de
The Mu2e experiment, currently under construction at Fermilab, will search for neutrinoless mu->e conversion in the field of an aluminum atom. A clear signature of this charged-lepton-flavor-violating two-body process is the monoenergetic conversion electron of 104.97 MeV produced in the final state.
An 8 GeV/c pulsed proton beam impinging on a tungsten target will produce pions, which decay into muons; a set of superconducting magnets will guide the negative muon beam to a segmented aluminum target, where the stopped muons may eventually convert into electrons; a set of detectors will be used both to identify conversion electrons and to reject beam and cosmic backgrounds.
The experiment will need 3-5 years of data-taking to achieve a factor of $10^4$ improvement on the current best limit on the conversion rate.
After an introduction to the physics of Mu2e, we will report on the status of the different components of the experimental apparatus. The updated estimate of the experiment’s sensitivity and discovery potential will be presented.
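The quoted conversion-electron energy can be cross-checked from the muon mass, the muonic-aluminum 1s binding energy, and the nuclear recoil. The binding-energy value below is an approximate literature number, used purely for illustration:

```python
# Illustrative cross-check of the Mu2e conversion-electron energy:
# E_ce = m_mu - E_bind - E_recoil  (all values in MeV).
m_mu = 105.658            # muon mass
m_Al = 26.98 * 931.494    # aluminum-27 mass (approx.)
E_bind = 0.464            # 1s binding energy of muonic aluminum (approx.)
E_recoil = m_mu**2 / (2.0 * m_Al)  # nuclear recoil energy

E_ce = m_mu - E_bind - E_recoil
print(f"E_ce ~ {E_ce:.2f} MeV")  # close to the quoted 104.97 MeV
```

The recoil term follows from two-body kinematics with the aluminum nucleus absorbing the momentum balance.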
A resonant structure has been observed at ATOMKI in the invariant mass of electron-positron pairs produced after excitation of nuclei such as $^8$Be and $^4$He by means of proton beams. Such a resonant structure can be interpreted as the production of a hypothetical particle (X17) whose mass is around 17 MeV.
The MEG-II experiment at the Paul Scherrer Institut, whose primary physics goal is the search for the charged lepton flavor violating process $\mu$ $\rightarrow$ $ e \gamma$, is in a position to confirm and study this observation. MEG-II employs a proton source able to accelerate protons up to a kinetic energy of about 1 MeV. These protons are absorbed in a thin target, where they excite nuclear transitions that produce photons for the calibration of the Xenon calorimeter of the MEG-II detector.
By using a dedicated 2 $\mu$m thick target containing lithium atoms, the $^7$Li(p,e$^+$e$^{-}$)$^8$Be process is being studied with a magnetic spectrometer comprising a cylindrical drift chamber and a system of fast scintillators. The aim is to reach a better invariant-mass resolution than previous experiments and to study the production of the X17 with a larger acceptance, and therefore to shed more light on the nature of this observation.
After a 2022 engineering run, a month-long data-taking campaign was conducted in February 2023. We report our first results on the search for, and the study of, this hypothetical X17 particle.
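The invariant mass reconstructed by such spectrometers follows from the pair energies and the opening angle. A minimal sketch, with purely illustrative kinematic values (not MEG-II data):

```python
import math

def pair_invariant_mass(E1, E2, theta, m_e=0.511):
    """Invariant mass (MeV) of an e+e- pair, given the two energies (MeV)
    and the opening angle theta (rad), keeping the electron mass."""
    p1 = math.sqrt(E1**2 - m_e**2)
    p2 = math.sqrt(E2**2 - m_e**2)
    m2 = 2.0 * m_e**2 + 2.0 * (E1 * E2 - p1 * p2 * math.cos(theta))
    return math.sqrt(m2)

# A heavy (~17 MeV) state shows up as pairs at anomalously large opening angles:
print(pair_invariant_mass(9.0, 9.0, math.radians(140)))  # ~16.9 MeV
```

This large-angle signature is precisely why the ATOMKI anomaly appears as an excess at high opening angles rather than the smoothly falling internal-pair-conversion background.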
Charged lepton flavor violation (CLFV) is an unambiguous signature of new physics. In the Belle experiment, we study various CLFV signatures, which include $\tau$ leptons in the final state. In this presentation, we report searches for CLFV in $\Upsilon(1S) \to \ell^{\pm}\ell^{\prime\mp}$ and $\chi_{bJ}(1P)\to \ell^{\pm}\ell^{\prime\mp}$ decays, where $\ell,\ell^\prime = e, \mu, \tau$, using $25 {\rm fb}^{-1}$ of $\Upsilon(2S)$ data. Using a sample of 772 million $B\bar{B}$ pairs, we search for CLFV in $B^+\to\tau^{\pm}\ell^{\mp}$ decays, where $\ell=e,\mu$. We also search for CLFV in $B^0_s\to\ell^{\pm}\tau^{\mp}$ decays, where $\ell=e,\mu$, using $121 {\rm fb}^{-1}$ of $\Upsilon(5S)$ data.
Electric dipole moments (EDMs) of elementary particles are powerful probes of CP-violating New Physics (NP). In the context of a general two-Higgs-doublet model (g2HDM), which, lacking any ad hoc discrete symmetry, possesses complex extra Yukawa couplings that can help explain the baryon asymmetry of the Universe (BAU), we discuss the NP contributions to the EDMs of leptons and quarks. For leptons, while the electron EDM, given recent experimental improvements, remains the most sensitive probe of the extra Yukawa couplings, we show that there exist NP scenarios in which the muon EDM can be quite large and within the sensitivity reach of the upcoming J-PARC and PSI experiments. We further present results for the (chromo-)EDMs of various quarks. In particular, we show that the neutron EDM, together with the electron EDM, can provide crucial bounds on the top-Yukawa-driven BAU explanation in the g2HDM. We also show results for the top-quark chromo-EDM, in light of a recent analysis from CMS.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10@desy.de
We investigate the discovery potential for long-lived particles produced in association with a top-antitop quark pair at the (High-Luminosity) LHC. Compared to inclusive searches for a displaced vertex, top-associated signals offer new trigger options and an extra handle to suppress background. We propose a search strategy for a displaced di-muon vertex decaying in the tracking chambers, calorimeter or the muon chambers, in addition to a reconstructed top-antitop pair. Such a signature is predicted in many models with new light scalars or pseudo-scalars, which generically couple more strongly to top quarks than to light quarks. For axion-like particles with masses above the di-muon threshold and below the $b\bar{b}$ threshold, we find that the (High-Luminosity) LHC can probe effective top-quark couplings as small as $c_{tt}/f_a = 0.03~(0.01)~$TeV$^{-1}$ and proper decay lengths as long as $10~(400)$ m, with data corresponding to an integrated luminosity of 150 fb$^{-1}$ (3 ab$^{-1}$). In this talk I will present a summary of the analysis, including signal and background kinematics, the event selection, and predictions for LHC Run 2 and High-Luminosity LHC.
Some say SUSY is dead, because the LHC has not discovered it yet. But is this really true? It turns out that the story is more subtle. SUSY can be 'just around the corner', even if no sign of it has been found, and a closer look is needed to quantify the impact of LHC limits and their implications for future colliders. In this contribution, a study of prospects for SUSY based on scanning the relevant parameter space of (weak-scale) SUSY parameters is presented.
I concentrate on the properties most relevant for evaluating the experimental prospects: mass differences, lifetimes and decay modes. The observations are then confronted with estimated experimental capabilities, including - importantly - the detail of the simulations these estimates are based upon.
I have mainly considered what can be expected from the LHC and HL-LHC, where it turns out that large swaths of SUSY parameter space will be quite hard to access. For e+e- colliders, on the other hand, the situation is simple: at such colliders, SUSY will be either discovered or excluded almost up to the kinematic limit.
Supersymmetric models with low electroweak fine-tuning are more prevalent on the string landscape than fine-tuned models. We assume a fertile patch of landscape vacua containing the minimal supersymmetric standard model (MSSM) as the low-energy EFT. Such models are characterized by light higgsinos in the mass range of a few hundred GeV, whilst top squarks lie in the 1-2.5 TeV range. Other sparticles are generally beyond current LHC reach. We evaluate the prospects for top-squark searches in such natural SUSY models at the HL-LHC.
The existence of the magnetic monopole has eluded physicists for centuries. The NOvA Far Detector (FD), used for neutrino oscillation searches, can also identify magnetic monopoles. With a surface area of 4,100 m$^2$ and a location near the Earth's surface, the 14 kt FD provides a unique opportunity to be sensitive to potential low-mass monopoles unable to penetrate to underground experiments. We have designed a novel data-driven triggering scheme that continuously searches the FD's live data for monopole-like patterns. At the offline level, the largest challenge in reconstructing monopoles is reducing the 148,000 Hz speed-of-light cosmic-ray background. In the absence of any signal events in a 95-day exposure of the FD, we set a limit on the monopole flux of $2 \times 10^{-14} \mathrm{cm}^{-2} \mathrm{s}^{-1} \mathrm{sr}^{-1}$ at 90% C.L. for monopole speeds $6 \times 10^{-4} < \beta < 5 \times 10^{-3}$ and masses greater than $5 \times 10^8$ GeV. In this talk, I will review the current monopole results and discuss the sensitivities of future searches using more than 8 years of collected FD data.
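A zero-event counting search of this kind yields a flux limit of the standard form $\Phi_{90} = N_{90}/(A\,\Omega\,T\,\epsilon)$, with $N_{90} = 2.44$ the Feldman-Cousins 90% C.L. upper limit for zero observed events and zero background. The efficiency below is an assumed placeholder, so the result is illustrative, not the analysis value:

```python
import math

# Illustrative 90% C.L. flux limit for a zero-event monopole search:
N_90 = 2.44              # Feldman-Cousins 90% UL, 0 observed / 0 background
A = 4.1e7                # detector surface area, cm^2 (4,100 m^2)
Omega = 2.0 * math.pi    # solid angle, sr (downward hemisphere, assumed)
T = 95 * 86400.0         # 95-day exposure, s
eff = 0.1                # assumed signal efficiency (placeholder)

phi_90 = N_90 / (A * Omega * T * eff)
print(f"{phi_90:.1e} cm^-2 s^-1 sr^-1")  # same order as the quoted 2e-14
```

With efficiencies of a few to ten percent, this simple estimate lands at the same order of magnitude as the published limit.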
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12@desy.de
The Pierre Auger Observatory was built to study ultra-high-energy cosmic rays. Its hybrid design allows one to observe the main features of extensive air showers with unprecedented precision. However, these observations have opened new questions about the nature of cosmic rays. One of the most intriguing is the discrepancy between the observed number of muons and the value expected from the most up-to-date hadronic interaction models. Therefore, the design of AugerPrime, the upgrade of the Pierre Auger Observatory, includes the installation of a new detection system, the Underground Muon Detector (UMD), to perform a direct measurement of the number and temporal distribution of muons in extensive air showers. This presentation will give an overview of the main characteristics of the Underground Muon Detector: the final design and deployment status, as well as the calibration and reconstruction processes. Furthermore, the first results obtained during the engineering-array phase will be presented, showing the contribution of the UMD to solving the still-open questions of cosmic-ray physics.
We present an estimate of the noise induced by scattered light inside the main arms of the Einstein Telescope (ET) gravitational wave detector. Both ET configurations, for the high- and low-frequency interferometers, are considered. As is already the case in existing experiments like LIGO and Virgo, optically coated baffles are used to mitigate and suppress this noise inside the vacuum tubes. We propose baffle layouts for ET and compute the remaining scattered-light noise contribution to the ET sensitivity. Virgo has introduced the novel concept of instrumented baffles, with the aim of implementing active monitoring of the stray-light distribution close to the main mirrors. We present the technology and a comparison of the data with simulations, and show their potential to monitor the performance of the mirrors, to detect defects and point absorbers in the mirror substrates, and to assist in the pre-alignment of the arms.
The Any Light Particle Search II (ALPS II) is a Light-Shining-through-a-Wall experiment operating at DESY, Hamburg. Its goal is to probe the existence of Axions and Axion Like Particles (ALPs), possible candidates for dark matter. In the ALPS II region of interest, a rate of photons reconverting from Axions/ALPs on the order of $10^{-5}$ cps is expected. A first science run at lower sensitivity based on a heterodyne detection method was successfully started in May 2023. The design sensitivity is expected to be reached in 2024. A complementary science run is foreseen with a single photon detection scheme. This requires a sensor capable of measuring low-energy photons (1.165 eV) with high efficiency and a low dark count rate. We investigate a tungsten Transition Edge Sensor (TES) system as a photon-counting detector that promises to meet these requirements. This detector exploits the drastic change in its resistance caused by the absorption of a single photon when operated in its superconducting transition region at millikelvin temperatures. In order to achieve the required sensitivity, the implementation of the TES into the ALPS II experiment needs to be carefully optimized. In this work, we present the progress on measurements for the characterization of our system and data analysis for background reduction. Additionally, an overview of ongoing setup simulations will be given, which are an essential step toward a comprehensive understanding of our system.
Neutron spectroscopy is an invaluable tool for many scientific and industrial applications, including searches for dark matter. In deep underground dark matter experiments, neutron-induced backgrounds produced by cosmic-ray muons and natural radioactivity may mimic a signal. However, neutron detection techniques are complex and, thus, measurements remain elusive. The use of $^3$He-based detectors, the most widely used technique to date, is not a viable solution, since $^3$He is scarce and expensive.
A promising alternative for fast neutron spectroscopy is the use of a nitrogen-filled Spherical Proportional Counter. The neutron can be detected and its energy measured through the $^{14}$N(n,$\alpha$)$^{11}$B and $^{14}$N(n,p)$^{14}$C reactions, which for fast neutrons have cross sections comparable to that of the $^3$He(n,p)$^3$H reaction. Furthermore, the use of a light element such as nitrogen keeps the $\gamma$-ray efficiency low and enhances the signal-to-background ratio in mixed radiation environments. This constitutes a safe, inexpensive, effective and reliable alternative.
The latest developments in spherical proportional counter instrumentation, such as resistive multi-anode sensors for high-gain operation with high-charge collection efficiency and gas purifiers that minimize gas contaminants to negligible levels, which enable neutron detection with increased target mass, and thus higher efficiency, are presented. Measurements for fast and thermalised neutrons from an Am-Be source and from the University of Birmingham MC40 cyclotron are shown, and compared with simulations.
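In such capture reactions the measured ionization energy is simply the neutron kinetic energy shifted by the reaction Q-value. A minimal sketch, using nominal literature Q-values (quoted approximately, for illustration only):

```python
# Illustrative energy deposits in a nitrogen-filled spherical proportional
# counter: the detected energy is E_n + Q for each reaction channel.
# Q-values are nominal, approximate literature numbers.
Q_n_alpha = -0.159   # MeV, 14N(n,alpha)11B
Q_n_p = 0.626        # MeV, 14N(n,p)14C

def deposited_energy(E_n, Q):
    """Total ionization energy (MeV) for a neutron of kinetic energy E_n (MeV)."""
    return E_n + Q

# A 2 MeV fast neutron produces two peaks, separated by the Q-value gap;
# thermal neutrons (E_n ~ 0) populate only the exothermic (n,p) peak.
print(deposited_energy(2.0, Q_n_alpha), deposited_energy(2.0, Q_n_p))
```

The two well-separated peaks per incident energy are what make the nitrogen fill usable for spectroscopy rather than mere counting.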
LaBr3:Ce crystals have been introduced for radiation imaging in medical physics, with photomultiplier or single-SiPM readout. An R&D effort was pursued using 1" LaBr3:Ce crystals to realize compact large-area detectors with SiPM-array readout, aiming at high light yields, good energy resolution, good detector linearity and fast time response for low-energy X-rays. A natural application was found in the FAMU project at the RIKEN-RAL muon facility, which aims at a precise measurement of the proton Zemach radius to help solve the so-called "proton radius puzzle" triggered by the recent measurement of the proton charge radius at PSI. The goal is the detection of characteristic X-rays around 130 keV. The drawbacks of these detectors in practical use are due both to the SiPMs' gain drift with temperature and to a worse timing, as compared to a conventional readout with photomultipliers (PMTs). The gain drift with temperature has been studied in the laboratory, inside a climatic chamber, with different SiPM arrays, including the new Hamamatsu S14161 MPPC array with enhanced sensitivity in the UV region for PET. Corrections for this effect have been studied, and an effective correction was implemented by developing a custom 8-channel NIM module based on CAEN A7585D chips, with temperature feedback. The effect was reduced by a factor of five in the temperature range 10-40 °C.
The poor timing characteristics of the detectors (especially the falltime), due to the large capacitance of the SiPM arrays used, were also studied and different solutions were implemented. With standard parallel ganging, typical risetimes (falltimes) of the order of 50 (300) ns are obtained. Long falltimes are a problem in experiments such as FAMU, where a "prompt" component must be separated from a "delayed" one in the signal X-rays to be detected. A dedicated R&D effort was pursued to address this problem, starting from the hybrid ganging of SiPM cells, moving to the development of a suitable zero-pole circuit with parallel ganging and an increased overvoltage, and finally to the development of ad-hoc electronics to split the 1" SiPM array into 4 quadrants, thus reducing the detector capacitance. The aim was to improve the timing characteristics while keeping a good FWHM energy resolution. Reductions in falltime (risetime) of up to a factor 2-3 were obtained with no deterioration of the energy resolution. A FWHM energy resolution better than 3% (8%) at the Cs137 (Co57) peak was obtained. These results compare well with the best results obtained with a PMT readout.
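A temperature feedback of the kind described amounts to tracking the breakdown-voltage drift so the overvoltage, and hence the gain, stays constant. A minimal sketch; the coefficient and voltages are typical placeholder values, not the FAMU settings:

```python
def bias_for_temperature(T, V_bd_ref=38.0, T_ref=25.0, dVdT=0.034, V_ov=3.0):
    """Return the SiPM bias (V) that keeps the overvoltage constant as the
    breakdown voltage drifts with temperature T (degC).
    dVdT is a typical MPPC coefficient of a few tens of mV/degC (placeholder)."""
    V_bd = V_bd_ref + dVdT * (T - T_ref)
    return V_bd + V_ov

print(bias_for_temperature(25.0))  # 41.0 V at the reference temperature
print(bias_for_temperature(40.0))  # bias raised to hold the gain constant
```

In hardware this correction loop runs inside the bias supply (here, the NIM module with its temperature probe), rather than in offline software.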
Conveners:
Alessia Bruni (INFN Bologna)
Marie-Lena Dieckmann (Universität Hamburg)
Gwenhaël Wilberts Dewasseige (UC Louvain)
Contact: eps23-conveners-t14@desy.de
Communicating science through mobile smartphone and tablet applications is one of the most efficient ways to reach a general public of diverse backgrounds and ages. The Higgsy project was created to celebrate the 10th anniversary of the discovery of the Higgs boson at CERN in 2022. It introduces a mobile game in which players search for Higgs boson production in a generic particle physics detector. The MatterBricks mobile game, an augmented-reality project to learn about elementary particles, was created for a major national event in Belgium held in 2023. The talk will cover the main features of the two mobile applications and will discuss further prospects for reaching the general public through mobile application development.
This talk describes an outreach exposition centered around a replica of the Alpha Magnetic Spectrometer Payload Operation Control Room (AMS POCC) as a means to help people comprehend the continuous monitoring and control of space mission payloads by various control rooms on Earth. The exposition's added value stems from the AMS collaboration's monitoring software development, enabling individuals to access AMS telemetry data. This innovation emerged during the pandemic, granting AMS collaborators the ability to participate remotely in the day-to-day operations of the experiment due to restrictions on physical access to the CERN site, where the AMS POCC is located. The replica of the POCC, along with accompanying posters and videos, serves as an effective starting point for communicating the significance of fundamental research in the areas of space radiation and cosmic rays.
The talk will also feature the initial implementation of the exposition, which took place in May 2023 in Bologna, during an event organized in collaboration with the Moon Village Association Italian branch and the Marco Peroni studio. The outreach event encompasses the above-mentioned exposition titled "Far yet so close: Cosmic Ray Measurements in Space with the Alpha Magnetic Spectrometer (ams02.space) and Ground Control Operations of Space Missions," as well as the exposition "Radiation and Safety on the Moon: Active Shielding for Radiation Protection." These exhibitions shed light on how fundamental research addresses the challenge of shielding against cosmic rays in lunar exploration. The AMS Roma Sapienza research group and the Marco Peroni studio jointly undertake the collaboration activities presented at the event.
The event took place within the "Living in Space" Permanent Exhibition located at the premises of the Marco Peroni Engineering Studio. The event successfully bridges the gap between scientific knowledge and public understanding through this approach. It allows individuals from diverse backgrounds to learn about and appreciate the advancements made in space exploration and radiation protection. By making science relatable, accessible, and visually stimulating, the outreach event fosters curiosity, encourages active participation, and promotes scientific literacy among non-scientific audiences.
Promoting and sharing the audio-visual heritage of the history of Italian physics are the two main goals of La Mediateca INFN: the history of physics through videos, the new cultural project of INFN. The website is dedicated to a general audience, but is aimed in particular at students of Italian schools and at university researchers and students. Today, it includes almost 200 videos totaling more than 70 hours of interviews, documentaries, newscasts, conferences and seminars, giving rise to a digital archive open to everyone to do research, gather information, explore, and re-trace paths, anecdotes and events in the history of physics.
To make La Mediateca known especially among young students, a large in-person and online event focusing on the project was organized. The event was followed by over 600 classes, with almost 11,000 high school students connecting from all over Italy. La Mediateca was also at the heart of a contest for high school students, called "Audioritratti di Scienza" (Science Audioportraits): over 500 students participated in the contest, submitting 130 original podcasts.
These initiatives were evaluated through two different tools: an assessment questionnaire filled in by the students who participated in the contest and the analysis of the numbers and behaviors of the users visiting the website La Mediateca INFN, from November 2022 until today.
During this talk, the main features of La Mediateca INFN will be presented. Furthermore, the reach of the website, the results of the questionnaire and the participation in the project's events will be discussed, to understand how the history of physics can be a hook to engage young students and bring them closer to physics and science.
ScienzaPerTutti(*), literally ScienceForAll, is the web portal dedicated to physics education and the popularization of science curated by researchers of INFN, the Italian National Institute for Nuclear Physics. The contents are mainly addressed to high school students and teachers and are designed to engage the audience with the main topics of modern research in particle and nuclear physics, theoretical physics and astroparticle physics. The missions are to promote public awareness of science, to raise interest in the importance of discoveries along with their applications in everyday life, and to support the teaching and learning of modern physics using innovative methods.
The portal, created in 2002, has evolved through the years, adding different multimedia products such as didactic units, research materials, columns, infographics, videos, interviews, book reviews, and podcasts, and expanding its reach to a current average of 3000 visits every day.
After an introduction to the leading sections of the ScienzaPerTutti website, this contribution will present the development of the annual contest addressed to middle and high school students, which in 2022 reached its XVIII edition. Every year the contest is devoted to a different topic, and participants are asked to design and realize a multimedia product to share their work. In 2023 the contest was centered on the physics of sports: students had to choose a sport and describe the physics behind it. In particular, high school students were also asked to imagine the same sport played on another planet or in out-of-the-ordinary conditions, or to invent a new sport. 299 teams from 95 Italian schools applied for the 2023 competition, and we will report here on the works and the outcomes.
(*) https://scienzapertutti.infn.it
The graphical program FeynGame is introduced, which allows didactic access to Feynman diagrams in a playful way. It offers didactic approaches for different levels of experience: from games involving simple clicking and drawing, to practicing the theory of fundamental interactions, to the mathematical representation of scattering amplitudes.
For the specialist, FeynGame also represents a highly intuitive and flexible tool for drawing Feynman diagrams for publications, which can be adjusted to personal needs and taste in a very simple way.
Opening plenary of the EPS-HEP2023
The 2023 EPS High Energy and Particle Physics Prize is awarded to Cecilia Jarlskog for the discovery of an invariant measure of CP violation in both quark and lepton sectors; and to the members of the Daya Bay and RENO collaborations for the observation of short-baseline reactor electron-antineutrino disappearance, providing the first determination of the neutrino mixing angle Θ13, which paves the way for the detection of CP violation in the lepton sector.
The 2023 Giuseppe and Vanna Cocconi Prize is awarded to the SDSS/BOSS/eBOSS collaborations for their outstanding contributions to observational cosmology, including the development of the baryon acoustic oscillation measurement into a prime cosmological tool, using it to robustly probe the history of the expansion rate of the Universe back to 1/5th of its age providing crucial information on dark energy, the Hubble constant, and neutrino masses.
The 2023 Gribov Medal is awarded to Netta Engelhardt for her groundbreaking contributions to the understanding of quantum information in gravity and black hole physics.
The 2023 Young Experimental Physicist Prize of the High Energy and Particle Physics Division of the EPS is awarded to Valentina M. M. Cairo for her outstanding contributions to the ATLAS experiment: from the construction of the inner tracker, to the development of novel track and vertex reconstruction algorithms and to the searches for di-Higgs boson production.
The 2023 Outreach Prize of the High Energy and Particle Physics Division of the EPS is awarded to Jácome (Jay) Armas for his outstanding combination of activities on science communication, most notably for the 'Science & Cocktails' event series, revolving around science lectures which incorporate elements of the nightlife such as music/art performances and cocktail craftsmanship and reaching out to hundreds of thousands in five different cities world-wide.
Conveners:
Livia Conti (INFN)
Carlos Perez de los Heros (Uppsala University)
Martin Tluczykont (Universität Hamburg)
Gabrijela Zaharijas (UNG)
Contact: eps23-conveners-t01@desy.de
The KM3NeT Collaboration is incrementally building a network of water-Cherenkov neutrino telescopes in the Mediterranean Sea, consisting of two detectors, named ARCA (Astroparticle Research with Cosmics in the Abyss) and ORCA (Oscillation Research with Cosmics in the Abyss), sharing the same detection technology. ARCA, located off the shore of Sicily, will in its completed shape be a cubic-kilometre-scale modular telescope made of 230 detection units, optimised for neutrino astronomy in the TeV-PeV energy range. ORCA, off the shore of Toulon, will be a 7-Mton modular telescope made of 115 detection units, focused on neutrino oscillations and the neutrino mass hierarchy, for neutrinos in the 1-100 GeV energy range. Currently, ARCA consists of 21 detection units, whereas ORCA has 15 already installed. Both telescopes have already been taking data for a few years, providing a good understanding of the backgrounds as well as of the expected signals, and hence of the scientific potential of KM3NeT. The technique for neutrino detection and measurement is reviewed, along with the outlook for the completion of the two telescopes and the expected performance for the detection of astrophysical neutrino sources, the measurement of neutrino oscillation parameters and the neutrino mass ordering. Contributions of KM3NeT to global efforts in multimessenger astronomy are also discussed. Early physics outputs of both telescopes are reported.
The ANTARES neutrino telescope was operational in the Mediterranean Sea from 2006 to 2022. The detector array, consisting of 12 lines with a total of 885 optical modules, was designed to detect high-energy neutrinos covering energies from a few tens of GeV up to the PeV range. Despite the relatively small size of the detector, the results obtained are relevant in the field of neutrino astronomy, due to the view of the Southern sky and the good angular resolution of the telescope. This presentation will give an overview of the legacy results of ANTARES, including searches for point sources, neutrinos from the galactic ridge, from dark matter annihilation, and from transients, as well as measurements of neutrino oscillations and limits on new physics.
The IceCube Neutrino Observatory is a cubic-kilometer scale neutrino detector at the South Pole. IceCube consists of over 5000 photosensors deployed on cables deep in the Antarctic ice. The sensors detect neutrinos via the Cherenkov light emitted by secondary particles produced in neutrino interactions.
With the measurement of the isotropic astrophysical neutrino flux in the TeV-PeV energy range, IceCube has opened a new window into the high-energy universe.
During the past few years, IceCube has detected deviations from isotropy in the form of neutrino emission from the blazar TXS 0506+056 and the Seyfert galaxy NGC 1068. The neutrino emission spectra of the two objects differ substantially, hinting at differences in the underlying production mechanisms.
Adding to the complexity of the neutrino sky, IceCube has recently measured neutrino emission from the Galactic Plane, which offers valuable new information to the study of galactic cosmic ray production and transport.
In this contribution, we will present an overview of IceCube's results on the origin of galactic and extra-galactic neutrino emission.
Astrophysical hypotheses suggest the existence of neutrinos beyond the energy range currently reached by optical detectors (> 10 PeV). The observation of such particles by capturing the coherent emission of their interaction in ice, i.e. Askaryan radiation, is the aim of the Radio Neutrino Observatory in Greenland (RNO-G). Located at Summit Station, RNO-G represents the first neutrino detector oriented towards the Northern sky, and it will play a role in the future shaping of the larger IceCube-Gen2 Radio Array. The first installed stations of RNO-G are currently active and collecting data, while the full array will reach completion within the next years. The plan includes a grid of 35 radio stations, each designed to be low-powered and autonomous. Learning from previous radio detectors, each station includes both shallow antennas, mainly for cosmic-ray identification, and in-ice deep antennas with a phased-array trigger for detection and reconstruction. We present the motivation, design and current status of the detector.
Continuous gravitational waves are long-duration gravitational-wave signals that still remain to be detected. These signals are expected to be produced by rapidly-spinning non-axisymmetric neutron stars, and would provide valuable information on the physics of such compact objects; additionally, they would allow us to probe the galactic population of EM-dark neutron stars, whose properties may be different from the pulsar population observed through electromagnetic means. Other sources include the evaporation of boson clouds around spinning black holes, or binary systems of light compact objects such as planetary-mass black holes.
In this talk, I give a brief overview of the continuous gravitational-wave search results produced by the LIGO-Virgo-KAGRA collaboration using data from their third observing run O3, and discuss prospects for the now ongoing fourth observing run O4.
The 5n-vector ensemble method is a statistical multiple test for the targeted search of continuous gravitational waves from an ensemble of known pulsars. This method can improve the detection probability by combining the results from individually undetectable pulsars if a few signals are near the detection threshold. In this presentation, I show the results of the 5n-vector ensemble method for the O3 data set from the LIGO and Virgo detectors and an ensemble of 223 known pulsars. I find no evidence for a signal from the ensemble and set a 95% credible upper limit on the mean ellipticity, assuming a common exponential distribution for the pulsars' ellipticities. Using two independent hierarchical Bayesian procedures, the upper limits on the mean ellipticity are $2.2 \times 10^{-9}$ and $1.2 \times 10^{-9}$ for the analyzed pulsars.
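For context, an ellipticity limit translates into a gravitational-wave strain via the standard quadrupole formula $h_0 = 4\pi^2 G I \epsilon f_{\rm gw}^2 / (c^4 d)$. The pulsar parameters below are illustrative fiducial values, not those of the analyzed ensemble:

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def h0(epsilon, I=1e38, f_gw=60.0, d=2.0 * 3.086e19):
    """GW strain amplitude of a non-axisymmetric spinning neutron star.
    I: moment of inertia (kg m^2), f_gw: GW frequency (Hz, twice the spin
    frequency), d: distance (m, here 2 kpc). All fiducial, for illustration."""
    return 4.0 * math.pi**2 * G * I * epsilon * f_gw**2 / (c**4 * d)

print(h0(1.2e-9))  # strain scale implied by an ellipticity of ~1e-9
```

The resulting strains, around $10^{-30}$ for these fiducial parameters, show why ensemble methods that stack many sub-threshold pulsars are attractive.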
The first observation of gravitational waves (GWs) with laser interferometers of the LIGO collaboration in 2015 came about 100 years after their prediction within general relativity. In this talk we focus on the detection of gravitational waves in a higher frequency regime with superconducting radio frequency (SCRF) cavities. This approach had already been considered as a probe for GWs before laser interferometers were built, and its operational spectrum reaches up to GW frequencies above ∼10 kHz. Measurements in this frequency range could give hints of new physics beyond the Standard Model or insights into early-universe phenomena.
The detection principle is based on the GW induced transition between two electromagnetic eigenmodes of the SCRF cavity. We consider the interplay of the indirect coupling to the cavity boundaries and the
direct coupling to the electromagnetic field explained by the Gertsenshtein effect. We precisely analyse all contributing effects and derive in detail the coupling coefficients for the frequency range O(kHz-GHz).
Aiming at improving the description of GWs, the results are applied to the MAGO cavity prototype built at INFN Genoa in 2005. Together with FNAL, Universität Hamburg and DESY are restarting research on this detector by characterizing its geometry and its mechanical and electromagnetic eigenmodes. The prototype parameters give predictions for achievable sensitivities in the desired frequency range and can be compared to possible GW-generating sources. Further improvements on the MAGO cavity prototype parameters indicate that the region of new physics is within reach.
Conveners:
Fady Bishara (DESY)
James Frost (University of Oxford)
Silvia Scorza (LPSC Grenoble)
Contact: eps23-conveners-t03@desy.de
We consider threshold effects of thermal dark matter (DM) pairs (fermions and antifermions) interacting with a thermal bath of dark gauge fields in the early expanding universe. Such threshold effects include the processes of DM pairs annihilating into the dark gauge fields (light d.o.f.) as well as electric transitions between pairs forming a bound state or being unbound but still feeling non-perturbative long range interactions (Sommerfeld effect). We scrutinize the process of bound-state formation (bsf) and the inverse thermal break-up process (bsd), but also (de-)excitations, providing a thermal decay width due to the thermal bath. We compute the corresponding observables by exploiting effective-field-theory (EFT) techniques to separate the various scales (the mass of the particles M, the momenta Mv, the energies Mv^2, as well as thermal scales: the temperature T, the Debye mass m_D), which are intertwined in general. To do so we make use of the so-called non-relativistic EFT (NREFT) as well as potential non-relativistic EFT (pNREFT) at finite T. These processes play an important role for a quantitative treatment of the dynamics of the relevant d.o.f. at the thermal freeze-out regime and the corresponding observables appear in the relevant evolution equations, from which we eventually determine the relic energy density of DM.
The null results of dark matter searches motivate us to look beyond the usual freeze-out mechanisms and to work out the upper bound on the dark matter masses that could be probed at experiments. In this talk, we briefly overview different production mechanisms and the corresponding upper bounds on the dark matter mass in those scenarios. In addition, we focus on the exponential mechanism for dark matter production and outline the differences between our approach and those in the literature.
Models of feebly-interacting Dark Matter (DM), potentially detectable in long-lived particle searches, have gained popularity due to the non-observation of DM in direct detection experiments. Unlike DM freeze-out, which occurs when the dark sector particles are non-relativistic, feebly-interacting DM is primarily produced at temperatures corresponding to the heaviest mass scale involved in the production process. Consequently, incorporating finite temperature corrections becomes essential for an accurate prediction of the relic density. However, current calculations are often performed at either zero temperature or rely on thermal masses to regulate infrared divergences. In our study, we utilize the Closed-Time-Path (CTP) formalism to compute the production rate of feebly-interacting DM associated with a gauge charged parent. We compare our results with the aforementioned approaches such as the insertion of thermal masses, zero temperature calculations and a recent calculation that interpolates between finite temperature results in the ultra-relativistic and non-relativistic regime. Furthermore, we discuss the applicability and feasibility of these different approaches for phenomenological studies.
Although the Standard Model is very successful, there are still open
problems which it cannot explain, one of them being dark matter (DM).
This has led to various Beyond Standard Model theories, of which Two
Higgs Doublet models are very popular, as they are one of the simplest
extensions and lead to a rich phenomenology. Further extensions with
a complex singlet lead to a natural DM candidate.
The aim of this work is to explore the dark sector in a Two
Higgs Doublet Model extended by a complex scalar singlet, where the
imaginary component of the singlet gives rise to a pseudo-scalar DM
candidate. Both the doublets and the singlet obtain a vacuum expectation
value (vev), where the singlet vev leads to additional mixing
of the doublet and singlet scalar sectors. We examine the influence
of the Higgs sector parameters on DM relic density as well as direct and indirect detection cross sections. The results are then compared with
constraints from experiments.
We investigate ways of identifying two kinds of dark matter component particles at high-energy colliders. The strategy is to identify and
distinguish double peaks (humps) in some final-state observable. We
carried out our analysis in various popular event topologies for dark
matter searches, such as mono-X and n-leptons + n-jets final states along
with missing energy/transverse momentum. It turns out that a lepton collider is suitable for such analyses. The observables best suited for this purpose have been identified, based on the event topology. The implications of beam polarization are also explored in detail. Lastly, a quantitative measure of the distinguishability of the two peaks has been established in terms of a few newly constructed variables.
The cold dark matter (CDM) candidate with weakly interacting massive
particles can successfully explain the observed dark matter relic
density on cosmic scales and the large-scale structure of the Universe.
However, a number of observations at the satellite-galaxy scale seem
to be inconsistent with CDM simulations.
This is known as the small-scale problem of CDM.
In recent years, it has been demonstrated that
self-interacting dark matter (SIDM) with a light mediator offers
a reasonable explanation for the small-scale problem.
We adopt a simple model with SIDM and focus on the effects of
Sommerfeld enhancement.
In this model, the dark matter candidate is a leptonic scalar particle
with a light mediator.
We have found favored regions of the parameter space with proper masses and coupling strength generating a relic density that is
consistent with the observed CDM relic density.
Furthermore, this model satisfies the constraints from recent direct and indirect dark matter searches,
as well as the effective number of neutrinos and the
observed small-scale structure of the Universe.
In addition, this model with the favored parameters can resolve the
discrepancies between astrophysical observations and $N$-body simulations.
Axion-like particles (ALPs) are leading candidates to explain the dark matter in the universe. Their production via the misalignment mechanism has been extensively studied for cosine potentials characteristic of pseudo-Nambu-Goldstone bosons. In this work we investigate ALPs with non-periodic potentials, which allow for large misalignment of the field from the minimum. As a result, the ALP can match the relic density of dark matter in a large part of the parameter space. Such potentials give rise to self-interactions which can trigger an exponential growth of fluctuations in the ALP field via parametric resonance, leading to the fragmentation of the field. We study these effects with both Floquet analysis and lattice simulations. Using the Press-Schechter formalism, we predict the halo mass function and halo spectrum arising from ALP dark matter. These halos can be dense enough to produce observable gravitational effects such as astrometric lensing, diffraction of gravitational wave signals from black hole mergers, photometric microlensing of highly magnified stars, perturbations of stars in the galactic disk or stellar streams. These effects would provide a probe of dark matter even if it does not couple to the Standard Model. They would not be observable for halos predicted for standard cold dark matter and for ALP dark matter in the standard misalignment mechanism. We determine the relevant regions of parameter space in the (ALP mass, decay constant)-plane and compare predictions in different axion fragmentation models.
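The misalignment dynamics referenced above can be summarized in standard textbook form (this is the generic setup, not the non-periodic potentials specific to this work):

```latex
% Homogeneous ALP field evolution in an expanding universe (standard form):
\begin{equation}
  \ddot{\phi} + 3H\dot{\phi} + V'(\phi) = 0 ,
  \qquad
  V_{\cos}(\phi) = m_a^2 f^2 \left[ 1 - \cos\frac{\phi}{f} \right] ,
\end{equation}
% where H is the Hubble rate, m_a the ALP mass and f the decay constant.
% Non-periodic potentials modify V(phi); the self-interactions contained in
% V are what can drive the parametric-resonance fragmentation discussed above.
```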
Axion kinetic misalignment is a mechanism that may enhance the dark matter relic abundance found in models of QCD axions or axion-like particles by considering initial conditions with large kinetic energy. This is interesting because it motivates axion dark matter at lower decay constants, where the couplings to matter, including detectors, are stronger. I will give an introduction to this mechanism, discuss some of the phenomenology that arises from kinetic misalignment, and briefly present our recent work on how the mechanism can be realized.
Conveners:
Laura Fabbietti (TU München)
Gunther Roland (MIT)
Jasmine Brewer (CERN)
Jeremi Niedziela (DESY)
Contact: eps23-conveners-t05@desy.de
Light-flavour hadrons represent the bulk of particles produced in high-energy hadronic collisions at the LHC.
Measuring their pseudorapidity dependence provides information on the partonic structure of the colliding hadrons. It is, in particular at LHC energies, sensitive to non-linear QCD evolution in the initial state.
In addition, measurements of light-flavour hadron production in small collision systems at the LHC energies have shown the onset of phenomena (e.g. radial flow and long-range correlations) that resemble what is typically observed in nucleus-nucleus collisions and attributed to the formation of a deconfined system of quarks and gluons.
The improved detector commissioned during LS2 makes ALICE the perfect setup for these measurements.
In this talk, particle production mechanisms are explored by addressing the charged-particle pseudorapidity densities measured in pp and Pb−Pb collisions, presenting the results from Run 3 for the first time.
In addition, new results on identified light-flavour particle production measured in high-multiplicity triggered events will be shown. These will be interpreted in light of the first results from Run 3 on identified particle production in pp collisions as a function of multiplicity, spanning from the lowest collision energy of $\sqrt{s}$ = 900 GeV to the highest collision energy ever achieved in the laboratory, $\sqrt{s}$ = 13.6 TeV.
The study of strange particle production in heavy-ion collisions plays an important role in understanding the dynamics of the strongly interacting system created in the collision. The enhanced production of strange hadrons in heavy-ion collisions relative to that in pp collisions is historically one of the signatures of the formation of the quark-gluon plasma. The study of strangeness production in small collision systems is also of great interest. One of the main challenges in hadron physics is the understanding of the origin of the increase of (multi)strange hadron yields relative to pion yields with increasing charged-particle multiplicity observed in pp and p-Pb collision systems, a feature that is reminiscent of the heavy-ion phenomenology.
In this talk, new results on the multiple production of strange hadrons in pp collisions are presented. In addition, recent measurements of the production of (multi)strange hadrons in small collision systems as a function of multiplicity and effective energy are shown. These results are discussed in the context of state-of-the-art phenomenological models.
Charmonia have long been recognized as a valuable probe of nuclear matter in extreme conditions, such as the strongly interacting medium created in heavy-ion collisions and known as quark-gluon plasma (QGP). At LHC energies, the regeneration process due to the abundantly produced charm quarks was found to considerably affect measured charmonium observables. Comprehensive production measurements of charmonia, including both ground and excited states, are crucial to discriminate among different regeneration scenarios assumed in theoretical calculations. Charmonia can also be sensitive to the initial state of the heavy-ion collision. In particular, their spin alignment can be affected by the strong magnetic field generated in the early phase, as well as by the large angular momentum of the medium in non-central collisions. The determination of the component originating from beauty-hadron decays, known as non-prompt charmonium, grants a direct insight into the nuclear modification factor of beauty hadrons, which is expected to be sensitive to the energy loss experienced by the ancestor beauty quarks inside the QGP. Furthermore, once it is subtracted from the inclusive charmonium production, it allows for a direct comparison with prompt charmonium models.
In this contribution, newly published results of inclusive J/$\psi$ production, including yields, average transverse momentum and nuclear modification factors, obtained at central and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, will be presented. At midrapidity, newly published measurements of prompt and non-prompt J/$\psi$ production will also be shown. Recently published results obtained at forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV will be discussed. These include, among others, the $\psi$(2S)-to-J/$\psi$ (double) ratio and the $\psi$(2S) nuclear modification factor, as well as the J/$\psi$ polarization with respect to a quantization axis orthogonal to the event-plane. Results will be compared to available model calculations.
Electromagnetic probes such as photons and dielectrons (e$^{+}$e$^{-}$ pairs) are a unique tool to study the space-time evolution of the hot and dense matter created in ultra-relativistic heavy-ion collisions. They are produced at all stages of the collision with negligible final-state interactions. At intermediate dielectron invariant mass ($m_{\rm ee} > 1$ GeV/$c^{2}$), thermal radiation from the quark-gluon plasma carries information about the early temperature of the medium. At LHC energies, it is however dominated by a large background from correlated heavy-flavour hadron decays. At smaller $m_{\rm ee}$, thermal radiation from the hot hadronic phase contributes to the dielectron spectrum via decays of $\rho$ mesons, whose spectral function is sensitive to chiral-symmetry restoration. Finally, at vanishing $m_{\rm ee}$, the real direct photon fraction can be extracted from the dielectron data. In pp collisions, such measurement in minimum bias events serves as a baseline and a fundamental test for perturbative QCD calculations, while studies in high charged-particle multiplicity events allow one to search for thermal radiation in small colliding systems. The latter show surprising phenomena similar to those observed in heavy-ion collisions.
In this talk, final ALICE results, using the full data sample collected during the LHC Run 2, will be presented. They include measurements of the dielectron and direct-photon production in central Pb--Pb at the centre-of-mass energy per nucleon pairs, $\sqrt{s_{\rm NN}}$, of 5.02 TeV, as well as of direct photons in minimum bias and high-multiplicity pp collisions at $\sqrt{s} = 13$ TeV. Finally, first results with the Run 3 pp data at $\sqrt{s} = 13.6$ TeV, using the upgraded ALICE detector to disentangle the different dielectron sources, will be reported.
Hypernuclei are bound states of nucleons and hyperons. The study of their properties, such as their lifetimes and binding energies, provides information on the hadronic interaction between hyperons and nucleons, complementary to that obtained from correlation measurements. Precise modeling of this interaction is a fundamental input for the calculation of the equation of state of high-density nuclear matter inside neutron stars. Moreover, measurements of their production rate in different collision systems are important to constrain (hyper)nuclei production models, such as the statistical hadronization model and baryon coalescence.
In this presentation, recent results on (anti)hypertriton production in small collision systems and the first-ever observations of (anti)hyperhydrogen-4 and (anti)hyperhelium-4 in Pb-Pb collisions are presented. These measurements pave the way for detailed investigations of the large charge symmetry breaking implied by the Λ binding energy difference in these hypernuclei. Moreover, differential measurements of their production yields will contribute to a better understanding of their production models. Recent results on hypertriton production, with high-precision measurements of its lifetime and binding energy in Pb-Pb collisions, will also be shown and discussed in the context of state-of-the-art theoretical models.
The investigation of the quark content of hadrons has been a major goal of non-perturbative strong interaction physics. In the last decade, several resonances in the mass range 1000-2000 MeV/$c^2$ have emerged that cannot be explained by the quark model. The internal structure of exotic resonances such as $\rm f_0$, $\rm f_1$, and $\rm f_2$ is currently unknown. Different scenarios are possible, ranging from two-quark and four-quark states to molecules, hybrid states, or glueballs. A modification of the measured yields of these exotic hadrons in AA and pA collisions as compared to pp collisions has been proposed as a tool to investigate their internal structure.
The excellent particle identification capabilities of the ALICE detector along with the large data sample collected in pp and p-Pb collisions provide an opportunity for multi-differential studies of such high-mass resonances. In this presentation, the first-ever measurement of $\rm f_1$ production in pp collisions and measurements of $\rm f_0$ and $\rm f_2$ production both in pp and p-Pb collisions will be presented. The measurements of their mass, width, and yields will be presented and their sensitivity to the internal structure of these exotic resonances will be discussed. These results will pave the way for future experimental investigations on the internal structure of other exotic hadrons.
Short-lived hadronic resonances are unique tools for studying the hadron-gas phase that is created in the late stages of relativistic heavy-ion collisions. Measurements of the yield ratios between resonances and the corresponding stable particles are sensitive to the competing rescattering and regeneration effects. These measurements in small collision systems, such as pp and p-Pb, are a powerful method to reveal a possible short-lived hadronic phase. In addition, resonance production in small systems is interesting to study the onset of strangeness enhancement, collective effects, and the hadron production mechanism. On this front, the $\phi$ meson is particularly relevant since its yield is sensitive to different production models: no effect is expected from strangeness canonical suppression, but its production is expected to be enhanced in the rope-hadronization scenario.
In this presentation, recent measurements of hadronic resonances in different collision systems, going from pp to Pb-Pb collisions, are presented. These include transverse momentum spectra, yields, and yield ratios as a function of multiplicity and spherocity. The presented results are discussed in the context of state-of-the-art phenomenological models of hadron production. The resonance yields measured in Xe-Xe and Pb-Pb collisions are used as an experimental input in a partial chemical equilibrium-based thermal model to constrain the kinetic freeze-out temperature. This is a novel procedure that is independent of assumptions on the flow velocity profile and the freeze-out hypersurface.
We present a selection of very recent results by the CMS collaboration on heavy-ion physics at the LHC (CERN).
The center-of-mass energies available at modern accelerators, such as the Large Hadron Collider (LHC), and at future generation accelerators, such as the Electron-Ion Collider (EIC) and Future Circular Collider (FCC), offer us a unique opportunity to investigate hadronic matter under the most extreme conditions ever reached. One of the most intriguing phenomena of strong interaction is the so-called gluon saturation in nucleons and nuclei. In the saturation regime, the density of partons, per unit transverse area, in hadronic wavefunctions becomes very large leading to non-linear effects, that are described by the Balitsky-JIMWLK hierarchy of equations.
Pursuing the goal of obtaining accurate theoretical predictions to test the physics of saturation, we compute the cross-sections of diffractive single and double hadron photo- or electroproduction with large $p_T$, on a nucleon or a nucleus at next-to-leading logarithmic accuracy. We employ a hybrid formalism mixing collinear factorization and high energy small-$x$ factorization. This new class of processes provides an access to precision physics of gluon saturation dynamics, with very promising future phenomenological studies at the EIC, or, at the LHC in $p A$ and $A A$ scattering, using Ultra Peripheral Collisions (UPC).
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06@desy.de
The modeling of the soft radiation in MC approaches and the inclusion of the intrinsic kT effect in a consistent and “simple” way is one of the successes of the Parton Branching TMD approach. In this approach, a consistent treatment of the parton shower evolution and the TMD evolution is carried out by the PB-TMD initial state shower. In this talk, the azimuthal correlation, φ12, of high transverse momentum jets in pp collisions at √s = 13 TeV is studied by applying PB-TMD distributions and the PB-TMD initial state shower to NLO calculations via MCatNLO. In the same kinematic regime, the Z+jet azimuthal correlation is also studied. The different patterns of Z+jet and dijet azimuthal correlations can be used to search for potential factorization-breaking effects in the back-to-back region, which depend on the different color and spin structures of the final states and their interferences with the initial states. In order to investigate these effects experimentally, we propose to measure the ratio of the distributions in φ for Z+jet and multijet production at low and at high transverse momenta.
Multi-jet events in various kinematic regimes are the subject of wide-ranging studies in the LHC program and at future colliders. Merging TMDs, parton showers and matrix elements is a delicate matter that is sensitive to the process and observable of interest. We present studies of the merging scale in the TMD merging framework, using the Cascade3 Monte Carlo generator. The merging scale separates hard and soft partonic emissions, and serves as an extension of the concept of the factorization scale which allows one to treat exclusive production channels. Differential jet rates of Z plus jet events at LHC energies have been investigated to determine the dependence of theoretical predictions on the merging scale as a function of the DY mass, including the case of high-mass DY, and to analyze the associated theoretical systematics.
At leading order in positron-proton collisions, a lepton scatters off a quark through virtual photon exchange, producing a quark jet and a scattered lepton in the final state. The total transverse momentum of the system is typically small; however, deviations from zero can be attributed to perturbative initial- and final-state radiation in the form of soft gluon radiation when the transverse momentum difference, $\vert\vec{P}_{\perp}\vert$, is much greater than the total transverse momentum of the system, $\vert\vec{q}_{\perp}\vert$. The soft gluon radiation comes only from the jet, and should result in a measurable azimuthal asymmetry between $\vec{P}_{\perp}$ and $\vec{q}_{\perp}$. Quantifying the contribution of soft gluon radiation to this asymmetry serves as a novel test of perturbative QCD as well as an important background estimate for measurements of the lepton-jet imbalance, which have recently garnered intense investigation. The measurement is performed on positron-proton collisions from HERA Run II recorded with the H1 detector. A new machine learning method is used to unfold eight observables simultaneously and unbinned. The final measurement, the azimuthal angular asymmetry, is then derived from these unfolded and unbinned observables. Results are compared with parton shower Monte Carlo predictions as well as soft gluon radiation calculations from a Transverse Momentum Dependent (TMD) factorization framework.
https://www-h1.desy.de/h1/www/publications/htmlsplit/H1prelim-23-031.long.html
We present novel analyses on accessing the 3D gluon content of the proton via spin-dependent TMD gluon densities, calculated through the spectator-model approach. Our formalism embodies a fit-based spectator-mass modulation function, suited to catch longitudinal-momentum effects in a wide kinematic range. Particular attention is paid to the time-reversal even Boer-Mulders and the time-reversal odd Sivers functions, whose accurate knowledge, needed to perform precise 3D analyses of nucleons, motivates synergies between LHC and EIC Communities.
We present a novel method for extracting the Collins-Soper kernel directly from the comparison of differential cross-sections measured at different energies. Using this method, we analyze simulated data from the CASCADE event generator and extract the Collins-Soper kernel predicted by the Parton Branching method over a wide range of transverse distances. Using the method, we also solve the long-standing problem of comparing TMDs obtained from the PB and factorization approaches.
The Transverse Momentum Dependent (TMD) Parton Branching (PB) method is a Monte Carlo (MC) framework for obtaining QCD high-energy collider predictions grounded in ideas originating from TMD factorization. It provides an evolution equation for the TMD parton distribution functions and allows one to use them within TMD MC generators.
In this work, we analyze the structure of the TMD PB Sudakov form factor. We discuss the logarithmic order of the low-$q_T$ resummation achieved so far by the PB method by comparing its Sudakov form factor to the Collins-Soper-Sterman (CSS) one, and we illustrate how the accuracy of PB can be increased by using the ideas of a physical (effective) coupling. By using appropriate integration limits in PB, we show how we can analytically identify a term analogous to the Collins-Soper (CS) kernel. We investigate the effects of different evolution scenarios on PB TMDs and integrated TMDs, and on a numerical extraction of the CS kernel.
The Parton-Branching method (PB) allows the determination of Transverse Momentum Dependent (TMD) parton densities, which cover the region from very small to large $k_T$. In the very small $k_T$ region, the contribution from the intrinsic motion of partons (intrinsic $k_T$) plays a role, as do contributions from very soft gluons, which are resummed in the evolution equation. A detailed study shows the importance of very soft gluons (below a resolvable scale) for both the integrated and the TMD parton densities.
The PB TMD parton densities, together with an NLO calculation of the hard process in the MC@NLO style, are used to calculate the transverse momentum spectrum of Drell-Yan pairs over a wide mass range. The sensitivity to the intrinsic $k_T$-distribution is used to determine its free parameters. Starting from the PB-NLO-HERAI+II-2018 set2 TMD parton distributions, the width of the intrinsic $k_T$-distribution is determined, resulting in a slightly larger width than in the default set.
The width of the intrinsic $k_T$-distribution is independent of the mass of the Drell-Yan pair and of the center-of-mass energy $\sqrt{s}$, in contrast to other approaches.
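As a toy illustration of the non-perturbative input discussed above, the intrinsic-$k_T$ distribution is commonly modelled as a Gaussian in the transverse-momentum vector. The width parameter `sigma` below is hypothetical, not the fitted PB value:

```python
import numpy as np

# Sample a 2D Gaussian intrinsic-kT vector; |kT| then follows a Rayleigh
# distribution, whose mean is sigma * sqrt(pi) / 2 for this parametrization.
rng = np.random.default_rng(42)
sigma = 1.0  # GeV, hypothetical width of the intrinsic-kT Gaussian

kx, ky = rng.normal(0.0, sigma / np.sqrt(2), (2, 100_000))
kt = np.hypot(kx, ky)
print(f"mean |kT| ≈ {kt.mean():.3f} GeV "
      f"(expected sigma*sqrt(pi)/2 ≈ {sigma * np.sqrt(np.pi) / 2:.3f})")
```

Fitting such a width to the low-mass Drell-Yan transverse momentum spectrum is, schematically, what the determination described above does.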
QCD calculations for collider physics make use of perturbative solutions of renormalisation group equations (RGEs). RGE solutions can contribute significantly to systematic uncertainties of theoretical predictions for physical observables. We propose a method to express these systematic effects in terms of resummation scales, using techniques borrowed from soft-gluon resummation approaches. We discuss applications to collinear and Sudakov processes at hadron colliders, including deep-inelastic scattering (DIS) structure functions and the Drell-Yan (DY) vector-boson transverse momentum distribution.
The talk is based on work in progress in collaboration with V. Bertone
and G. Bozzi and on work published in Phys. Rev. D 105 (2022) 096003
"Perturbative hysteresis and emergent resummation scales".
Conveners:
Thomas Blake (University of Warwick)
Marzia Bordone (CERN)
Thibaud Humair (MPP)
Contact: eps23-conveners-t08@desy.de
The Fermilab muon $g-2$ experiment was designed to measure the muon's anomalous magnetic moment $a_\mu=(g-2)/2$ to 140 parts per billion. The value of $a_\mu$ is proportional to the difference frequency $\omega_a = \omega_s - \omega_c$ between the muon's spin precession frequency and cyclotron frequency in the uniform magnetic field of the $g-2$ storage ring. The frequency $\omega_a$ is extracted from the time distribution of the muon-decay positrons recorded by 24 electromagnetic calorimeters positioned around the inner circumference of the storage ring. We will discuss the various approaches to the frequency determination, including the reconstruction and fitting of time distributions and the procedures for handling the effects of gain changes, positron pileup and beam dynamics. We also discuss the data consistency checks and the strategy for averaging $\omega_a$ across the different analyses.
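As a rough numerical illustration, at the "magic" momentum (where electric-field corrections vanish) the anomalous precession frequency follows the standard relation $\omega_a = a_\mu e B / m_\mu$. This sketch uses CODATA constants and the nominal 1.45 T storage-ring field; it is an order-of-magnitude check, not the collaboration's analysis:

```python
# omega_a = a_mu * e * B / m_mu at the magic momentum.
e = 1.602176634e-19        # elementary charge, C (exact)
m_mu = 1.883531627e-28     # muon mass, kg (CODATA)
B = 1.45                   # nominal storage-ring field, T
a_mu = 1.16592e-3          # approximate measured anomaly

omega_a = a_mu * e * B / m_mu                 # rad/s
f_a = omega_a / (2 * 3.141592653589793)       # Hz
print(f"omega_a ≈ {omega_a:.3e} rad/s, f_a ≈ {f_a / 1e3:.0f} kHz")
```

The result is a precession frequency of roughly 229 kHz, i.e. a period of a few microseconds, which is the characteristic "wiggle" frequency visible in the calorimeter time distributions.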
The Muon g-2 experiment at Fermilab measures the muon magnetic-moment anomaly, $a_\mu=(g-2)/2$, with the ultimate goal of 140 parts per billion (ppb) precision. This requires determining the absolute magnetic field, averaged over space and time, experienced by the muons, expressed as the nuclear magnetic resonance frequency of protons in a spherical pure water sample at a specified reference temperature. A chain of calibrations and measurements maps and tracks the magnetic field providing the muon-weighted average field with precision better than 60 ppb. This talk will present the principles, practical realizations, and innovations incorporated into the measurement and analysis of the magnetic field for the 2019-20 data sets.
The Muon g-2 experiment at Fermilab is making progress towards its physics goal of measuring the muon anomalous magnetic moment with the unprecedented precision of 140 parts per billion. In April 2021 the collaboration published the first measurement, based on the first year of data taking. The second result is based on the second and third years of data taking combined. In this talk, we discuss the corrections to the anomalous spin precession signal due to beam dynamics effects being used to determine the anomalous spin precession frequency for the second result.
During the last 15 years the "Radio MontecarLow" ("Radiative Corrections and Monte Carlo Generators for Low Energies") Working Group, see www.lnf.infn.it/wg/sighad/, has been providing valuable support to the development of radiative corrections and Monte Carlo (MC) generators for low-energy e+e- data and tau-lepton decays. Since its start in 2006 it has brought together, at 20 meetings, theorists and experimentalists working in the field of e+e- physics, and in part also the tau community. It produced the report "Quest for precision in hadronic cross sections at low energy: Monte Carlo tools vs. experimental data", S. Actis et al., Eur. Phys. J. C 66, 585-686 (2010) (https://arxiv.org/abs/0912.0749), which has more than 300 citations.
While the working group has been operating for more than 15 years without a formal basis for funding, parts of our program have recently been included as a Joint Research Initiative in STRONG2020, the group application of the European hadron physics community to the European Union, with the specific goal of creating an annotated database for low-energy hadronic cross sections in e+e- collisions. The database will contain information about the reliability of the data sets, their systematic errors, and the treatment of radiative corrections (RC). In parallel, the theory community is continuing its effort towards the realization of an MC generator with full NNLO corrections for low-energy e+e- annihilation into hadrons, which is of relevance for the precise determination of the leading hadronic contribution to the muon g-2. We will report on both these initiatives.
Systematic Operator Product Expansions can be applied to the hadronic light-by-light tensor in those kinematic regimes where at least one external Euclidean momentum is large. In this talk it is reviewed how this can be applied to the different kinematic regimes entering the muon g-2 integral, shedding some light on the interplay of short-distance and long-distance contributions in the data-driven approach and constraining the kinematic regime from which the largest uncertainty in the HLbL contribution originates. This leads to better theoretical control of the corresponding uncertainties. This talk is based on Phys. Lett. B 798 (2019) 134994, JHEP 10 (2020) 203, JHEP 04 (2021) 240 and especially on JHEP 02 (2023) 167 and work in progress.
Charged lepton flavor violation (CLFV) is forbidden in the Standard Model but possible in several new-physics scenarios. Thus, the observation of CLFV would be a clear signature of new physics. We report world-leading results on lepton-flavor-violating decays of the $\tau$ lepton into a light charged lepton ($\ell = e, \mu$) and a vector meson using the full data sample collected by the Belle experiment. In addition, we report searches for new physics in $\tau$ decays, especially decays involving a heavy neutral lepton. We also cover recent searches for $\tau$ decays into a scalar non-Standard-Model particle and a light charged lepton. The results are based on the data set collected by the Belle experiment at the KEKB asymmetric-energy $e^{+}e^{-}$ collider.
The CPT symmetry is one of the most fundamental symmetries in physics. Any violation of this symmetry would have profound implications for our understanding of the universe [1]. In this study, we report CPT symmetry tests in 3$\gamma$ decays of polarised $^3$S$_{1}$ positronium using the Jagiellonian Positron Emission Tomography (J-PET) device. The J-PET experiment allows sensitive and precise tests of CPT symmetry by measuring the angular correlation between the spin of ortho-positronium and the directions of the momenta of the annihilation photons emitted in its decay [2]. The potential of J-PET in determining the full range of the expectation value of this correlation has already improved the precision of the CPT symmetry test to the 10$^{-4}$ level [3]. The accuracy of that previous measurement was limited by statistics only. The new test is based on increased statistics, thanks to a modified experimental setup aimed at improving the detection efficiency and to the use of a different positronium production chamber. The high precision of this test would open the possibility to explore the limits of CPT symmetry validity in the charged leptonic sector.
[1] R. Lehnert, Symmetry 8(11), 114 (2016).
[2] P. Moskal et al., Acta Phys. Polon. B 47, 509 (2016).
[3] P. Moskal et al., Nat. Commun. 12, 5658 (2021).
The MEG II experiment, which focuses on investigating Charged Lepton Flavour Violation in muon decays, completed the commissioning of all subdetectors in time for the 2021 run. Recently, it concluded the second year of data collection at the Paul Scherrer Institut (CH).
The experimental apparatus has been specifically designed to search for $\mu^+ \rightarrow e^+ \gamma$ decays, aiming for a significant improvement over the current sensitivity of $4.2 \times 10^{-13}$. This requires high-performance and lightweight detectors capable of handling the pileup effect.
In this presentation, I will provide a brief overview of the experimental techniques employed and share the lessons learned from operating the detectors in a high-rate environment. Subsequently, I will describe the analysis approach applied to the 2021 dataset. Although we collected only a few weeks of data in 2021, we anticipate achieving a sensitivity of $8.5 \times 10^{-13}$, although our analysis will be strongly limited by the available statistics. Currently, the data in the signal region are blinded, and we are preparing the likelihood analysis using signal sidebands. The MEG experiment's current limit is twice as good as what we can achieve with the 2021 dataset but relies on the complete dataset spanning approximately four years. However, the 2021 data provides a unique opportunity to fine-tune the analysis procedure and address systematic uncertainties as the collaboration works towards improving statistical precision.
Finally, I will present the data-taking plan for MEG II, which will continue until the end of 2026, to achieve a sensitivity of $6 \times 10^{-14}$. Additionally, I will touch upon some potential other searches being considered by the MEG II collaboration, extending beyond the $\mu^+ \rightarrow e^+ \gamma$ decay channel.
Conveners:
Ilaria Brivio (Universita di Bologna)
Karsten Köneke (Universität Freiburg)
Matthias Schröder (Universität Hamburg)
Contact: eps23-conveners-t09@desy.de
In this talk, the latest results from the CMS experiment on inclusive and simplified template cross section measurements of the Higgs boson in the fermionic decay channels are discussed. Measurements of the Higgs boson couplings in these channels are also presented.
Detailed measurements of Higgs boson properties and its interactions can be performed using its decays into fermions, providing a key window into the nature of the Yukawa interactions. This talk presents the latest measurements of the Higgs boson properties in various leptonic (ττ, μμ) and quark (bb,cc) decay channels by the ATLAS experiment, using the full Run 2 pp collision dataset collected at 13 TeV. They include in particular measurements within the Simplified Template Cross Section framework, and their interpretations in specific scenarios of physics beyond the Standard Model, as well as generic extensions in the context of Standard Model Effective Field Theories.
The study of Higgs boson production in association with one or two top quarks provides a key window into the properties of the two heaviest fundamental particles in the Standard Model, and in particular into their couplings. This talk presents measurements of tH and ttH production in pp collisions collected at 13 TeV with the ATLAS detector using the full Run 2 dataset of the LHC.
In this talk, the latest results from the CMS experiment on inclusive and simplified template cross section measurements of the Higgs boson in the bosonic decay channels are discussed. Measurements of the Higgs boson couplings in these channels are also presented.
Higgs boson decays to bosons provide very detailed measurements of its properties and interactions, and shine light on the mechanism of electroweak symmetry breaking. This talk presents the latest measurements of the Higgs boson coupling properties performed by the ATLAS experiment in various bosonic decay channels (WW, ZZ and γγ) using the full Run 2 pp collision dataset collected at 13 TeV. Results on production mode cross sections, Simplified Template Cross Sections (STXS), and their interpretations are presented. Specific scenarios of physics beyond the Standard Model are tested, as well as generic extensions within the framework of the Standard Model Effective Field Theory.
We present an overview of the most recent differential and fiducial Higgs boson cross section measurements from CMS. A variety of Higgs boson final states are covered.
The Higgs boson decay to two W bosons provides the largest branching fraction among bosonic decays, and can be used to perform some of the most precise measurements of the Higgs boson production cross sections. This talk presents Higgs boson fiducial and differential cross section measurements by the ATLAS experiment in the WW decay channel, targeting both the gluon-gluon fusion and vector-boson fusion production modes, as well as complementary measurements in the ZZ and gamma-gamma final states. The results are based on pp collision data collected at 13 TeV and 13.6 TeV during Run 2 and Run 3 of the LHC.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10@desy.de
We study the impact of three different BSM models on the charge asymmetry defined for the 2SS$\ell$ (with $\ell = e, \mu$) plus jets ($n_j\geq2$) final state at the LHC, at $\sqrt{s}=13$ TeV, where the main SM contribution is $t\bar{t}W$ production. We consider the impact of a heavy neutral scalar/pseudoscalar arising from a 2HDM model; a simplified RPV MSSM model with electroweakino production (Higgsino- or wino-like); and an effective theory with dimension-6 four-quark operators. We propose measuring the charge asymmetries differentially with respect to different kinematic observables, and inclusively/exclusively in the number of b-tagged jets in the final state ($n_b\geq\{1, 2, 3\}$). We show that the 2HDM and four-quark operator scenarios may be sensitive to the detection of new physics, even for an integrated luminosity of 139 fb$^{-1}$.
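As a reminder of the basic observable, a charge asymmetry of this type is built from the event counts in the two charge categories; a minimal sketch (the function name and the input counts are our own illustrative choices, not taken from the analysis):

```python
import math

def charge_asymmetry(n_plus, n_minus):
    """A_C = (N+ - N-) / (N+ + N-), with a simple Poisson-propagated uncertainty."""
    total = n_plus + n_minus
    a = (n_plus - n_minus) / total
    # standard error propagation assuming independent Poisson counts
    sigma = 2.0 * math.sqrt(n_plus * n_minus / total**3)
    return a, sigma

# e.g. same-sign dilepton event counts split by total lepton charge (illustrative)
a, sigma = charge_asymmetry(1250, 980)
```

Measuring this quantity differentially simply means evaluating it in bins of a kinematic observable (or of $n_b$) instead of inclusively.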
Several physics scenarios beyond the Standard Model predict the existence of new particles that can subsequently decay into a pair of Higgs bosons. These include pairs of SM-like Higgs bosons (HH) as well as asymmetric decays into two scalars of different masses (SH). This talk summarises ATLAS searches for resonant HH and SH production with LHC Run 2 data. Several final states are considered, arising from various combinations of Higgs boson decays.
An overview of the results of searches for massive new resonances by the CMS Collaboration is presented. The results include searches for resonances such as W' and Z' particles decaying to final states with top quarks as well as charged Higgs boson searches. The CMS search program covers a variety of final states targeting different new physics models including extended Higgs sectors. The results are based on the large dataset collected during Run 2 of the LHC at a centre-of-mass energy of 13 TeV.
In the Standard Model, one doublet of complex scalar fields is the minimal content of the Higgs sector in order to achieve spontaneous electroweak symmetry breaking. However, several theories beyond the Standard Model predict a non-minimal Higgs sector and introduce charged scalar fields that do not exist in the Standard Model. As a result, singly- and doubly-charged Higgs bosons would be a unique signature of new physics with a non-minimal Higgs sector. As such, they have been extensively searched for in the ATLAS experiment, using proton-proton collision data at 13 TeV from LHC Run 2. In this presentation, a summary of the latest experimental results obtained in searches for both singly- and doubly-charged Higgs bosons is presented.
The discovery of the Higgs boson with a mass of about 125 GeV completed the particle content predicted by the Standard Model. Even though this model is well established and consistent with many measurements, it cannot by itself explain some observations. Many extensions of the Standard Model addressing such shortcomings introduce additional Higgs-like bosons. The current status of searches for additional low- and high-mass neutral Higgs bosons based on the full LHC Run 2 dataset of the ATLAS experiment at 13 TeV is presented.
Following the potential discovery of new heavy particles at the LHC or a future collider, it will be crucial to determine their properties and the nature of the underlying physics. Of particular interest is the possibility of Beyond-the-Standard-Model (BSM) scalar trilinear couplings.
In this talk, I will consider as a specific example the scalar top (stop) trilinear coupling parameter, which controls the stop–stop–Higgs interaction, in the Minimal Supersymmetric Standard Model, and I will discuss possible strategies for its experimental determination. I will show that the best prospects for determining the stop trilinear coupling arise from its quantum effects entering the prediction for the mass of the SM-like Higgs boson in comparison to the measured value. Importantly, the Higgs-boson mass exhibits a high sensitivity to the stop trilinear coupling even for heavy masses of the non-standard particles.
Next, I will review different renormalisation prescriptions for the stop trilinear coupling, and their impact in the context of Higgs-boson mass calculations. I will show that a mixed renormalisation scheme is preferred in view of the present level of accuracy of this calculation, and I will clarify the source of potentially large logarithms that cannot be resummed with standard renormalisation group methods.
Axion-like particles (ALPs) are gauge-singlets under the Standard Model (SM) and appear in many well-motivated extensions of the SM. Since they arise as pseudo-Nambu-Goldstone bosons of an approximate axion shift-symmetry, the masses of ALPs can naturally be much smaller than the energy scale of the underlying UV model, making them an attractive target for the Large Hadron Collider (LHC) and the future High-Luminosity LHC (HL-LHC). In this talk, we present a method for determining the nature of a possible signal in searches for ALPs produced via gluon-fusion and decaying into top-antitop-quark ($t\bar{t}$) final states in proton-proton scattering at $\sqrt{s} = 13$ TeV. Such a signal has the potential to explain a local $3.5\, \sigma$ excess in resonant $t\bar{t}$ production at a mass scale of approximately $400$ GeV, observed by the CMS collaboration in LHC Run-II data. In particular, we investigate how ALP production can be distinguished from the production of pseudoscalar Higgs bosons as they arise in models featuring a second Higgs doublet, making use of the invariant $t\bar{t}$ mass distribution and angular correlations sensitive to $t\bar{t}$ spin correlation. Furthermore, comparisons to existing experimental bounds from the LHC are presented and discussed.
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12@desy.de
With the restart of the proton-proton collision program in 2022 (Run 3) at the Large Hadron Collider (LHC), the ATLAS detector aims to double the integrated luminosity accumulated during the ten previous years of operation. After this data-taking period the LHC will undergo an ambitious upgrade program to be able to deliver an instantaneous luminosity of $7.5\times 10^{34}$ cm$^{-2}$ s$^{-1}$, allowing the collection of more than 3 ab$^{-1}$ of data at $\sqrt{s}=$14 TeV. This unprecedented data sample will allow ATLAS to perform several precision measurements to constrain the Standard Model (SM) in yet unexplored phase spaces, in particular in the Higgs sector, which is only accessible at the LHC. To benefit from such a rich data sample it is fundamental to upgrade the detector to cope with the challenging experimental conditions, which include huge levels of radiation and pile-up events. The ATLAS upgrade comprises a completely new all-silicon tracker with extended rapidity coverage that will replace the current inner tracker detector, and a redesigned trigger and data acquisition system for the calorimeters and muon systems, allowing the implementation of a free-running readout system. In addition, a new subsystem, the High Granularity Timing Detector, will aid the track-vertex association in the forward region by incorporating timing information into the reconstructed tracks. A final ingredient, relevant to almost all measurements, is a precise determination of the delivered luminosity with systematic uncertainties below the percent level. This challenging task will be achieved by combining the information from several detector systems using different and complementary techniques. This presentation will describe the status of the ongoing ATLAS detector upgrade and the main results obtained with the prototypes, giving a synthetic, yet global, view of the whole upgrade project.
In the high-luminosity era of the Large Hadron Collider, the instantaneous luminosity is expected to reach unprecedented values, resulting in up to 200 proton-proton interactions in a typical bunch crossing. To cope with the resulting increase in occupancy, bandwidth and radiation damage, the ATLAS Inner Detector will be replaced by an all-silicon system, the Inner Tracker (ITk). The innermost part of the ITk will consist of a pixel detector with an active area of about 13 m$^2$. To deal with the differing requirements in terms of radiation hardness, power dissipation and production yield, several silicon sensor technologies will be employed in the five barrel and endcap layers. Prototype modules assembled with RD53A readout chips have been built to evaluate their production rate. Irradiation campaigns were carried out to evaluate their thermal and electrical performance before and after irradiation. A new powering scheme – serial powering – will be employed in the ITk pixel detector, helping to reduce both the material budget of the detector and the power dissipation. This contribution presents the status of the ITk-pixel project, focusing on the lessons learned and the biggest challenges towards production, from mechanical structures to sensors, and it summarizes the latest results on closest-to-real demonstrators built using module, electric and cooling service prototypes.
The inner detector of the present ATLAS experiment was designed and developed to function in the environment of the present Large Hadron Collider (LHC). At the ATLAS Phase-II Upgrade, the particle densities and radiation levels will exceed current levels by a factor of ten. The instantaneous luminosity is expected to reach unprecedented values, resulting in up to 200 proton-proton interactions in a typical bunch crossing. The new detectors must be faster and more highly segmented. The sensors used also need to be far more resistant to radiation, and they require much greater power delivery to the front-end systems. At the same time, they cannot introduce excess material which could undermine tracking performance. For these reasons, the inner tracker of the ATLAS detector was redesigned and will be rebuilt completely. The ATLAS Upgrade Inner Tracker (ITk) consists of several layers of silicon particle detectors. The innermost layers will be composed of silicon pixel sensors, and the outer layers will consist of silicon microstrip sensors. This contribution focuses on the strip region of the ITk. The central part of the strip tracker (barrel) will be composed of rectangular short (~2.5 cm) and long (~5 cm) strip sensors. The forward regions of the strip tracker (end-caps) consist of six disks per side, with trapezoidal-shaped sensors of various lengths and strip pitches. After the completion of final design reviews in key areas, such as sensors, modules, front-end electronics and ASICs, a large-scale prototyping program has been successfully completed in all areas. We present an overview of the strip system and highlight the final design choices of sensors, module designs and ASICs. We will summarise results achieved during prototyping and the current status of pre-production and production of various detector components, with an emphasis on QA and QC procedures.
In June 2022 data taking of the Belle II experiment was stopped for Long Shutdown 1 (LS1), which is primarily required to install a new two-layer DEPFET pixel detector (PXD) and to upgrade components of the accelerator. The whole silicon tracker (VXD) will be extracted from Belle II, and the outer four-layer double-sided strip detector (SVD) will be split into its two halves to allow access for the PXD installation. A new VXD commissioning phase will then begin, so that the detector is ready to take data by the end of 2023. We describe the challenges and status of this VXD upgrade.
In addition, we report on the performance of the SVD, which has been operated since 2019. The high hit efficiency and the large signal-to-noise ratio are monitored via online data-quality plots.
The good cluster-position resolution is estimated using the unbiased residual with respect to the track, showing reasonable agreement with expectations. A novel procedure to group SVD hits event-by-event, based on their time, has been developed. Using the grouping information during reconstruction allows a significant reduction of the fake rate while preserving the tracking efficiency.
So far, in the layer closest to the I.P., the SVD average occupancy has been less than 0.5%, which is well below the estimated limit for acceptable tracking performance. As the luminosity increases, higher machine backgrounds are expected, and the excellent hit-time information of the SVD can be exploited for background rejection. We have developed a method that uses the SVD hit-time to estimate the collision time (event-T0) with a precision similar to the estimate based on the drift chamber. The execution time needed to compute the SVD event-T0 is three orders of magnitude smaller, allowing a faster online reconstruction that is crucial in a high-luminosity regime. Furthermore, the front-end chip (APV25) is operated in "multi-peak" mode, which reads six samples. To reduce background occupancy, trigger dead-time and data size, a 3/6-mixed acquisition mode, based on the timing precision of the trigger, has been successfully tested in physics runs.
Finally, concerning the radiation damage, the SVD dose is estimated by the correlation of the SVD occupancy with the dose measured by the diamonds of the monitoring and beam-abort system. Although the sensor current and the strip noise have shown a moderate increase due to radiation, we expect the detector performance will not be seriously degraded during the lifespan of the detector.
With the emergence of advanced silicon sensor technologies such as LGADs, it is now possible to achieve exceptional time measurement precision below 50 ps. As a result, the implementation of time-of-flight (TOF) particle identification for charged hadrons at future $e^{+}e^{-}$ Higgs factory detectors has gained increasing attention. Other particle identification techniques require a gaseous tracker with excellent dE/dx (or dN/dx) resolution, or a RICH, which adds additional material in front of the calorimeter.
TOF measurements can be implemented either in the outer layers of the tracker or in the electromagnetic calorimeter, and are thus particularly interesting as a PID method for detector concepts based on all-silicon trackers and optimised for particle-flow reconstruction.
In this presentation, we will explore potential integration scenarios of TOF measurement in a future Higgs factory detector, using the International Large Detector (ILD) as an example. We will focus on the challenges associated with crucial components of TOF particle identification, namely track length reconstruction and TOF measurements. The subsequent discussion will highlight the vital impact of precise track length reconstruction and various TOF measurement techniques, including recently developed machine learning approaches. We will evaluate the performance in terms of kaon-pion and kaon-proton separation as a function of momentum, and discuss potential physics applications.
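The PID principle discussed above reduces to reconstructing the particle mass from the momentum, the track length and the time of flight; a minimal sketch (function names and numbers are our own illustrative choices, not ILD values):

```python
import math

C_MM_PER_PS = 0.299792458  # speed of light in mm/ps

def tof_mass(p_gev, track_length_mm, tof_ps):
    """Reconstruct the mass (GeV) from momentum, track length and TOF:
    beta = L / (c * t),  m = p * sqrt(1/beta^2 - 1)."""
    beta = track_length_mm / (C_MM_PER_PS * tof_ps)
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)

def expected_tof_ps(p_gev, mass_gev, track_length_mm):
    """Expected TOF for a given mass hypothesis: beta = p / sqrt(p^2 + m^2)."""
    beta = p_gev / math.hypot(p_gev, mass_gev)
    return track_length_mm / (C_MM_PER_PS * beta)

# Kaon-pion TOF difference over a 2 m flight path at p = 3 GeV (illustrative)
M_PI, M_K = 0.13957, 0.49368  # GeV
dt_ps = expected_tof_ps(3.0, M_K, 2000.0) - expected_tof_ps(3.0, M_PI, 2000.0)
```

The kaon-pion TOF difference shrinks with momentum, which is why the separation power is quoted as a function of momentum and why both the TOF resolution and the track-length reconstruction accuracy are critical.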
The increase of the particle flux (pile-up) at the HL-LHC, with instantaneous luminosities up to $L \simeq 7.5 \times 10^{34}$ cm$^{-2}$ s$^{-1}$, will have a severe impact on the ATLAS detector reconstruction and trigger performance. The end-cap and forward region, where the liquid-argon calorimeter has coarser granularity and the inner tracker has poorer momentum resolution, will be particularly affected. A High Granularity Timing Detector (HGTD) will be installed in front of the LAr end-cap calorimeters for pile-up mitigation and luminosity measurement. The HGTD is a novel detector introduced to augment the new all-silicon Inner Tracker in the pseudo-rapidity range from 2.4 to 4.0, adding the capability to measure charged-particle trajectories in time as well as space. Two double-sided silicon-sensor layers will provide precision timing information for minimum-ionising particles with a resolution as good as 30 ps per track, in order to assign each particle to the correct vertex. Readout cells have a size of 1.3 mm × 1.3 mm, leading to a highly granular detector with 3.7 million channels. Low Gain Avalanche Detector (LGAD) technology has been chosen as it provides enough gain to reach the large signal-to-noise ratio needed. The requirements and overall specifications of the HGTD will be presented, as well as the technical design and the project status. The R&D effort carried out to study the sensors, the readout ASIC, and the other components, supported by laboratory and test beam results, will also be presented.
The LHCb experiment has been upgraded during the second long shutdown of the Large Hadron Collider at CERN, and the new detector is currently operating at the LHC. The Vertex Locator (VELO) is the detector surrounding the interaction region of the LHCb experiment and is responsible for reconstructing the proton-proton collision points (primary vertices) as well as the decay vertices of long-lived particles (secondary vertices).
The VELO is composed of 52 modules with hybrid pixel detector technology, operating at just 5.1 mm from the beams. The sensors consist of 200 μm thick n-on-p planar silicon sensors, read out via 3 VeloPix ASICs. The sensors are attached to a 500 μm thick silicon plate, which embeds 19 micro-channels for the circulation of the CO$_2$ evaporative cooling. The VELO operates in an extreme environment, which poses significant challenges to its operation. During the lifetime of the detector, the sensors are foreseen to accumulate an integrated fluence of up to 8×10$^{15}$ 1 MeV n$_{eq}$ cm$^{−2}$, roughly equivalent to a dose of 400 MRad. Moreover, due to the geometry of the detector, the sensors will face a highly non-uniform irradiation, with fluences in the hottest regions expected to vary by a factor of 400 within the same sensor. The highest-occupancy ASICs are expected to sustain a maximum pixel hit rate of 900 Mhit/s and an output data rate exceeding 15 Gbit/s. The design, operation and early results obtained during the first year of commissioning will be presented.
The high-luminosity upgrade of the LHC (HL-LHC) brings unprecedented requirements for real-time and precise bunch-by-bunch online luminosity measurement and beam-induced background monitoring. A key component of the CMS Beam Radiation Instrumentation and Luminosity (BRIL) system is a stand-alone luminometer, the Fast Beam Condition Monitor (FBCM), which is fully independent of the CMS central trigger and data acquisition services and able to operate at all times with an asynchronous readout. FBCM utilizes a dedicated front-end ASIC to amplify the signals from CO$_2$-cooled silicon-pad sensors with a timing resolution of a few nanoseconds, which also enables the measurement of beam-induced background. FBCM uses a modular design with two half-disks of twelve modules at each end of CMS, with 4 service modules placed around the disk edge at a radius of reduced radiation fluence. The electronics system design adapts several components from the CMS Tracker for power, control and readout functionalities. The dedicated FBCM ASIC contains 6 channels with an ENC of 600 e$^-$ and an adjustable shaping time to optimize the noise with respect to the sensor leakage current. Each channel outputs a single binary high-speed asynchronous signal carrying the time-of-arrival and time-over-threshold information. The chip output signal is sent via a radiation-hard gigabit transceiver and an optical link to the back-end electronics for analysis. This contribution reports on the design and testing program of the FBCM detector.
Conveners:
Alessia Bruni (INFN Bologna)
Marie-Lena Dieckmann (Universität Hamburg)
Gwenhaël Wilberts Dewasseige (UC Louvain)
Contact: eps23-conveners-t14@desy.de
Since 1983 the Italian groups collaborating with Fermilab (US) have been running a two-month summer training program for Master's students. While in the first year the program involved only 4 physics students, in the following years it was extended to engineering students. Many students have continued their collaboration with Fermilab through their Master's theses and PhDs.
The program has involved almost 600 Italian students from more than 20 Italian universities. Each intern is supervised by a Fermilab mentor responsible for the training program. Training programs have spanned Tevatron, CDF, CMS, Muon g-2, Mu2e, SBN (MicroBooNE, SBND and ICARUS) and DUNE design and data analysis; the development of particle detectors; the design of electronic and accelerator components; the development of infrastructures and software for tera-data handling; quantum computing; and research on superconductive elements and accelerating cavities.
In 2015 the University of Pisa included the program within its own educational offer. Summer students are enrolled at the University of Pisa for the duration of the internship, at the end of which they write summary reports on their achievements. After a positive evaluation by a University of Pisa examining board, interns are awarded 6 ECTS credits for their Diploma Supplement. In 2020 and 2021 the program was cancelled due to the COVID-19 health emergency, but in 2022 it was restarted, allowing a cohort of 21 students to be trained for nine weeks at Fermilab. We are now organizing the 2023 program.
REINFORCE (Research Infrastructures FOR Citizens in Europe) was a three-year EU SwafS project which engaged citizens in active collaboration with the scientists working in large research infrastructures across Europe. The overall aim was to bridge the gap between citizens and scientists and reinforce society's science capital. The citizen scientists had at their disposal data from four different "discovery demonstrators" hosted on the online Zooniverse platform.
The demonstrators invited citizens to contribute to frontier research on gravitational-wave astronomy, deep-sea neutrino telescopes, particle searches at CERN and cosmic rays. The task of the citizens was to help the scientists optimize the detectors and/or the reconstruction algorithms.
The focus of the talk will be on the demonstrator titled "Search for new particles at CERN", where citizen scientists visually inspected events collected by the ATLAS detector at the LHC and searched for signatures of new particles. To make this possible, the demonstrator adopted a three-stage architecture. The first two stages used simulated data to train citizens, but also to allow for a quantitative assessment of their performance and a comparison with machine learning algorithms. The third stage used real data, providing two research paths: (a) the study of Higgs boson decays to two photons, one of which could be converted to an electron-positron pair by interaction with detector material, and (b) the search for yet undiscovered long-lived particles, predicted by certain beyond-the-Standard-Model theories.
The results of 360,000 classifications showed that citizen scientists can carry out complicated tasks responsibly, with a performance comparable to that of a purpose-built machine-based algorithm, and can identify interesting patterns or reconstruction errors in individual events. Moreover, the demonstrator showed that the statistical combination of user responses (user consensus) is quite a powerful tool that can be further considered and exploited in fundamental scientific research.
The demonstrator approach to applying citizen science to high energy physics proved that users can contribute to the field, and it also identified areas where further study is necessary.
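The user-consensus aggregation mentioned above can be sketched in a few lines of Python. This is a hypothetical illustration of majority-vote label combination, not the demonstrator’s actual pipeline; the function name, agreement threshold and example labels are all invented for the sketch:

```python
from collections import Counter

def user_consensus(classifications, threshold=0.8):
    """Aggregate independent volunteer labels for one event.

    classifications: list of labels submitted by different users
    threshold: minimum agreement fraction to accept the majority label
    Returns (label, agreement), with label = None if consensus is weak.
    """
    counts = Counter(classifications)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(classifications)
    return (label if agreement >= threshold else None), agreement

# Example: ten volunteers inspect the same (hypothetical) event display
labels = ["converted photon"] * 8 + ["unconverted photon"] * 2
print(user_consensus(labels))  # ('converted photon', 0.8)
```

In a real analysis the threshold would be tuned on the simulated training stages, where each volunteer’s accuracy can be measured against truth.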
The International Particle Physics Outreach Group (IPPOG) is a network of scientists, science educators and communication specialists working across the globe in informal science education and public engagement for particle physics. The primary methodology adopted by IPPOG is the direct participation of scientists active in current research together with education and communication specialists, in order to effectively develop and share best practices in outreach. IPPOG member activities include the International Particle Physics Masterclass programme, the International Day of Women and Girls in Science, Worldwide Data Day, International Muon Week and International Cosmic Day organisation, and participation in activities ranging from public talks, festivals, exhibitions, teacher training and student competitions to open days at local institutes. These independent activities, often carried out in a variety of languages for audiences with a variety of backgrounds, all serve to gain the public’s trust and to improve worldwide understanding and support of science. We present our vision of IPPOG as a strategic pillar of particle physics, fundamental research and evidence-based decision-making around the world.
The war in Ukraine has significantly affected scientific cooperation and communication in particle physics, as well as in many other fields of scientific, cultural and educational exchange. Immediately after the start of the war in February 2022, many scientific institutions paused or banned scientific cooperation and exchange with Russian and Belarusian institutes and their scientists. Publications were put on hold or even banned if Russian scientists were on the author list.
The Science4Peace Forum was created in March 2022 as a consequence of these restrictions, as a completely independent forum for open communication among scientists, using independent Zoom rooms and webpages. The basic ideas of the S4P Forum are fully in line with the IUPAP policy of supporting and encouraging free scientific exchange. In the course of discussions on the consequences of the war in Ukraine, the S4P Forum organized a high-level panel discussion, "Sanctions in Science - One Year of Sanctions" [[1]].
The war in Ukraine has also enormously increased the risk of nuclear escalation. Together with 14 Nobel laureates and many scientists, the S4P Forum launched an appeal to subscribe to a "no-first-use" policy and urged governments to sign the Treaty on the Prohibition of Nuclear Weapons adopted by the United Nations [[2]].
The S4P Forum fully supports the ideas and activities originating from the International Year of Basic Sciences for Sustainable Development (IYBSSD) to open and to keep open discussion channels at all levels, using Science as a common language.
This presentation is submitted on behalf of the Science4Peace Forum.
The communities of astrophysicists, astronomers and high-energy physicists have been pioneers in establishing Virtual Research and Learning Communities (VRLCs) [1], generating productive international consortia in virtual research environments and training the new generation of scientists. These environments are key to improving accessibility and inclusion for students and researchers in developing countries. In this talk we will discuss one in particular: LA-CoNGA physics (Latin American alliance for Capacity buildiNG in Advanced physics) [2].
LA-CoNGA physics aims to support the modernization of the university infrastructure and the pedagogical offer in advanced physics in four Latin American countries: Colombia, Ecuador, Peru and Venezuela. This virtual teaching and research network is composed of 3 partner universities in Europe and 8 in Latin America, high-level scientific partners (CEA, CERN, CNRS, DESY, ICTP), and several academic and industrial partners. The project is co-funded by the Education, Audiovisual and Culture Executive Agency (EACEA) of the European Commission.
Open Science education and Open Data are at the heart of our operations. In practice, LA-CoNGA physics has created a set of common, inter-institutional postgraduate courses in advanced physics (high energy physics and complex systems), supported by the installation of interconnected instrumentation laboratories and an open e-learning platform. This program is embedded as a specialization in the physics master’s programs of the 8 Latin American partner universities in Colombia, Ecuador, Peru and Venezuela. It is based on three pillars: courses in high energy physics theory/phenomenology, data science and instrumentation. The program is complemented by transversal activities such as seminars, citizen science projects and open science hackathons [3].
In the current context, VRLCs and e-learning platforms are helping to meet challenges such as distance education during the COVID-19 pandemic and the internationalization of institutions in developing countries.
[1] http://www.oecd.org/sti/inno/international-distributed-research-infrastructures.pdf
[2] https://laconga.redclara.net/en/home/
[3] https://laconga.redclara.net/hackathon/
The ATLAS Collaboration has developed a number of highly successful programmes featuring educational content for schools and universities, as well as communication strategies to engage the broader public. The ATLAS Open Data project has successfully delivered open-access data, simulations, documentation and related resources for education and outreach use in high energy physics and related computer sciences, based on data collected in proton–proton collisions at 8 TeV and 13 TeV. These resources have found substantial application worldwide in schools, universities and other public settings. Building on this success, and in support of CERN’s Open Data Policy, the ATLAS experiment plans to continue releasing 13 TeV data for educational purposes and – for the first time – also for research purposes. The ATLAS Communication Programme prepares substantial web content through online press statements, briefings that explain topical result releases to the public, video content (interviews with analysers, tours and live Q&As), and social media engagement. We will summarise the landscape of the ATLAS Open Data project and discuss communication strategies, types of content, and the effect on target audiences, sharing best practices.
The “Congrès des deux infinis” (“Congress of the two infinities”) was a large education and outreach festival organized last fall on Réunion island, in parallel with the international conference EDSU2022 – “Fourth World Summit on Exploring the Dark Side of the Universe”. Over two weeks, dozens of events – public lectures, talks in schools, university seminars, school contests, teacher training sessions, topical round tables, etc. – allowed more than 6,500 attendees (teachers, students, general audience) from all over the island to benefit from this uncommon concentration of French-speaking physics researchers.
This Congress, unique in many ways, did not come about by chance. It was the climax of a decade of activities organized around the “infinitely small” and the “infinitely large” basic science topics by a group of extremely motivated local teachers, far from mainland France – and thus with limited access to research institutes, scientists or even educational resources. This endeavor started with the participation of some of these teachers in the French Teacher Program, organized yearly at CERN for 15 years, and then expanded with the help of CNRS/IN2P3 (the leading public research institute for these fields in France) and the Ministry of National Education.
After presenting the main achievements of these teachers over the years – characterized by the recurring use of the unique Réunion environment as a playground for educational activities – this parallel talk will describe in detail the contents of the “Congrès des deux infinis”, with a focus on the organizational difficulties that had to be overcome during the preparation phase and later, during the Congress itself, to give all participants a high-quality experience. To conclude, the experience gained along the way will be shared with colleagues (researchers or teachers) who may be interested in organizing a similar event.
INFN Kids (*) is a science education project of the Italian National Institute for Nuclear Physics (INFN) addressed to young people of primary and middle school age. The initiative aims at raising children’s curiosity about science, with a focus on physics, inspiring them by illustrating the different research fields that INFN pursues and the developments in technology along with their applications in everyday life, and by presenting the people who animate science. It gathers technicians and researchers from thirteen INFN units and national laboratories in the design and realization of multimedia products, laboratory-based activities, comics, science demos and exhibits. The activities are conducted online and in person in schools, at science festivals and at INFN sites.
The adopted methodologies and the didactic tools (lectures, interactive lessons, hands-on sessions, science games) involve children in the direct exploration of natural phenomena.
Given the manifold plan of activities, the project also reaches teachers and families, which has allowed us to expand and adapt the formats to meet the audience’s requests.
Here we present an overview of the ongoing initiatives to share our experiences. In particular, we illustrate the comics centered on the characters Leo and Alice, who guide children through the investigation of the micro- and macro-world, and the laboratory-based activities designed to introduce kids to some fundamental concepts related to matter and its inner structure.
(*) https://web.infn.it/infn-kids/
The climate crisis and the degradation of the world's ecosystems require humanity to take immediate action. The international scientific community has a responsibility to limit the negative environmental impacts of basic research. The HECAP+ communities (High Energy Physics, Cosmology, Astroparticle Physics, and Hadron and Nuclear Physics) make use of common and similar experimental infrastructure, such as accelerators and observatories, and rely similarly on the processing of big data. Our communities therefore face similar challenges to improving the sustainability of our research. This document aims to reflect on the environmental impacts of our work practices and research infrastructure, to highlight best practice, to make recommendations for positive changes, and to identify the opportunities and challenges that such changes present for wider aspects of social responsibility.
Conveners:
Livia Conti (INFN)
Carlos Perez de los Heros (Uppsala University)
Martin Tluczykont (Universität Hamburg)
Gabrijela Zaharijas (UNG)
Contact: eps23-conveners-t01 @desy.de
The IceCube Neutrino Observatory has measured the high-energy astrophysical neutrino flux but has not yet detected prompt atmospheric neutrinos originating from charmed meson decays. Understanding the prompt neutrino flux is crucial for improving models of high-energy hadronic interactions and advancing astrophysical neutrino measurements. We present a combined analysis of cascades and up-going tracks to explore the subdominant prompt neutrino flux. We propose a robust method for calculating upper limits of the prompt flux, considering the model dependencies on the astrophysical and conventional atmospheric neutrino fluxes.
We investigate the kinematical regions that are important for producing prompt neutrinos in the atmosphere and in the forward region of the LHC, as probed by different experiments. We illustrate the results in terms of the center-of-mass nucleon-nucleon collision energies and rapidities of neutrinos and of the parent heavy-flavoured hadrons. We find overlap in only part of the kinematic space and we point out the physics requirements needed to appropriately describe the two regimes.
The contribution is based on W. Bai et al. [arXiv:2212.07865]
In recent years, the IceCube Neutrino Observatory has started to unravel the high-energy neutrino sky. The discoveries of TXS0506+056 and NGC1068 as neutrino emitters and neutrino emission from the galactic plane hint at a zoo of possible neutrino sources. However, open questions regarding the production mechanisms remain that require a new generation of neutrino telescopes to answer.
The Pacific Ocean Neutrino Experiment (P-ONE) is a planned next-generation neutrino telescope off the coast of Vancouver Island, where it will leverage deep-sea infrastructure provided by Ocean Networks Canada (ONC). Once completed, P-ONE aims for greatly improved resolution compared to IceCube, complementing other next-generation telescopes such as KM3NeT. The first detector line is currently under construction and targeted for deployment in 2024. In this contribution, I will present the status of the first detector line and prospects for the full detector array.
Dark compact objects, such as primordial black holes, can span a large range of masses depending on their time and mechanism of formation. In particular, they can have subsolar masses and form binary systems with an inspiral phase that can last for long periods of time. Such signals have a slowly increasing frequency and are therefore well suited to searches with continuous gravitational-wave methods. We present a new pipeline called COmpact Binary Inspiral (COBI), based on the Band Sampled Data (BSD) framework, which specifically targets these signals. We describe the method and propose a possible setup for a search on O4 LIGO-Virgo data. We characterize the pipeline’s performance in terms of sensitivity and computing cost, corroborating the results with software injections on O3 data.
The current and upcoming generations of gravitational wave experiments represent an exciting step forward in terms of detector sensitivity and performance. Key upgrades at the LIGO, Virgo and KAGRA facilities will see the next observing run (O4) probe a spatial volume around four times larger than the previous run (O3), and design implementations for e.g. the Einstein Telescope, Cosmic Explorer and LISA experiments are taking shape to explore a wider frequency range and probe cosmic distances.
In this context, however, a number of imminent data analysis problems face the gravitational wave community. It will be crucial to develop tools and strategies to analyse (amongst other scenarios) signals that arrive coincidentally in detectors, longer signals that are in the presence of non-stationary noise or other shorter transients, as well as noisy, potentially correlated, coherent stochastic backgrounds. With these challenges in mind, we develop PEREGRINE, a new sequential simulation-based inference approach designed to study broad classes of gravitational wave signal.
In this talk, I discuss the pressing need for flexible, simulation-efficient, targeted inference tools like PEREGRINE, before demonstrating its accuracy and robustness through direct comparison with established likelihood-based methods. Specifically, we show that we are able to fully reconstruct the posterior distributions for every parameter of a spinning, precessing compact binary coalescence using one of the most physically detailed and computationally expensive waveform approximants (SEOBNRv4PHM). Crucially, we are able to do this using only 2% of the waveform evaluations required by e.g. nested sampling approaches, establishing our simulation efficiency as the state of the art in gravitational-wave data analysis.
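To illustrate the basic idea of simulation-based inference – building a posterior from simulator calls alone, with no explicit likelihood – here is a minimal rejection-sampling (ABC-style) sketch on a toy one-parameter model. This is emphatically not PEREGRINE’s algorithm (which is sequential and neural-network based); every name, prior range and tolerance below is invented for the example:

```python
import random
import statistics

def simulator(theta, n=50, rng=random):
    # Toy "signal": n Gaussian samples with unknown mean theta, unit noise
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def abc_posterior(observed, n_draws=5000, eps=0.1, rng=random):
    """Rejection-style simulation-based inference for the toy model:
    draw theta from a flat prior, simulate, and keep draws whose summary
    statistic (the sample mean) lies within eps of the observed one."""
    s_obs = statistics.fmean(observed)
    kept = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)      # flat prior on the parameter
        s_sim = statistics.fmean(simulator(theta, rng=rng))
        if abs(s_sim - s_obs) < eps:        # accept if summaries match
            kept.append(theta)
    return kept                              # samples from the posterior

rng = random.Random(0)
data = simulator(2.0, rng=rng)               # "observation", true theta = 2
post = abc_posterior(data, rng=rng)
print(round(statistics.fmean(post), 1))      # posterior mean near 2
```

Methods like PEREGRINE replace the crude accept/reject step with a trained neural ratio estimator and focus simulations sequentially near the observed data, which is what buys the large reduction in simulator (waveform) calls.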
Dust particles (diameter ≳ 0.5 µm) present inside the clean environments of ground-based interferometric gravitational-wave detectors can contribute significantly to light scattering, adding to the residual scattering originating from imperfections of high-quality optical components. Stray light – light that leaves the main optical beam, picks up phase noise by reflecting off mechanically noisy surfaces, and couples back into the main beam – is suspected to contribute much of the unexplained excess noise observed in the mid-to-low frequency band. Dust particles can scatter light both when deposited on the optics and when crossing the beam as they move through space. Knowing the amount and size distribution of dust particles present in the different environments, we can predict the amount of scattered light they generate and elaborate mitigation strategies. We describe the dust monitoring system we have set up at Virgo to quantify the amount of dust that deposits both on in-air benches and in vacuum towers, serving both as a check of cleanliness procedures and as an alert system. We also describe work to estimate the effect of dust particles in the beam pipes of the future Einstein Telescope: this is fundamental for setting cleanliness requirements for the production and installation of the ~100 km of vacuum tubes of the interferometer’s main arms. Finally, we describe an experimental facility to measure the particles deposited on witness samples and to measure the scattering properties of surfaces, both clean and contaminated by dust.
Conveners:
Fady Bishara (DESY)
James Frost (University of Oxford)
Silvia Scorza (LPSC Grenoble)
Contact: eps23-conveners-t03 @desy.de
DarkSide-50 is an experiment for the direct detection of dark matter at Laboratori Nazionali del Gran Sasso. It uses a dual-phase time projection chamber filled with low-radioactivity argon extracted from underground. Thanks to single-electron sensitivity and an analysis based on the ionization signal alone, DarkSide-50 set the most stringent exclusion limit on WIMPs with masses of a few GeV/c$^2$. A recent analysis improves the existing exclusion limits for spin-independent WIMP-nucleon interactions in the [1.2, 3.6] GeV/c$^2$ mass range by a factor of 10. Thanks to the inclusion of the Migdal effect, the exclusion limits are extended down to a dark matter mass of 40 MeV/c$^2$. Furthermore, new constraints are set on the interactions of dark matter particles with electron final states, namely low-mass WIMPs interacting with electrons, galactic axions, dark photons, and sterile neutrinos.
XENONnT is the follow-up to the XENON1T experiment, aiming for the direct detection of dark matter in the form of weakly interacting massive particles (WIMPs) using a liquid xenon (LXe) time projection chamber (TPC). The detector, operated at Laboratori Nazionali del Gran Sasso (LNGS) in Italy, features a total LXe mass of 8.5 tonnes, of which 5.9 tonnes are active. XENONnT has completed its first science run and is currently taking data for the second science run. It has achieved unprecedented purity both for electronegative contaminants, with an electron lifetime exceeding 10 ms thanks to a novel liquid-phase purification, and for radioactive radon, with an activity of 1.72±0.03 µBq/kg thanks to a novel radon distillation column.
This talk will present the latest results from the search for nuclear recoils induced by WIMPs using data from the first science run with an exposure of 1.1 tonne-year. In addition, results from other searches for non-standard interactions and new particles via their electronic interactions will be shown.
LUX-ZEPLIN (LZ) is a direct detection dark matter experiment hosted in the Davis Campus of the Sanford Underground Research Facility in Lead, South Dakota. LZ's central detector is a dual-phase time projection chamber utilizing 7 active tonnes of liquid xenon (LXe), aided by a LXe "skin" detector and a liquid-scintillator-based outer detector to veto events inconsistent with dark matter particles. LZ recently reported its first results on spin-independent and spin-dependent interactions between nucleons and Weakly Interacting Massive Particles (WIMPs), based on an exposure of 60 live days with a fiducial mass of 5.5 tonnes, setting world-leading exclusion limits on spin-independent WIMP-nucleon scattering for WIMP masses > 9 GeV/c^2.
This talk will provide an overview of the experiment and details of the recent LZ results, as well as projections for LZ’s full exposure consisting of 1000 live days.
Cryogenic Rare Event Search with Superconducting Thermometers (CRESST) is a direct detection dark matter (DM) search experiment located at Laboratori Nazionali del Gran Sasso (LNGS) in Italy. The experiment employs cryogenic scintillating crystals to search for nuclear recoils from DM particles, and in its third phase (CRESST-III) has repeatedly achieved thresholds below 100 eV for a wide range of target materials including CaWO$_4$, LiAlO$_2$, Al$_2$O$_3$, and Si. This sensitivity to small energy depositions makes CRESST one of the leading experiments in the sub-GeV dark matter search. A major challenge for all low-mass dark matter searches is the presence of an unknown event population at very low energies, called the low-energy excess (LEE). The scientific effort at CRESST in the latest run has been directed primarily towards understanding the origin of this excess; nevertheless, we also set new limits on low-mass DM. We report dark matter search results as well as updates on the understanding of the LEE from CRESST-III, and conclude the talk with our future plans.
With its increasing statistical significance, the DAMA/LIBRA annual modulation signal is a cause of tension in the field of dark matter direct detection. A standard dark matter explanation for this signal is highly incompatible with the null results of numerous other experiments. The COSINUS experiment aims at a model-independent cross-check of the DAMA/LIBRA signal claim.
For such a model-independent cross-check, the same detector material as used by DAMA has to be employed. Thus COSINUS will use NaI crystals operated as cryogenic scintillating calorimeters at millikelvin temperatures. Such a setup enables independent measurement of both temperature and scintillation light signals via transition edge sensors (TESs). The dual-channel readout allows particle discrimination on an event-by-event basis, as the amount of light produced depends on the particle type (light quenching).
The construction of COSINUS started in December 2021 at the LNGS underground laboratory in central Italy. We will report on the current status of the construction and results from our prototype detectors.
The NEWS-G collaboration is searching for light dark matter candidates using a novel gaseous detector concept, the spherical proportional counter. Access to the mass range from 0.05 to 10 GeV is enabled by the combination of a low energy threshold, light gaseous targets (H, He, Ne), and highly radio-pure detector construction. Initial NEWS-G results obtained with SEDINE, a 60 cm diameter spherical proportional counter operated at LSM (France), excluded for the first time WIMP-like dark matter candidates down to masses of 0.5 GeV. First physics results from the commissioning data of S140, a 140 cm diameter spherical proportional counter constructed at LSM from 4N copper with a 500 µm electroplated inner layer, will be presented, along with new developments in read-out sensor technology using resistive materials and the multi-anode read-out that enables its operation. The first physics campaign with the detector at SNOLAB (Canada) was recently completed. To suppress backgrounds and so enhance the sensitivity of future detectors, NEWS-G is developing novel electroforming techniques. The potential of a next-generation detector, DarkSPHERE, to reach neutrino-floor sensitivity in light dark matter searches will also be presented.
Conveners:
Summer Blot (DESY)
Pau Novella (IFIC)
Davide Sgalaberna (ETH)
Jessica Turner (Durham University)
Contact: eps23-conveners-t04 @desy.de
T2K is a long-baseline neutrino experiment which exploits neutrino and antineutrino beams produced at the Japan Proton Accelerator Research Complex (J-PARC) to provide world-leading measurements of the parameters governing neutrino oscillation. Neutrino oscillations are measured by comparing neutrino rates and spectra at a near detector complex, located at J-PARC, and at the water-Cherenkov far detector, Super-Kamiokande, located 295 km away.
The latest T2K results include multiple analysis improvements; in particular, a new sample has been added at the far detector requiring the presence of a pion in muon-neutrino interactions. This is the first time a pion sample is included in the study of neutrino disappearance at T2K and, for the first time, a sample with more than one Cherenkov ring is exploited in the T2K oscillation analysis, opening the road to future samples with charged- and neutral-pion tagging. The inclusion of such a sample ensures proper control of the oscillated spectrum over a larger neutrino-energy range and of subleading neutrino-interaction processes.
T2K is also engaged in a major effort to perform a joint fit with the Super-Kamiokande atmospheric neutrino measurements and another joint fit with NOvA. Such combinations make it possible to lift the degeneracies between the measurement of the CP-violating phase $\delta_{CP}$ and the determination of the ordering of the neutrino mass eigenstates. Results and prospects of these joint fits will be discussed.
The NOνA experiment is a long-baseline, off-axis neutrino experiment that aims to study the mixing behavior of neutrinos and antineutrinos using the Fermilab NuMI neutrino beam near Chicago, IL. The experiment collects data at two functionally identical detectors: the Near Detector, close to the neutrino production target at Fermilab, and the 14 kt Far Detector, 810 km away in Ash River, MN. Both detectors are tracking calorimeters filled with liquid scintillator, which can detect and identify muon- and electron-neutrino interactions with high efficiency. The physics goals of NOvA are to observe the oscillation of muon (anti)neutrinos to electron (anti)neutrinos, to understand why matter dominates over antimatter in the universe, and to resolve the ordering of the neutrino masses. To that end, NOvA measures the electron neutrino and antineutrino appearance rates, as well as the muon neutrino and antineutrino disappearance rates. In this talk I will give an overview of NOvA and present the latest results combining both neutrino and antineutrino data.
In the current epoch of neutrino physics, many experiments are aiming for precision measurements of oscillation parameters. Thus, various new physics scenarios which alter the neutrino oscillation probabilities in matter deserve careful investigation. Recent results from NOvA and T2K show a slight tension between their reported values of the CP-violating phase $\delta_{CP}$. Since the baseline of NOvA is much larger than that of T2K, neutral-current non-standard interactions (NSIs) of neutrinos with Earth matter during propagation might play a crucial role in this discrepancy. In this context, we study the effect of a vector leptoquark which induces non-standard neutrino interactions that modify the oscillation probabilities of neutrinos in matter. We show that such interactions provide a relatively large value of the NSI parameter $\varepsilon_{e \mu}$. Considering this NSI parameter, we successfully explain the recent discrepancy between the observed $\delta_{CP}$ results of T2K and NOvA. We also briefly discuss the implications of the $U_3$ leptoquark for the lepton flavour violating muon decay modes $\mu \to e \gamma$ and $\mu \to eee$.
We report on the latest measurement of atmospheric neutrino oscillation parameters using data from the IceCube Neutrino Observatory. The DeepCore array in the central region of IceCube enables the detection and reconstruction of atmospheric neutrinos at energies as low as $\sim5$ GeV. This energy threshold allows the measurement of muon neutrino disappearance over a wide range of baselines available for atmospheric neutrinos. The present analysis is performed using a new data sample of DeepCore, which includes significant improvements in data calibration, detector simulation, data processing, and a detailed treatment of systematic uncertainties. The observed relative fluxes of neutrino flavors as functions of their reconstructed energy and arrival directions allow us to measure the atmospheric mixing parameters, $\sin^2\theta_{23}$ and $\Delta m^2_{32}$. The resulting improvement in the precision measurement of both parameters with respect to our previous result makes this the most precise measurement of oscillation parameters using atmospheric neutrinos.
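The muon-neutrino disappearance underlying this measurement can be illustrated with the standard two-flavor vacuum survival probability, $P(\nu_\mu \to \nu_\mu) = 1 - \sin^2 2\theta_{23}\,\sin^2(1.27\,\Delta m^2_{32}\,L/E)$, sketched below in Python. The parameter values are indicative global-fit-scale numbers chosen for the example, not the analysis results, and matter effects are ignored:

```python
import math

def pmm_survival(L_km, E_GeV, sin2_theta23=0.51, dm2_32_eV2=2.4e-3):
    """Two-flavor muon-neutrino survival probability in vacuum:
    P(numu -> numu) = 1 - sin^2(2*theta23) * sin^2(1.27 * dm2 * L / E),
    with L in km, E in GeV, and dm2 in eV^2."""
    sin2_2theta = 4.0 * sin2_theta23 * (1.0 - sin2_theta23)
    phase = 1.27 * dm2_32_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Up-going atmospheric neutrinos crossing the Earth's diameter
# (~12700 km) at ~25 GeV sit near the first oscillation minimum:
print(round(pmm_survival(12700.0, 25.0), 3))  # → 0.001
```

Scanning this probability over the wide range of baselines (down-going to up-going) and energies available to atmospheric neutrinos is what gives DeepCore its sensitivity to $\sin^2\theta_{23}$ and $\Delta m^2_{32}$.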
The Super-Kamiokande experiment (Super-K) is a water Cherenkov detector in Japan. It collected atmospheric neutrino events in ultrapure water from 1996 to 2020, after which it was upgraded by adding gadolinium sulfate to the water. Tau neutrinos are not expected in the atmospheric neutrino flux below 10 GeV unless they appear from the oscillation of atmospheric muon neutrinos. Super-K is capable of detecting these oscillated tau neutrinos, which would be an unambiguous confirmation of neutrino oscillations. In the last published Super-K results from 2018, with a 328 kt year exposure, the hypothesis of no tau neutrino appearance was rejected with 4.6 sigma significance. The current analysis uses all of the data collected in the pure-water phase, corresponding to 484 kt year. The statistics have been significantly increased by expanding the fiducial volume of the detector from 22.5 kt to 27.2 kt. In total, nearly 50% more events have been added to the analysis.
The Daya Bay reactor neutrino experiment was the first to measure a non-zero value of the neutrino mixing angle $\theta_{13}$, in 2012. Antineutrinos from six 2.9 GW$_{\text{th}}$ reactors were detected in eight identically designed detectors deployed in two near and one far underground experimental halls. The near-far arrangement of antineutrino detectors at km-scale baselines allows a high-precision test of the three-neutrino oscillation framework. Daya Bay's physics data taking ended on December 12, 2020. In this talk, I will present the measurements of $\theta_{13}$ and the mass-squared difference based on the Gd-capture-tagged sample in the complete dataset. Updated results on the H-capture-based oscillation analysis and the search for a light sterile neutrino will also be reported, if ready.
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06 @desy.de
The HERAPDF2.0 ensemble of parton distribution functions (PDFs) was introduced in 2015. The final stage is presented: a next-to-next-to-leading-order (NNLO) analysis of the HERA data on inclusive deep inelastic ep scattering together with jet data as published by the H1 and ZEUS collaborations. A perturbative QCD fit of $\alpha_S(M_Z^2)$ and the PDFs simultaneously was performed, with the result $\alpha_S(M_Z^2) = 0.1156 \pm 0.0011\,\text{(exp)}\,^{+0.0001}_{-0.0002}\,\text{(model+parameterisation)} \pm 0.0029\,\text{(scale)}$. The PDF sets of HERAPDF2.0Jets NNLO were determined in separate fits using two fixed values, $\alpha_S(M_Z^2) = 0.1155$ and $0.118$, the latter having already been chosen for the published HERAPDF2.0 NNLO analysis based on HERA inclusive DIS data only. The different sets of PDFs are presented, evaluated and compared. The consistency of the PDFs determined with and without the jet data demonstrates the consistency of the HERA inclusive and jet-production cross-section data. The inclusion of the jet data reduced the uncertainty on the gluon PDF. Predictions based on the PDFs of HERAPDF2.0Jets NNLO give an excellent description of the jet-production data used as input.
We compute the NNLO massive corrections for diphoton production in quantum chromodynamics (QCD). This process is very important as a test of perturbative QCD and as a background to the decay of the Higgs boson into two photons. We compute the master integrals semi-analytically via power series expansion, classifying Feynman diagrams into different topologies and finding the canonical basis for the non-elliptic integrals. We present a study of the maximal cut for the non-planar topology, exhibiting the elliptic curve defining the integral. We then present the matrix element, computed for the first time, in terms of form factors. Finally, we study the impact of our novel massive corrections on the phenomenology of the process, for different observables.
The production of jets and prompt isolated photons at hadron colliders provides stringent tests of perturbative QCD. The latest measurements by the ATLAS experiment, using proton-proton collision data at $\sqrt{s}$ = 13 TeV, are presented. Prompt inclusive photon production is measured for two distinct photon isolation cones, R=0.2 and 0.4, as well as for their ratio. The measurement is sensitive to the gluon parton distribution. Various measurements using dijet events are also presented. The measurement of new event-shape jet observables, defined in terms of reference geometries with cylindrical and circular symmetries using the energy mover's distance, is discussed. In addition, measurements of variables probing the properties of the multijet energy flow and cross-section ratios of two- and three-jet production are highlighted. The measurements are compared to state-of-the-art NLO and NNLO predictions and used to determine the strong coupling constant.
Singular elements associated with the QCD factorization in the collinear limit are key ingredients for high-precision calculations in particle physics. They govern the collinear behaviour of scattering amplitudes, as well as the perturbative energy evolution of PDFs and FFs. In this talk, we explain the computation of multiple collinear and higher-order QCD splittings with massive partons. Our results might be highly-relevant for the consistent introduction of mass effects in the subtraction formalism and PDF/FF evolution.
One of the main obstacles to the calculation of next-to-next-to-leading order (NNLO) corrections in QCD is the presence of infrared singularities. Together with Raoul Röntsch, Kirill Melnikov and other collaborators, I am developing a more general approach to the nested soft-collinear subtraction method to address this problem for the production of an arbitrary final state at hadron colliders. In this presentation, I will discuss results for the process $P+P \to V + n$ gluons at NNLO, demonstrating the analytic cancellation of poles and presenting finite remainders of integrated subtraction terms, and will outline how the method can be completely generalized.
A precise measurement of the luminosity is a crucial input for many ATLAS physics analyses, and represents the leading uncertainty for W, Z and top cross-section measurements. The first ATLAS luminosity determination in Run 3 of the LHC, for the dataset recorded in 2022 at a center-of-mass energy of 13.6 TeV, follows the procedure developed in Run 2. It is based on van der Meer scans during dedicated running periods each year to set the absolute scale, and an extrapolation to physics running conditions using complementary measurements from the ATLAS tracker and calorimeter subsystems. The presentation discusses the procedure of the ATLAS luminosity measurement, as well as the results obtained for the 2022 pp dataset.
The associated production of vector bosons V (W, Z or gamma) and jets originating from heavy-flavour (c or b) quarks is a dominant background for many SM and Higgs boson measurements and for searches for new physics beyond the SM. The study of events with a vector boson accompanied by heavy-flavour jets is crucial to test theoretical predictions in perturbative QCD up to NNLO, and provides a key tool for Monte Carlo generators. The newest differential cross sections of V + c/b-jet production, measured as a function of several kinematic observables with the CMS detector at 8 and 13 TeV, will be presented, with special attention to the pQCD and EW aspects of their production, PDF constraints, and the modelling of the heavy-flavour content of the proton.
The focus of the session is on top-quark precision measurements and theory calculations.
Conveners:
Gauthier Durieux (CERN)
Abideh Jafari (DESY)
Narei Lorenzo Martinez (LAPP, Annecy)
Contact: eps23-conveners-t07 @desy.de
A large fraction of the top quarks produced at the LHC emerges from electroweak interactions, via so-called t-channel single-top production. Predictions for this process can be used, for instance, to constrain the CKM matrix elements and to probe possible anomalous couplings in the tWb vertex. QCD corrections to t-channel single-top production are known up to NNLO in the factorisable approximation, i.e. neglecting the crosstalk between the different quark lines. In this contribution we report on the recent calculation of the QCD non-factorisable corrections to t-channel single-top production and stress the importance of these corrections in light of the increasing accuracy of theoretical predictions for this process. We present results for the total cross section and for selected observables relevant to proton-proton collisions at the LHC and the FCC.
We compare double-differential normalized production cross-sections for top-antitop + X hadroproduction at NNLO QCD accuracy, as obtained through a customized version of the MATRIX framework interfaced to PineAPPL, with recent data by the ATLAS and the CMS collaborations.
We take into account theory uncertainties due to scale variation and we see how predictions vary as a function of parton distribution function (PDF) choice and top-quark mass value, considering different state-of-the-art PDF fits with their uncertainties.
Notwithstanding the overall reasonably good agreement, we observe discrepancies at the level of a few sigma between data and theoretical predictions in some kinematical regions, which can be alleviated by refitting the top-quark mass, and/or the PDFs, and/or alpha_s(M_Z), taking into account the correlations between these three quantities.
In a standalone fit of the top-quark mass, we notice that, for all considered input PDF fits, some datasets point towards top-quark mass values lower by about two sigma than those emerging from fits to other datasets, suggesting a possible tension between experimental measurements using different decay channels, and/or the need to better estimate the uncertainties on the latter.
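The "about two sigma" statement above is the usual pull between two independent determinations. With purely hypothetical numbers (invented here to illustrate the arithmetic, not taken from the study), it can be sketched as:

```python
import math

def pull(value_a, err_a, value_b, err_b):
    """Significance of the difference between two independent measurements,
    assuming Gaussian, uncorrelated uncertainties."""
    return abs(value_a - value_b) / math.hypot(err_a, err_b)

# Hypothetical top-mass determinations in GeV, chosen only to
# illustrate a roughly two-sigma gap between two datasets:
tension = pull(171.8, 0.7, 173.2, 0.3)
```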
The comparison of theory predictions and experimental measurements is one of the main roads for discovering physics beyond the Standard Model. The tremendous amount of data that has been and will be further collected at the LHC already demands a high level of precision from the theory predictions.
In this talk I will focus on ttZ production, whose phenomenological interest is well-established. The intricate resonance structure and the high multiplicity of the final state make the achievement of theory results for this process extremely challenging. I will present how we took another step forward to predict this process at high accuracy by computing for the first time the complete set of fully off-shell QCD and EW corrections.
Conveners:
Thomas Blake (University of Warwick)
Marzia Bordone (CERN)
Thibaud Humair (MPP)
Contact: eps23-conveners-t08 @desy.de
The tree-level determination of the CKM angle gamma is a standard-candle measurement of CP violation in the Standard Model. The latest LHCb results from time-integrated measurements of CP violation using beauty-to-open-charm decays are presented. These include updates of previous Run 1 measurements using the full LHCb Run 1+2 data sample, and the latest LHCb combination of gamma and charm-mixing measurements.
This talk will summarize the latest results on branching fractions and CP violation in the B->DX family of decays.
In this work, we investigate the time-dependent angular analysis of the $B_s^0 \rightarrow \phi \phi$ decay to search for new physics signals via CP-violating observables. We work with a new physics Hamiltonian containing both left- and right-handed chromomagnetic dipole operators. The hierarchy of the helicity amplitudes in this model suggests a new experimental search scheme, different from the one LHCb has used in its analysis. To illustrate this new scheme, we perform a sensitivity study using two pseudo-datasets generated from LHCb's measured values. We find the sensitivity of the CP-violating observables to be of the order of $5-7\%$ with the current LHCb statistics. Moreover, we show that Belle (II)'s $B^0_d \rightarrow \phi K_s$ and LHCb's $B_s^0 \rightarrow \phi \phi$ measurements could be combined within our model to determine the chirality of the new physics.
The study of CP violation in charmless B decays is of great interest, as penguin and tree-level topologies contribute to the decay amplitudes with comparable strengths. The former topologies may be sensitive to new particles appearing as virtual contributions in the loops. However, the interpretation of the physics quantities in terms of CKM parameters is not trivial due to strong-interaction effects between quarks. Amplitude analyses over the phase space of multibody charmless B decays allow the extraction of the relevant information to refine the model describing the dynamics of the strong interaction. In this presentation the most recent amplitude analyses of multibody charmless B decays at LHCb are presented.
The investigation of $B$-meson decays into charmed and charmless hadronic final states is a keystone of the Belle II program. It offers theoretically reliable and experimentally precise constraints on CKM unitarity, it is sensitive to effects from non-SM physics, and it furthers knowledge about uncharted $b \to c$ hadronic transitions. Recent results on branching ratios and direct CP-violating asymmetries of $B \to K \pi$ decays are presented that lead to world-leading tests of the SM based on the $K \pi$ isospin sum rule. First observations of new $B \to D^{(*)}KK_S$ decays and new results from combined analyses of Belle and Belle II data to determine the CKM angle $\phi_3$ (or $\gamma$) are also presented.
Little is currently known about the dynamics behind the decays of heavy-flavoured particles into final states with baryons. The description of these decays is very challenging from the theoretical point of view, and more experimental results are essential to shed light on their peculiar features, such as the enhancement close to the p-pbar threshold in multibody decays, or the suppression of two-body final states. The most recent LHCb results in the search for charmless baryonic decays of beauty hadrons are discussed in this presentation.
Time-dependent measurements of CP violation are chief goals of the Belle II physics program. Comparison between penguin-dominated $b \to q\bar qs$ and tree-dominated $b \to c \bar cs$ results allows for stringent tests of CKM unitarity that are sensitive to non-SM physics. This talk presents recent Belle II results on $B^0 \to K_S \pi^0$, $B^0 \to K_S K_S K_S$, and $B^0 \to \phi K_S$ decays.
Conveners:
Ilaria Brivio (Universita di Bologna)
Karsten Köneke (Universität Freiburg)
Matthias Schröder (Universität Hamburg)
Contact: eps23-conveners-t09 @desy.de
Simplified template cross-sections provide a detailed description of the properties of Higgs boson production at the LHC. They are most precisely determined in the combination of the measurements performed in the different Higgs boson decay channels. This talk presents these combined measurements, together with their interpretations in the context of specific scenarios of physics beyond the Standard Model, as well as generic extensions within the framework of the Standard Model Effective Field Theory. A combination of measurements of the branching fraction of Higgs boson decays into invisible particles is also presented, and interpreted as constraints on the cross section of WIMP dark matter interactions with nucleons.
In the decade since the discovery of the Higgs boson, its properties have been measured in detail, and appear to be consistent with the expectation of the Higgs boson in the SM. However, anomalous contributions to the Higgs boson couplings are not excluded. In this talk we review the most recent results from the CMS experiment on anomalous Higgs boson couplings. Interpretations of such anomalous interactions within an effective field theory framework are also discussed.
We analyse the sensitivity to beyond-the-Standard-Model effects of hadron collider processes involving the interaction of two electroweak and two Higgs bosons, VVHH, with V being either a W or a Z boson.
We examine current experimental results by the CMS collaboration in the context of a dimension-8 extension of the Standard Model in an effective-field-theory formalism. We show that constraints from vector-boson-fusion Higgs pair production on operators that modify the Standard Model VVHH interactions are already comparable with, or more stringent than, those quoted in the analysis of vector-boson-scattering final states. We study the modifications of such constraints when introducing unitarity bounds, and investigate the potential of new experimental final states, such as ZHH associated production. Finally, we show perspectives for the high-luminosity phase of the LHC.
Precision measurements of diboson production at the LHC is an important probe of the limits of the Standard Model. The gluon-fusion channel of this process offers a connection between the Higgs and top sectors. We study in a systematic way gluon-induced diboson production in the Standard Model Effective Field Theory. We compute the helicity amplitudes of double Higgs, double $Z/W$ and associated $ZH$ production at one loop and with up to one insertion of a dimension-6 operator. We study their high-energy limit and identify which operators in each channel lead to growths with energy for different helicity configurations. We perform a phenomenological study of associated $ZH$ production, including both quark and gluon initial states. Our analysis uses the channels in which the Higgs decays to b quarks and the $Z$ decays leptonically. To maximise our sensitivity to New Physics, we consider both the resolved and boosted Higgs regimes and employ a binning in $p_T$. We show that for some top operators the gluon-induced channel can offer competitive sensitivity to constraints obtained from top quark production processes.
We discuss rare Higgs boson production and decay channel searches with the CMS experiment. A particular focus of this talk is searches for very rare Higgs boson decays to a neutral light meson or quarkonium plus a photon or Z boson, whose standard model branching fractions are predicted to be in the range 10^{-4}-10^{-9}. Such searches can help constrain the Yukawa couplings to light and charm quarks. Other rare Higgs boson production and decay channels, such as H->Zgamma, will also be discussed.
Conveners:
Liron Barak (Tel Aviv University)
Lisa Benato (Universität Hamburg)
Dario Buttazzo (INFN Pisa)
Annapaola de Cosa (ETH)
Contact: eps23-conveners-t10 @desy.de
Many theories beyond the Standard Model predict new phenomena giving rise to multijet final states. These jets could originate from the decay of a heavy resonance into SM quarks or gluons, or from more complicated decay chains involving additional resonances that decay e.g. into leptons. Also of interest are resonant and non-resonant hadronic final states with jets originating from a dark sector, giving rise to a diverse phenomenology depending on the interactions between the dark sector and SM particles. This talk presents the latest 13 TeV ATLAS results.
Many new physics models and Standard Model extensions, such as additional symmetries and forces, compositeness, extra dimensions, extended Higgs sectors, supersymmetry, and dark sectors with dark matter particles, are expected to manifest themselves in final states with hadronic jets. This talk will present recent searches for new phenomena in such final states using the full Run 2 luminosity of 138 fb-1 collected with the CMS detector at the CERN LHC. Prospects for Run 3 will also be presented.
The role of the Parton Distribution Functions (PDFs) is crucial not only in the precise determination of the SM parameters, but also in the interpretation of new physics searches at the LHC. In this talk we show the potential of global PDF analyses to inadvertently 'fit away' signs of new physics, by identifying specific scenarios in which the PDFs may completely absorb such signs, thus biasing theoretical predictions. At the same time, we discuss several strategies to single out and disentangle such effects.
We study the influence of theoretical systematic uncertainties due to the quark density on LHC experimental searches for Z'-bosons. Using an approach originally proposed in the context of the ABMP16 PDF set for the high-x behaviour of the quark density, we present results for the differential cross-section and forward-backward asymmetry observables commonly used to study Z' signals in dilepton channels.
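For context (an illustrative sketch with invented counts, not results from this study), the forward-backward asymmetry in a dilepton bin and its statistical uncertainty are computed as:

```python
import math

def afb(n_forward, n_backward):
    """A_FB = (N_F - N_B) / (N_F + N_B), together with the binomial
    statistical uncertainty sqrt((1 - A^2) / N)."""
    n = n_forward + n_backward
    a = (n_forward - n_backward) / n
    return a, math.sqrt((1.0 - a * a) / n)

# Invented event counts for one dilepton-mass bin:
a, sigma = afb(5600, 4400)
```

The PDF-induced uncertainty on the predicted A_FB competes directly with this statistical precision, which is why the high-x quark density matters for Z' searches.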
The Large Hadron-electron Collider and the Future Circular Collider in electron-hadron mode [1] will make possible the study of DIS in the TeV regime providing electron-proton collisions with per nucleon instantaneous luminosities of $10^{34}$ cm$^{−2}$s$^{−1}$. We review the possibilities for detection of physics beyond the SM in these experiments, focusing on feebly interacting particles like heavy neutrinos or dark photons, on anomalous gauge couplings, and on theories with heavy resonances like leptoquarks, or with contact interactions. We will emphasise the complementarity of searches at the LHeC (FCC-eh), and the respective hadronic colliders, the HL-LHC and the FCC-hh, and $e^+e^-$ Higgs factories.
[1] LHeC Collaboration and FCC-he Study Group: P. Agostini et al., J. Phys. G 48 (2021) 11, 110501, e-Print: 2007.14491 [hep-ex].
Conveners:
Shikma Bressler (Weizmann Institute)
Stefano de Capua (Manchester University)
Friederike Januschek (DESY)
Jochen Klein (CERN)
Contact: eps23-conveners-t12 @desy.de
The International Large Detector (ILD) is a detector designed primarily for the International Linear Collider (ILC), a high-luminosity linear electron-positron collider with an initial center-of-mass energy of 250 GeV, extendable to 1 TeV.
The ILD concept is based on particle flow for overall event reconstruction, which requires outstanding detector capabilities including superb tracking, very precise detection of secondary vertices and high-granularity calorimetry. In the past years ILD has been working with groups building and testing technological prototypes of the key sub-detector technologies, scalable to the full ILD size, studying their integration into a coherent detector, benchmarking the ILD performance and preparing an optimization of the overall ILD size and cost. The current status has been made public in the ILD Interim Design Report (IDR, 2020), of interest for any future e+e- collider detector. A particular strength of the ILD concept is the integration of a well-developed detector design, based on well-understood prototypes, with a mature and available suite of simulation and reconstruction tools, which allow detailed and reliable studies to be performed. As a general-purpose detector optimized for high-precision science at an e+e- collider, ILD can also serve as an excellent basis to compare the science reach and detector challenges of different collider options, and is actively exploring possible synergies with other Higgs/EW factory options. In this talk we will report on the state of the ILD detector concept, present recent results and discuss selected examples of studies of an ILD detector at colliders other than the ILC.
The Circular Electron Positron Collider (CEPC) has been proposed in recent years as a Higgs factory and a high-luminosity Z factory. The conceptual design of the updated detector includes a tracking system whose main tracking device is a Time Projection Chamber (TPC) with high spatial resolution (about 100 μm) in a very large 3D volume. The tracking system must meet demanding precision requirements but, unlike at the International Linear Collider (ILC), power pulsing is not possible, which imposes additional constraints on the detector specifications, especially when the machine operates at the high-luminosity Z pole (Tera-Z). The TPC technology requires a longitudinal time resolution of about 100 ns, and the physics goals require particle identification (PID) with very good separation power, for which cluster counting is being considered. Simulation and PID-resolution studies show the potential of the TPC technology to extend to Tera-Z running at a future e+e- collider.
In this talk, the feasibility and status of a high-precision TPC as the main tracking detector for an e+e- collider will be presented, together with simulation results for the pad and pixelated TPC readout options. Compared with the pad readout, the pixelated readout achieves better spatial resolution for single electrons, very high detection efficiency, excellent tracking and good dE/dx performance. A small prototype TPC with a drift length of 500 mm has been developed; its gaseous chamber, 20 kV field cage, low-power front-end electronics (FEE) and DAQ have been commissioned, and first studies have been completed. Results on the spatial resolution, gas gain, track reconstruction and PID will be reported.
A large, worldwide community of physicists is working to realize an exceptional physics program of energy-frontier electron-positron collisions with the International Linear Collider (ILC) and other collider projects (summarized and evaluated in https://arXiv.org/abs/2208.06030). The International Large Detector (ILD) is one of the proposed detector concepts for the next e+e- collider. The ILD tracking system consists of a Si vertex detector, forward tracking disks, a large-volume Time Projection Chamber (TPC) and silicon tracking detectors inside and outside the TPC, all embedded in a 3.5 T solenoidal field. An extensive research and development program for the TPC has been carried out within the framework of the LCTPC collaboration. A Large Prototype TPC in a 1 T magnetic field, which can accommodate up to seven identical Micropattern Gaseous Detector (MPGD) readout modules of the TPC design under study, has been built as a demonstrator at the 5 GeV electron test beam at DESY. Three MPGD concepts are being developed for the TPC: Gas Electron Multiplier (GEM), Micromegas and Pixel, also known as GridPix (a Micromegas integrated on a Timepix chip). Successful test-beam campaigns with the different technologies have been carried out during the last decade. Fundamental parameters such as the transverse and longitudinal spatial resolution and the drift velocity have been measured. In parallel, a new gating device based on large-aperture GEMs has been researched and successfully developed. Recent R&D has also led to a design of a Micromegas module with a monolithic cooling plate produced by 3D printing and two-phase CO2 cooling. In this talk, we will review the track-reconstruction performance results and summarize the next steps towards the TPC construction for the ILD detector. The TPC is designed to have about 10^6 pads (10^9 pixels) per endcap for continuous tracking, a momentum resolution of δ(1/pT) ≈ 1×10^-4 /GeV/c with pad readout (≈0.8×10^-4 /GeV/c at 60% coverage with pixel readout, TPC only), and a dE/dx resolution of ≈5% (≈4%). The momentum resolution including all tracking subdetectors is ≈2×10^-5 /GeV/c.
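The quoted curvature resolution can be related to the single-point resolution via the Gluckstern formula; the inputs below are illustrative TPC-like assumptions made here, not official ILD parameters.

```python
import math

def gluckstern_sigma_inv_pt(sigma_point_m, b_tesla, lever_arm_m, n_points):
    """Gluckstern estimate of the curvature (1/pT) resolution in (GeV/c)^-1:
    sigma(1/pT) = sigma_x * sqrt(720 / (N + 4)) / (0.3 * B * L^2),
    with sigma_x and L in metres and B in tesla."""
    return sigma_point_m * math.sqrt(720.0 / (n_points + 4)) / (
        0.3 * b_tesla * lever_arm_m ** 2)

# Illustrative inputs: 100 um point resolution, 3.5 T field,
# 1.6 m lever arm, 220 measured points along the track.
res = gluckstern_sigma_inv_pt(1e-4, 3.5, 1.6, 220)
```

With these assumed numbers the estimate lands in the same ballpark as the δ(1/pT) ≈ 10^-4 /GeV/c quoted for the TPC alone.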
During the upcoming years of the High Luminosity Large Hadron Collider (HL-LHC) program, the CMS Muon spectrometer will face challenging conditions. The existing detectors, which consist of Drift Tubes (DT), Resistive Plate Chambers (RPC), and Cathode Strip Chambers (CSC), as well as recently installed Gas Electron Multiplier (GEM) stations, will need to sustain an instantaneous luminosity of up to 5-7 × 10^34 cm−2 s−1, resulting in increased pile-up, and about 10 times the originally expected LHC integrated luminosity. To cope with the high-rate environment and maintain good performance, additional GEM stations and improved RPC (iRPC) detectors will be installed in the innermost region of the forward muon spectrometer. To test the effects of these challenging conditions, a series of accelerated irradiation studies has been performed for all the muon systems, mainly at the CERN Gamma Irradiation Facility (GIF++), and also with specific X-ray sources. Furthermore, since RPCs and CSCs use gases with a global warming potential (GWP), ongoing efforts are being made to find new eco-friendly gas mixtures, as part of the CERN-wide program to phase out fluorinated greenhouse gases. This report presents the status of the CMS Muon system longevity studies, along with actions taken to reduce detector aging and minimize greenhouse gas consumption.
A key focus of the physics program at the LHC is the study of head-on proton-proton collisions. However, an important class of physics can be studied for cases where the protons narrowly miss one another and remain intact. In such cases, the electromagnetic fields surrounding the protons can interact producing high-energy photon-photon collisions. Alternatively, interactions mediated by the strong force can also result in intact forward scattered protons, providing probes of quantum chromodynamics (QCD). In order to aid identification and provide unique information about these rare interactions, instrumentation to detect and measure protons scattered through very small angles is installed in the beam pipe far downstream of the interaction point. We describe the ATLAS Forward Proton ‘Roman Pot’ Detectors (AFP and ALFA), including their performance and status. The physics interest, as well as the newest results on diffractive interactions, are also discussed.
The effective design of instruments that rely on the interaction of radiation with matter for their operation is a complex task. Furthermore, the underlying physics processes are intrinsically stochastic in nature and open a vast space of possible choices for the physical characteristics of the instrument. While even large-scale detectors, such as those at the LHC, are built using surrogates for the ultimate physics objective, the MODE Collaboration (an acronym for Machine-learning Optimized Design of Experiments) aims at developing tools, also based on deep learning techniques, to achieve end-to-end optimization of instrument design via a fully differentiable pipeline capable of exploring the Pareto-optimal frontier of the utility function for future particle collider experiments and related detectors. The construction of such a differentiable model requires the inclusion of information-extraction procedures, including data collection, detector response, pattern recognition, and other existing constraints such as cost. This talk will give an introduction to the goals of the newly founded MODE collaboration and highlight some of the already existing ingredients.
Novel technologies emerging from the second quantum revolution enable us to identify, control and manipulate individual quanta with unprecedented precision. One important area is the rapidly evolving new paradigm of quantum computing, which has the potential to revolutionize computing by operating on completely different principles. Expectations are high, as quantum computers have already solved complex problems that cannot be solved with classical computers.
A very important new branch is quantum machine learning (QML), which lies at the intersection of quantum computing and machine learning. QML combines classical machine learning with quantum algorithms and architectures. Many studies address hybrid quantum-classical approaches, but fully quantum approaches are also investigated. The ultimate goal is to find the so-called quantum advantage, where quantum models outperform classical algorithms in terms of runtime, or even solve problems that are intractable for classical computers.
However, in the current NISQ era (Noisy Intermediate-Scale Quantum computing), where noise challenges the accuracy of computations and the small number of qubits limits the size of the problems that can be solved, it is difficult to achieve quantum advantage. Nevertheless, machine learning can be robust to noise and helps to deal with the limited resources of present-day quantum computers.
In this talk, quantum machine learning will be introduced and explained with examples. Challenges and possible transfer to practical applications will be discussed.
This work introduces a comprehensive framework and discussion on the measurement of scientific understanding in agents, encompassing both humans and machine learning models. The focus is on artificial understanding, particularly investigating the extent to which machines, such as Large Language Models (LLMs), can exhibit scientific understanding. The presentation centers around fundamental physics, specifically particle physics, providing illustrative examples within this domain.
The study builds upon a philosophy of science perspective on scientific understanding, which is expanded to encompass a framework for assessing understanding in agents more broadly. The framework emphasizes three fundamental aspects of understanding: knowledge acquisition, explanatory capacity, and the ability to draw counterfactual inferences. Furthermore, the capabilities of LLMs to comprehend the intricacies of particle physics are examined and discussed.
Through this interdisciplinary exploration, the talk sheds light on the nature of scientific understanding in agents, bridging the gap between philosophical accounts and the potential of advanced machine learning models. The insights gained contribute to the ongoing dialogue on the boundaries of artificial understanding and its relevance in scientific research, particularly in the context of particle physics.
The work is based on https://arxiv.org/abs/2304.10327 and subsequent work.
While simulation plays a crucial role in high energy physics, it also consumes a significant fraction of the available computational resources, with these computing pressures being set to increase drastically for the upcoming high luminosity phase of the LHC and for future colliders. At the same time, the significantly higher granularity present in future detectors increases the physical accuracy required of a surrogate simulator. Machine learning methods based on deep generative models hold promise to provide a computationally efficient solution, while retaining a high degree of physical fidelity.
Significant strides have already been taken towards developing these models for the generation of particle showers in highly granular calorimeters, the subdetector which constitutes the most computationally intensive part of a detector simulation. However, to apply these models to a general detector simulation, methods must be developed to cope with particles incident at various points and under varying angles in the detector. This contribution will address steps taken to tackle the challenges faced when applying these simulators in more general scenarios, as well as the effects on physics observables after interfacing with reconstruction algorithms. In particular, results achieved with bounded information bottleneck and normalising flow architectures based on regular grid geometries will be discussed. Combined with progress on integrating these surrogate simulators into existing full simulation chains, these developments bring an application to benchmark physics analyses closer.
Deep learning methods are becoming key in the data analysis of particle physics experiments. One clear example is the improvement of neutrino detection using neural networks. Current neutrino experiments are leveraging these techniques, which have been shown to outperform standard tools in several domains, such as identifying neutrino interactions or reconstructing the kinematics of single particles. In this talk, I will show various deep-learning algorithms used in the context of voxelised neutrino detectors. I will present how to design and use advanced deep-learning techniques for tasks such as fitting particle trajectories and characterising the particles involved in the vertex activity. All these methods report promising results and are crucial for improving the reconstruction of the interacting-particle kinematics and enhancing the sensitivity of future physics measurements.
FASER, the ForwArd Search ExpeRiment, is an LHC experiment located 480 m downstream of the ATLAS interaction point, along the beam collision axis. FASER was designed, constructed, installed, and commissioned during 2019-2022 and has been taking physics data since the start of LHC Run 3 in July 2022. This talk will present the status of the experiment, including detector design, detector performance, and first physics results from Run 3 data. Special focus will be placed on signatures of new physics, i.e. searches for new light and very weakly-interacting particles such as dark photons.
Many extensions of the Standard Model with Dark Matter candidates predict new long-lived particles (LLPs). The LHC provides an unprecedented possibility to search for such LLPs produced at the electroweak scale and above. The ANUBIS concept foresees instrumenting the ceiling and service shafts above the ATLAS experiment with tracking stations in order to search for LLPs with decay lengths of O(10 m) and above. After a brief review of the ANUBIS sensitivity, this contribution will discuss the first data taking of the complete prototype detector module, proANUBIS, in the ATLAS experimental cavern in 2023.
We will present the operational status of the milliQan Run 3 detector, which was installed during the 2022-23 YETS and is presently being commissioned. We will also present initial results from data obtained with Run 3 LHC collisions.
The NA62 experiment at CERN took data in 2016–2018 with the main goal of measuring the $K^+ \rightarrow \pi^+ \nu \bar\nu$ decay. We report on a search for visible decays of exotic mediators in data taken in "beam-dump" mode with the NA62 experiment. NA62 can be run as a "beam-dump experiment" by removing the kaon production target and moving the upstream collimators into a "closed" position. More than $10^{17}$ protons on target have been collected in this way during a week-long data-taking campaign. We report new results from the analysis of these data, with a particular emphasis on dark photon and axion-like particle models.
The parameter space for Weakly Interacting Massive Particles as a possible explanation for Dark Matter is steadily shrinking. This has triggered new attempts to produce dark matter at accelerators. This alternative approach represents an innovative and open-minded way to broaden this research field over a wider range of energies with high-sensitivity detectors [1].
The Positron Annihilation into Dark Matter Experiment (PADME), ongoing at the Laboratori Nazionali di Frascati of INFN, fits into this panorama. PADME was conceived to search for a Dark Photon signal [2] by studying the missing-mass spectrum of single-photon final states resulting from positron annihilations with the electrons of a fixed target. In fact, the PADME approach allows a search for any new particle produced in e$^+$ e$^-$ collisions through a virtual off-shell photon, such as long-lived Axion-Like Particles (ALPs), proto-phobic X bosons, and Dark Higgs bosons.
After detector commissioning and beam-line optimization, the PADME collaboration collected about 5×10$^{12}$ positrons on target at 430 MeV in 2020. A fraction of these data has been used to measure the cross-section of the process e$^+$ e$^-$→γγ(γ) at √s = 20 MeV with a precision of 5% [3]. This is the first measurement at this energy in which both final-state photons are detected, and it allows stringent limits to be set on processes beyond the Standard Model.
PADME also has the unique opportunity to confirm or disprove the particle nature of the X17 anomaly observed in the ATOMKI nuclear physics experiments studying the de-excitation of some light nuclei [4]. The PADME 2022 data taking was conducted with this aim. About 10$^{10}$ positrons were stopped on the target for each of the 47 beam-energy values in the range 262–298 MeV. This precise energy scan is intended to study the reaction e$^+$ e$^-$→X17→e$^+$ e$^-$.
The talk will give an overview of the scientific program of the experiment and of the ongoing data analyses.
References
[1] P. Agrawal et al., Eur. Phys. J. C 81 (2021) 11, 1015.
[2] P. Albicocco et al., JINST 17 (2022) 08, P08032.
[3] F. Bossi et al., Phys. Rev. D 107 (2023) 1, 012008.
[4] L. Darmé et al., Phys. Rev. D 106 (2022) 11, 115036.
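For orientation, the missing-mass observable underlying the PADME search can be sketched as follows (a standard kinematic sketch: $p_{e^+}$ is the beam positron four-momentum, $p_{e^-}$ that of an at-rest target electron, and $p_\gamma$ that of the single detected photon):

```latex
% Missing mass for e+ e- -> gamma + X with a single detected photon
M_{\rm miss}^2 = \left( p_{e^+} + p_{e^-} - p_\gamma \right)^2
```

A narrow peak in the $M_{\rm miss}^2$ spectrum at the squared mass of an invisible particle would signal its production in association with the photon.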
We present the most recent $BABAR$ searches for reactions that could simultaneously explain the presence of dark matter and the matter-antimatter asymmetry in the universe. This scenario predicts $B$-meson decays into an ordinary-matter baryon and a dark-sector anti-baryon $\psi_D$ with branching fractions accessible at the $B$ factories. The results are based on the full data set of about 430 $\text{fb}^{-1}$ collected at the $\Upsilon(4S)$ resonance by the $BABAR$ detector at the PEP-II collider.
We search, in particular, for decays such as $B^{0}\to\psi_{D} {\cal B}$, where ${\cal B}$ is a baryon (proton, $\Lambda$, or $\Lambda_c$). The hadronic recoil method is applied: one of the $B$ mesons from the $\Upsilon(4S)$ decay is fully reconstructed, while only one baryon is present on the signal $B$-meson side. The missing mass of the signal $B$ meson is taken as the mass of the dark particle $\psi_{D}$. Stringent upper limits on the decay branching fractions are derived for $\psi_D$ masses between 1.0 and 4.2 GeV/$c^2$.
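Schematically, the recoil technique amounts to the following (a standard sketch; $p_{e^+e^-}$ denotes the known initial-state four-momentum of the collider):

```latex
% Signal-side missing mass with a fully reconstructed tag B
M_{\rm miss}^2 = \left( p_{e^+e^-} - p_{B_{\rm tag}} - p_{\mathcal B} \right)^2 \;\simeq\; m_{\psi_D}^2
```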
In a class of theories, dark matter is explained by postulating the existence of a 'dark sector', which interacts gravitationally with ordinary matter. If this dark sector contains a U(1) symmetry and a corresponding 'dark' photon ($A_{D}$), it is natural to expect that this particle will kinetically mix with the ordinary photon, and hence become a 'portal' through which the dark sector can be studied. The strength of the mixing is given by a mixing parameter $\epsilon$. This same parameter governs both the production of the $A_{D}$ and its decay back to SM particles, and for values of $\epsilon$ not already excluded, the signal would be a quite small and quite narrow resonance: if $\epsilon$ is large enough to yield a detectable signal, its decay width will be smaller than the detector resolution, but still large enough that the decay back to SM particles is prompt. For masses of the dark photon above the reach of Belle II, future high-energy e+e- colliders are ideal for searches for such a signal, due to the low and well-known backgrounds, and the excellent momentum resolution and equally excellent track-finding efficiency of the detectors at such colliders.
This contribution will discuss a study investigating the dependence of the limit on the mixing parameter on the mass of the $A_{D}$, using the $A_{D}\rightarrow\mu^{+}\mu^{-}$ decay mode in the presence of Standard Model background, with fully simulated signal and background events in the ILD detector at the ILC Higgs factory. In addition, a more general discussion of the capabilities expected for generic detectors at e+e- colliders operating at other energies will be given.
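The kinetic-mixing 'portal' invoked above is conventionally parametrised by a Lagrangian term of the form (standard notation; normalisation conventions vary between references):

```latex
% Kinetic mixing between the SM photon and the dark photon A_D
\mathcal{L} \;\supset\; -\frac{\epsilon}{2}\, F_{\mu\nu} F_D^{\mu\nu},
\qquad
\Gamma\!\left(A_D \to \ell^+\ell^-\right) \;\propto\; \epsilon^2\, m_{A_D}
```

For small allowed $\epsilon$ the $A_D$ is thus both rarely produced and narrow, consistent with the signal picture described in the abstract.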
Conveners:
Summer Blot (DESY)
Pau Novella (IFIC)
Davide Sgalaberna (ETH)
Jessica Turner (Durham University)
Contact: eps23-conveners-t04 @desy.de
The near detector of T2K (ND280) is undergoing a major upgrade. A new scintillator tracker, named superFGD, with fine granularity and 3D-reconstruction capabilities has been assembled at J-PARC. The new Time Projection Chambers are under construction, based on the innovative resistive Micromegas technology and a field cage made of extremely thin composite walls. New scintillator panels with precise timing capability have been built to allow precise Time of Flight measurements.
The detector is currently in its assembly phase, following a detailed characterization effort during detector production. The results of multiple tests of the detectors with charged-particle beams, a neutron beam, cosmic rays and X-rays will be presented. Among these results are the first measurement of a neutron cross-section with the superFGD and the first detailed characterization of the charge spreading in resistive Micromegas detectors.
Thanks to these innovative technologies, the upgrade of ND280 will open a new way to look at neutrino interactions, with a significant improvement in phase-space acceptance and resolution and an enhanced purity in the exclusive channels involving low-momentum protons, pions and neutrons. Sensitivity results and prospects for the physics capabilities will also be shown.
The Deep Underground Neutrino Experiment (DUNE) is a next-generation long-baseline neutrino experiment for oscillation physics and proton decay studies. The primary physics goals of the DUNE experiment are to perform neutrino oscillation physics studies, search for proton decay, detect supernova burst neutrinos, make solar neutrino measurements, and carry out BSM searches. The liquid-argon prototype detectors at CERN (ProtoDUNE), which have operated for over two years, are a test-bed for DUNE's far detectors, informing the construction and operation of the first two and possibly subsequent 17-kt DUNE far detector LArTPC modules. Here we introduce the DUNE and ProtoDUNE experiments and their physics goals, and discuss recent progress and results.
The ESSnuSB project aims to measure leptonic CP violation at the second neutrino oscillation maximum using an intense neutrino beam, which will be produced by the powerful ESS proton linear accelerator. The first phase of the project was successfully concluded with the production of the Conceptual Design Report, in which it was shown that this next-to-next generation neutrino oscillation experiment has the potential to start the precision era in the field of leptonic CP violation measurement.
ESSnuSB+ is a continuation of this study, which focuses on neutrino interaction cross-section measurements in the low neutrino energy region, on exploring the sensitivity of the experimental set-up to additional physics scenarios, and on the civil engineering of the near and far detector sites. It foresees an intermediate step in the ESSnuSB construction phase in which a number of additional facilities will be built: a 1/4-power prototype of the ESSnuSB neutrino production target system, a low-energy muon storage ring and a low-energy monitored neutrino beam facility; a common near neutrino detector for the muon ring and the monitored beam will be designed, and a study of the effect of Gd doping of the ESSnuSB water Cherenkov detectors will be performed.
This talk will give an overview of the ESSnuSB and the ESSnuSB+ projects and their intended place in the landscape of leptonic CP violation measurements.
The Jiangmen Underground Neutrino Observatory (JUNO) is a 20-kiloton multi-purpose liquid scintillator detector under construction in a 700-meter underground laboratory in China. With its excellent energy resolution, sizeable fiducial volume, and remarkable background control, JUNO presents unique prospects to explore many important topics in neutrino and astroparticle physics.
By measuring the oscillation of reactor electron antineutrinos, JUNO can determine the neutrino mass ordering (NMO) and measure several oscillation parameters with sub-percent precision. Additionally, atmospheric neutrino measurements provide independent data for oscillation physics, consequently enhancing JUNO’s NMO sensitivity.
Besides oscillation measurements, JUNO has substantial potential for addressing a wide range of non-oscillation physics, such as detecting solar neutrinos, geo-neutrinos, supernova neutrinos, and diffuse supernova neutrino background, as well as searching for proton decay and other new physics beyond the Standard Model.
This talk presents JUNO’s physics potential in various domains.
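For context, JUNO's reactor-based NMO determination rests on the standard three-flavour electron-antineutrino survival probability (textbook expression, with $\Delta_{ij} \equiv \Delta m^2_{ij} L / 4E$):

```latex
P(\bar\nu_e \to \bar\nu_e) = 1
 - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21}
 - \sin^2 2\theta_{13}\left(\cos^2\theta_{12}\,\sin^2\Delta_{31}
 + \sin^2\theta_{12}\,\sin^2\Delta_{32}\right)
```

The mass ordering enters through the interplay of the fast $\Delta_{31}$ and $\Delta_{32}$ oscillations, which is why the excellent energy resolution quoted below is essential.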
The Jiangmen Underground Neutrino Observatory (JUNO) is a multi-purpose experiment, which is under construction in South China. Thanks to the 20 ktons of ultra-pure liquid scintillator (LS), JUNO will be able to perform innovative and groundbreaking measurements like the determination of neutrino mass ordering (NMO). Beyond NMO, JUNO will measure the three neutrino oscillation parameters with a sub-percent precision. Furthermore, the JUNO experiment is expected to have important physics reach with solar neutrinos, supernova neutrinos, geoneutrinos and atmospheric neutrinos.
The experiment is being constructed in a 700 m underground laboratory, located about 53 km from both the Taishan and Yangjiang nuclear power plants. The JUNO central detector (CD) will be equipped with 17612 20-inch photomultiplier tubes (PMTs) and 25600 3-inch PMTs. The central detector will be surrounded by a water tank that will provide passive shielding from radioactive decays and serve as a water Cherenkov detector to tag cosmic muons. Additionally, a plastic scintillator detector located above the central detector will veto cosmic muons from the top.
The JUNO CD is expected to have an energy resolution better than 3% at 1 MeV and an absolute energy-scale uncertainty better than 1% over the whole reactor antineutrino energy range.
The detector construction is expected to be completed in 2023. In this talk, I will present the detector design and the installation status of the various JUNO subsystems.
LiquidO is a new neutrino detection technology which uses opaque liquid scintillator with a very short scattering length and an intermediate absorption length. Reducing the scattering length down to the scale of millimetres causes the light to be confined to within a few cm of its creation point. To extract the light, a lattice of wavelength-shifting fibres runs through the scintillator. This technology provides high-resolution imaging that enables highly efficient identification of individual particles event-by-event down to the MeV scale, and therefore offers a wide range of applications in particle physics. Additionally, the exploitation of an opaque medium gives LiquidO a natural affinity for using dopants at unprecedented levels. The principles of the technique have been demonstrated with two small prototypes. The next step will be the construction of a 5-ton demonstrator and its operation at the Chooz nuclear power plant within the scope of an Innovation programme (EIC-Pathfinder project AntiMatter-OTech) for monitoring nuclear reactor activity. The CLOUD collaboration plans to exploit the fundamental science programme associated with this project. The CLOUD collaboration includes 13 institutions across 10 countries.
Supernova (SN) explosions provide a perfect environment to produce, and therefore test, hypothetical particles. SN1987A made it possible to set a number of constraints on FIP parameters using the energy-loss argument, and further development of neutrino detectors extends those possibilities. I will discuss how SN-produced FIPs may create detectable signatures that can significantly extend the testable regions of FIP parameter space compared to those provided by energy-loss arguments. In particular, for HNLs in the mass range between $\sim 150$ MeV and $500$ MeV, this can potentially close the gap in the testable parameter space between the expected SHiP sensitivity and BBN constraints.
Conveners:
Laura Fabbietti (TU München)
Gunther Roland (MIT)
Jasmine Brewer (CERN)
Jeremi Niedziela (DESY)
Contact: eps23-conveners-t05 @desy.de
This talk presents the latest ATLAS measurements of collective phenomena in various collision systems, including pp collisions at 13 TeV, Xe+Xe collisions at 5.44 TeV, and Pb+Pb collisions at 5.02 TeV. These include measurements of $v_n$–$[p_\mathrm{T}]$ correlations in pp, Xe+Xe, and Pb+Pb, which carry important information about the initial-state geometry of the Quark-Gluon Plasma, provide insight into which effects are observed without invoking hydrodynamic modeling, and can potentially shed light on a quadrupole deformation of the Xe nucleus. This talk will also present measurements of flow decorrelations differential in rapidity, probing the longitudinal structure of the colliding system, and a study of the sensitivity of collective behavior in pp collisions to the presence of jets, which seeks to distinguish the role that semi-hard processes play in the origin of these phenomena.
Studies have yielded strong evidence that a deconfined state of quarks and gluons, the quark--gluon plasma, is created in heavy-ion collisions. This hot and dense matter exhibits almost zero friction and a strong collective behavior. An unexpected collective behavior has also been observed in small collision systems. In this talk, the origin of collectivity in small collision systems is addressed by confronting PYTHIA8 and EPOS4 models using measurements of azimuthal correlations for inclusive and identified particles. In particular, anisotropic flow coefficients measured using two- and four-particle correlations with various pseudorapidity gaps, per-trigger yields, and balance functions are reported in pp collisions at $\sqrt{s}=13.6$ TeV and p--Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV. The results are compared with the available experimental data.
Event classifiers based either on the charged-particle multiplicity or on event topologies, such as spherocity and the Underlying Event, have been used extensively in proton-proton (pp) collisions by the ALICE Collaboration at the LHC. These event classifiers have become very useful tools since the observation of fluid-like behavior, for example radial and anisotropic flow, in high-multiplicity pp collisions. Furthermore, studies as a function of the charged-particle multiplicity in the forward V0 ALICE detector allowed for the discovery of strangeness enhancement in high-multiplicity pp collisions. However, one drawback of the multiplicity-based event classifiers is that requiring a high charged-particle multiplicity biases the sample towards hard processes like multijet final states. These biases blur the effects of multi-parton interactions (MPI) and make it difficult to pin down the origins of fluid-like effects.
This contribution explores the use of a new event classifier, the charged-particle flattenicity, defined in ALICE using the charged-particle multiplicity estimated in the 2.8 < $\eta$ < 5.1 and −3.7 < $\eta$ < −1.7 intervals. New final results on the production of pions, kaons, protons, and unidentified charged particles at midrapidity (|$\eta$| < 0.8) as a function of flattenicity in pp collisions at $\sqrt{s}$ = 13 TeV will be discussed. It will be shown how flattenicity can be used to select events more sensitive to MPI and less sensitive to hard final-state processes. All the results are compared with predictions from QCD-inspired Monte Carlo event generators such as PYTHIA and EPOS. Finally, a preliminary study of the flattenicity estimator using Run 3 data will be shown.
Recent measurements of high-multiplicity pp collisions at LHC energies have revealed that these systems exhibit features similar to the quark-gluon plasma, such as the presence of radial and elliptic flow and strangeness enhancement, traditionally believed to be achievable only in heavy nucleus-nucleus collisions at high energy. To pinpoint the origin of these phenomena and to bring all collision systems onto an equal footing, several event-shape observables, such as the transverse activity classifier and transverse spherocity, have lately been used extensively alongside the charged-particle multiplicity, in experiments as well as on the phenomenological front.
In this contribution, we will summarise our phenomenological explorations [1-6] and compare with experimental results from the LHC to consolidate what has been learned so far from these studies. We observe that the event-shape observables successfully differentiate events based on soft and hard physics; however, obtaining these observables presents experimental challenges due to detector biases. In such a scenario, we propose to use machine learning methods for the determination of such observables in a dense environment like heavy-ion collisions. We will also provide a future outlook in view of Run 3 at the LHC.
The contribution would be based on our recent publications:
1. Phys. Rev. D107 (2023) 7, 074011
2. Phys. Rev. D107 (2023) 7, 076012
3. Phys. Rev. D103 (2021) 9, 094031
4. Sci. Rep. 12 (2022) 1, 3917
5. Eur. Phys. J. C82 (2022) 6, 524
6. J. Phys. G48 (2021) 4, 045104
Hard probes such as heavy quarks (charm and beauty) and jets are valuable tools for investigating the properties of the quark-gluon plasma (QGP) formed in ultra-relativistic heavy-ion collisions. In particular, measurements of the nuclear modification factor $R_{\rm AA}$ of these probes allow us to characterise the in-medium energy loss of heavy quarks, light quarks and gluons while traversing the QGP, and to shed light on the jet-quenching phenomenology. Information on heavy-quark diffusion and the degree of participation in the medium's collective motion can be obtained by measuring the elliptic-flow coefficient $v_2$ of heavy-flavour particles. Similarly, measurements of the correlation of jet yields with the event-plane orientation allow us to study the path-length dependence of jet energy loss due to quenching. Complementary insights into heavy-quark fragmentation and energy redistribution in the QGP can be obtained by measuring angular correlations involving heavy-flavour particles.
In this contribution, the newly published results on the non-prompt $v_2$ coefficient of ${\rm D}^0$ mesons in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV will be shown and compared to measurements of prompt D-meson $v_2$ in the same system. The recent final results of the heavy-flavour decay electron $R_{\rm AA}$ in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV will also be reported, together with measurements of prompt and non-prompt D mesons and $\Lambda_{\rm c}^+$ baryons. New results of angular correlations of heavy-flavour decay electrons with charged particles in the same collision system will also be discussed.
Measurements of the inclusive charged-particle jet yield in central Pb--Pb collisions, with the large uncorrelated background mitigated using a novel event mixing technique, will also be reported. In addition to explorations of the low-$p_{\rm T}$ frontier, the inclusive charged-particle jet $v_2$ in semi-central Pb--Pb collisions will be shown, quantifying the yield dependence relative to the event-plane orientation and probing the path-length dependence of jet energy loss. More differential measurements of this azimuthal dependence, obtained by using event-shape engineering to select specific event topologies, and the jet substructure observable $R_{\rm g}$ to select specific jet topologies, will be discussed. Such measurements improve our understanding of how jet suppression depends on both medium and jet properties.
Jets are excellent probes for studying the deconfined matter formed in heavy ion collisions. This talk presents new observables to study how jets interact with the QGP. First, we introduce a new infrared- and collinear-safe measurement of the jet energy flow within jets reconstructed with different resolution parameters $R$. Changing the jet $R$ varies the relative contribution of competing energy-loss effects. Second, the measurement of jets recoiling from a trigger hadron (hadron+jet) provides unique probes of medium-induced modification of jet production. Jet deflection via multiple soft scatterings with the medium constituents may broaden the azimuthal correlation between the trigger hadron and the recoiling jet. In addition, the tail of this azimuthal correlation may be sensitive to single hard Molière scatterings off quasi-particles in the medium. The $R$-dependence of the recoil jet yield probes jet energy loss and intra-jet broadening. Finally, in inclusive jet populations, the principal axis of energy flow in the plane transverse to the jet axis examines the correlation of particles outside the jet cone with the energy of the jet. All three results may be sensitive to wake effects due to jet-medium energy transfer at low $p_\mathrm{T}$.
This talk presents measurements of the semi-inclusive distribution of charged-particle jets recoiling from a trigger hadron in pp and Pb--Pb collisions. We observe that the jet yield at low $p_\mathrm{T}$ and at large azimuthal angle between the trigger hadron and jet is significantly enhanced in Pb--Pb collisions with respect to pp collisions, which we interpret through comparisons to model calculations. In addition, the first measurement of energy flow between jets of different radii and of correlations of tracks with the principal direction of energy flow in the plane transverse to the jet will be presented.
Conveners:
Marco Pappagallo (INFN and University of Bari)
Daniel Savoiu (Universität Hamburg)
Mikko Voutilainen (Helsinki University)
Marius Wiesemann (MPP)
Contact: eps23-conveners-t06 @desy.de
I will review the MSHT20 parton distribution functions, focusing on our recent paper within the MSHT collaboration on the inclusion of theoretical uncertainties and higher-order (N3LO) terms into the MSHT PDFs, producing the MSHT20aN3LO (approximate N3LO) set. This represents the first global analysis of parton distribution functions (PDFs) at approximate N3LO, as well as the first simultaneous inclusion into the MSHT PDFs of theoretical uncertainties from missing higher-order terms beyond NNLO. I will review the formalism, the higher orders and theoretical uncertainties included, and their effects on both the fit quality and the PDFs, before examining indicative N3LO predictions.
The radiation pattern within high energy quark and gluon jets (jet substructure) is used extensively as a precision probe of the strong force, as well as an environment for optimizing event generators, for nearly all tasks in high energy particle and nuclear physics. While there have been major advances in studying jet substructure at hadron colliders, the precision achievable in collisions involving electrons is superior, as most of the complications of hadron colliders are absent. Therefore, jets produced in deep inelastic scattering events and recorded by the H1 detector are analyzed. This measurement is unbinned and multi-dimensional, making use of machine learning to correct for detector effects. Results are presented after unfolding the data to particle level for events in the fiducial volume of momentum transfer $Q^2>150$ GeV$^2$, inelasticity $0.2< y < 0.7$, jet transverse momentum $p_{T,jet}>10$ GeV, and jet pseudorapidity $-1<\eta_{jet}<2.5$. The jet substructure is analyzed in the form of generalized angularities, and is presented in bins of $Q^2$ and $y$. All of the available object information in the events is used to achieve the best precision through the use of graph neural networks. Training these networks was enabled by the new Perlmutter supercomputer at Berkeley Lab, which has a large number of graphics processing units (GPUs). The data are compared with a broad variety of predictions to illustrate the versatility of the results for downstream analyses.
arXiv:2303.13620, submitted to PLB
We investigate the impact on the sea-quark PDFs of the recently released FNAL-E906 (SeaQuest) data on the ratio of proton-deuteron to proton-proton DY production cross-sections. We find that they have constraining power on the light-quark sea isospin asymmetry $(\bar{d}-\bar{u})(x)$ and on the ratio $(\bar{d}/\bar{u})(x)$ at large longitudinal momentum fractions $x$, that they are particularly relevant in the interval 0.25 < x < 0.45, and that their constraints turn out to be compatible with those from DY data in collider experiments (Tevatron and Large Hadron Collider) and in older fixed-target experiments by the FNAL-E605 and FNAL-E866 collaborations. We study the impact of nuclear corrections due to the deuteron target, finding them to be within 1% in most of the kinematic region covered by SeaQuest. We perform a new proton PDF fit including the SeaQuest data, using the ABMP16 methodology, and compare to other PDF fits with and without these data.
On the basis of S. Alekhin et al. [arXiv:2306.01918]
We present recent updates in the xFitter software framework for global fits of parton distribution functions (PDFs) in high-energy physics. Our focus is on investigating the sensitivity to Z boson couplings using the forward-backward asymmetry in Drell-Yan production. By utilizing an effective approach and simulated data, we assess the accuracy of these couplings, specifically considering the full LHC data sample. Furthermore, we compare our results with predictions for future colliders, providing insights into their potential impact on understanding Z boson interactions.
The production of dijet events containing at least two jets is among the largest cross sections at the LHC, with QCD predictions directly sensitive to the strong coupling constant. Dijet cross section measurements from ATLAS and CMS, at center-of-mass energies of 7, 8 and 13 TeV are exploited for the determination of the strong coupling constant, using state-of-the-art next-to-next-to-leading order QCD predictions from NNLOJET which include subleading colour contributions. These are interfaced to the grid frameworks of APPLgrid and fastNLO. The large kinematic range of the dijet data allows for a comprehensive test of the renormalisation scale dependence of QCD.
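The renormalisation-scale dependence tested by such data is governed by the QCD running of the coupling; at one loop (standard result, shown here only for orientation, with $n_f$ the number of active flavours):

```latex
% One-loop renormalisation-group running of the strong coupling
\mu_R^2 \frac{d\alpha_s}{d\mu_R^2} = -b_0\, \alpha_s^2 + \mathcal{O}(\alpha_s^3),
\qquad
b_0 = \frac{33 - 2 n_f}{12\pi}
```

The wide kinematic reach of the dijet data thus probes $\alpha_s(\mu_R)$ over a broad range of scales.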
Measurements of individual electroweak bosons at hadron colliders provide stringent tests of perturbative QCD and improve the modelling of backgrounds to many BSM searches. We present the measurement of the production of a W boson in association with D+ and D*+ mesons. This precision measurement provides information about the strange-quark content of the proton and is compared to NLO theoretical calculations. Also presented is the production of Z bosons in association with b-tagged large-radius jets. The result highlights issues with the modelling of additional hadronic activity and provides a distinction between the flavour-number schemes used in theoretical predictions. Finally, differential measurements of W and Z production with large missing transverse momentum in association with jets are discussed and compared to state-of-the-art QCD theoretical predictions. The production rate of Z+jet events with large missing transverse momentum is used to measure the decay width of the Z boson decaying to neutrinos.
The study of the associated production of vector bosons and jets constitutes an excellent testing ground for state-of-the-art pQCD predictions and for understanding the EW aspects of their production. The newest results on the differential cross sections of vector bosons produced in association with jets at 13 TeV centre-of-mass energy will be presented. Differential distributions as a function of a broad range of kinematic observables are measured and compared with theoretical predictions up to NNLO. Final states with a vector boson and jets can also be used to study electroweak-initiated processes, such as the vector boson fusion production of a photon, Z or W boson accompanied by a pair of energetic jets with large invariant mass, which provide a powerful test of the EW emission of bosons.
Measurements of jet production in proton-proton collisions at the LHC are crucial for precise tests of QCD, improve the understanding of the proton structure, and are important tools for searches for physics beyond the standard model. We present the most recent set of jet measurements performed using CMS data, from which measurements of the strong coupling constant and PDF constraints are derived via QCD fits. An interpretation within the standard model effective field theory is also presented.
The focus of the session is on precision measurements of top quarks and on theory calculations.
Conveners:
Gauthier Durieux (CERN)
Abideh Jafari (DESY)
Narei Lorenzo Martinez (LAPP, Annecy)
Contact: eps23-conveners-t07@desy.de
Duration: 15'+5'
Current measurements of the top mass have achieved a precision of better than 500 MeV. However, these measurements, relying on Monte Carlo simulations, are affected by the top-mass interpretation problem, introducing a theory uncertainty of $\mathcal{O}(1\,\mathrm{GeV})$. To address this challenge, accurate first-principles calculations in short-distance schemes are needed, allowing direct comparison with unfolded LHC data. This talk presents two complementary observables, the soft drop jet mass (Phys. Rev. D 100, 074021) and the 3-point energy correlator (Phys. Rev. D 107, 114002), where precise hadron-level predictions for the top mass can be achieved. I will review recent advancements in these approaches, including a new NNLL prediction for the soft drop jet mass in top quark jets that incorporates a first-principles treatment of hadronization corrections. Additionally, I will present an improved calibration of the Monte Carlo top mass parameter in collaboration with ATLAS, using the new theory input.
Duration: 15'+5'
The precise measurement of the properties of the top quark is among the most important goals of the LHC. The signature of top quarks can only be measured through their decay products, which are almost exclusively a W-boson and a b-quark, and unbiased measurements of the top-quark pair production process are therefore performed in the final state of two W-bosons and two b-quarks (WWbb). However, the WWbb final state has further contributions from single-top production and even from channels without intermediate top-quarks. At next-to-leading order in QCD, these channels interfere and can no longer be calculated separately, and since the top quarks can be off their mass shell, finite-width effects also become important.
In this contribution, we exploit a measurement of the WWbb final state in the dilepton decay channel from ATLAS at 13 TeV, together with a next-to-leading-order QCD prediction supplemented with a parton shower in the Powheg-Box-Res framework (denoted "bb4l"), for a determination of the top-quark mass and its width. We evaluate the impact of using the fully off-shell calculations, and study the correlation between the top-quark mass and width. For the inference, we make use of a novel analytic parameter-estimation ansatz, the Linear Template Fit, which will also be introduced briefly.
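The Linear Template Fit has a closed-form solution when the predictions depend (approximately) linearly on the parameter of interest in every bin. A minimal numerical sketch, with entirely hypothetical templates and uncertainties standing in for the bb4l predictions:

```python
import numpy as np

# Templates: predicted yields in three bins for three values of the
# parameter a (e.g. a top-mass-like quantity; numbers are illustrative).
a_ref = np.array([170.0, 172.5, 175.0])
templates = np.array([[10.0, 11.0, 12.0],   # bin 1 yields per template
                      [20.0, 19.0, 18.0],   # bin 2
                      [ 5.0,  6.5,  8.0]])  # bin 3

# Per-bin linear model m_i(a) = m0_i + g_i * a from the templates
g, m0 = np.polyfit(a_ref, templates.T, 1)   # slopes g and offsets m0 per bin

V = np.diag([0.4, 0.5, 0.3]) ** 2           # toy data covariance
W = np.linalg.inv(V)
data = m0 + g * 172.5                       # pseudo-data generated at a = 172.5

# Closed-form weighted-least-squares estimate:
#   a_hat = (g^T W g)^(-1) g^T W (d - m0)
a_hat = (g @ W @ (data - m0)) / (g @ W @ g)
print(f"fitted parameter: {a_hat:.2f}")
```

The actual Linear Template Fit generalises this to multiple parameters and full covariance treatment; the point of the sketch is only that, under the linearity assumption, the estimate is analytic rather than the result of an iterative minimisation.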
Duration: 15'+5'
The simulation of processes involving heavy unstable particles, like the top quark, holds significant importance in LHC physics. In this contribution, we address the exclusive simulation of top-quark pair production with dileptonic decays, including the non-resonant diagrams, interferences, and off-shell effects arising from the finite top-quark width. Our simulations, utilizing the mg5_aMC@NLO program, achieve next-to-leading-order accuracy in QCD and are matched to parton showers through the MC@NLO method. We present phenomenological results with direct relevance to the 13 TeV LHC. We benchmark the impact of the off-shell effects on representative distributions relevant for top-mass extractions, and compare our simulation to lower-accuracy simulations and to data.
Duration: 15'+5'
With the help of the pole approximation, observables with polarised intermediate resonances can be calculated. Gauge-boson-pair production represents a particularly interesting class of processes to study polarisation. The definition of polarised signals at amplitude level has enabled successful phenomenological studies of leptonically decaying vector bosons. The natural step forward from this is the investigation of bosons decaying into hadrons. In this talk I discuss the NLO QCD predictions for the production of a polarised ZW$^+$ pair, where the W$^+$ boson decays hadronically and the Z boson leptonically. Of particular interest are observables that are well suited for the discrimination amongst different polarisation states of both weak bosons. In addition, I analyse the significant impact of NLO QCD corrections on differential distributions.