This is a half-day meeting of people within FS doing computational work. The idea is to get to know each other and find out who is doing what. Topics will range from data analysis to controls, imaging methods, AI applications, and more. It is also a chance to get to know the new head of IT and the new Computational Imaging and FS-EC group heads.
The program will consist of very short talks and plenty of time to discuss. We will meet in the FLASH hall so that there is plenty of space.
The aim is to share a morning, 9 am to 1 pm. Coffee and lunch may be provided (logistics TBD). We use Indico and ask you to register so that we can track numbers and plan the meeting.
Within FS-EC (Experiment Control) we develop and adopt software solutions that support the data acquisition processes of user operation at FS facilities, as well as the fundamental background services needed for a performant beamline infrastructure. The group covers aspects ranging from high-level controls (Sardana), low-level controls and hardware abstraction (Tango), detector drivers and adaptors (for high-speed cameras), and custom firmware for embedded devices (PILC) to beamline-specific scripts and user interfaces. Furthermore, we are involved in data analysis (Maxwell support) as well as GPFS storage (ASAP3) related activities.
As one of the support groups within FS, our mission is to contribute to smooth user operation at DESY FS facilities. This involves our own developments (e.g. hundreds of Tango servers) as well as contributions to international collaborations.
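As an illustration of the controls layers mentioned above, here is a minimal sketch of a Tango device server written with the PyTango high-level API; the BeamlineMotor device, its attribute, and its command are hypothetical examples for illustration, not an actual FS-EC server:

```python
from tango.server import Device, attribute, command, run

class BeamlineMotor(Device):
    """Hypothetical minimal Tango device, for illustration only."""

    def init_device(self):
        super().init_device()
        self._position = 0.0

    @attribute(dtype=float, unit="mm")
    def position(self):
        # Read back the (simulated) motor position
        return self._position

    @command(dtype_in=float)
    def move_to(self, target):
        # A real server would drive the motor controller here
        self._position = target

if __name__ == "__main__":
    run((BeamlineMotor,))
```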
The knowledge gap between what we know about proteins and what we would like to do with them is broadly described as “protein dynamics”. The nascent Protein Machinists group tackles this from computational angles: how to mathematically express protein conformations, compare them, establish what is contained within an experimentally observed population, and manipulate them. I will also talk about the computational backbone of the scientific work.
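As a concrete instance of the "compare conformations" step, one textbook approach is the RMSD after optimal rigid superposition (the Kabsch algorithm). The sketch below assumes two N x 3 coordinate arrays and is purely illustrative, not necessarily the group's metric of choice:

```python
import numpy as np

def aligned_rmsd(P, Q):
    """RMSD between two conformations (N x 3 arrays) after Kabsch alignment."""
    P = P - P.mean(axis=0)                     # remove translation
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)          # SVD of the covariance matrix
    d = np.sign(np.linalg.det(U @ Vt))         # guard against improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt        # optimal rotation matrix
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))
```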
Applying for measurements at large X-ray sources like PETRA III at DESY or BESSY II at HZB is highly competitive. If granted beam time, numerous samples need to be measured. These measurements often occur under in-situ or operando conditions, such as studying the degradation of a biodegradable Mg wire or structural changes in a battery during charging. These changes happen at a sub-micron scale, requiring computationally intensive techniques like phase-sensitive X-ray microscopy. However, these techniques don’t provide immediate feedback, increasing the risk of failed measurements and wasted beam time. The Helmholtz imaging project SmartPhase aims to address this issue by providing online reconstruction of measurement data. We are implementing self-optimizing phase-retrieval algorithms based on conventional and physics-informed machine learning approaches.
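For readers unfamiliar with phase retrieval, the sketch below shows a classic error-reduction loop of the Gerchberg-Saxton family, one of the "conventional" baselines alluded to above; the self-optimizing algorithms developed in SmartPhase go well beyond this:

```python
import numpy as np

def error_reduction(measured_amplitude, support, n_iter=100):
    """Alternate between the measured Fourier modulus and a real-space support."""
    # Random initial phase consistent with the measured amplitude
    phase = np.random.uniform(0.0, 2.0 * np.pi, measured_amplitude.shape)
    obj = np.fft.ifft2(measured_amplitude * np.exp(1j * phase))
    for _ in range(n_iter):
        obj = np.where(support, obj.real, 0.0)             # real-space constraint
        F = np.fft.fft2(obj)
        F = measured_amplitude * np.exp(1j * np.angle(F))  # modulus constraint
        obj = np.fft.ifft2(F)
    return obj
```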
I present an overview of the simulation activities carried out with the XRAYPAC toolkit developed at FS-CFEL-3.
An overview of RIC activities for FS, including results from EU projects and collaborations with LEAPS and Helmholtz.
A short overview of the services provided by central IT specifically for FS data taking and data processing, especially the ASAP3 framework and related services.
Accurate and precise information about the position of the detector is critical to the success of many types of experiment in photon science. This is particularly true for serial crystallography, which is sensitive to detector misalignments of less than one pixel’s width. At the same time, serial crystallography data, consisting of sharp, bright Bragg peaks in regular and geometrically simple patterns, is very useful for refining the detector geometry. Several methods have been developed for this process over the last decade. The current state of the art involves refining the “global” detector geometry parameters along with the “local” crystal orientation parameters in one very large least-squares refinement. The joint refinement of local and global parameters is needed to avoid a biased fit and slow convergence. Unfortunately, with a typical serial crystallography dataset consisting of many tens of thousands of crystals, only a fraction of the data can be used before the refinement becomes too computationally demanding.
The “Millepede” algorithm was developed in high energy physics to address exactly this problem. The relevant matrix equations can be rearranged such that the full calculation can be performed, without approximations but with vastly reduced memory and CPU requirements. This makes a full joint refinement using tens or even hundreds of thousands of crystals practical.
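To make the rearrangement concrete, here is a sketch with simplified notation (an illustration of the standard Millepede reduction, not necessarily the exact formulation used in the talk). Writing $a$ for the global parameters and $\alpha_i$ for the local parameters of crystal $i$, the least-squares normal equations have a bordered block structure in which each local block couples only to the globals, so the locals can be eliminated exactly:

$$
\begin{pmatrix}
C & G_1 & \cdots & G_n \\
G_1^{T} & \Gamma_1 & & \\
\vdots & & \ddots & \\
G_n^{T} & & & \Gamma_n
\end{pmatrix}
\begin{pmatrix} a \\ \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix}
=
\begin{pmatrix} b \\ \beta_1 \\ \vdots \\ \beta_n \end{pmatrix}
\;\Longrightarrow\;
\Bigl(C - \sum_{i} G_i \Gamma_i^{-1} G_i^{T}\Bigr)\, a \;=\; b - \sum_{i} G_i \Gamma_i^{-1} \beta_i .
$$

Only the reduced system, whose dimension is the number of global parameters, needs to be solved; each small $\Gamma_i$ is inverted independently, which is where the memory and CPU savings come from.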
In this contribution, I will describe the ongoing experience of applying the Millepede algorithm to serial crystallography data, and the potential for other applications in photon science.
This presentation offers a succinct examination of FLASH, emphasizing the facility's key features. It outlines the IT infrastructure and collaborative relationships with different DESY groups. Additionally, it delves into the technical aspects of experimental control, data acquisition, and analysis methods. The talk also sheds light on current projects and future directions.
New detectors produce increasing volumes of data, so performing data reduction close to the detector (e.g. on detector-specific data acquisition PCs) can potentially reduce the workload on downstream infrastructure. I will present current work on this in FS-DS. In addition, for historical reasons, FS-DS is responsible for the DESY part of a few collaboration projects on data reduction: Data-X, HIR3X work package 2, and LEAPS-INNOV work package 7. I will present these projects, and we can discuss how to proceed with them in future.
We have been using ASAP::O to process data in real time from an Eiger 16M detector, in serial crystallography experiments at P11. The system performs a peak search before indexing and integrating each diffraction pattern, producing Bragg reflection intensity measurements without any need (in principle) to store image data.
Through rounds of performance profiling, we reduced the time taken to process one pattern (in one thread) to only 455 ms, meaning that the dedicated P11 computing resources are sufficient to keep up with the 133 frames per second speed of the Eiger detector, despite the very large number of pixels (16M).
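A quick back-of-envelope check, using only the numbers quoted above, shows the degree of parallelism this implies:

```python
# Numbers taken from the abstract above
t_per_pattern = 0.455          # seconds to process one pattern in one thread
frame_rate = 133               # Eiger 16M frames per second

# Workers needed so that aggregate throughput matches the detector
workers_needed = frame_rate * t_per_pattern
print(f"~{workers_needed:.0f} parallel workers needed")   # ~61
```

Roughly 61 concurrent workers suffice, consistent with the statement that the dedicated P11 computing resources can keep up.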
The pipeline is available for user experiments at P11 (and other beamlines).
Current femtosecond crystallography data processing routines, such as CrystFEL by T. White, generally assume an identical molecular structure for each of the tens of thousands of protein crystals used. This assumption is, however, known to be unphysical, and with the recent development of deep learning technology, we are exploring whether we can build a network capable of shaking out the subtle structural variation information hidden behind a cloud of noise. We report progress on the development of a variational autoencoder, with an architecture inspired both by work in particle physics and by cryo-EM.
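For orientation, a variational autoencoder compresses each input into a low-dimensional latent distribution and reconstructs it from a sample of that distribution; structural variation would then show up as structure in the latent space. The PyTorch sketch below is a generic, hypothetical architecture (layer sizes, latent dimension, and the unweighted loss are placeholders), not the network under development:

```python
import torch
import torch.nn as nn

class PatternVAE(nn.Module):
    """Generic VAE sketch; all sizes are illustrative placeholders."""

    def __init__(self, n_in=1024, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, n_latent)       # latent mean
        self.to_logvar = nn.Linear(256, n_latent)   # latent log-variance
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 256), nn.ReLU(), nn.Linear(256, n_in))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: differentiable sampling of the latent z
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence to a unit Gaussian prior
    recon_err = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl
```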
We will present the work and expertise of the newly established Computational Imaging Group at DESY. Rooted in mathematics, we provide expertise in inverse problems as well as the theoretical foundations of machine learning methods.
We give an overview of mathematical methods in imaging and applications thereof, in particular algorithms for reducing the computational cost of large-scale problems. Moreover, we give insight into the theoretically founded training of robust or sparse neural networks.
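As one elementary example of the sparsity-promoting methods alluded to above, the sketch below implements iterative soft-thresholding (ISTA) for the LASSO problem min_x 0.5*||Ax - y||^2 + lam*||x||_1; it is a textbook illustration, not a description of the group's own algorithms:

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x
```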
Can laser-induced molecular wave packets be used as a quantum simulator?
We'll give an overview of our computational projects at beamline P11.
In this contribution, an overview of the imaging program of the SAXSMAT beamline P62 will be given. In particular, the computational aspects of the 3D reconstruction for SAXS and WAXS tensor tomography imaging experiments will be presented. Furthermore, an outlook on upcoming projects that incorporate AI into the reconstruction algorithms will be discussed.
Besides the 3D tomography datasets, conventional scanning SAXS and WAXS datasets are getting larger and larger, and therefore more demanding in terms of data processing. A short overview of the computational aspects of these data will be given as well.
The FS-PETRA Beamline Optics Simulation group is developing tools and providing support for ray-tracing and wave-propagation simulations of PETRA beamlines. With the sharp increase in coherence at PETRA IV, wave-propagation simulations gain in relevance and will be necessary for many upcoming beamlines. Start-to-end simulations of entire beamlines, including coherent imaging experiments, can be performed.
In my talk I will introduce the principles of such simulations and explain the unique computing requirements they have.
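To give a flavour of the core numerical operation, the sketch below propagates a complex wavefield through free space with a Fresnel (paraxial) transfer function applied in Fourier space. It is a minimal illustration under simplified assumptions (uniform pixel size, constant phase prefactors omitted), not the group's production code:

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    """Propagate a complex wavefield a distance z using a paraxial
    transfer function applied in Fourier space (illustrative sketch)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)              # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel (paraxial) free-space transfer function, constant phase omitted
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```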