1 January 2025 to 28 February 2025
Online
Europe/Berlin timezone

High-resolution X-ray imaging with a large field of view: How to reproducibly and reliably reduce big data without loss of information and manage experiments and metadata with the aim of ANN/ML

1 Jan 2025, 11:20
5m
Online

Speakers

Dr Simon Zabler (Fraunhofer IIS), Astrid Hoelzing (Fraunhofer IIS)

Description

Our associated partner established beamline BM18 at the ESRF in 2022, featuring a fan beam and a large-area detector (LAD) and enabling energy-dispersive and time-dependent micro-tomography for research and industrial applications. Our contribution to BM18 includes the installation of the LAD and the corresponding software for data acquisition and reconstruction. The setup allows a very large FOV with resolutions down to 25 µm, so users will generate big data (remotely). Data acquisition rates of 5 GB/s are expected, requiring new IT infrastructure and new algorithms for data handling. A compression ratio of 1:20 is favorable; machine learning for automated scanner optimization and adaptation is targeted. Adequate acquisition, handling, and analysis software for big data with the aim of ANN and ML is required, guided by the scientist.
Fast data transfer and software-accelerated reconstruction are crucial. For automated, experience-based, user-guided experiments with efficient ML/ANN parameterization of experiments, tracking of the important metadata as well as optimized artefact-correction and reconstruction algorithms are required. Quantification of quality criteria as well as combination with simulation are desirable.
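As a back-of-envelope illustration of these numbers (not part of the abstract itself): the sketch below streams detector frames into a chunked, compressed HDF5 file with h5py. The frame geometry, chunk layout, and gzip codec are illustrative assumptions, not BM18 specifications; a production pipeline at 5 GB/s would more likely use a fast codec such as Blosc/LZ4 via hdf5plugin.

    import numpy as np
    import h5py

    # Assumed frame geometry (illustrative, not BM18 specifications):
    FRAME_SHAPE = (4096, 4096)   # detector pixels, stored as uint16
    N_FRAMES = 32                # frames written in this demo

    with h5py.File("scan_demo.h5", "w") as f:
        dset = f.create_dataset(
            "projections",
            shape=(0, *FRAME_SHAPE),
            maxshape=(None, *FRAME_SHAPE),  # growable along the frame axis
            chunks=(1, *FRAME_SHAPE),       # one chunk per frame
            dtype="uint16",
            compression="gzip",             # placeholder codec, see above
            compression_opts=4,
        )
        for i in range(N_FRAMES):
            frame = np.random.randint(0, 65535, FRAME_SHAPE, dtype=np.uint16)
            dset.resize(i + 1, axis=0)
            dset[i] = frame

    # One raw uint16 frame is 4096*4096*2 B ~ 33.6 MB, so 5 GB/s is
    # roughly 150 frames/s; a 1:20 ratio would leave ~250 MB/s to store.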

What is your expertise in computing and / or software development?

Python software (PyXIT); compressed sensing; artifact reduction; FBP and iterative reconstruction; starting with ML/AI for dedicated applications (components and scanners)
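For readers unfamiliar with the reconstruction methods named above, here is a minimal sketch of FBP versus an iterative (SART) reconstruction, using scikit-image on a test phantom rather than PyXIT, whose interface is not shown in this abstract:

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, iradon_sart

    # Forward-project a test phantom to obtain a sinogram.
    phantom = shepp_logan_phantom()
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(phantom, theta=theta)

    # Filtered back-projection with the standard ramp filter.
    fbp = iradon(sinogram, theta=theta, filter_name="ramp")

    # Two SART sweeps as a simple iterative alternative; each call
    # refines the previous estimate.
    sart = iradon_sart(sinogram, theta=theta)
    sart = iradon_sart(sinogram, theta=theta, image=sart)

    print("FBP RMSE: ", np.sqrt(np.mean((fbp - phantom) ** 2)))
    print("SART RMSE:", np.sqrt(np.mean((sart - phantom) ** 2)))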

Please describe areas in which you would like to improve your knowledge / skills.

We would appreciate getting in touch with algorithms not only for artifact-reduced reconstruction but also for automated segmentation and data reduction with ML. How can sufficient training data be generated that is applicable to a variety of scanners and materials? We want quantitative, comprehensible quality criteria.
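As a point of reference for the segmentation and data-reduction goal, a classical non-ML baseline is Otsu thresholding plus connected-component labeling; in the targeted workflow a learned model would replace the threshold step. A minimal sketch on stand-in data:

    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label

    # Stand-in for a reconstructed grayscale volume (float32 voxels).
    volume = np.random.rand(64, 64, 64).astype(np.float32)

    # Global Otsu threshold separates material from background.
    mask = volume > threshold_otsu(volume)

    # Connected-component labeling produces a compact label volume.
    labels = label(mask).astype(np.uint16)

    # Data reduction: 4-byte float voxels become 2-byte labels here
    # (or 1-byte for few phases), before any entropy coding.
    print(volume.nbytes, "->", labels.nbytes, "bytes")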

My current most burning research question, for which I would like to find partners, is:

How can big raw data from µCT be reduced intelligently and quickly to a segmented volume without loss of information, with accessible and findable metadata, with the aim of AI/ML? Are there standards for format and metadata for broad usability? Links to existing solutions and platforms?
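On the standards question, one existing option is the NeXus NXtomo application definition on top of HDF5; the fragment below sketches the idea with an abbreviated, illustrative subset of the fields, not a complete or validated NXtomo file:

    import numpy as np
    import h5py

    with h5py.File("tomo_meta_demo.h5", "w") as f:
        entry = f.create_group("entry")
        entry.attrs["NX_class"] = "NXentry"
        entry["definition"] = "NXtomo"

        inst = entry.create_group("instrument")
        inst.attrs["NX_class"] = "NXinstrument"
        det = inst.create_group("detector")
        det.attrs["NX_class"] = "NXdetector"
        det["x_pixel_size"] = 25.0            # resolution quoted in the abstract
        det["x_pixel_size"].attrs["units"] = "um"
        det["data"] = np.zeros((3, 8, 8), np.uint16)  # stand-in projections

        sample = entry.create_group("sample")
        sample.attrs["NX_class"] = "NXsample"
        sample["rotation_angle"] = np.linspace(0.0, 180.0, 3)
        sample["rotation_angle"].attrs["units"] = "degree"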

Please describe areas in which you can contribute to “data handling” teaching.

We are connected to the University of Würzburg and hold lectures at the Chair of X-ray Microscopy (LRM). We can cover data acquisition as well as reconstruction with multi-material, multiscale, and phase-enhanced optimization.

What is your field and role?

Performing experiments, supporting users, artifact correction and reconstruction, data handling and reduction, segmentation

Please describe your expertise/areas in which you would like to contribute / advise.

We are familiar with high-resolution and in situ X-ray tomography as well as phase-contrast imaging, both in the laboratory and at synchrotrons (ESRF, BESSY). Our partners established BM18 at the ESRF.
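For the phase-contrast side, a standard single-distance method is Paganin-style phase retrieval; below is a minimal sketch under the usual homogeneous-object assumption (all parameter values are placeholders, not BM18 or BESSY settings):

    import numpy as np

    def paganin_retrieve(intensity, pixel_size, dist, wavelength, delta, beta):
        """Single-distance Paganin phase retrieval (homogeneous object).

        `intensity` is the flat-field-corrected image I/I0; all lengths
        share one unit (e.g. metres). Returns the projected thickness.
        """
        ny, nx = intensity.shape
        fx = np.fft.fftfreq(nx, d=pixel_size)   # cycles per length unit
        fy = np.fft.fftfreq(ny, d=pixel_size)
        f2 = fx[np.newaxis, :] ** 2 + fy[:, np.newaxis] ** 2

        # Paganin low-pass filter: 1 / (1 + pi*lambda*z*(delta/beta)*|f|^2)
        filt = 1.0 / (1.0 + np.pi * wavelength * dist * (delta / beta) * f2)
        smoothed = np.real(np.fft.ifft2(np.fft.fft2(intensity) * filt))

        mu = 4.0 * np.pi * beta / wavelength    # linear attenuation coefficient
        return -np.log(np.clip(smoothed, 1e-12, None)) / mu

    # Placeholder parameters, roughly plausible for hard X-rays:
    img = 0.8 + 0.2 * np.random.rand(256, 256)
    thickness = paganin_retrieve(img, pixel_size=25e-6, dist=1.0,
                                 wavelength=1e-10, delta=1e-6, beta=1e-9)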

In ErUM-Data, what kind of data are you dealing with?

X-ray tomography, phase-contrast imaging, and reconstructed volumes; optionally, complementary techniques such as diffraction, (small-angle) scattering, and magnetic resonance tomography on different materials and compounds

Your ErUM Committee is KFS - Komitee für Forschung mit Synchrotronstrahlung (Committee for Research with Synchrotron Radiation)
Do you consent to the data usage and public abstract data posting in the ErUM-Data Community Information Exchange? Yes

Primary authors

Dr Simon Zabler (Fraunhofer IIS), Astrid Hoelzing (Fraunhofer IIS), Nazila Saeidnezhad (Fraunhofer IIS)

Presentation materials

There are no materials yet.