Speaker
Description
In quantum many-body physics, the analytic continuation of Green's functions is a well-known problem.
The problem is ill-posed in the sense that the transformation kernel strongly suppresses contributions from large energies, so the inversion is severely ill-conditioned and even small noise
in the data creates huge differences in the resulting spectral density function.
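For orientation (the precise representation is not fixed in this abstract), the data and the spectral density are related by a linear integral equation of the form

    G(x) = \int d\omega \, K(x, \omega) \, A(\omega),

where, for example, the standard fermionic imaginary-time kernel reads K(\tau, \omega) = e^{-\tau\omega} / (1 + e^{-\beta\omega}) with inverse temperature \beta. Because the kernel is exponentially small at large |\omega|, high-energy features of A(\omega) leave almost no imprint on the data, and inverting the relation amplifies statistical noise dramatically.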
Some techniques from the field of machine learning, neural networks in particular, are known to handle this kind of problem well.
Using loss functions and hyperparameters optimized for this problem, a neural network is trained to determine
the spectral density from the imaginary part of the Green's function given by quantum Monte Carlo simulations.
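As a rough sketch of such a setup (the actual architecture, loss functions, and hyperparameters are those presented in the talk; the layer sizes, grid sizes, and the plain mean-squared-error loss below are placeholder assumptions), a supervised training step on pairs of Green's-function data and known spectral densities might look as follows:

    # Sketch only: maps a discretized Green's function G(tau_1..tau_N)
    # to a discretized spectral density A(omega_1..omega_M).
    # Layer sizes, loss, and optimizer settings are illustrative assumptions.
    import torch
    import torch.nn as nn

    N_TAU, N_OMEGA = 64, 128  # grid sizes (assumed)

    model = nn.Sequential(
        nn.Linear(N_TAU, 256),
        nn.ReLU(),
        nn.Linear(256, 256),
        nn.ReLU(),
        nn.Linear(256, N_OMEGA),  # no explicit constraint on sign or norm of the output
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()  # placeholder for the problem-optimized loss

    def train_step(g_batch: torch.Tensor, a_batch: torch.Tensor) -> float:
        """One gradient step on a batch of (G, A) training pairs."""
        optimizer.zero_grad()
        a_pred = model(g_batch)
        loss = loss_fn(a_pred, a_batch)
        loss.backward()
        optimizer.step()
        return loss.item()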
The network is able to recover the overall form of the spectral density function, even without adding constraints such as
normalization and positive definiteness.
There is no need to encode these constraints as regularization terms, since the solutions provided by the network satisfy them automatically.
This indicates that the neural network has learned the inversion of the transformation kernel correctly.
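A simple way to probe these properties on a predicted spectrum is a numerical test of the sum rule and of non-negativity; the tolerances below are illustrative assumptions:

    # Sketch: verify that a predicted spectrum approximately satisfies
    # the sum rule (integral of A over omega equals 1) and non-negativity.
    # Tolerances are illustrative assumptions.
    import torch

    def check_spectrum(a_pred: torch.Tensor, omega: torch.Tensor,
                       norm_tol: float = 5e-2, neg_tol: float = 1e-3) -> bool:
        """Return True if A(omega) is approximately normalized and non-negative."""
        # Trapezoidal rule for the integral of A over the omega grid.
        norm = torch.sum(0.5 * (a_pred[1:] + a_pred[:-1]) * (omega[1:] - omega[:-1]))
        normalized = abs(norm.item() - 1.0) < norm_tol
        non_negative = a_pred.min().item() > -neg_tol  # allow tiny numerical undershoot
        return normalized and non_negative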
In the talk, I will explain the structure of the methods used to train the network and highlight the central results.
Summary
A neural network is used to predict solutions of an ill-posed problem.