Description
We report progress in using large language models (LLMs) to generate particle theory Lagrangians. Treating Lagrangians as complex, rule-based constructs akin to linguistic expressions, we employ transformer architectures, proven in language processing tasks, to model and predict them. A dedicated dataset comprising the Standard Model and a variety of its extensions with additional scalar and fermionic fields was used to train our transformer model from scratch. The resulting model aims to demonstrate initial capabilities reminiscent of LLMs, chiefly pattern recognition. The ultimate goal of this initiative is an AI system capable of formulating theoretical explanations for experimental observations, a significant step towards integrating artificial intelligence into the iterative practice of theoretical physics.
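To make the approach concrete, below is a minimal sketch in PyTorch of the underlying idea: symbolic Lagrangian terms are tokenized into a small vocabulary and a decoder-style transformer is trained from scratch on next-token prediction. The toy corpus, field names, and all hyperparameters are illustrative assumptions, not the speakers' actual dataset or model.

```python
import torch
import torch.nn as nn

# Toy "Lagrangians": symbolic terms written as whitespace-separated tokens.
# These three example terms are purely illustrative.
corpus = [
    "D_mu phi dagger D^mu phi - m^2 phi dagger phi",
    "i psi bar gamma^mu D_mu psi - m psi bar psi",
    "- 1/4 F_mu_nu F^mu^nu",
]
tokens = sorted({t for line in corpus for t in line.split()})
vocab = {t: i for i, t in enumerate(tokens, start=1)}  # index 0 = padding
ids = [[vocab[t] for t in line.split()] for line in corpus]
max_len = max(len(s) for s in ids)
batch = torch.zeros(len(ids), max_len, dtype=torch.long)
for r, s in enumerate(ids):
    batch[r, : len(s)] = torch.tensor(s)

class LagrangianLM(nn.Module):
    """Small causal transformer language model over Lagrangian tokens."""
    def __init__(self, vocab_size, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        # Causal mask: each position attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        pos = torch.arange(x.size(1)).unsqueeze(0)
        h = self.encoder(self.embed(x) + self.pos(pos), mask=mask)
        return self.head(h)

model = LagrangianLM(len(vocab) + 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=0)  # ignore padding positions
for step in range(100):  # tiny demonstration training loop
    logits = model(batch[:, :-1])  # predict each next token
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), batch[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final next-token loss: {loss.item():.3f}")
```

In this framing, generating a candidate Lagrangian amounts to sampling tokens autoregressively from the trained model, with symmetry and consistency requirements learned implicitly from the training corpus rather than imposed by hand.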