High energy Intelligence
The High energy Intelligence (HeI) project is a Marie Curie staff exchange programme whose purpose is to explore, integrate and develop cutting-edge methods in high energy physics:
integrability, conformal and S-matrix bootstrap methods, and artificial intelligence.
How does it work?
Members of the European nodes of this project have funding to visit any other node of the consortium.
The collaboration organizes workshops and scientific events dedicated to the themes of the project.
Our Objectives
Extending Bootstrap Methods
Extend the horizon of applicability of bootstrap methods by finding better constraints and deriving more rigorous predictions. This includes studying the conformal window of QCD-like theories, integrable and supersymmetric theories, and theories with gravitational duals within string theory.
Refining QCD Physics Understanding
Push the boundaries of our understanding of QCD physics by obtaining the most refined parton distribution functions of quarks and gluons in nuclear matter.
Combining AI/ML with Theoretical Physics
Combine Artificial Intelligence and Machine Learning training with cutting-edge research in theoretical physics. We aim to design neural networks that can be trained on partial data sets while solving non-perturbative constraint equations from theory.
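As a rough illustration of what "training on partial data while imposing theory constraints" could look like in practice, here is a minimal sketch, assuming Python with PyTorch; the target function, the normalization sum rule and every name in the code are our own illustrative choices, not part of the project:

```python
# Toy sketch (our own illustration, not the HeI codebase): train a small neural
# network on a few "measured" points while enforcing a theory constraint,
# here the normalization sum rule  int_0^1 f(x) dx = 1, added to the loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Sparse, noisy "data": samples of the target f(x) = 6 x (1 - x), which integrates to 1.
x_data = torch.tensor([[0.1], [0.3], [0.7], [0.9]])
y_data = 6 * x_data * (1 - x_data) + 0.02 * torch.randn_like(x_data)

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                      nn.Linear(32, 32), nn.Tanh(),
                      nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Quadrature grid used to evaluate the integral constraint numerically.
x_quad = torch.linspace(0, 1, 101).unsqueeze(1)

for step in range(5000):
    optimizer.zero_grad()
    data_loss = ((model(x_data) - y_data) ** 2).mean()        # fit the partial data
    integral = torch.trapezoid(model(x_quad).squeeze(), x_quad.squeeze())
    constraint_loss = (integral - 1.0) ** 2                   # enforce the sum rule
    loss = data_loss + 10.0 * constraint_loss                 # weighted combination
    loss.backward()
    optimizer.step()

print(f"final data loss {data_loss.item():.2e}, integral {integral.item():.4f}")
```

The same pattern, a data-fit term plus a weighted constraint-violation term in the loss, carries over when the constraint is a non-perturbative equation from theory rather than a simple sum rule.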
Nodes
Partners
Our Activities
Workshops
Artificial Intelligence for High Energy Physics, EPFL, 10-20 June 2025.
Minicourses
• Jamie Taylor: An Introduction to Solving PDEs Using Neural Networks
Abstract: In recent years, advances in machine learning techniques have begun to make their way into numerical analysis, offering new toolkits for tackling problems arising from partial differential equations (PDEs), with a wealth of new capabilities - and limitations - compared to more classical methods. Whilst many new ideas have been proposed for integrating neural networks (NNs) with PDE methods, the aim of this course is to consider simple test cases that introduce attendees to the key concepts underlying such methodologies. In particular, we will focus on the most established methodology: Physics-Informed Neural Networks (PINNs). The three cornerstones of any such implementation - the choice of an appropriate loss function to be minimized, the NN architecture, and the optimization strategy employed - will be the focus of this course. The course aims to be as self-contained as possible; however, familiarity with classical methods (e.g. FEM) and elementary concepts from data science-based machine learning (e.g. simple NNs, stochastic gradient descent) will be beneficial.
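To make the three cornerstones named in the abstract concrete, here is a minimal PINN sketch, assuming Python with PyTorch; the toy Poisson problem and all names are our own illustration, not course material:

```python
# Minimal PINN sketch (our own toy, not course material): solve the 1D Poisson
# problem u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0, whose exact
# solution is u(x) = sin(pi x).
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Architecture: a small fully connected tanh network.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
# Optimization strategy: plain Adam on randomly resampled collocation points.
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

x_bc = torch.tensor([[0.0], [1.0]])              # boundary collocation points

for step in range(5000):
    optimizer.zero_grad()
    # Interior collocation points, resampled each step.
    x = torch.rand(128, 1, requires_grad=True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    # Loss: mean squared PDE residual plus a boundary penalty.
    residual = d2u + math.pi ** 2 * torch.sin(math.pi * x)
    loss_pde = (residual ** 2).mean()
    loss_bc = (net(x_bc) ** 2).mean()            # u = 0 on the boundary
    loss = loss_pde + loss_bc
    loss.backward()
    optimizer.step()

# Compare against the exact solution on a test grid.
x_test = torch.linspace(0, 1, 101).unsqueeze(1)
err = (net(x_test) - torch.sin(math.pi * x_test)).abs().max()
print(f"max abs error: {err.item():.3e}")
```

Here the loss is the PDE residual plus a boundary term, the architecture is a small tanh network, and the optimizer is Adam; each of these choices can be varied independently, which is precisely the design space the course discusses.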
Seminars
• Xinan Zhou: Giant Graviton Correlators as Defect Systems
Abstract: In this talk, I will discuss correlation functions in 4d N = 4 SYM involving two maximal giant gravitons and two light 1/2-BPS operators. I will argue that it is most natural to view them as two-point correlators in the presence of a zero-dimensional defect. Using this picture together with analytic bootstrap techniques, I will show how all infinitely many such correlators can be fully fixed just from symmetries and consistency conditions. Moreover, I will point out a hidden higher-dimensional symmetry which repackages these correlators into a simple generating function. I will also present evidence that the same symmetry holds at weak coupling for loop-correction integrands.
• Matthias Wilhelm: Refining Integration-by-Parts Reduction of Feynman Integrals with Machine Learning
Abstract: In this talk, we will present recent progress on applying machine-learning techniques to improve calculations in theoretical physics, in which we desire exact and analytic results. One example is given by so-called integration-by-parts reductions of Feynman integrals, which pose a frequent bottleneck in state-of-the-art calculations in theoretical particle and gravitational-wave physics. These reductions rely on heuristic approaches for selecting a finite set of linear equations to solve, and the quality of the heuristics heavily influences the performance. In this talk, we investigate the use of machine-learning techniques to find improved heuristics. We use funsearch, a genetic programming variant based on code generation by a Large Language Model, to explore possible approaches, and then use strongly typed genetic programming to zero in on useful solutions. Both approaches manage to re-discover the state-of-the-art heuristics recently incorporated into integration-by-parts solvers, and in one example find a small advance on this state of the art.
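As a heavily simplified cartoon of what "searching for better selection heuristics" can mean (this is our own toy, not the funsearch or genetic-programming pipeline described in the talk; the toy linear system and all names are purely illustrative), one can evolve the weights of a scoring function that greedily picks equations for a reduction:

```python
# Toy sketch (our own illustration): evolve a scoring heuristic that selects a
# small, "cheap" subset of linear equations that still spans the full system,
# a cartoon of the equation-selection problem in integration-by-parts reduction.
import numpy as np

rng = np.random.default_rng(0)

n_eqs, n_vars = 60, 20
# Random sparse candidate equations (rows); most entries are zero.
system = rng.integers(-3, 4, size=(n_eqs, n_vars)) * (rng.random((n_eqs, n_vars)) < 0.2)

def features(row):
    """Cheap per-equation features that a heuristic can combine linearly."""
    nonzero = np.nonzero(row)[0]
    return np.array([len(nonzero),                          # number of terms
                     nonzero.max() if len(nonzero) else 0,  # highest variable index
                     np.abs(row).max()])                    # largest coefficient

def cost_of_heuristic(weights):
    """Greedily pick rows by heuristic score until full rank; return total term count."""
    scores = np.array([features(r) @ weights for r in system])
    chosen, rank = [], 0
    for i in np.argsort(scores):
        new_rank = np.linalg.matrix_rank(system[chosen + [i]])
        if new_rank > rank:
            chosen.append(i)
            rank = new_rank
        if rank == n_vars:
            break
    return sum(np.count_nonzero(system[i]) for i in chosen)

# Simple (1+4) evolutionary loop over the heuristic's weights.
best_w = rng.normal(size=3)
best_cost = cost_of_heuristic(best_w)
for generation in range(30):
    for _ in range(4):
        w = best_w + 0.3 * rng.normal(size=3)
        c = cost_of_heuristic(w)
        if c < best_cost:
            best_w, best_cost = w, c

print("best weights:", np.round(best_w, 2), "cost:", best_cost)
```

In the setting of the talk, the candidate heuristics are full programs generated and mutated by a Large Language Model or by strongly typed genetic programming, and the fitness is the cost of the actual integration-by-parts solve; the toy above only mimics the outer search loop.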
This scientific collaboration is funded by the Marie Curie programme. For more information, visit CORDIS.