We are delighted to announce that the "CoMPLEx" project has been accepted for funding as part of the H2020 Marie-Sklodowska-Curie Actions.

Unfolding methods correct experimental data for the impact of detector effects, enhancing the data's interpretability and facilitating their preservation. Since standard approaches are typically based on low-dimensional, binned histograms, we propose to develop improved neural-network-based unfolding methods that enable high-dimensional, event-by-event unfolding. We aim to achieve unfolding results with minimal bias and to establish a fast and reliable evaluation of the uncertainties arising from the experimental data as well as from the unfolding procedure itself. Normalizing flow networks are particularly well suited for this goal due to their invertibility, their fast evaluation, and their access to the probability density of the target space. While they will be tested on ATLAS data, the proposed methods will have wide applications in many related areas of fundamental research. The project will be implemented at LPNHE, in collaboration with the local ATLAS group and the ATLAS group from Heidelberg, benefiting in addition from support by SCAI.

Abstract:
The reconstruction of experimental data is always affected by detector effects such as limited resolution, acceptance, and efficiencies. Unfolding methods aim to correct for these effects by creating a probabilistic mapping between reconstructed and truth-level information, which enhances the data's interpretability and facilitates their preservation. While standard approaches are typically based on low-dimensional, binned histograms, neural networks open the path to probabilistic, high-dimensional, event-by-event unfolding. Tremendous progress in the development of neural-network-based unfolding methods has been achieved in recent years. While these methods have been tested extensively on pseudo-data, moving beyond proofs of concept represents a major challenge. In particular, the application to large experimental datasets has to meet demanding requirements of high performance and reliability. The future LHC Run 3 and the HL-LHC will enable the simultaneous measurement of many observables with high statistical precision, turning the accuracy of established algorithms into a limiting factor. In order to meet the precision requirements of future measurements, we need to achieve unfolding results with minimal bias, establish a fast and reliable evaluation of statistical and systematic uncertainties from experimental data, and determine the uncertainties of the unfolding procedure itself. Normalizing flow networks are particularly well suited for this goal due to their invertibility, their fast evaluation, and their access to the probability density of the target space. Particular challenges include fast and effective retraining to propagate simulation-related uncertainties, robustness in regions of low statistics, and the preservation of correlations between observables and their associated uncertainties in different phase-space regions. While they will be tested on ATLAS data, the proposed methods will have wide applications in many related areas of fundamental research.
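
To illustrate why these three properties of normalizing flows matter for unfolding, the following is a minimal, self-contained sketch in plain NumPy; it is not the project's implementation, and all names, shapes, and the toy linear "network" are illustrative assumptions. A single affine coupling layer is invertible in closed form, cheap to evaluate in both directions, and gives the exact log-density of a truth-level event via the change-of-variables formula, here conditioned on a reconstructed-level observation.

    import numpy as np

    class AffineCoupling:
        """One RealNVP-style affine coupling layer on 2D 'truth-level' vectors,
        conditioned on a scalar 'reconstructed-level' feature per event."""

        def __init__(self, rng):
            # Toy linear 'network'; in practice the log-scale and shift would come
            # from a neural network taking (passive half, condition) as input.
            self.w_s = rng.normal(scale=0.1, size=2)  # weights for the log-scale
            self.w_t = rng.normal(scale=0.1, size=2)  # weights for the shift

        def _scale_shift(self, passive, cond):
            feats = np.stack([passive, cond], axis=-1)
            return feats @ self.w_s, feats @ self.w_t

        def forward(self, x, cond):
            """Truth-level x -> latent z; also returns log|det J| per event."""
            log_s, t = self._scale_shift(x[:, 0], cond)
            z1 = x[:, 1] * np.exp(log_s) + t          # transform the active half
            return np.stack([x[:, 0], z1], axis=-1), log_s

        def inverse(self, z, cond):
            """Exact closed-form inverse: latent z -> truth-level x."""
            log_s, t = self._scale_shift(z[:, 0], cond)
            x1 = (z[:, 1] - t) * np.exp(-log_s)
            return np.stack([z[:, 0], x1], axis=-1)

    rng = np.random.default_rng(0)
    layer = AffineCoupling(rng)
    x_truth = rng.normal(size=(5, 2))   # toy truth-level events
    cond = rng.normal(size=5)           # toy reconstructed-level observations

    z, log_det = layer.forward(x_truth, cond)
    # Exact per-event log-density under a standard-normal latent (change of variables):
    log_prob = -0.5 * np.sum(z**2, axis=1) - np.log(2.0 * np.pi) + log_det
    assert np.allclose(layer.inverse(z, cond), x_truth)   # invertible to machine precision

Stacking many such layers, with the scale and shift produced by neural networks, yields the kind of conditional normalizing flow considered here: the exact inverse allows generating unfolded, truth-level events from reconstructed data, while the tractable log-density supports the probabilistic, event-by-event interpretation described above.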