
Publication details

2023, COMPUTING AND SOFTWARE FOR BIG SCIENCE, volume 7

Convergent Approaches to AI Explainability for HEP Muonic Particles Pattern Recognition (01a Journal article)

Maglianella L., Nicoletti L., Giagu S., Napoli C., Scardapane S.

Neural networks are commonly described as ‘black-box’ models, meaning that the mechanism by which they produce predictions and make decisions is not immediately clear or even understandable by humans. Explainable Artificial Intelligence (xAI) therefore aims to overcome this limitation by providing explanations for Machine Learning (ML) algorithms and, consequently, making their outcomes reliable for users. However, different xAI methods may provide different explanations, both quantitatively and qualitatively, and this heterogeneity of approaches makes it difficult for a domain expert to select and interpret their results. In this work, we consider this issue in the context of a high-energy physics (HEP) use case concerning muonic motion. In particular, we explored an array of xAI methods based on different approaches and tested their capabilities in our use case. As a result, we obtained an array of potentially easy-to-understand and human-readable explanations of the models’ predictions, and for each of them we describe strengths and drawbacks in this particular scenario, providing an interesting atlas of the convergent application of multiple xAI algorithms in a realistic context.