
Tensor networks for explainable machine learning in cybersecurity

In this paper we show how tensor networks can improve the explainability of machine learning algorithms. Specifically, we develop an unsupervised clustering algorithm based on Matrix Product States (MPS) and apply it to a real use-case involving adversary-generated threat intelligence. Our investigation shows that MPS rival traditional deep learning models such as autoencoders and GANs in performance while providing much richer model interpretability. Our approach naturally enables the extraction of feature-wise probabilities, Von Neumann entropy, and mutual information, offering a compelling narrative for the classification of anomalies and an unprecedented level of transparency and interpretability, which is fundamental to understanding the rationale behind artificial intelligence decisions.
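The abstract mentions Von Neumann entropy as one of the interpretability quantities an MPS exposes. As a hedged illustration (not the authors' implementation), the sketch below computes the Von Neumann entanglement entropy of a small state vector across a bipartition via SVD; in an MPS this is the quantity read off from the Schmidt coefficients on a bond. The function name and interface are assumptions for illustration only.

```python
import numpy as np

def von_neumann_entropy(psi, cut, dims):
    """Entanglement entropy of state vector `psi` across a bipartition at `cut`.

    `dims` lists the local dimension of each site; `cut` is the number of
    sites in the left block. (Illustrative helper, not the paper's code.)
    """
    # Reshape the state vector into a (left sites) x (right sites) matrix.
    left = int(np.prod(dims[:cut]))
    right = int(np.prod(dims[cut:]))
    mat = np.asarray(psi).reshape(left, right)
    # Singular values of this matrix are the Schmidt coefficients; in an
    # MPS they sit directly on the bond at `cut`.
    s = np.linalg.svd(mat, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]  # drop numerically zero Schmidt weights
    return float(-np.sum(p * np.log(p)))

# Product state |00>: no entanglement, entropy 0.
prod = np.array([1.0, 0.0, 0.0, 0.0])
# Bell state (|00> + |11>)/sqrt(2): maximal entanglement, entropy ln 2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
```

For a trained MPS model, the same Schmidt spectrum is available without forming the full state vector, which is what makes the entropy cheap to report per bond.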

Keywords: Tensor networks; Explainable AI; Cybersecurity; Anomaly detection; Matrix Product States (MPS); Adversary-generated threat intelligence; Renormalization group

Aizpurua, Borja; Palmer, Samuel; Orus, Roman


Multiverse Comp; Tecnun Univ Navarra

Multiverse Comp

Multiverse Comp; Donostia Int Phys Ctr; Ikerbasque Fdn Sci

2025

Neurocomputing


SCI
ISSN: 0925-2312
Year, Volume (Issue): 2025, 639 (Jul. 28)