%0 Conference Proceedings %T Unsupervised Multi-sensor Anomaly Localization with Explainable AI %+ Saarland University [Saarbrücken] %+ Deutsches Forschungszentrum für Künstliche Intelligenz GmbH = German Research Center for Artificial Intelligence (DFKI) %+ Technische Universität Darmstadt - Technical University of Darmstadt (TU Darmstadt) %A Ameli, Mina %A Pfanschilling, Viktor %A Amirli, Anar %A Maaß, Wolfgang %A Kersting, Kristian %Z Part 7: Explainable AI/Graph Representation and Processing Frameworks %< avec comité de lecture %@ 978-3-031-08332-7 %( IFIP Advances in Information and Communication Technology %B 18th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI) %C Hersonissos, Greece %Y Ilias Maglogiannis %Y Lazaros Iliadis %Y John Macintyre %Y Paulo Cortez %I Springer International Publishing %3 Artificial Intelligence Applications and Innovations %V AICT-646 %N Part I %P 507-519 %8 2022-06-17 %D 2022 %R 10.1007/978-3-031-08333-4_41 %K Anomaly localization %K Explainable artificial intelligence %K Unsupervised anomaly detection %K Multivariate time-series %K Multi-sensor data %Z Computer Science [cs]Conference papers %X Multivariate and multi-sensor data acquisition for the purpose of device monitoring has had a significant impact on recent research in anomaly detection. Despite the wide range of anomaly detection approaches, localization of detected anomalies in multivariate, multi-sensor time-series data remains a challenge. Interpretation and anomaly attribution are critical and could improve analysis and decision-making in many applications. With anomaly attribution, explanations can be leveraged to understand, on a per-anomaly basis, which sensors are the root cause of an anomaly and which features contribute most to it. To this end, we propose using saliency-based explainable AI approaches to localize the sensors responsible for anomalies in an unsupervised manner.
While most explainable AI methods are considered interpreters of AI models, we show for the first time that saliency-based explainable AI can be utilized for multi-sensor anomaly localization. Our approach is demonstrated by localizing detected anomalies in an unsupervised multi-sensor setup, and the experiments show promising results. We evaluate different classes of saliency-based explainable AI approaches on the Server Machine Dataset (SMD) and compare the results with the state-of-the-art OmniAnomaly localization approach. Our empirical analysis demonstrates promising performance. %G English %Z TC 12 %Z WG 12.5 %2 https://inria.hal.science/hal-04317155/document %2 https://inria.hal.science/hal-04317155/file/527511_1_En_41_Chapter.pdf %L hal-04317155 %U https://inria.hal.science/hal-04317155 %~ IFIP %~ IFIP-AICT %~ IFIP-TC %~ IFIP-WG %~ IFIP-TC12 %~ IFIP-AIAI %~ IFIP-WG12-5 %~ IFIP-AICT-646