%0 Conference Proceedings
%T Explain Graph Neural Networks to Understand Weighted Graph Features in Node Classification
%+ Yale University [New Haven]
%+ JP Morgan AI Research
%A Li, Xiaoxiao
%A Saúde, João
%< peer-reviewed
%( Lecture Notes in Computer Science
%B 4th International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE)
%C Dublin, Ireland
%Y Andreas Holzinger
%Y Peter Kieseberg
%Y A Min Tjoa
%Y Edgar Weippl
%I Springer International Publishing
%3 Machine Learning and Knowledge Extraction
%V LNCS-12279
%P 57-76
%8 2020-08-25
%D 2020
%R 10.1007/978-3-030-57321-8_4
%K Explainability
%K Graph Neural Networks
%K Classification
%Z Computer Science [cs]
%Z Humanities and Social Sciences/Library and information sciences
%Z Conference papers
%X Real-world data collected from different applications often carry additional topological structure and connection information, and are naturally represented as weighted graphs. For the node labeling problem, Graph Neural Networks (GNNs) are a powerful tool that can mimic experts' decisions on node labeling. GNNs combine node features, connection patterns, and graph structure by using a neural network to embed node information and pass it through edges in the graph. We want to identify the patterns in the input data used by the GNN model to make a decision and examine whether the model works as desired. However, due to the complex data representation and non-linear transformations, explaining decisions made by GNNs is challenging. In this work, we propose new graph-feature explanation methods to identify the informative components and important node features. In addition, we propose a pipeline to identify the critical factors used for node classification. We use four datasets (two synthetic and two real) to validate our methods.
Our results demonstrate that our explanation approach can mimic data patterns used for node classification by human interpretation and disentangle different features in the graphs. Furthermore, our explanation methods can be used for understanding data, debugging GNN models, and examining model decisions.
%G English
%Z TC 5
%Z TC 8
%Z TC 12
%Z WG 8.4
%Z WG 8.9
%Z WG 12.9
%2 https://inria.hal.science/hal-03414729/document
%2 https://inria.hal.science/hal-03414729/file/497121_1_En_4_Chapter.pdf
%L hal-03414729
%U https://inria.hal.science/hal-03414729
%~ SHS
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC
%~ IFIP-TC5
%~ IFIP-WG
%~ IFIP-TC12
%~ IFIP-TC8
%~ IFIP-WG8-4
%~ IFIP-WG8-9
%~ IFIP-CD-MAKE
%~ IFIP-WG12-9
%~ IFIP-LNCS-12279