%0 Conference Proceedings
%T Using Relational Concept Networks for Explainable Decision Support
%+ The Netherlands Organisation for Applied Scientific Research (TNO)
%A de Heer, Paolo
%A Voogd, Jeroen
%A Veltman, Kim
%A Hanckmann, Patrick
%A van Lith, Jeroen
%< peer-reviewed
%( Lecture Notes in Computer Science
%B 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE)
%C Canterbury, United Kingdom
%Y Andreas Holzinger
%Y Peter Kieseberg
%Y A Min Tjoa
%Y Edgar Weippl
%I Springer International Publishing
%3 Machine Learning and Knowledge Extraction
%V LNCS-11713
%P 78-93
%8 2019-08-26
%D 2019
%R 10.1007/978-3-030-29726-8_6
%K Symbolic AI
%K Neural networks
%K Graph-based machine learning
%K Explainability
%K Decision support
%Z Computer Science [cs]
%Z Conference papers
%X In decision support systems, information from many different sources must be integrated and interpreted to aid the process of gaining situational understanding. These systems assist users in making the right decisions, for example when under time pressure. In this work, we discuss a controlled automated support tool for gaining situational understanding, where multiple sources of information are integrated. In the domain of operational safety and security, available data is often limited and insufficient for sub-symbolic approaches such as neural networks. Experts generally have high-level (symbolic) knowledge but may lack the ability to adapt and apply that knowledge to the current situation. In this work, we combine sub-symbolic information and technologies (machine learning) with symbolic knowledge and technologies (from experts or ontologies). This combination offers the potential to steer the interpretation of the limited available data with the knowledge of the expert. We created a framework that consists of concepts and relations between those concepts, for which the exact relational importance is not necessarily specified.
A machine-learning approach is used to determine the relations that fit the available data. The use of symbolic concepts allows for properties such as explainability and controllability. The framework was tested with expert rules on an attribute dataset of vehicles, and its performance with incomplete inputs or smaller training sets was compared to that of a traditional fully connected neural network. The results show the framework to be a viable alternative when data is limited or incomplete, and that more semantic meaning can be extracted from the activations of its concepts.
%G English
%Z TC 5
%Z TC 12
%Z WG 8.4
%Z WG 8.9
%Z WG 12.9
%2 https://inria.hal.science/hal-02520050/document
%2 https://inria.hal.science/hal-02520050/file/485369_1_En_6_Chapter.pdf
%L hal-02520050
%U https://inria.hal.science/hal-02520050
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC
%~ IFIP-TC5
%~ IFIP-WG
%~ IFIP-TC12
%~ IFIP-WG8-4
%~ IFIP-WG8-9
%~ IFIP-CD-MAKE
%~ IFIP-WG12-9
%~ IFIP-LNCS-11713