%0 Conference Proceedings
%T KANDINSKY Patterns as IQ-Test for Machine Learning
%+ Medical University Graz
%+ University of Teacher Education (PHBern)
%A Holzinger, Andreas
%A Kickmeier-Rust, Michael
%A Müller, Heimo
%< peer-reviewed
%( Lecture Notes in Computer Science
%B 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE)
%C Canterbury, United Kingdom
%Y Andreas Holzinger
%Y Peter Kieseberg
%Y A Min Tjoa
%Y Edgar Weippl
%I Springer International Publishing
%3 Machine Learning and Knowledge Extraction
%V LNCS-11713
%P 1-14
%8 2019-08-26
%D 2019
%R 10.1007/978-3-030-29726-8_1
%K Artificial intelligence
%K Human intelligence
%K Intelligence testing
%K IQ-Test
%K Explainable-AI
%K Interpretable machine learning
%Z Computer Science [cs]
%Z Conference papers
%X AI follows the notion of human intelligence, which is unfortunately not a clearly defined term. The most common definition, given by cognitive science as mental capability, includes, among others, the ability to think abstractly, to reason, and to solve real-world problems. A hot topic in current AI/machine learning research is to find out whether and to what extent algorithms are able to learn abstract thinking and reasoning similarly to humans, or whether the learning outcome remains purely statistical correlation. In this paper we provide some background on testing intelligence, report preliminary results from the 271 participants of our online study on explainability, and propose to use our Kandinsky Patterns as an IQ-Test for machines. Kandinsky Patterns are mathematically describable, simple, self-contained and hence controllable test data sets for the development, validation and training of explainability in AI. At the same time, Kandinsky Patterns are easily distinguishable by human observers; consequently, controlled patterns can be described by both humans and computers. The results of our study show that the majority of human explanations were based on the properties of individual elements in an image (i.e., shape, color, size) and on the number of individual objects. Comparisons between elements (e.g., more, less, bigger, smaller) were significantly less likely, and the location of objects, interestingly, played almost no role in the explanations of the images. The next step is to compare these explanations with machine explanations.
%G English
%Z TC 5
%Z TC 12
%Z WG 8.4
%Z WG 8.9
%Z WG 12.9
%2 https://inria.hal.science/hal-02520058/document
%2 https://inria.hal.science/hal-02520058/file/485369_1_En_1_Chapter.pdf
%L hal-02520058
%U https://inria.hal.science/hal-02520058
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC
%~ IFIP-TC5
%~ IFIP-WG
%~ IFIP-TC12
%~ IFIP-WG8-4
%~ IFIP-WG8-9
%~ IFIP-CD-MAKE
%~ IFIP-WG12-9
%~ IFIP-LNCS-11713