%0 Conference Proceedings
%T New Frontiers in Explainable AI: Understanding the GI to Interpret the GO
%+ Università degli Studi di Milano-Bicocca = University of Milano-Bicocca (UNIMIB)
%+ IRCCS Istituto Ortopedico Galeazzi
%A Cabitza, Federico
%A Campagner, Andrea
%A Ciucci, Davide
%< peer-reviewed
%( Lecture Notes in Computer Science
%B 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE)
%C Canterbury, United Kingdom
%Y Andreas Holzinger
%Y Peter Kieseberg
%Y A Min Tjoa
%Y Edgar Weippl
%I Springer International Publishing
%3 Machine Learning and Knowledge Extraction
%V LNCS-11713
%P 27-47
%8 2019-08-26
%D 2019
%R 10.1007/978-3-030-29726-8_3
%K Ground truth
%K Explainable AI
%K Reliability
%K Usable AI
%Z Computer Science [cs]
%Z Conference papers
%X In this paper we focus on the importance of interpreting the quality of the input of predictive models (potentially a GI, i.e., a Garbage In) to make sense of the reliability of their output (potentially a GO, a Garbage Out) in support of human decision making, especially in critical domains, like medicine. To this aim, we propose a framework where we distinguish between the Gold Standard (or Ground Truth) and the set of annotations from which this is derived, and a set of quality dimensions that help to assess and interpret the AI advice: fineness, trueness, representativeness, conformity, dryness. We then discuss implications for obtaining more informative training sets and for the design of more usable Decision Support Systems.
%G English
%Z TC 5
%Z TC 12
%Z WG 8.4
%Z WG 8.9
%Z WG 12.9
%2 https://inria.hal.science/hal-02520038/document
%2 https://inria.hal.science/hal-02520038/file/485369_1_En_3_Chapter.pdf
%L hal-02520038
%U https://inria.hal.science/hal-02520038
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC
%~ IFIP-TC5
%~ IFIP-WG
%~ IFIP-TC12
%~ IFIP-WG8-4
%~ IFIP-WG8-9
%~ IFIP-CD-MAKE
%~ IFIP-WG12-9
%~ IFIP-LNCS-11713