Conference Papers, Year: 2019

New Frontiers in Explainable AI: Understanding the GI to Interpret the GO

Abstract

In this paper we focus on the importance of interpreting the quality of the input of predictive models (potentially a GI, i.e., Garbage In) to make sense of the reliability of their output (potentially a GO, Garbage Out) in support of human decision making, especially in critical domains like medicine. To this end, we propose a framework in which we distinguish between the Gold Standard (or Ground Truth) and the set of annotations from which it is derived, and identify a set of quality dimensions that help to assess and interpret the AI advice: fineness, trueness, representativeness, conformity, and dryness. We then discuss the implications for obtaining more informative training sets and for the design of more usable Decision Support Systems.
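To make the distinction between a Gold Standard and the annotations it is derived from concrete, here is a minimal sketch, not taken from the paper: the majority-vote aggregation rule and the `majority_gold_standard` and `raw_agreement` helpers are illustrative assumptions, and raw pairwise agreement is only a crude proxy for the quality dimensions the paper actually proposes.

```python
from collections import Counter

def majority_gold_standard(annotations):
    """Derive one gold-standard label per case by majority vote.

    annotations: list of per-case label lists, one label per annotator,
    e.g. [["benign", "benign", "malignant"], ...].
    Ties yield None, flagging cases that need expert adjudication.
    """
    gold = []
    for labels in annotations:
        counts = Counter(labels).most_common()
        if len(counts) > 1 and counts[0][1] == counts[1][1]:
            gold.append(None)  # tie: no reliable majority label
        else:
            gold.append(counts[0][0])
    return gold

def raw_agreement(annotations):
    """Fraction of annotator pairs that agree, pooled over all cases:
    a rough indicator of how trustworthy the derived labels are."""
    total, agree = 0, 0
    for labels in annotations:
        n = len(labels)
        for i in range(n):
            for j in range(i + 1, n):
                total += 1
                agree += labels[i] == labels[j]
    return agree / total if total else 0.0

if __name__ == "__main__":
    ann = [["benign", "benign", "malignant"],
           ["malignant", "malignant", "malignant"],
           ["benign", "malignant", "benign"]]
    print(majority_gold_standard(ann))            # ['benign', 'malignant', 'benign']
    print(f"pairwise agreement: {raw_agreement(ann):.2f}")  # 0.56
```

The point of the sketch is that the same derived gold standard can rest on very different underlying annotation quality: a low agreement score signals that the "ground truth" fed to a model may itself be a GI, which is exactly what the proposed quality dimensions are meant to surface.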

Dates and versions

hal-02520038, version 1 (26-03-2020)

Licence

Attribution

Identifiers

HAL Id: hal-02520038
DOI: 10.1007/978-3-030-29726-8_3

Cite

Federico Cabitza, Andrea Campagner, Davide Ciucci. New Frontiers in Explainable AI: Understanding the GI to Interpret the GO. 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2019, Canterbury, United Kingdom. pp. 27-47, ⟨10.1007/978-3-030-29726-8_3⟩. ⟨hal-02520038⟩