Evaluating Explanations by Cognitive Value
Conference paper, Machine Learning and Knowledge Extraction, 2018

Abstract

The transparent AI initiative has ignited several academic and industrial endeavors and produced some impressive technologies and results thus far. Many state-of-the-art methods provide explanations that mostly target the needs of AI engineers. However, there is very little work on providing explanations that support the needs of business owners, software developers, and consumers who all play significant roles in the service development and use cycle. By considering the overall context in which an explanation is presented, including the role played by the human-in-the-loop, we can hope to craft effective explanations. In this paper, we introduce the notion of the “cognitive value” of an explanation and describe its role in providing effective explanations within a given context. Specifically, we consider the scenario of a business owner seeking to improve sales of their product, and compare explanations provided by some existing interpretable machine learning algorithms (random forests, scalable Bayesian Rules, causal models) in terms of the cognitive value they offer to the business owner. We hope that our work will foster future research in the field of transparent AI to incorporate the cognitive value of explanations in crafting and evaluating explanations.

Dates and versions

hal-02060044, version 1 (07-03-2019)

Cite

Ajay Chander, Ramya Srinivasan. Evaluating Explanations by Cognitive Value. 2nd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2018, Hamburg, Germany. pp.314-328, ⟨10.1007/978-3-319-99740-7_23⟩. ⟨hal-02060044⟩