Conference paper · Year: 2019

Machine Learning Explainability Through Comprehensible Decision Trees

Abstract

Decisions made by machine learning algorithms play an ever-increasing role in our lives. In reaction to this phenomenon, the European General Data Protection Regulation establishes that citizens have the right to receive an explanation of automated decisions that affect them. For explainability to be scalable, it should be possible to derive explanations in an automated way. A common approach is to use a simpler, more intuitive decision algorithm to build a surrogate model of the black-box model (for example, a deep learning model) used to make a decision. Yet there is a risk that the surrogate model is too large to be truly comprehensible to humans. We focus on explaining black-box models by using decision trees of limited size as surrogate models. Specifically, we propose an approach based on microaggregation to achieve a trade-off between the comprehensibility and representativeness of the surrogate model on the one hand and the privacy of the subjects used for training the black-box model on the other.
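As a concrete illustration of the approach described above, here is a minimal sketch, not the authors' exact algorithm: it assumes scikit-learn as the toolkit, approximates microaggregation with a simplified MDAV-style greedy grouping (the `microaggregate` helper and all parameter values are illustrative), labels the resulting centroids with the black-box model, and caps the surrogate tree's depth so it stays small enough to read.

```python
# Minimal sketch of the surrogate-tree idea from the abstract.
# Assumptions (not prescribed by the paper): scikit-learn models,
# a simplified MDAV-style microaggregation, and max_depth as the
# mechanism that keeps the surrogate tree comprehensible.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in data and black-box model; any opaque classifier would do.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def microaggregate(X, k=5):
    """Simplified MDAV-style microaggregation: repeatedly take the record
    farthest from the mean of the unassigned records, group it with its
    k-1 nearest unassigned neighbours, and keep the group centroid.
    Each centroid summarises at least k subjects, which is what gives
    the privacy protection. (Leftover records < k are dropped here; a
    full implementation would merge them into the last group.)"""
    remaining = list(range(len(X)))
    centroids = []
    while len(remaining) >= k:
        sub = X[remaining]
        far = remaining[int(np.argmax(np.linalg.norm(sub - sub.mean(axis=0), axis=1)))]
        dists = np.linalg.norm(X[remaining] - X[far], axis=1)
        group = [remaining[i] for i in np.argsort(dists)[:k]]
        centroids.append(X[group].mean(axis=0))
        remaining = [i for i in remaining if i not in group]
    return np.array(centroids)

# Label the centroids with the black box and fit a small surrogate tree.
centroids = microaggregate(X, k=5)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(centroids, black_box.predict(centroids))
print(export_text(surrogate))  # human-readable decision rules
```

In this sketch, a larger k means each centroid summarises more subjects (stronger privacy) but gives the surrogate a coarser training set, while max_depth directly bounds how large, and hence how comprehensible, the resulting tree can be.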

Dates and versions

hal-02520062, version 1 (26-03-2020)

Licence

Attribution

Identifiers

HAL Id: hal-02520062
DOI: 10.1007/978-3-030-29726-8_2

Cite

Alberto Blanco-Justicia, Josep Domingo-Ferrer. Machine Learning Explainability Through Comprehensible Decision Trees. 3rd International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2019, Canterbury, United Kingdom. pp.15-26, ⟨10.1007/978-3-030-29726-8_2⟩. ⟨hal-02520062⟩