%0 Conference Proceedings %T A Projected Stochastic Gradient Algorithm for Estimating Shapley Value Applied in Attribute Importance %+ Thales SIX GTS France %+ Saclay Industrial Lab for Artificial Intelligence Research (SINCLAIR AI Lab) %A Grah, Simon %A Thouvenot, Vincent %< peer-reviewed %( Lecture Notes in Computer Science %B 4th International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE) %C Dublin, Ireland %Y Andreas Holzinger %Y Peter Kieseberg %Y A Min Tjoa %Y Edgar Weippl %I Springer International Publishing %3 Machine Learning and Knowledge Extraction %V LNCS-12279 %P 97-115 %8 2020-08-25 %D 2020 %R 10.1007/978-3-030-57321-8_6 %K Attribute importance %K Interpretability %K Shapley value %K Monte Carlo %K Projected stochastic gradient descent %Z Computer Science [cs] %Z Humanities and Social Sciences/Library and information sciences %Z Conference papers %X Machine Learning enjoys increasing success in many applications: medical, marketing, defence, cyber security, transportation. It is becoming a key tool in critical systems. However, models are often very complex and highly non-linear. This is problematic, especially for critical systems, because end-users need to fully understand the decisions of an algorithm (e.g. why an alert has been triggered or why a person has a high probability of cancer recurrence). One solution is to offer an interpretation for each individual prediction based on attribute relevance. Shapley Values, which originate in cooperative game theory, make it possible to distribute contributions fairly across attributes, in order to explain the difference between the predicted value for an observation and a base value (e.g. the average prediction of a reference population). While these values have many advantages, including their theoretical guarantees, they are hard to compute exactly: the complexity grows exponentially with the dimension (the number of attributes).
In this article, we propose two novel methods to approximate these Shapley Values. The first is an optimization of an existing Monte Carlo scheme that reduces the number of prediction-function calls. The second is based on a projected stochastic gradient algorithm. For the second approach, we prove probability bounds and convergence rates for the approximation error, depending on the type of learning rate used. Finally, we carry out experiments on simulated datasets for a classification task and a regression task. We show empirically that both approaches outperform the classical Monte Carlo estimator in terms of convergence rate and number of prediction-function calls, which is the major bottleneck in Shapley Value estimation for our application. %G English %Z TC 5 %Z TC 8 %Z TC 12 %Z WG 8.4 %Z WG 8.9 %Z WG 12.9 %2 https://inria.hal.science/hal-03414720/document %2 https://inria.hal.science/hal-03414720/file/497121_1_En_6_Chapter.pdf %L hal-03414720 %U https://inria.hal.science/hal-03414720 %~ SHS %~ IFIP-LNCS %~ IFIP %~ IFIP-TC %~ IFIP-TC5 %~ IFIP-WG %~ IFIP-TC12 %~ IFIP-TC8 %~ IFIP-WG8-4 %~ IFIP-WG8-9 %~ EDF %~ IFIP-CD-MAKE %~ IFIP-WG12-9 %~ IFIP-LNCS-12279
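For context, the "classical Monte Carlo estimator" that the abstract refers to is the permutation-sampling scheme for Shapley values. The sketch below is illustrative only, not the authors' implementation; the function name `monte_carlo_shapley` and its parameters are hypothetical. It also makes visible the bottleneck the paper targets: each sampled permutation costs d+1 calls to the prediction function.

```python
import numpy as np

def monte_carlo_shapley(f, x, x_ref, n_iter=200, rng=None):
    """Permutation-sampling Monte Carlo estimate of Shapley values
    for one observation (illustrative sketch, hypothetical API).

    f     : prediction function, maps a 2-D array (n, d) to 1-D scores (n,)
    x     : observation to explain, shape (d,)
    x_ref : reference/baseline observation, shape (d,)
    """
    rng = np.random.default_rng(rng)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_iter):
        perm = rng.permutation(d)          # random feature ordering
        z = x_ref.copy()                   # start from the reference point
        prev = f(z[None, :])[0]
        for j in perm:
            z[j] = x[j]                    # switch feature j from reference to x
            cur = f(z[None, :])[0]
            phi[j] += cur - prev           # marginal contribution of feature j
            prev = cur
        # each permutation costs d + 1 prediction calls -- the bottleneck
        # that the paper's two methods aim to reduce
    return phi / n_iter
```

By construction the estimated values sum to f(x) - f(x_ref), the gap between the prediction for the observation and the base value, which is the "efficiency" property the abstract alludes to.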