Automatic Fairness Testing of Machine Learning Models - Testing Software and Systems
Conference Paper, Year: 2020

Automatic Fairness Testing of Machine Learning Models

Arnab Sharma
Heike Wehrheim

Abstract

In recent years, there has been an increased application of machine learning (ML) to decision-making systems. This has prompted an urgent need for validating requirements on ML models. Fairness is one such requirement to be ensured in numerous application domains. It requires the software "learned" by an ML algorithm to not be biased in the sense of discriminating against certain attributes (like gender or age), i.e., to not give different decisions upon flipping the values of these attributes.

In this work, we apply verification-based testing (VBT) to the fairness checking of ML models. Verification-based testing employs verification technology to generate test cases that potentially violate the property of interest. For fairness testing, we additionally provide a specification language for the formalization of different fairness requirements. From the ML model under test and the fairness specification, VBT automatically generates test inputs specific to the specified fairness requirement. An empirical evaluation on several benchmark ML models shows verification-based testing to perform better than existing fairness testing techniques with respect to effectiveness.
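The fairness requirement described in the abstract can be phrased as flip invariance: for any input, changing only the value of a protected attribute must not change the model's decision. As a minimal illustration (not the authors' VBT tool; the model, feature layout, and data below are hypothetical assumptions), the following Python sketch checks this requirement on randomly sampled inputs. VBT, by contrast, uses verification technology to deliberately search for violating inputs rather than sampling at random.

# Minimal sketch of a flip-invariance fairness check; the model,
# feature layout, and data are illustrative assumptions, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy tabular data: columns [age, income, gender]; index 2 is the protected attribute.
X = rng.random((200, 3))
X[:, 2] = rng.integers(0, 2, 200)   # binary protected attribute
y = rng.integers(0, 2, 200)         # arbitrary labels, just for the sketch

model = LogisticRegression().fit(X, y)

def decision_unchanged(x, protected_idx=2):
    """True if the model's decision stays the same after flipping the protected attribute."""
    x_flipped = x.copy()
    x_flipped[protected_idx] = 1 - x_flipped[protected_idx]
    return model.predict([x])[0] == model.predict([x_flipped])[0]

# Random-testing baseline: count sampled inputs on which flip invariance is violated.
violations = sum(not decision_unchanged(x) for x in X)
print(f"{violations} of {len(X)} sampled inputs violate the fairness requirement")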
Main file: 497758_1_En_16_Chapter.pdf (529.47 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03239825, version 1 (27-05-2021)

Licence

Attribution

Identifiers

Cite

Arnab Sharma, Heike Wehrheim. Automatic Fairness Testing of Machine Learning Models. 32nd IFIP International Conference on Testing Software and Systems (ICTSS), Dec 2020, Naples, Italy. pp.255-271, ⟨10.1007/978-3-030-64881-7_16⟩. ⟨hal-03239825⟩