%0 Conference Proceedings
%T Evidence Humans Provide When Explaining Data-Labeling Decisions
%+ University of Chicago
%+ Brown University
%A Newman, Judah
%A Wang, Bowen
%A Zhao, Valerie
%A Zeng, Amy
%A Littman, Michael L.
%A Ur, Blase
%Z Part 5: Methods for User Studies
%< peer reviewed
%( Lecture Notes in Computer Science
%B 17th IFIP Conference on Human-Computer Interaction (INTERACT)
%C Paphos, Cyprus
%Y David Lamas
%Y Fernando Loizides
%Y Lennart Nacke
%Y Helen Petrie
%Y Marco Winckler
%Y Panayiotis Zaphiris
%I Springer International Publishing
%3 Human-Computer Interaction – INTERACT 2019
%V LNCS-11748
%N Part III
%P 390-409
%8 2019-09-02
%D 2019
%R 10.1007/978-3-030-29387-1_22
%K Machine teaching
%K ML
%K Explanations
%K Data labeling
%Z Computer Science [cs]
%Z Conference papers
%X Because machine learning would benefit from reduced data requirements, some prior work has proposed using humans not just to label data, but also to explain those labels. To characterize the evidence humans might want to provide, we conducted a user study and a data experiment. In the user study, 75 participants provided classification labels for 20 photos, justifying those labels with free-text explanations. Explanations frequently referenced concepts (objects and attributes) in the image, yet 26% of explanations invoked concepts not in the image. Boolean logic was common in implicit form, but was rarely explicit. In a follow-up experiment on the Visual Genome dataset, we found that some concepts could be partially defined through their relationship to frequently co-occurring concepts, rather than only through labeling.
%G English
%Z TC 13
%2 https://inria.hal.science/hal-02553853/document
%2 https://inria.hal.science/hal-02553853/file/488593_1_En_22_Chapter.pdf
%L hal-02553853
%U https://inria.hal.science/hal-02553853
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC13
%~ IFIP-INTERACT
%~ IFIP-LNCS-11748