%0 Conference Proceedings
%T Trust Indicators and Explainable AI: A Study on User Perceptions
%+ Ecole Polytechnique Fédérale de Lausanne (EPFL)
%+ IDIAP Research Institute
%+ University of Fribourg (UNIFR)
%+ Bern University of Applied Sciences (BFH)
%A Ribes, Delphine
%A Henchoz, Nicolas
%A Portier, Hélène
%A Defayes, Lara
%A Phan, Thanh-Trung
%A Gatica-Perez, Daniel
%A Sonderegger, Andreas
%Z Part 9: Explainable AI
%< peer-reviewed
%@ 978-3-030-85615-1
%( Lecture Notes in Computer Science
%B 18th IFIP Conference on Human-Computer Interaction (INTERACT)
%C Bari, Italy
%Y Carmelo Ardito
%Y Rosa Lanzilotti
%Y Alessio Malizia
%Y Helen Petrie
%Y Antonio Piccinno
%Y Giuseppe Desolda
%Y Kori Inkpen
%I Springer International Publishing
%3 Human-Computer Interaction – INTERACT 2021
%V LNCS-12933
%N Part II
%P 662-671
%8 2021-08-30
%D 2021
%R 10.1007/978-3-030-85616-8_39
%K Trust indicators
%K Fake news
%K Transparency
%K Design
%K Explainable AI
%K XAI
%K Understandable AI
%Z Computer Science [cs]
%Z Conference papers
%X Nowadays, search engines, social media, and news aggregators are the preferred services for accessing news. Aggregation relies mostly on artificial intelligence technologies, which raises a new challenge: trust has been ranked as the most important factor for the media business. This paper reports the findings of a study evaluating how manipulations of interface design, and of the information provided in the context of eXplainable Artificial Intelligence (XAI), influence user perception of news content aggregators. In an experimental online study, various layouts and scenarios were developed, implemented, and tested with 266 participants. Measures of trust, understanding, and preference were recorded. Results showed no influence of these factors on trust. However, the data indicate that layout, for example the implicit integration of the media source through layout structure, has a significant effect on the perceived importance of citing the source of a media item. Moreover, the amount of information presented to explain the AI had a negative influence on user understanding. This highlights the importance, and the difficulty, of making XAI understandable for its users.
%G English
%Z TC 13
%2 https://inria.hal.science/hal-04196849/document
%2 https://inria.hal.science/hal-04196849/file/520516_1_En_39_Chapter.pdf
%L hal-04196849
%U https://inria.hal.science/hal-04196849
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC13
%~ IFIP-INTERACT
%~ IFIP-LNCS-12933