%0 Conference Proceedings %T Reproducibility of Experiments in Recommender Systems Evaluation %+ University of Brighton %+ Gluru %+ University of the West of England [Bristol] (UWE Bristol) %+ University of West London %A Polatidis, Nikolaos %A Kapetanakis, Stelios %A Pimenidis, Elias %A Kosmidis, Konstantinos %Z Part 8: Recommender Systems %< with peer review %( IFIP Advances in Information and Communication Technology %B 14th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI) %C Rhodes, Greece %Y Lazaros Iliadis %Y Ilias Maglogiannis %Y Vassilis Plagianakos %I Springer International Publishing %3 Artificial Intelligence Applications and Innovations %V AICT-519 %P 401-409 %8 2018-05-25 %D 2018 %R 10.1007/978-3-319-92007-8_34 %K Recommender systems %K Evaluation %K Reproducibility %K Replication %Z Computer Science [cs] %Z Conference papers %X Recommender systems evaluation is usually based on predictive accuracy metrics, with better scores indicating recommendations of higher quality. However, comparing results is becoming increasingly difficult, since different recommendation frameworks use different settings in the design and implementation of experiments. Furthermore, there may be minor differences in algorithm implementation among the frameworks. In this paper, we compare well-known recommendation algorithms using the same dataset, metrics and overall settings; the results reveal differences across frameworks even with identical settings. Hence, we propose standards that should be followed as guidelines to ensure the replication of experiments and the reproducibility of results.
%G English %Z TC 12 %Z WG 12.5 %2 https://inria.hal.science/hal-01821035/document %2 https://inria.hal.science/hal-01821035/file/467708_1_En_34_Chapter.pdf %L hal-01821035 %U https://inria.hal.science/hal-01821035 %~ IFIP %~ IFIP-AICT %~ IFIP-TC %~ IFIP-WG %~ IFIP-TC12 %~ IFIP-AIAI %~ IFIP-WG12-5 %~ IFIP-AICT-519