%0 Conference Proceedings
%T Evaluating Commonsense Knowledge with a Computer Game
%+ University of Illinois [Chicago] (UIC)
%A Mancilla-Caceres, Juan F.
%A Amir, Eyal
%Z Part 2: Long and Short Papers
%< Peer-reviewed
%( Lecture Notes in Computer Science
%B 13th International Conference on Human-Computer Interaction (INTERACT)
%C Lisbon, Portugal
%Y Pedro Campos
%Y Nicholas Graham
%Y Joaquim Jorge
%Y Nuno Nunes
%Y Philippe Palanque
%Y Marco Winckler
%I Springer
%3 Human-Computer Interaction – INTERACT 2011
%V LNCS-6946
%N Part I
%P 348-355
%8 2011-09-05
%D 2011
%R 10.1007/978-3-642-23774-4_28
%Z Computer Science [cs]; Conference papers
%X Collecting commonsense knowledge from freely available text can reduce the cost and effort of creating large knowledge bases. For the acquired knowledge to be useful, we must ensure that it is correct and that it carries information about its relevance and about the context in which it can be considered commonsense. In this paper, we design and evaluate an online game that uses input from players to classify text extracted from the web as commonsense knowledge, domain-specific knowledge, or nonsense. A continuous scale from nonsense to commonsense is defined and later used during evaluation of the data to identify which knowledge is reliable and which needs further qualification. Compared to similar knowledge acquisition systems, our game performs better with respect to the coverage, redundancy, and reliability of the acquired commonsense knowledge.
%G English
%Z TC 13
%2 https://inria.hal.science/hal-01590550/document
%2 https://inria.hal.science/hal-01590550/file/978-3-642-23774-4_28_Chapter.pdf
%L hal-01590550
%U https://inria.hal.science/hal-01590550
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC
%~ IFIP-TC13
%~ IFIP-INTERACT
%~ IFIP-LNCS-6946