%0 Conference Proceedings
%T Deep Photo Rally: Let's Gather Conversational Pictures
%+ Université de Tsukuba = University of Tsukuba
%+ Rakuten
%A Ookawara, Kazuki
%A Kawata, Hayaki
%A Muta, Masahumi
%A Masuko, Soh
%A Utsuro, Takehito
%A Hoshino, Jun'ichi
%Z Part 8: Poster and Interactive Session
%< peer-reviewed
%( Lecture Notes in Computer Science
%B 16th International Conference on Entertainment Computing (ICEC)
%C Tsukuba City, Japan
%Y Nagisa Munekata
%Y Itsuki Kunita
%Y Junichi Hoshino
%I Springer International Publishing
%3 Entertainment Computing – ICEC 2017
%V LNCS-10507
%P 387-391
%8 2017-09-18
%D 2017
%R 10.1007/978-3-319-66715-7_46
%K Augmented reality
%K Anthropomorphic
%K Deep Neural Networks
%Z Computer Science [cs]
%Z Conference papers
%X In this paper, we propose an anthropomorphic approach that generates speech sentences for a specific object according to its surrounding circumstances using recent Deep Neural Network technology. In the proposed approach, the user can have pseudo-communication with the object by photographing it with a mobile terminal. We introduce several example applications of the proposed approach to entertainment products and show that it is an anthropomorphic approach capable of interacting with the environment.
%G English
%Z TC 14
%2 https://inria.hal.science/hal-01771242/document
%2 https://inria.hal.science/hal-01771242/file/978-3-319-66715-7_46_Chapter.pdf
%L hal-01771242
%U https://inria.hal.science/hal-01771242
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-ICEC
%~ IFIP-TC14
%~ IFIP-LNCS-10507