%0 Conference Proceedings
%T GPU-Accelerated Clique Tree Propagation for Pouch Latent Tree Models
%+ The Education University of Hong Kong (EdUHK)
%A Poon, Leonard
%< avec comité de lecture
%( Lecture Notes in Computer Science
%B 15th IFIP International Conference on Network and Parallel Computing (NPC)
%C Muroran, Japan
%Y Feng Zhang
%Y Jidong Zhai
%Y Marc Snir
%Y Hai Jin
%Y Hironori Kasahara
%Y Mateo Valero
%I Springer International Publishing
%3 Network and Parallel Computing
%V LNCS-11276
%P 90-102
%8 2018-11-29
%D 2018
%R 10.1007/978-3-030-05677-3_8
%K GPU acceleration
%K Clique tree propagation
%K Pouch latent tree models
%K Parallel computing
%K Probabilistic graphical models
%Z Computer Science [cs]
%Z Conference papers
%X Pouch latent tree models (PLTMs) are a class of probabilistic graphical models that generalizes Gaussian mixture models (GMMs). PLTMs produce multiple clusterings simultaneously and have been shown in previous studies to outperform GMMs for cluster analysis. However, owing to the considerably larger number of possible structures, training PLTMs is more time-demanding than training GMMs, which has limited the application of PLTMs to small data sets. In this paper, we consider using GPUs to exploit two parallelism opportunities, namely data parallelism and element-wise parallelism, for PLTMs. We focus on clique tree propagation, since this exact inference procedure is computationally demanding and is called repeatedly for each data sample and each model structure during PLTM training. Our experiments with real-world data sets show that the GPU-accelerated implementation achieves up to a 52x speedup over the sequential implementation running on CPUs. The results indicate promising potential for further accelerating the full training of PLTMs with GPUs.
%G English
%Z TC 10
%Z WG 10.3
%2 https://inria.hal.science/hal-02279542/document
%2 https://inria.hal.science/hal-02279542/file/477597_1_En_8_Chapter.pdf
%L hal-02279542
%U https://inria.hal.science/hal-02279542
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC
%~ IFIP-TC10
%~ IFIP-NPC
%~ IFIP-WG10-3
%~ IFIP-LNCS-11276