%0 Conference Proceedings
%T Speedup Critical Stage of Machine Learning with Batch Scheduling in GPU
%+ Beihang University (BUAA)
%A Gao, Yuan
%A Wang, Rui
%A An, Ning
%A Wei, Yanjiang
%A Qian, Depei
%Z Part 6: Poster Sessions
%< With peer review
%( Lecture Notes in Computer Science
%B 11th IFIP International Conference on Network and Parallel Computing (NPC)
%C Ilan, Taiwan
%Y Ching-Hsien Hsu
%Y Xuanhua Shi
%Y Valentina Salapura
%I Springer
%3 Network and Parallel Computing
%V LNCS-8707
%P 522-525
%8 2014-09-18
%D 2014
%R 10.1007/978-3-662-44917-2_43
%K convolution neural network
%K framework
%K GPU
%K batch process
%Z Computer Science [cs]
%Z Conference papers
%X As a superior data-analysis method, machine learning has suffered for many years from the bottleneck of limited computing capability. With the advent of massively parallel computing hardware, the modern GPU is becoming a promising platform for machine-learning tasks. In this paper, we propose an efficient GPU execution framework to speed up the forward-propagation stage of convolutional neural networks. By extending the convolution-unrolling method to fit this batch mode, we achieve a significant increase in throughput with very little overhead.
%G English
%Z TC 10
%Z WG 10.3
%2 https://inria.hal.science/hal-01403124/document
%2 https://inria.hal.science/hal-01403124/file/978-3-662-44917-2_43_Chapter.pdf
%L hal-01403124
%U https://inria.hal.science/hal-01403124
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-AICT
%~ IFIP-TC
%~ IFIP-LNCS-8707
%~ IFIP-TC10
%~ IFIP-NPC
%~ IFIP-WG10-3