%0 Conference Proceedings
%T BGElasor: Elastic-Scaling Framework for Distributed Streaming Processing with Deep Neural Network
%+ Institute of Information Engineering [Beijing] (IIE)
%+ School of Cyber Security
%A Mu, Weimin
%A Jin, Zongze
%A Wang, Weiping
%A Zhu, Weilin
%A Wang, Weiping
%Z Part 4: Big Data+Cloud
%< Peer-reviewed
%( Lecture Notes in Computer Science
%B 16th IFIP International Conference on Network and Parallel Computing (NPC)
%C Hohhot, China
%Y Xiaoxin Tang
%Y Quan Chen
%Y Pradip Bose
%Y Weiming Zheng
%Y Jean-Luc Gaudiot
%I Springer International Publishing
%3 Network and Parallel Computing
%V LNCS-11783
%P 120-131
%8 2019-08-23
%D 2019
%R 10.1007/978-3-030-30709-7_10
%K Data stream processing
%K Load prediction
%K Deep neural network
%K Gated recurrent units
%K Elasticity
%Z Computer Science [cs]
%Z Conference papers
%X In the face of constant fluctuations and sudden bursts in data streams, the elasticity of distributed stream processing systems has become increasingly important. A proactive policy offers a powerful means of realizing effective elastic scaling. However, existing methods fail to capture the latent features of the data stream, which leads to poor prediction; poor prediction in turn results in high adaptation cost and instability. To address these issues, we propose BGElasor, a proactive and low-cost elastic-scaling framework based on accurate prediction with deep neural networks. It captures potentially complicated patterns to improve prediction accuracy, reduce the cost of adaptation, and avoid adaptation bumps. Experimental results show that BGElasor not only improves prediction accuracy on three kinds of typical loads, but also guarantees the end-to-end latency required by QoS at low cost.
%G English
%Z TC 10
%Z WG 10.3
%2 https://inria.hal.science/hal-03770524/document
%2 https://inria.hal.science/hal-03770524/file/486810_1_En_10_Chapter.pdf
%L hal-03770524
%U https://inria.hal.science/hal-03770524
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-TC
%~ IFIP-TC10
%~ IFIP-NPC
%~ IFIP-WG10-3
%~ IFIP-LNCS-11783