%0 Conference Proceedings
%T Using PCI Pass-Through for GPU Virtualization with CUDA
%+ Tunghai University [Taichung]
%A Yang, Chao-Tung
%A Wang, Hsien-Yi
%A Liu, Yu-Tso
%Z Part 11: Cloud & Grid
%< with peer review
%( Lecture Notes in Computer Science
%B 9th International Conference on Network and Parallel Computing (NPC)
%C Gwangju, South Korea
%Y James J. Park
%Y Albert Zomaya
%Y Sang-Soo Yeo
%Y Sartaj Sahni
%I Springer
%3 Network and Parallel Computing
%V LNCS-7513
%P 445-452
%8 2012-09-06
%D 2012
%R 10.1007/978-3-642-35606-3_53
%K CUDA
%K GPU virtualization
%K Cloud computing
%K IaaS
%K PCI passthrough
%Z Computer Science [cs]
%Z Conference papers
%X NVIDIA's CUDA is a general-purpose, scalable parallel programming model for writing highly parallel applications. It provides several key abstractions: a hierarchy of thread blocks, shared memory, and barrier synchronization. This model has proven quite successful at programming multithreaded manycore GPUs and scales transparently to hundreds of cores; scientists throughout industry and academia already use CUDA to achieve dramatic speedups on production and research codes. GPU-based clusters are likely to play an important role in future cloud data centers, because some compute-intensive applications require both CPUs and GPUs. The goal of this paper is to develop a VM execution mechanism that runs such applications inside VMs and allows them to leverage GPUs effectively, so that different VMs can share GPUs without interfering with one another.
%G English
%Z TC 10
%Z WG 10.3
%2 https://inria.hal.science/hal-01551356/document
%2 https://inria.hal.science/hal-01551356/file/978-3-642-35606-3_53_Chapter.pdf
%L hal-01551356
%U https://inria.hal.science/hal-01551356
%~ IFIP-LNCS
%~ IFIP
%~ IFIP-AICT
%~ IFIP-TC
%~ IFIP-TC10
%~ IFIP-NPC
%~ IFIP-WG10-3
%~ IFIP-LNCS-7513