memCUDA: Map Device Memory to Host Memory on GPGPU Platform - Network and Parallel Computing
Conference Paper, Year: 2010

memCUDA: Map Device Memory to Host Memory on GPGPU Platform

Abstract

The Compute Unified Device Architecture (CUDA) programming environment from NVIDIA is a milestone towards making the programming of many-core GPUs more flexible for programmers. However, CUDA still poses several challenges; one is the need to manage GPU device memory, and the data transfers between host memory and GPU device memory, explicitly. In this study, source-to-source compilation and runtime library techniques are used to implement an experimental programming system based on CUDA, called memCUDA, which automatically maps GPU device memory to host memory. With a small set of pragma directives, the programmer can use host memory directly in CUDA kernel functions, while the tedious and error-prone data transfers and device memory management are shielded from the programmer. Performance is further improved with several near-optimal techniques. Experimental results show that memCUDA programs achieve performance comparable to well-optimized CUDA programs while using more compact source code.
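For context, the sketch below shows the explicit device-memory allocation and host-device transfers that a plain CUDA program must spell out, i.e. the boilerplate the abstract says memCUDA shields from the programmer. It uses only the standard CUDA runtime API; the abstract does not specify the memCUDA pragma syntax, so no directive is shown and the kernel and variable names here are purely illustrative.

    #include <cuda_runtime.h>
    #include <stdio.h>

    // Illustrative kernel: scales every element of an array in place.
    __global__ void scale(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h_data = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

        // Explicit device-memory management and transfers: the kind of
        // boilerplate that memCUDA generates automatically when host
        // memory is referenced directly in the kernel.
        float *d_data;
        cudaMalloc((void **)&d_data, bytes);
        cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

        scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);

        cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
        cudaFree(d_data);

        printf("h_data[0] = %f\n", h_data[0]);
        free(h_data);
        return 0;
    }

According to the abstract, a memCUDA program would instead reference the host buffer directly in the kernel and annotate it with a pragma directive, leaving the source-to-source compiler and runtime library to emit the allocation, transfer, and deallocation calls shown above.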
Main file: memCUDA_npc_final.pdf (244.83 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01058920, version 1 (28-08-2014)

Licence

Attribution

Identifiers

Cite

Hai Jin, Bo Li, Qin Zhang, Wenbing Ao. memCUDA: Map Device Memory to Host Memory on GPGPU Platform. IFIP International Conference on Network and Parallel Computing (NPC), Sep 2010, Zhengzhou, China. pp.299-313, ⟨10.1007/978-3-642-15672-4_26⟩. ⟨hal-01058920⟩