Conference Paper, Year: 2018

DLIR: An Intermediate Representation for Deep Learning Processors

Abstract

Deep learning processors (DLPs), especially ASIC-based accelerators, have proved to be promising devices for accelerating the computation of deep learning algorithms. However, the cost of mastering these DLPs is high because each exposes a different programming interface. Meanwhile, many deep learning frameworks have been proposed to ease the burden of developing deep learning algorithms, but few of them support DLPs. Due to the special features of DLPs, it is hard to integrate a DLP into existing frameworks. In this paper, we propose an intermediate representation, called DLIR, to bridge the gap between deep learning frameworks and DLPs. DLIR is a tensor-based language with built-in tensor intrinsics that can be directly mapped to hardware primitives. We show that DLIR improves development efficiency and is able to generate efficient code.
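The abstract describes DLIR as a tensor-based IR whose built-in tensor intrinsics map directly onto hardware primitives. The paper itself defines the actual language; as an illustration only, the following minimal Python sketch shows what such an IR and a lowering pass to hardware primitives might look like. All names here (Tensor, Intrinsic, lower_to_primitives, the HW_* primitive names) are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of a tensor-based IR with built-in tensor intrinsics
# that are lowered to hardware primitives. Not the DLIR API from the paper.
from dataclasses import dataclass, field


@dataclass
class Tensor:
    """A named multi-dimensional value in the IR."""
    name: str
    shape: tuple


@dataclass
class Intrinsic:
    """A built-in tensor operation that the accelerator can execute natively."""
    op: str                      # e.g. "conv2d", "matmul", "relu"
    inputs: list
    output: Tensor
    attrs: dict = field(default_factory=dict)


# Toy mapping from IR intrinsics to invented hardware primitive names.
HW_PRIMITIVES = {
    "conv2d": "HW_CONV",
    "matmul": "HW_MM",
    "relu": "HW_ACT",
}


def lower_to_primitives(program):
    """Lower a sequence of tensor intrinsics to hardware primitive calls."""
    calls = []
    for node in program:
        prim = HW_PRIMITIVES.get(node.op)
        if prim is None:
            raise ValueError(f"no hardware primitive for op '{node.op}'")
        args = ", ".join(t.name for t in node.inputs)
        calls.append(f"{prim}({args}) -> {node.output.name}")
    return calls


if __name__ == "__main__":
    x = Tensor("x", (1, 64, 56, 56))
    w = Tensor("w", (64, 64, 3, 3))
    y = Tensor("y", (1, 64, 56, 56))
    prog = [Intrinsic("conv2d", [x, w], y, {"stride": 1, "pad": 1})]
    for call in lower_to_primitives(prog):
        print(call)
```

The key idea this sketch illustrates is that each IR node corresponds one-to-one with an operation the accelerator supports natively, so code generation reduces to a direct mapping rather than general-purpose compilation.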

Dates and versions

hal-02279553, version 1 (05-09-2019)

Licence

Attribution


Cite

Huiying Lan, Zidong Du. DLIR: An Intermediate Representation for Deep Learning Processors. 15th IFIP International Conference on Network and Parallel Computing (NPC), Nov 2018, Muroran, Japan. pp.169-173, ⟨10.1007/978-3-030-05677-3_19⟩. ⟨hal-02279553⟩