Streaming Tiles: Flexible Implementation of Convolution Neural Networks Inference on Manycore Architectures
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
Amrita University, Bengaluru, India.
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES). ORCID iD: 0000-0002-4932-4036
2018 (English). In: 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Los Alamitos: IEEE Computer Society, 2018, p. 867-876. Conference paper, Published paper (Refereed)
Abstract [en]

Convolution neural networks (CNNs) are extensively used for deep learning applications such as image recognition and computer vision. The convolution module of these networks is highly compute-intensive, and an efficient implementation of it is what makes it feasible to run the inference part of the network on embedded platforms. Low-precision parameters require less memory, less computation time, and less power while still achieving high classification accuracy. Furthermore, streaming the data over parallelized processing units saves a considerable amount of memory, which is a key concern in memory-constrained embedded platforms. In this paper, we explore the design space for streamed CNNs on the Epiphany manycore architecture using varying precisions for the weights (ranging from binary to 32-bit). Both AlexNet and GoogleNet are explored for two different memory sizes of the Epiphany cores. We achieve competitive performance for both AlexNet and GoogleNet with respect to emerging manycores. Furthermore, the effects of different design choices in terms of precision, memory size, and number of cores are evaluated by applying the proposed method.
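The sketch below is a hypothetical, single-core illustration of the two ingredients the abstract mentions: streaming input rows through a small on-core line buffer (a "streaming tile") and using binary weights so that each multiply-accumulate degenerates into an add or a subtract. The kernel size, tile dimensions, bit-packing convention, and synthetic input are assumptions made for the example; the code is plain C, does not use the Epiphany SDK, and does not reproduce the paper's actual implementation.

/* Minimal, hypothetical single-core sketch of a streamed-tile convolution
 * with binary weights.  Tile size, kernel size, the bit-packing convention
 * and the synthetic input are assumptions for illustration only. */
#include <stdio.h>
#include <stdint.h>

#define K      3      /* kernel size (assumed 3x3)                        */
#define TILE_W 8      /* width of the tile owned by one core (assumed)    */
#define TILE_H 8      /* number of input rows streamed through the core   */

/* One 3x3 binary kernel packed into 9 bits: bit set = weight +1,
 * bit clear = weight -1 (a common binary-weight convention). */
static const uint16_t kernel_bits = 0x1EB;

/* Convolve the K rows currently held in the line buffer; with binary
 * weights every multiply reduces to an add or a subtract. */
static void conv_row(int16_t lines[K][TILE_W], int32_t out[TILE_W])
{
    for (int x = 0; x + K <= TILE_W; x++) {
        int32_t acc = 0;
        for (int ky = 0; ky < K; ky++)
            for (int kx = 0; kx < K; kx++) {
                int bit = (kernel_bits >> (ky * K + kx)) & 1;
                int16_t v = lines[ky][x + kx];
                acc += bit ? v : -v;          /* +1 / -1 weight */
            }
        out[x] = acc;
    }
}

int main(void)
{
    int16_t lines[K][TILE_W] = {{0}};  /* only K rows live on-core at a time */
    int32_t out[TILE_W];

    /* Stream TILE_H input rows through the core one at a time.  On a real
     * manycore the next row would arrive over the on-chip network or by DMA;
     * here it is synthesised so the example is self-contained. */
    for (int y = 0; y < TILE_H; y++) {
        for (int ky = 0; ky < K - 1; ky++)       /* shift the line buffer up */
            for (int x = 0; x < TILE_W; x++)
                lines[ky][x] = lines[ky + 1][x];
        for (int x = 0; x < TILE_W; x++)         /* "receive" the new row    */
            lines[K - 1][x] = (int16_t)(y * TILE_W + x);

        if (y >= K - 1) {                        /* enough rows buffered     */
            conv_row(lines, out);
            printf("output row %d:", y - (K - 1));
            for (int x = 0; x + K <= TILE_W; x++)
                printf(" %d", out[x]);
            printf("\n");
        }
    }
    return 0;
}

In the paper's setting, one such tile would presumably run on each Epiphany core, with the precision of the weights (binary up to 32-bit) and the tile size traded off against the core's local memory; the sketch fixes those choices only to keep the example small.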

Place, publisher, year, edition, pages
Los Alamitos: IEEE Computer Society, 2018. p. 867-876
Keywords [en]
manycores, CNN, stream processing, embedded systems
National Category
Embedded Systems
Identifiers
URN: urn:nbn:se:hh:diva-36887
DOI: 10.1109/IPDPSW.2018.00138
ISBN: 978-1-5386-5555-9 (electronic)
ISBN: 978-1-5386-5556-6 (print)
OAI: oai:DiVA.org:hh-36887
DiVA, id: diva2:1212121
Conference
The 7th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics, Vancouver, British Columbia, Canada, May 21, 2018
Projects
NGES (Towards Next Generation Embedded Systems: Utilizing Parallelism and Reconfigurability)
Funder
VINNOVA
Note

Funding: VINNOVA Strategic Innovation grant and the Department of Science and Technology, Government of India. ©2018 IEEE

Available from: 2018-06-01. Created: 2018-06-01. Last updated: 2018-08-20. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Rezk, Nesma; Ul-Abdin, Zain
