Exploring Efficient Implementations of Deep Learning Applications on Embedded Platforms
Rezk, Nesma. Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES). ORCID iD: 0000-0002-4674-3809
2020 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

The promising results of deep learning (deep neural network) models in applications such as speech recognition and computer vision have created a need to realize them on embedded platforms. Augmenting embedded platforms with deep learning (DL) enables intelligent tasks in smart homes, mobile phones, and healthcare applications. However, deep learning models rely on computationally intensive operations over high-precision values, whereas embedded platforms have restricted compute and energy budgets. It is therefore challenging to realize deep learning models on embedded platforms.

In this thesis, we define the objectives of implementing deep learning models on embedded platforms. The main objective is efficiency: an implementation should achieve high throughput, preserve low power consumption, and meet real-time requirements. The secondary objective is flexibility: it is not enough to propose an efficient hardware solution for a single model; the solution should also accommodate changes in the model and in the application constraints. Thus, the overarching goal of the thesis is to explore flexible methods for the efficient realization of deep learning models on embedded platforms.

Optimizations are applied to both the DL model and the embedded platform to increase implementation efficiency. To understand the impact of different optimizations, we chose recurrent neural networks (RNNs, as a class of DL models) and compared their implementations on embedded platforms. The comparison analyzes the optimizations applied and the corresponding performance, drawing conclusions about the most fruitful and essential optimizations. We concluded that two steps are essential for high efficiency: applying an algorithmic optimization to the model to decrease its compute and memory requirements, and applying a memory-specific optimization to hide the overhead of memory accesses. Furthermore, the comparison revealed that many of the works under study focus on implementation efficiency, while flexibility is attempted less often.
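
To make the first conclusion concrete: a typical algorithmic optimization of this kind is post-training quantization, which shrinks both the model's memory footprint and the cost of each operation. The following is a minimal NumPy sketch of uniform symmetric quantization, written for illustration only; it is not a method taken from the thesis, and the layer size is an arbitrary example.

    import numpy as np

    def quantize_linear(weights: np.ndarray, bits: int = 8):
        """Uniform symmetric quantization of a float32 weight tensor.
        Returns integer codes plus the scale needed to dequantize."""
        qmax = 2 ** (bits - 1) - 1                # e.g. 127 for int8
        scale = np.abs(weights).max() / qmax      # map the largest weight to qmax
        codes = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
        return codes, scale

    # Example: a 1024x1024 layer shrinks from 4 MB (float32) to 1 MB (int8).
    w = np.random.randn(1024, 1024).astype(np.float32)
    q, s = quantize_linear(w)
    print(w.nbytes // 1024, "KB ->", q.nbytes // 1024, "KB")
    print("max reconstruction error:", np.abs(w - q.astype(np.float32) * s).max())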

We have explored the design space of convolutional neural networks (CNNs) on the Epiphany manycore architecture. We adopted a pipelined implementation of CNNs that relies solely on on-chip memory to store the weights. The proposed mapping supports both the AlexNet and GoogLeNet CNN models, varying precision for the weights, and two memory sizes for the Epiphany cores. We achieved competitive performance with respect to emerging manycores.
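
As a rough illustration of the on-chip constraint such a mapping works under, the sketch below checks whether a layer's weights, split over a group of cores, fit in the per-core local memory at a given precision. The 32 KB figure matches the Epiphany-III local store; the layer size, core count, and even weight split are assumptions made for the example, not the thesis's actual mapping.

    # Hypothetical back-of-the-envelope check, not the thesis's mapping.
    CORE_LOCAL_MEM_KB = 32      # Epiphany-III local store per core

    def weights_fit(num_weights: int, bits_per_weight: int,
                    cores_for_layer: int, mem_kb: int = CORE_LOCAL_MEM_KB) -> bool:
        """True if the layer's weights, split evenly over its cores, fit on-chip."""
        bytes_per_core = num_weights * bits_per_weight / 8 / cores_for_layer
        return bytes_per_core <= mem_kb * 1024

    # An AlexNet-conv2-sized layer (~307k weights, illustrative figure):
    for bits in (32, 8, 1):
        print(bits, "bit:", weights_fit(307_200, bits, cores_for_layer=16))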

As part of the work in progress, we have studied a DL-architecture co-design approach to increase the flexibility of hardware solutions. A flexible platform should support variations in the model and in the optimizations. The optimization method should be automated so that it responds to changes in the model and the application constraints with minor effort. In addition, the mapping of the models onto embedded platforms should be automated as well.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2020, p. 81
Series
Halmstad University Dissertations ; 71
National Category
Embedded Systems
Identifiers
URN: urn:nbn:se:hh:diva-41969
ISBN: 978-91-88749-51-2 (electronic)
ISBN: 978-91-88749-50-5 (print)
OAI: oai:DiVA.org:hh-41969
DiVA, id: diva2:1426791
Presentation
2020-06-04, Wigforss, Visionen, Halmstad University, Kristian IV:s väg 3, Halmstad, 10:00 (English)
Available from: 2020-05-14 Created: 2020-04-27 Last updated: 2020-05-14 Bibliographically approved
List of papers
1. Recurrent Neural Networks: An Embedded Computing Perspective
2020 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 57967-57996. Article in journal (Refereed), published
Abstract [en]

Recurrent Neural Networks (RNNs) are a class of machine learning algorithms used for applications with time-series and sequential data. Recently, there has been a strong interest in executing RNNs on embedded devices. However, difficulties arise because RNNs require high computational capability and a large memory space. In this paper, we review existing implementations of RNN models on embedded platforms and discuss the methods adopted to overcome the limitations of embedded systems. We define the objectives of mapping RNN algorithms onto embedded platforms and the challenges facing their realization. Then, we explain the components of RNN models from an implementation perspective. We also discuss the optimizations applied to RNNs to run efficiently on embedded platforms. Finally, we compare the defined objectives with the implementations and highlight some open research questions and aspects currently not addressed for embedded RNNs. Overall, applying algorithmic optimizations to RNN models and decreasing the memory access overhead is vital to obtaining high efficiency. To further increase the implementation efficiency, we point out the more promising optimizations that could be applied in future research. Additionally, this article observes that high performance has been targeted by many implementations, while flexibility has, as yet, been attempted less often. Thus, the article provides some guidelines for RNN hardware designers to support flexibility in a better manner. © 2020 IEEE.
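
To see why RNNs strain embedded memory budgets, consider the parameter count of a single LSTM layer: four gates, each with input-to-hidden and hidden-to-hidden weights plus a bias. The sketch below uses the standard LSTM formulation; the layer sizes are illustrative and not taken from the article.

    def lstm_params(input_size: int, hidden_size: int) -> int:
        """Parameters of one LSTM layer: 4 gates x (W_x + W_h + bias)."""
        return 4 * (hidden_size * (input_size + hidden_size) + hidden_size)

    # A modest speech-recognition-sized layer already needs tens of MB at float32:
    n = lstm_params(input_size=512, hidden_size=1024)
    print(n, "parameters:", round(n * 4 / 2**20, 1), "MB at float32,",
          round(n / 2**20, 1), "MB at int8")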

Place, publisher, year, edition, pages
Piscataway: IEEE, 2020
Keywords
Compression, flexibility, efficiency, embedded computing, long short term memory (LSTM), quantization, recurrent neural networks (RNNs)
National Category
Computer Systems
Identifiers
urn:nbn:se:hh:diva-41981 (URN)
10.1109/ACCESS.2020.2982416 (DOI)
2-s2.0-85082939909 (Scopus ID)
Projects
NGES (Towards Next Generation Embedded Systems: Utilizing Parallelism and Reconfigurability)
Funder
Vinnova, INT/SWD/VINN/p-10/2015
Note

As manuscript in thesis.

Other funding: Government of India

Available from: 2020-04-30 Created: 2020-04-30 Last updated: 2020-05-12
2. Streaming Tiles: Flexible Implementation of Convolution Neural Networks Inference on Manycore Architectures
2018 (English). In: 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Los Alamitos: IEEE Computer Society, 2018, p. 867-876. Conference paper (Refereed), published
Abstract [en]

Convolutional neural networks (CNNs) are extensively used for deep learning applications such as image recognition and computer vision. The convolution module of these networks is highly compute-intensive. An efficient implementation of the convolution module makes it feasible to realize the inference part of the network on embedded platforms. Low-precision parameters require less memory, less computation time, and less power while achieving high classification accuracy. Furthermore, streaming the data over parallelized processing units saves a considerable amount of memory, which is a key concern in memory-constrained embedded platforms. In this paper, we explore the design space for streamed CNNs on the Epiphany manycore architecture using varying precisions for the weights (ranging from binary to 32-bit). Both AlexNet and GoogLeNet are explored for two different memory sizes of Epiphany cores. We achieve competitive performance for both AlexNet and GoogLeNet with respect to emerging manycores. Furthermore, the effects of different design choices in terms of precision, memory size, and the number of cores are evaluated by applying the proposed method.
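
The sketch below conveys the streaming idea in miniature: a 2-D convolution that keeps only one tile of output rows (plus the halo rows the kernel needs) resident at a time, so buffering is bounded by the tile size rather than the whole feature map. It is a single-core Python analogy of the concept, written under assumptions of our own, not the paper's manycore implementation.

    import numpy as np

    def conv2d_streamed(image, kernel, tile_rows=8):
        """Convolve by streaming horizontal tiles of rows through the kernel,
        so only one tile plus its halo is resident at a time."""
        kh, kw = kernel.shape
        out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((out_h, out_w), dtype=np.float32)
        for r0 in range(0, out_h, tile_rows):
            r1 = min(r0 + tile_rows, out_h)
            tile = image[r0:r1 + kh - 1, :]          # tile plus halo rows
            for i in range(r1 - r0):
                for j in range(out_w):
                    out[r0 + i, j] = np.sum(tile[i:i + kh, j:j + kw] * kernel)
        return out

    img = np.random.rand(32, 32).astype(np.float32)
    k = np.random.rand(3, 3).astype(np.float32)
    print(conv2d_streamed(img, k).shape)             # (30, 30)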

Place, publisher, year, edition, pages
Los Alamitos: IEEE Computer Society, 2018
Keywords
manycores, CNN, stream processing, embedded systems
National Category
Embedded Systems
Identifiers
urn:nbn:se:hh:diva-36887 (URN)
10.1109/IPDPSW.2018.00138 (DOI)
2-s2.0-85052195969 (Scopus ID)
978-1-5386-5555-9 (ISBN)
978-1-5386-5556-6 (ISBN)
Conference
The 7th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics, Vancouver, British Columbia, Canada, May 21, 2018
Projects
NGES (Towards Next Generation Embedded Systems: Utilizing Parallelism and Reconfigurability)
Funder
Vinnova
Note

As manuscript in thesis.

Other funding: Department of Science and Technology, Government of India.

Available from: 2018-06-01 Created: 2018-06-01 Last updated: 2020-04-30 Bibliographically approved
3. ModelFlex: Parameter Tuning for Flexible Design of Deep Learning Accelerators
(English). Manuscript (preprint) (Other academic)
Abstract [en]

Algorithmic optimizations are applied to neural network models to decrease their compute and memory requirements for efficient realization on embedded platforms. Feedback from the target platform during the optimization process can increase the benefit of these optimizations. In this paper, we propose a method for hardware-guided optimization of recurrent neural networks. The method is automated to respond to changes in the model or the application constraints with minimal effort. In addition, a hybrid of three optimizations is applied to the base RNN model to enlarge the search space for a feasible solution and increase the chance of skipping retraining.
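
In the spirit of this abstract, an automated, constraint-driven tuning loop might look like the sketch below: candidate precisions are tried against platform feedback until the constraints are met. The cost model, search space, and all names here are hypothetical placeholders for illustration, not the paper's actual method.

    CANDIDATE_BITS = [16, 12, 8, 6, 4]      # hypothetical search space

    def meets_constraints(bits: int, hidden_size: int,
                          mem_budget_kb: float, latency_budget_ms: float) -> bool:
        """Stand-in platform feedback: crude memory and latency estimates."""
        params = 4 * hidden_size * (2 * hidden_size + 1)   # LSTM-like layer
        mem_kb = params * bits / 8 / 1024
        latency_ms = params / 1e6 * (bits / 8)             # toy cost model
        return mem_kb <= mem_budget_kb and latency_ms <= latency_budget_ms

    def tune(hidden_size=512, mem_budget_kb=2048, latency_budget_ms=5.0):
        for bits in CANDIDATE_BITS:          # highest precision (least loss) first
            if meets_constraints(bits, hidden_size,
                                 mem_budget_kb, latency_budget_ms):
                return bits
        return None          # no feasible precision: fall back to retraining

    print("selected precision:", tune())     # -> 6 with these example budgets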

National Category
Embedded Systems
Identifiers
urn:nbn:se:hh:diva-41998 (URN)
Note

As manuscript in thesis

Available from: 2020-05-05 Created: 2020-05-05 Last updated: 2020-05-05

Open Access in DiVA

fulltext (PDF, 15388 kB)

Authority records

Rezk, Nesma
