Evaluation of Parallel Programming Standards for Embedded High Performance Computing
2010 (English). Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

The aim of this project is to evaluate parallel programming standards for embedded high performance computing. Present-day radar signal processing demands high computational speed and performance, so more processors are needed to reach sufficient performance. One way of obtaining high performance is to divide the work across multiple processors, while at the same time keeping communication overhead low and speedup good. This has been done using the parallel programming standards OpenMP and MPI. We apply these parallel programming languages to a radar signal benchmark that is similar to many tasks in radar signal processing. OpenMP is run on a shared-memory SUNFIRE E2900 system; MPI is run on a SUNFIRE E2900 containing 8 nodes, using SUN HPC cluster tools v5. The OpenMP program shows good speedup up to 5 processors, after which an increase in communication overhead is observed. MPI shows low communication overhead at first, but its performance decreases as the number of processors is increased. OpenMP and MPI behave similarly in this respect: beyond a certain number of processors, efficiency declines and communication overhead increases. According to our results, OpenMP is relatively easy to use compared to MPI, since with MPI it is up to the programmer to make explicit calls in order to parallelize.
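
To make the last point concrete, the following is a minimal sketch, not taken from the thesis, contrasting the two standards on a simple sum reduction: OpenMP parallelizes an existing loop with a single pragma, whereas MPI requires the programmer to partition the work and make explicit communication calls. The problem size, variable names, and the reduction itself are illustrative assumptions; the benchmark in the thesis is a radar signal processing kernel, not this loop.

/* Illustrative sketch only (not from the thesis): contrasts OpenMP's
 * directive-based parallelism with MPI's explicit calls on a sum reduction. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 1000000  /* assumed problem size, for illustration */

/* OpenMP: a single pragma parallelizes the existing serial loop. */
static double sum_openmp(const double *x)
{
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += x[i];
    return sum;
}

/* MPI: the programmer explicitly partitions the index range and combines
 * the partial sums with a collective call. For simplicity every rank holds
 * a full copy of x; a real code would distribute the data. */
static double sum_mpi(const double *x, int rank, int nprocs)
{
    int chunk = N / nprocs;
    int start = rank * chunk;
    int end   = (rank == nprocs - 1) ? N : start + chunk;

    double local = 0.0;
    for (int i = start; i < end; i++)
        local += x[i];

    double total = 0.0;
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return total;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double *x = malloc(N * sizeof *x);
    for (int i = 0; i < N; i++)
        x[i] = 1.0;

    double s_omp = sum_openmp(x);             /* shared-memory version */
    double s_mpi = sum_mpi(x, rank, nprocs);  /* message-passing version */

    if (rank == 0)
        printf("OpenMP sum = %.1f, MPI sum = %.1f\n", s_omp, s_mpi);

    free(x);
    MPI_Finalize();
    return 0;
}

A program like this would typically be built with mpicc plus an OpenMP flag such as -fopenmp and launched with mpirun; the point is only the difference in programmer effort between the two standards, not performance.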

Place, publisher, year, edition, pages
2010, p. 81
Keywords [en]
OpenMP, MPI, Parallelism
National Category
Information Systems; Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-6047
OAI: oai:DiVA.org:hh-6047
DiVA, id: diva2:356219
Uppsok
Technology
Available from: 2010-10-12. Created: 2010-09-30. Last updated: 2018-01-12. Bibliographically approved.

Open Access in DiVA

fulltext (747 kB), 284 downloads
File information
File name: FULLTEXT01.pdf. File size: 747 kB. Checksum SHA-512:
64bfbb69e5e5a7aab233478703a49710305139e47f7298e6bf17f8e8aac2a9fd4acce221b1985d6497927544cc8eaf1e245b5bf5cca937137c474a044a819848
Type: fulltext. Mimetype: application/pdf
