Embeddings Based Parallel Stacked Autoencoder Approach for Dimensionality Reduction and Predictive Maintenance of Vehicles
Halmstad University.
Halmstad University.
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. ORCID iD: 0000-0003-2590-6661
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. ORCID iD: 0000-0002-3797-4605
2020 (English). In: IoT Streams for Data-Driven Predictive Maintenance and IoT, Edge, and Mobile for Embedded Machine Learning / [ed] Joao Gama, Sepideh Pashami, Albert Bifet, Moamar Sayed-Mouchawe, Holger Fröning, Franz Pernkopf, Gregor Schiele, Michaela Blott, Heidelberg: Springer, 2020, p. 127-141. Conference paper, Published paper (Refereed)
Abstract [en]

Predictive Maintenance (PdM) of automobiles requires the storage and analysis of large amounts of sensor data. This requirement can make deploying PdM algorithms onboard vehicles challenging, due to the limited storage and computational power of the vehicle hardware. Hence, this study seeks to obtain low-dimensional descriptive features from high-dimensional data using Representation Learning. The low-dimensional representation can then be used for predicting vehicle faults, in particular for a component related to the powertrain. A Parallel Stacked Autoencoder based architecture is presented with the aim of producing better representations of vehicle data than individual Autoencoders. In addition, Embeddings are employed for the categorical variables to aid the performance of the artificial neural network (ANN) models. This architecture is shown to achieve excellent performance, close to the previous state of the art. A significant improvement in powertrain failure prediction is obtained, along with a reduction in the size of the input data, using our novel deep learning ANN architecture.

© Springer Nature Switzerland AG 2020
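The two ideas named in the abstract — entity embeddings for categorical variables, and parallel autoencoders whose latent codes are concatenated into one low-dimensional representation — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the data, the group split, the layer sizes, and the `TiedAutoencoder` class are all hypothetical, chosen only to show the structure of the approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 32 vehicle snapshots with 20 numeric sensor
# readings plus one categorical field (e.g. vehicle model, 5 categories).
X = rng.normal(size=(32, 20))
cat = rng.integers(0, 5, size=32)

# Entity embedding for the categorical variable: a lookup table mapping
# each category to a dense vector (randomly initialized here; in a real
# model it would be trained jointly with the network).
emb_dim = 3
emb_table = rng.normal(scale=0.1, size=(5, emb_dim))
cat_emb = emb_table[cat]                      # shape (32, 3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TiedAutoencoder:
    """One-hidden-layer autoencoder with tied weights, trained by
    full-batch gradient descent on squared reconstruction error."""

    def __init__(self, n_in, n_hidden, rng):
        self.W = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.b = np.zeros(n_hidden)           # encoder bias
        self.c = np.zeros(n_in)               # decoder bias

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def decode(self, h):
        return h @ self.W.T + self.c

    def train(self, X, lr=0.05, epochs=200):
        for _ in range(epochs):
            h = self.encode(X)
            err = self.decode(h) - X          # (n, n_in)
            # Gradient of W has two terms because W is shared
            # between the encoder and the decoder (tied weights).
            dh = (err @ self.W) * h * (1 - h)
            gW = X.T @ dh + err.T @ h
            self.W -= lr * gW / len(X)
            self.b -= lr * dh.mean(axis=0)
            self.c -= lr * err.mean(axis=0)

# "Parallel" arrangement: split the sensor channels into groups, give
# each group its own autoencoder, and concatenate the latent codes
# together with the categorical embedding.
groups = [X[:, :10], X[:, 10:]]
aes = [TiedAutoencoder(g.shape[1], 4, rng) for g in groups]
for ae, g in zip(aes, groups):
    ae.train(g)

latent = np.concatenate(
    [ae.encode(g) for ae, g in zip(aes, groups)] + [cat_emb], axis=1)
print(latent.shape)   # (32, 11): 20 raw features reduced to 2*4 + 3
```

The concatenated `latent` matrix is the compact representation that a downstream fault-prediction classifier would consume in place of the raw high-dimensional sensor data.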

Place, publisher, year, edition, pages
Heidelberg: Springer, 2020. p. 127-141
Series
Communications in Computer and Information Science, ISSN 1865-0937
Keywords [en]
Dimensionality Reduction, Autoencoder, Artificial Neural Network, Embeddings, Powertrain, Predictive Maintenance
National Category
Vehicle Engineering
Identifiers
URN: urn:nbn:se:hh:diva-43772
DOI: 10.1007/978-3-030-66770-2_10
Scopus ID: 2-s2.0-85101510405
ISBN: 978-3-030-66769-6 (print)
ISBN: 978-3-030-66770-2 (electronic)
OAI: oai:DiVA.org:hh-43772
DiVA, id: diva2:1516172
Conference
Second International Workshop, IoT Streams 2020, and First International Workshop, ITEM 2020, Co-located with ECML/PKDD 2020, Ghent, Belgium, September 14-18, 2020
Available from: 2021-01-11 Created: 2021-01-11 Last updated: 2023-03-07. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Rahat, Mahmoud; Khoshkangini, Reza
