Establishing strong imputation performance of a denoising autoencoder in a wide range of missing data problems
Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab); Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden. ORCID iD: 0000-0003-1145-4297
2019 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 65, pp. 137-146. Article in journal (Refereed). Published.
Abstract [en]

Dealing with missing data in data analysis is inevitable. Although powerful imputation methods that address this problem exist, there is still much room for improvement. In this study, we examined single imputation based on deep autoencoders, motivated by the apparent success of deep learning to efficiently extract useful dataset features. We have developed a consistent framework for both training and imputation. Moreover, we benchmarked the results against state-of-the-art imputation methods on different data sizes and characteristics. The work was not limited to the one-type variable dataset; we also imputed missing data with multi-type variables, e.g., a combination of binary, categorical, and continuous attributes. To evaluate the imputation methods, we randomly corrupted the complete data, with varying degrees of corruption, and then compared the imputed and original values. In all experiments, the developed autoencoder obtained the smallest error for all ranges of initial data corruption. © 2019 Elsevier B.V.
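The abstract outlines an evaluation protocol: start from a complete dataset, randomly remove entries at varying rates, impute them, and compare the imputed values with the originals. The sketch below illustrates that protocol only; it is not the authors' autoencoder. A simple column-mean imputer stands in for the model, and all function names and the RMSE metric are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(data, missing_rate):
    """Return a copy of `data` with entries removed (set to NaN) at `missing_rate`."""
    mask = rng.random(data.shape) < missing_rate
    corrupted = data.copy()
    corrupted[mask] = np.nan
    return corrupted, mask

def mean_impute(corrupted):
    """Baseline single imputation: fill each NaN with its column mean.
    (The paper's method would replace this step with a trained autoencoder.)"""
    col_means = np.nanmean(corrupted, axis=0)
    return np.where(np.isnan(corrupted), col_means, corrupted)

def imputation_rmse(original, imputed, mask):
    """RMSE computed over the artificially removed entries only."""
    diff = original[mask] - imputed[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

data = rng.normal(size=(200, 5))                    # complete "ground truth" data
corrupted, mask = corrupt(data, missing_rate=0.2)   # one degree of corruption
imputed = mean_impute(corrupted)
print(imputation_rmse(data, imputed, mask))
```

Sweeping `missing_rate` over a range of values reproduces the "varying degrees of corruption" comparison described above; a stronger imputer should yield a lower RMSE at every rate.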

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2019. Vol. 65, pp. 137-146
Keywords [en]
Deep learning, Autoencoder, Imputation, Missing data
National subject category
Other Computer and Information Science
Identifiers
URN: urn:nbn:se:hh:diva-41245; DOI: 10.1016/j.neucom.2019.07.065; ISI: 000484072600014; Scopus ID: 2-s2.0-85069939556; OAI: oai:DiVA.org:hh-41245; DiVA id: diva2:1378263
Research funder
Stiftelsen för strategisk forskning (SSF). Available from: 2019-12-13. Created: 2019-12-13. Last updated: 2019-12-13. Bibliographically approved.

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text; Scopus

Author record

Ohlsson, Mattias
