Establishing strong imputation performance of a denoising autoencoder in a wide range of missing data problems
Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centre for Applied Intelligent Systems Research (IS-lab). Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden. ORCID iD: 0000-0003-1145-4297
2019 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 65, p. 137-146. Article in journal (Refereed). Published
Abstract [en]

Dealing with missing data in data analysis is inevitable. Although powerful imputation methods that address this problem exist, there is still much room for improvement. In this study, we examined single imputation based on deep autoencoders, motivated by the apparent success of deep learning to efficiently extract useful dataset features. We have developed a consistent framework for both training and imputation. Moreover, we benchmarked the results against state-of-the-art imputation methods on different data sizes and characteristics. The work was not limited to the one-type variable dataset; we also imputed missing data with multi-type variables, e.g., a combination of binary, categorical, and continuous attributes. To evaluate the imputation methods, we randomly corrupted the complete data, with varying degrees of corruption, and then compared the imputed and original values. In all experiments, the developed autoencoder obtained the smallest error for all ranges of initial data corruption. © 2019 Elsevier B.V.
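The evaluation protocol described above (randomly corrupt a complete dataset, impute, then compare imputed and original values) can be sketched with a toy example. The following is a minimal illustration only, not the authors' implementation: a one-hidden-layer denoising autoencoder with hand-written gradients, trained to reconstruct observed entries of a synthetic correlated dataset, benchmarked against column-mean imputation. All data, architecture, and hyperparameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complete dataset: 200 samples, 5 correlated continuous features
# (rank-2 latent structure plus a little noise), standardized per column.
n, d, h = 200, 5, 8
X = rng.normal(size=(n, 2)) @ rng.normal(size=(2, d)) + 0.1 * rng.normal(size=(n, d))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Randomly corrupt 20% of the entries (missing completely at random).
miss = rng.random(X.shape) < 0.2
X_obs = np.where(miss, np.nan, X)

# Start from column-mean imputation -- also the baseline to beat.
X_fill = np.where(miss, np.nanmean(X_obs, axis=0), X_obs)

# One-hidden-layer denoising autoencoder, trained by full-batch gradient
# descent; the reconstruction loss counts only originally observed entries.
W1 = rng.normal(scale=0.3, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.3, size=(h, d)); b2 = np.zeros(d)
lr = 0.05
for _ in range(3000):
    drop = rng.random(X_fill.shape) < 0.2        # denoising: hide random inputs
    X_in = np.where(drop, 0.0, X_fill)
    H = np.tanh(X_in @ W1 + b1)
    X_hat = H @ W2 + b2
    err = np.where(miss, 0.0, X_hat - X) / n     # observed entries only
    dH = (err @ W2.T) * (1.0 - H ** 2)           # backprop through tanh
    W2 -= lr * (H.T @ err);  b2 -= lr * err.sum(axis=0)
    W1 -= lr * (X_in.T @ dH); b1 -= lr * dH.sum(axis=0)

# Impute: run the filled matrix through the network and keep its
# predictions at the missing positions only.
X_hat = np.tanh(X_fill @ W1 + b1) @ W2 + b2
X_imp = np.where(miss, X_hat, X_fill)

# Compare imputed and original values at the corrupted positions.
rmse = lambda A: np.sqrt(np.mean((A[miss] - X[miss]) ** 2))
print(f"mean-imputation RMSE: {rmse(X_fill):.3f}")
print(f"autoencoder RMSE:     {rmse(X_imp):.3f}")
```

On data with this much cross-feature correlation, the autoencoder's RMSE at the corrupted positions should come out clearly below the mean-imputation baseline, mirroring the comparison the study performs at varying degrees of corruption.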

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2019. Vol. 65, p. 137-146
Keywords [en]
Deep learning, Autoencoder, Imputation, Missing data
HSV category
Identifiers
URN: urn:nbn:se:hh:diva-41245
DOI: 10.1016/j.neucom.2019.07.065
ISI: 000484072600014
Scopus ID: 2-s2.0-85069939556
OAI: oai:DiVA.org:hh-41245
DiVA, id: diva2:1378263
Research funder
Swedish Foundation for Strategic Research
Available from: 2019-12-13. Created: 2019-12-13. Last updated: 2020-04-29. Bibliographically approved.

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text | Scopus

Person records BETA

Ohlsson, Mattias
