hh.se Publications
1 - 3 of 3
  • 1.
    Abiri, Najmeh
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Linse, Björn
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Edén, Patrik
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Ohlsson, Mattias
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Establishing strong imputation performance of a denoising autoencoder in a wide range of missing data problems (2019). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 365, p. 137-146. Article in journal (Refereed)
    Abstract [en]

    Dealing with missing data in data analysis is inevitable. Although powerful imputation methods that address this problem exist, there is still much room for improvement. In this study, we examined single imputation based on deep autoencoders, motivated by the apparent success of deep learning at efficiently extracting useful features from datasets. We developed a consistent framework for both training and imputation, and benchmarked the results against state-of-the-art imputation methods on data of different sizes and characteristics. The work was not limited to datasets with a single variable type; we also imputed missing data in datasets with mixed variable types, e.g., a combination of binary, categorical, and continuous attributes. To evaluate the imputation methods, we randomly corrupted the complete data to varying degrees and then compared the imputed and original values. In all experiments, the developed autoencoder obtained the smallest error across all levels of initial data corruption. © 2019 Elsevier B.V. (An illustrative sketch of autoencoder-based imputation appears after the result list.)

  • 2.
    Bodén, Mikael
    Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Australia.
    Generalization by symbolic abstraction in cascaded recurrent networks (2004). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 57, no. 1-4, p. 87-104. Article in journal (Refereed)
    Abstract [en]

    Generalization performance in recurrent neural networks is enhanced by cascading several networks. By discretizing abstractions induced in one network, other networks can operate on a coarse symbolic level with increased performance on sparse and structural prediction tasks. The level of systematicity exhibited by the cascade of recurrent networks is assessed on three language domains. (An illustrative sketch of the discretize-and-cascade idea appears after the result list.)

  • 3.
    Verikas, Antanas
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Gelzinis, Adas
    Department of Applied Electronics, Kaunas University of Technology, Lithuania.
    Training neural networks by stochastic optimisation (2000). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 30, no. 1-4, p. 153-172. Article in journal (Refereed)
    Abstract [en]

    We present a stochastic learning algorithm for neural networks. The algorithm makes no assumptions about the transfer functions of individual neurons and does not depend on the functional form of the performance measure. It adapts weights using a random step of varying size, with the average step size decreasing during learning. Large steps enable the algorithm to jump over local maxima/minima, while small ones ensure convergence in a local area. We investigate the convergence properties of the proposed algorithm and test it on four supervised and unsupervised learning problems, finding it superior to several known algorithms on both generated and real data. (An illustrative sketch of random-step weight search appears after the result list.)
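Sketch 1 (entry 1): a minimal, hypothetical illustration of denoising-autoencoder imputation, assuming a fully connected encoder/decoder, random masking of observed entries as the corruption step, and a reconstruction loss restricted to observed values. The layer sizes, corruption rate, and training schedule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch only: a small denoising autoencoder for single imputation.
import numpy as np
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_features, hidden=32, code=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, code), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_and_impute(x, mask, epochs=200, corrupt_p=0.2):
    """x: (n, d) array with zeros at missing positions; mask: 1 = observed."""
    model = DenoisingAE(x.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    xt = torch.tensor(x, dtype=torch.float32)
    mt = torch.tensor(mask, dtype=torch.float32)
    for _ in range(epochs):
        # Denoising step: randomly drop some *observed* entries so the network
        # learns to reconstruct values from partially observed rows.
        drop = (torch.rand_like(mt) > corrupt_p).float() * mt
        out = model(xt * drop)
        # Reconstruction loss restricted to entries that were actually observed.
        loss = ((out - xt) ** 2 * mt).sum() / mt.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        recon = model(xt * mt).numpy()
    # Keep observed values; fill missing positions with the reconstruction.
    return np.where(mask.astype(bool), x, recon)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    full = rng.normal(size=(200, 10))
    full[:, 5:] = 0.5 * full[:, :5] + rng.normal(scale=0.1, size=(200, 5))
    mask = (rng.random(full.shape) > 0.2).astype(float)  # ~20% missing at random
    imputed = train_and_impute(np.where(mask.astype(bool), full, 0.0), mask)
    rmse = np.sqrt(((imputed - full) ** 2)[mask == 0].mean())
    print(f"RMSE on the artificially removed entries: {rmse:.3f}")
```

Restricting the loss to observed entries keeps the unknown positions from biasing training; at imputation time the network's reconstruction fills them in, mirroring the evaluation protocol in the abstract (corrupt complete data, then compare imputed against original values).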
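Sketch 2 (entry 2): a hypothetical illustration of the cascading idea, assuming a small recurrent network trained on a toy next-token task whose hidden states are then discretized (here by plain k-means) into symbols that a second, downstream network would consume. The task, network sizes, and clustering choice are assumptions; the paper's own discretization scheme and language domains may differ.

```python
# Hypothetical sketch only: discretize one recurrent network's hidden states
# into symbols for a second network in the cascade to operate on.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
rng = np.random.default_rng(0)

def make_batch(n=64, t=12, vocab=3):
    """Toy next-token prediction over a 3-symbol alphabet (stand-in for a language domain)."""
    x = rng.integers(0, vocab, size=(n, t))
    return torch.tensor(x[:, :-1]), torch.tensor(x[:, 1:])

class TinyRNN(nn.Module):
    def __init__(self, vocab=3, hidden=16):
        super().__init__()
        self.emb = nn.Embedding(vocab, 8)
        self.rnn = nn.RNN(8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h), h

net1 = TinyRNN()
opt = torch.optim.Adam(net1.parameters(), lr=1e-2)
for _ in range(200):  # train the first network end to end
    x, y = make_batch()
    logits, _ = net1(x)
    loss = nn.functional.cross_entropy(logits.reshape(-1, 3), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Abstraction step: cluster hidden states into a few discrete symbols,
# yielding the coarse symbolic code a downstream network would operate on.
x, _ = make_batch(256)
with torch.no_grad():
    _, h = net1(x)
states = h.reshape(-1, h.shape[-1]).numpy()
k = 5
centers = states[rng.choice(len(states), k, replace=False)]
for _ in range(20):  # Lloyd's iterations
    labels = ((states[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    centers = np.stack([states[labels == j].mean(0) if (labels == j).any() else centers[j]
                        for j in range(k)])

symbols = labels.reshape(h.shape[0], h.shape[1])  # per-timestep symbol sequences
print("Discretized hidden-state symbols, first sequence:", symbols[0])
```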
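Sketch 3 (entry 3): a hypothetical illustration of derivative-free, random-step weight search in the spirit of the abstract: every weight is perturbed by a random step whose average size decays during learning, and the perturbation is kept when the error drops. The greedy acceptance rule, decay schedule, and XOR task are illustrative assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch only: derivative-free random-step weight search.
import numpy as np

rng = np.random.default_rng(0)

def forward(w, x):
    w1, b1, w2, b2 = w
    h = np.tanh(x @ w1 + b1)  # any transfer function would do; no gradients needed
    return h @ w2 + b2

def mse(w, x, y):
    return ((forward(w, x) - y) ** 2).mean()

# XOR: a small task with a non-convex error surface.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

w = [rng.normal(scale=0.5, size=s) for s in [(2, 4), (4,), (4, 1), (1,)]]
best = mse(w, x, y)
step = 1.0  # initial average step size
for _ in range(20000):
    # Perturb every weight by a random step; keep the step if the error drops.
    trial = [p + rng.normal(scale=step, size=p.shape) for p in w]
    err = mse(trial, x, y)
    if err < best:
        w, best = trial, err
    # Shrink the average step slowly: early large steps can jump over local
    # minima, late small steps ensure convergence in a local area.
    step = max(1e-3, step * 0.9997)

print(f"final MSE: {best:.4f}")
print("network outputs:", forward(w, x).round(2).ravel())
```

Because the search relies only on error evaluations, it places no constraints on the neurons' transfer functions or on the functional form of the performance measure, matching the abstract's premise.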
