hh.se Publications
1 - 7 of 7
  • 1.
    Abiri, Najmeh
    et al.
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Linse, Björn
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Edén, Patrik
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Ohlsson, Mattias
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Establishing strong imputation performance of a denoising autoencoder in a wide range of missing data problems. 2019. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 65, p. 137-146. Article in journal (Refereed)
    Abstract [en]

    Dealing with missing data in data analysis is inevitable. Although powerful imputation methods exist to address this problem, there is still much room for improvement. In this study, we examined single imputation based on deep autoencoders, motivated by the apparent success of deep learning in efficiently extracting useful dataset features. We developed a consistent framework for both training and imputation, and benchmarked the results against state-of-the-art imputation methods on data of different sizes and characteristics. The work was not limited to datasets with a single variable type; we also imputed missing data with mixed-type variables, e.g., a combination of binary, categorical, and continuous attributes. To evaluate the imputation methods, we randomly corrupted the complete data with varying degrees of corruption and then compared the imputed and original values. In all experiments, the developed autoencoder obtained the smallest error across all ranges of initial data corruption. © 2019 Elsevier B.V.
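The single-imputation idea in the abstract can be sketched in a few lines: train an autoencoder to reconstruct complete data from randomly zeroed-out inputs, then impute by zero-filling the missing entries and keeping the network's reconstruction there. Everything below (toy data, one-hidden-layer architecture, learning rate, corruption rate) is an illustrative assumption, not the paper's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complete data: 200 samples, 4 correlated continuous features.
z = rng.normal(size=(200, 2))
X = np.hstack([z, z @ rng.normal(size=(2, 2))])
X = (X - X.mean(0)) / X.std(0)

# One-hidden-layer denoising autoencoder trained by plain gradient descent.
d, h = X.shape[1], 8
W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, d)); b2 = np.zeros(d)

def forward(Xin):
    H = np.tanh(Xin @ W1 + b1)
    return H, H @ W2 + b2

lr = 0.1
for epoch in range(500):
    # Denoising: zero out 20% of the INPUT entries; the target stays complete.
    mask = rng.random(X.shape) < 0.2
    Xn = np.where(mask, 0.0, X)
    H, Xhat = forward(Xn)
    G = (Xhat - X) / len(X)               # gradient of 0.5 * mean squared error
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)        # backprop through tanh
    gW1 = Xn.T @ GH; gb1 = GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Single imputation: zero-fill the missing entries, reconstruct, and keep
# the network's outputs only at the missing positions.
miss = rng.random(X.shape) < 0.3
Xmiss = np.where(miss, 0.0, X)
_, Xrec = forward(Xmiss)
Ximp = np.where(miss, Xrec, X)

err_ae = np.abs(Ximp - X)[miss].mean()
err_zero = np.abs(Xmiss - X)[miss].mean()
print(f"AE imputation error {err_ae:.3f} vs zero-fill {err_zero:.3f}")
```

Because the corruption used for evaluation mirrors the corruption used during training, the autoencoder can exploit cross-feature correlations that a naive constant fill cannot.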

  • 2.
    Bodén, Mikael
    Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Australia.
    Generalization by symbolic abstraction in cascaded recurrent networks. 2004. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 57, no 1-4, p. 87-104. Article in journal (Refereed)
    Abstract [en]

    Generalization performance in recurrent neural networks is enhanced by cascading several networks. By discretizing abstractions induced in one network, other networks can operate on a coarse symbolic level with increased performance on sparse and structural prediction tasks. The level of systematicity exhibited by the cascade of recurrent networks is assessed on the basis of three language domains.
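The discretization step in the cascade can be pictured as quantizing one network's continuous hidden states into a small symbol alphabet that the next network consumes. The clustering choice (plain k-means) and all shapes below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hidden states produced by a first recurrent network over a
# 40-step sequence, hidden size 6.
H = rng.normal(size=(40, 6))

# Discretize the continuous states into k symbols with a few rounds of
# plain k-means, then hand the symbol sequence to the next network as
# one-hot vectors -- the "symbolic abstraction" passed along the cascade.
k = 4
centers = H[rng.choice(len(H), k, replace=False)]
for _ in range(10):
    labels = np.argmin(((H[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for j in range(k):
        if (labels == j).any():
            centers[j] = H[labels == j].mean(0)

symbols = np.eye(k)[labels]   # coarse one-hot input for the second network
print(symbols.shape)
```

Operating on the coarse symbols rather than the raw continuous states is what lets the downstream network generalize more systematically on sparse structural tasks.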

  • 3.
    Farouq, Shiraz
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Byttner, Stefan
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bouguelia, Mohamed-Rafik
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Gadd, Henrik
    Halmstad University, School of Business, Innovation and Sustainability, The Rydberg Laboratory for Applied Sciences (RLAS).
    Mondrian conformal anomaly detection for fault sequence identification in heterogeneous fleets. 2021. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 462, p. 591-606. Article in journal (Refereed)
    Abstract [en]

    We considered the case of monitoring a large fleet where heterogeneity in the operational behavior among its constituent units (i.e., systems or machines) is non-negligible and no labeled data is available. Each unit in the fleet, referred to as a target, is tracked by its sub-fleet. A conformal sub-fleet (CSF) is a set of units that acts as a proxy for the normal operational behavior of a target unit by relying on the Mondrian conformal anomaly detection framework. Two approaches, k-nearest neighbors and conformal clustering, were investigated for constructing such a sub-fleet by formulating a stability criterion. Moreover, it is important to discover the sub-sequence of events that describes anomalous behavior in a target unit. Hence, we proposed to extract such sub-sequences for further investigation without pre-specifying their length; we refer to such a sub-sequence as a conformal anomaly sequence (CAS). Furthermore, different nonconformity measures were evaluated for their efficiency, i.e., their ability to detect anomalous behavior in a target unit, based on the length of the observed CAS and the S-criterion value. The CSF approach was evaluated in the context of monitoring district heating substations. Anomalous behavior sub-sequences were corroborated with a domain expert, leading to the conclusion that the proposed approach has the potential to be useful for both diagnostic and knowledge extraction purposes, especially in domains where labeled data is unavailable or hard to obtain. © 2021
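The conformal core of the approach is a p-value computed from nonconformity scores calibrated on a unit's sub-fleet: a new reading is anomalous when few calibration scores are as extreme. The nonconformity measure below (distance to the sub-fleet mean) and the toy readings are stand-in assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sub-fleet of 5 peer units and a calibration set of the target's readings,
# all drawn from the same nominal operating regime (toy numbers).
subfleet = rng.normal(loc=50.0, scale=2.0, size=(5, 200))
target_cal = rng.normal(loc=50.0, scale=2.0, size=100)

mu = subfleet.mean()

def nonconformity(x):
    # Simple stand-in measure: distance to the sub-fleet's mean behavior.
    return abs(x - mu)

cal_scores = np.array([nonconformity(x) for x in target_cal])

def conformal_p(x_new):
    """Conformal p-value: fraction of calibration scores at least as extreme."""
    a = nonconformity(x_new)
    return (np.sum(cal_scores >= a) + 1) / (len(cal_scores) + 1)

# A reading near the fleet norm gets a large p-value; an outlier a tiny one.
p_normal = conformal_p(50.5)
p_anom = conformal_p(70.0)
print(f"p(normal reading) = {p_normal:.3f}, p(outlier) = {p_anom:.3f}")
```

A CAS then corresponds to a run of consecutive readings whose p-values stay below the chosen significance level.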

  • 4.
    Ran, Hang
    et al.
    Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
    Ning, Xin
    Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Cognitive Computing Technology Joint Laboratory, Wave Group, Beijing, China.
    Li, Weijun
    Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory Of Semiconductor Neural Network Intelligent Sensing and Computing Technology, Beijing, China.
    Hao, Meilan
    Chinese Academy of Sciences, Beijing, China; Hebei University of Engineering, Handan, China.
    Tiwari, Prayag
    Halmstad University, School of Information Technology.
    3D human pose and shape estimation via de-occlusion multi-task learning. 2023. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 548, article id 126284. Article in journal (Refereed)
    Abstract [en]

    Three-dimensional human pose and shape estimation aims to compute a full human 3D mesh from a single image. The contamination of features caused by occlusion usually degrades its performance significantly. Recent progress in this field typically addressed the occlusion problem implicitly. By contrast, in this paper, we address it explicitly using a simple yet effective de-occlusion multi-task learning network. Our key insight is that the features used for mesh parameter regression should be noiseless. Thus, in the feature space, our method disentangles the occludee, which represents the noiseless human feature, from the occluder. Specifically, a spatial regularization and an attention mechanism are imposed in the backbone of our network to disentangle the features into different channels. Furthermore, two segmentation tasks are proposed to supervise the de-occlusion process. The final mesh model is regressed from the disentangled occlusion-aware features. Experiments on both occlusion and non-occlusion datasets were conducted, and the results show that our method is superior to state-of-the-art methods on two occlusion datasets, while achieving competitive performance on a non-occlusion dataset. We also demonstrate that the proposed de-occlusion strategy is the main factor in improving robustness against occlusion. The code is available at https://github.com/qihangran/De-occlusion_MTL_HMR. © 2023
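The channel-wise disentanglement can be pictured as a per-channel attention gate over the backbone feature map, with some channels routed to the human (occludee) and the rest explained away as occluder. The hard-coded half/half split, untrained weights, and shapes below are purely illustrative assumptions; in the paper the split is supervised by the two segmentation tasks.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical backbone feature map: (channels, height, width).
feat = rng.normal(size=(16, 8, 8))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Squeeze-and-excitation-style channel attention: global average pool,
# then a per-channel gate in (0, 1).
squeeze = feat.mean(axis=(1, 2))
w = rng.normal(scale=0.1, size=(16, 16))   # untrained attention weights
gate = sigmoid(squeeze @ w)
attended = feat * gate[:, None, None]

occludee_feat = attended[:8]   # channels routed to mesh parameter regression
occluder_feat = attended[8:]   # channels accounting for the occluder
print(occludee_feat.shape, occluder_feat.shape)
```

Only the occludee channels would feed the mesh regressor, so occlusion noise concentrated in the other channels never contaminates the pose and shape estimate.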

  • 5.
    Tian, Songsong
    et al.
    Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
    Li, Weijun
    Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
    Ning, Xin
    Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Zhongke Ruitu Technology Co., Ltd, Beijing, China.
    Ran, Hang
    Chinese Academy of Sciences, Beijing, China.
    Qin, Hong
    Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
    Tiwari, Prayag
    Halmstad University, School of Information Technology.
    Continuous transfer of neural network representational similarity for incremental learning. 2023. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 545, article id 126300. Article in journal (Refereed)
    Abstract [en]

    The incremental learning paradigm in machine learning has consistently been a focus of academic research. It is similar to the way in which biological systems learn, and it reduces energy consumption by avoiding excessive retraining. Existing studies utilize the powerful feature extraction capabilities of pre-trained models to address incremental learning, but neural network feature knowledge remains insufficiently utilized. To address this issue, this paper proposes a novel method called Pre-trained Model Knowledge Distillation (PMKD), which combines knowledge distillation of neural network representations with replay. We design a loss function based on centered kernel alignment to transfer representational knowledge from the pre-trained model to the incremental model layer by layer. Additionally, the use of a memory buffer for Dark Experience Replay helps the model better retain past knowledge. Experiments show that PMKD achieved superior performance across various datasets and buffer sizes, and the best class-incremental learning accuracy compared to other methods. The open-source code is published at https://github.com/TianSongS/PMKD-IL. © 2023 The Author(s)
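Centered kernel alignment, the similarity index behind the distillation loss, has a compact linear form. The sketch below computes linear CKA between two activation matrices; the toy "teacher"/"student" activations are assumptions for illustration, and a distillation loss of the form 1 - CKA per layer would match the layer-by-layer transfer described above.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices of shape (n_samples, n_features)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    hsic = np.linalg.norm(Y.T @ X, 'fro') ** 2
    return hsic / (np.linalg.norm(X.T @ X, 'fro') *
                   np.linalg.norm(Y.T @ Y, 'fro'))

rng = np.random.default_rng(4)
teacher = rng.normal(size=(64, 32))                  # pre-trained layer activations
student_bad = rng.normal(size=(64, 32))              # unrelated representation
student_good = teacher @ rng.normal(size=(32, 32))   # linear transform of teacher

cka_good = linear_cka(teacher, student_good)
cka_bad = linear_cka(teacher, student_bad)
print(f"CKA(teacher, related) = {cka_good:.3f}, CKA(teacher, unrelated) = {cka_bad:.3f}")
```

CKA scores representations by their geometry rather than exact coordinates, which is why it suits transferring "representational similarity" instead of matching raw activations.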

  • 6.
    Verikas, Antanas
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Gelzinis, Adas
    Department of Applied Electronics, Kaunas University of Technology, Lithuania.
    Training neural networks by stochastic optimisation. 2000. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 30, no 1-4, p. 153-172. Article in journal (Refereed)
    Abstract [en]

    We present a stochastic learning algorithm for neural networks. The algorithm makes no assumptions about the transfer functions of individual neurons and does not depend on the functional form of the performance measure. It adapts weights using a random step of varying size, where the average step size decreases during learning. The large steps enable the algorithm to jump over local maxima/minima, while the small ones ensure convergence in a local area. We investigate the convergence properties of the proposed algorithm and test it on four supervised and unsupervised learning problems. We found the algorithm superior to several known algorithms when tested on both generated and real data.
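The random-step scheme can be illustrated on a small multimodal objective: propose a random perturbation, keep it only if it improves the loss, and shrink the average step over time. The objective, acceptance rule, and decay rate below are toy assumptions standing in for the paper's algorithm, which operates on network weights and arbitrary performance measures.

```python
import numpy as np

rng = np.random.default_rng(5)

# A bumpy 2-D objective: a quadratic bowl plus sinusoidal ripples that
# create local minima for a gradient-free search to escape.
def f(w):
    return (w ** 2).sum() + 0.5 * np.sin(5 * w).sum()

w = rng.uniform(-3, 3, size=2)
best = f(w)
step = 1.0
for t in range(2000):
    cand = w + rng.normal(scale=step, size=2)   # random step of varying size
    fc = f(cand)
    if fc < best:                               # accept only improving moves
        w, best = cand, fc
    step *= 0.999                               # shrink the average step
print(f"best loss found: {best:.3f}")
```

Early large steps let the search hop between ripples; the geometric decay of `step` then narrows the search into a local basin, mirroring the convergence argument in the abstract.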

  • 7.
    Vettoruzzo, Anna
    et al.
    Halmstad University, School of Information Technology.
    Bouguelia, Mohamed-Rafik
    Halmstad University, School of Information Technology.
    Rögnvaldsson, Thorsteinn
    Halmstad University, School of Information Technology.
    Meta-learning for efficient unsupervised domain adaptation. 2024. In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 574, article id 127264. Article in journal (Refereed)
    Abstract [en]

    The standard machine learning assumption that training and test data are drawn from the same probability distribution does not hold in many real-world applications due to the inability to reproduce testing conditions at training time. Existing unsupervised domain adaptation (UDA) methods address this problem by learning a domain-invariant feature space that performs well on the available source domain(s) (labeled training data) and the specific target domain (unlabeled test data). In contrast, instead of simply adapting to domains, this paper aims for an approach that learns to adapt effectively to new unlabeled domains. To do so, we leverage meta-learning to optimize a neural network such that adapting its parameters to any domain using only unlabeled data yields good generalization on that domain. The experimental evaluation shows that the proposed approach outperforms standard approaches even when a small amount of unlabeled test data is used for adaptation, demonstrating the benefit of meta-learning prior knowledge from various domains to solve UDA problems.
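The unlabeled adaptation step that the meta-learner is trained to make effective can be illustrated in isolation: sharpen a classifier's predictions on unlabeled target-domain data by minimizing prediction entropy. Entropy minimization here is a common stand-in, not necessarily the paper's inner-loop loss; the linear model, shifted toy data, and numeric gradient are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def softmax(Z):
    e = np.exp(Z - Z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def mean_entropy(W, X):
    P = softmax(X @ W)
    return -(P * np.log(P + 1e-12)).sum(-1).mean()

# A source-trained linear classifier (assumed given) and unlabeled target
# data whose first feature is shifted relative to the source domain.
W = np.array([[2.0, -2.0], [0.0, 0.0]])          # 2 features, 2 classes
Xt = rng.normal(size=(100, 2)) + np.array([1.5, 0.0])

h0 = mean_entropy(W, Xt)
lr, eps = 0.5, 1e-5
for _ in range(50):
    # Numeric gradient of the entropy loss (tiny model; clarity over speed).
    G = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy(); Wp[i, j] += eps
            Wm = W.copy(); Wm[i, j] -= eps
            G[i, j] = (mean_entropy(Wp, Xt) - mean_entropy(Wm, Xt)) / (2 * eps)
    W -= lr * G
h1 = mean_entropy(W, Xt)
print(f"target entropy before {h0:.3f}, after {h1:.3f}")
```

In the full method, an outer meta-objective would tune the initial parameters so that a few such unlabeled steps already land the model near good target-domain performance.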
