hh.sePublications
Meta-learning for efficient unsupervised domain adaptation
Vettoruzzo, Anna. Halmstad University, School of Information Technology. ORCID iD: 0000-0003-0185-5038
Bouguelia, Mohamed-Rafik. Halmstad University, School of Information Technology. ORCID iD: 0000-0002-2859-6155
Rögnvaldsson, Thorsteinn. Halmstad University, School of Information Technology. ORCID iD: 0000-0001-5163-2997
2024 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 574, article id 127264. Article in journal (Refereed). Published.
Abstract [en]

The standard machine learning assumption that training and test data are drawn from the same probability distribution does not hold in many real-world applications, because testing conditions often cannot be reproduced at training time. Existing unsupervised domain adaptation (UDA) methods address this problem by learning a domain-invariant feature space that performs well on the available source domain(s) (labeled training data) and a specific target domain (unlabeled test data). In contrast, instead of simply adapting to given domains, this paper aims for an approach that learns to adapt effectively to new unlabeled domains. To do so, we leverage meta-learning to optimize a neural network such that adapting its parameters to any domain using only unlabeled data yields good generalization on that domain. The experimental evaluation shows that the proposed approach outperforms standard approaches even when only a small amount of unlabeled test data is used for adaptation, demonstrating the benefit of meta-learning prior knowledge from various domains to solve UDA problems.
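The idea in the abstract, learning an initialisation whose unlabeled adaptation generalises, can be illustrated with a minimal first-order, MAML-style sketch. This is not the paper's actual architecture or objective: here, entropy minimisation stands in as the unsupervised inner-loop adaptation, logistic regression replaces the neural network, and randomly rotated 2-D blobs play the role of domains. All hyperparameters and the data generator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_domain():
    """A 'domain' is a small random rotation of 2-D blob features.
    Labels depend on the first canonical axis before rotation."""
    theta = rng.uniform(-0.5, 0.5)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    X0 = rng.normal(size=(64, 2))
    y = (X0[:, 0] > 0).astype(float)
    X0[:, 0] += np.where(y > 0, 2.0, -2.0)   # separate the two classes
    return X0 @ R.T, y

def ce_grad(w, X, y):
    """Gradient of mean cross-entropy loss for logistic regression."""
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def entropy_grad(w, X):
    """Gradient of mean prediction entropy (needs no labels)."""
    z = X @ w
    p = sigmoid(z)
    return X.T @ (-z * p * (1 - p)) / len(X)

def adapt(w, X_unlab, lr=0.5, steps=5):
    """Unsupervised inner-loop adaptation by entropy minimisation."""
    for _ in range(steps):
        w = w - lr * entropy_grad(w, X_unlab)
    return w

# Meta-training (first-order): learn an initialisation w such that
# entropy-minimising on unlabeled data from a sampled domain yields
# low supervised loss on that same domain.
w = rng.normal(size=2) * 0.1
for _ in range(300):
    X, y = sample_domain()
    w_adapted = adapt(w, X)          # inner loop: unlabeled adaptation
    g = ce_grad(w_adapted, X, y)     # labels used only at meta-train time
    w = w - 0.2 * g                  # first-order meta-update

# Meta-test: adapt to an unseen domain using only its unlabeled data.
X, y = sample_domain()
w_new = adapt(w, X)
acc = np.mean((sigmoid(X @ w_new) > 0.5) == (y > 0.5))
print(f"post-adaptation accuracy on unseen domain: {acc:.2f}")
```

The first-order simplification (applying the post-adaptation gradient directly to the initialisation, as in FOMAML) avoids differentiating through the inner loop; whether the paper uses first- or second-order meta-gradients is not stated in this record.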

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2024. Vol. 574, article id 127264
Keywords [en]
Domain adaptation, Meta-learning, Unsupervised learning, Distribution shift
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-52450
DOI: 10.1016/j.neucom.2024.127264
ISI: 001170864800001
Scopus ID: 2-s2.0-85184141702
OAI: oai:DiVA.org:hh-52450
DiVA id: diva2:1829930
Funder
Knowledge Foundation
Available from: 2024-01-22. Created: 2024-01-22. Last updated: 2025-10-01. Bibliographically approved.
In thesis
1. Advancing Meta-Learning for Enhanced Generalization Across Diverse Tasks
2025 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Meta-learning, or learning to learn, is a rapidly evolving area in machine learning that aims to enhance the adaptability and efficiency of learning algorithms. Inspired by the human ability to learn new concepts from limited examples and quickly adapt to unforeseen situations, meta-learning leverages prior experience to prepare models for fast adaptation to new tasks. Unlike traditional machine learning systems, where models are trained for specific tasks, meta-learning frameworks enable models to acquire generalized knowledge during training and efficiently learn new tasks during inference. This ability to generalize from past experiences to new tasks makes meta-learning a key focus in advancing artificial intelligence, offering the potential to create more flexible and efficient AI systems capable of performing well with minimal data.

In this thesis, we begin by formally defining the meta-learning framework, establishing clear terminology, and synthesizing existing work in a comprehensive survey paper. Building on this foundation, we demonstrate how meta-learning can be integrated into various fields to enhance model performance and extend capabilities to few-shot learning scenarios. We show how meta-learning can significantly improve the accuracy and efficiency of transferring knowledge across domains in domain adaptation. In scenarios involving a multimodal distribution of tasks, we develop methods that efficiently learn from and adapt to a wide variety of tasks drawn from different modes within the distribution, ensuring effective adaptation across diverse domains. Our work on personalized federated learning highlights meta-learning's potential to tailor federated learning processes to individual user needs while maintaining privacy and data security. Additionally, we address the challenges of continual learning by developing models that continuously integrate new information without forgetting previously acquired knowledge. For time series data analysis, we present meta-learning strategies that automatically learn optimal augmentation techniques, enhancing model predictions and offering robust solutions for real-world applications. Lastly, our pioneering research on unsupervised meta-learning via in-context learning explores innovative approaches for constructing tasks and learning effectively from unlabeled data.

Overall, the contributions of this thesis emphasize the potential of meta-learning techniques to improve performance across diverse research areas and demonstrate how advancements in one area can benefit the field as a whole.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2025. p. 46
Series
Halmstad University Dissertations ; 127
Keywords
Meta-learning, Few-shot learning, Domain adaptation, Federated learning, Continual learning, Unsupervised learning, In-context learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-55147 (URN)
978-91-89587-71-7 (ISBN)
978-91-89587-70-0 (ISBN)
Public defence
2025-02-03, 13:00, S1022, Kristian IV:s väg 3, 30118 Halmstad (English)
Available from: 2025-01-08. Created: 2025-01-07. Last updated: 2025-10-01. Bibliographically approved.

Open Access in DiVA

Meta-learning for efficient unsupervised domain adaptation (1803 kB), 342 downloads
File information
File name: FULLTEXT01.pdf. File size: 1803 kB. Checksum (SHA-512):
709c4cbcb339a73f7f490d1504412507630dc687031d3d3eb0e2086a896bc69a7e20bdd87cb9ee59cf9621d1e341c80c50dcaedccad8ea6a828886e9336d4a6d
Type: fulltext. Mimetype: application/pdf.

Other links

Publisher's full text
Scopus

Authority records

Vettoruzzo, Anna; Bouguelia, Mohamed-Rafik; Rögnvaldsson, Thorsteinn
