Advances and Challenges in Meta-Learning: A Technical Review
Halmstad University, School of Information Technology. ORCID iD: 0000-0003-0185-5038
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-2859-6155
Eindhoven University of Technology, Eindhoven, Netherlands.
Halmstad University, School of Information Technology. ORCID iD: 0000-0001-5163-2997
2024 (English). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 46, no. 7, p. 4763-4779. Article, review/survey (Refereed). Published.
Abstract [en]

Meta-learning empowers learning systems with the ability to acquire knowledge from multiple tasks, enabling faster adaptation and generalization to new tasks. This review provides a comprehensive technical overview of meta-learning, emphasizing its importance in real-world applications where data may be scarce or expensive to obtain. The paper covers the state-of-the-art meta-learning approaches and explores the relationship between meta-learning and multi-task learning, transfer learning, domain adaptation and generalization, self-supervised learning, personalized federated learning, and continual learning. By highlighting the synergies between these topics and the field of meta-learning, the paper demonstrates how advancements in one area can benefit the field as a whole, while avoiding unnecessary duplication of efforts. Additionally, the paper delves into advanced meta-learning topics such as learning from complex multi-modal task distributions, unsupervised meta-learning, learning to efficiently adapt to data distribution shifts, and continual meta-learning. Lastly, the paper highlights open problems and challenges for future research in the field. By synthesizing the latest research developments, this paper provides a thorough understanding of meta-learning and its potential impact on various machine learning applications. We believe that this technical overview will contribute to the advancement of meta-learning and its practical implications in addressing real-world problems.
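
To make the core idea concrete, the following is a minimal sketch of gradient-based meta-learning in the spirit of MAML, one family of approaches covered by the review: an inner loop adapts a copy of the parameters to a single task using a small support set, and an outer loop updates the shared initialization so that this adaptation generalizes to the task's query set. The sinusoid-regression tasks, network architecture, and step sizes below are illustrative assumptions, not the paper's specific experimental setup.

import torch
import torch.nn as nn

def sample_task():
    """Hypothetical task generator: regress a sine wave with random amplitude and phase."""
    amp = torch.rand(1).item() * 4.0 + 0.1
    phase = torch.rand(1).item() * 3.14
    def draw(n):
        x = torch.rand(n, 1) * 10.0 - 5.0
        return x, amp * torch.sin(x + phase)
    return draw

# Shared initialization ("meta-parameters") that the outer loop optimizes.
model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
inner_lr = 0.01

for meta_step in range(1000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                       # a batch of tasks per meta-update
        draw = sample_task()
        x_s, y_s = draw(10)                  # support set: used for fast adaptation
        x_q, y_q = draw(10)                  # query set: evaluates the adapted model
        params = list(model.parameters())
        # Inner loop: one gradient step on the support set, keeping the graph so
        # the outer update can differentiate through the adaptation step.
        grads = torch.autograd.grad(loss_fn(model(x_s), y_s), params, create_graph=True)
        w1, b1, w2, b2 = [p - inner_lr * g for p, g in zip(params, grads)]
        # Functional forward pass with the adapted parameters on the query set.
        y_pred = torch.relu(x_q @ w1.t() + b1) @ w2.t() + b2
        meta_loss = meta_loss + loss_fn(y_pred, y_q)
    # Outer loop: move the initialization so that one adaptation step works well
    # on new tasks drawn from the same distribution.
    meta_loss.backward()
    meta_opt.step()

At meta-test time, the same single inner step on a handful of labelled examples from an unseen task is all that is needed to specialize the model, which is the sense in which meta-learning "learns to adapt quickly".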

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2024. Vol. 46, no. 7, p. 4763-4779.
Keywords [en]
Adaptation models, Data models, deep neural networks, few-shot learning, Meta-learning, representation learning, Surveys, Task analysis, Training, transfer learning
National Category
Robotics and automation
Identifiers
URN: urn:nbn:se:hh:diva-52730
DOI: 10.1109/TPAMI.2024.3357847
Scopus ID: 2-s2.0-85183973598
OAI: oai:DiVA.org:hh-52730
DiVA id: diva2:1840318
Funder
Knowledge Foundation
Available from: 2024-02-23. Created: 2024-02-23. Last updated: 2025-02-09. Bibliographically approved.
In thesis
1. Advancing Meta-Learning for Enhanced Generalization Across Diverse Tasks
2025 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Meta-learning, or learning to learn, is a rapidly evolving area in machine learning that aims to enhance the adaptability and efficiency of learning algorithms. Inspired by the human ability to learn new concepts from limited examples and quickly adapt to unforeseen situations, meta-learning leverages prior experience to prepare models for fast adaptation to new tasks. Unlike traditional machine learning systems, where models are trained for specific tasks, meta-learning frameworks enable models to acquire generalized knowledge during training and efficiently learn new tasks during inference. This ability to generalize from past experiences to new tasks makes meta-learning a key focus in advancing artificial intelligence, offering the potential to create more flexible and efficient AI systems capable of performing well with minimal data.
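
As an illustration of the contrast drawn above between training on a fixed task and training across tasks, the sketch below shows how episodic N-way K-shot classification tasks are commonly constructed for meta-learning from an ordinary labelled dataset. The class counts, shot counts, and toy dataset are illustrative assumptions rather than the thesis's specific setup.

import random
from collections import defaultdict

def make_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot task: a small support set for adaptation
    and a query set for evaluating the adapted model."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    chosen = random.sample(list(by_class), n_way)
    support, query = [], []
    for new_label, cls in enumerate(chosen):   # relabel classes 0..n_way-1 within the episode
        examples = random.sample(by_class[cls], k_shot + n_query)
        support += [(x, new_label) for x in examples[:k_shot]]
        query += [(x, new_label) for x in examples[k_shot:]]
    return support, query

# Toy usage: during meta-training many such episodes are drawn from "base" classes;
# at meta-test time episodes are built from classes never seen during training.
toy_data = [([float(i), float(c)], c) for c in range(10) for i in range(20)]
support, query = make_episode(toy_data, n_way=5, k_shot=1, n_query=3)

Meta-training then iterates over many such episodes, so the quantity being optimized is performance after adaptation to a task rather than performance on a fixed set of examples.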

In this thesis, we begin by formally defining the meta-learning framework, establishing clear terminology, and synthesizing existing work in a comprehensive survey paper. Building on this foundation, we demonstrate how meta-learning can be integrated into various fields to enhance model performance and extend capabilities to few-shot learning scenarios. We show how meta-learning can significantly improve the accuracy and efficiency of transferring knowledge across domains in domain adaptation. In scenarios involving a multimodal distribution of tasks, we develop methods that efficiently learn from and adapt to a wide variety of tasks drawn from different modes within the distribution, ensuring effective adaptation across diverse domains. Our work on personalized federated learning highlights meta-learning's potential to tailor federated learning processes to individual user needs while maintaining privacy and data security. Additionally, we address the challenges of continual learning by developing models that continuously integrate new information without forgetting previously acquired knowledge. For time series data analysis, we present meta-learning strategies that automatically learn optimal augmentation techniques, enhancing model predictions and offering robust solutions for real-world applications. Lastly, our pioneering research on unsupervised meta-learning via in-context learning explores innovative approaches for constructing tasks and learning effectively from unlabeled data.

Overall, the contributions of this thesis emphasize the potential of meta-learning techniques to improve performance across diverse research areas and demonstrate how advancements in one area can benefit the field as a whole.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2025. p. 46
Series
Halmstad University Dissertations ; 127
Keywords
Meta-learning, Few-shot learning, Domain adaptation, Federated learning, Continual learning, Unsupervised learning, In-context learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-55147 (URN)
978-91-89587-71-7 (ISBN)
978-91-89587-70-0 (ISBN)
Public defence
2025-02-03, S1022, Kristian IV:s väg 3, 30118 Halmstad, 13:00 (English)
Available from: 2025-01-08. Created: 2025-01-07. Last updated: 2025-01-08. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Vettoruzzo, Anna; Bouguelia, Mohamed-Rafik; Rögnvaldsson, Thorsteinn

Search in DiVA

By author/editor
Vettoruzzo, Anna; Bouguelia, Mohamed-Rafik; Rögnvaldsson, Thorsteinn; Santosh, KC
By organisation
School of Information Technology
In the same journal
IEEE Transactions on Pattern Analysis and Machine Intelligence
