hh.se Publications
Meta-Learning from Multimodal Task Distributions Using Multiple Sets of Meta-Parameters
Vettoruzzo, Anna, Halmstad University, School of Information Technology. ORCID iD: 0000-0003-0185-5038
Bouguelia, Mohamed-Rafik, Halmstad University, School of Information Technology. ORCID iD: 0000-0002-2859-6155
Rögnvaldsson, Thorsteinn, Halmstad University, School of Information Technology. ORCID iD: 0000-0001-5163-2997
2023 (English). In: 2023 International Joint Conference on Neural Networks (IJCNN), Piscataway, NJ: IEEE, 2023, p. 1-8. Conference paper, Published paper (Refereed)
Abstract [en]

Meta-learning, or learning to learn, involves training a model on various learning tasks in a way that allows it to quickly learn new tasks from the same distribution using only a small amount of training data (i.e., few-shot learning). Current meta-learning methods implicitly assume that the distribution over tasks is unimodal and consists of tasks belonging to a common domain, which significantly reduces the variety of task distributions they can handle. However, in real-world applications, tasks are often very diverse and come from multiple different domains, making it challenging to meta-learn common knowledge shared across the entire task distribution. In this paper, we propose a method for meta-learning from a multimodal task distribution. The proposed method learns multiple sets of meta-parameters (acting as different initializations of a neural network model) and uses a task encoder to select the best initialization to fine-tune for a new task. More specifically, with a few training examples from a task sampled from an unknown mode, the proposed method predicts which set of meta-parameters (i.e., model's initialization) would lead to a fast adaptation and a good post-adaptation performance on that task. We evaluate the proposed method on a diverse set of few-shot regression and image classification tasks. The results demonstrate the superiority of the proposed method compared to other state-of-the-art meta-learning methods and the benefit of learning multiple model initializations when tasks are sampled from a multimodal task distribution. © 2023 IEEE.
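The selection-then-adaptation idea in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the paper learns a task encoder that predicts the best initialization, whereas here the `select_and_adapt` helper simply scores each stored initialization by its support-set loss (a hypothetical stand-in for that prediction) before fine-tuning the winner with a few gradient steps.

```python
import numpy as np

def adapt(theta, X, y, lr=0.1, steps=5):
    # fine-tuning: a few gradient steps of MSE on the support set
    w = theta.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def select_and_adapt(inits, X_support, y_support):
    # stand-in for the learned task encoder: score every stored
    # set of meta-parameters on the support set, adapt the best one
    losses = [mse(th, X_support, y_support) for th in inits]
    best = int(np.argmin(losses))
    return best, adapt(inits[best], X_support, y_support)

rng = np.random.default_rng(0)
# two "modes" of 1-D linear regression: positive vs. negative slope,
# represented by one initialization per mode
inits = [np.array([2.0]), np.array([-2.0])]
X = rng.normal(size=(5, 1))                            # 5-shot support set
y = X @ np.array([-1.8]) + 0.01 * rng.normal(size=5)   # negative-slope task
best, w = select_and_adapt(inits, X, y)
```

With tasks drawn from the negative-slope mode, the second initialization is selected and fine-tuned, illustrating why a single shared initialization (as in standard unimodal meta-learning) would be a poor starting point for at least one of the modes.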

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2023. p. 1-8
Keywords [en]
Meta-Learning, Few-Shot Learning, Transfer Learning, Task Representation, Multimodal Distribution
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-51352
DOI: 10.1109/IJCNN54540.2023.10191944
ISI: 001046198707013
Scopus ID: 2-s2.0-85169561819
ISBN: 978-1-6654-8867-9 (electronic)
OAI: oai:DiVA.org:hh-51352
DiVA id: diva2:1786779
Conference
International Joint Conference on Neural Networks (IJCNN 2023), Gold Coast, Australia, 18-23 June, 2023
Available from: 2023-08-10. Created: 2023-08-10. Last updated: 2023-12-05. Bibliographically approved.

Open Access in DiVA

Meta-Learning from Multimodal Task Distributions Using Multiple Sets of Meta-Parameters (458 kB), 112 downloads
File information
File name: FULLTEXT01.pdf
File size: 458 kB
Checksum (SHA-512): 5aa0837650160cd770ba825e8c202c4ea3d7259b57f767d8bc660896c976148853f370f301657785cc12e6b4a2c90b126b5307e98d35586875a5db71fba2ad90
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text | Scopus

Authority records

Vettoruzzo, Anna; Bouguelia, Mohamed-Rafik; Rögnvaldsson, Thorsteinn

