A prompt regularization approach to enhance few-shot class-incremental learning with Two-Stage Classifier
Hebei University of Engineering, Handan, China; Institute of Semiconductors Chinese Academy of Sciences, Beijing, China.
Hebei University of Engineering, Handan, China.
Hebei University of Engineering, Handan, China.
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-2851-4260
2025 (English). In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 188, p. 1-11, article id 107453. Article in journal (Refereed). In press.
Abstract [en]

With a limited number of labeled samples, Few-Shot Class-Incremental Learning (FSCIL) seeks to efficiently train and update models without forgetting previously learned tasks. Because pre-trained models can learn rich feature representations from large existing datasets, they offer strong knowledge foundations and transferability, which makes them useful in both few-shot and incremental learning scenarios. Additionally, Prompt Learning improves the performance of pre-trained deep learning models on downstream tasks, particularly in large-scale language or vision models. In this paper, we propose a novel Prompt Regularization (PrRe) approach that maximizes the fusion of prompts by embedding two different prompts, the Task Prompt and the Global Prompt, inside a pre-trained Vision Transformer (ViT). In the classification phase, we propose a Two-Stage Classifier (TSC), which uses K-Nearest Neighbors for the base session and a Prototype Classifier for the incremental sessions, integrated with a global self-attention module. Through experiments on multiple benchmarks, we demonstrate the effectiveness and superiority of our method. The code is available at https://github.com/gyzzzzzzzz/PrRe. © 2025 Elsevier Ltd

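The record itself contains no implementation, but a minimal sketch may help make the Two-Stage Classifier idea in the abstract concrete: a KNN vote over stored base-session features, and class-mean prototypes for incremental sessions. All names here (TwoStageClassifier, fit_base, add_incremental), the cosine-similarity scoring, and the score-based hand-off between the two stages are assumptions for illustration only; the authors' actual code is in the linked GitHub repository.

    import numpy as np

    class TwoStageClassifier:
        # Sketch (assumed design, not the authors' code): KNN over stored
        # base-session features, nearest class-mean prototype for classes
        # added in incremental sessions.
        def __init__(self, k=5):
            self.k = k
            self.base_feats = None   # (N, D) L2-normalized base features
            self.base_labels = None  # (N,) integer class labels
            self.prototypes = {}     # class id -> (D,) normalized mean

        def fit_base(self, feats, labels):
            # Store all base-session features for the KNN stage.
            self.base_feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
            self.base_labels = labels

        def add_incremental(self, feats, labels):
            # One prototype (normalized class mean) per new few-shot class.
            feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
            for c in np.unique(labels):
                proto = feats[labels == c].mean(axis=0)
                self.prototypes[int(c)] = proto / np.linalg.norm(proto)

        def predict(self, feat):
            feat = feat / np.linalg.norm(feat)
            # Stage 1: KNN majority vote over base features (cosine similarity).
            sims = self.base_feats @ feat
            top = self.base_labels[np.argsort(sims)[-self.k:]]
            knn_label = int(np.bincount(top).argmax())
            knn_score = sims.max()
            # Stage 2: nearest prototype among incremental classes; the
            # score comparison is an assumed tie-break between stages.
            if self.prototypes:
                ids = np.array(list(self.prototypes))
                protos = np.stack([self.prototypes[i] for i in ids])
                psims = protos @ feat
                if psims.max() > knn_score:
                    return int(ids[psims.argmax()])
            return knn_label

In this sketch, feature extraction (the prompted ViT backbone and the global self-attention module described in the abstract) is assumed to happen upstream; the classifier only consumes fixed feature vectors.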
Place, publisher, year, edition, pages
Oxford: Elsevier, 2025. Vol. 188, p. 1-11, article id 107453
Keywords [en]
Few-shot class-incremental learning, Global self-attention module, Prompt regularization, Two-Stage Classifier
National Category
Natural Language Processing; Signal Processing
Identifiers
URN: urn:nbn:se:hh:diva-55930
DOI: 10.1016/j.neunet.2025.107453
ISI: 001469409400001
Scopus ID: 2-s2.0-105002288715
OAI: oai:DiVA.org:hh-55930
DiVA id: diva2:1955438
Available from: 2025-04-30. Created: 2025-04-30. Last updated: 2025-04-30. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Tiwari, Prayag
