ICGNet: An intensity-controllable generation network based on covering learning for face attribute synthesis
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; Center of Materials Science and Optoelectronics Engineering, School of Integrated Circuits, University of Chinese Academy of Sciences, Beijing, China.
University of Science and Technology of China, Hefei, China; Department of Computer Science, Yangtze University, Jingzhou, China.
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China.
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; Center of Materials Science and Optoelectronics Engineering, School of Integrated Circuits, University of Chinese Academy of Sciences, Beijing, China.
2024 (English). In: Information Sciences, ISSN 0020-0255, E-ISSN 1872-6291, Vol. 660, article id 120130. Article in journal (Refereed). Published.
Abstract [en]

Face attribute synthesis is a typical application of neural network technology. However, most current methods suffer from uncontrollable attribute intensity. In this study, we propose a novel intensity-controllable generation network (ICGNet) based on covering learning for face attribute synthesis. Specifically, it includes an encoder module, based on the principle of homology continuity between homologous samples, that maps facial images from different condition spaces onto the face feature space and constructs sufficient and effective representation vectors from the input information. It then models the relationships between attribute instances and representation vectors in this space to ensure accurate synthesis of the target attribute and complete preservation of irrelevant regions. Finally, it produces progressive changes in the facial attributes by applying different intensity constraints to the representation vectors. Compared with other methods, ICGNet achieves intensity-controllable face editing by extracting sufficient and effective representation features, exploring and transferring attribute relationships, and maintaining identity information. The source code is available at https://github.com/kllaodong/-ICGNet.

• We designed a new encoder module that maps face images from different condition spaces into the face feature space to obtain sufficient and effective face feature representations.

• Building on this feature extraction, we proposed a novel Intensity-Controllable Generation Network (ICGNet), which realizes face attribute synthesis with continuous intensity control while maintaining identity and semantic information.

• Quantitative and qualitative results showed that ICGNet outperforms current advanced models.

© 2024 Elsevier Inc.
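The core idea described in the abstract, shifting a representation vector along an attribute direction scaled by a user-chosen intensity before decoding, can be illustrated with a short sketch. The following PyTorch toy example is not the authors' ICGNet implementation (their code is at https://github.com/kllaodong/-ICGNet); the encoder/decoder architectures, latent size, and attr_direction vector are all illustrative assumptions.

# Illustrative sketch only -- NOT the ICGNet implementation from the paper.
# It shows the general idea of intensity-controllable attribute editing:
# an encoder maps a face image to a representation vector, an attribute
# direction is added with a chosen intensity, and a decoder reconstructs
# the edited image. Module names and sizes are assumptions.

import torch
import torch.nn as nn


class ToyEncoder(nn.Module):
    """Maps a 64x64 RGB image to a latent representation vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(inplace=True),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class ToyDecoder(nn.Module):
    """Maps a latent representation vector back to a 64x64 RGB image."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)


def edit_attribute(img, encoder, decoder, attr_direction, intensity):
    """Apply an attribute edit with a continuous intensity in [0, 1].

    The attribute direction is a vector in the representation space;
    scaling it controls how strongly the attribute is expressed, while
    the rest of the representation keeps identity information.
    """
    z = encoder(img)                         # representation vector
    z_edit = z + intensity * attr_direction  # intensity-scaled shift
    return decoder(z_edit)


if __name__ == "__main__":
    enc, dec = ToyEncoder(), ToyDecoder()
    attr = torch.randn(128)                  # stand-in for a learned direction
    face = torch.rand(1, 3, 64, 64)          # stand-in for a face image
    # Sweep the intensity to obtain progressively stronger edits.
    for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
        out = edit_attribute(face, enc, dec, attr, alpha)
        print(alpha, out.shape)              # torch.Size([1, 3, 64, 64])

Sweeping the intensity value in edit_attribute mirrors, in simplified form, the progressive attribute changes the paper obtains by applying different intensity constraints to its representation vectors.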

Place, publisher, year, edition, pages
New York: Elsevier, 2024. Vol. 660, article id 120130
Keywords [en]
Face attribute synthesis, Controllable intensity, Covering learning, Generative adversarial network, Image processing
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-54282
DOI: 10.1016/j.ins.2024.120130
ISI: 001168971300001
Scopus ID: 2-s2.0-85182744992
OAI: oai:DiVA.org:hh-54282
DiVA, id: diva2:1883506
Note

This work is supported by the National Natural Science Foundation of China (no. 62373343) and Beijing Natural Science Foundation (no. L233036).

Available from: 2024-07-10. Created: 2024-07-10. Last updated: 2024-07-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Tiwari, Prayag
