hh.sePublications
One-shot many-to-many facial reenactment using Bi-Layer Graph Convolutional Networks
Beijing Institute of Technology, Beijing, China.ORCID iD: 0000-0001-7105-2674
College of Engineering, Jouf University, Sakaka, Saudi Arabia.ORCID iD: 0000-0002-9062-7493
Beijing Institute of Technology, Beijing, China.
College of Engineering, Jouf University, Sakaka, Saudi Arabia.ORCID iD: 0000-0002-4099-1254
2022 (English). In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 156, p. 193-204. Article in journal (Refereed). Published.
Abstract [en]

Facial reenactment aims to animate a source face image into a new pose and expression using a driving face image. Existing methods are either designed around one or a few specific identities, or struggle to preserve identity in few-shot settings. Previous research has modeled facial reenactment using multiple pictures of the same subject. In contrast, this paper presents a novel one-shot many-to-many facial reenactment model that uses only a single image of a face. The proposed model produces a face that matches the target pose and expression while preserving the source identity. The technique can simulate motion from a single image by decomposing the object into two layers. Combining this bi-layer representation with a Convolutional Neural Network (CNN), we name our model Bi-Layer Graph Convolutional Layers (BGCLN); it is used to create an optical-flow representation from the latent vector, yielding the precise structure and shape of the optical flow. Comprehensive experiments suggest that our technique produces high-quality results and outperforms recent techniques in both qualitative and quantitative comparisons. The proposed system performs facial reenactment at 15 fps, which is approximately real time. Our code is publicly available at https://github.com/usaeed786/BGCLN
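The abstract names graph convolutional layers as the model's building block. As a point of reference only, a single graph-convolution propagation step (aggregate neighbor features over an adjacency structure, then linearly project) can be sketched in plain Python. This is a generic illustration of that building block, not the authors' BGCLN implementation; the toy landmark graph, feature values, and projection matrix below are invented for the example.

```python
# Generic graph-convolution step: H' = D^-1 (A + I) H W.
# Illustrative only; not taken from the BGCLN repository.

def graph_conv(adj, feats, weight):
    """One propagation step: average neighbor features, then project.

    adj    : n x n adjacency matrix (0/1 entries), self-loops added here
    feats  : n x f node-feature matrix
    weight : f x g projection matrix
    """
    n = len(adj)
    # add self-loops so each node retains its own features
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # aggregate: D^-1 (A + I) H  (mean over each node's neighborhood)
    agg = [
        [sum(a[i][k] * feats[k][c] for k in range(n)) / deg[i]
         for c in range(len(feats[0]))]
        for i in range(n)
    ]
    # project the aggregated features: (...) W
    return [
        [sum(agg[i][c] * weight[c][g] for c in range(len(weight)))
         for g in range(len(weight[0]))]
        for i in range(n)
    ]

# toy example: 3 nodes in a chain, 2-d features, identity projection
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [[1.0, 0.0],
         [0.0, 1.0],
         [1.0, 1.0]]
eye = [[1.0, 0.0],
       [0.0, 1.0]]
out = graph_conv(adj, feats, eye)
```

In a reenactment setting, the nodes would correspond to facial keypoints and the output features would feed the optical-flow estimation; here the example only demonstrates the propagation rule itself.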

Place, publisher, year, edition, pages
Oxford: Elsevier, 2022. Vol. 156, p. 193-204
Keywords [en]
Facial reenactment, CNN, BGCLN
National Category
Computer Systems
Research subject
Health Innovation, IDC
Identifiers
URN: urn:nbn:se:hh:diva-48299
DOI: 10.1016/j.neunet.2022.09.031
ISI: 000886066900006
PubMedID: 36274526
Scopus ID: 2-s2.0-85140099344
OAI: oai:DiVA.org:hh-48299
DiVA id: diva2:1702122
Available from: 2022-10-10. Created: 2022-10-10. Last updated: 2023-01-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
PubMed
Scopus

Authority records

Tiwari, Prayag

Search in DiVA

By author/editor
Saeed, Uzair; Armghan, Ammar; Alenezi, Fayadh; Yue, Sun; Tiwari, Prayag
By organisation
School of Information Technology
In the same journal
Neural Networks
Computer Systems
