hh.se Publications
Comparing Facial Expressions for Face Swapping Evaluation with Supervised Contrastive Representation Learning
Berge Consulting, Gothenburg, Sweden; Rise Research Institutes Of Sweden, Gothenburg, Sweden.
Halmstad University, School of Information Technology. Rise Research Institutes Of Sweden, Gothenburg, Sweden. ORCID iD: 0000-0002-1043-8773
2021 (English). In: 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021): Proceedings / [ed] Vitomir Štruc; Marija Ivanovska, Piscataway: IEEE, 2021. Conference paper, Published paper (Refereed)
Abstract [en]

Measuring and comparing facial expressions has several practical applications. One such application is to compute a facial expression embedding and to compare the distances between such embeddings in order to determine how well identity- and face-swapping algorithms preserve facial expression information. A useful aspect is to show how well expressions are preserved while facial data is anonymized during privacy-aware data collection. We show that weighted supervised contrastive learning is a strong approach for learning facial expression representation embeddings and for dealing with class-imbalance bias. By feeding a classifier head with the learned embeddings, we reach competitive state-of-the-art results. Furthermore, we demonstrate the use case of measuring the distance between the expressions of a target face, a source face, and the anonymized target face in the facial anonymization context. © 2021 IEEE.
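The evaluation idea in the abstract, comparing expression embeddings of the source, target, and anonymized faces, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding vectors are hypothetical stand-ins for the output of a learned expression encoder, and cosine distance is one common choice of embedding metric.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two embedding vectors (0 = same direction)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical expression embeddings for the source face, the target face,
# and the anonymized (face-swapped) target face.
source = np.array([0.2, 0.9, 0.1])
target = np.array([0.8, 0.1, 0.3])
anonymized = np.array([0.75, 0.15, 0.3])

# If the swap preserves the target's expression, the anonymized face's
# embedding should lie closer to the target's than to the source's.
d_target = cosine_distance(anonymized, target)
d_source = cosine_distance(anonymized, source)
assert d_target < d_source
```

In the paper's setting the encoder producing these embeddings is trained with a weighted supervised contrastive loss; the distance comparison above is only the downstream evaluation step.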

Place, publisher, year, edition, pages
Piscataway: IEEE, 2021.
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-46506
DOI: 10.1109/FG52635.2021.9666958
ISI: 000784811600027
Scopus ID: 2-s2.0-85125063047
ISBN: 978-1-6654-3176-7 (electronic)
OAI: oai:DiVA.org:hh-46506
DiVA, id: diva2:1653134
Conference
16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021, Virtual, Jodhpur, India, 15-18 December, 2021
Available from: 2022-04-21 Created: 2022-04-21 Last updated: 2024-03-18. Bibliographically approved
In thesis
1. Anonymizing Faces without Destroying Information
2024 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Anonymization is a broad term, meaning that personal data, or rather data that identifies a person, is redacted or obscured. In the context of video and image data, the most palpable information is the face. Faces barely change compared to other aspects of a person, such as clothes, and we as people already have a strong sense for recognizing faces. Computers are also adroit at recognizing faces, with facial recognition models being exceptionally powerful at identifying and comparing them. It is therefore generally considered important to obscure the faces in video and image data when aiming to keep it anonymized. Traditionally, this is done simply through blurring or masking, but this destroys useful information such as eye gaze, pose, expression, and the very fact that it is a face. This is a particular issue today, as our society is data-driven in many respects. One obvious example is autonomous driving and driver monitoring, where necessary algorithms such as object detectors rely on deep learning to function. Due to the data hunger of deep learning, in conjunction with society's call for privacy and integrity through regulations such as the General Data Protection Regulation (GDPR), anonymization that preserves useful information becomes important.

This thesis investigates the potential and possible limitations of anonymizing faces without destroying the aforementioned useful information. The base approach is face swapping and face manipulation, where current research focuses on changing the face (or identity) while keeping the original attribute information, all while remaining incorporated and consistent in an image and/or video. Specifically, this thesis demonstrates how target-oriented and subject-agnostic face swapping methodologies can be utilized for realistic anonymization that preserves attributes. Through this, the thesis presents several approaches that are: 1) controllable, meaning the proposed models do not naively change the identity; the kind and magnitude of the identity change is adjustable, and thus tunable to guarantee anonymization; 2) subject-agnostic, meaning the models can handle any identity; and 3) fast, meaning the models run efficiently and thus have the potential of running in real time. The end product is an anonymizer that achieves state-of-the-art performance on identity transfer, pose retention, and expression retention while providing realism.

Apart from identity manipulation, the thesis demonstrates potential security issues, specifically reconstruction attacks, where a bad-actor model learns convolutional traces/patterns in the anonymized images in such a way that it can completely reconstruct the original identity. The bad-actor network can do this with simple black-box access to the anonymization model, by constructing a pair-wise dataset of unanonymized and anonymized faces. To alleviate this issue, different defense measures that disrupt the traces in the anonymized image were investigated. The main takeaway is that an anonymization that qualitatively looks convincing at hiding an identity does not necessarily do so, which makes robust quantitative evaluation important.
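The reconstruction-attack setup described above can be illustrated with a toy sketch. Everything here is hypothetical: a fixed random linear map stands in for the black-box anonymizer, random vectors stand in for face images, and least squares stands in for the learned bad-actor network; the point is only the attack structure (query the black box, build pairs, fit an inverse).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a black-box anonymizer: the attacker can only call it,
# never inspect it. (A fixed linear map here, purely for illustration.)
_secret = rng.standard_normal((8, 8))
def anonymize(x):
    return x @ _secret

# Step 1: query the black box to build a pair-wise dataset of
# (unanonymized, anonymized) "faces" -- random vectors stand in for images.
originals = rng.standard_normal((200, 8))
anon = anonymize(originals)

# Step 2: fit a reconstruction model on the pairs (least squares here;
# the thesis attack trains a network on real image pairs).
W, *_ = np.linalg.lstsq(anon, originals, rcond=None)

# Step 3: reconstruct an unseen face from its anonymized version alone.
unseen = rng.standard_normal((1, 8))
recovered = anonymize(unseen) @ W
leaked = np.allclose(recovered, unseen, atol=1e-6)  # identity leaks back
```

The defenses mentioned in the abstract correspond, in this sketch, to perturbing the anonymizer's output so that the pairs no longer determine a usable inverse.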

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2024. p. 50
Series
Halmstad University Dissertations ; 111
Keywords
Anonymization, Data Privacy, Generative AI, Reconstruction Attacks, Deep Fakes, Facial Recognition, Identity Tracking, Biometrics
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:hh:diva-52892
ISBN: 978-91-89587-36-6
ISBN: 978-91-89587-35-9
Presentation
2024-04-10, S1078, Halmstad University, Kristian IV:s väg 3, Halmstad, 10:00 (English)
Available from: 2024-03-18 Created: 2024-03-18 Last updated: 2024-03-18. Bibliographically approved

Open Access in DiVA

No full text in DiVA


Authority records

Englund, Cristofer
