FIVA: Facial Image and Video Anonymization and Anonymization Defense
Halmstad University, School of Information Technology. Berge Consulting, Gothenburg, Sweden.
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-5712-6777
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-1043-8773
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-1400-346X
2023 (English). In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Los Alamitos, CA: IEEE, 2023, pp. 362-371. Conference paper, published paper (refereed)
Abstract [en]

In this paper, we present a new approach for facial anonymization in images and videos, abbreviated as FIVA. Our proposed method maintains the same face anonymization consistently across frames with our proposed identity tracking, and guarantees a strong difference from the original face. FIVA allows for 0 true positives for a false acceptance rate of 0.001. Our work considers the important security issue of reconstruction attacks and investigates adversarial noise, uniform noise, and parameter noise to disrupt such attacks. In this regard, we apply different defense and protection methods against these privacy threats to demonstrate the scalability of FIVA. On top of this, we also show that reconstruction attack models can be used for detection of deep fakes. Last but not least, we provide experimental results showing how FIVA can even enable face swapping, trained purely on a single target image. © 2023 IEEE.
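The anonymization guarantee claimed in the abstract (zero true positives at a false acceptance rate of 0.001) amounts to checking that the anonymized face's identity embedding is no longer matched to the original by a face recognizer operating at that threshold. A minimal sketch of such a check, assuming generic identity embeddings and cosine similarity (the names and the threshold value are illustrative, not FIVA's actual implementation):

```python
import math

# Similarity threshold calibrated so that a recognizer operating at a false
# acceptance rate of 0.001 would not match the two embeddings. The concrete
# value is illustrative; in practice it comes from the recognizer's ROC curve.
FAR_THRESHOLD = 0.3

def cosine_similarity(a, b):
    """Cosine similarity between two identity embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_anonymized(original_emb, anonymized_emb, threshold=FAR_THRESHOLD):
    """True if the anonymized face would NOT be matched to the original
    identity at the chosen operating point."""
    return cosine_similarity(original_emb, anonymized_emb) < threshold
```

Frame-to-frame consistency, as described above, would add the dual check: similarity to the original stays below the threshold, while similarity between anonymized identities of consecutive frames stays high.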

Place, publisher, year, edition, pages
Los Alamitos, CA: IEEE, 2023. pp. 362-371
Series
IEEE International Conference on Computer Vision Workshops, E-ISSN 2473-9944
Keywords [en]
Anonymization, Deep Fakes, Facial Recognition, Identity Tracking, Reconstruction Attacks
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-52592
DOI: 10.1109/ICCVW60793.2023.00043
Scopus ID: 2-s2.0-85182917356
ISBN: 9798350307443 (print)
OAI: oai:DiVA.org:hh-52592
DiVA id: diva2:1836336
Conference
2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Paris, France, 2-6 October, 2023
Available from: 2024-02-08 Created: 2024-02-08 Last updated: 2024-03-18 Bibliographically approved
Part of thesis
1. Anonymizing Faces without Destroying Information
2024 (English). Licentiate thesis, compilation (Other academic)
Abstract [en]

Anonymization is a broad term, meaning that personal data, or rather data that identifies a person, is redacted or obscured. In the context of video and image data, the most palpable information is the face. Faces barely change compared to other aspects of a person, such as clothes, and we as people already have a strong sense for recognizing faces. Computers are also adroit at recognizing faces, with facial recognition models being exceptionally powerful at identifying and comparing them. Therefore, it is generally considered important to obscure the faces in video and image data when aiming to keep it anonymized. Traditionally, this is done simply through blurring or masking, but this destroys useful information such as eye gaze, pose, expression, and the very fact that it is a face. This is a particular issue, as our society today is data-driven in many respects. One obvious such respect is autonomous driving and driver monitoring, where necessary algorithms such as object detectors rely on deep learning to function. Due to the data hunger of deep learning, in conjunction with society's call for privacy and integrity through regulations such as the General Data Protection Regulation (GDPR), anonymization that preserves useful information becomes important.

This Thesis investigates the potential and possible limitations of anonymizing faces without destroying the aforementioned useful information. The base approach to achieve this is face swapping and face manipulation, where current research focuses on changing the face (or identity) while keeping the original attribute information, all while being incorporated consistently into an image and/or video. Specifically, this Thesis demonstrates how target-oriented and subject-agnostic face swapping methodologies can be utilized for realistic anonymization that preserves attributes. Through this, the Thesis points out several approaches that are: 1) controllable, meaning the proposed models do not naively change the identity; both the kind and the magnitude of the identity change are adjustable, and thus tunable to guarantee anonymization; 2) subject-agnostic, meaning the models can handle any identity; 3) fast, meaning the models run efficiently and thus have the potential of running in real time. The end product is an anonymizer that achieved state-of-the-art performance on identity transfer, pose retention, and expression retention while remaining realistic.
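The controllability in point 1) above can be pictured as steering an identity embedding away from the original by an adjustable magnitude. A hedged sketch under that reading, assuming identity vectors and a displacement direction (the function and parameter names are illustrative, not the Thesis's actual implementation):

```python
import math

def shift_identity(identity, direction, magnitude):
    """Move an identity embedding along a chosen direction, then
    re-normalize to unit length. The magnitude parameter tunes how far
    the new identity departs from the original, which is what makes the
    amount of anonymization adjustable rather than a naive fixed swap."""
    shifted = [i + magnitude * d for i, d in zip(identity, direction)]
    norm = math.sqrt(sum(x * x for x in shifted))
    return [x / norm for x in shifted]
```

Under this sketch, a larger magnitude yields a lower similarity to the original identity, so the parameter can be raised until an anonymization criterion (such as a recognizer's acceptance threshold) is guaranteed.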

Apart from identity manipulation, the Thesis demonstrates potential security issues, specifically reconstruction attacks, where a bad-actor model learns convolutional traces/patterns in the anonymized images in such a way that it is able to completely reconstruct the original identity. The bad-actor network can do this with simple black-box access to the anonymization model, by constructing a pair-wise dataset of unanonymized and anonymized faces. To alleviate this issue, different defense measures that disrupt the traces in the anonymized image were investigated. The main takeaway is that what qualitatively looks convincing at hiding an identity is not necessarily so, making robust quantitative evaluations important.
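The black-box attack setup described above only needs query access to the anonymizer: feed it faces, record the outputs, and train a reconstruction model on the resulting pairs. A minimal sketch of the data-collection step (the `anonymize` callable stands in for the attacked model; the reconstruction training itself is omitted):

```python
def build_attack_pairs(faces, anonymize):
    """Query the black-box anonymizer on each face and collect
    (anonymized, original) training pairs. A reconstruction network
    trained on these pairs learns the inverse mapping from the
    anonymizer's output traces back to the original identity."""
    return [(anonymize(face), face) for face in faces]
```

A defense of the kind investigated in the Thesis perturbs the anonymizer's output (e.g. with adversarial, uniform, or parameter noise) so that the collected pairs no longer expose consistent, learnable traces.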

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2024. p. 50
Series
Halmstad University Dissertations ; 111
Keywords
Anonymization, Data Privacy, Generative AI, Reconstruction Attacks, Deep Fakes, Facial Recognition, Identity Tracking, Biometrics
National subject category
Signal Processing
Identifiers
URN: urn:nbn:se:hh:diva-52892
ISBN: 978-91-89587-36-6
ISBN: 978-91-89587-35-9
Presentation
2024-04-10, S1078, Halmstad University, Kristian IV:s väg 3, Halmstad, 10:00 (English)
Available from: 2024-03-18 Created: 2024-03-18 Last updated: 2024-03-18 Bibliographically approved

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text · Scopus

Person

Aksoy, Eren; Englund, Cristofer; Alonso-Fernandez, Fernando