Anonymizing Faces without Destroying Information
Halmstad University, School of Information Technology. Engage Studios, Gothenburg, Sweden. ORCID iD: 0000-0001-7192-9026
2024 (English). Licentiate thesis, comprising papers (Other academic)
Abstract [en]

Anonymization is a broad term, meaning that personal data, or rather data that identifies a person, is redacted or obscured. In the context of video and image data, the most palpable such information is the face. Faces change little compared to other aspects of a person, such as clothes, and we as people already have a strong ability to recognize faces. Computers are also adept at this, with facial recognition models being exceptionally powerful at identifying and comparing faces. It is therefore generally considered important to obscure the faces in video and image data when aiming to keep it anonymized. Traditionally this is done simply through blurring or masking, but that destroys useful information such as eye gaze, pose, expression, and even the fact that it is a face. This is a particular issue because our society is data-driven in many respects. One obvious example is autonomous driving and driver monitoring, where necessary algorithms such as object detectors rely on deep learning to function. Due to the data hunger of deep learning, in conjunction with society's call for privacy and integrity through regulations such as the General Data Protection Regulation (GDPR), anonymization that preserves useful information becomes important.

This thesis investigates the potential and possible limitations of anonymizing faces without destroying the aforementioned useful information. The base approach is face swapping and face manipulation, where current research focuses on changing the face (or identity) while keeping the original attribute information, all while remaining consistent and well integrated within the image and/or video. Specifically, this thesis demonstrates how target-oriented and subject-agnostic face swapping methodologies can be utilized for realistic anonymization that preserves attributes. Through this, the thesis presents several approaches that are: 1) controllable, meaning the proposed models do not naively change the identity; the kind and magnitude of the identity change is adjustable, and thus tunable to guarantee anonymization; 2) subject-agnostic, meaning the models can handle any identity; and 3) fast, meaning the models run efficiently and thus have the potential of running in real time. The end product is an anonymizer that achieves state-of-the-art performance on identity transfer, pose retention, and expression retention while providing realism.
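
To make the "controllable" and "tunable" properties concrete, here is a minimal sketch in PyTorch, assuming a face-recognition encoder that outputs L2-normalized identity embeddings and a face-swapping generator conditioned on such embeddings (both are placeholders, not the thesis models): the identity embedding is displaced by an adjustable amount, and the resulting anonymity is verified with a cosine-similarity threshold.

```python
import torch
import torch.nn.functional as F

def displaced_identity(z_id: torch.Tensor, strength: float = 1.0) -> torch.Tensor:
    """Push an L2-normalized identity embedding away from the original.

    strength = 0 keeps the identity; larger values move it further away.
    """
    noise = torch.randn_like(z_id)
    # Keep only the component orthogonal to the original identity, so the
    # displacement direction never points back toward the source identity.
    noise = noise - (noise * z_id).sum(dim=-1, keepdim=True) * z_id
    return F.normalize(z_id + strength * F.normalize(noise, dim=-1), dim=-1)

def is_anonymized(z_orig: torch.Tensor, z_anon: torch.Tensor, threshold: float = 0.3) -> bool:
    """Treat the output as anonymized when cosine similarity falls below a threshold."""
    return F.cosine_similarity(z_orig, z_anon, dim=-1).item() < threshold

# Hypothetical usage with a face-recognition encoder and a face-swapping generator:
# z = encoder(face)                                   # (1, 512) identity embedding
# anon = generator(face, displaced_identity(z, 2.0))  # swap in the displaced identity
# assert is_anonymized(z, encoder(anon))
```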

Apart from identity manipulation, the thesis demonstrates potential security issues, specifically reconstruction attacks, where a bad-actor model learns convolutional traces or patterns in the anonymized images in such a way that it is able to reconstruct the original identity. The bad-actor network can do this with simple black-box access to the anonymization model, by constructing a pair-wise dataset of unanonymized and anonymized faces. To alleviate this issue, different defense measures that disrupt the traces in the anonymized image were investigated. The main takeaway is that something which qualitatively looks convincing at hiding an identity is not necessarily so, which makes robust quantitative evaluation important.
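
The reconstruction-attack setup can be sketched as follows. This is a minimal PyTorch illustration; the `anonymizer` callable, the attacker-owned face dataset, and the toy network are assumptions, not the models used in the thesis. The attacker queries the anonymizer as a black box to build the pair-wise dataset and trains a network to map anonymized faces back to the originals.

```python
import torch
import torch.nn as nn

class Reconstructor(nn.Module):
    """Toy encoder-decoder standing in for the attacker's convolutional model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_reconstruction_attack(anonymizer, face_loader, epochs=10, device="cpu"):
    """Train on (anonymized, original) pairs built by querying the anonymizer."""
    model = Reconstructor().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for faces in face_loader:                 # attacker-owned, unanonymized faces
            faces = faces.to(device)
            with torch.no_grad():
                anon = anonymizer(faces)          # black-box query only
            loss = nn.functional.l1_loss(model(anon), faces)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```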

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2024, p. 50
Series
Halmstad University Dissertations ; 111
Keywords [en]
Anonymization, Data Privacy, Generative AI, Reconstruction Attacks, Deep Fakes, Facial Recognition, Identity Tracking, Biometrics
HSV category
Identifiers
URN: urn:nbn:se:hh:diva-52892; ISBN: 978-91-89587-36-6 (print); ISBN: 978-91-89587-35-9 (digital); OAI: oai:DiVA.org:hh-52892; DiVA, id: diva2:1845212
Presentation
2024-04-10, S1078, Halmstad University, Kristian IV:s väg 3, Halmstad, 10:00 (English)
Opponent
Supervisor
Available from: 2024-03-18 Created: 2024-03-18 Last updated: 2024-03-18 Bibliographically approved
List of papers
1. Towards Privacy Aware Data collection in Traffic: A Proposed Method for Measuring Facial Anonymity
2021 (English). In: Fast-Zero 2021 Proceedings: 6th International Symposium on Future Active Safety Technology toward Zero Accidents, Chiyoda: JSAE, 2021. Conference paper, Published paper (Refereed)
Abstract [en]

Developing a machine learning-based vehicular safety system that is effective, generalizes well, and copes with all the different scenarios in real traffic is a challenge that requires large amounts of data, especially visual data when an autonomous vehicle is meant to make decisions based on the possible intent revealed by the facial expressions and eye gaze of nearby pedestrians. The problem with collecting this kind of data is the privacy issues it raises and the conflict with laws such as the General Data Protection Regulation (GDPR). To deal with this problem we can anonymize faces with current identity- and face-swapping techniques. To evaluate the performance and interpretation of the anonymization process, a metric is needed that measures how well faces are anonymized and that takes identity leakage into consideration. To our knowledge, there is currently no such investigation of this problem. Our method builds on current facial recognition methods and on how recent face-swapping work determines identity transfer performance. Our suggestion is to utilize state-of-the-art identity encoders such as FaceNet and ArcFace and use their embedding vectors to measure anonymity. We provide qualitative results that show the applicability of publicly available identity encoders for measuring anonymity. We further examine how these encoders behave on the VGGFace2 dataset compared to samples whose identity has been changed by FaceShifter, along with a survey regarding the anonymization procedure to pinpoint how strong the facial anonymization appears compared to the vector distance measurements.
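
The proposed measurement boils down to comparing identity embeddings of the original and the anonymized face. A minimal sketch follows, assuming `identity_encoder` is any pretrained FaceNet- or ArcFace-style model that returns an embedding vector per face image; the function names and the threshold interpretation are illustrative assumptions.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

def anonymity_score(identity_encoder, original_face, anonymized_face) -> float:
    """Larger embedding distance = less identity leakage from the original face."""
    z_orig = identity_encoder(original_face)
    z_anon = identity_encoder(anonymized_face)
    return cosine_distance(z_orig, z_anon)

# Interpretation: the face can be considered a different identity once the
# distance exceeds the verification threshold calibrated for the chosen
# encoder (the threshold value is encoder- and dataset-specific).
```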

Place, publisher, year, edition, pages
Chiyoda: JSAE, 2021
Keywords
data collection, facial recognition, interpretation, anonymization
HSV category
Identifiers
urn:nbn:se:hh:diva-52895 (URN)
Conference
Fast Zero '21, Society of Automotive Engineers of Japan, Online, 28-30 September, 2021
Available from: 2024-03-18 Created: 2024-03-18 Last updated: 2024-03-18 Bibliographically approved
2. Comparing Facial Expressions for Face Swapping Evaluation with Supervised Contrastive Representation Learning
2021 (English). In: 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021): Proceedings / [ed] Vitomir Štruc; Marija Ivanovska, Piscataway: IEEE, 2021. Conference paper, Published paper (Refereed)
Abstract [en]

Measuring and comparing facial expressions has several practical applications. One such application is to compute facial expression embeddings and compare the distances between them in order to determine how well identity- and face-swapping algorithms preserve facial expression information, for instance to show how well expressions are preserved while anonymizing facial data during privacy-aware data collection. We show that weighted supervised contrastive learning is a strong approach for learning facial expression representation embeddings and for dealing with class imbalance bias. By feeding a classifier head with the learned embeddings we reach competitive state-of-the-art results. Furthermore, we demonstrate the use case of measuring the distance between the expressions of a target face, a source face, and the anonymized target face in the facial anonymization context. © 2021 IEEE.
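
The weighted supervised contrastive objective can be sketched as below: a generic PyTorch implementation of a class-weighted supervised contrastive (SupCon-style) loss for illustration; the exact weighting scheme used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def weighted_supcon_loss(embeddings, labels, class_weights, temperature=0.1):
    """Supervised contrastive loss with per-class weights against class imbalance.

    embeddings:    (N, D) expression embeddings (L2-normalized here).
    labels:        (N,) integer expression labels.
    class_weights: (num_classes,) weights, e.g. inverse class frequencies.
    """
    z = F.normalize(embeddings, dim=1)
    n = z.size(0)
    logits = z @ z.t() / temperature                          # (N, N) similarities
    not_self = ~torch.eye(n, dtype=torch.bool, device=z.device)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    # Log-softmax over all samples except the anchor itself.
    logits = logits.masked_fill(~not_self, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-probability over each anchor's positives (skip anchors without any).
    pos_count = positives.sum(dim=1)
    valid = pos_count > 0
    mean_log_prob_pos = log_prob.masked_fill(~positives, 0.0).sum(dim=1)[valid] / pos_count[valid]
    # Weight each anchor by its class weight and take the weighted mean.
    w = class_weights[labels][valid]
    return -(w * mean_log_prob_pos).sum() / w.sum()
```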

Place, publisher, year, edition, pages
Piscataway: IEEE, 2021
HSV category
Identifiers
urn:nbn:se:hh:diva-46506 (URN); 10.1109/FG52635.2021.9666958 (DOI); 000784811600027 (); 2-s2.0-85125063047 (Scopus ID); 978-1-6654-3176-7 (ISBN)
Conference
16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021, Virtual, Jodhpur, India, 15-18 December, 2021
Available from: 2022-04-21 Created: 2022-04-21 Last updated: 2024-03-18 Bibliographically approved
3. FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping
2023 (English). In: Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023, Piscataway: IEEE, 2023, pp. 3443-3452. Conference paper, Published paper (Refereed)
Abstract [en]

In this work, we present a new single-stage method for subject-agnostic face swapping and identity transfer, named FaceDancer. We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR). The AFFA module is embedded in the decoder and adaptively learns to fuse attribute features and features conditioned on identity information without requiring any additional facial segmentation process. In IFSR, we leverage the intermediate features in an identity encoder to preserve important attributes such as head pose, facial expression, lighting, and occlusion in the target face, while still transferring the identity of the source face with high fidelity. We conduct extensive quantitative and qualitative experiments on various datasets and show that the proposed FaceDancer outperforms other state-of-the-art networks in terms of identity transfer, while having significantly better pose preservation than most of the previous methods. © 2023 IEEE.
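
The AFFA idea, fusing attribute features with identity-conditioned features through a learned gate instead of an explicit segmentation mask, can be illustrated with a small sketch. This is an interpretation in PyTorch, not the published FaceDancer implementation; the layer choices are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveFeatureFusion(nn.Module):
    """AFFA-style block: predict a per-location gate from the concatenation of
    attribute features and identity-conditioned features, then blend the two
    feature maps accordingly (no facial segmentation required)."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),                          # gate values in [0, 1]
        )

    def forward(self, attr_feat: torch.Tensor, id_feat: torch.Tensor) -> torch.Tensor:
        m = self.gate(torch.cat([attr_feat, id_feat], dim=1))   # (B, 1, H, W)
        return m * id_feat + (1.0 - m) * attr_feat              # learned fusion

# Example: fuse 256-channel decoder features at 32x32 resolution.
# affa = AdaptiveFeatureFusion(256)
# fused = affa(attr_feat, id_feat)   # both of shape (B, 256, 32, 32)
```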

Place, publisher, year, edition, pages
Piscataway: IEEE, 2023
Keywords
Algorithms, Biometrics, and algorithms (including transfer, low-shot, semi-, self-, and un-supervised learning), body pose, face, formulations, gesture, Machine learning architectures
HSV category
Identifiers
urn:nbn:se:hh:diva-48618 (URN); 10.1109/WACV56688.2023.00345 (DOI); 000971500203054 (); 2-s2.0-85149000603 (Scopus ID); 9781665493468 (ISBN)
Conference
23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023, Waikoloa, Hawaii, USA, 3-7 January 2023
Available from: 2022-11-15 Created: 2022-11-15 Last updated: 2024-03-18 Bibliographically approved
4. FIVA: Facial Image and Video Anonymization and Anonymization Defense
2023 (English). In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Los Alamitos, CA: IEEE, 2023, pp. 362-371. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we present a new approach for facial anonymization in images and videos, abbreviated as FIVA. Our proposed method is able to maintain the same face anonymization consistently over frames with our suggested identity tracking, and it guarantees a strong difference from the original face: FIVA allows for 0 true positives at a false acceptance rate of 0.001. Our work considers the important security issue of reconstruction attacks and investigates adversarial noise, uniform noise, and parameter noise to disrupt such attacks. In this regard, we apply different defense and protection methods against these privacy threats to demonstrate the scalability of FIVA. On top of this, we also show that reconstruction attack models can be used for detection of deep fakes. Last but not least, we provide experimental results showing how FIVA can even enable face swapping trained purely on a single target image. © 2023 IEEE.
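
The frame-consistent anonymization idea can be sketched as a simple embedding-matching tracker: each observed person is assigned one anonymous identity that is reused whenever the person reappears. This is an illustration only; FIVA's actual identity tracking, thresholds, and identity sampling may differ.

```python
import numpy as np

class IdentityTracker:
    """Toy consistency scheme: keep one anonymous identity per observed person
    so the same person is swapped to the same face in every frame. The encoder,
    the matching threshold, and the way anonymous identities are sampled are
    all placeholders, not the FIVA components."""
    def __init__(self, threshold: float = 0.6, dim: int = 512):
        self.threshold = threshold
        self.dim = dim
        self.known = []            # list of (reference_embedding, anonymous_identity)

    @staticmethod
    def _cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def lookup(self, embedding: np.ndarray) -> np.ndarray:
        """Return the anonymous identity for this person, creating one on first sight."""
        for ref, anon_id in self.known:
            if self._cos(embedding, ref) > self.threshold:
                return anon_id
        anon_id = np.random.randn(self.dim)      # stand-in for a sampled identity
        anon_id /= np.linalg.norm(anon_id)
        self.known.append((embedding, anon_id))
        return anon_id

# Per frame and per detected face (hypothetical encoder/generator):
# z = encoder(face); anon_face = generator(face, tracker.lookup(z))
```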

Place, publisher, year, edition, pages
Los Alamitos, CA: IEEE, 2023
Series
IEEE International Conference on Computer Vision Workshops, E-ISSN 2473-9944
Keywords
Anonymization, Deep Fakes, Facial Recognition, Identity Tracking, Reconstruction Attacks
HSV category
Identifiers
urn:nbn:se:hh:diva-52592 (URN); 10.1109/ICCVW60793.2023.00043 (DOI); 2-s2.0-85182917356 (Scopus ID); 9798350307443 (ISBN)
Conference
2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Paris, France, 2-6 October, 2023
Available from: 2024-02-08 Created: 2024-02-08 Last updated: 2024-03-18 Bibliographically approved

Open Access in DiVA

Anonymizing Faces without Destroying Information (1073 kB), 123 downloads
File information
File: FULLTEXT01.pdf, File size: 1073 kB, Checksum: SHA-512
7ee94584e52a267b55346cdea88ff82618b0035b2856efa07e836fe9912c61cb8db0bffc40a5d36400a18ebb5e905cde904e588dfee1ce889ac8112db7307903
Type: fulltext, Mimetype: application/pdf

Search in DiVA

By author/editor
Rosberg, Felix
By organisation
