Towards Privacy Aware Data collection in Traffic: A Proposed Method for Measuring Facial Anonymity
Berge Consulting, Gothenburg, Sweden. ORCID iD: 0000-0001-7192-9026
Halmstad University, School of Information Technology; RISE Research Institutes of Sweden, Gothenburg, Sweden. ORCID iD: 0000-0002-1043-8773
RISE Research Institutes of Sweden, Gothenburg, Sweden.
RISE Research Institutes of Sweden, Gothenburg, Sweden.
2021 (English). In: Fast-Zero 2021 Proceedings: 6th International Symposium on Future Active Safety Technology toward Zero Accidents, Chiyoda: JSAE, 2021. Conference paper, published paper (refereed).
Abstract [en]

Developing a machine learning-based vehicular safety system that is effective, generalizes well, and can cope with all the different scenarios in real traffic requires large amounts of data. This is especially true for visual data when an autonomous vehicle must make decisions based on the possible intent revealed by the facial expressions and eye gaze of nearby pedestrians. Collecting this kind of data raises privacy issues and conflicts with current laws such as the General Data Protection Regulation (GDPR). To address this problem, faces can be anonymized with current identity- and face-swapping techniques. To evaluate the performance and interpretation of the anonymization process, a metric is needed that measures how well these faces are anonymized while taking identity leakage into consideration. To our knowledge, no such investigation currently exists for this problem. Our method builds on current facial recognition methods and on how recent face-swapping work determines identity transfer performance. We suggest utilizing state-of-the-art identity encoders such as FaceNet and ArcFace and using their embedding vectors to measure anonymity. We provide qualitative results that show the applicability of publicly available identity encoders for measuring anonymity. We further demonstrate how these encoders behave on the VGGFace2 dataset compared to samples whose identity has been changed by Faceshifter, along with a survey regarding the anonymization procedure to pinpoint how strong facial anonymization is compared to the vector distance measurements.
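The embedding-distance idea in the abstract can be sketched as follows. This is an illustrative example, not the paper's implementation: the hard-coded vectors stand in for embeddings that a real pipeline would obtain from a pretrained FaceNet or ArcFace model, and `cosine_distance` is a generic helper, not a function from either library.

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two embedding vectors (0 = same direction)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

# Stand-ins for identity-encoder outputs.
emb_original = np.array([0.60, 0.80, 0.00])
emb_same_id  = np.array([0.58, 0.81, 0.05])  # same person, slight variation
emb_swapped  = np.array([-0.20, 0.40, 0.90])  # identity changed by face swapping

d_same = cosine_distance(emb_original, emb_same_id)
d_anon = cosine_distance(emb_original, emb_swapped)

# A larger distance from the original identity suggests stronger anonymity,
# which is the quantity the proposed metric is built around.
assert d_anon > d_same
```

In practice the distance to the original embedding would be compared against the encoder's usual same-identity/different-identity decision threshold to judge whether the anonymized face still leaks the source identity.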

Place, publisher, year, edition, pages
Chiyoda: JSAE, 2021.
Keywords [en]
data collection, facial recognition, interpretation, anonymization
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-52895. OAI: oai:DiVA.org:hh-52895. DiVA id: diva2:1845206.
Conference
Fast Zero '21, Society of Automotive Engineers of Japan, Online, 28-30 September 2021
Available from: 2024-03-18. Created: 2024-03-18. Last updated: 2025-03-18. Bibliographically approved.
In thesis
1. Anonymizing Faces without Destroying Information
2024 (English)Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Anonymization is a broad term, meaning that personal data, or rather data that identifies a person, is redacted or obscured. In the context of video and image data, the most palpable information is the face. Faces barely change compared to other aspects of a person, such as clothes, and we as people already have a strong sense for recognizing faces. Computers are also adroit at recognizing faces, with facial recognition models being exceptionally powerful at identifying and comparing faces. Therefore, it is generally considered important to obscure the faces in video and images when aiming to keep them anonymized. Traditionally this is done simply through blurring or masking. But this destroys useful information such as eye gaze, pose, expression and the fact that it is a face. This is a particular issue, as our society today is data-driven in many aspects. One obvious such aspect is autonomous driving and driver monitoring, where necessary algorithms such as object detectors rely on deep learning to function. Due to the data hunger of deep learning, in conjunction with society's call for privacy and integrity through regulations such as the General Data Protection Regulation (GDPR), anonymization that preserves useful information becomes important.

This Thesis investigates the potential and possible limitations of anonymizing faces without destroying the aforementioned useful information. The base approach to achieve this is face swapping and face manipulation, where current research focuses on changing the face (or identity) while keeping the original attribute information, all while being incorporated and consistent in an image and/or video. Specifically, this Thesis demonstrates how target-oriented and subject-agnostic face swapping methodologies can be utilized for realistic anonymization that preserves attributes. Through this, this Thesis points out several approaches that are: 1) controllable, meaning the proposed models do not naively change the identity; the kind and magnitude of identity change is adjustable, and thus tunable to guarantee anonymization; 2) subject-agnostic, meaning that the models can handle any identity; 3) fast, meaning that the models run efficiently and thus have the potential of running in real time. The end product consists of an anonymizer that achieved state-of-the-art performance on identity transfer, pose retention and expression retention while providing realism.

Apart from identity manipulation, the Thesis demonstrates potential security issues, specifically reconstruction attacks, where a bad-actor model learns convolutional traces/patterns in the anonymized images in such a way that it is able to completely reconstruct the original identity. The bad-actor network can do this with simple black-box access to the anonymization model, by constructing a pair-wise dataset of unanonymized and anonymized faces. To alleviate this issue, different defense measures that disrupt the traces in the anonymized image were investigated. The main takeaway is that an anonymization that qualitatively looks convincing at hiding an identity does not necessarily do so, which makes robust quantitative evaluations important.
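The black-box setup described above can be sketched as follows. Everything here is a hypothetical stand-in: `anonymize` plays the role of the queried anonymization model (the attacker never sees its internals), and the reconstruction network itself is reduced to the dataset-assembly step for brevity.

```python
import numpy as np

def anonymize(face: np.ndarray) -> np.ndarray:
    """Hypothetical black-box anonymizer (stand-in for a face-swapping model).

    The attacker only observes input/output pairs, never the model weights.
    """
    return np.clip(face + 0.1, 0.0, 1.0)  # placeholder transformation

# The attacker queries the model with faces they control and records the
# resulting (anonymized, original) pairs. A reconstruction network would then
# be trained on these pairs to invert the mapping, exploiting consistent
# traces the anonymizer leaves in its outputs.
rng = np.random.default_rng(42)
faces = [rng.random((4, 4)) for _ in range(3)]  # toy 4x4 "images"
pairs = [(anonymize(f), f) for f in faces]      # (network input, target)
```

The thesis's point is that such pairs are cheap to obtain, which is why defenses must disrupt the traces rather than rely on the anonymized images merely looking unrecognizable.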

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2024. p. 50
Series
Halmstad University Dissertations ; 111
Keywords
Anonymization, Data Privacy, Generative AI, Reconstruction Attacks, Deep Fakes, Facial Recognition, Identity Tracking, Biometrics
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-52892 (URN). 978-91-89587-36-6 (ISBN). 978-91-89587-35-9 (ISBN).
Presentation
2024-04-10, S1078, Halmstad University, Kristian IV:s väg 3, Halmstad, 10:00 (English)
Opponent
Supervisors
Available from: 2024-03-18. Created: 2024-03-18. Last updated: 2024-03-18. Bibliographically approved.
2. Non-Reversible and Attribute Preserving Face De-Identification
2025 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

De-identification, also known as anonymization, is a broad term that refers to the process of redacting or obscuring personal data, or data that identifies an individual. In the context of video and image data de-identification, the most tangible personal information is the face. Faces are considered biometric data, thus change little compared to other aspects of an individual, such as clothing and hairstyle. Humans possess a strong innate ability to recognize faces. Computers are also adept at recognizing faces, and face recognition models are exceptionally powerful at identifying and comparing faces. Consequently, it is widely recognized as crucial to obscure the faces in video and images to ensure the integrity of de-identified data. Conventionally, this has been achieved through blurring or masking techniques. However, these methods are destructive of data characteristics and thus compromise critical attribute information such as eye gaze, pose, expression and the fact that it is a face. This is a particular problem because our society is data-driven in many ways. This information is useful for a plethora of functions such as traffic safety. One obvious such aspect is autonomous driving and driver monitoring, where necessary algorithms such as object detectors rely on deep learning to function. Due to the data hunger of deep learning, combined with society's demand for privacy and integrity through regulations such as the General Data Protection Regulation (GDPR), face de-identification, which preserves useful information, becomes significantly important.

This Thesis investigates the potential and possible limitations of de-identifying faces while preserving the aforementioned useful attribute information. The Thesis is especially focused on the sustainability perspective of de-identification, where preserving both the integrity and the utility of the data is important. The baseline approach builds on methods from the face swapping and face manipulation literature, where current research focuses on changing the face (or identity) with generative models while keeping the original attribute information as intact as possible, all while being integrated and consistent in an image and/or video. Specifically, this Thesis will demonstrate how generative target-oriented and subject-agnostic face manipulation models, which aim to anonymize facial identities by transforming original faces to resemble specific targets, can be used for realistic de-identification that preserves attributes.

While this Thesis demonstrates and introduces novel de-identification capabilities, it also addresses and highlights potential vulnerabilities and security issues that arise from naively applying generative target-oriented de-identification models. First, since state-of-the-art face representation models typically restrict face embeddings to a hyper-sphere, maximizing privacy may lead to trivial identity retrieval matching. Second, transferable adversarial attacks, where adversarial perturbations generated by surrogate identity encoders cause identity leakage in the victim de-identification system. Third, reconstruction attacks, where bad-actor models learn and extract enough information from subtle cues left by the de-identification model to consistently reconstruct the original identity.
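The first vulnerability has a simple geometric core, illustrated below with a synthetic unit vector (no real encoder is involved): on the unit hypersphere, the point farthest from an embedding e is its antipode -e, so a de-identifier that simply maximizes embedding distance produces outputs from which the original is trivially recovered by negation.

```python
import numpy as np

# Synthetic identity embedding on the unit hypersphere (identity encoders
# such as ArcFace normalize their embeddings this way).
rng = np.random.default_rng(0)
e = rng.normal(size=8)
e /= np.linalg.norm(e)

# "Maximal privacy" pushes the de-identified embedding toward the antipode,
# the unique farthest point on the sphere from e.
deidentified = -e

# Trivial retrieval: negating the de-identified embedding recovers e exactly.
recovered = -deidentified
assert np.allclose(recovered, e)
```

This is why distance from the source embedding alone is a poor privacy objective on normalized embedding spaces: an extreme-distance output is as identifying as the original.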

Through this, this Thesis points out several approaches that are: 1) Controllable, meaning that the proposed models do not naively change the identity; the type and magnitude of identity change is adjustable, and thus tunable to ensure anonymization. 2) Subject-agnostic, meaning that the models can handle any identity or face. 3) Fast, meaning that the models run efficiently and thus have the potential of running in real time. 4) Non-reversible: this Thesis introduces a novel diffusion-based method to make generative target-oriented models robust against reconstruction attacks. The end product is a hybrid generative target-oriented and diffusion de-identification pipeline that achieves state-of-the-art performance on privacy protection, as measured by identity retrieval, pose retention, expression retention, gaze retention, and visual fidelity, while being robust against reconstruction attacks.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2025. p. 79
Series
Halmstad University Dissertations ; 130
Keywords
Anonymization, Data Privacy, Generative AI, Reconstruction Attacks, Deep Fakes, Facial Recognition, Identity Tracking, Biometrics
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-55652 (URN). 978-91-89587-77-9 (ISBN). 978-91-89587-76-2 (ISBN).
Public defence
2025-04-17, S3030, Kristian IV:s väg 3, Halmstad, 10:00 (English)
Opponent
Supervisors
Available from: 2025-03-19. Created: 2025-03-18. Last updated: 2025-03-19. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Englund, Cristofer

Search in DiVA

By author/editor
Rosberg, Felix; Englund, Cristofer
By organisation
School of Information Technology
Computer Sciences
