An Explainable Model-Agnostic Algorithm for CNN-Based Biometrics Verification
Alonso-Fernandez, Fernando (Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR)). ORCID iD: 0000-0002-1400-346X
Hernandez-Diaz, Kevin (Halmstad University, School of Information Technology). ORCID iD: 0000-0002-9696-7843
Buades, Jose M. (University of Balearic Islands, Palma, Spain). ORCID iD: 0000-0002-6137-9558
Tiwari, Prayag (Halmstad University, School of Information Technology). ORCID iD: 0000-0002-2851-4260
Bigun, Josef (Halmstad University, School of Information Technology)
2023 (English). In: 2023 IEEE International Workshop on Information Forensics and Security (WIFS), Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate in a biometric verification setting. LIME was originally proposed for networks whose output classes are the same ones used for training, and it employs the softmax probability to determine which regions of the image contribute most to classification. In a verification setting, however, the classes to be recognized have not been seen during training. In addition, instead of using the softmax output, face descriptors are usually obtained from a layer before the classification layer. The method is adapted to achieve explainability via cosine similarity between the feature vectors of perturbed versions of the input image. The method is showcased for face biometrics with two CNN models based on MobileNetv2 and ResNet50. © 2023 IEEE.
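As a rough illustration of the approach the abstract describes, the sketch below shows LIME-style region attribution under a verification setting: superpixels of the probe image are randomly occluded, each perturbed image is mapped to a descriptor by the CNN's pre-classification layer, and a weighted linear surrogate regresses the cosine similarity to the reference descriptor onto the occlusion masks. The `embed` callable, the mean-colour fill, and all parameter values are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def explain_verification(probe, reference, embed, n_segments=50, n_samples=500, seed=0):
    """Rank probe-image regions by their contribution to the match score.

    probe, reference: HxWx3 float images
    embed: callable mapping an image batch to L2-normalised descriptors
           (assumed interface; stands in for the CNN's descriptor layer)
    """
    rng = np.random.default_rng(seed)
    segments = slic(probe, n_segments=n_segments)   # superpixel regions
    seg_ids = np.unique(segments)
    ref_vec = embed(reference[None])[0]

    # Random binary masks: 1 keeps a superpixel, 0 replaces it with the mean colour
    masks = rng.integers(0, 2, size=(n_samples, len(seg_ids)))
    masks[0] = 1  # include the unperturbed probe as an anchor sample
    fill = probe.mean(axis=(0, 1))

    sims = np.empty(n_samples)
    for i, m in enumerate(masks):
        img = probe.copy()
        for s, keep in zip(seg_ids, m):
            if not keep:
                img[segments == s] = fill
        vec = embed(img[None])[0]
        # Cosine similarity to the reference replaces the softmax score of
        # classic LIME (descriptors are assumed L2-normalised, so dot = cosine)
        sims[i] = float(vec @ ref_vec)

    # Weight samples by proximity to the original image (exponential LIME kernel)
    dist = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(dist ** 2) / 0.25)

    # Local linear surrogate: its coefficients are the per-region importances
    surrogate = Ridge(alpha=1.0).fit(masks, sims, sample_weight=weights)
    return segments, surrogate.coef_
```

Regions with the largest positive coefficients are those whose occlusion most reduces the match score, i.e. the regions the verifier relies on; the segment map can then be coloured by coefficient to visualise the explanation.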

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023.
Keywords [en]
Biometrics, Explainable AI, Face recognition, XAI
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:hh:diva-52721
DOI: 10.1109/WIFS58808.2023.10374866
Scopus ID: 2-s2.0-85183463933
ISBN: 9798350324914 (print)
OAI: oai:DiVA.org:hh-52721
DiVA, id: diva2:1838422
Conference
2023 IEEE International Workshop on Information Forensics and Security, WIFS 2023, Nürnberg, Germany, 4-7 December, 2023
Projects
EXPLAINING - “Project EXPLainable Artificial INtelligence systems for health and well-beING”
Funder
Swedish Research Council
Vinnova
Available from: 2024-02-16 Created: 2024-02-16 Last updated: 2025-02-07 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus
