Cross-Spectral Periocular Recognition with Conditional Adversarial Networks
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. ORCID iD: 0000-0002-9696-7843
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. ORCID iD: 0000-0002-1400-346X
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. ORCID iD: 0000-0002-4929-1262
2020 (English). In: IJCB 2020: IEEE/IAPR International Joint Conference on Biometrics, 28 September - 1 October 2020, online. Piscataway: IEEE, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

This work addresses the challenge of comparing periocular images captured in different spectra, which is known to produce significant drops in performance in comparison to operating in the same spectrum. We propose the use of Conditional Generative Adversarial Networks, trained to convert periocular images between the visible and near-infrared spectra, so that biometric verification is carried out in the same spectrum. The proposed setup allows the use of existing feature methods typically optimized to operate in a single spectrum. Recognition experiments are done using a number of off-the-shelf periocular comparators based both on hand-crafted features and CNN descriptors. Using the Hong Kong Polytechnic University Cross-Spectral Iris Images Database (PolyU) as benchmark dataset, our experiments show that cross-spectral performance is substantially improved if both images are converted to the same spectrum, in comparison to matching features extracted from images in different spectra. In addition, we fine-tune a CNN based on the ResNet50 architecture, obtaining a cross-spectral periocular performance of EER = 1% and GAR > 99% @ FAR = 1%, which is comparable to the state-of-the-art with the PolyU database. © 2020 IEEE.
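The verification pipeline the abstract describes can be sketched as follows. The generator, feature extractor, and scoring below are hypothetical placeholders (not the paper's trained conditional GAN or its off-the-shelf comparators); they only illustrate the idea of translating both images into the same spectrum before matching:

```python
import numpy as np

def translate_spectrum(img):
    """Stand-in for the trained conditional generator G: NIR -> VIS.
    In the paper this is a Conditional GAN; here the identity mapping
    simply lets the pipeline run end to end."""
    return img

def extract_features(img, bins=32):
    # Placeholder periocular descriptor: an intensity histogram.
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
    return hist

def match_score(feat_a, feat_b):
    # Cosine similarity between descriptors (higher = more similar).
    return float(np.dot(feat_a, feat_b) /
                 (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-12))

rng = np.random.default_rng(0)
vis_probe = rng.random((64, 96))      # visible-spectrum probe image
nir_gallery = rng.random((64, 96))    # near-infrared enrolment image

# Cross-spectral verification: bring both images into the same
# spectrum first, then extract and compare features there.
nir_as_vis = translate_spectrum(nir_gallery)
score = match_score(extract_features(vis_probe),
                    extract_features(nir_as_vis))
```

In the actual system the identity mapping would be replaced by the trained generator, and the histogram by one of the hand-crafted or CNN descriptors evaluated in the paper.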

Place, publisher, year, edition, pages
Piscataway: IEEE, 2020.
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:hh:diva-43796
DOI: 10.1109/IJCB48548.2020.9304899
ISI: 000723870900045
Scopus ID: 2-s2.0-85098614217
ISBN: 978-1-7281-9186-7 (electronic)
ISBN: 978-1-7281-9187-4 (print)
OAI: oai:DiVA.org:hh-43796
DiVA, id: diva2:1524643
Conference
International Joint Conference on Biometrics (IJCB 2020), 28 September - 1 October 2020, Houston, USA, Online
Funder
Swedish Research Council
Note

pp. 1-9

Available from: 2021-02-01 Created: 2021-02-01 Last updated: 2024-06-17. Bibliographically approved
In thesis
1. Ocular Recognition in Unconstrained Sensing Environments
2024 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis focuses on the problem of increasing flexibility in the acquisition and application of biometric recognition systems based on the ocular region. While the ocular area is one of the oldest and most widely studied biometric regions, thanks to its rich and discriminative characteristics, most modalities, such as the retina, iris, eye movements, or the oculomotor plant, have limitations regarding data acquisition: the iris requires a specific type of illumination, eye movements a limited distance range, and the retina specific sensors and user collaboration. In this context, this thesis focuses on the periocular region, which stands out as the ocular modality with the fewest acquisition constraints.

The first part focuses on using deep representations from the middle layers of pre-trained CNNs as a one-shot learning method, combined with simple distance-based metrics and similarity scores for periocular recognition. This approach addresses the limited availability and costly collection of data for biometric recognition systems by eliminating the need to train the models on the target data. Furthermore, it allows seamless transitions between identification and verification scenarios with a single model, and mitigates the open-world setting and the training bias of CNNs. We demonstrate that off-the-shelf features from middle layers can outperform CNNs trained for the target domain with a more extensive training strategy when target data is limited.
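A minimal sketch of this one-shot, distance-based matching, with a toy pooling function standing in for the forward pass up to a middle layer of a pre-trained CNN (the extractor and the decision threshold are assumptions, not the thesis' actual models):

```python
import numpy as np

def middle_layer_features(img, extractor):
    """Flatten and L2-normalise an intermediate activation map.
    `extractor` stands in for a pre-trained CNN truncated at a
    chosen middle layer; no training on target data is needed."""
    fmap = extractor(img)
    v = fmap.ravel().astype(np.float64)
    return v / (np.linalg.norm(v) + 1e-12)

def verify(feat_enrol, feat_probe, threshold=0.35):
    # Cosine distance between normalised features; accept the claim
    # if the distance falls below the (assumed) threshold.
    dist = 1.0 - float(np.dot(feat_enrol, feat_probe))
    return dist, dist < threshold

def toy_extractor(img):
    # Toy "middle layer": average pooling over 8x8 blocks.
    h, w = img.shape
    return img[:h - h % 8, :w - w % 8].reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))

rng = np.random.default_rng(1)
img = rng.random((64, 96))
f_enrol = middle_layer_features(img, toy_extractor)
f_probe = middle_layer_features(img + 0.01 * rng.random((64, 96)), toy_extractor)
dist, accepted = verify(f_enrol, f_probe)   # near-identical images -> small distance
```

The same features serve identification (rank the gallery by distance) and verification (threshold a single distance), which is what allows one model to cover both scenarios.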

The second part of the thesis analyzes traditional methods for biometric systems in the context of periocular recognition. Nowadays, these methods are often overlooked in favor of deep learning solutions. However, we show that they can still outperform heavily trained CNNs in closed-world and open-world settings and can be used in conjunction with CNNs to further improve recognition performance. Moreover, we investigate the use of the complex structure tensor as a handcrafted texture extractor at the input of CNNs. We show that CNNs can benefit from this explicit textural information in terms of performance and convergence, offering the potential for network compression and explainability of the features used. We demonstrate that CNNs may not easily access the orientation information present in the images that are exploited in some more traditional approaches.
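The complex structure tensor mentioned above can be illustrated roughly as follows. The squared complex gradient is smoothed so that its argument encodes the doubled local orientation and its magnitude a coherence cue; the box smoothing and the test pattern here are simplifications (a Gaussian filter is typical in practice):

```python
import numpy as np

def complex_structure_tensor(img, sigma=2):
    """Sketch of the complex structure tensor: square the complex
    gradient (fx + i*fy)**2 and smooth the result. Squaring doubles
    the orientation angle so that opposite gradient directions
    reinforce rather than cancel."""
    fy, fx = np.gradient(img.astype(np.float64))
    z = (fx + 1j * fy) ** 2
    # Crude separable box smoothing as a placeholder for a Gaussian.
    k = 2 * sigma + 1
    kernel = np.ones(k) / k
    z = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, z)
    z = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, z)
    return z

# Horizontal intensity ramp: the gradient points along x, so the
# tensor argument (doubled angle) concentrates at zero.
x = np.tile(np.linspace(0, 1, 32), (32, 1))
t = complex_structure_tensor(x)
orientation = 0.5 * np.angle(t.mean())
```

Feeding such a map (or its components) alongside the raw image is one way to hand a CNN the explicit orientation information discussed above.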

The final part of the thesis addresses the analysis of periocular recognition under different light spectra and in the cross-spectral scenario. More specifically, we analyze the performance of the proposed methods under different light spectra. We also investigate the cross-spectral scenario for one-shot learning with deep representations from middle layers, and explore the possibility of bridging the domain gap by training generative networks, which allows the use of simpler models and algorithms trained on a single spectrum.

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2024. p. 49
Series
Halmstad University Dissertations ; 114
Keywords
Biometrics, Computer Vision, Pattern Recognition, Periocular Recognition
National Category
Signal Processing; Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:hh:diva-53257
ISBN: 978-91-89587-43-4
ISBN: 978-91-89587-42-7
Public defence
2024-05-28, S3030, Kristian IV:s väg 3, 08:00 (English)
Opponent
Supervisors
Available from: 2024-04-24 Created: 2024-04-24 Last updated: 2024-06-17. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus
https://arxiv.org/pdf/2008.11604

Authority records

Hernandez-Diaz, Kevin; Alonso-Fernandez, Fernando; Bigun, Josef

