hh.se Publications
Alonso-Fernandez, Fernando (ORCID: orcid.org/0000-0002-1400-346X)
Publications (10 of 102)
Alonso-Fernandez, F., Farrugia, R. A., Bigun, J., Fierrez, J. & Gonzalez-Sosa, E. (2019). A Survey of Super-Resolution in Iris Biometrics with Evaluation of Dictionary-Learning. IEEE Access, 7, 6519-6544
2019 (English) In: IEEE Access, E-ISSN 2169-3536, Vol. 7, pp. 6519-6544. Article in journal (Refereed) Published
Abstract [en]

The lack of resolution has a negative impact on the performance of image-based biometrics. While many generic super-resolution methods have been proposed to restore low-resolution images, they usually aim to enhance their visual appearance. However, an overall visual enhancement of biometric images does not necessarily correlate with better recognition performance. Reconstruction approaches thus need to incorporate specific information from the target biometric modality to effectively improve recognition performance. This paper presents a comprehensive survey of iris super-resolution approaches proposed in the literature. We have also adapted an eigen-patches reconstruction method based on PCA eigentransformation of local image patches. The structure of the iris is exploited by building a patch-position-dependent dictionary. In addition, image patches are restored separately, each with its own reconstruction weights. This allows the solution to be locally optimized, helping to preserve local information. To evaluate the algorithm, we degraded high-resolution images from the CASIA Interval V3 database. Different restorations were considered, with 15 × 15 pixels being the smallest resolution evaluated. To the best of our knowledge, this is among the smallest resolutions employed in the literature. The experimental framework is complemented with six publicly available iris comparators, which were used to carry out biometric verification and identification experiments. Experimental results show that the proposed method significantly outperforms both bilinear and bicubic interpolation at very low resolutions. A number of comparators attain an impressive Equal Error Rate as low as 5% and a Top-1 accuracy of 77-84% when considering iris images of only 15 × 15 pixels. These results clearly demonstrate the benefit of using trained super-resolution techniques to improve the quality of iris images prior to matching.
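The eigen-patch idea above (reconstruction weights computed on the low-resolution training patches and transferred to their high-resolution counterparts) can be sketched for a single patch position. This is an illustrative reading of the approach, not the authors' code; the function name and the least-squares weighting are assumptions:

```python
import numpy as np

def reconstruct_hr_patch(lr_train, hr_train, lr_patch):
    """Eigen-patch style reconstruction for ONE patch position:
    express the input LR patch as a linear combination of the LR
    training patches, then apply the SAME weights to the HR ones.
    lr_train: (n, d_lr), hr_train: (n, d_hr), lr_patch: (d_lr,)."""
    mu_lr, mu_hr = lr_train.mean(0), hr_train.mean(0)
    A = (lr_train - mu_lr).T                  # centered LR dictionary, (d_lr, n)
    # Least-squares reconstruction weights for the centered input patch.
    w, *_ = np.linalg.lstsq(A, lr_patch - mu_lr, rcond=None)
    # Transfer the weights to the centered HR dictionary.
    return mu_hr + (hr_train - mu_hr).T @ w   # (d_hr,)
```

In the full method each patch position holds its own dictionary, and the reconstructed overlapping patches are averaged back into the image.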

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2019
Keywords
Iris hallucination, iris recognition, eigen-patch, super-resolution, PCA
National subject category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38659 (URN); 10.1109/ACCESS.2018.2889395 (DOI); 2-s2.0-85059007584 (Scopus ID)
Research funders
Vetenskapsrådet, 2016-03497; EU, FP7, Seventh Framework Programme, COST IC1106; KK-stiftelsen, SIDUS-AIR; KK-stiftelsen, CAISR; VINNOVA, 2018-00472
Available from: 2018-12-20  Created: 2018-12-20  Last updated: 2019-01-25  Bibliographically reviewed
Hernandez-Diaz, K., Alonso-Fernandez, F. & Bigun, J. (2019). Cross Spectral Periocular Matching using ResNet Features. In: : . Paper presented at 12th IAPR International Conference on Biometrics, Crete, Greece, June 4-7, 2019.
2019 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Periocular recognition has gained attention in recent years thanks to its high discrimination capabilities in less constrained scenarios than other ocular modalities. In this paper we propose a method for periocular verification under different light spectra using CNN features, with the particularity that the network has not been trained for this purpose. We use a ResNet-101 model pretrained for the ImageNet Large Scale Visual Recognition Challenge to extract features from the IIITD Multispectral Periocular Database. At each layer the features are compared using the χ² distance and cosine similarity to carry out verification between images, achieving improvements in EER and in accuracy at 1% FAR of up to 63.13% and 24.79%, respectively, in comparison to previous works that employ the same database. In addition, we train a neural network to match the best CNN feature layer vector from each spectrum. With this procedure, we achieve improvements of up to 65% (EER) and 87% (accuracy at 1% FAR) in cross-spectral verification with respect to previous studies.
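The layer-wise comparison described above reduces to computing a χ² distance or cosine similarity between two feature vectors. The sketch below shows only that comparison step (feature extraction from the pretrained ResNet-101 is assumed done elsewhere); the helper names and threshold logic are illustrative, not from the paper:

```python
import numpy as np

def chi2_distance(f1, f2, eps=1e-10):
    """Chi-square distance between two non-negative feature vectors
    (e.g. ReLU activations from a pretrained CNN layer). Lower = more similar."""
    return 0.5 * np.sum((f1 - f2) ** 2 / (f1 + f2 + eps))

def cosine_similarity(f1, f2, eps=1e-10):
    """Cosine similarity between two feature vectors. Higher = more similar."""
    return f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + eps)

def verify(f1, f2, threshold, metric=cosine_similarity):
    """Accept the pair as a genuine match if the score clears the
    threshold (for chi2, lower is more similar, so the test inverts)."""
    score = metric(f1, f2)
    return score >= threshold if metric is cosine_similarity else score <= threshold
```

In practice one such comparison is run per network layer and the best-performing layer is kept.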

National subject category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-40499 (URN)
Conference
12th IAPR International Conference on Biometrics, Crete, Greece, June 4-7, 2019
Research funders
Vetenskapsrådet, 2016-03497; KK-stiftelsen, SIDUS-AIR; KK-stiftelsen, CAISR
Available from: 2019-09-04  Created: 2019-09-04  Last updated: 2019-10-11
Hernandez-Diaz, K., Alonso-Fernandez, F. & Bigun, J. (2019). Cross-Spectral Biometric Recognition with Pretrained CNNs as Generic Feature Extractors. In: : . Paper presented at Swedish Symposium on Image Analysis, SSBA, Gothenburg, Sweden, March 19-20, 2019.
2019 (English) Conference paper, Published paper (Other academic)
Abstract [en]

Periocular recognition has gained attention in recent years thanks to its high discrimination capabilities in less constrained scenarios than face or iris. In this paper we propose a method for periocular verification under different light spectra using CNN features, with the particularity that the network has not been trained for this purpose. We use a ResNet-101 model pretrained for the ImageNet Large Scale Visual Recognition Challenge to extract features from the IIITD Multispectral Periocular Database. At each layer the features are compared using the χ² distance and cosine similarity to carry out verification between images, achieving improvements in EER and in accuracy at 1% FAR of up to 63.13% and 24.79%, respectively, in comparison to previous works that employ the same database. In addition, we train a neural network to match the best CNN feature layer vector from each spectrum. With this procedure, we achieve improvements of up to 65% (EER) and 87% (accuracy at 1% FAR) in cross-spectral verification with respect to previous studies.

National subject category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-40625 (URN)
Conference
Swedish Symposium on Image Analysis, SSBA, Gothenburg, Sweden, March 19-20, 2019
Available from: 2019-09-24  Created: 2019-09-24  Last updated: 2019-12-09
Gonzalez-Sosa, E., Vera-Rodriguez, R., Fierrez, J., Alonso-Fernandez, F. & Patel, V. M. (2019). Exploring Body Texture From mmW Images for Person Recognition. IEEE Transactions on Biometrics, Behavior, and Identity Science, 1(2), 139-151
2019 (English) In: IEEE Transactions on Biometrics, Behavior, and Identity Science, E-ISSN 2637-6407, Vol. 1, no. 2, pp. 139-151. Article in journal (Refereed) Published
Abstract [en]

Imaging using millimeter waves (mmWs) has many advantages, including the ability to penetrate obscurants such as clothes and polymers. After having explored shape information retrieved from mmW images for person recognition, in this paper we aim to gain some insight into the potential of using mmW texture information for the same task, considering not only the mmW face but also the mmW torso and the mmW whole body. We report experimental results on the mmW TNO database, consisting of 50 individuals, based on both hand-crafted features and learned features from AlexNet and VGG-Face pretrained convolutional neural network (CNN) models. First, we analyze the individual performance of three mmW body parts, concluding that: 1) the mmW torso region is more discriminative than the mmW face and the whole body; 2) CNN features produce better results than hand-crafted features on mmW faces and the entire body; and 3) hand-crafted features slightly outperform CNN features on the mmW torso. In the second part of this paper, we analyze different multi-algorithmic and multi-modal techniques, including a novel CNN-based fusion technique, improving verification results to 2% EER and identification rank-1 results up to 99%. Comparative analyses with mmW body shape information and with face recognition in the visible and NIR spectral bands are also reported.
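The paper's novelty is fusion at the CNN level; as a hedged illustration of the simpler multi-algorithmic baseline such techniques are usually compared against, the sketch below shows weighted-sum fusion of per-matcher scores after min-max normalisation. The function name and the equal-weight default are assumptions, not the paper's method:

```python
import numpy as np

def fuse_scores(scores, weights=None):
    """Weighted-sum fusion of similarity scores from several matchers
    (e.g. face / torso / whole-body comparators), after per-matcher
    min-max normalisation so the scores live on a common [0, 1] scale.
    scores: array of shape (n_matchers, n_pairs)."""
    scores = np.asarray(scores, dtype=float)
    lo = scores.min(axis=1, keepdims=True)
    hi = scores.max(axis=1, keepdims=True)
    norm = (scores - lo) / np.where(hi > lo, hi - lo, 1.0)
    if weights is None:                        # equal weights by default
        weights = np.full(len(scores), 1.0 / len(scores))
    return weights @ norm                      # fused score per pair
```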

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2019
Keywords
mmW imaging, body texture information, border control security, hand-crafted features, deep learning features, CNN-level multimodal fusion, body parts
National subject category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-40622 (URN); 10.1109/TBIOM.2019.2906367 (DOI)
Projects
KK-CAISR; KK-SIDUS AIR
Research funders
Vetenskapsrådet; KK-stiftelsen
Note

Funding: This work was supported in part by the Project CogniMetrics through MINECO/FEDER under Grant TEC2015-70627-R, and in part by the SPATEK Network under Grant TEC2015-68766-REDC. The work of E. Gonzalez-Sosa was supported by the Ph.D. Scholarship from Universidad Autonoma de Madrid. The work of F. Alonso-Fernandez was supported in part by the Swedish Research Council, in part by the CAISR Program, and in part by the SIDUS-AIR Project of the Swedish Knowledge Foundation. The work of V. M. Patel was supported in part by the U.S. Office of Naval Research under Grant YIP N00014-16-1-3134.

Available from: 2019-09-24  Created: 2019-09-24  Last updated: 2019-09-25  Bibliographically reviewed
Krish, R. P., Fierrez, J., Ramos, D., Alonso-Fernandez, F. & Bigun, J. (2019). Improving Automated Latent Fingerprint Identification Using Extended Minutia Types. Information Fusion, 50, 9-19
2019 (English) In: Information Fusion, ISSN 1566-2535, E-ISSN 1872-6305, Vol. 50, pp. 9-19. Article in journal (Refereed) Published
Abstract [en]

Latent fingerprints are usually processed with Automated Fingerprint Identification Systems (AFIS) by law enforcement agencies to narrow down possible suspects from a criminal database. AFIS do not commonly use all discriminatory features available in fingerprints, but typically only some types of features automatically extracted by a feature extraction algorithm. In this work, we explore ways to improve the rank identification accuracy of AFIS when only a partial latent fingerprint is available. Towards this goal, we propose a method that exploits extended fingerprint features (unusual/rare minutiae) not commonly considered in AFIS. This new method can be combined with any existing minutiae-based matcher. We first compute a similarity score based on least squares between latent and tenprint minutiae points, with rare minutiae features as reference points. Then the similarity score of the reference minutiae-based matcher at hand is modified based on the fitting error from the least-squares similarity stage. Our experiments use a realistic forensic fingerprint casework database containing rare minutiae features, obtained from Guardia Civil, the Spanish law enforcement agency. Experiments are conducted using three minutiae-based matchers as reference, namely NIST-Bozorth3, VeriFinger-SDK and MCC-SDK. We report significant improvements in rank identification accuracy when these minutiae matchers are augmented with our proposed algorithm based on rare minutiae features. © 2018 Elsevier B.V.
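The least-squares stage described above can be sketched as an orthogonal-Procrustes fit after translating both minutiae sets so the rare (reference) minutia sits at the origin. This is an illustration under simplifying assumptions: point correspondences are given, the fitted matrix is not constrained to exclude reflections, and the score-adjustment form with its `alpha` weight is hypothetical, not taken from the paper:

```python
import numpy as np

def lsq_fitting_error(latent_pts, tenprint_pts, latent_ref, tenprint_ref):
    """Align two corresponding 2-D minutiae point sets using a rare
    minutia as reference: translate both sets so the reference sits at
    the origin, solve the least-squares rotation (orthogonal Procrustes),
    and return the residual fitting error."""
    A = latent_pts - latent_ref        # (n, 2), latent frame
    B = tenprint_pts - tenprint_ref    # (n, 2), tenprint frame
    U, _, Vt = np.linalg.svd(B.T @ A)  # Procrustes: min_R ||B R - A||_F
    R = U @ Vt
    return np.linalg.norm(B @ R - A)

def adjusted_score(base_score, error, alpha=0.01):
    """Attenuate the reference matcher's score by the fitting error
    (alpha is an illustrative weighting, not from the paper)."""
    return base_score / (1.0 + alpha * error)
```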

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2019
Keywords
Latent fingerprints, forensics, extended feature sets, rare minutiae features
National subject category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38113 (URN); 10.1016/j.inffus.2018.10.001 (DOI); 2-s2.0-85054739072 (Scopus ID)
Project
BBfor2
Research funders
EU, FP7, Seventh Framework Programme, FP7-ITN-238803; KK-stiftelsen, SIDUS-AIR; KK-stiftelsen, CAISR
Note

For most of this work, R.K. was supported by a Marie Curie Fellowship under project BBfor2 from the European Commission (FP7-ITN-238803). This work has also been partially supported by the Spanish Guardia Civil and by project CogniMetrics (TEC2015-70627-R) from Spanish MINECO/FEDER. The researchers from Halmstad University acknowledge funding from the KK-SIDUS-AIR project and the CAISR program in Sweden.

Available from: 2018-10-08  Created: 2018-10-08  Last updated: 2019-04-10  Bibliographically reviewed
Ribeiro, E., Uhl, A. & Alonso-Fernandez, F. (2019). Iris Super-Resolution using CNNs: is Photo-Realism Important to Iris Recognition?. IET Biometrics, 8(1), 69-78
2019 (English) In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 8, no. 1, pp. 69-78. Article in journal (Refereed) Published
Abstract [en]

The use of low-resolution images, adopting more relaxed acquisition conditions such as mobile phones and surveillance videos, is becoming increasingly common in iris recognition nowadays. Concurrently, a great variety of single-image super-resolution techniques are emerging, especially with the use of convolutional neural networks (CNNs). The main objective of these methods is to recover finer texture details, generating more photo-realistic images based on the optimization of an objective function that depends basically on the CNN architecture and the training approach. In this work, we explore single-image super-resolution using CNNs for iris recognition. For this, we test different CNN architectures as well as the use of different training databases, validating our approach on a database of 1,872 near-infrared iris images and on a mobile phone image database. We also use quality assessment, visual results and recognition experiments to verify whether the photo-realism provided by the CNNs, which has already proven effective for natural images, is reflected in a better recognition rate for iris recognition. The results show that using deeper architectures trained with texture databases, providing a balance between edge preservation and smoothness, can lead to good results in the iris recognition process.

Place, publisher, year, edition, pages
Stevenage: Institution of Engineering and Technology, 2019
National subject category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-36650 (URN); 10.1049/iet-bmt.2018.5146 (DOI)
Note

Funding: CNPq-Brazil for Eduardo Ribeiro under grant No. 00736/2014-0.

Available from: 2018-04-20  Created: 2018-04-20  Last updated: 2018-12-17  Bibliographically reviewed
Ribeiro, E., Uhl, A. & Alonso-Fernandez, F. (2019). Super-Resolution and Image Re-Projection for Iris Recognition. In: 2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA): . Paper presented at Fifth IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Hyderabad, India, 22-24 January, 2019 (pp. 1-7).
2019 (English) In: 2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA), 2019, pp. 1-7. Conference paper, Published paper (Refereed)
Abstract [en]

Several recent works have addressed the ability of deep learning to disclose rich, hierarchical and discriminative models for the most diverse purposes. Specifically in the super-resolution field, Convolutional Neural Networks (CNNs) using different deep learning approaches attempt to recover realistic texture and fine-grained details from low-resolution images. In this work we explore the viability of these approaches for iris super-resolution (SR) in an iris recognition environment. For this, we test different architectures, with and without a so-called image re-projection to reduce artifacts, applying them to different iris databases to verify the viability of the different CNNs for iris super-resolution. Results show that CNNs and image re-projection can improve the results, especially the accuracy of recognition systems, even when using a completely different training database, thus performing transfer learning successfully.
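A common way to realise the re-projection constraint mentioned above is iterative back-projection: downsample the SR estimate, compare it with the observed LR image, and feed the residual back into the estimate. The sketch below assumes a simple block-average camera model; the function names and the step size `lam` are illustrative, not the paper's exact formulation:

```python
import numpy as np

def downsample(img, factor):
    """Block-average downsampling (a stand-in for the camera model)."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def reproject(sr, lr, factor, steps=5, lam=1.0):
    """Iteratively correct an SR estimate so that its downsampled
    version agrees with the observed LR image (back-projection)."""
    sr = sr.copy()
    for _ in range(steps):
        residual = lr - downsample(sr, factor)               # LR-domain error
        sr += lam * np.kron(residual, np.ones((factor, factor)))  # upsample & add
    return sr
```

With this block-average model and `lam=1.0`, a single step already makes the SR estimate exactly consistent with the LR observation.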

Series
IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), ISSN 2640-5555, E-ISSN 2640-0790 ; 5
Keywords
Iris recognition, databases, image resolution, training, deep learning, image reconstruction, image recognition
National subject category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38507 (URN); 10.1109/ISBA.2019.8778581 (DOI); 978-1-7281-0532-1 (ISBN); 978-1-7281-0531-4 (ISBN); 978-1-7281-0533-8 (ISBN)
Conference
Fifth IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Hyderabad, India, 22-24 January, 2019
Research funder
EU, Horizon 2020, 700259
Anmärkning

Funding: The European Union's Horizon 2020 research and innovation programme under grant agreement No 700259. This research was partially supported by CNPq-Brazil for Eduardo Ribeiro under grant No. 00736/2014-0.

Available from: 2018-12-06  Created: 2018-12-06  Last updated: 2019-08-15  Bibliographically reviewed
Alonso-Fernandez, F., Farrugia, R. A., Fierrez, J. & Bigun, J. (2019). Super-Resolution for Selfie Biometrics: Introduction and Application to Face and Iris (1ed.). In: Ajita Rattani, Reza Derakhshani & Arun A. Ross (Ed.), Selfie Biometrics: Advances and Challenges (pp. 105-128). Cham: Springer
2019 (English) In: Selfie Biometrics: Advances and Challenges / [ed] Ajita Rattani, Reza Derakhshani & Arun A. Ross, Cham: Springer, 2019, 1st ed., pp. 105-128. Chapter in book (Refereed)
Abstract [en]

Biometric research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this chapter, we give an overview of recent research in super-resolution reconstruction applied to biometrics, with a focus on face and iris images in the visible spectrum, two prevalent modalities in selfie biometrics. After an introduction to the generic topic of super-resolution, we investigate methods adapted to cater for the particularities of these two modalities. Through experiments, we show the benefits of incorporating super-resolution to improve the quality of biometric images prior to recognition. © Springer Nature AG 2019

Place, publisher, year, edition, pages
Cham: Springer, 2019. Edition: 1
Series
Advances in Computer Vision and Pattern Recognition, ISSN 2191-6586, E-ISSN 2191-6594 ; 77
National subject category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38508 (URN); 10.1007/978-3-030-26972-2_5 (DOI); 978-3-030-26971-5 (ISBN); 978-3-030-26972-2 (ISBN)
Project
SIDUS-AIR
Research funders
Vetenskapsrådet; Vinnova; KK-stiftelsen
Note

Other funder: CogniMetrics (TEC2015-70627-R) from MINECO/FEDER

Available from: 2018-12-06  Created: 2018-12-06  Last updated: 2019-10-16  Bibliographically reviewed
Varytimidis, D., Alonso-Fernandez, F., Englund, C. & Duran, B. (2018). Action and intention recognition of pedestrians in urban traffic. In: Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir (Ed.), 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS): . Paper presented at The 14th International Conference on Signal Image Technology & Internet Based Systems (SITIS), Hotel Reina Isabel, Las Palmas de Gran Canaria, Spain, 26-29 November, 2018 (pp. 676-682). Piscataway, N.J.: IEEE
2018 (English) In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir, Piscataway, N.J.: IEEE, 2018, pp. 676-682. Conference paper, Published paper (Refereed)
Abstract [en]

Action and intention recognition of pedestrians in urban settings are challenging problems for Advanced Driver Assistance Systems as well as for future autonomous vehicles, which must maintain smooth and safe traffic. This work investigates a number of feature extraction methods in combination with several machine learning algorithms to build knowledge on how to automatically detect the action and intention of pedestrians in urban traffic. We focus on motion and head orientation to predict whether the pedestrian is about to cross the street or not. The work is based on the Joint Attention for Autonomous Driving (JAAD) dataset, which contains 346 video clips of various traffic scenarios captured with cameras mounted in the windshield of a car. An accuracy of 72% for head orientation estimation and 85% for motion detection is obtained in our experiments.
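As a toy illustration of the pipeline above (per-frame features such as head orientation and motion magnitude fed to a classifier predicting whether the pedestrian will cross), the sketch below trains a minimal logistic-regression model. It is a stand-in for the several machine learning algorithms actually compared in the paper; all names and data are hypothetical:

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Minimal logistic-regression trainer by batch gradient descent.
    X: (n, d) per-frame features, e.g. [head_orientation, motion_magnitude];
    y: (n,) binary labels, 1 = pedestrian crossed."""
    X = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted P(cross)
        w -= lr * X.T @ (p - y) / len(y)       # cross-entropy gradient step
    return w

def predict_crossing(w, x):
    """True if the model estimates P(cross) >= 0.5 for feature vector x."""
    x = np.append(x, 1.0)
    return 1.0 / (1.0 + np.exp(-x @ w)) >= 0.5
```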

Place, publisher, year, edition, pages
Piscataway, N.J.: IEEE, 2018
Keywords
Action recognition, intention recognition, pedestrian, traffic, driver assistance
National subject category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38504 (URN); 10.1109/SITIS.2018.00109 (DOI); 978-1-5386-9385-8 (ISBN); 978-1-5386-9386-5 (ISBN)
Conference
The 14th International Conference on Signal Image Technology & Internet Based Systems (SITIS), Hotel Reina Isabel, Las Palmas de Gran Canaria, Spain, 26-29 November, 2018
Project
SIDUS AIR
Research funders
KK-stiftelsen, 20140220; Vetenskapsrådet; Vinnova
Note

Funding: This work is financed by the SIDUS AIR project of the Swedish Knowledge Foundation under grant agreement number 20140220. Author F. A.-F. also thanks the Swedish Research Council (VR) and Sweden's innovation agency (VINNOVA).

Available from: 2018-12-06  Created: 2018-12-06  Last updated: 2019-05-16  Bibliographically reviewed
Menezes, M. L., Pinheiro Sant'Anna, A., Pavel, M., Jimison, H. & Alonso-Fernandez, F. (2018). Affective Ambient Intelligence: from Domotics to Ambient Intelligence. In: A2IC 2018: Artificial Intelligence International Conference: Book of Abstract. Paper presented at Artificial Intelligence International Conference, A2IC 2018, November 21-23, 2018, Barcelona, Spain (pp. 25-25).
2018 (English) In: A2IC 2018: Artificial Intelligence International Conference: Book of Abstract, 2018, pp. 25-25. Conference paper, Oral presentation with published abstract (Refereed)
National subject category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38503 (URN)
Conference
Artificial Intelligence International Conference, A2IC 2018, November 21-23, 2018, Barcelona, Spain
Available from: 2018-12-06  Created: 2018-12-06  Last updated: 2018-12-06  Bibliographically reviewed
Projects
Bio-distance: Biometrics at a distance [2009-07215_VR]; Halmstad University
Face detection and robust recognition with respect to image deformations [2012-04313_VR]; Halmstad University
Ocular biometrics in natural environments [2016-03497_VR]; Halmstad University; Publications
Gonzalez-Sosa, E., Vera-Rodriguez, R., Fierrez, J., Alonso-Fernandez, F. & Patel, V. M. (2019). Exploring Body Texture From mmW Images for Person Recognition. IEEE Transactions on Biometrics, Behavior, and Identity Science, 1(2), 139-151
Continuous multimodal biometric identification for vehicles and surveillance [2018-00472_Vinnova]; Halmstad University
Establishing identity and person behaviour through smartphone [2018-04347_Vinnova]; Halmstad University
Organisations
Identifiers
ORCID iD: orcid.org/0000-0002-1400-346X
