hh.se Publications
Alonso-Fernandez, Fernando (ORCID iD: orcid.org/0000-0002-1400-346X)
Publications (10 of 97)
Alonso-Fernandez, F., Farrugia, R. A., Bigun, J., Fierrez, J. & Gonzalez-Sosa, E. (2019). A Survey of Super-Resolution in Iris Biometrics with Evaluation of Dictionary-Learning. IEEE Access, 7, 6519-6544
A Survey of Super-Resolution in Iris Biometrics with Evaluation of Dictionary-Learning
2019 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 6519-6544. Article in journal (Refereed), Published
Abstract [en]

The lack of resolution has a negative impact on the performance of image-based biometrics. While many generic super-resolution methods have been proposed to restore low-resolution images, they usually aim to enhance their visual appearance. However, an overall visual enhancement of biometric images does not necessarily correlate with better recognition performance. Reconstruction approaches thus need to incorporate specific information from the target biometric modality to effectively improve recognition performance. This paper presents a comprehensive survey of iris super-resolution approaches proposed in the literature. We have also adapted an eigen-patches reconstruction method based on the PCA eigentransformation of local image patches. The structure of the iris is exploited by building a patch-position-dependent dictionary. In addition, image patches are restored separately, each with its own reconstruction weights. This allows the solution to be locally optimized, helping to preserve local information. To evaluate the algorithm, we degraded high-resolution images from the CASIA Interval V3 database. Different restorations were considered, with 15 × 15 pixels being the smallest resolution evaluated. To the best of our knowledge, this is among the smallest resolutions employed in the literature. The experimental framework is complemented with six publicly available iris comparators, which were used to carry out biometric verification and identification experiments. Experimental results show that the proposed method significantly outperforms both bilinear and bicubic interpolation at very low resolutions. A number of comparators attain an impressive Equal Error Rate as low as 5% and a Top-1 accuracy of 77-84% when considering iris images of only 15 × 15 pixels. These results clearly demonstrate the benefit of using trained super-resolution techniques to improve the quality of iris images prior to matching. © 2018 IEEE
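The patch-wise eigentransformation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the coupled low/high-resolution dictionaries, their sizes, and the function name are hypothetical, and the weights are obtained here with a plain least-squares fit over the centered low-resolution training patches.

```python
import numpy as np

def eigen_patch_reconstruct(lr_patch, lr_dict, hr_dict):
    """Hallucinate a high-resolution patch from a low-resolution one.

    lr_dict / hr_dict hold coupled training patches (one row per patch,
    flattened) for ONE patch position, i.e. a position-dependent dictionary.
    The input is expressed as a linear combination of the centered low-res
    training patches; the same weights are then applied to the high-res
    counterparts (the eigentransformation idea).
    """
    lr_mean = lr_dict.mean(axis=0)
    hr_mean = hr_dict.mean(axis=0)
    L = (lr_dict - lr_mean).T                      # (d_lr, n_train)
    H = (hr_dict - hr_mean).T                      # (d_hr, n_train)
    w, *_ = np.linalg.lstsq(L, lr_patch - lr_mean, rcond=None)
    return hr_mean + H @ w                         # reconstructed HR patch
```

Because each patch position has its own dictionary and its own weights, the reconstruction is locally optimized, which is what lets the method preserve local iris texture.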

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2019
Keywords
Iris hallucination, iris recognition, eigen-patch, super-resolution, PCA
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38659 (URN); 10.1109/ACCESS.2018.2889395 (DOI); 2-s2.0-85059007584 (Scopus ID)
Funder
Swedish Research Council, 2016-03497; EU, FP7, Seventh Framework Programme, COST IC1106; Knowledge Foundation, SIDUS-AIR; Knowledge Foundation, CAISR; VINNOVA, 2018-00472
Available from: 2018-12-20. Created: 2018-12-20. Last updated: 2019-01-25. Bibliographically approved.
Krish, R. P., Fierrez, J., Ramos, D., Alonso-Fernandez, F. & Bigun, J. (2019). Improving Automated Latent Fingerprint Identification Using Extended Minutia Types. Information Fusion, 50, 9-19
Improving Automated Latent Fingerprint Identification Using Extended Minutia Types
2019 (English). In: Information Fusion, ISSN 1566-2535, E-ISSN 1872-6305, Vol. 50, p. 9-19. Article in journal (Refereed), Published
Abstract [en]

Latent fingerprints are usually processed with Automated Fingerprint Identification Systems (AFIS) by law enforcement agencies to narrow down possible suspects from a criminal database. AFIS do not commonly use all discriminatory features available in fingerprints, but typically only some types of features automatically extracted by a feature extraction algorithm. In this work, we explore ways to improve the rank identification accuracy of AFIS when only a partial latent fingerprint is available. Towards solving this challenge, we propose a method that exploits extended fingerprint features (unusual/rare minutiae) not commonly considered in AFIS. This new method can be combined with any existing minutiae-based matcher. We first compute a similarity score based on least squares between latent and tenprint minutiae points, with rare minutiae features as reference points. The similarity score of the reference minutiae-based matcher at hand is then modified based on the fitting error from the least-squares similarity stage. Our experiments use a realistic forensic fingerprint casework database containing rare minutiae features, obtained from Guardia Civil, the Spanish law enforcement agency. Experiments are conducted using three minutiae-based matchers as reference, namely NIST-Bozorth3, VeriFinger-SDK and MCC-SDK. We report significant improvements in rank identification accuracy when these minutiae matchers are augmented with our proposed algorithm based on rare minutiae features. © 2018 Elsevier B.V.
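The least-squares stage can be illustrated with a toy rigid alignment of matched minutia coordinates: the rare minutiae act as the matched reference points, and the residual after the best rotation-plus-translation fit is the "fitting error" used to adjust the matcher's score. This is a hedged sketch under simplifying assumptions (2D points, known correspondences, no scaling); function and variable names are illustrative, not from the paper.

```python
import math

def align_error(ref_pts, qry_pts):
    """Least-squares rigid alignment (rotation + translation) of query
    minutiae onto reference minutiae, given matched point pairs; returns
    the RMS fitting error after the optimal alignment."""
    n = len(ref_pts)
    rcx = sum(p[0] for p in ref_pts) / n; rcy = sum(p[1] for p in ref_pts) / n
    qcx = sum(p[0] for p in qry_pts) / n; qcy = sum(p[1] for p in qry_pts) / n
    # cross-covariance terms of the centered point sets give the best rotation
    a = b = 0.0
    for (rx, ry), (qx, qy) in zip(ref_pts, qry_pts):
        rx -= rcx; ry -= rcy; qx -= qcx; qy -= qcy
        a += qx * rx + qy * ry          # cosine component
        b += qx * ry - qy * rx          # sine component
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)
    # apply the recovered rotation, re-anchor at the reference centroid,
    # and accumulate the squared residuals
    err = 0.0
    for (rx, ry), (qx, qy) in zip(ref_pts, qry_pts):
        qx -= qcx; qy -= qcy
        tx = c * qx - s * qy + rcx
        ty = s * qx + c * qy + rcy
        err += (tx - rx) ** 2 + (ty - ry) ** 2
    return math.sqrt(err / n)
```

A small fitting error suggests the latent and tenprint minutiae are geometrically consistent, so the reference matcher's similarity score can be boosted; a large error argues against the match.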

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2019
Keywords
Latent Fingerprints, Forensics, Extended Feature Sets, Rare minutiae features
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38113 (URN); 10.1016/j.inffus.2018.10.001 (DOI); 2-s2.0-85054739072 (Scopus ID)
Projects
BBfor2
Funder
EU, FP7, Seventh Framework Programme, FP7-ITN-238803; Knowledge Foundation, SIDUS-AIR; Knowledge Foundation, CAISR
Note

For most of this work, R.K. was supported by a Marie Curie Fellowship under project BBfor2 from the European Commission (FP7-ITN-238803). This work has also been partially supported by the Spanish Guardia Civil and by project CogniMetrics (TEC2015-70627-R) from Spanish MINECO/FEDER. The researchers from Halmstad University acknowledge funding from the KK-SIDUS-AIR project and the CAISR program in Sweden.

Available from: 2018-10-08. Created: 2018-10-08. Last updated: 2019-04-10. Bibliographically approved.
Ribeiro, E., Uhl, A. & Alonso-Fernandez, F. (2019). Iris Super-Resolution using CNNs: is Photo-Realism Important to Iris Recognition? IET Biometrics, 8(1), 69-78
Iris Super-Resolution using CNNs: is Photo-Realism Important to Iris Recognition?
2019 (English). In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 8, no 1, p. 69-78. Article in journal (Refereed), Published
Abstract [en]

The use of low-resolution images acquired under more relaxed conditions, such as with mobile phones and surveillance videos, is becoming increasingly common in iris recognition. Concurrently, a great variety of single-image super-resolution techniques are emerging, especially with the use of convolutional neural networks (CNNs). The main objective of these methods is to recover finer texture details, generating more photo-realistic images based on the optimization of an objective function that depends essentially on the CNN architecture and the training approach. In this work, we explore single-image super-resolution using CNNs for iris recognition. For this, we test different CNN architectures as well as different training databases, validating our approach on a database of 1,872 near-infrared iris images and on a mobile phone image database. We also use quality assessment, visual results and recognition experiments to verify whether the photo-realism provided by CNNs, which have already proven effective for natural images, translates into a better recognition rate for iris recognition. The results show that using deeper architectures trained with texture databases, providing a balance between edge preservation and smoothness, can lead to good results in the iris recognition process. © The Institution of Engineering and Technology 2015
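On the quality-assessment side, the standard photo-realism proxy that such studies contrast with recognition accuracy is peak signal-to-noise ratio (PSNR). A generic sketch (not the paper's evaluation code; images are assumed to be flat sequences of 8-bit pixel values):

```python
import math

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio between two equal-sized images given as
    flat sequences of pixel values; higher means closer to the reference."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

Note that a higher PSNR does not necessarily imply better iris recognition; that gap between photo-realism and biometric utility is precisely the question the paper examines.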

Place, publisher, year, edition, pages
Stevenage: Institution of Engineering and Technology, 2019
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-36650 (URN); 10.1049/iet-bmt.2018.5146 (DOI)
Note

Funding: CNPq-Brazil for Eduardo Ribeiro under grant No. 00736/2014-0.

Available from: 2018-04-20. Created: 2018-04-20. Last updated: 2018-12-17. Bibliographically approved.
Ribeiro, E., Uhl, A. & Alonso-Fernandez, F. (2019). Super-Resolution and Image Re-Projection for Iris Recognition. In: 2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA). Paper presented at Fifth IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Hyderabad, India, 22-24 January, 2019 (pp. 1-7).
Super-Resolution and Image Re-Projection for Iris Recognition
2019 (English). In: 2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA), 2019, p. 1-7. Conference paper, Published paper (Refereed)
Abstract [en]

Several recent works have addressed the ability of deep learning to produce rich, hierarchical and discriminative models for the most diverse purposes. Specifically, in the super-resolution field, Convolutional Neural Networks (CNNs) based on different deep learning approaches attempt to recover realistic texture and fine-grained details from low-resolution images. In this work, we explore the viability of these approaches for iris Super-Resolution (SR) in an iris recognition environment. To this end, we test different architectures, with and without a so-called image re-projection step to reduce artifacts, applying them to different iris databases. Results show that CNNs and image re-projection can improve results, especially the accuracy of recognition systems, even when a completely different training database is used, thus performing transfer learning successfully.

Series
IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), ISSN 2640-5555, E-ISSN 2640-0790 ; 5
Keywords
Iris recognition, Databases, Image resolution, Training, Deep learning, Image reconstruction, Image recognition
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38507 (URN); 10.1109/ISBA.2019.8778581 (DOI); 978-1-7281-0532-1 (ISBN); 978-1-7281-0531-4 (ISBN); 978-1-7281-0533-8 (ISBN)
Conference
Fifth IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), Hyderabad, India, 22-24 January, 2019
Funder
EU, Horizon 2020, 700259
Note

Funding: European Union's Horizon 2020 research and innovation programme under grant agreement No 700259. This research was partially supported by CNPq-Brazil for Eduardo Ribeiro under grant No. 00736/2014-0.

Available from: 2018-12-06. Created: 2018-12-06. Last updated: 2019-08-15. Bibliographically approved.
Alonso-Fernandez, F., Farrugia, R. A., Fierrez, J. & Bigun, J. (2019). Super-Resolution for Selfie Biometrics: Introduction and Application to Face and Iris. In: Ajita Rattani, Arun Ross (Ed.), Selfie Biometrics. Springer
Super-Resolution for Selfie Biometrics: Introduction and Application to Face and Iris
2019 (English). In: Selfie Biometrics / [ed] Ajita Rattani, Arun Ross, Springer, 2019. Chapter in book (Refereed)
Place, publisher, year, edition, pages
Springer, 2019
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38508 (URN)
Projects
SIDUS-AIR
Funder
Swedish Research Council; Vinnova; Knowledge Foundation
Available from: 2018-12-06. Created: 2018-12-06. Last updated: 2019-03-22.
Varytimidis, D., Alonso-Fernandez, F., Englund, C. & Duran, B. (2018). Action and intention recognition of pedestrians in urban traffic. In: Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir (Ed.), 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). Paper presented at The 14th International Conference on Signal Image Technology & Internet Based Systems (SITIS), Hotel Reina Isabel, Las Palmas de Gran Canaria, Spain, 26-29 November, 2018 (pp. 676-682). Piscataway, N.J.: IEEE
Action and intention recognition of pedestrians in urban traffic
2018 (English). In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir, Piscataway, N.J.: IEEE, 2018, p. 676-682. Conference paper, Published paper (Refereed)
Abstract [en]

Action and intention recognition of pedestrians in urban settings is a challenging problem for Advanced Driver Assistance Systems as well as for future autonomous vehicles that must maintain smooth and safe traffic. This work investigates a number of feature extraction methods in combination with several machine learning algorithms to build knowledge on how to automatically detect the action and intention of pedestrians in urban traffic. We focus on motion and head orientation to predict whether a pedestrian is about to cross the street. The work is based on the Joint Attention for Autonomous Driving (JAAD) dataset, which contains 346 video clips of various traffic scenarios captured with cameras mounted in the windshield of a car. An accuracy of 72% for head orientation estimation and 85% for motion detection is obtained in our experiments.

Place, publisher, year, edition, pages
Piscataway, N.J.: IEEE, 2018
Keywords
Action Recognition, Intention Recognition, Pedestrian, Traffic, Driver Assistance
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38504 (URN); 10.1109/SITIS.2018.00109 (DOI); 978-1-5386-9385-8 (ISBN); 978-1-5386-9386-5 (ISBN)
Conference
The 14th International Conference on Signal Image Technology & Internet Based Systems (SITIS), Hotel Reina Isabel, Las Palmas de Gran Canaria, Spain, 26-29 November, 2018
Projects
SIDUS AIR
Funder
Knowledge Foundation, 20140220; Swedish Research Council; Vinnova
Note

Funding: This work is financed by the SIDUS AIR project of the Swedish Knowledge Foundation under grant agreement number 20140220. Author F. A.-F. also thanks the Swedish Research Council (VR) and Sweden's innovation agency (VINNOVA).

Available from: 2018-12-06. Created: 2018-12-06. Last updated: 2019-05-16. Bibliographically approved.
Menezes, M. L., Pinheiro Sant'Anna, A., Pavel, M., Jimison, H. & Alonso-Fernandez, F. (2018). Affective Ambient Intelligence: from Domotics to Ambient Intelligence. In: A2IC 2018: Artificial Intelligence International Conference: Book of Abstracts. Paper presented at Artificial Intelligence International Conference, A2IC 2018, November 21-23, 2018, Barcelona, Spain (pp. 25-25).
Affective Ambient Intelligence: from Domotics to Ambient Intelligence
2018 (English). In: A2IC 2018: Artificial Intelligence International Conference: Book of Abstracts, 2018, p. 25-25. Conference paper, Oral presentation with published abstract (Refereed)
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38503 (URN)
Conference
Artificial Intelligence International Conference, A2IC 2018, November 21-23, 2018, Barcelona, Spain
Available from: 2018-12-06. Created: 2018-12-06. Last updated: 2018-12-06. Bibliographically approved.
Alonso-Fernandez, F., Bigun, J. & Englund, C. (2018). Expression Recognition Using the Periocular Region: A Feasibility Study. In: Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir (Ed.), 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). Paper presented at The 14th International Conference on Signal Image Technology & Internet Based Systems, SITIS 2018, Las Palmas de Gran Canaria, Spain, 26-29 November, 2018 (pp. 536-541). Los Alamitos: IEEE Computer Society
Expression Recognition Using the Periocular Region: A Feasibility Study
2018 (English). In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir, Los Alamitos: IEEE Computer Society, 2018, p. 536-541. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates the feasibility of using the periocular region for expression recognition. Most works have addressed this task by analyzing the whole face. The periocular region is the facial area in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data comprising spatio-temporal information that is often not available.

Place, publisher, year, edition, pages
Los Alamitos: IEEE Computer Society, 2018
Keywords
Expression Recognition, Emotion Recognition, Periocular Analysis, Periocular Descriptor
National Category
Signal Processing; Computer Vision and Robotics (Autonomous Systems); Medical Image Processing
Identifiers
urn:nbn:se:hh:diva-38505 (URN); 978-1-5386-9385-8 (ISBN); 978-1-5386-9386-5 (ISBN)
Conference
The 14th International Conference on Signal Image Technology & Internet Based Systems, SITIS 2018, Las Palmas de Gran Canaria, Spain, 26-29 November, 2018
Projects
SIDUS-AIR
Funder
Swedish Research Council; Knowledge Foundation
Note

Funding: Author F. A.-F. thanks the Swedish Research Council for funding his research. Authors acknowledge the CAISR program and the SIDUS-AIR project of the Swedish Knowledge Foundation.

Available from: 2018-12-06. Created: 2018-12-06. Last updated: 2019-05-16. Bibliographically approved.
Gonzalez-Sosa, E., Fierrez, J., Vera-Rodriguez, R. & Alonso-Fernandez, F. (2018). Facial Soft Biometrics for Recognition in the Wild: Recent Works, Annotation and Evaluation. IEEE Transactions on Information Forensics and Security, 13(8), 2001-2014
Facial Soft Biometrics for Recognition in the Wild: Recent Works, Annotation and Evaluation
2018 (English). In: IEEE Transactions on Information Forensics and Security, ISSN 1556-6013, E-ISSN 1556-6021, Vol. 13, no 8, p. 2001-2014. Article in journal (Refereed), Published
Abstract [en]

The role of soft biometrics to enhance person recognition systems in unconstrained scenarios has not been extensively studied. Here, we explore the utility of the following modalities: gender, ethnicity, age, glasses, beard, and moustache. We consider two assumptions: 1) manual estimation of soft biometrics and 2) automatic estimation from two commercial off-the-shelf systems (COTS). All experiments are reported using the labeled faces in the wild (LFW) database. First, we study the discrimination capabilities of soft biometrics standalone. Then, experiments are carried out fusing soft biometrics with two state-of-the-art face recognition systems based on deep learning. We observe that soft biometrics is a valuable complement to the face modality in unconstrained scenarios, with relative improvements up to 40%/15% in the verification performance when using manual/automatic soft biometrics estimation. Results are reproducible as we make public our manual annotations and COTS outputs of soft biometrics over LFW, as well as the face recognition scores. © 2018 IEEE.
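The fusion of face scores with soft-biometric information can be sketched as a simple weighted sum rule over normalized scores. This is an illustrative assumption, not the paper's scheme: the actual fusion approach and weights in the study are tuned on data, and the names and default weight here are hypothetical.

```python
def fuse_scores(face_score, soft_scores, w_face=0.7):
    """Sum-rule score fusion: blend a face-matcher similarity score with
    the mean of soft-biometric agreement scores (e.g. gender, ethnicity,
    age, glasses, beard, moustache), all normalized to [0, 1].
    A higher fused score means the comparison is more likely mated."""
    soft = sum(soft_scores) / len(soft_scores)
    return w_face * face_score + (1.0 - w_face) * soft
```

Even such a simple rule conveys the point of the paper: when the face score is unreliable (unconstrained imagery), agreement on several soft traits can tip a borderline comparison the right way.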

Place, publisher, year, edition, pages
Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
Soft biometrics, hard biometrics, commercial systems, unconstrained scenarios
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-36651 (URN); 10.1109/TIFS.2018.2807791 (DOI); 000429228800010; 2-s2.0-85039786512 (Scopus ID)
Projects
SIDUS-AIR
Funder
Swedish Research Council; Knowledge Foundation
Note

Funded in part by the Spanish Guardia Civil and the project CogniMetrics from MINECO/FEDER under Grant TEC2015-70627-R and in part by the Imperial College London under Grant PRX16/00580. The work of E. Gonzalez-Sosa was supported by a Ph.D. Scholarship from the Universidad Autonoma de Madrid. The work of F. Alonso-Fernandez was supported in part by the Swedish Research Council, in part by the CAISR program, and in part by the SIDUS-AIR project of the Swedish Knowledge Foundation. 

Available from: 2018-04-20. Created: 2018-04-20. Last updated: 2018-04-23. Bibliographically approved.
Femling, F., Olsson, A. & Alonso-Fernandez, F. (2018). Fruit and Vegetable Identification Using Machine Learning for Retail Application. In: Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir (Ed.), 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). Paper presented at The 14th International Conference on Signal Image Technology & Internet based Systems, Las Palmas de Gran Canaria, Spain, 26-29 November, 2018 (pp. 9-15). Los Alamitos: IEEE Computer Society
Fruit and Vegetable Identification Using Machine Learning for Retail Application
2018 (English). In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir, Los Alamitos: IEEE Computer Society, 2018, p. 9-15. Conference paper, Published paper (Refereed)
Abstract [en]

This paper describes an approach to building a system that identifies fruits and vegetables in the retail market using images captured with a video camera attached to the system. The system helps customers label desired fruits and vegetables with a price according to their weight. Its purpose is to minimize the number of human-computer interactions, speed up the identification process and improve the usability of the graphical user interface compared to existing manual systems. The hardware consists of a Raspberry Pi, a camera, a display, a load cell and a case. To classify an object, different convolutional neural networks have been tested and retrained. To test usability, a heuristic evaluation was performed with several users, concluding that the implemented system is more user-friendly than existing systems.

Place, publisher, year, edition, pages
Los Alamitos: IEEE Computer Society, 2018
Keywords
Fruit and Vegetable Identification, Computer Vision, Graphical User Interface, Usability
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-38506 (URN); 10.1109/SITIS.2018.00013 (DOI); 978-1-5386-9385-8 (ISBN); 978-1-5386-9386-5 (ISBN)
Conference
The 14th International Conference on Signal Image Technology & Internet based Systems, Las Palmas de Gran Canaria, Spain, 26-29 November, 2018
Funder
Swedish Research Council; Knowledge Foundation; Vinnova
Available from: 2018-12-06. Created: 2018-12-06. Last updated: 2019-05-16. Bibliographically approved.
Projects
Bio-distance, Biometrics at a distance [2009-07215_VR]; Halmstad University
Facial detection and recognition resilient to physical image deformations [2012-04313_VR]; Halmstad University
Ocular biometrics in unconstrained sensing environments [2016-03497_VR]; Halmstad University
Continuous Multimodal Biometrics for Vehicles & Surveillance [2018-00472_Vinnova]; Halmstad University
Human identity and understanding of person behavior using smartphone devices [2018-04347_Vinnova]; Halmstad University
Identifiers
ORCID iD: orcid.org/0000-0002-1400-346X