hh.se Publications
Alonso-Fernandez, Fernando (ORCID: orcid.org/0000-0002-1400-346X)
Publications (10 of 138)
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades Rubio, J. M., Tiwari, P. & Bigun, J. (2025). Deep network pruning: A comparative study on CNNs in face recognition. Pattern Recognition Letters, 189, 221-228
Deep network pruning: A comparative study on CNNs in face recognition
2025 (English) In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 189, p. 221-228. Article in journal (Refereed), Published
Abstract [en]

The widespread use of mobile devices for all kinds of transactions makes reliable, real-time identity authentication necessary, leading to the adoption of face recognition (FR) via the cameras embedded in such devices. Progress in deep Convolutional Neural Networks (CNNs) has provided substantial advances in FR. Nonetheless, the size of state-of-the-art architectures is unsuitable for mobile deployment, since they often encompass hundreds of megabytes and millions of parameters. We address this by studying methods for deep network compression applied to FR. In particular, we apply network pruning based on Taylor scores, where less important filters are removed iteratively. The method is tested on three networks based on the small SqueezeNet (1.24M parameters) and the popular MobileNetv2 (3.5M) and ResNet50 (23.5M) architectures. These have been selected to showcase the method on CNNs with different complexities and sizes. We observe that a substantial percentage of filters can be removed with minimal performance loss. Also, filters with the largest number of output channels tend to be removed first, suggesting that high-dimensional spaces within popular CNNs are over-dimensioned. The models of this paper are available at https://github.com/HalmstadUniversityBiometrics/CNN-pruning-for-face-recognition. © 2025.
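As an illustration of the criterion the abstract names, the sketch below ranks the filters of a single convolutional layer by the first-order Taylor score of Molchanov et al.: |activation × gradient|, averaged over batch and spatial positions. This is a minimal sketch; the toy layer, data, and loss are placeholders, not the paper's code or models.

```python
import torch
import torch.nn as nn

def taylor_scores(activations: torch.Tensor) -> torch.Tensor:
    """First-order Taylor importance per output filter:
    |mean over batch/space of (activation * gradient)|."""
    g = activations.grad  # available because we call retain_grad() below
    return (activations * g).mean(dim=(0, 2, 3)).abs()

torch.manual_seed(0)
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)   # toy layer
x = torch.randn(4, 3, 32, 32)                      # dummy batch
a = conv(x)
a.retain_grad()                                    # keep grad of this intermediate tensor
loss = a.pow(2).mean()                             # stand-in for the training loss
loss.backward()

scores = taylor_scores(a)
k = int(0.4 * len(scores))                         # e.g. mark the 40% least important
prune_idx = scores.argsort()[:k]
print("filters ranked least -> most important:", scores.argsort().tolist())
print("candidates for removal:", sorted(prune_idx.tolist()))
```

In the iterative scheme the abstract describes, the lowest-scoring filters would be removed and the network fine-tuned before the next pruning round.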

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2025
Keywords
Convolutional Neural Networks, Deep learning, Face recognition, Mobile biometrics, Network pruning, Taylor expansion
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:hh:diva-55571 (URN)
10.1016/j.patrec.2025.01.023 (DOI)
2-s2.0-85217214565 (Scopus ID)
Funder
Vinnova, PID2022-136779OB-C32
Swedish Research Council
European Commission
Note

This work was partly done while F. A.-F. was a visiting researcher at the University of the Balearic Islands. F. A.-F., K. H.-D., and J. B. thank the Swedish Research Council (VR) and the Swedish Innovation Agency (VINNOVA) for funding their research. This work is part of the Project PID2022-136779OB-C32 (PLEISAR) funded by MICIU/AEI/10.13039/501100011033/ and FEDER, EU.

Available from: 2025-02-28. Created: 2025-02-28. Last updated: 2025-02-28. Bibliographically approved
Alonso-Fernandez, F., Hernandez-Diaz, K., Tiwari, P. & Bigun, J. (2024). Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition. In: Proceedings - 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024. Paper presented at 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, Rome, Italy, December 2-5, 2024. IEEE
Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition
2024 (English) In: Proceedings - 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, IEEE, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

We apply pre-trained architectures, originally developed for the ImageNet Large Scale Visual Recognition Challenge, to periocular recognition. These architectures have demonstrated significant success in various computer vision tasks beyond the ones for which they were designed. This work builds on our previous study using off-the-shelf Convolutional Neural Networks (CNNs) and extends it to include the more recently proposed Vision Transformers (ViTs). Despite being trained for generic object classification, middle-layer features from CNNs and ViTs are a suitable way to recognize individuals based on periocular images. We also demonstrate that CNNs and ViTs are highly complementary, since their combination results in boosted accuracy. In addition, we show that a small portion of these pre-trained models can achieve good accuracy, resulting in thinner models with fewer parameters, suitable for resource-limited environments such as mobile devices. This efficiency improves further if traditional handcrafted features are added as well. ©2024 IEEE.
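A minimal sketch of the fusion idea, assuming torchvision backbones and plain mean fusion of cosine similarities; the actual layer choices and fusion rules explored in the paper differ. weights=None keeps the example self-contained, whereas pretrained ImageNet weights would be used in practice.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, vit_b_16

cnn = resnet50(weights=None).eval()   # use pretrained ImageNet weights in practice
vit = vit_b_16(weights=None).eval()

cnn_trunk = torch.nn.Sequential(*list(cnn.children())[:-1])  # up to global pooling

_enc = {}
vit.encoder.register_forward_hook(lambda m, i, o: _enc.update(y=o))

def cnn_embed(x):
    return F.normalize(cnn_trunk(x).flatten(1), dim=1)  # 2048-d descriptor

def vit_embed(x):
    vit(x)                                       # hook captures the encoder output
    return F.normalize(_enc["y"][:, 0], dim=1)   # class-token representation

@torch.no_grad()
def fused_score(img1, img2):
    s_cnn = (cnn_embed(img1) * cnn_embed(img2)).sum(1)  # cosine similarity
    s_vit = (vit_embed(img1) * vit_embed(img2)).sum(1)
    return 0.5 * (s_cnn + s_vit)  # simple score-level mean fusion

a, b = torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)
print("fused verification score:", fused_score(a, b).item())
```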

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Workshop on Information Forensics and Security, ISSN 2157-4766, E-ISSN 2157-4774
Keywords
Periocular recognition, deep representation, biometrics, transfer learning, one-shot learning, Convolutional Neural Network, Vision Transformers
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-54713 (URN)
10.1109/WIFS61860.2024.10810712 (DOI)
2-s2.0-85215518296 (Scopus ID)
979-8-3503-6442-2 (ISBN)
979-8-3503-6443-9 (ISBN)
Conference
16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, Rome, Italy, December 2-5, 2024
Funder
Swedish Research Council
Vinnova
Available from: 2024-10-06. Created: 2024-10-06. Last updated: 2025-02-05. Bibliographically approved
Nguyen, K., Proença, H. & Alonso-Fernandez, F. (2024). Deep Learning for Iris Recognition: A Survey. ACM Computing Surveys, 56(9), Article ID 223.
Deep Learning for Iris Recognition: A Survey
2024 (English) In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 56, no 9, article id 223. Article in journal (Refereed), Published
Abstract [en]

In this survey, we provide a comprehensive review of more than 200 articles, technical reports, and GitHub repositories published over the last 10 years on the recent developments of deep learning techniques for iris recognition, covering broad topics on algorithm designs, open-source tools, open challenges, and emerging research. First, we conduct a comprehensive analysis of deep learning techniques developed for two main sub-tasks in iris biometrics: segmentation and recognition. Second, we focus on deep learning techniques for the robustness of iris recognition systems against presentation attacks and via human-machine pairing. Third, we delve into deep learning techniques for forensic applications, especially post-mortem iris recognition. Fourth, we review open-source resources and tools in deep learning techniques for iris recognition. Finally, we highlight the technical challenges, emerging research trends, and outlook for the future of deep learning in iris recognition. © 2024 Copyright held by the owner/author(s).

Place, publisher, year, edition, pages
New York, NY: Association for Computing Machinery (ACM), 2024
Keywords
deep learning, Iris recognition, neural networks
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-53497 (URN)
10.1145/3651306 (DOI)
2-s2.0-85193919968 (Scopus ID)
Projects
MIDAS
DIFFUSE
Funder
Vinnova
Swedish Research Council, 2021-05110
Note

Funding: The work of Hugo Proença was funded by FCT/MEC through national funds and co-funded by FEDER - PT2020 partnership agreement under the projects UIDB/50008/2020, POCI-01-0247-FEDER-033395. Author Alonso-Fernandez thanks the Swedish Innovation Agency VINNOVA (projects MIDAS and DIFFUSE) and the Swedish Research Council (project 2021-05110) for funding his research.

Available from: 2024-06-07. Created: 2024-06-07. Last updated: 2024-06-07. Bibliographically approved
Alonso-Fernandez, F., Bigun, J., Fierrez, J., Damer, N., Proenca, H. & Ross, A. (2024). Periocular Biometrics: A Modality for Unconstrained Scenarios. Computer, 57(6), 40-49
Periocular Biometrics: A Modality for Unconstrained Scenarios
2024 (English) In: Computer, ISSN 0018-9162, E-ISSN 1558-0814, Vol. 57, no 6, p. 40-49. Article in journal (Refereed), Published
Abstract [en]

This article discusses the state of the art in periocular biometrics, presenting an overall framework encompassing the field's most significant research aspects, which include ocular definition, acquisition, and detection; identity recognition; and ocular soft-biometric analysis. © 1970-2012 IEEE.

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE Computer Society, 2024
Keywords
Biometric analysis, Identity recognition, Periocular, Soft biometrics, State of the art
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-54353 (URN)
10.1109/MC.2023.3298095 (DOI)
001240114700006 (ISI)
2-s2.0-85172810109 (Scopus ID)
Available from: 2024-08-01. Created: 2024-08-01. Last updated: 2024-08-01. Bibliographically approved
Butt, T. H., Tiwari, P. & Alonso-Fernandez, F. (2024). Predicting Overtakes in Trucks Using CAN Data. In: Florian Westphal; Einav Peretz-Andersson; Maria Riveiro; Kerstin Bach; Fredrik Heintz (Ed.), 14th Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden. Paper presented at 14th Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden (pp. 160-167). Linköping: Linköping University Electronic Press, 208
Predicting Overtakes in Trucks Using CAN Data
2024 (English) In: 14th Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden / [ed] Florian Westphal; Einav Peretz-Andersson; Maria Riveiro; Kerstin Bach; Fredrik Heintz, Linköping: Linköping University Electronic Press, 2024, Vol. 208, p. 160-167. Conference paper, Published paper (Refereed)
Abstract [en]

Safe overtakes in trucks are crucial to prevent accidents, reduce congestion, and ensure efficient traffic flow, making early prediction essential for timely and informed driving decisions. Accordingly, we investigate the detection of truck overtakes from CAN data. Three classifiers, Artificial Neural Networks (ANN), Random Forest, and Support Vector Machines (SVM), are employed for the task. Our analysis covers up to 10 seconds before the overtaking event, using an overlapping sliding window of 1 second to extract CAN features. We observe that the prediction scores of the overtake class tend to increase as we approach the overtake trigger, while the no-overtake class remains stable or oscillates depending on the classifier. Thus, the best accuracy is achieved when approaching the trigger, making early overtake prediction challenging. The classifiers show good accuracy in classifying overtakes (Recall/TPR ≥ 93%), but accuracy is suboptimal in classifying no-overtakes (TNR typically 80-90% and below 60% for one SVM variant). We further combine two classifiers (Random Forest and linear SVM) by averaging their output scores. The fusion is observed to improve no-overtake classification (TNR ≥ 92%) at the expense of reducing overtake accuracy (TPR). However, the latter is kept above 91% near the overtake trigger. Therefore, the fusion balances TPR and TNR, providing more consistent performance than the individual classifiers.
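A minimal sketch of the fusion step the abstract describes: averaging the output scores of a Random Forest and a linear SVM over sliding-window features. The synthetic signal, window statistics, and window length below stand in for the real CAN channels and are not the paper's data or feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def window_features(signal, win=10, step=1):
    """Sliding windows -> simple per-window statistics (mean, std, range)."""
    idx = range(0, len(signal) - win + 1, step)
    return np.array([[signal[i:i+win].mean(), signal[i:i+win].std(),
                      signal[i:i+win].max() - signal[i:i+win].min()]
                     for i in idx])

# Synthetic stand-in for a CAN channel: class 1 drifts upward before "overtake".
X = np.vstack([window_features(rng.normal(c, 1.0, 200) + c * np.linspace(0, 2, 200))
               for c in (0, 1)])
y = np.repeat([0, 1], len(X) // 2)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
svm = SVC(kernel="linear", probability=True, random_state=0).fit(Xtr, ytr)

# Score-level fusion: average the two class-1 probabilities.
fused = 0.5 * (rf.predict_proba(Xte)[:, 1] + svm.predict_proba(Xte)[:, 1])
pred = (fused >= 0.5).astype(int)
tpr = (pred[yte == 1] == 1).mean()
tnr = (pred[yte == 0] == 0).mean()
print(f"fused TPR={tpr:.2f}  TNR={tnr:.2f}")
```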

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2024
Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 208
Keywords
Machine Learning, CAN BUS data, Overtakes
National Category
Transport Systems and Logistics
Identifiers
urn:nbn:se:hh:diva-55261 (URN)
10.3384/ecp208018 (DOI)
978-91-8075-709-6 (ISBN)
Conference
14th Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden
Available from: 2025-01-17. Created: 2025-01-17. Last updated: 2025-01-17. Bibliographically approved
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades Rubio, J. M. & Bigun, J. (2024). SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning. In: Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J. (Ed.), Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023. Paper presented at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023 (pp. 349-361). Cham: Springer, 14335
SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning
2024 (English) In: Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023 / [ed] Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J., Cham: Springer, 2024, Vol. 14335, p. 349-361. Conference paper, Published paper (Refereed)
Abstract [en]

The widespread use of mobile devices for various digital services has created a need for reliable and real-time person authentication. In this context, facial recognition technologies have emerged as a dependable method for verifying users due to the prevalence of cameras in mobile devices and their integration into everyday applications. The rapid advancement of deep Convolutional Neural Networks (CNNs) has led to numerous face verification architectures. However, these models are often large and impractical for mobile applications, reaching sizes of hundreds of megabytes with millions of parameters. We address this issue by developing SqueezerFaceNet, a light face recognition network with fewer than 1M parameters. This is achieved by applying a network pruning method based on Taylor scores, where filters with small importance scores are removed iteratively. Starting from an already small network (of 1.24M parameters) based on SqueezeNet, we show that it can be further reduced (by up to 40%) without an appreciable loss in performance. To the best of our knowledge, we are the first to evaluate network pruning methods for the task of face recognition. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
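A minimal sketch of what one pruning iteration does structurally: dropping the least important output filters of a convolution and the matching input channels of the next layer. The random scores below stand in for the Taylor scores; real networks additionally require handling batch norm, skip connections, and fine-tuning between iterations.

```python
import torch
import torch.nn as nn

def prune_conv_pair(conv1, conv2, keep_idx):
    """Rebuild conv1 with only keep_idx output filters, and conv2 with the
    matching input channels (plain sequential convs; no BN/skip handling)."""
    new1 = nn.Conv2d(conv1.in_channels, len(keep_idx),
                     conv1.kernel_size, conv1.stride, conv1.padding)
    new1.weight.data = conv1.weight.data[keep_idx].clone()
    new1.bias.data = conv1.bias.data[keep_idx].clone()
    new2 = nn.Conv2d(len(keep_idx), conv2.out_channels,
                     conv2.kernel_size, conv2.stride, conv2.padding)
    new2.weight.data = conv2.weight.data[:, keep_idx].clone()
    new2.bias.data = conv2.bias.data.clone()
    return new1, new2

torch.manual_seed(0)
c1, c2 = nn.Conv2d(3, 16, 3, padding=1), nn.Conv2d(16, 32, 3, padding=1)
scores = torch.rand(16)                      # placeholder importance scores
keep = scores.argsort(descending=True)[:12]  # drop the 4 weakest filters
c1, c2 = prune_conv_pair(c1, c2, keep)

x = torch.randn(1, 3, 64, 64)
print(c2(c1(x)).shape)  # torch.Size([1, 32, 64, 64]) with a thinner mid layer
```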

Place, publisher, year, edition, pages
Cham: Springer, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14335
Keywords
Face recognition, Mobile Biometrics, CNN pruning, Taylor scores
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-51299 (URN)
10.1007/978-3-031-49552-6_30 (DOI)
2-s2.0-85180788350 (Scopus ID)
978-3-031-49551-9 (ISBN)
978-3-031-49552-6 (ISBN)
Conference
VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023
Funder
Swedish Research Council
Vinnova
Note

Funding: F. A.-F., K. H.-D., and J. B. thank the Swedish Research Council (VR) and the Swedish Innovation Agency (VINNOVA) for funding their research. Author J. M. B. thanks the project EXPLAINING - "Project EXPLainable Artificial INtelligence systems for health and well-beING", under Spanish national projects funding (PID2019-104829RA-I00/AEI/10.13039/501100011033).

Available from: 2023-07-20. Created: 2023-07-20. Last updated: 2024-06-17. Bibliographically approved
Ning, X., Jiang, L., Li, W., Yu, Z., Xie, J., Li, L., . . . Alonso-Fernandez, F. (2024). Swin-MGNet: Swin Transformer based Multi-view Grouping Network for 3D Object Recognition. IEEE Transactions on Artificial Intelligence, 1-12
Swin-MGNet: Swin Transformer based Multi-view Grouping Network for 3D Object Recognition
2024 (English) In: IEEE Transactions on Artificial Intelligence, ISSN 2691-4581, p. 1-12. Article in journal (Refereed), Epub ahead of print
Abstract [en]

Recent developments in Swin Transformer have shown its great potential in various computer vision tasks, including image classification, semantic segmentation, and object detection. However, it is challenging to achieve the desired performance by directly employing the Swin Transformer in multi-view 3D object recognition, since the Swin Transformer independently extracts the characteristics of each view and relies heavily on a subsequent fusion strategy to unify the multi-view information. This leads to insufficient extraction of the interdependencies between the multi-view images. To this end, we propose an aggregation strategy integrated into the Swin Transformer to reinforce the connections between internal features across multiple views, thus leading to a complete interpretation of isolated features extracted by the Swin Transformer. Specifically, we utilize the Swin Transformer to learn view-level feature representations from multi-view images and then calculate their view discrimination scores. The scores are employed to assign the view-level features to different groups. Finally, a grouping and fusion network is proposed to aggregate the features from the view and group levels. The experimental results indicate that our method attains state-of-the-art performance compared to prior approaches in multi-view 3D object recognition tasks. The source code is available at https://github.com/Qishaohua94/DEST. ©2020 IEEE.
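A minimal sketch of the grouping-and-fusion idea in the spirit of view-group pooling: per-view features are scored, binned into groups by score, and the group descriptors are fused. The linear scoring head, group count, and weighting below are illustrative stand-ins for the paper's actual design.

```python
import torch
import torch.nn as nn

class GroupFusion(nn.Module):
    def __init__(self, dim=256, n_groups=4):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        self.n_groups = n_groups

    def forward(self, views):  # views: (B, V, dim) per-view features
        s = self.score(views).squeeze(-1)                     # (B, V) scores in (0, 1)
        bins = (s * self.n_groups).long().clamp(max=self.n_groups - 1)
        groups = []
        for g in range(self.n_groups):
            w = (bins == g).float().unsqueeze(-1) * s.unsqueeze(-1)  # score-weighted mask
            groups.append((views * w).sum(1) / w.sum(1).clamp(min=1e-6))
        gdesc = torch.stack(groups, dim=1)                    # (B, G, dim) group descriptors
        gmass = torch.stack([((bins == g).float() * s).sum(1)
                             for g in range(self.n_groups)], dim=1)  # (B, G) score mass
        w_g = (gmass / gmass.sum(1, keepdim=True).clamp(min=1e-6)).unsqueeze(-1)
        return (gdesc * w_g).sum(1)                           # (B, dim) fused descriptor

views = torch.randn(2, 12, 256)        # e.g. 12 rendered views per object
print(GroupFusion()(views).shape)      # torch.Size([2, 256])
```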

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2024
Keywords
3D Object Classification, 3D Object Retrieval, Feature Fusion, Grouping Mechanism, Multi-view learning, Swin Transformer
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:hh:diva-54973 (URN)
10.1109/TAI.2024.3492163 (DOI)
2-s2.0-85208686299 (Scopus ID)
Funder
Vinnova
Swedish Research Council
Note

This work is supported by the National Natural Science Foundation of China No. 62373343, Beijing Natural Science Foundation No. L233036, the Swedish Research Council (VR), and the Swedish Innovation Agency (VINNOVA).

Available from: 2024-11-26. Created: 2024-11-26. Last updated: 2025-02-07. Bibliographically approved
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades, J. M., Tiwari, P. & Bigun, J. (2023). An Explainable Model-Agnostic Algorithm for CNN-Based Biometrics Verification. In: 2023 IEEE International Workshop on Information Forensics and Security (WIFS). Paper presented at 2023 IEEE International Workshop on Information Forensics and Security, WIFS 2023, Nürnberg, Germany, 4-7 December, 2023. Institute of Electrical and Electronics Engineers (IEEE)
An Explainable Model-Agnostic Algorithm for CNN-Based Biometrics Verification
2023 (English) In: 2023 IEEE International Workshop on Information Forensics and Security (WIFS), Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate under a biometric verification setting. LIME was initially proposed for networks evaluated on the same output classes used for training, and it employs the softmax probability to determine which regions of the image contribute the most to classification. However, in a verification setting, the classes to be recognized have not been seen during training. In addition, instead of using the softmax output, face descriptors are usually obtained from a layer before the classification layer. The model is adapted to achieve explainability via cosine similarity between feature vectors of perturbed versions of the input image. The method is showcased for face biometrics with two CNN models based on MobileNetv2 and ResNet50. © 2023 IEEE.
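A minimal sketch of the core adaptation: weight perturbed probes by the cosine similarity between their embedding and the reference embedding, instead of a softmax probability. The grid occlusion and the toy embedder below stand in for LIME's superpixel perturbation and the paper's MobileNetv2/ResNet50 models.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
embed = torch.nn.Sequential(               # toy stand-in for a face CNN
    torch.nn.Conv2d(3, 8, 3, 2, 1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten())

@torch.no_grad()
def cos(a, b):
    return F.cosine_similarity(embed(a), embed(b)).item()

probe, reference = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
base = cos(probe, reference)

# Occlude each cell of a 4x4 grid; the similarity drop is the cell's relevance.
heat = torch.zeros(4, 4)
for i in range(4):
    for j in range(4):
        pert = probe.clone()
        pert[:, :, i*16:(i+1)*16, j*16:(j+1)*16] = 0
        heat[i, j] = base - cos(pert, reference)
print(heat)  # large values = regions driving the match
```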

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Biometrics, Explainable AI, Face recognition, XAI
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:hh:diva-52721 (URN)
10.1109/WIFS58808.2023.10374866 (DOI)
2-s2.0-85183463933 (Scopus ID)
9798350324914 (ISBN)
Conference
2023 IEEE International Workshop on Information Forensics and Security, WIFS 2023, Nürnberg, Germany, 4-7 December, 2023
Projects
EXPLAINING - "Project EXPLainable Artificial INtelligence systems for health and well-beING"
Funder
Swedish Research Council
Vinnova
Available from: 2024-02-16. Created: 2024-02-16. Last updated: 2025-02-07. Bibliographically approved
Arvidsson, M., Sawirot, S., Englund, C., Alonso-Fernandez, F., Torstensson, M. & Duran, B. (2023). Drone navigation and license plate detection for vehicle location in indoor spaces. In: Yanio Hernández Heredia; Vladimir Milián Núñez; José Ruiz Shulcloper (Ed.), Progress in Artificial Intelligence and Pattern Recognition. Paper presented at 8th International Congress on Artificial Intelligence and Pattern Recognition, IWAIPR 2023, Varadero, Cuba, September 27–29, 2023 (pp. 362-374). Heidelberg: Springer
Drone navigation and license plate detection for vehicle location in indoor spaces
2023 (English) In: Progress in Artificial Intelligence and Pattern Recognition / [ed] Yanio Hernández Heredia; Vladimir Milián Núñez; José Ruiz Shulcloper, Heidelberg: Springer, 2023, p. 362-374. Conference paper, Published paper (Refereed)
Abstract [en]

Millions of vehicles are transported every year, tightly parked in vessels. To reduce the risks of associated safety issues like fires, knowing the location of vehicles is essential, since different vehicles, e.g. electric cars, may need different mitigation measures. This work aims to create a solution based on a nano-drone that navigates across rows of parked vehicles and detects their license plates. We do so via a wall-following algorithm and a CNN trained to detect license plates. All computations are done in real-time on the drone, which just sends the position and detected images, allowing the creation of a 2D map with the position of the plates. Our solution is capable of reading all plates across eight test cases (with several rows of plates, different drone speeds, or low light) by aggregating measurements across several drone journeys. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
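A minimal sketch of the aggregation step, assuming each detection is a (position, plate string) pair reported by the drone: detections from several journeys are merged into one 2D map by clustering nearby positions and taking a majority vote over the reads. The coordinates, reads, and the 0.5 m merge radius are invented for illustration, not the paper's data.

```python
from collections import Counter

detections = [  # (x, y) in metres, plate string read by the CNN
    (1.0, 2.1, "ABC123"), (1.1, 2.0, "ABC123"), (0.9, 2.2, "A8C123"),  # one noisy read
    (4.0, 2.0, "XYZ789"), (4.1, 2.1, "XYZ789"),
]

def merge(dets, radius=0.5):
    clusters = []  # each: [cx, cy, Counter of plate reads, n]
    for x, y, plate in dets:
        for c in clusters:
            if (c[0] - x) ** 2 + (c[1] - y) ** 2 <= radius ** 2:
                c[0] = (c[0] * c[3] + x) / (c[3] + 1)  # running mean position
                c[1] = (c[1] * c[3] + y) / (c[3] + 1)
                c[2][plate] += 1
                c[3] += 1
                break
        else:
            clusters.append([x, y, Counter({plate: 1}), 1])
    # keep the majority read per position cluster
    return [(round(c[0], 2), round(c[1], 2), c[2].most_common(1)[0][0]) for c in clusters]

print(merge(detections))  # [(1.0, 2.1, 'ABC123'), (4.05, 2.05, 'XYZ789')]
```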

Place, publisher, year, edition, pages
Heidelberg: Springer, 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14335
Keywords
Nano-drone, License plate detection, Vehicle location, UAV
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-51292 (URN)
10.1007/978-3-031-49552-6_31 (DOI)
2-s2.0-85180752157 (Scopus ID)
978-3-031-49551-9 (ISBN)
978-3-031-49552-6 (ISBN)
Conference
8th International Congress on Artificial Intelligence and Pattern Recognition, IWAIPR 2023, Varadero, Cuba, September 27–29, 2023
Funder
Vinnova
Swedish Research Council
Available from: 2023-07-19. Created: 2023-07-19. Last updated: 2024-04-04. Bibliographically approved
Kolf, J. N., Alonso-Fernandez, F., Hernandez-Diaz, K., Bigun, J. & Yang, B. (2023). EFaR 2023: Efficient Face Recognition Competition. In: 2023 IEEE International Joint Conference on Biometrics, IJCB 2023. Paper presented at IEEE International Joint Conference on Biometrics (IJCB 2023), Ljubljana, Slovenia, 25-28 September 2023. IEEE
EFaR 2023: Efficient Face Recognition Competition
2023 (English) In: 2023 IEEE International Joint Conference on Biometrics, IJCB 2023, IEEE, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a summary of the Efficient Face Recognition Competition (EFaR) held at the 2023 International Joint Conference on Biometrics (IJCB 2023). The competition received 17 submissions from 6 different teams. To drive further development of efficient face recognition models, the submitted solutions were ranked based on a weighted score of the achieved verification accuracies on a diverse set of benchmarks, as well as the deployability given by the number of floating-point operations and the model size. The evaluation of submissions is extended to bias, cross-quality, and large-scale recognition benchmarks. Overall, the paper gives an overview of the achieved performance values of the submitted solutions as well as a diverse set of baselines. The submitted solutions use small, efficient network architectures to reduce the computational cost; some solutions also apply model quantization. An outlook on possible techniques that are underrepresented in current solutions is given as well. © 2023 IEEE.
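For intuition, a sketch of a deployability-weighted ranking of the kind described: accuracy is rewarded while FLOPs and model size are penalised. The weights and normalisation here are illustrative, not the official EFaR scoring formula.

```python
def rank_score(acc, mflops, size_mb, w_acc=0.7, w_eff=0.3,
               max_mflops=1000.0, max_mb=50.0):
    """Higher is better: accuracy rewarded, compute and size penalised."""
    eff = 1.0 - 0.5 * (min(mflops / max_mflops, 1) + min(size_mb / max_mb, 1))
    return w_acc * acc + w_eff * eff

# Hypothetical entries: (verification accuracy, MFLOPs, model size in MB).
entries = {"teamA": (0.95, 900, 40), "teamB": (0.92, 150, 5)}
for team, (acc, fl, mb) in sorted(entries.items(),
                                  key=lambda kv: -rank_score(*kv[1])):
    print(team, round(rank_score(acc, fl, mb), 3))  # teamB wins on deployability
```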

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Conference on Biometrics, Theory, Applications and Systems, ISSN 2474-9680, E-ISSN 2474-9699
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-52967 (URN)
10.1109/IJCB57857.2023.10448917 (DOI)
001180818700054 (ISI)
2-s2.0-85171755032 (Scopus ID)
Conference
IEEE International Joint Conference on Biometrics (IJCB 2023), Ljubljana, Slovenia, 25-28 September 2023
Funder
Swedish Research Council
Vinnova
Note

Acknowledgment: This research work has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. This work has been partially funded by the German Federal Ministry of Education and Research (BMBF) through the Software Campus Project.

Available from: 2024-03-26. Created: 2024-03-26. Last updated: 2024-06-28. Bibliographically approved
Projects
Bio-distance, Biometrics at a distance [2009-07215_VR]; Halmstad University
Facial detection and recognition resilient to physical image deformations [2012-04313_VR]; Halmstad University
Ocular biometrics in unconstrained sensing environments [2016-03497_VR]; Halmstad University
Publications
Hernandez-Diaz, K., Alonso-Fernandez, F. & Bigun, J. (2023). One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations. IEEE Access, 11, 100396-100413
Alonso-Fernandez, F., Hernandez-Diaz, K., Ramis, S., Perales, F. J. & Bigun, J. (2021). Facial Masks and Soft-Biometrics: Leveraging Face Recognition CNNs for Age and Gender Prediction on Mobile Ocular Images. IET Biometrics, 10(5), 562-580
Gonzalez-Sosa, E., Vera-Rodriguez, R., Fierrez, J., Alonso-Fernandez, F. & Patel, V. M. (2019). Exploring Body Texture From mmW Images for Person Recognition. IEEE Transactions on Biometrics, Behavior, and Identity Science, 1(2), 139-151
Continuous Multimodal Biometrics for Vehicles & Surveillance [2018-00472_Vinnova]; Halmstad University
Human identity and understanding of person behavior using smartphone devices [2018-04347_Vinnova]; Halmstad University
Facial Analysis in the Era of Mobile Devices and Face Masks [2021-05110_VR]; Halmstad University
Publications
Alonso-Fernandez, F., Hernandez-Diaz, K., Tiwari, P. & Bigun, J. (2024). Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition. In: Proceedings - 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024. Paper presented at 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, Rome, Italy, December 2-5, 2024. IEEE
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades Rubio, J. M. & Bigun, J. (2024). SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning. In: Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J. (Ed.), Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023. Paper presented at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023 (pp. 349-361). Cham: Springer, 14335
Busch, C., Deravi, F., Frings, D., Alonso-Fernandez, F. & Bigun, J. (2023). Facilitating free travel in the Schengen area—A position paper by the European Association for Biometrics. IET Biometrics, 12(2), 112-128
Zell, O., Påsson, J., Hernandez-Diaz, K., Alonso-Fernandez, F. & Nilsson, F. (2023). Image-Based Fire Detection in Industrial Environments with YOLOv4. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods ICPRAM. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 379-386). Setúbal: SciTePress, 1
Baaz, A., Yonan, Y., Hernandez-Diaz, K., Alonso-Fernandez, F. & Nilsson, F. (2023). Synthetic Data for Object Classification in Industrial Applications. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods ICPRAM. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 387-394). SciTePress, 1
Karlsson, J., Strand, F., Bigun, J., Alonso-Fernandez, F., Hernandez-Diaz, K. & Nilsson, F. (2023). Visual Detection of Personal Protective Equipment and Safety Gear on Industry Workers. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods: February 22-24, 2023, in Lisbon, Portugal. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 395-402). SciTePress, 1
Hedman, P., Skepetzis, V., Hernandez-Diaz, K., Bigun, J. & Alonso-Fernandez, F. (2022). On the effect of selfie beautification filters on face detection and recognition. Pattern Recognition Letters, 163, 104-111
DIFFUSE: Disentanglement of Features For Utilization in Systematic Evaluation [2021-05038_Vinnova]; RISE - Research Institutes of Sweden (2017-2019) (Closed down 2019-12-31)
Big Data-Powered End User Function Development (BIG FUN) [2021-05045_Vinnova]; Halmstad University
Publications
Luo, Y., Gkouskos, D., Russo, N. & Wang, M. (2024). Navigating from data-driven design to designing with ML: A case study of truck HMI system design. In: Proceedings of the Design Society. Paper presented at 2024 International Design Society Conference (Design 2024), Cavtat, Dubrovnik, Croatia, 20-23 May, 2024 (pp. 2119-2128). Cambridge: Cambridge University Press, 4
AI-Powered Crime Scene Analysis [2022-00919_Vinnova]; Halmstad University