Publications (10 of 170)
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades Rubio, J. M., Tiwari, P. & Bigun, J. (2025). Deep network pruning: A comparative study on CNNs in face recognition. Pattern Recognition Letters, 189, 221-228
2025 (English) In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 189, p. 221-228. Article in journal (Refereed), Published
Abstract [en]

The widespread use of mobile devices for all kinds of transactions makes reliable, real-time identity authentication necessary, leading to the adoption of face recognition (FR) via the cameras embedded in such devices. Progress in deep Convolutional Neural Networks (CNNs) has brought substantial advances in FR. Nonetheless, the size of state-of-the-art architectures is unsuitable for mobile deployment, since they often encompass hundreds of megabytes and millions of parameters. We address this by studying methods for deep network compression applied to FR. In particular, we apply network pruning based on Taylor scores, where less important filters are removed iteratively. The method is tested on three networks based on the small SqueezeNet (1.24M parameters) and the popular MobileNetv2 (3.5M) and ResNet50 (23.5M) architectures, selected to showcase the method on CNNs of different complexities and sizes. We observe that a substantial percentage of filters can be removed with minimal performance loss. Also, filters with the highest number of output channels tend to be removed first, suggesting that high-dimensional spaces within popular CNNs are over-dimensioned. The models of this paper are available at https://github.com/HalmstadUniversityBiometrics/CNN-pruning-for-face-recognition. © 2025.
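As a rough illustration of the iterative criterion the abstract describes, here is a minimal NumPy sketch of first-order Taylor importance scoring for filters. The array shapes, the 25% pruning fraction, and the function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def taylor_filter_scores(activations, gradients):
    # First-order Taylor importance of each filter: the change in loss when a
    # filter's output is zeroed is approximated by |activation * gradient|,
    # averaged over the batch and spatial dimensions (a common formulation;
    # the paper's exact criterion may differ).
    contrib = activations * gradients              # shape: (batch, filters, h, w)
    return np.abs(contrib).mean(axis=(0, 2, 3))    # one score per filter

def filters_to_prune(scores, fraction):
    # Indices of the lowest-scoring filters: the removal candidates for
    # the current pruning iteration.
    k = int(len(scores) * fraction)
    return np.argsort(scores)[:k]

rng = np.random.default_rng(0)
acts = rng.normal(size=(8, 16, 4, 4))    # toy layer: 8 images, 16 filters
grads = rng.normal(size=(8, 16, 4, 4))   # gradients of the loss w.r.t. acts
scores = taylor_filter_scores(acts, grads)
to_remove = filters_to_prune(scores, fraction=0.25)  # drop 25% of filters
```

In an iterative scheme, the lowest-scoring filters would be removed and the network fine-tuned before re-scoring.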

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2025
Keywords
Convolutional Neural Networks, Deep learning, Face recognition, Mobile biometrics, Network pruning, Taylor expansion
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:hh:diva-55571 (URN)
10.1016/j.patrec.2025.01.023 (DOI)
2-s2.0-85217214565 (Scopus ID)
Funder
Vinnova, PID2022-136779OB-C32; Swedish Research Council; European Commission
Note

This work was partly done while F. A.-F. was a visiting researcher at the University of the Balearic Islands. F. A.-F., K. H.-D., and J. B. thank the Swedish Research Council (VR) and the Swedish Innovation Agency (VINNOVA) for funding their research. This work is part of the Project PID2022-136779OB-C32 (PLEISAR) funded by MICIU/AEI/10.13039/501100011033/ and FEDER, EU.

Available from: 2025-02-28 Created: 2025-02-28 Last updated: 2025-02-28. Bibliographically approved
Alonso-Fernandez, F., Hernandez-Diaz, K., Tiwari, P. & Bigun, J. (2024). Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition. In: Proceedings - 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024. Paper presented at 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, Rome, Italy, December 2-5, 2024 (pp. 1-6). IEEE
2024 (English) In: Proceedings - 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, IEEE, 2024, p. 1-6. Conference paper, Published paper (Refereed)
Abstract [en]

We apply pre-trained architectures, originally developed for the ImageNet Large Scale Visual Recognition Challenge, to periocular recognition. These architectures have demonstrated significant success in various computer vision tasks beyond the ones for which they were designed. This work builds on our previous study using off-the-shelf Convolutional Neural Networks (CNNs) and extends it to include the more recently proposed Vision Transformers (ViTs). Despite being trained for generic object classification, middle-layer features from CNNs and ViTs are a suitable way to recognize individuals based on periocular images. We also demonstrate that CNNs and ViTs are highly complementary, since their combination boosts accuracy. In addition, we show that a small portion of these pre-trained models can achieve good accuracy, resulting in thinner models with fewer parameters, suitable for resource-limited environments such as mobile devices. This efficiency improves further if traditional handcrafted features are added as well. ©2024 IEEE.
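The complementarity claim can be pictured with a toy score-level fusion of the two descriptor types. This is a hedged sketch: the random vectors, dimensions, equal weighting, and function names are assumptions for illustration, not the paper's actual fusion scheme:

```python
import numpy as np

def l2norm(v):
    # Unit-normalize a descriptor so the dot product is cosine similarity.
    return v / (np.linalg.norm(v) + 1e-12)

def combined_score(cnn_a, vit_a, cnn_b, vit_b, w=0.5):
    # Compare two images via both descriptor types, then fuse the two
    # cosine similarities with a weighted sum (score-level fusion).
    s_cnn = float(l2norm(cnn_a) @ l2norm(cnn_b))
    s_vit = float(l2norm(vit_a) @ l2norm(vit_b))
    return w * s_cnn + (1.0 - w) * s_vit

rng = np.random.default_rng(1)
cnn_feat = rng.normal(size=128)   # stand-in for a middle-layer CNN feature
vit_feat = rng.normal(size=96)    # stand-in for a ViT middle-layer feature
same = combined_score(cnn_feat, vit_feat, cnn_feat, vit_feat)  # identical pair
```

An identical pair scores near 1.0; genuine/impostor pairs would be separated by thresholding the fused score.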

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Workshop on Information Forensics and Security, ISSN 2157-4766, E-ISSN 2157-4774
Keywords
Periocular recognition, deep representation, biometrics, transfer learning, one-shot learning, Convolutional Neural Network, Vision Transformers
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-54713 (URN)
10.1109/WIFS61860.2024.10810712 (DOI)
001422478100039 ()
2-s2.0-85215518296 (Scopus ID)
979-8-3503-6442-2 (ISBN)
979-8-3503-6443-9 (ISBN)
Conference
16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, Rome, Italy, December 2-5, 2024
Funder
Swedish Research Council; Vinnova
Available from: 2024-10-06 Created: 2024-10-06 Last updated: 2025-03-20. Bibliographically approved
Alonso-Fernandez, F., Bigun, J., Fierrez, J., Damer, N., Proenca, H. & Ross, A. (2024). Periocular Biometrics: A Modality for Unconstrained Scenarios. Computer, 57(6), 40-49
2024 (English) In: Computer, ISSN 0018-9162, E-ISSN 1558-0814, Vol. 57, no 6, p. 40-49. Article in journal (Refereed), Published
Abstract [en]

This article discusses the state of the art in periocular biometrics, presenting an overall framework encompassing the field's most significant research aspects, which include ocular definition, acquisition, and detection; identity recognition; and ocular soft-biometric analysis. © 1970-2012 IEEE.

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE Computer Society, 2024
Keywords
Biometric analysis, Identity recognition, Periocular, Soft biometrics, State of the art
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-54353 (URN)
10.1109/MC.2023.3298095 (DOI)
001240114700006 ()
2-s2.0-85172810109 (Scopus ID)
Available from: 2024-08-01 Created: 2024-08-01 Last updated: 2024-08-01. Bibliographically approved
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades Rubio, J. M. & Bigun, J. (2024). SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning. In: Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J. (Ed.), Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023. Paper presented at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023 (pp. 349-361). Cham: Springer, 14335
2024 (English) In: Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023. / [ed] Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J., Cham: Springer, 2024, Vol. 14335, p. 349-361. Conference paper, Published paper (Refereed)
Abstract [en]

The widespread use of mobile devices for various digital services has created a need for reliable and real-time person authentication. In this context, facial recognition technologies have emerged as a dependable method for verifying users due to the prevalence of cameras in mobile devices and their integration into everyday applications. The rapid advancement of deep Convolutional Neural Networks (CNNs) has led to numerous face verification architectures. However, these models are often large and impractical for mobile applications, reaching sizes of hundreds of megabytes with millions of parameters. We address this issue by developing SqueezerFaceNet, a light face recognition network with fewer than 1M parameters. This is achieved by applying a network pruning method based on Taylor scores, where filters with small importance scores are removed iteratively. Starting from an already small network (of 1.24M parameters) based on SqueezeNet, we show that it can be further reduced (by up to 40%) without appreciable loss in performance. To the best of our knowledge, we are the first to evaluate network pruning methods for the task of face recognition. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Place, publisher, year, edition, pages
Cham: Springer, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14335
Keywords
Face recognition, Mobile Biometrics, CNN pruning, Taylor scores
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-51299 (URN)
10.1007/978-3-031-49552-6_30 (DOI)
2-s2.0-85180788350 (Scopus ID)
978-3-031-49551-9 (ISBN)
978-3-031-49552-6 (ISBN)
Conference
VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023
Funder
Swedish Research Council; Vinnova
Note

Funding: F. A.-F., K. H.-D., and J. B. thank the Swedish Research Council (VR) and the Swedish Innovation Agency (VINNOVA) for funding their research. Author J. M. B. thanks the project EXPLAINING - "Project EXPLainable Artificial INtelligence systems for health and well-beING", under Spanish national projects funding (PID2019-104829RA-I00/AEI/10.13039/501100011033).

Available from: 2023-07-20 Created: 2023-07-20 Last updated: 2024-06-17. Bibliographically approved
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades, J. M., Tiwari, P. & Bigun, J. (2023). An Explainable Model-Agnostic Algorithm for CNN-Based Biometrics Verification. In: 2023 IEEE International Workshop on Information Forensics and Security (WIFS). Paper presented at 2023 IEEE International Workshop on Information Forensics and Security, WIFS 2023, Nürnberg, Germany, 4-7 December, 2023. Institute of Electrical and Electronics Engineers (IEEE)
2023 (English) In: 2023 IEEE International Workshop on Information Forensics and Security (WIFS), Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate under a biometric verification setting. LIME was initially proposed for networks whose output classes are the same as those used for training, and it employs the softmax probability to determine which regions of the image contribute the most to classification. However, in a verification setting, the classes to be recognized have not been seen during training. In addition, instead of using the softmax output, face descriptors are usually obtained from a layer before the classification layer. The model is adapted to achieve explainability via cosine similarity between feature vectors of perturbed versions of the input image. The method is showcased for face biometrics with two CNN models based on MobileNetv2 and ResNet50. © 2023 IEEE.
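The adaptation can be sketched with an occlusion-style toy. Hedged: `embed` is a hypothetical stand-in for a real face descriptor network, and grid masking is a simplified perturbation; LIME proper fits a local surrogate model over many random perturbations, which this sketch omits:

```python
import numpy as np

def embed(img):
    # Hypothetical stand-in for a CNN face descriptor (a real system would
    # take the output of a layer before the classification layer).
    return np.array([img.mean(), img.std(), img[:8, :8].mean()])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def region_importance(img, ref_vec, grid=4):
    # Zero out each cell of a grid over the image and measure how much the
    # cosine similarity to the reference descriptor drops: the larger the
    # drop, the more that region contributed to the verification decision.
    h, w = img.shape
    base = cosine(embed(img), ref_vec)
    drops = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            pert = img.copy()
            pert[i * h // grid:(i + 1) * h // grid,
                 j * w // grid:(j + 1) * w // grid] = 0.0
            drops[i, j] = base - cosine(embed(pert), ref_vec)
    return drops

rng = np.random.default_rng(2)
probe = rng.random((16, 16))             # toy "probe" image
reference = embed(rng.random((16, 16)))  # descriptor of the enrolled image
heatmap = region_importance(probe, reference)
```

The resulting grid of similarity drops plays the role of the explanation heatmap.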

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Biometrics, Explainable AI, Face recognition, XAI
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:hh:diva-52721 (URN)
10.1109/WIFS58808.2023.10374866 (DOI)
2-s2.0-85183463933 (Scopus ID)
9798350324914 (ISBN)
Conference
2023 IEEE International Workshop on Information Forensics and Security, WIFS 2023, Nürnberg, Germany, 4-7 December, 2023
Projects
EXPLAINING - "Project EXPLainable Artificial INtelligence systems for health and well-beING"
Funder
Swedish Research Council; Vinnova
Available from: 2024-02-16 Created: 2024-02-16 Last updated: 2025-02-07. Bibliographically approved
Kolf, J. N., Alonso-Fernandez, F., Hernandez-Diaz, K., Bigun, J. & Yang, B. (2023). EFaR 2023: Efficient Face Recognition Competition. In: 2023 IEEE International Joint Conference on Biometrics, IJCB 2023. Paper presented at IEEE International Joint Conference on Biometrics (IJCB 2023), Ljubljana, Slovenia, 25-28 September 2023. IEEE
2023 (English) In: 2023 IEEE International Joint Conference on Biometrics, IJCB 2023, IEEE, 2023. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents the summary of the Efficient Face Recognition Competition (EFaR) held at the 2023 International Joint Conference on Biometrics (IJCB 2023). The competition received 17 submissions from 6 different teams. To drive further development of efficient face recognition models, the submitted solutions are ranked based on a weighted score of the achieved verification accuracies on a diverse set of benchmarks, as well as the deployability given by the number of floating-point operations and model size. The evaluation of submissions is extended to bias, cross-quality, and large-scale recognition benchmarks. Overall, the paper gives an overview of the achieved performance values of the submitted solutions as well as a diverse set of baselines. The submitted solutions use small, efficient network architectures to reduce the computational cost; some solutions also apply model quantization. An outlook on possible techniques that are underrepresented in current solutions is given as well. © 2023 IEEE.
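The weighted-score ranking idea can be illustrated with a toy example. Hedged: the weights, the normalisation, the team names, and the field names are invented for illustration; the competition defines its own protocol:

```python
# Toy ranking: combine accuracy with deployability (fewer FLOPs and a
# smaller model are better) into a single weighted score. All numbers
# and weights below are invented.
entries = [
    {"team": "A", "accuracy": 0.97, "gflops": 1.2, "size_mb": 18.0},
    {"team": "B", "accuracy": 0.95, "gflops": 0.3, "size_mb": 4.5},
    {"team": "C", "accuracy": 0.93, "gflops": 0.1, "size_mb": 2.0},
]

def score(e, w_acc=0.6, w_flops=0.2, w_size=0.2):
    max_gflops = max(x["gflops"] for x in entries)
    max_size = max(x["size_mb"] for x in entries)
    # Normalize the two costs to [0, 1] and reward low values, so a cheap
    # model can outrank a slightly more accurate but much heavier one.
    return (w_acc * e["accuracy"]
            + w_flops * (1.0 - e["gflops"] / max_gflops)
            + w_size * (1.0 - e["size_mb"] / max_size))

ranking = sorted(entries, key=score, reverse=True)
```

With these invented weights the lightest model wins despite the lowest raw accuracy, which is the trade-off such a deployability-aware score is designed to expose.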

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE International Conference on Biometrics, Theory, Applications and Systems, ISSN 2474-9680, E-ISSN 2474-9699
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-52967 (URN)
10.1109/IJCB57857.2023.10448917 (DOI)
001180818700054 ()
2-s2.0-85171755032 (Scopus ID)
Conference
IEEE International Joint Conference on Biometrics (IJCB 2023), Ljubljana, Slovenia, 25-28 September 2023
Funder
Swedish Research Council; Vinnova
Note

Acknowledgment: This research work has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. This work has been partially funded by the German Federal Ministry of Education and Research (BMBF) through the Software Campus Project.

Available from: 2024-03-26 Created: 2024-03-26 Last updated: 2024-06-28. Bibliographically approved
Busch, C., Deravi, F., Frings, D., Alonso-Fernandez, F. & Bigun, J. (2023). Facilitating free travel in the Schengen area—A position paper by the European Association for Biometrics. IET Biometrics, 12(2), 112-128
2023 (English) In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 12, no 2, p. 112-128. Article in journal (Refereed), Published
Abstract [en]

Due to migration, terror threats and the viral pandemic, various EU member states have re-established internal border control or even closed their borders. The European Association for Biometrics (EAB), a non-profit organisation, solicited the views of its members on ways in which biometric technologies and services may be used to help re-establish open borders within the Schengen area while at the same time mitigating any adverse effects. From the responses received, this position paper was composed to identify ideas to re-establish free travel between the member states in the Schengen area. The paper covers the contending needs for security, open borders and fundamental rights, as well as legal constraints that any technological solution must consider. A range of specific technologies for direct biometric recognition, alongside complementary measures, are outlined. The interrelated issues of ethical and societal considerations are also highlighted. Provided a holistic approach is adopted, it may be possible to reach a more optimal trade-off with regard to open borders while maintaining a high level of security and protection of fundamental rights. The European Association for Biometrics and its members can play an important role in fostering a shared understanding of security and mobility challenges and their solutions. © 2023 The Authors. IET Biometrics published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.

Place, publisher, year, edition, pages
Oxford: John Wiley & Sons, 2023
Keywords
biometric applications, biometric template protection, biometrics (access control), computer vision, data privacy, image analysis for biometrics, object tracking
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-50367 (URN)
10.1049/bme2.12107 (DOI)
000976420600001 ()
2-s2.0-85153281620 (Scopus ID)
Available from: 2023-04-20 Created: 2023-04-20 Last updated: 2023-12-06. Bibliographically approved
Hernandez-Diaz, K., Alonso-Fernandez, F. & Bigun, J. (2023). One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations. IEEE Access, 11, 100396-100413
2023 (English) In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 100396-100413. Article in journal (Refereed), Published
Abstract [en]

One weakness of machine-learning algorithms is the need to train the models for a new task. This presents a specific challenge for biometric recognition due to the dynamic nature of databases and, in some instances, the reliance on subject collaboration for data collection. In this paper, we investigate the behavior of deep representations in widely used CNN models under extreme data scarcity for One-Shot periocular recognition, a biometric recognition task. We analyze the outputs of CNN layers as identity-representing feature vectors. We examine the impact of Domain Adaptation on the network layers' output for unseen data and evaluate the method's robustness concerning data normalization and generalization of the best-performing layer. Using out-of-the-box CNNs trained for the ImageNet Recognition Challenge together with standard computer vision algorithms, we improved on state-of-the-art results obtained with networks trained on biometric datasets of millions of images and fine-tuned for the target periocular dataset. For example, for the Cross-Eyed dataset, we could reduce the EER by 67% and 79% (from 1.70% and 3.41% to 0.56% and 0.71%) in the Close-World and Open-World protocols, respectively, for the periocular case. We also demonstrate that traditional algorithms like SIFT can outperform CNNs in situations with limited data, or in scenarios where the network has not been trained with the test classes, like the Open-World mode. SIFT alone was able to reduce the EER by 64% and 71.6% (from 1.7% and 3.41% to 0.6% and 0.97%) for Cross-Eyed in the Close-World and Open-World protocols, respectively, and by 4.6% (from 3.94% to 3.76%) in the PolyU database for the Open-World and single biometric case.

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2023
Keywords
Biometrics, Biometrics (access control), Databases, Deep learning, Deep Representation, Face recognition, Feature extraction, Image recognition, Iris recognition, One-Shot Learning, Periocular, Representation learning, Task analysis, Training, Transfer Learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-51749 (URN)
10.1109/ACCESS.2023.3315234 (DOI)
2-s2.0-85171525429 (Scopus ID)
Funder
Swedish Research Council; Vinnova
Available from: 2023-10-19 Created: 2023-10-19 Last updated: 2024-06-17. Bibliographically approved
Karlsson, J., Strand, F., Bigun, J., Alonso-Fernandez, F., Hernandez-Diaz, K. & Nilsson, F. (2023). Visual Detection of Personal Protective Equipment and Safety Gear on Industry Workers. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods: February 22-24, 2023, in Lisbon, Portugal. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 395-402). SciTePress, 1
2023 (English) In: Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods: February 22-24, 2023, in Lisbon, Portugal / [ed] Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred, SciTePress, 2023, Vol. 1, p. 395-402. Conference paper, Published paper (Refereed)
Abstract [en]

Workplace injuries are common in today's society due to a lack of adequately worn safety equipment. A system that only admits appropriately equipped personnel can be created to improve working conditions. The goal is thus to develop a system that will improve workers' safety using a camera that detects the usage of Personal Protective Equipment (PPE). To this end, we collected and labeled appropriate data from several public sources, which have been used to train and evaluate several models based on the popular YOLOv4 object detector. Our focus, driven by a collaborating industrial partner, is to implement our system at an entry control point where workers must present themselves to obtain access to a restricted area. Combined with facial identity recognition, the system would ensure that only authorized people wearing appropriate equipment are granted access. A novelty of this work is that we increase the number of classes to five objects (hardhat, safety vest, safety gloves, safety glasses, and hearing protection), whereas most existing works only focus on one or two classes, usually hardhats or vests. The AI model developed provides good detection accuracy at distances of 3 and 5 meters in the collaborative environment where we aim to operate (mAP of 99% and 89%, respectively). The small size of some objects and the potential occlusion by body parts have been identified as factors detrimental to accuracy, which we have counteracted via data augmentation and cropping of the body before applying PPE detection. © 2023 by SCITEPRESS-Science and Technology Publications, Lda.

Place, publisher, year, edition, pages
SciTePress, 2023
Series
ICPRAM, E-ISSN 2184-4313
Keywords
PPE, PPE Detection, Personal Protective Equipment, Machine Learning, Computer Vision, YOLO
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-48795 (URN)
10.5220/0011693500003411 (DOI)
2-s2.0-85174511525 (Scopus ID)
978-989-758-626-2 (ISBN)
Conference
12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023
Projects
2021-05038 Vinnova DIFFUSE Disentanglement of Features For Utilization in Systematic Evaluation
Available from: 2022-12-09 Created: 2022-12-09 Last updated: 2024-06-17. Bibliographically approved
Alonso-Fernandez, F. & Bigun, J. (2022). Continuous Examination by Automatic Quiz Assessment Using Spiral Codes and Image Processing. In: Ilhem Kallel; Habib M. Kammoun; Lobna Hsairi (Ed.), 2022 IEEE Global Engineering Education Conference (EDUCON). Paper presented at 13th IEEE Global Engineering Education Conference, EDUCON (Educational Conference), Tunis, Tunisia, 28-31 March, 2022 (pp. 929-935). IEEE, 2022-March
2022 (English) In: 2022 IEEE Global Engineering Education Conference (EDUCON) / [ed] Ilhem Kallel; Habib M. Kammoun; Lobna Hsairi, IEEE, 2022, Vol. 2022-March, p. 929-935. Conference paper, Published paper (Refereed)
Abstract [en]

We describe a technical solution implemented at Halmstad University to automatise the assessment and reporting of results of paper-based quiz exams. Paper quizzes are affordable and within reach of campus education in classrooms. Offering and taking them is accepted as they cause fewer issues with reliability and democratic access; e.g., a large number of students can take them without a trusted mobile device, internet, or battery. By contrast, correction of the quizzes is a considerable obstacle. We suggest mitigating the issue with a novel image processing technique using harmonic spirals that aligns answer sheets with sub-pixel accuracy to read student identity and answers and to email results within minutes, all fully automatically. Using the described method, we carry out regular weekly examinations in two master courses at Halmstad University without a significant workload increase. The employed solution also enables us to assign a unique identifier to each quiz (e.g. week 1, week 2...) while allowing us to have an individualised quiz for each student. © 2022 IEEE.

Place, publisher, year, edition, pages
IEEE, 2022
Series
IEEE Global Engineering Education Conference, ISSN 2165-9559, E-ISSN 2165-9567 ; 2022
Keywords
Continuous examination, automatic correction, image processing, spiral codes, continuous education
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-46251 (URN)
10.1109/EDUCON52537.2022.9766699 (DOI)
000836390500137 ()
2-s2.0-85123685725 (Scopus ID)
978-1-6654-4434-7 (ISBN)
978-1-6654-4435-4 (ISBN)
Conference
13th IEEE Global Engineering Education Conference, EDUCON (Educational Conference), Tunis, Tunisia, 28-31 March, 2022
Funder
Swedish Research Council; Vinnova; Knowledge Foundation
Available from: 2022-01-26 Created: 2022-01-26 Last updated: 2023-10-05. Bibliographically approved
Projects
Lip-motion, face and speech analysis in synergy, for human-machine interfaces [2008-03876_VR]; Halmstad University
Scale, orientation and illumination invariant information encoding and decoding -- A study on invariant visual codes [2011-05819_VR]; Halmstad University
Facial detection and recognition resilient to physical image deformations [2012-04313_VR]; Halmstad University
Facial Analysis in the Era of Mobile Devices and Face Masks [2021-05110_VR]; Halmstad University
Publications
Alonso-Fernandez, F., Hernandez-Diaz, K., Tiwari, P. & Bigun, J. (2024). Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition. In: Proceedings - 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024. Paper presented at 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, Rome, Italy, December 2-5, 2024 (pp. 1-6). IEEE
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades Rubio, J. M. & Bigun, J. (2024). SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning. In: Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J. (Ed.), Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023. Paper presented at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023 (pp. 349-361). Cham: Springer, 14335
Busch, C., Deravi, F., Frings, D., Alonso-Fernandez, F. & Bigun, J. (2023). Facilitating free travel in the Schengen area—A position paper by the European Association for Biometrics. IET Biometrics, 12(2), 112-128
Zell, O., Påsson, J., Hernandez-Diaz, K., Alonso-Fernandez, F. & Nilsson, F. (2023). Image-Based Fire Detection in Industrial Environments with YOLOv4. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods ICPRAM. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 379-386). Setúbal: SciTePress, 1
Baaz, A., Yonan, Y., Hernandez-Diaz, K., Alonso-Fernandez, F. & Nilsson, F. (2023). Synthetic Data for Object Classification in Industrial Applications. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods ICPRAM. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 387-394). SciTePress, 1
Karlsson, J., Strand, F., Bigun, J., Alonso-Fernandez, F., Hernandez-Diaz, K. & Nilsson, F. (2023). Visual Detection of Personal Protective Equipment and Safety Gear on Industry Workers. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods: February 22-24, 2023, in Lisbon, Portugal. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 395-402). SciTePress, 1
Hedman, P., Skepetzis, V., Hernandez-Diaz, K., Bigun, J. & Alonso-Fernandez, F. (2022). On the effect of selfie beautification filters on face detection and recognition. Pattern Recognition Letters, 163, 104-111
Identifiers
ORCID iD: orcid.org/0000-0002-4929-1262
