hh.se Publications
Alonso-Fernandez, Fernando (ORCID: orcid.org/0000-0002-1400-346X)
Publications (10 of 130)
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades Rubio, J. M. & Bigun, J. (2024). SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning. In: Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J. (Ed.), Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023. Paper presented at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023 (pp. 349-361). Cham: Springer, 14335
SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning
2024 (English). In: Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023 / [ed] Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J., Cham: Springer, 2024, Vol. 14335, p. 349-361. Conference paper, Published paper (Refereed)
Abstract [en]

The widespread use of mobile devices for various digital services has created a need for reliable and real-time person authentication. In this context, facial recognition technologies have emerged as a dependable method for verifying users due to the prevalence of cameras in mobile devices and their integration into everyday applications. The rapid advancement of deep Convolutional Neural Networks (CNNs) has led to numerous face verification architectures. However, these models are often large and impractical for mobile applications, reaching sizes of hundreds of megabytes with millions of parameters. We address this issue by developing SqueezerFaceNet, a light face recognition network with fewer than 1M parameters. This is achieved by applying a network pruning method based on Taylor scores, where filters with small importance scores are removed iteratively. Starting from an already small network (of 1.24M parameters) based on SqueezeNet, we show that it can be further reduced (up to 40%) without an appreciable loss in performance. To the best of our knowledge, we are the first to evaluate network pruning methods for the task of face recognition. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
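As a rough illustration of the pruning idea described in the abstract (a minimal pure-Python sketch, not the authors' implementation; the function names, data shapes, and the single-round simplification are assumptions), Taylor-score pruning ranks each filter by a first-order estimate of its contribution to the loss and discards the lowest-ranked ones:

```python
def taylor_scores(activations, gradients):
    # First-order Taylor importance of each filter: the magnitude of
    # sum(activation * gradient) estimates how much the loss would
    # change if the filter were removed (simplified illustration).
    return [abs(sum(a * g for a, g in zip(acts, grads)))
            for acts, grads in zip(activations, gradients)]

def prune_iteratively(n_filters, activations, gradients, keep_ratio=0.6):
    # Rank filters by importance and keep only the top keep_ratio share,
    # mimicking one round of the iterative pruning procedure.
    scores = taylor_scores(activations, gradients)
    ranked = sorted(range(n_filters), key=lambda i: scores[i], reverse=True)
    n_keep = max(1, int(n_filters * keep_ratio))
    return sorted(ranked[:n_keep])  # indices of surviving filters
```

In practice each round would be followed by fine-tuning before the next pruning pass; this sketch only shows the ranking step.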

Place, publisher, year, edition, pages
Cham: Springer, 2024
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14335
Keywords
Face recognition, Mobile Biometrics, CNN pruning, Taylor scores
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-51299 (URN); 10.1007/978-3-031-49552-6_30 (DOI); 2-s2.0-85180788350 (Scopus ID); 978-3-031-49551-9 (ISBN); 978-3-031-49552-6 (ISBN)
Conference
VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023
Funder
Swedish Research Council; Vinnova
Note

Funding: F. A.-F., K. H.-D., and J. B. thank the Swedish Research Council (VR) and the Swedish Innovation Agency (VINNOVA) for funding their research. Author J. M. B. thanks the project EX-PLAINING - "Project EXPLainable Artificial INtelligence systems for health and well-beING", under Spanish national projects funding (PID2019-104829RA-I00/AEI/10.13039/501100011033).

Available from: 2023-07-20. Created: 2023-07-20. Last updated: 2024-01-18. Bibliographically approved
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades, J. M., Tiwari, P. & Bigun, J. (2023). An Explainable Model-Agnostic Algorithm for CNN-Based Biometrics Verification. In: 2023 IEEE International Workshop on Information Forensics and Security (WIFS). Paper presented at 2023 IEEE International Workshop on Information Forensics and Security, WIFS 2023, Nürnberg, Germany, 4-7 December, 2023. Institute of Electrical and Electronics Engineers (IEEE)
An Explainable Model-Agnostic Algorithm for CNN-Based Biometrics Verification
2023 (English). In: 2023 IEEE International Workshop on Information Forensics and Security (WIFS), Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper, Published paper (Refereed)
Abstract [en]

This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate under a biometric verification setting. LIME was initially proposed for networks with the same output classes used for training, and it employs the softmax probability to determine which regions of the image contribute the most to classification. However, in a verification setting, the classes to be recognized have not been seen during training. In addition, instead of using the softmax output, face descriptors are usually obtained from a layer before the classification layer. The model is adapted to achieve explainability via cosine similarity between feature vectors of perturbed versions of the input image. The method is showcased for face biometrics with two CNN models based on MobileNetv2 and ResNet50. © 2023 IEEE.
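The core adaptation can be sketched as follows (a hypothetical stand-in: `embed` replaces the CNN feature extractor, regions are given as pixel masks, and the masking scheme is assumed; this illustrates the principle, not the paper's code). Each region is occluded in turn, the perturbed image is re-embedded, and the drop in cosine similarity to the reference embedding scores that region's importance for the verification decision:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def region_importance(embed, image, regions, reference):
    # Zero out each region, re-embed the perturbed image, and record how
    # much the similarity to the reference drops: a large drop marks a
    # region that matters for matching the two identities.
    base = cosine(embed(image), reference)
    importance = {}
    for name, mask in regions.items():
        perturbed = [0.0 if m else p for p, m in zip(image, mask)]
        importance[name] = base - cosine(embed(perturbed), reference)
    return importance
```

With the identity function as a toy `embed`, occluding pixels that the reference relies on produces a positive drop, while occluding irrelevant pixels produces none.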

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Biometrics, Explainable AI, Face recognition, XAI
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:hh:diva-52721 (URN); 10.1109/WIFS58808.2023.10374866 (DOI); 2-s2.0-85183463933 (Scopus ID); 9798350324914 (ISBN)
Conference
2023 IEEE International Workshop on Information Forensics and Security, WIFS 2023, Nürnberg, Germany, 4-7 December, 2023
Projects
EXPLAINING - ”Project EXPLainable Artificial INtelligence systems for health and well-beING”
Funder
Swedish Research Council; Vinnova
Available from: 2024-02-16. Created: 2024-02-16. Last updated: 2024-02-16. Bibliographically approved
Arvidsson, M., Sawirot, S., Englund, C., Alonso-Fernandez, F., Torstensson, M. & Duran, B. (2023). Drone navigation and license place detection for vehicle location in indoor spaces. Paper presented at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023.
Drone navigation and license place detection for vehicle location in indoor spaces
2023 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Millions of vehicles are transported every year, tightly parked in vessels or boats. To reduce the risks of associated safety issues like fires, knowing the location of vehicles is essential, since different vehicles, such as electric cars, may need different mitigation measures. This work is aimed at creating a solution based on a nano-drone that navigates across rows of parked vehicles and detects their license plates. We do so via a wall-following algorithm and a CNN trained to detect license plates. All computations are done in real-time on the drone, which only sends positions and detected images, allowing the creation of a 2D map with the location of the plates. Our solution is capable of reading all plates across eight test cases (with several rows of plates, different drone speeds, or low light) by aggregating measurements across several drone journeys.
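The wall-following behavior could, for instance, be approximated by a simple proportional controller (a sketch under assumed parameter names and gains; the actual on-drone navigation algorithm is not specified in the abstract):

```python
def wall_follow_step(side_distance, target=1.0, gain=0.5, max_turn=0.3):
    # Steer to keep a constant lateral distance to the row of vehicles:
    # too far from the row -> positive error -> turn toward it,
    # too close -> negative error -> turn away. The turn rate is
    # clamped so the drone never yaws abruptly.
    error = side_distance - target
    turn = max(-max_turn, min(max_turn, gain * error))
    return {"forward": 1.0, "turn": turn}
```

Each control step would run on the drone's side-facing range sensor reading, with plate detection running on the forward camera stream.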

Keywords
Nano-drone, License plate detection, Vehicle location, UAV
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-51292 (URN); 10.48550/arXiv.2307.10165 (DOI)
Conference
VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023
Funder
Vinnova; Swedish Research Council
Available from: 2023-07-19. Created: 2023-07-19. Last updated: 2023-12-08. Bibliographically approved
Rosberg, F., Aksoy, E., Alonso-Fernandez, F. & Englund, C. (2023). FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping. In: Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023. Paper presented at 23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023, Waikoloa, Hawaii, USA, 3-7 January 2023 (pp. 3443-3452). Piscataway: IEEE
FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping
2023 (English). In: Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023, Piscataway: IEEE, 2023, p. 3443-3452. Conference paper, Published paper (Refereed)
Abstract [en]

In this work, we present a new single-stage method for subject agnostic face swapping and identity transfer, named FaceDancer. We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR). The AFFA module is embedded in the decoder and adaptively learns to fuse attribute features and features conditioned on identity information without requiring any additional facial segmentation process. In IFSR, we leverage the intermediate features in an identity encoder to preserve important attributes such as head pose, facial expression, lighting, and occlusion in the target face, while still transferring the identity of the source face with high fidelity. We conduct extensive quantitative and qualitative experiments on various datasets and show that the proposed FaceDancer outperforms other state-of-the-art networks in terms of identity transfer, while having significantly better pose preservation than most of the previous methods. © 2023 IEEE.

Place, publisher, year, edition, pages
Piscataway: IEEE, 2023
Keywords
Algorithms, Biometrics, and algorithms (including transfer, low-shot, semi-, self-, and un-supervised learning), body pose, face, formulations, gesture, Machine learning architectures
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-48618 (URN); 10.1109/WACV56688.2023.00345 (DOI); 000971500203054 (); 2-s2.0-85149000603 (Scopus ID); 9781665493468 (ISBN)
Conference
23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023, Waikoloa, Hawaii, USA, 3-7 January 2023
Available from: 2022-11-15. Created: 2022-11-15. Last updated: 2023-08-21. Bibliographically approved
Busch, C., Deravi, F., Frings, D., Alonso-Fernandez, F. & Bigun, J. (2023). Facilitating free travel in the Schengen area—A position paper by the European Association for Biometrics. IET Biometrics, 12(2), 112-128
Facilitating free travel in the Schengen area—A position paper by the European Association for Biometrics
2023 (English). In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 12, no 2, p. 112-128. Article in journal (Refereed). Published
Abstract [en]

Due to migration, terror threats and the viral pandemic, various EU member states have re-established internal border control or even closed their borders. The European Association for Biometrics (EAB), a non-profit organisation, solicited the views of its members on ways in which biometric technologies and services may be used to help with re-establishing open borders within the Schengen area while at the same time mitigating any adverse effects. From the responses received, this position paper was composed to identify ideas to re-establish free travel between the member states in the Schengen area. The paper covers the contending needs for security, open borders and fundamental rights, as well as legal constraints that any technological solution must consider. A range of specific technologies for direct biometric recognition alongside complementary measures are outlined. The interrelated issues of ethical and societal considerations are also highlighted. Provided a holistic approach is adopted, it may be possible to reach a more optimal trade-off with regard to open borders while maintaining a high level of security and protection of fundamental rights. The European Association for Biometrics and its members can play an important role in fostering a shared understanding of security and mobility challenges and their solutions. © 2023 The Authors. IET Biometrics published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.

Place, publisher, year, edition, pages
Oxford: John Wiley & Sons, 2023
Keywords
biometric applications, biometric template protection, biometrics (access control), computer vision, data privacy, image analysis for biometrics, object tracking
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-50367 (URN); 10.1049/bme2.12107 (DOI); 000976420600001 (); 2-s2.0-85153281620 (Scopus ID)
Available from: 2023-04-20. Created: 2023-04-20. Last updated: 2023-12-06. Bibliographically approved
Rosberg, F., Aksoy, E., Englund, C. & Alonso-Fernandez, F. (2023). FIVA: Facial Image and Video Anonymization and Anonymization Defense. In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Paper presented at 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Paris, France, 2-6 October, 2023 (pp. 362-371). Los Alamitos, CA: IEEE
FIVA: Facial Image and Video Anonymization and Anonymization Defense
2023 (English). In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Los Alamitos, CA: IEEE, 2023, p. 362-371. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we present a new approach for facial anonymization in images and videos, abbreviated as FIVA. Our proposed method is able to maintain the same face anonymization consistently over frames with our suggested identity-tracking and guarantees a strong difference from the original face. FIVA allows for 0 true positives for a false acceptance rate of 0.001. Our work considers the important security issue of reconstruction attacks and investigates adversarial noise, uniform noise, and parameter noise to disrupt reconstruction attacks. In this regard, we apply different defense and protection methods against these privacy threats to demonstrate the scalability of FIVA. On top of this, we also show that reconstruction attack models can be used for detection of deep fakes. Last but not least, we provide experimental results showing how FIVA can even enable face swapping, which is purely trained on a single target image. © 2023 IEEE.

Place, publisher, year, edition, pages
Los Alamitos, CA: IEEE, 2023
Series
IEEE International Conference on Computer Vision Workshops, E-ISSN 2473-9944
Keywords
Anonymization, Deep Fakes, Facial Recognition, Identity Tracking, Reconstruction Attacks
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-52592 (URN); 10.1109/ICCVW60793.2023.00043 (DOI); 2-s2.0-85182917356 (Scopus ID); 9798350307443 (ISBN)
Conference
2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Paris, France, 2-6 October, 2023
Available from: 2024-02-08. Created: 2024-02-08. Last updated: 2024-02-08. Bibliographically approved
Zell, O., Påsson, J., Hernandez-Diaz, K., Alonso-Fernandez, F. & Nilsson, F. (2023). Image-Based Fire Detection in Industrial Environments with YOLOv4. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023.
Image-Based Fire Detection in Industrial Environments with YOLOv4
2023 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Fires have destructive power when they break out and affect their surroundings on a devastatingly large scale. The best way to minimize their damage is to detect the fire as quickly as possible, before it has a chance to grow. Accordingly, this work looks into the potential of AI to detect and recognize fires and reduce detection time using object detection on an image stream. Object detection has made giant leaps in speed and accuracy over the last six years, making real-time detection feasible. To this end, we collected and labeled appropriate data from several public sources, which have been used to train and evaluate several models based on the popular YOLOv4 object detector. Our focus, driven by a collaborating industrial partner, is to implement our system in an industrial warehouse setting, which is characterized by high ceilings. A drawback of traditional smoke detectors in this setup is that the smoke has to rise to a sufficient height. The AI models brought forward in this research managed to outperform these detectors by a significant amount of time, providing valuable early warning that could help to further minimize the effects of fires.

Keywords
Fire detection, Smoke Detection, Machine learning, Computer Vision, YOLOv4
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-48793 (URN); 10.48550/arXiv.2212.04786 (DOI)
Conference
12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023
Projects
2021-05038 Vinnova DIFFUSE Disentanglement of Features For Utilization in Systematic Evaluation
Funder
Swedish Research Council; Vinnova
Available from: 2022-12-09. Created: 2022-12-09. Last updated: 2023-12-06. Bibliographically approved
Hernandez-Diaz, K., Alonso-Fernandez, F. & Bigun, J. (2023). One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations. IEEE Access, 11, 100396-100413
One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations
2023 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 11, p. 100396-100413. Article in journal (Refereed). Published
Abstract [en]

One weakness of machine-learning algorithms is the need to train the models for a new task. This presents a specific challenge for biometric recognition due to the dynamic nature of databases and, in some instances, the reliance on subject collaboration for data collection. In this paper, we investigate the behavior of deep representations in widely used CNN models under extreme data scarcity for One-Shot periocular recognition, a biometric recognition task. We analyze the outputs of CNN layers as identity-representing feature vectors. We examine the impact of Domain Adaptation on the network layers' output for unseen data and evaluate the method's robustness concerning data normalization and generalization of the best-performing layer. By utilizing out-of-the-box CNNs trained for the ImageNet Recognition Challenge and standard computer vision algorithms, we improved state-of-the-art results that relied on networks trained with biometric datasets of millions of images and fine-tuned for the target periocular dataset. For example, for the Cross-Eyed dataset, we could reduce the EER by 67% and 79% (from 1.70% and 3.41% to 0.56% and 0.71%) in the Close-World and Open-World protocols, respectively, for the periocular case. We also demonstrate that traditional algorithms like SIFT can outperform CNNs in situations with limited data, or in scenarios where the network has not been trained with the test classes, such as the Open-World mode. SIFT alone was able to reduce the EER by 64% and 71.6% (from 1.7% and 3.41% to 0.6% and 0.97%) for Cross-Eyed in the Close-World and Open-World protocols, respectively, and achieved a reduction of 4.6% (from 3.94% to 3.76%) in the PolyU database for the Open-World and single biometric case.
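For reference, the Equal Error Rate (EER) figures quoted above are computed from lists of genuine and impostor comparison scores; a minimal sketch (simple threshold sweep, assumed function name, not the paper's evaluation code):

```python
def equal_error_rate(genuine, impostor):
    # Sweep candidate thresholds over all observed scores and return
    # the operating point where the false acceptance rate (impostors
    # scoring at or above the threshold) and the false rejection rate
    # (genuines scoring below it) are closest; the EER is their
    # average at that threshold.
    best_gap, best_eer = float("inf"), None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer
```

Perfectly separated score distributions yield an EER of 0; fully overlapping ones approach 0.5.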

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2023
Keywords
Biometrics, Biometrics (access control), Databases, Deep learning, Deep Representation, Face recognition, Feature extraction, Image recognition, Iris recognition, One-Shot Learning, Periocular, Representation learning, Task analysis, Training, Transfer Learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-51749 (URN); 10.1109/ACCESS.2023.3315234 (DOI); 2-s2.0-85171525429 (Scopus ID)
Funder
Swedish Research Council; Vinnova
Available from: 2023-10-19. Created: 2023-10-19. Last updated: 2023-10-20. Bibliographically approved
Baaz, A., Yonan, Y., Hernandez-Diaz, K., Alonso-Fernandez, F. & Nilsson, F. (2023). Synthetic Data for Object Classification in Industrial Applications. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods ICPRAM. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 387-394). SciTePress, 1
Synthetic Data for Object Classification in Industrial Applications
2023 (English). In: Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods ICPRAM / [ed] Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred, SciTePress, 2023, Vol. 1, p. 387-394. Conference paper, Published paper (Refereed)
Abstract [en]

One of the biggest challenges in machine learning is data collection. Training data is an important part since it determines how the model will behave. In object classification, capturing a large number of images per object and in different conditions is not always possible and can be very time-consuming and tedious. Accordingly, this work explores the creation of artificial images using a game engine to cope with limited data in the training dataset. We combine real and synthetic data to train the object classification engine, a strategy that has been shown to increase confidence in the decisions made by the classifier, which is often critical in industrial setups. To combine real and synthetic data, we first train the classifier on a massive amount of synthetic data, and then we fine-tune it on real images. Another important result is that the amount of real images needed for fine-tuning is not very high, reaching top accuracy with just 12 or 24 images per class. This substantially reduces the requirements of capturing a great amount of real data. © 2023 by SCITEPRESS-Science and Technology Publications, Lda.
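The train-on-synthetic-then-fine-tune strategy can be illustrated with a toy nearest-centroid classifier standing in for the CNN (all names and data here are hypothetical; the paper fine-tunes a deep network, not centroids):

```python
def centroid(points):
    # Mean vector of a list of equal-length feature vectors.
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train_synthetic_then_finetune(synthetic, real, alpha=0.5):
    # Stage 1: fit class centroids on the abundant synthetic data.
    # Stage 2: "fine-tune" by shifting each centroid toward the mean
    # of the few available real examples of that class.
    model = {c: centroid(pts) for c, pts in synthetic.items()}
    for c, pts in real.items():
        real_c = centroid(pts)
        model[c] = [(1 - alpha) * m + alpha * r
                    for m, r in zip(model[c], real_c)]
    return model

def classify(model, x):
    # Predict the class whose centroid is nearest in squared distance.
    return min(model, key=lambda c: sum((m - v) ** 2
                                        for m, v in zip(model[c], x)))
```

The point of the sketch is the two-stage structure: bulk training on cheap synthetic samples, then a small correction from a handful of real ones.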

Place, publisher, year, edition, pages
SciTePress, 2023
Keywords
Synthetic Data, Object Classification, Machine Learning, Computer Vision, ResNet50
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-48794 (URN); 10.5220/0011689900003411 (DOI); 2-s2.0-85174507299 (Scopus ID)
Conference
12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023
Projects
2021-05038 Vinnova DIFFUSE Disentanglement of Features For Utilization in Systematic Evaluation
Funder
Swedish Research Council; Vinnova
Available from: 2022-12-09. Created: 2022-12-09. Last updated: 2024-02-14. Bibliographically approved
Karlsson, J., Strand, F., Bigun, J., Alonso-Fernandez, F., Hernandez-Diaz, K. & Nilsson, F. (2023). Visual Detection of Personal Protective Equipment and Safety Gear on Industry Workers. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods: February 22-24, 2023, in Lisbon, Portugal. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 395-402). SciTePress, 1
Visual Detection of Personal Protective Equipment and Safety Gear on Industry Workers
2023 (English). In: Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods: February 22-24, 2023, in Lisbon, Portugal / [ed] Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred, SciTePress, 2023, Vol. 1, p. 395-402. Conference paper, Published paper (Refereed)
Abstract [en]

Workplace injuries are common in today's society due to a lack of adequately worn safety equipment. A system that only admits appropriately equipped personnel can be created to improve working conditions. The goal is thus to develop a system that will improve workers' safety using a camera that will detect the usage of Personal Protective Equipment (PPE). To this end, we collected and labeled appropriate data from several public sources, which have been used to train and evaluate several models based on the popular YOLOv4 object detector. Our focus, driven by a collaborating industrial partner, is to implement our system into an entry control point where workers must present themselves to obtain access to a restricted area. Combined with facial identity recognition, the system would ensure that only authorized people wearing appropriate equipment are granted access. A novelty of this work is that we increase the number of classes to five objects (hardhat, safety vest, safety gloves, safety glasses, and hearing protection), whereas most existing works only focus on one or two classes, usually hardhats or vests. The AI model developed provides good detection accuracy at a distance of 3 and 5 meters in the collaborative environment where we aim to operate (mAP of 99% and 89%, respectively). The small size of some objects or the potential occlusion by body parts have been identified as potential factors that are detrimental to accuracy, which we have counteracted via data augmentation and cropping of the body before applying PPE detection. © 2023 by SCITEPRESS-Science and Technology Publications, Lda.

Place, publisher, year, edition, pages
SciTePress, 2023
Series
ICPRAM, E-ISSN 2184-4313
Keywords
PPE, PPE Detection, Personal Protective Equipment, Machine Learning, Computer Vision, YOLO
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-48795 (URN); 10.5220/0011693500003411 (DOI); 2-s2.0-85174511525 (Scopus ID); 978-989-758-626-2 (ISBN)
Conference
12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023
Projects
2021-05038 Vinnova DIFFUSE Disentanglement of Features For Utilization in Systematic Evaluation
Available from: 2022-12-09. Created: 2022-12-09. Last updated: 2023-11-30. Bibliographically approved
Projects
Bio-distance, Biometrics at a distance [2009-07215_VR]; Halmstad University
Facial detection and recognition resilient to physical image deformations [2012-04313_VR]; Halmstad University
Ocular biometrics in unconstrained sensing environments [2016-03497_VR]; Halmstad University
Publications
Hernandez-Diaz, K., Alonso-Fernandez, F. & Bigun, J. (2023). One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations. IEEE Access, 11, 100396-100413
Alonso-Fernandez, F., Hernandez-Diaz, K., Ramis, S., Perales, F. J. & Bigun, J. (2021). Facial Masks and Soft-Biometrics: Leveraging Face Recognition CNNs for Age and Gender Prediction on Mobile Ocular Images. IET Biometrics, 10(5), 562-580
Gonzalez-Sosa, E., Vera-Rodriguez, R., Fierrez, J., Alonso-Fernandez, F. & Patel, V. M. (2019). Exploring Body Texture From mmW Images for Person Recognition. IEEE Transactions on Biometrics, Behavior, and Identity Science, 1(2), 139-151
Continuous Multimodal Biometrics for Vehicles & Surveillance [2018-00472_Vinnova]; Halmstad University
Human identity and understanding of person behavior using smartphone devices [2018-04347_Vinnova]; Halmstad University
Facial Analysis in the Era of Mobile Devices and Face Masks [2021-05110_VR]; Halmstad University
Publications
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades Rubio, J. M. & Bigun, J. (2024). SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning. In: Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J. (Ed.), Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023. Paper presented at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023 (pp. 349-361). Cham: Springer, 14335
Busch, C., Deravi, F., Frings, D., Alonso-Fernandez, F. & Bigun, J. (2023). Facilitating free travel in the Schengen area—A position paper by the European Association for Biometrics. IET Biometrics, 12(2), 112-128
Zell, O., Påsson, J., Hernandez-Diaz, K., Alonso-Fernandez, F. & Nilsson, F. (2023). Image-Based Fire Detection in Industrial Environments with YOLOv4. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023.
Baaz, A., Yonan, Y., Hernandez-Diaz, K., Alonso-Fernandez, F. & Nilsson, F. (2023). Synthetic Data for Object Classification in Industrial Applications. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods ICPRAM. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 387-394). SciTePress, 1
Karlsson, J., Strand, F., Bigun, J., Alonso-Fernandez, F., Hernandez-Diaz, K. & Nilsson, F. (2023). Visual Detection of Personal Protective Equipment and Safety Gear on Industry Workers. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods: February 22-24, 2023, in Lisbon, Portugal. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 395-402). SciTePress, 1
Hedman, P., Skepetzis, V., Hernandez-Diaz, K., Bigun, J. & Alonso-Fernandez, F. (2022). On the effect of selfie beautification filters on face detection and recognition. Pattern Recognition Letters, 163, 104-111
DIFFUSE: Disentanglement of Features For Utilization in Systematic Evaluation [2021-05038_Vinnova]; RISE - Research Institutes of Sweden (2017-2019) (Closed down 2019-12-31)
Big Data-Powered End User Function Development (BIG FUN) [2021-05045_Vinnova]; Halmstad University
AI-Powered Crime Scene Analysis [2022-00919_Vinnova]; Halmstad University