hh.se Publications
Alonso-Fernandez, Fernando (ORCID iD: orcid.org/0000-0002-1400-346X)
Publications (10 of 146)
Rosberg, F., Englund, C., Aksoy, E. & Alonso-Fernandez, F. (2026). Adversarial Attacks and Identity Leakage in De-Identification Systems: An Empirical Study. IEEE Transactions on Biometrics, Behavior, and Identity Science, 1-18
2026 (English). In: IEEE Transactions on Biometrics, Behavior, and Identity Science, p. 1-18. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

In this paper, we investigate the impact of adversarial attacks on identity encoders within a realistic de-identification framework. Our experiments show that attacks transfer from an external surrogate model to the system model (e.g., CosFace to ArcFace), allowing the adversary to cause identity information to leak in a sufficiently sensitive face recognition system. We present experimental evidence and propose strategies to mitigate this vulnerability. Specifically, we show how fine-tuning on adversarial examples helps to mitigate this effect for distortion-based attacks (e.g., snow, fog), while a simple low-pass filter can attenuate the effect of adversarial noise without affecting the de-identified images. Our mitigation results in a de-identification system that preserves its functionality while being significantly more robust to adversarial noise.
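As a rough illustration of the low-pass filtering mitigation described above (the filter type, the sigma value, and the image handling are assumptions for this sketch, not details taken from the paper), a Gaussian blur applied before the image reaches the identity encoder attenuates high-frequency adversarial perturbations while leaving the coarse facial structure of the de-identified image largely intact:

```python
# Hedged sketch, not the paper's code: a Gaussian low-pass filter applied
# before the identity encoder; sigma is an illustrative choice.
import numpy as np
from scipy.ndimage import gaussian_filter

def low_pass_defense(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Blur each colour channel separately so that high-frequency adversarial
    perturbations are attenuated while the coarse facial structure remains."""
    return np.stack(
        [gaussian_filter(image[..., c], sigma=sigma) for c in range(image.shape[-1])],
        axis=-1,
    )

# Example with a random stand-in for an HxWx3 face crop in [0, 1].
face = np.random.rand(112, 112, 3).astype(np.float32)
filtered = low_pass_defense(face, sigma=1.0)
print(filtered.shape)  # (112, 112, 3)
```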

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2026
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-55647 (URN), 10.1109/tbiom.2025.3596069 (DOI), 2-s2.0-105013048224 (Scopus ID)
Funder
Vinnova, 2023-02996
Available from: 2025-03-18. Created: 2025-03-18. Last updated: 2026-02-18. Bibliographically approved.
Jankowska, J., Kostek, B., Alonso-Fernandez, F. & Tiwari, P. (2026). Exploring the correlation between the type of music and the emotions evoked: A study using subjective questionnaires and EEG. Paper presented at IX International Congress on Artificial Intelligence and Pattern Recognition IWAIPR 2025, Varadero, Cuba, October 14-17, 2025 (pp. 1-12).
2026 (English). Conference paper, Published paper (Refereed).
Abstract [en]

This work examines how different types of music affect human emotions. While participants listened to music, a subjective survey was carried out and brain activity was measured using an EEG helmet. The aim is to demonstrate the impact of different music genres on emotions. The research involved a diverse group of participants with different genders and musical preferences, which made it possible to capture a wide range of emotional responses to music. After the experiment, the respondents' questionnaire answers were analyzed in relation to the EEG signals. The analysis revealed connections between emotions and observed brain activity.
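As a hedged sketch of the kind of relationship analysis mentioned above (the alpha-band feature, sampling rate, and rating scale are illustrative assumptions, not details from the study), subjective ratings can be correlated with a simple per-excerpt EEG feature:

```python
# Illustrative only: correlating questionnaire ratings with alpha-band EEG
# power per music excerpt. Sampling rate, band, and rating scale are assumed.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

fs = 256  # assumed sampling rate in Hz

def alpha_power(segment: np.ndarray) -> float:
    """Mean power in the 8-13 Hz alpha band for one EEG channel."""
    freqs, psd = welch(segment, fs=fs, nperseg=2 * fs)
    band = (freqs >= 8) & (freqs <= 13)
    return float(psd[band].mean())

# One 30 s single-channel segment per excerpt, plus its subjective rating.
rng = np.random.default_rng(0)
segments = [rng.standard_normal(30 * fs) for _ in range(12)]
ratings = rng.integers(1, 6, size=12)          # e.g. 1-5 valence scale

powers = np.array([alpha_power(s) for s in segments])
r, p = pearsonr(powers, ratings)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```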

Keywords
Music and Emotion, EEG-Based Emotion Recognition, Brain-Computer Interface (BCI)
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-57694 (URN), 10.1007/978-3-032-11358-0_33 (DOI)
Conference
IX International Congress on Artificial Intelligence and Pattern Recognition IWAIPR 2025, Varadero, Cuba, October 14 - 17, 2025
Funder
Swedish Research Council
Available from: 2025-10-30 Created: 2025-10-30 Last updated: 2026-03-11
Hashemi-Nazari, Y., Tajaddini, A., Saberi-Movahed, F., Alonso-Fernandez, F. & Tiwari, P. (2026). Robust oblique projection and weighted NMF for hyperspectral unmixing. Pattern Recognition, 170, Article ID 112029.
2026 (English). In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 170, article id 112029. Article in journal (Refereed). Published.
Abstract [en]

Hyperspectral unmixing (HU) is a crucial method for interpreting remotely sensed hyperspectral images (HSIs), with the aim of splitting the image into pure spectral components (endmembers) and their abundance fractions in every pixel of the scene. However, the effectiveness of this procedure is hindered by the presence of noise and anomalies. These kinds of disruptions mainly arise from real-world factors such as atmospheric effects and endmember variability. To address this challenge, a novel approach called Graph-Regularized Oblique Projection Weighted NMF (GOP-WNMF) is introduced, which is grounded in a more precise separation of the signal and noise subspaces, aiming to enhance the accuracy and robustness of the analysis. GOP-WNMF achieves this by constructing an oblique projector that projects each pixel onto the signal subspace, i.e., the space formed by the signatures of the endmembers, parallel to the noise subspace. This approach effectively suppresses noise while preserving crucial spectral information. Furthermore, our new oblique NMF framework includes a unique residual-based weighting approach to detect and remove anomalies in pixels and spectral bands simultaneously. In addition, another weighting matrix is proposed by establishing a bipartite graph connecting endmembers and pixels to promote smoothness and sparsity in the resulting abundance maps. GOP-WNMF also enhances abundance map estimation accuracy by mitigating the negative effects of pixel outliers through the use of the Laplacian eigenmaps technique to maintain the manifold structure of the data. The effectiveness of GOP-WNMF is evaluated through comprehensive testing on synthetic and real HSIs, and its superiority is demonstrated over multiple state-of-the-art approaches. The source code is available at https://github.com/yasinhashemi/GOP-WNMF.
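A minimal numerical sketch of the oblique projection idea (the toy subspaces and shapes are assumptions; this is not the GOP-WNMF implementation, which is available at the linked repository): the projector keeps the component of a pixel that lies in the endmember (signal) subspace and removes the component lying along the noise subspace:

```python
# Minimal numpy sketch (assumed toy subspaces; not the GOP-WNMF code): an
# oblique projector onto the endmember (signal) subspace S along the noise
# subspace N, so that the noise component of each pixel is removed.
import numpy as np

def oblique_projector(S: np.ndarray, N: np.ndarray) -> np.ndarray:
    """E such that E @ x keeps the part of x in range(S) and cancels range(N)."""
    Pn_perp = np.eye(N.shape[0]) - N @ np.linalg.pinv(N)  # complement of range(N)
    return S @ np.linalg.pinv(Pn_perp @ S) @ Pn_perp

rng = np.random.default_rng(0)
S = rng.standard_normal((5, 2))              # 5 bands, 2 endmember signatures
N = rng.standard_normal((5, 1))              # 1 estimated noise direction
E = oblique_projector(S, N)

abund = np.array([0.7, 0.3])
pixel = S @ abund + 0.5 * N[:, 0]            # mixed pixel + noise
print(np.allclose(E @ pixel, S @ abund))     # True: the noise part is projected out
```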

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2026
Keywords
Anomaly detection, Laplacian eigenmaps, Non-negative matrix factorization (NMF), Oblique projection, Sparse unmixing
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-57086 (URN), 10.1016/j.patcog.2025.112029 (DOI), 001532820300001 (ISI), 2-s2.0-105009880361 (Scopus ID)
Funder
Swedish Research Council; Vinnova
Available from: 2025-07-23. Created: 2025-07-23. Last updated: 2026-02-06. Bibliographically approved.
Ayed, I., Alcover, G. M., Alonso-Fernandez, F. & Jaume-i-Capó, A. (2025). Beyond Static Bias: Quantifying Fairness Variability in CheXpert. Paper presented at NeurIPS 2025, Workshop on Reliable ML from Unreliable Data, San Diego, USA, December 2-7, 2025.
2025 (English). Conference paper, Published paper (Refereed).
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-58458 (URN)
Conference
NeurIPS 2025, Workshop on Reliable ML from Unreliable Data, San Diego, USA, December 2-7, 2025
Available from: 2026-02-17. Created: 2026-02-17. Last updated: 2026-02-18. Bibliographically approved.
Cooney, M. & Alonso-Fernandez, F. (2025). Blimp-based Crime Scene Analysis. In: Sławomir Nowaczyk; Anna Vettoruzzo (Ed.), CEUR Workshop Proceedings: . Paper presented at 2025 Swedish AI Society Workshop, SAIS 2025, Halmstad, Sweden, 16-17 June, 2025 (pp. 63-78). Aachen: Technical University of Aachen, 4037
2025 (English). In: CEUR Workshop Proceedings / [ed] Sławomir Nowaczyk; Anna Vettoruzzo, Aachen: Technical University of Aachen, 2025, Vol. 4037, p. 63-78. Conference paper, Published paper (Refereed).
Abstract [en]

Crime is a critical problem, and it often takes place behind closed doors, posing additional difficulties for investigators. To bring hidden truths to light, evidence at indoor crime scenes must be documented before any contamination or degradation occurs. Here, we address this challenge from the perspective of artificial intelligence (AI), computer vision, and robotics: specifically, we explore the use of a blimp as a "floating camera" to drift over and record evidence with minimal disturbance. Adopting a rapid prototyping approach, we develop a proof-of-concept to investigate the capabilities required for manual or semi-autonomous operation. Our results demonstrate the feasibility of equipping indoor blimps with various components (such as RGB and thermal cameras, LiDARs, and WiFi, with 20 minutes of battery life). Moreover, we confirm the core premise: that such blimps can be used to observe crime scene evidence while generating little airflow. We conclude by proposing some ideas related to detection (e.g., of bloodstains), mapping, and path planning, with the aim of stimulating further discussion and exploration.

Place, publisher, year, edition, pages
Aachen: Technical University of Aachen, 2025
Series
CEUR Workshop Proceedings, ISSN 1613-0073
Keywords
small blimp, indoor crime scene analysis, exploratory design, applied AI
National Category
Robotics and automation
Identifiers
urn:nbn:se:hh:diva-58014 (URN), 2-s2.0-105017587029 (Scopus ID)
Conference
2025 Swedish AI Society Workshop, SAIS 2025, Halmstad, Sweden, 16-17 June, 2025
Available from: 2025-12-10. Created: 2025-12-10. Last updated: 2025-12-11. Bibliographically approved.
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades Rubio, J. M., Tiwari, P. & Bigun, J. (2025). Deep network pruning: A comparative study on CNNs in face recognition. Pattern Recognition Letters, 189, 221-228
2025 (English). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 189, p. 221-228. Article in journal (Refereed). Published.
Abstract [en]

The widespread use of mobile devices for all kinds of transactions makes reliable, real-time identity authentication necessary, leading to the adoption of face recognition (FR) via the cameras embedded in such devices. Progress in deep Convolutional Neural Networks (CNNs) has provided substantial advances in FR. Nonetheless, the size of state-of-the-art architectures is unsuitable for mobile deployment, since they often encompass hundreds of megabytes and millions of parameters. We address this by studying methods for deep network compression applied to FR. In particular, we apply network pruning based on Taylor scores, where less important filters are removed iteratively. The method is tested on three networks based on the small SqueezeNet (1.24M parameters) and the popular MobileNetv2 (3.5M) and ResNet50 (23.5M) architectures. These have been selected to showcase the method on CNNs with different complexities and sizes. We observe that a substantial percentage of filters can be removed with minimal performance loss. Also, filters with the highest number of output channels tend to be removed first, suggesting that high-dimensional spaces within popular CNNs are over-dimensioned. The models of this paper are available at https://github.com/HalmstadUniversityBiometrics/CNN-pruning-for-face-recognition.
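A small PyTorch sketch of Taylor-score filter ranking under assumptions (a toy network and a random batch; not the released models): the importance of each convolutional filter is approximated by the magnitude of activation times gradient, and the lowest-scoring filters are flagged for iterative removal:

```python
# Illustrative PyTorch sketch (toy network, random batch; not the released
# models): first-order Taylor scores |activation * gradient| per conv filter,
# with the lowest-scoring filters flagged for iterative removal.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
criterion = nn.CrossEntropyLoss()

acts = {}
def keep_activation(module, inputs, output):
    output.retain_grad()          # keep the gradient of this non-leaf tensor
    acts["conv"] = output
model[0].register_forward_hook(keep_activation)

x = torch.randn(8, 3, 32, 32)     # dummy batch
y = torch.randint(0, 10, (8,))
loss = criterion(model(x), y)
loss.backward()

a = acts["conv"]                                          # (B, C, H, W)
scores = (a * a.grad).abs().mean(dim=(0, 2, 3))           # one Taylor score per filter
to_prune = torch.argsort(scores)[: scores.numel() // 4]   # e.g. lowest 25%
print("filters flagged for pruning:", sorted(to_prune.tolist()))
```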

Place, publisher, year, edition, pages
Amsterdam: Elsevier, 2025
Keywords
Convolutional Neural Networks, Deep learning, Face recognition, Mobile biometrics, Network pruning, Taylor expansion
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:hh:diva-55571 (URN), 10.1016/j.patrec.2025.01.023 (DOI), 2-s2.0-85217214565 (Scopus ID)
Funder
Vinnova, PID2022-136779OB-C32; Swedish Research Council; European Commission
Note

This work was partly done while F. A.-F. was a visiting researcher at the University of the Balearic Islands. F. A.-F., K. H.-D., and J. B. thank the Swedish Research Council (VR) and the Swedish Innovation Agency (VINNOVA) for funding their research. This work is part of the Project PID2022-136779OB-C32 (PLEISAR) funded by MICIU/AEI/10.13039/501100011033 and FEDER, EU.

Available from: 2025-02-28. Created: 2025-02-28. Last updated: 2025-10-01. Bibliographically approved.
Liang, G., Tiwari, P., Nowaczyk, S., Byttner, S. & Alonso-Fernandez, F. (2025). Dynamic Causal Explanation Based Diffusion-Variational Graph Neural Network for Spatiotemporal Forecasting. IEEE Transactions on Neural Networks and Learning Systems, 33(5), 9524-9537
2025 (English). In: IEEE Transactions on Neural Networks and Learning Systems, ISSN 2162-237X, E-ISSN 2162-2388, Vol. 33, no 5, p. 9524-9537. Article in journal (Refereed). Published.
Abstract [en]

Graph neural networks (GNNs), especially dynamic GNNs, have become a research hotspot in spatiotemporal forecasting problems. While many dynamic graph construction methods have been developed, relatively few of them explore the causal relationship between neighbor nodes. Thus, the resulting models lack strong explainability for the causal relationship between the neighbor nodes of the dynamically generated graphs, which can easily lead to risks in subsequent decisions. Moreover, few of them consider the uncertainty and noise of dynamic graphs built from time-series datasets, which are ubiquitous in real-world graph-structured networks. In this article, we propose a novel dynamic diffusion-variational GNN (DVGNN) for spatiotemporal forecasting. For dynamic graph construction, an unsupervised generative model is devised. Two layers of graph convolutional network (GCN) are applied to calculate the posterior distribution of the latent node embeddings in the encoder stage. Then, a diffusion model is used to infer the dynamic link probability and reconstruct causal graphs (CGs) adaptively in the decoder stage. The new loss function is derived theoretically, and the reparameterization trick is adopted to estimate the probability distribution of the dynamic graphs via the evidence lower bound (ELBO) during the backpropagation period. After obtaining the generated graphs, dynamic GCN and temporal attention are applied to predict future states. Experiments are conducted on four real-world datasets with different graph structures in different domains. The results demonstrate that the proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding root mean square error (RMSE) results while exhibiting higher robustness. Also, by F1-score and probability distribution analysis, we demonstrate that DVGNN better reflects the causal relationship and uncertainty of dynamic graphs. The code is available at https://github.com/gorgen2020/DVGNN.
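As a hedged illustration of the reparameterization step and the KL part of the ELBO mentioned above (shapes, names, and the standard-normal prior are assumptions; this is not the DVGNN code, which is linked in the abstract):

```python
# Hedged sketch of the reparameterization trick and the KL term of the ELBO
# (assumed shapes and a standard-normal prior; not the DVGNN code).
import torch

def sample_latent(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """z = mu + sigma * eps with eps ~ N(0, I); gradients flow through mu, logvar."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), averaged over nodes."""
    return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)).mean()

# e.g. 207 graph nodes with 16-dimensional latent embeddings from the encoder
mu = torch.zeros(207, 16, requires_grad=True)
logvar = torch.zeros(207, 16, requires_grad=True)
z = sample_latent(mu, logvar)
print(z.shape, kl_to_standard_normal(mu, logvar).item())
```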

Place, publisher, year, edition, pages
Piscataway: IEEE, 2025
Keywords
Diffusion process, graph neural networks (GNNs), spatiotemporal forecasting, variational graph autoencoders (VGAEs)
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-55718 (URN), 10.1109/tnnls.2024.3415149 (DOI), 001271405600001 (ISI), 38980780 (PubMedID)
Funder
Vinnova; Swedish Research Council
Available from: 2025-03-31. Created: 2025-03-31. Last updated: 2025-10-01. Bibliographically approved.
Cooney, M., Ponrajan, S. & Alonso-Fernandez, F. (2025). Nano Drone-based Indoor Crime Scene Analysis*. In: 2025 IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO): . Paper presented at 2025 IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO), Osaka, Japan, 17-19 July, 2025 (pp. 20-27). Piscataway, NJ: IEEE
2025 (English). In: 2025 IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO), Piscataway, NJ: IEEE, 2025, p. 20-27. Conference paper, Published paper (Refereed).
Abstract [en]

Technologies such as robotics, Artificial Intelligence (AI), and Computer Vision (CV) can be applied to crime scene analysis (CSA) to help protect lives, facilitate justice, and deter crime, but an overview of the tasks that can be automated has been lacking. Here we follow a speculative prototyping approach: first, the STAIR tool is used to rapidly review the literature and identify tasks that seem to have received little attention, such as accessing crime scenes through a window, mapping/gathering evidence, and analyzing blood smears. Second, we present a prototype of a small drone that implements these three tasks with 75%, 85%, and 80% performance, to perform a minimal analysis of an indoor crime scene. Lessons learned are reported to guide future work.

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2025
Series
IEEE Workshop on Advanced Robotics and its Social Impacts. Conference Proceedings, ISSN 2162-7576
Keywords
Computer vision, Image analysis, Reviews, Prototypes, Stairs, Artificial intelligence, Blood, Drones
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-58459 (URN), 10.1109/ARSO64737.2025.11124976 (DOI), 979-8-3315-1101-2 (ISBN)
Conference
2025 IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO), Osaka, Japan, 17-19 July, 2025
Funder
Vinnova, 2022-00919
Available from: 2026-02-17. Created: 2026-02-17. Last updated: 2026-02-18. Bibliographically approved.
Ning, X., Jiang, L., Li, W., Yu, Z., Xie, J., Li, L., . . . Alonso-Fernandez, F. (2025). Swin-MGNet: Swin Transformer based Multi-view Grouping Network for 3D Object Recognition. IEEE Transactions on Artificial Intelligence, 6(3), 747-758
2025 (English). In: IEEE Transactions on Artificial Intelligence, ISSN 2691-4581, Vol. 6, no 3, p. 747-758. Article in journal (Refereed). Published.
Abstract [en]

Recent developments in the Swin Transformer have shown its great potential in various computer vision tasks, including image classification, semantic segmentation, and object detection. However, it is challenging to achieve the desired performance by directly employing the Swin Transformer in multi-view 3D object recognition, since the Swin Transformer independently extracts the characteristics of each view and relies heavily on a subsequent fusion strategy to unify the multi-view information. This leads to insufficient extraction of the interdependencies between the multi-view images. To this end, we propose an aggregation strategy integrated into the Swin Transformer to reinforce the connections between internal features across multiple views, thus leading to a complete interpretation of the isolated features extracted by the Swin Transformer. Specifically, we utilize the Swin Transformer to learn view-level feature representations from multi-view images and then calculate their view discrimination scores. The scores are employed to assign the view-level features to different groups. Finally, a grouping and fusion network is proposed to aggregate the features at the view and group levels. The experimental results indicate that our method attains state-of-the-art performance compared to prior approaches in multi-view 3D object recognition tasks. The source code is available at https://github.com/Qishaohua94/DEST.
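A hedged sketch of the grouping-and-fusion idea (the score function and the number of groups are assumptions, not the Swin-MGNet design): per-view features receive a discrimination score, are binned into groups, pooled within each group, and fused into an object-level descriptor by score-weighted summation:

```python
# Hedged sketch of score-based view grouping and fusion (the score function
# and group count are assumptions, not the Swin-MGNet implementation).
import torch

def group_and_fuse(view_feats: torch.Tensor, n_groups: int = 4) -> torch.Tensor:
    """view_feats: (V, D) features for V rendered views of one 3D object."""
    scores = torch.sigmoid(view_feats.mean(dim=1))            # crude per-view score
    bins = torch.clamp((scores * n_groups).long(), max=n_groups - 1)
    group_feats, group_weights = [], []
    for g in range(n_groups):
        mask = bins == g
        if mask.any():
            group_feats.append(view_feats[mask].mean(dim=0))  # within-group pooling
            group_weights.append(scores[mask].sum())
    group_feats = torch.stack(group_feats)                    # (G, D)
    weights = torch.stack(group_weights)
    weights = weights / weights.sum()
    return (weights.unsqueeze(1) * group_feats).sum(dim=0)    # object descriptor (D,)

feats = torch.randn(12, 768)            # e.g. 12 views, 768-dim view features
print(group_and_fuse(feats).shape)      # torch.Size([768])
```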

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2025
Keywords
3D Object Classification, 3D Object Retrieval, Feature Fusion, Grouping Mechanism, Multi-view learning, Swin Transformer
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:hh:diva-54973 (URN), 10.1109/TAI.2024.3492163 (DOI), 2-s2.0-85208686299 (Scopus ID)
Funder
Vinnova; Swedish Research Council
Note

This work is supported by the National Natural Science Foundation of China No. 62373343, Beijing Natural Science Foundation No. L233036, Swedish Research Council (VR) and the Swedish Innovation Agency (VINNOVA).

Available from: 2024-11-26. Created: 2024-11-26. Last updated: 2025-10-01. Bibliographically approved.
Alonso-Fernandez, F., Hernandez-Diaz, K., Tiwari, P. & Bigun, J. (2024). Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition. In: Proceedings - 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024: . Paper presented at 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, Rome, Italy, December 2-5, 2024 (pp. 1-6). IEEE
2024 (English). In: Proceedings - 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, IEEE, 2024, p. 1-6. Conference paper, Published paper (Refereed).
Abstract [en]

We apply pre-trained architectures, originally developed for the ImageNet Large Scale Visual Recognition Challenge, to periocular recognition. These architectures have demonstrated significant success in various computer vision tasks beyond the ones for which they were designed. This work builds on our previous study using off-the-shelf Convolutional Neural Networks (CNNs) and extends it to include the more recently proposed Vision Transformers (ViTs). Despite being trained for generic object classification, middle-layer features from CNNs and ViTs are a suitable way to recognize individuals based on periocular images. We also demonstrate that CNNs and ViTs are highly complementary, since their combination results in boosted accuracy. In addition, we show that a small portion of these pre-trained models can achieve good accuracy, resulting in thinner models with fewer parameters, suitable for resource-limited environments such as mobiles. This efficiency improves if traditional handcrafted features are added as well.
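A minimal sketch of extracting and combining off-the-shelf middle-layer features under assumptions (ImageNet-pretrained torchvision backbones and arbitrarily chosen intermediate layers; the paper's exact networks and layers may differ):

```python
# Minimal sketch under assumptions (ImageNet-pretrained torchvision backbones,
# arbitrarily chosen intermediate layers; the paper's exact networks/layers
# may differ): middle-layer CNN and ViT features concatenated off-the-shelf.
import torch
from torchvision.models import resnet50, vit_b_16, ResNet50_Weights, ViT_B_16_Weights

cnn = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()
vit = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1).eval()

feats = {}
cnn.layer3.register_forward_hook(lambda m, i, o: feats.update(cnn=o))
vit.encoder.layers[6].register_forward_hook(lambda m, i, o: feats.update(vit=o))

@torch.no_grad()
def describe(img: torch.Tensor) -> torch.Tensor:
    """img: (1, 3, 224, 224) normalized periocular crop -> 1D descriptor."""
    cnn(img); vit(img)                               # forward passes fill `feats`
    c = feats["cnn"].mean(dim=(2, 3)).flatten()      # global-average-pooled CNN map
    v = feats["vit"][:, 0].flatten()                 # ViT class-token embedding
    return torch.cat([c / c.norm(), v / v.norm()])   # L2-normalize, then concatenate

d = describe(torch.randn(1, 3, 224, 224))
print(d.shape)
```

Matching between two such descriptors could then be done with a simple distance measure, e.g. cosine distance.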

Place, publisher, year, edition, pages
IEEE, 2024
Series
IEEE International Workshop on Information Forensics and Security, ISSN 2157-4766, E-ISSN 2157-4774
Keywords
Periocular recognition, deep representation, biometrics, transfer learning, one-shot learning, Convolutional Neural Network, Vision Transformers
National Category
Signal Processing
Identifiers
urn:nbn:se:hh:diva-54713 (URN), 10.1109/WIFS61860.2024.10810712 (DOI), 001422478100039 (ISI), 2-s2.0-85215518296 (Scopus ID), 979-8-3503-6442-2 (ISBN), 979-8-3503-6443-9 (ISBN)
Conference
16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, Rome, Italy, December 2-5, 2024
Funder
Swedish Research Council; Vinnova
Available from: 2024-10-06. Created: 2024-10-06. Last updated: 2025-10-01. Bibliographically approved.
Projects
Bio-distance, Biometrics at a distance [2009-07215_VR]; Halmstad University
Facial detection and recognition resilient to physical image deformations [2012-04313_VR]; Halmstad University
Ocular biometrics in unconstrained sensing environments [2016-03497_VR]; Halmstad University; Publications
Hernandez-Diaz, K., Alonso-Fernandez, F. & Bigun, J. (2023). One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations. IEEE Access, 11, 100396-100413
Alonso-Fernandez, F., Hernandez-Diaz, K., Ramis, S., Perales, F. J. & Bigun, J. (2021). Facial Masks and Soft-Biometrics: Leveraging Face Recognition CNNs for Age and Gender Prediction on Mobile Ocular Images. IET Biometrics, 10(5), 562-580
Gonzalez-Sosa, E., Vera-Rodriguez, R., Fierrez, J., Alonso-Fernandez, F. & Patel, V. M. (2019). Exploring Body Texture From mmW Images for Person Recognition. IEEE Transactions on Biometrics, Behavior, and Identity Science, 1(2), 139-151
Continuous Multimodal Biometrics for Vehicles & Surveillance [2018-00472_Vinnova]; Halmstad University
Human identity and understanding of person behavior using smartphone devices [2018-04347_Vinnova]; Halmstad University
Facial Analysis in the Era of Mobile Devices and Face Masks [2021-05110_VR]; Halmstad University; Publications
Alonso-Fernandez, F., Hernandez-Diaz, K., Tiwari, P. & Bigun, J. (2024). Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition. In: Proceedings - 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024. Paper presented at 16th IEEE International Workshop on Information Forensics and Security, WIFS 2024, Rome, Italy, December 2-5, 2024 (pp. 1-6). IEEE
Alonso-Fernandez, F., Hernandez-Diaz, K., Buades Rubio, J. M. & Bigun, J. (2024). SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning. In: Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J. (Ed.), Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023. Paper presented at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR, Varadero, Cuba, September 27-29, 2023 (pp. 349-361). Cham: Springer, 14335
Busch, C., Deravi, F., Frings, D., Alonso-Fernandez, F. & Bigun, J. (2023). Facilitating free travel in the Schengen area—A position paper by the European Association for Biometrics. IET Biometrics, 12(2), 112-128
Zell, O., Påsson, J., Hernandez-Diaz, K., Alonso-Fernandez, F. & Nilsson, F. (2023). Image-Based Fire Detection in Industrial Environments with YOLOv4. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods ICPRAM. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 379-386). Setúbal: SciTePress, 1
Baaz, A., Yonan, Y., Hernandez-Diaz, K., Alonso-Fernandez, F. & Nilsson, F. (2023). Synthetic Data for Object Classification in Industrial Applications. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods ICPRAM. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 387-394). SciTePress, 1
Karlsson, J., Strand, F., Bigun, J., Alonso-Fernandez, F., Hernandez-Diaz, K. & Nilsson, F. (2023). Visual Detection of Personal Protective Equipment and Safety Gear on Industry Workers. In: Maria De Marsico; Gabriella Sanniti di Baja; Ana Fred (Ed.), Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods: February 22-24, 2023, in Lisbon, Portugal. Paper presented at 12th International Conference on Pattern Recognition Applications and Methods, ICPRAM, Lisbon, Portugal, February 22-24, 2023 (pp. 395-402). SciTePress, 1
Hedman, P., Skepetzis, V., Hernandez-Diaz, K., Bigun, J. & Alonso-Fernandez, F. (2022). On the effect of selfie beautification filters on face detection and recognition. Pattern Recognition Letters, 163, 104-111
DIFFUSE: Disentanglement of Features For Utilization in Systematic Evaluation [2021-05038_Vinnova]; RISE - Research Institutes of Sweden (2017-2019) (Closed down 2019-12-31)
Big Data-Powered End User Function Development (BIG FUN) [2021-05045_Vinnova]; Halmstad University; Publications
Luo, Y., Gkouskos, D., Russo, N. & Wang, M. (2024). Navigating from data-driven design to designing with ML: A case study of truck HMI system design. In: Proceedings of the Design Society: . Paper presented at 2024 International Design Society Conference (Design 2024), Cavtat, Dubrovnik, Croatia, 20-23 May, 2024 (pp. 2119-2128). Cambridge: Cambridge University Press, 4
AI-Powered Crime Scene Analysis [2022-00919_Vinnova]; Halmstad University; Publications
Cooney, M., Ponrajan, S. & Alonso-Fernandez, F. (2025). Nano Drone-based Indoor Crime Scene Analysis*. In: 2025 IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO): . Paper presented at 2025 IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO), Osaka, Japan, 17-19 July, 2025 (pp. 20-27). Piscataway, NJ: IEEE
Organisations
Identifiers
ORCID iD: orcid.org/0000-0002-1400-346X
