hh.se Publications
1 - 50 of 274
  • 1.
    Abbas, Taimoor
    et al.
    Lund Univ, Elect & Informat Technol Dept, S-22100 Lund, Sweden.
    Sjöberg, Katrin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Kåredal, Johan
    Lund Univ, Elect & Informat Technol Dept, S-22100 Lund, Sweden.
    Tufvesson, Fredrik
    Lund Univ, Elect & Informat Technol Dept, S-22100 Lund, Sweden.
    A Measurement Based Shadow Fading Model for Vehicle-to-Vehicle Network Simulations (2015). In: International Journal of Antennas and Propagation, ISSN 1687-5869, E-ISSN 1687-5877, article id 190607. Article in journal (Refereed)
    Abstract [en]

    The vehicle-to-vehicle (V2V) propagation channel has significant implications on the design and performance of novel communication protocols for vehicular ad hoc networks (VANETs). Extensive research efforts have been made to develop V2V channel models to be implemented in advanced VANET system simulators for performance evaluation. The impact of shadowing caused by other vehicles has, however, largely been neglected in most of the models, as well as in the system simulations. In this paper we present a shadow fading model targeting system simulations based on real measurements performed in urban and highway scenarios. The measurement data is separated into three categories, line-of-sight (LOS), obstructed line-of-sight (OLOS) by vehicles, and non-line-of-sight due to buildings, with the help of video information recorded during the measurements. It is observed that vehicles obstructing the LOS induce an additional average attenuation of about 10 dB in the received signal power. An approach to incorporate the LOS/OLOS model into existing VANET simulators is also provided. Finally, system level VANET simulation results are presented, showing the difference between the LOS/OLOS model and a channel model based on Nakagami-m fading.
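The LOS/OLOS distinction described in the abstract above can be illustrated with a minimal sketch. Only the roughly 10 dB average OLOS attenuation is taken from the abstract; the log-distance path-loss form, its exponent, reference loss, and the shadowing spread are illustrative assumptions, not the paper's fitted parameters.

```python
import math
import random

# Illustrative log-distance path loss with an extra shadowing state.
# Only the ~10 dB average OLOS attenuation comes from the abstract; the
# remaining constants below are hypothetical, not the measured model.
PL0_DB = 63.3         # hypothetical path loss at the 1 m reference distance
EXPONENT = 1.77       # hypothetical path-loss exponent
EXTRA_OLOS_DB = 10.0  # average extra attenuation when a vehicle blocks LOS

def received_power_db(tx_power_db, distance_m, state, sigma_db=3.0):
    """Received power (dB) under 'LOS' or 'OLOS', with log-normal shadowing."""
    path_loss = PL0_DB + 10.0 * EXPONENT * math.log10(distance_m)
    if state == "OLOS":
        path_loss += EXTRA_OLOS_DB
    shadowing = random.gauss(0.0, sigma_db)  # zero-mean log-normal shadowing term
    return tx_power_db - path_loss - shadowing
```

With shadowing disabled (sigma_db=0), the OLOS state simply receives 10 dB less power than LOS at the same distance, which is the behavior a VANET simulator would inherit from such a model.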

  • 2.
    Alabdallah, Abdallah
    Halmstad University, School of Information Technology.
    Machine Learning Survival Models: Performance and Explainability (2023). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Survival analysis is an essential field of statistics and machine learning, with critical applications such as medical research and predictive maintenance. In these domains, understanding models' predictions is paramount. While machine learning techniques are increasingly applied to enhance the predictive performance of survival models, they simultaneously sacrifice transparency and explainability.

    Survival models, in contrast to regular machine learning models, predict functions rather than the point estimates of regression and classification models. This makes it challenging to explain such models using off-the-shelf machine learning explanation techniques such as Shapley values and counterfactual examples.

    Censoring is another major issue in survival analysis: the target time variable is not fully observed for all subjects. Moreover, in predictive maintenance settings, recorded events do not always map to actual failures; some components may be replaced because they are considered faulty or about to fail, based on an expert's opinion. Censoring and noisy labels create modeling and evaluation problems that need to be addressed during the development and evaluation of survival models.

    Considering the challenges in survival modeling and the differences from regular machine learning models, this thesis aims to bridge this gap by facilitating the use of machine learning explanation methods to produce plausible and actionable explanations for survival models. It also aims to enhance survival modeling and evaluation, revealing better insight into the differences among the compared survival models.

    In this thesis, we propose two methods for explaining survival models, both of which rely on discovering survival patterns in the model's predictions that group the studied subjects into significantly different survival groups. Each pattern reflects a specific survival behavior common to all the subjects in the respective group. We utilize these patterns to explain the predictions of the studied model in two ways. In the first, we employ a classification proxy model that can capture the relationship between the descriptive features of subjects and the learned survival patterns. Explaining such a proxy model using Shapley values provides insight into the feature attribution of belonging to a specific survival pattern. In the second, we address the "what if?" question by generating plausible and actionable counterfactual examples that would change the predicted pattern of the studied subject. Such counterfactual examples provide insight into the actionable changes required to enhance the survivability of subjects.

    We also propose a variational-inference-based generative model for estimating the time-to-event distribution. The model relies on a regression-based loss function able to handle censored cases, and on sampling to estimate the conditional probability of event times. Moreover, we propose a decomposition of the C-index into a weighted harmonic average of two quantities: the concordance among the observed events, and the concordance between observed and censored cases. These two quantities, weighted by a factor representing the balance between them, can reveal differences between survival models that remain unseen when using only the total concordance index. This can give insight into the performance of different models and its relation to the characteristics of the studied data.

    Finally, as part of enhancing survival modeling, we propose an algorithm that can correct erroneous event labels in predictive maintenance time-to-event data. We adopt an expectation-maximization-like approach utilizing a genetic algorithm to find better labels that maximize the survival model's performance. Over iterations, the algorithm builds confidence about event assignments, which improves the search in subsequent iterations until convergence.

    We performed experiments on real and synthetic data, showing that our proposed methods enhance performance in survival modeling and can reveal the underlying factors contributing to the explainability of survival models' behavior and performance.

    Download full text (pdf)
    fulltext
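The C-index decomposition mentioned in the abstract above can be sketched roughly as follows. This is a simplified illustration, not the thesis's exact formulation: it only splits pairwise concordance counts into event-event pairs and event-censored pairs, from which a weighted (e.g. harmonic) combination could then be formed. All names here are hypothetical.

```python
def concordance_parts(times, events, risks):
    """Split pairwise concordance into two quantities: concordance among
    observed events, and concordance between events and censored cases.
    A pair (i, j) is comparable when subject i has an observed event and
    a strictly earlier time; it is concordant when the model assigns the
    earlier subject the higher risk."""
    ee = [0, 0]  # [concordant, comparable] among observed events
    ec = [0, 0]  # [concordant, comparable] between events and censored cases
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # only an observed event can anchor a comparable pair
        for j in range(n):
            if i == j or times[i] >= times[j]:
                continue
            bucket = ee if events[j] else ec
            bucket[1] += 1
            if risks[i] > risks[j]:
                bucket[0] += 1
    c_ee = ee[0] / ee[1] if ee[1] else float("nan")
    c_ec = ec[0] / ec[1] if ec[1] else float("nan")
    return c_ee, c_ec
```

Inspecting the two parts separately can distinguish models that share the same total concordance but rank event-event and event-censored pairs differently.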
  • 3.
    Albinsson, John
    et al.
    Lund Univ, Dept Biomed Engn, S-22100 Lund, Sweden.
    Brorsson, Sofia
    Halmstad University, School of Business, Engineering and Science, The Rydberg Laboratory for Applied Sciences (RLAS).
    Rydén Ahlgren, Åsa
    Lund Univ, Dept Clin Sci, Clin Physiol & Nucl Med Unit, Malmo, Sweden.
    Cinthio, Magnus
    Lund Univ, Dept Biomed Engn, S-22100 Lund, Sweden.
    Improved tracking performance of Lagrangian block-matching methodologies using block expansion in the time domain: In silico, phantom and in vivo evaluations (2014). In: Ultrasound in Medicine and Biology, ISSN 0301-5629, E-ISSN 1879-291X, Vol. 40, no 10, p. 2508-2520. Article in journal (Refereed)
    Abstract [en]

    The aim of this study was to evaluate tracking performance when an extra reference block is added to a basic block-matching method, where the two reference blocks originate from two consecutive ultrasound frames. The use of an extra reference block was evaluated for two putative benefits: (i) an increase in tracking performance while maintaining the size of the reference blocks, evaluated using in silico and phantom cine loops; (ii) a reduction in the size of the reference blocks while maintaining the tracking performance, evaluated using in vivo cine loops of the common carotid artery where the longitudinal movement of the wall was estimated. The results indicated that tracking accuracy improved (mean - 48%, p<0.005 [in silico]; mean - 43%, p<0.01 [phantom]), and there was a reduction in size of the reference blocks while maintaining tracking performance (mean - 19%, p<0.01 [in vivo]). This novel method will facilitate further exploration of the longitudinal movement of the arterial wall. (C) 2014 World Federation for Ultrasound in Medicine & Biology.
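The extra-reference-block idea evaluated in the abstract above can be sketched as a toy exhaustive block-matching search: the cost of a candidate position combines the sum of absolute differences (SAD) against two reference blocks taken from consecutive frames, instead of one. This is an illustrative reading of the scheme, not the paper's implementation; all names and parameters are hypothetical.

```python
def sad(a, b):
    """Sum of absolute differences between two equally sized 2D blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def track(ref_a, ref_b, frame, size):
    """Return the top-left (y, x) in `frame` minimizing the combined SAD
    against TWO reference blocks from consecutive frames."""
    h, w = len(frame), len(frame[0])

    def block(y, x):
        # Cut a size x size candidate block from the new frame.
        return [row[x:x + size] for row in frame[y:y + size]]

    candidates = [(y, x) for y in range(h - size + 1) for x in range(w - size + 1)]
    return min(candidates, key=lambda p: sad(ref_a, block(*p)) + sad(ref_b, block(*p)))
```

Averaging evidence from two reference frames is what lets the block size shrink (or accuracy improve) relative to single-reference matching, which is the trade-off the study quantifies.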

  • 4.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Barrachina, Javier
    Facephi Biometria, Alicante, Spain.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    SqueezeFacePoseNet: Lightweight Face Verification Across Different Poses for Mobile Platforms (2021). In: Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10-15, 2021, Proceedings, Part VIII / [ed] Alberto Del Bimbo, Rita Cucchiara, Stan Sclaroff, Giovanni Maria Farinella, Tao Mei, Marco Bertini, Hugo Jair Escalante, Roberto Vezzani, Berlin: Springer, 2021, p. 139-153. Conference paper (Refereed)
    Abstract [en]

    Ubiquitous and real-time person authentication has become critical after the breakthrough of all kinds of services provided via mobile devices. In this context, face technologies can provide reliable and robust user authentication, given the availability of cameras in these devices, as well as their widespread use in everyday applications. The rapid development of deep Convolutional Neural Networks (CNNs) has resulted in many accurate face verification architectures. However, their typical size (hundreds of megabytes) makes them infeasible to incorporate in downloadable mobile applications, where the entire file typically may not exceed 100 MB. Accordingly, we address the challenge of developing a lightweight face recognition network of just a few megabytes that can operate with sufficient accuracy in comparison to much larger models. The network should also be able to operate under different poses, given the variability naturally observed in uncontrolled environments where mobile devices are typically used. In this paper, we adapt the lightweight SqueezeNet model, of just 4.4 MB, to effectively provide cross-pose face recognition. After training on the MS-Celeb-1M and VGGFace2 databases, our model achieves an EER of 1.23% on the difficult frontal vs. profile comparison, and 0.54% on profile vs. profile images. Under less extreme variations involving frontal images in either of the enrolment/query image pair, the EER is pushed down to <0.3%, and the FRR at FAR=0.1% to less than 1%. This makes our light model suitable for face recognition where at least acquisition of the enrolment image can be controlled. At the cost of a slight degradation in performance, we also test an even lighter model (of just 2.5 MB) where regular convolutions are replaced with depth-wise separable convolutions. © 2021, Springer Nature Switzerland AG.

    Download full text (pdf)
    fulltext
  • 5.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    A survey on periocular biometrics research (2016). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 82, part 2, p. 92-105. Article in journal (Refereed)
    Abstract [en]

    Periocular refers to the facial region in the vicinity of the eye, including eyelids, lashes and eyebrows. While the face and iris have been extensively studied, the periocular region has emerged as a promising trait for unconstrained biometrics, following demands for increased robustness of face or iris systems. With a surprisingly high discrimination ability, this region can be easily obtained with existing setups for face and iris, and the requirement of user cooperation can be relaxed, thus facilitating interaction with biometric systems. It is also available over a wide range of distances, even when the iris texture cannot be reliably obtained (low resolution) or under partial face occlusion (close distances). Here, we review the state of the art in periocular biometrics research. A number of aspects are described, including: (i) existing databases, (ii) algorithms for periocular detection and/or segmentation, (iii) features employed for recognition, (iv) identification of the most discriminative regions of the periocular area, (v) comparison with the iris and face modalities, (vi) soft-biometrics (gender/ethnicity classification), and (vii) the impact of gender transformation and plastic surgery on recognition accuracy. This work is expected to provide insight into the most relevant issues in periocular biometrics, giving comprehensive coverage of the existing literature and current state of the art. © 2015 Elsevier B.V. All rights reserved.

  • 6.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    An Overview of Periocular Biometrics (2017). In: Iris and Periocular Biometric Recognition / [ed] Christian Rathgeb & Christoph Busch, London: The Institution of Engineering and Technology, 2017, p. 29-53. Chapter in book (Refereed)
    Abstract [en]

    Periocular biometrics refers specifically to the externally visible skin region of the face that surrounds the eye socket. Its utility is especially pronounced when the iris or the face cannot be properly acquired, it being the ocular modality requiring the least constrained acquisition process. It is available over a wide range of distances, even under partial face occlusion (close distance) or low-resolution iris (long distance), making it very suitable for unconstrained or uncooperative scenarios. It also avoids the need for iris segmentation, an issue in difficult images. In such situations, identifying a suspect where only the periocular region is visible is one of the toughest real-world challenges in biometrics. The richness of the periocular region in terms of identity is so high that the whole face can even be reconstructed from images of the periocular region alone. The technological shift to mobile devices has also resulted in many identity-sensitive applications becoming prevalent on these devices.

  • 7.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Best Regions for Periocular Recognition with NIR and Visible Images (2014). In: 2014 IEEE International Conference on Image Processing (ICIP), Piscataway, NJ: IEEE Press, 2014, p. 4987-4991. Conference paper (Refereed)
    Abstract [en]

    We evaluate the most useful regions for periocular recognition. For this purpose, we employ our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the spectrum. We use both NIR and visible iris images. The best regions are selected via Sequential Forward Floating Selection (SFFS). The iris neighborhood (including sclera and eyelashes) is found to be the best region with NIR data, while the surrounding skin texture (which is over-illuminated in NIR images) is the most discriminative region in the visible range. To the best of our knowledge, only one work in the literature has evaluated the influence of different regions on the performance of periocular recognition algorithms. Our results are in line with theirs, despite the use of completely different matchers. We also evaluate an iris texture matcher, providing fusion results with our periocular system as well. © 2014 IEEE.

    Download full text (pdf)
    fulltext
  • 8.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Biometric Recognition Using Periocular Images (2013). Conference paper (Other academic)
    Abstract [en]

    We present a new system for biometric recognition using periocular images, based on retinotopic sampling grids and Gabor analysis of the local power spectrum at different frequencies and orientations. A number of aspects are studied, including: 1) grid adaptation to the dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, and 4) rotation compensation between query and test images. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, allowing this step to be removed for computational efficiency. Furthermore, performance is not substantially affected if we use a grid of fixed dimensions, and it is even better in certain situations, avoiding the need for accurate detection of the iris region.

    Download full text (pdf)
    fulltext
  • 9.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology.
    Bigun, Josef
    Halmstad University, School of Information Technology.
    Continuous Examination by Automatic Quiz Assessment Using Spiral Codes and Image Processing (2022). In: 2022 IEEE Global Engineering Education Conference (EDUCON) / [ed] Ilhem Kallel; Habib M. Kammoun; Lobna Hsairi, IEEE, 2022, Vol. 2022-Marc, p. 929-935. Conference paper (Refereed)
    Abstract [en]

    We describe a technical solution implemented at Halmstad University to automatise the assessment and reporting of results of paper-based quiz exams. Paper quizzes are affordable and within reach of campus education in classrooms. Offering and taking them is accepted, as they cause fewer issues with reliability and democratic access; e.g. a large number of students can take them without a trusted mobile device, internet, or battery. By contrast, correction of the quiz is a considerable obstacle. We suggest mitigating the issue with a novel image processing technique using harmonic spirals that aligns answer sheets with sub-pixel accuracy to read student identity and answers, and to email results within minutes, all fully automatically. Using the described method, we carry out regular weekly examinations in two master courses at the mentioned centre without a significant workload increase. The employed solution also enables us to assign a unique identifier to each quiz (e.g. week 1, week 2...) while allowing us to have an individualised quiz for each student. © 2022 IEEE.

    Download full text (pdf)
    fulltext
  • 10.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Exploiting Periocular and RGB Information in Fake Iris Detection (2014). In: 2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO): 26-30 May 2014, Opatija, Croatia: Proceedings / [ed] Petar Biljanovic, Zeljko Butkovic, Karolj Skala, Stjepan Golubic, Marina Cicin-Sain, Vlado Sruk, Slobodan Ribaric, Stjepan Gros, Boris Vrdoljak, Mladen Mauher & Goran Cetusic, Rijeka: Croatian Society for Information and Communication Technology, Electronics and Microelectronics - MIPRO, 2014, p. 1354-1359. Conference paper (Refereed)
    Abstract [en]

    Fake iris detection has been studied by several researchers. However, to date, the experimental setup has been limited to near-infrared (NIR) sensors, which provide grey-scale images. This work makes use of images captured in the visible range with color (RGB) information. We employ Gray-Level Co-Occurrence textural features and SVM classifiers for the task of fake iris detection. The best features are selected with the Sequential Forward Floating Selection (SFFS) algorithm. To the best of our knowledge, this is the first work evaluating spoofing attacks using color iris images in the visible range. Our results demonstrate that the use of features from the three color channels clearly outperforms the accuracy obtained from the luminance (gray-scale) image. Also, the R channel is found to be the best individual channel. Lastly, we analyze the effect of extracting features from selected (eye or periocular) regions only. The best performance is obtained when GLCM features are extracted from the whole image, highlighting that both the iris and the surrounding periocular region are relevant for fake iris detection. An added advantage is that no accurate iris segmentation is needed. This work is relevant due to the increasing prevalence of more relaxed scenarios where iris acquisition using NIR light is unfeasible (e.g. distant acquisition or mobile devices), which puts high pressure on the development of algorithms capable of working with visible light. © 2014 MIPRO.

    Download full text (pdf)
    fulltext
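The gray-level co-occurrence (GLCM) features used in the abstract above can be illustrated with a minimal from-scratch sketch of one such statistic (contrast). This shows only the feature computation; the paper's full pipeline additionally uses SVM classifiers, SFFS feature selection, and per-channel RGB features, none of which are shown here. Names and parameters are illustrative.

```python
def glcm_contrast(img, dx=1, dy=0, levels=4):
    """Contrast feature from a gray-level co-occurrence matrix (GLCM).
    `img` is a 2D list of integer gray levels in [0, levels); (dx, dy)
    is the pixel-pair offset defining co-occurrence."""
    h, w = len(img), len(img[0])
    glcm = [[0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                glcm[img[y][x]][img[ny][nx]] += 1  # count the level pair
                total += 1
    # Contrast: (i - j)^2 weighted by the normalized co-occurrence frequency.
    return sum((i - j) ** 2 * glcm[i][j] / total
               for i in range(levels) for j in range(levels))
```

A perfectly uniform image yields zero contrast, while strong level jumps between neighboring pixels drive it up; a bank of such statistics, fed to a classifier, is the kind of feature vector the abstract describes.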
  • 11.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Eye Detection by Complex Filtering for Periocular Recognition (2014). In: 2nd International Workshop on Biometrics and Forensics (IWBF2014): Valletta, Malta (27-28th March 2014), Piscataway, NJ: IEEE Press, 2014, article id 6914250. Conference paper (Refereed)
    Abstract [en]

    We present a novel system to localize the eye position based on symmetry filters. By using a 2D separable filter tuned to detect circular symmetries, detection is done with a few 1D convolutions. The detected eye center is used as input to our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the local power spectrum. This setup is evaluated with two databases of iris data, one acquired with a close-up NIR camera, and another in visible light with a web-cam. The periocular system shows high resilience to inaccuracies in the position of the detected eye center. The density of the sampling grid can also be reduced without sacrificing too much accuracy, allowing additional computational savings. We also evaluate an iris texture matcher based on 1D Log-Gabor wavelets. Despite the poorer performance of the iris matcher with the webcam database, its fusion with the periocular system results in improved performance. © 2014 IEEE.

    Download full text (pdf)
    fulltext
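The computational benefit of a separable 2D filter, as exploited in the abstract above, comes from replacing one 2D convolution with two passes of 1D convolutions. A generic sketch of this idea (not the paper's complex symmetry filter; kernels and names are illustrative):

```python
def conv1d(signal, kernel):
    """Valid-mode 1D correlation of a list with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def separable_conv2d(img, col_kernel, row_kernel):
    """Apply a separable 2D filter as two 1D passes: along rows, then
    along columns. For an n x n separable kernel this costs O(2n) per
    pixel instead of O(n^2)."""
    rows = [conv1d(r, row_kernel) for r in img]        # horizontal pass
    cols = list(zip(*rows))                            # transpose
    out_cols = [conv1d(list(c), col_kernel) for c in cols]  # vertical pass
    return [list(r) for r in zip(*out_cols)]           # transpose back
```

For a 2x2 box kernel the two-pass result matches the direct 2D convolution, which is the property that makes separable detection filters cheap.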
  • 12.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Fake Iris Detection: A Comparison Between Near-Infrared and Visible Images (2014). In: Proceedings: 10th International Conference on Signal-Image Technology and Internet-Based Systems, SITIS 2014 / [ed] Kokou Yetongnon, Albert Dipanda & Richard Chbeir, Piscataway, NJ: IEEE Computer Society, 2014, p. 546-553. Conference paper (Refereed)
    Abstract [en]

    Fake iris detection has so far been studied using near-infrared (NIR) sensors, which provide grey-scale images, i.e. with luminance information only. Here, we incorporate into the analysis images captured in the visible range, with color information, and perform comparative experiments between the two types of data. We employ Gray-Level Co-occurrence textural features and SVM classifiers. These features analyze various image properties related to contrast, pixel regularity, and pixel co-occurrence statistics. We select the best features with the Sequential Forward Floating Selection (SFFS) algorithm. We also study the effect of extracting features from selected (eye or periocular) regions only. Our experiments are done with fake samples obtained from printed images, which are then presented to the same sensor as the real ones. Results show that fake images captured in the NIR range are easier to detect than visible images (even if we downsample the NIR images to equate the average size of the iris region between the two databases). We also observe that the best performance with both sensors is obtained with features extracted from the whole image, showing that not only the eye region but also the surrounding periocular texture is relevant for fake iris detection. An additional source of improvement with the visible sensor comes from the use of the three RGB channels, in comparison with the luminance image only. A further analysis also reveals that some features are better suited to one particular sensor than to the others. © 2014 IEEE

    Download full text (pdf)
    fulltext
  • 13.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Halmstad University submission to the First ICB Competition on Iris Recognition (ICIR2013) (2013). Other (Other academic)
    Download full text (pdf)
    fulltext
  • 14.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Iris Boundaries Segmentation Using the Generalized Structure Tensor: A Study on the Effects of Image Degradation (2012). In: Biometrics: Theory, Applications and Systems (BTAS), 2012 IEEE Fifth International Conference on, Piscataway, N.J.: IEEE Press, 2012, p. 426-431, article id 6374610. Conference paper (Refereed)
    Abstract [en]

    We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST), which also includes an eyelid detection step. It is compared with traditional segmentation systems based on the Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database. Segmentation performance under different degrees of image defocus and motion blur is also evaluated. The reported results show the effectiveness of the proposed algorithm, with performance similar to the others in pupil detection, and clearly better performance for sclera detection at all levels of degradation. Verification results using 1D Log-Gabor wavelets are also given, showing the benefits of the eyelid removal step. These results point out the validity of the GST as an alternative to other iris segmentation systems. © 2012 IEEE.

    Download full text (pdf)
    2012_BTAS_IrisGST_Quality_Alonso
  • 15.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Iris Segmentation Using the Generalized Structure Tensor (2012). Conference paper (Other academic)
    Abstract [en]

    We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST). We compare this approach with traditional iris segmentation systems based on Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database with respect to a segmentation made manually by a human expert. The proposed algorithm outperforms the baseline approaches, pointing out the validity of the GST as an alternative to classic iris segmentation systems. We also detect the cross positions between the eyelids and the outer iris boundary. Verification results using a publicly available iris recognition system based on 1D Log-Gabor wavelets are also given, showing the benefits of the eyelids removal step.

    Download full text (pdf)
    fulltext
  • 16.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Near-infrared and visible-light periocular recognition with Gabor features using frequency-adaptive automatic eye detection2015In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 4, no 2, p. 74-89Article in journal (Refereed)
    Abstract [en]

    Periocular recognition has gained attention recently due to demands for increased robustness of face or iris recognition in less controlled scenarios. We present a new system for eye detection based on complex symmetry filters, which has the advantage of not needing training. Also, the separability of the filters allows faster detection via one-dimensional convolutions. This system is used as input to a periocular algorithm based on retinotopic sampling grids and Gabor spectrum decomposition. The evaluation framework is composed of six databases acquired with both near-infrared and visible sensors. The experimental setup is complemented with four iris matchers, used for fusion experiments. The eye detection system presented shows very high accuracy with near-infrared data, and reasonably good accuracy with one visible database. Regarding the periocular system, it exhibits great robustness to small errors in locating the eye centre, as well as to scale changes of the input image. The density of the sampling grid can also be reduced without sacrificing accuracy. Lastly, despite the poorer performance of the iris matchers with visible data, fusion with the periocular system can provide an improvement of more than 20%. The six databases used have been manually annotated, with the annotation made publicly available. © The Institution of Engineering and Technology 2015.

  • 17.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Periocular Recognition Using Retinotopic Sampling and Gabor Decomposition2012In: Computer Vision – ECCV 2012: Workshops and demonstrations : Florence, Italy, October 7-13, 2012, Proceedings. Part II / [ed] Fusiello, Andrea; Murino, Vittorio; Cucchiara, Rita, Berlin: Springer, 2012, Vol. 7584, p. 309-318Conference paper (Refereed)
    Abstract [en]

    We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum. A number of aspects are studied, including: 1) grid adaptation to the dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, 4) rotation compensation between query and test images, and 5) comparison with an iris machine expert. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, thus allowing this step to be removed for computational efficiency. Also, performance is not substantially affected if we use a grid of fixed dimensions (it is even better in certain situations), avoiding the need for accurate detection of the iris region. © 2012 Springer-Verlag.
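    A minimal sketch of the two ingredients named in the abstract — a retinotopic sampling grid and Gabor decomposition of the local power spectrum — might look like this (grid geometry and filter parameters are illustrative assumptions):

    ```python
    import numpy as np

    def retinotopic_grid(cx, cy, n_rings=4, n_points=12, r0=4.0, growth=1.8):
        """Retinotopic sampling grid: concentric rings of sample points
        around the eye centre, with ring radius growing geometrically,
        i.e. denser sampling near the fixation point."""
        pts = [(cx, cy)]
        for k in range(n_rings):
            r = r0 * growth ** k
            for a in np.linspace(0, 2 * np.pi, n_points, endpoint=False):
                pts.append((cx + r * np.cos(a), cy + r * np.sin(a)))
        return np.array(pts)

    def gabor_kernel(size=9, freq=0.25, theta=0.0, sigma=2.5):
        """Complex Gabor kernel; sampling its response magnitude at each
        grid point gives one local power-spectrum coefficient."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.exp(2j * np.pi * freq * xr)
    ```

    A descriptor is then formed by taking, at every grid point, the response magnitudes of a small bank of such kernels over several frequencies and orientations.
    
    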

  • 18.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Periocular Biometrics: Databases, Algorithms and Directions2016Conference paper (Other academic)
    Abstract [en]

    Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing an insight into the most relevant issues and giving thorough coverage of the existing literature. Future research trends are also briefly discussed.

  • 19.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Quality Factors Affecting Iris Segmentation and Matching2013In: Proceedings – 2013 International Conference on Biometrics, ICB 2013 / [ed] Julian Fierrez, Ajay Kumar, Mayank Vatsa, Raymond Veldhuis & Javier Ortega-Garcia, Piscataway, N.J.: IEEE conference proceedings, 2013, article id 6613016Conference paper (Refereed)
    Abstract [en]

    Image degradations can affect the different processing steps of iris recognition systems. Although several quality factors have been proposed for iris images, their specific effect on segmentation accuracy is often overlooked, with most efforts focused on their impact on recognition accuracy. Accordingly, we evaluate the impact of 8 quality measures on the performance of iris segmentation. We use a database acquired with a close-up iris sensor and a built-in quality checking process. Despite the latter, we report differences in behavior, with some measures clearly predicting segmentation performance while others give inconclusive results. Recognition experiments with two matchers also show that segmentation and matching performance are not necessarily affected by the same factors. The resilience of one matcher to segmentation inaccuracies also suggests that segmentation errors due to low image quality are not necessarily revealed by the matcher, pointing out the importance of a separate evaluation of segmentation accuracy. © 2013 IEEE.

  • 20.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Englund, Cristofer
    RISE Viktoria, Gothenburg, Sweden.
    Expression Recognition Using the Periocular Region: A Feasibility Study2018In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir, Los Alamitos: IEEE, 2018, p. 536-541Conference paper (Refereed)
    Abstract [en]

    This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this by analyzing the whole face. Periocular is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, thus making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data, including spatio-temporal information that is often not available.

  • 21.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology.
    Bigun, Josef
    Halmstad University, School of Information Technology.
    Fierrez, Julian
    Universidad Autónoma de Madrid, Madrid, Spain.
    Damer, Naser
    Fraunhofer Institute for Computer Graphics Research, Darmstadt, Germany.
    Proenca, Hugo
    University of Beira Interior, Covilhã, Portugal.
    Ross, Arun
    Michigan State University, East Lansing, United States.
    Periocular Biometrics: A Modality for Unconstrained Scenarios2024In: Computer, ISSN 0018-9162, E-ISSN 1558-0814, Vol. 57, no 6, p. 40-49Article in journal (Refereed)
    Abstract [en]

    This article discusses the state of the art in periocular biometrics, presenting an overall framework encompassing the field's most significant research aspects, which include ocular definition, acquisition, and detection; identity recognition; and ocular soft-biometric analysis. © 1970-2012 IEEE.

  • 22.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fairhurst, M.
    University of Kent, UK.
    Fierrez, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Automatic Measures for Predicting Performance in Off-line Signature2007Conference paper (Refereed)
    Abstract [en]

    Performance in terms of accuracy is one of the most important goals of a biometric system. Hence, having a measure that is able to predict performance with respect to a particular sample of interest is especially useful, and can be exploited in a number of ways. In this paper, we present two automatic measures for predicting performance in off-line signature verification. Results obtained on a sub-corpus of the MCYT signature database confirm a relationship between the proposed measures and system error rates measured in terms of Equal Error Rate (EER), False Acceptance Rate (FAR) and False Rejection Rate (FRR). © 2007 IEEE.

  • 23.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fairhurst, M.
    University of Kent, UK.
    Fierrez, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Impact of signature legibility and signature type in off-line signature verification2007In: Biometrics Symposium, 2007: [Baltimore, Maryland]: 11-13 Sept. 2007, Piscataway, N.J.: IEEE Press, 2007, Vol. 1, p. 1-6Conference paper (Refereed)
    Abstract [en]

    The performance of two popular approaches for off-line signature verification is studied in terms of signature legibility and signature type. We investigate experimentally whether the knowledge of letters, syllables or name instances can help in the process of imitating a signature. Experimental results are given on a sub-corpus of the MCYT signature database for random and skilled forgeries. We use two machine experts for our experiments: one based on global image analysis and statistical distance measures, and a second based on local image analysis and Hidden Markov Models. Verification results are reported in terms of Equal Error Rate (EER), False Acceptance Rate (FAR) and False Rejection Rate (FRR). ©2007 IEEE.

  • 24.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Eigen-patch iris super-resolution for iris recognition improvement2015In: 2015 23rd European Signal Processing Conference (EUSIPCO), Piscataway, NJ: IEEE Press, 2015, p. 76-80, article id 7362348Conference paper (Refereed)
    Abstract [en]

    Low image resolution will be a predominant factor in iris recognition systems as they evolve towards more relaxed acquisition conditions. Here, we propose a super-resolution technique to enhance iris images based on Principal Component Analysis (PCA) Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of the enhanced images by preserving local information and reducing artifacts. We validate the system using a database of 1,872 near-infrared iris images. Results show the superiority of the presented approach over bilinear or bicubic interpolation, with the eigen-patch method being more resilient to image resolution reduction. We also perform recognition experiments with an iris matcher based on 1D Log-Gabor wavelets, demonstrating that verification rates degrade more rapidly with bilinear or bicubic interpolation. ©2015 IEEE
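    The patch-wise eigen-reconstruction principle — express a low-resolution patch as a combination of training patches, then transfer the same weights to their high-resolution counterparts — can be sketched as follows (a ridge-regularised least-squares stand-in for the PCA eigen-transformation; names and parameters are illustrative):

    ```python
    import numpy as np

    def eigen_patch_sr(lr_patch, lr_train, hr_train, eps=1e-6):
        """Reconstruct one high-resolution patch: express the input as a
        linear combination of the low-resolution training patches
        (regularised least squares over the centred dictionary), then
        apply the same weights to their high-resolution counterparts."""
        lr_mean, hr_mean = lr_train.mean(axis=0), hr_train.mean(axis=0)
        X = lr_train - lr_mean                 # centred LR dictionary, (N, d_lr)
        x = lr_patch - lr_mean
        w = np.linalg.solve(X @ X.T + eps * np.eye(len(X)), X @ x)
        return hr_mean + w @ (hr_train - hr_mean)
    ```

    Running this independently for every overlapped patch position, each with its own position-dependent dictionary, preserves the local detail that a single global transform would smooth away.
    
    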

  • 25.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Improving Very Low-Resolution Iris Identification Via Super-Resolution Reconstruction of Local Patches2017In: 2017 International Conference of the Biometrics Special Interest Group (BIOSIG) / [ed] Arslan Brömme, Christoph Busch, Antitza Dantcheva, Christian Rathgeb & Andreas Uhl, Bonn: Gesellschaft für Informatik, 2017, Vol. P-270, article id 8053512Conference paper (Refereed)
    Abstract [en]

    Relaxed acquisition conditions in iris recognition systems have significant effects on the quality and resolution of acquired images, which can severely affect performance if not addressed properly. Here, we evaluate two trained super-resolution algorithms in the context of iris identification. They are based on reconstruction of local image patches, where each patch is reconstructed separately using its own optimal reconstruction function. We employ a database of 1,872 near-infrared iris images (with 163 different identities for identification experiments) and three iris comparators. The trained approaches are substantially superior to bilinear or bicubic interpolations, with one of the comparators providing a Rank-1 performance of ∼88% with images of only 15×15 pixels, and an identification rate of 95% with a hit list size of only 8 identities. © 2017 Gesellschaft fuer Informatik.

  • 26.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Iris Super-Resolution Using Iterative Neighbor Embedding2017In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops / [ed] Lisa O’Conner, Los Alamitos: IEEE Computer Society, 2017, p. 655-663Conference paper (Refereed)
    Abstract [en]

    Iris recognition research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this paper, we evaluate a super-resolution algorithm used to reconstruct iris images based on iterative neighbor embedding of local image patches, which tries to represent input low-resolution patches while preserving the geometry of the original high-resolution space. To this end, the geometries of the low- and high-resolution manifolds are jointly considered during the reconstruction process. We validate the system with a database of 1,872 near-infrared iris images, while fusion of two iris comparators has been adopted to improve recognition performance. The presented approach is substantially superior to bilinear/bicubic interpolation at very low resolutions, and it also outperforms a previous PCA-based iris reconstruction approach which only considers the geometry of the low-resolution manifold during the reconstruction process. © 2017 IEEE
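    The neighbour-embedding step at the heart of such methods can be sketched as a single pass (the paper's scheme iterates and refines this; all choices below are illustrative):

    ```python
    import numpy as np

    def neighbor_embedding_sr(lr_patch, lr_train, hr_train, k=3):
        """One neighbour-embedding step: find the k nearest low-resolution
        training patches, solve for LLE-style reconstruction weights that
        sum to one, and transfer those weights to the high-resolution
        counterparts of the selected neighbours."""
        dists = np.linalg.norm(lr_train - lr_patch, axis=1)
        idx = np.argsort(dists)[:k]
        N = lr_train[idx] - lr_patch               # local difference vectors
        G = N @ N.T + 1e-6 * np.eye(k)             # regularised local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        w /= w.sum()                               # enforce sum-to-one constraint
        return w @ hr_train[idx]
    ```

    The sum-to-one weights capture the local geometry of the low-resolution manifold; the iterative variant additionally re-selects neighbours in the high-resolution space.
    
    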

  • 27.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Reconstruction of Smartphone Images for Low Resolution Iris Recognition2015In: 2015 IEEE International Workshop on Information Forensics and Security (WIFS), Piscataway, NJ: IEEE Press, 2015, article id 7368600Conference paper (Refereed)
    Abstract [en]

    As iris systems evolve towards a more relaxed acquisition, low image resolution will be a predominant issue. In this paper we evaluate a super-resolution method to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of the enhanced images by preserving local information. We employ a database of 560 images captured in the visible spectrum with two smartphones. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions. We also carry out recognition experiments with six iris matchers, showing that better performance can be obtained at low resolutions with the proposed eigen-patch reconstruction, with fusion of only two systems pushing the EER to below 5-8% for down-sampling factors down to a size of only 13×13. © 2015 IEEE.

  • 28.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Gonzalez-Sosa, Ester
    Nokia Bell-Labs, Madrid, Spain.
    A Survey of Super-Resolution in Iris Biometrics with Evaluation of Dictionary-Learning2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 6519-6544Article in journal (Refereed)
    Abstract [en]

    The lack of resolution has a negative impact on the performance of image-based biometrics. While many generic super-resolution methods have been proposed to restore low-resolution images, they usually aim to enhance their visual appearance. However, an overall visual enhancement of biometric images does not necessarily correlate with better recognition performance. Reconstruction approaches thus need to incorporate specific information from the target biometric modality to effectively improve recognition performance. This paper presents a comprehensive survey of iris super-resolution approaches proposed in the literature. We have also adapted an Eigen-patches reconstruction method based on PCA Eigen-transformation of local image patches. The structure of the iris is exploited by building a patch-position-dependent dictionary. In addition, image patches are restored separately, having their own reconstruction weights. This allows the solution to be locally optimized, helping to preserve local information. To evaluate the algorithm, we degraded high-resolution images from the CASIA Interval V3 database. Different restorations were considered, with 15 × 15 pixels being the smallest resolution evaluated. To the best of our knowledge, this is among the smallest resolutions employed in the literature. The experimental framework is complemented with six publicly available iris comparators, which were used to carry out biometric verification and identification experiments. Experimental results show that the proposed method significantly outperforms both bilinear and bicubic interpolation at very low resolutions. A number of comparators attain an impressive Equal Error Rate as low as 5% and a Top-1 accuracy of 77-84% when considering iris images of only 15 × 15 pixels. These results clearly demonstrate the benefit of using trained super-resolution techniques to improve the quality of iris images prior to matching.

  • 29.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Super-Resolution for Selfie Biometrics: Introduction and Application to Face and Iris2019In: Selfie Biometrics: Advances and Challenges / [ed] Ajita Rattani, Reza Derakhshani & Arun A. Ross, Cham: Springer, 2019, 1, p. 105-128Chapter in book (Refereed)
    Abstract [en]

    Biometric research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this chapter, we give an overview of recent research in super-resolution reconstruction applied to biometrics, with a focus on face and iris images in the visible spectrum, two prevalent modalities in selfie biometrics. After an introduction to the generic topic of super-resolution, we investigate methods adapted to cater for the particularities of these two modalities. By experiments, we show the benefits of incorporating super-resolution to improve the quality of biometric images prior to recognition. © Springer Nature AG 2019

  • 30.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Learning-Based Local-Patch Resolution Reconstruction of Iris Smart-phone Images2017Conference paper (Refereed)
    Abstract [en]

    Application of ocular biometrics in mobile and at-a-distance environments still has several open challenges, with the lack of quality and resolution being an evident issue that can severely affect performance. In this paper, we evaluate two trained image reconstruction algorithms in the context of smartphone biometrics. They are based on the use of coupled dictionaries to learn the mapping relations between low- and high-resolution images. In addition, reconstruction is made in local overlapped image patches, where up-scaling functions are modelled separately for each patch, allowing local details to be better preserved. The experimental setup is complemented with a database of 560 images captured with two different smartphones, and two iris comparators employed for verification experiments. We show that the trained approaches are substantially superior to bilinear or bicubic interpolation at very low resolutions (images of 13×13 pixels). Under such challenging conditions, an EER of ∼7% can be achieved using individual comparators, which is further pushed down to 4-6% after fusion of the two systems. © 2017 IEEE

  • 31.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Very Low-Resolution Iris Recognition Via Eigen-Patch Super-Resolution and Matcher Fusion2016In: 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Piscataway: IEEE, 2016, article id 7791208Conference paper (Refereed)
    Abstract [en]

    Current research in iris recognition is moving towards enabling more relaxed acquisition conditions. This has effects on the quality of acquired images, with low resolution being a predominant issue. Here, we evaluate a super-resolution algorithm used to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of the enhanced images by preserving local information. Contrast enhancement is used to improve the reconstruction quality, while matcher fusion has been adopted to improve iris recognition performance. We validate the system using a database of 1,872 near-infrared iris images. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions, and the fusion of the two systems pushes the EER to below 5% for down-sampling factors down to an image size of only 13×13.

  • 32.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fingerprint Databases and Evaluation2008In: Encyclopedia of Biometrics / [ed] Li, Stan Z., New York: Springer-Verlag New York, 2008, 1, p. 452-458Chapter in book (Other academic)
  • 33.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez, J.
    Universidad Autonoma de Madrid, Spain.
    Gilperez, A.
    Universidad Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Universidad Autonoma de Madrid, Spain.
    Impact of time variability in off-line writer identification and verification2009In: ISPA 2009: Proceedings of the 6th International Symposium on Image and Signal Processing and Analysis, Piscataway, N.J.: IEEE Press, 2009, p. 540-545Conference paper (Refereed)
    Abstract [en]

    One of the biggest challenges in person recognition using biometric systems is the variability in the acquired data. In this paper, we evaluate the effects of an increasing time lapse between reference and test biometric data consisting of static images of handwritten signatures and texts. We use for our experiments two recognition approaches exploiting information at the global and local levels, and the BiosecurID database, containing 3,724 signature images and 532 texts of 133 individuals acquired in four acquisition sessions distributed along a 4-month time span. We report results of the recognition systems working both in verification (one-to-one) and identification (one-to-many) mode. The results show the extent of the impact that the time separation between the samples under comparison has on recognition rates, with the local approach being more robust to the time lapse than the global one. We also observe in our experiments that recognition based on handwritten texts provides higher accuracy than recognition based on signatures.

  • 34.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez, J.
    Universidad Autonoma de Madrid, Spain.
    Martinez-Diaz, M.
    Universidad Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Universidad Autonoma de Madrid, Spain.
    Fusion of static image and dynamic information for signature verification2009In: ICIP 2009: 2009 IEEE International Conference on Image Processing : proceedings, November 7-12, 2009, Cairo, Egypt, Piscataway, N.J.: IEEE Press, 2009, p. 2725-2728Conference paper (Refereed)
    Abstract [en]

    This paper evaluates the combination of static image (off-line) and dynamic information (on-line) for signature verification. Two off-line and two on-line recognition approaches exploiting information at the global and local levels are used. Experimental results are given using the BiosecurID database (130 signers, 3,640 signatures). Fusion experiments are done using a trained fusion approach based on linear logistic regression. It is shown experimentally that the local systems outperform the global ones, both in the on-line and in the off-line case. We also observe a considerable improvement when combining the two on-line systems, which is not the case with the off-line systems. The best performance is obtained when fusing all the systems together, which is especially evident for skilled forgeries when enough training data is available. ©2009 IEEE.
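    The trained fusion approach named here, linear logistic regression over matcher scores, can be sketched as follows (plain gradient descent on the logistic loss; an illustrative sketch, not the authors' code):

    ```python
    import numpy as np

    def llr_fusion_train(S, y, iters=3000, lr=0.5):
        """Linear logistic regression fusion: learn a bias and one weight
        per matcher so that the fused score w0 + w.s behaves like a
        calibrated log-likelihood ratio. S: (n, k) score matrix,
        y: 0/1 labels (impostor/genuine)."""
        Sb = np.hstack([np.ones((len(S), 1)), S])   # prepend bias column
        w = np.zeros(Sb.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-Sb @ w))       # sigmoid of fused score
            w -= lr * Sb.T @ (p - y) / len(y)       # logistic-loss gradient step
        return w

    def llr_fuse(w, scores):
        """Fused score for one comparison given its k matcher scores."""
        return w[0] + w[1:] @ np.asarray(scores)
    ```

    After training on development scores, the fused score behaves like a log-likelihood ratio, so a single application-independent threshold at 0 can be used.
    
    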

  • 35.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez, J.
    Universidad Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Universidad Autonoma de Madrid, Spain.
    An enhanced Gabor filter-based segmentation algorithm for fingerprint recognition systems2005Conference paper (Refereed)
    Abstract [en]

    An important step in fingerprint recognition is the segmentation of the region of interest. In this paper, we present an enhanced approach for fingerprint segmentation based on the response of eight oriented Gabor filters. The performance of the algorithm has been evaluated in terms of decision error trade-off curves of an overall verification system. Experimental results demonstrate the robustness of the proposed method.
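    The segmentation idea — foreground blocks respond selectively to the Gabor filter matching the local ridge orientation, while background blocks respond uniformly — can be sketched as follows (filter and block parameters are assumptions, not the paper's values):

    ```python
    import numpy as np

    def gabor_bank(size=15, freq=0.12, n_orient=8, sigma=4.0):
        """Bank of eight complex Gabor filters, one per orientation."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
        bank = []
        for k in range(n_orient):
            th = k * np.pi / n_orient
            xr = x * np.cos(th) + y * np.sin(th)
            bank.append(env * np.exp(2j * np.pi * freq * xr))
        return bank

    def block_gabor_std(img, bank, block=16):
        """Per-block standard deviation of the oriented response
        magnitudes: ridge blocks answer strongly to the filter matching
        the local orientation and weakly to the rest (high std), while
        smooth background answers uniformly (low std)."""
        h, w = img.shape
        half = bank[0].shape[0] // 2
        out = np.zeros((h // block, w // block))
        for i in range(h // block):
            for j in range(w // block):
                cy, cx = i * block + block // 2, j * block + block // 2
                patch = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
                if patch.shape != bank[0].shape:
                    continue  # skip blocks whose window falls off the image
                out[i, j] = np.std([abs((patch * g).sum()) for g in bank])
        return out
    ```

    Thresholding the per-block standard deviation then yields the region-of-interest mask.
    
    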

  • 36.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Ramos, D.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Dealing With Sensor Interoperability in Multi-biometrics: The UPM Experience at the Biosecure Multimodal Evaluation 20072008In: Biometric Technology for Human Identification, Bellingham, WA: SPIE - International Society for Optical Engineering, 2008, Vol. 6944, p. J9440-J9440Conference paper (Refereed)
    Abstract [en]

    Multimodal biometric systems make it possible to overcome some of the problems present in unimodal systems, such as non-universality, lack of distinctiveness of the unimodal trait, noise in the acquired data, etc. Integration at the matching score level is the most common approach used due to the ease of combining the scores generated by different unimodal systems. Unfortunately, scores usually lie in application-dependent domains. In this work, we use linear logistic regression fusion, in which fused scores tend to be calibrated log-likelihood-ratios and are thus independent of the application. We use for our experiments the development set of scores of the DS2 Evaluation (Access Control Scenario) of the BioSecure Multimodal Evaluation Campaign, whose objective is to compare the performance of fusion algorithms when query biometric signals are originated from heterogeneous biometric devices. We compare a fusion scheme that uses linear logistic regression with a set of simple fusion rules. It is observed that the proposed fusion scheme outperforms all the simple fusion rules, with the additional advantage of the application-independent nature of the resulting fused scores.

    Download full text (pdf)
    fulltext
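
    The linear logistic regression fusion described above can be illustrated with a minimal sketch. This is plain Python, not the UPM submission's code; the synthetic scores and simple gradient-descent fit are assumptions of the illustration. The fitted linear combination of matcher scores behaves as a calibrated log-odds (log-likelihood-ratio-like) fused score:

    ```python
    import math

    def fuse_train(scores_a, scores_b, labels, lr=0.5, epochs=2000):
        """Fit w0 + w1*sA + w2*sB by logistic regression via batch
        gradient descent. labels: 1 = genuine pair, 0 = impostor pair."""
        w = [0.0, 0.0, 0.0]
        n = len(labels)
        for _ in range(epochs):
            g = [0.0, 0.0, 0.0]
            for sa, sb, y in zip(scores_a, scores_b, labels):
                z = w[0] + w[1] * sa + w[2] * sb
                p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
                err = p - y
                g[0] += err
                g[1] += err * sa
                g[2] += err * sb
            for i in range(3):
                w[i] -= lr * g[i] / n
        return w

    def fused_llr(w, sa, sb):
        # Linear combination in the log-odds domain: an
        # application-independent fused score
        return w[0] + w[1] * sa + w[2] * sb
    ```

    After training on development scores, genuine pairs should receive higher fused log-odds than impostor pairs, regardless of the original per-matcher score ranges.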
  • 37.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Fingerprint Databases and Evaluation2015In: Encyclopedia of Biometrics / [ed] Stan Z. Li & Anil K. Jain, New York: Springer Science+Business Media B.V., 2015, 2, p. 599-606Chapter in book (Refereed)
    Abstract [en]

    Synonyms

    Fingerprint benchmark; Fingerprint corpora; Fingerprint dataset

    Definition

    Fingerprint databases are structured collections of fingerprint data mainly used for either evaluation or operational recognition purposes.

    Fingerprint data in databases for evaluation are usually detached from the identity of corresponding individuals. These databases are publicly available for research purposes, and they usually consist of raw fingerprint images acquired with live-scan sensors or digitized from inked fingerprint impressions on paper. Databases for evaluation are the basis for research in automatic fingerprint recognition, and together with specific experimental protocols, they are the basis for a number of technology evaluations and benchmarks. This is the type of fingerprint database covered further here.

    On the other hand, fingerprint databases for operational recognition are typically proprietary, they usually incorporate personal information about the enrolled people together with the fingerprint data, and they can incorporate either raw fingerprint image data or some form of distinctive fingerprint descriptors such as minutiae templates. These fingerprint databases represent one of the modules in operational automated fingerprint recognition systems, and they will not be addressed here.

    Download full text (pdf)
    fulltext
  • 38.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Quality Measures in Biometric Systems2015In: Encyclopedia of Biometrics / [ed] Stan Z. Li & Anil K. Jain, New York: Springer Science+Business Media B.V., 2015, 2, p. 1287-1297Chapter in book (Refereed)
    Abstract [en]

    Synonyms

    Quality assessment; Biometric quality; Quality-based processing

    Definition

    Since the establishment of biometrics as a specific research area in the late 1990s, the biometric community has focused its efforts on the development of accurate recognition algorithms [1]. Nowadays, biometric recognition is a mature technology that is used in many applications, offering greater security and convenience than traditional methods of personal recognition [2].

    During the past few years, biometric quality measurement has become an important concern after a number of studies and technology benchmarks that demonstrate how the performance of biometric systems is heavily affected by the quality of biometric signals [3]. This operationally important step has nevertheless been under-researched compared to the primary feature extraction and pattern recognition tasks [4]. One of the main challenges facing biometric technologies is performance degradation in less controlled situations, and the problem of biometric quality measurement has become even more pressing with the proliferation of portable handheld devices, with at-a-distance and on-the-move acquisition capabilities. These will require robust algorithms capable of handling a range of changing characteristics [2]. Another important example is forensics, in which intrinsic operational factors further degrade recognition performance.

    There are a number of factors that can affect the quality of biometric signals, and a quality measure can play numerous roles in the context of biometric systems. This section summarizes the state of the art in the biometric quality problem, giving an overall framework of the different challenges involved.

    Download full text (pdf)
    fulltext
  • 39.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez, Julian
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Ramos, Daniel
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Gonzalez-Rodriguez, Joaquin
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Quality-Based Conditional Processing in Multi-Biometrics: Application to Sensor Interoperability2010In: IEEE transactions on systems, man and cybernetics. Part A. Systems and humans, ISSN 1083-4427, E-ISSN 1558-2426, Vol. 40, no 6, p. 1168-1179Article in journal (Refereed)
    Abstract [en]

    As biometric technology is increasingly deployed, it will be common to replace parts of operational systems with newer designs. The cost and inconvenience of reacquiring enrolled users when a new vendor solution is incorporated makes this approach difficult, and many applications will need to deal with information from different sources regularly. These interoperability problems can dramatically affect the performance of biometric systems and thus need to be overcome. Here, we describe and evaluate the ATVS-UAM fusion approach submitted to the quality-based evaluation of the 2007 BioSecure Multimodal Evaluation Campaign, whose aim was to compare fusion algorithms when biometric signals were generated using several biometric devices in mismatched conditions. Quality measures from the raw biometric data are available to allow system adjustment to changing quality conditions due to device changes. This system adjustment is referred to as quality-based conditional processing. The proposed fusion approach is based on linear logistic regression, in which fused scores tend to be log-likelihood-ratios. This allows the easy and efficient combination of matching scores from different devices assuming low dependence among modalities. In our system, quality information is used to switch between different system modules depending on the data source (the sensor in our case) and to reject channels with low quality data during the fusion. We compare our fusion approach to a set of rule-based fusion schemes over normalized scores. Results show that the proposed approach outperforms all the rule-based fusion schemes. We also show that with the quality-based channel rejection scheme, an overall improvement of 25% in the equal error rate is obtained. © 2010 IEEE.

    Download full text (pdf)
    fulltext
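
    The quality-based channel rejection described above can be sketched as a simple gating rule. This is an illustrative reduction, not the evaluated system; the threshold and linear fusion weights are assumed for the example:

    ```python
    def quality_gated_fusion(scores, qualities, weights, bias, q_threshold=0.3):
        """Fuse per-channel matcher scores linearly, rejecting any channel
        whose quality measure falls below q_threshold: its term is simply
        dropped from the fused score."""
        z = bias
        for s, q, w in zip(scores, qualities, weights):
            if q >= q_threshold:
                z += w * s
        return z
    ```

    A low-quality channel (e.g. a degraded sensor capture) thus cannot drag down the fused score, which is the intuition behind the reported equal-error-rate improvement.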
  • 40.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez-Aguilar, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    del-Valle, F.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    On-line signature verification using Tablet PC2005In: Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis: Zagreb, Croatia, 15 - 17 September, 2005, Zagreb: University of Zagreb , 2005, p. 245-250Conference paper (Refereed)
    Abstract [en]

    On-line signature verification for Tablet PC devices is studied. The on-line signature verification algorithm presented by the authors at the First International Signature Verification Competition (SVC 2004) is adapted to work in Tablet PC environments. An example prototype for secure access and secure document applications using this Tablet PC system is also reported. Two different commercial Tablet PCs are evaluated, including information of interest for signature verification systems such as sampling and pressure statistics. Authentication performance experiments are reported considering both random and skilled forgeries by using a new database with over 3000 signatures.

    Download full text (pdf)
    fulltext
  • 41.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez-Aguilar, J.
    Universidad Autonoma de Madrid, Spain.
    Galbally, J.
    Universidad Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Universidad Autonoma de Madrid, Spain.
    Exploiting Character Class Information in Forensic Writer Identification2011In: Computational forensics: 4th International Workshop, IWCF 2010 Tokyo, Japan, November 11-12, 2010 : revised selected papers, Berlin: Springer Berlin/Heidelberg, 2011, p. 31-42Conference paper (Refereed)
    Abstract [en]

    Questioned document examination is extensively used by forensic specialists for criminal identification. This paper presents a writer recognition system based on contour features operating in identification mode (one-to-many) and working at the level of isolated characters. Individual characters of a writer are manually segmented and labeled by an expert as pertaining to one of 62 alphanumeric classes (10 numbers and 52 letters, including lowercase and uppercase letters), this being the particular setup used by the forensic laboratory participating in this work. Three different scenarios for identity modeling are proposed, making use, to different degrees, of the class information provided by the alphanumeric samples. Results obtained on a database of 30 writers from real forensic documents show that the character class information given by the manual analysis provides a valuable source of improvement, justifying the significant amount of time spent in manual segmentation and labeling by the forensic specialist. © 2011 Springer-Verlag Berlin Heidelberg.

    Download full text (pdf)
    fulltext
  • 42.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez-Aguilar, J.
    Universidad Autonoma de Madrid, Spain.
    Gilperez, A.
    Universidad Autonoma de Madrid, Spain.
    Galbally, J.
    Universidad Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Universidad Autonoma de Madrid, Spain.
    Robustness of signature verification systems to imitators with increasing skills2009In: ICDAR '09: Proceedings of the 10th International Conference on Document Analysis and Recognition, 26-29 July 2009, Barcelona, Catalonia, Spain, Los Alamitos, Calif.: IEEE Computer Society, 2009Conference paper (Refereed)
    Abstract [en]

    In this paper, we study the impact of an incremental level of skill in the forgeries against signature verification systems. Experiments are carried out using both off-line systems, involving the discrimination of signatures written on a piece of paper, and on-line systems, in which dynamic information of the signing process (such as velocity and acceleration) is also available. We use for our experiments the BiosecurID database, which contains both on-line and off-line versions of signatures, acquired in four sessions across a 4-month time span with an incremental level of skill in the forgeries across sessions. We compare several scenarios with different size and variability of the enrolment set, showing that the problem of skilled forgeries can be alleviated as we consider more signatures for enrolment. © 2009 IEEE.

    Download full text (pdf)
    fulltext
  • 43.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez-Aguilar, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Sensor interoperability and fusion in signature verification: a case study using Tablet PC2005In: Advances in biometric person authentification: International workshop on biometric recognition systems, IWBRS 2005, Beijing, China, October 22-23, 2005 : proceedings, New York: Springer-Verlag New York, 2005, Vol. Springer LNCS-3781, p. 180-187Conference paper (Refereed)
    Abstract [en]

    Several works related to information fusion for signature verification have been presented. However, few works have focused on sensor fusion and sensor interoperability. In this paper, these two topics are evaluated for signature verification using two different commercial Tablet PCs. An enrolment strategy using signatures from the two Tablet PCs is also proposed. Authentication performance experiments are reported by using a database with over 3000 signatures. © Springer-Verlag Berlin Heidelberg 2005.

    Download full text (pdf)
    fulltext
  • 44.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fierrez-Aguilar, J.
    Universidad Autonoma de Madrid, Spain.
    Ortega-Garcia, J.
    Universidad Autonoma de Madrid, Spain.
    Gonzalez-Rodriguez, J.
    Universidad Autonoma de Madrid, Spain.
    Secure access system using signature verification over Tablet PC2007In: IEEE Aerospace and Electronic Systems Magazine, ISSN 0885-8985, E-ISSN 1557-959X, Vol. 22, no 4, p. 3-8Article in journal (Refereed)
    Abstract [en]

    Low-cost portable devices capable of capturing signature signals are being increasingly used. Additionally, the social and legal acceptance of the written signature for authentication purposes is opening a range of new applications. We describe a highly versatile and scalable prototype for Web-based secure access using signature verification. The proposed architecture can be easily extended to work with different kinds of sensors and large-scale databases. Several remarks are also given on security and privacy of network-based signature verification. © 2007 IEEE.

    Download full text (pdf)
    fulltext
  • 45.
    Alonso-Fernandez, Fernando
    et al.
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Madrid, Spain.
    Fierrez-Aguilar, Julian
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Madrid, Spain.
    Ortega-Garcia, Javier
    Escuela Politecnica Superior, Univ. Autonoma de Madrid, Madrid, Spain.
    A Review Of Schemes For Fingerprint Image Quality Computation2005In: COST Action 275: Proceedings of the third COST 275 Workshop Biometrics on the Internet / [ed] Aladdin Ariyaeeinia, Mauro Falcone & Andrea Paoloni, Luxembourg: EU Publications Office (OPOCE) , 2005, p. 3-6Conference paper (Refereed)
    Abstract [en]

    Fingerprint image quality heavily affects the performance of fingerprint recognition systems. This paper reviews existing approaches for fingerprint image quality computation. We also implement, test and compare a selection of them using the MCYT database, which includes 9,000 fingerprint images. Experimental results show that most of the algorithms behave similarly.

    Download full text (pdf)
    fulltext
  • 46.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Buades Rubio, Jose Maria
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Palma, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning2024In: Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023. / [ed] Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J., Cham: Springer, 2024, Vol. 14335, p. 349-361Conference paper (Refereed)
    Abstract [en]

    The widespread use of mobile devices for various digital services has created a need for reliable and real-time person authentication. In this context, facial recognition technologies have emerged as a dependable method for verifying users due to the prevalence of cameras in mobile devices and their integration into everyday applications. The rapid advancement of deep Convolutional Neural Networks (CNNs) has led to numerous face verification architectures. However, these models are often large and impractical for mobile applications, reaching sizes of hundreds of megabytes with millions of parameters. We address this issue by developing SqueezerFaceNet, a light face recognition network with fewer than 1M parameters. This is achieved by applying a network pruning method based on Taylor scores, where filters with small importance scores are removed iteratively. Starting from an already small network (of 1.24M parameters) based on SqueezeNet, we show that it can be further reduced (up to 40%) without an appreciable loss in performance. To the best of our knowledge, we are the first to evaluate network pruning methods for the task of face recognition. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.

    Download full text (pdf)
    fulltext
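
    The Taylor-score pruning underlying SqueezerFaceNet can be sketched as follows. This is a first-order-Taylor illustration in plain Python, not the actual training pipeline; representing each filter's activations and gradients as flat lists is an assumption of the sketch:

    ```python
    def taylor_scores(activations, gradients):
        """First-order Taylor importance per filter:
        |mean(activation * gradient)| approximates the loss change
        caused by removing that filter."""
        scores = []
        for act, grad in zip(activations, gradients):
            s = sum(a * g for a, g in zip(act, grad)) / len(act)
            scores.append(abs(s))
        return scores

    def prune(filters, scores, fraction):
        """One pruning iteration: drop the lowest-scoring fraction of
        filters, keeping the survivors in their original order."""
        n_keep = len(filters) - int(len(filters) * fraction)
        ranked = sorted(range(len(filters)), key=lambda i: scores[i], reverse=True)
        keep = sorted(ranked[:n_keep])
        return [filters[i] for i in keep]
    ```

    Iterating score computation and pruning, with fine-tuning between iterations, is how such methods shrink a network gradually rather than in one step.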
  • 47.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Ramis, Silvia
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Spain.
    Perales, Francisco J.
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Facial Masks and Soft-Biometrics: Leveraging Face Recognition CNNs for Age and Gender Prediction on Mobile Ocular Images2021In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 10, no 5, p. 562-580Article in journal (Refereed)
    Abstract [en]

    We address the use of selfie ocular images captured with smartphones to estimate age and gender. Partial face occlusion has become an issue due to the mandatory use of face masks. Also, the use of mobile devices has exploded, with the pandemic further accelerating the migration to digital services. However, state-of-the-art solutions in related tasks such as identity or expression recognition employ large Convolutional Neural Networks, whose use in mobile devices is infeasible due to hardware limitations and size restrictions of downloadable applications. To counteract this, we adapt two existing lightweight CNNs proposed in the context of the ImageNet Challenge, and two additional architectures proposed for mobile face recognition. Since datasets for soft-biometrics prediction using selfie images are limited, we counteract over-fitting by using networks pre-trained on ImageNet. Furthermore, some networks are further pre-trained for face recognition, for which very large training databases are available. Since both tasks employ similar input data, we hypothesize that such strategy can be beneficial for soft-biometrics estimation. A comprehensive study of the effects of different pre-training over the employed architectures is carried out, showing that, in most cases, a better accuracy is obtained after the networks have been fine-tuned for face recognition. © The Authors

    Download full text (pdf)
    fulltext
  • 48.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Ramis, Silvia
    University of Balearic Islands, Spain.
    Perales, Francisco J.
    University of Balearic Islands, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Soft-Biometrics Estimation In the Era of Facial Masks2020In: 2020 International Conference of the Biometrics Special Interest Group (BIOSIG), Piscataway, N.J.: IEEE, 2020, p. 1-6Conference paper (Refereed)
    Abstract [en]

    We analyze the use of images from face parts to estimate soft-biometrics indicators. Partial face occlusion is common in unconstrained scenarios, and it has become mainstream during the COVID-19 pandemic due to the use of masks. Here, we apply existing pre-trained CNN architectures, proposed in the context of the ImageNet Large Scale Visual Recognition Challenge, to the tasks of gender, age, and ethnicity estimation. Experiments are done with 12007 images from the Labeled Faces in the Wild (LFW) database. We show that such off-the-shelf features can effectively estimate soft-biometrics indicators using only the ocular region. For completeness, we also evaluate images showing only the mouth region. In overall terms, the network providing the best accuracy only suffers accuracy drops of 2-4% when using the ocular region, in comparison to using the entire face. Our approach is also shown to outperform in several tasks two commercial off-the-shelf systems (COTS) that employ the whole face, even if we only use the eye or mouth regions. © 2020 German Computer Association (Gesellschaft für Informatik e.V.).

    Download full text (pdf)
    fulltext
  • 49.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Tiwari, Prayag
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Bigun, Josef
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition2024Conference paper (Refereed)
  • 50.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Mikaelyan, Anna
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Compact Multi-scale Periocular Recognition Using SAFE Features2016In: Proceedings - International Conference on Pattern Recognition, Washington: IEEE, 2016, p. 1455-1460, article id 7899842Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor, which encodes the presence of various symmetric curve families around image key points. We use the sclera center as the single key point for feature extraction, highlighting the object-like identity properties that concentrate at this unique point of the eye. As demonstrated, such discriminative properties can be encoded with a reduced set of symmetric curves. Experiments are done with a database of periocular images captured with a digital camera. We test our system against reference periocular features, achieving top performance with a considerably smaller feature vector (given by the use of a single key point). All the systems tested also show a nearly steady correlation between acquisition distance and performance, and they are also able to cope well when enrolment and test images are not captured at the same distance. Fusion experiments among the available systems are also provided. © 2016 IEEE
