hh.se Publications
1 - 50 of 168
  • 1.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Barrachina, Javier
    Facephi Biometria, Alicante, Spain.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
SqueezeFacePoseNet: Lightweight Face Verification Across Different Poses for Mobile Platforms, 2021. In: Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10-15, 2021, Proceedings, Part VIII / [ed] Alberto Del Bimbo, Rita Cucchiara, Stan Sclaroff, Giovanni Maria Farinella, Tao Mei, Marco Bertini, Hugo Jair Escalante, Roberto Vezzani, Berlin: Springer, 2021, p. 139-153. Conference paper (Refereed)
    Abstract [en]

    Ubiquitous and real-time person authentication has become critical after the breakthrough of all kinds of services provided via mobile devices. In this context, face technologies can provide reliable and robust user authentication, given the availability of cameras in these devices, as well as their widespread use in everyday applications. The rapid development of deep Convolutional Neural Networks (CNNs) has resulted in many accurate face verification architectures. However, their typical size (hundreds of megabytes) makes them impractical to incorporate into downloadable mobile applications, where the entire file typically may not exceed 100 MB. Accordingly, we address the challenge of developing a lightweight face recognition network of just a few megabytes that can operate with sufficient accuracy in comparison to much larger models. The network should also be able to operate under different poses, given the variability naturally observed in uncontrolled environments where mobile devices are typically used. In this paper, we adapt the lightweight SqueezeNet model, of just 4.4 MB, to effectively provide cross-pose face recognition. After training on the MS-Celeb-1M and VGGFace2 databases, our model achieves an EER of 1.23% on the difficult frontal vs. profile comparison, and 0.54% on profile vs. profile images. Under less extreme variations involving frontal images in any of the enrolment/query image pairs, EER is pushed down to <0.3%, and the FRR at FAR=0.1% to less than 1%. This makes our light model suitable for face recognition where at least acquisition of the enrolment image can be controlled. At the cost of a slight degradation in performance, we also test an even lighter model (of just 2.5 MB) in which regular convolutions are replaced with depth-wise separable convolutions. © 2021, Springer Nature Switzerland AG.

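The size reduction the abstract attributes to depth-wise separable convolutions can be illustrated with a back-of-the-envelope parameter count. This is a hypothetical layer (channel counts and kernel size chosen for illustration, not taken from the paper):

```python
def conv_params(c_in, c_out, k):
    """Weights in a regular k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depth-wise k x k filter per input channel, then a 1x1 point-wise mix."""
    return c_in * k * k + c_in * c_out

# Illustrative layer: 64 -> 128 channels with a 3x3 kernel.
regular = conv_params(64, 128, 3)         # 73728 weights
separable = separable_params(64, 128, 3)  # 576 + 8192 = 8768 weights
print(f"{regular} vs {separable}: {regular / separable:.1f}x fewer")
```

The roughly 8x saving per layer is what allows an architecture of this kind to shrink from 4.4 MB towards 2.5 MB at a modest accuracy cost.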
  • 2.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
A survey on periocular biometrics research, 2016. In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 82, part 2, p. 92-105. Article in journal (Refereed)
    Abstract [en]

    Periocular refers to the facial region in the vicinity of the eye, including eyelids, lashes and eyebrows. While the face and iris have been extensively studied, the periocular region has emerged as a promising trait for unconstrained biometrics, following demands for increased robustness of face or iris systems. With a surprisingly high discrimination ability, this region can be easily obtained with existing setups for face and iris, and the requirement of user cooperation can be relaxed, thus facilitating interaction with biometric systems. It is also available over a wide range of distances, even when the iris texture cannot be reliably obtained (low resolution) or under partial face occlusion (close distances). Here, we review the state of the art in periocular biometrics research. A number of aspects are described, including: (i) existing databases, (ii) algorithms for periocular detection and/or segmentation, (iii) features employed for recognition, (iv) identification of the most discriminative regions of the periocular area, (v) comparison with the iris and face modalities, (vi) soft-biometrics (gender/ethnicity classification), and (vii) the impact of gender transformation and plastic surgery on recognition accuracy. This work is expected to provide insight into the most relevant issues in periocular biometrics, giving comprehensive coverage of the existing literature and current state of the art. © 2015 Elsevier B.V. All rights reserved.

  • 3.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
An Overview of Periocular Biometrics, 2017. In: Iris and Periocular Biometric Recognition / [ed] Christian Rathgeb & Christoph Busch, London: The Institution of Engineering and Technology, 2017, p. 29-53. Chapter in book (Refereed)
    Abstract [en]

    Periocular biometrics specifically refers to the externally visible skin region of the face that surrounds the eye socket. Its utility is especially pronounced when the iris or the face cannot be properly acquired, periocular being the ocular modality requiring the least constrained acquisition process. It is available over a wide range of distances, even under partial face occlusion (close distance) or low-resolution iris (long distance), making it very suitable for unconstrained or uncooperative scenarios. It also avoids the need for iris segmentation, an issue in difficult images. In such situations, identifying a suspect where only the periocular region is visible is one of the toughest real-world challenges in biometrics. The richness of the periocular region in terms of identity is so high that the whole face can even be reconstructed from images of the periocular region alone. The technological shift to mobile devices has also resulted in many identity-sensitive applications becoming prevalent on these devices.

  • 4.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
Best Regions for Periocular Recognition with NIR and Visible Images, 2014. In: 2014 IEEE International Conference on Image Processing (ICIP), Piscataway, NJ: IEEE Press, 2014, p. 4987-4991. Conference paper (Refereed)
    Abstract [en]

    We evaluate the most useful regions for periocular recognition. For this purpose, we employ our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the spectrum. We use both NIR and visible iris images. The best regions are selected via Sequential Forward Floating Selection (SFFS). The iris neighborhood (including sclera and eyelashes) is found to be the best region with NIR data, while the surrounding skin texture (which is over-illuminated in NIR images) is the most discriminative region in the visible range. To the best of our knowledge, only one work in the literature has evaluated the influence of different regions on the performance of periocular recognition algorithms. Our results are along the same lines, despite the use of completely different matchers. We also evaluate an iris texture matcher, providing fusion results with our periocular system as well. © 2014 IEEE.

  • 5.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
Biometric Recognition Using Periocular Images, 2013. Conference paper (Other academic)
    Abstract [en]

    We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum at different frequencies and orientations. A number of aspects are studied, including: 1) grid adaptation to the dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, and 4) rotation compensation between query and test images. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, allowing this step to be removed for computational efficiency. Also, the performance is not affected substantially if we use a grid of fixed dimensions, and is even better in certain situations, avoiding the need for accurate detection of the iris region.

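The retinotopic sampling grids recurring in these abstracts place sample points densely near the eye centre and sparsely towards the periphery. A rough illustrative sketch (not the authors' implementation; the ring count, base radius and growth factor here are arbitrary assumptions):

```python
import numpy as np

def retinotopic_grid(cx, cy, r0=4.0, growth=1.5, rings=5, points=16):
    """Sample points on concentric rings around (cx, cy); the ring radius
    grows geometrically with eccentricity, so the grid is densest
    near the eye centre, loosely mimicking retinal sampling."""
    pts = []
    for k in range(rings):
        r = r0 * growth ** k
        for a in np.linspace(0.0, 2.0 * np.pi, points, endpoint=False):
            pts.append((cx + r * np.cos(a), cy + r * np.sin(a)))
    return np.array(pts)

grid = retinotopic_grid(100.0, 80.0)
print(grid.shape)  # (80, 2): 5 rings x 16 points each
```

In the papers, Gabor responses of the local power spectrum are then collected at each grid point to form the periocular feature vector.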
  • 6.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology.
    Bigun, Josef
    Halmstad University, School of Information Technology.
Continuous Examination by Automatic Quiz Assessment Using Spiral Codes and Image Processing, 2022. In: 2022 IEEE Global Engineering Education Conference (EDUCON) / [ed] Ilhem Kallel; Habib M. Kammoun; Lobna Hsairi, IEEE, 2022, Vol. 2022-Marc, p. 929-935. Conference paper (Refereed)
    Abstract [en]

    We describe a technical solution implemented at Halmstad University to automatise the assessment and reporting of results of paper-based quiz exams. Paper quizzes are affordable and within reach of campus education in classrooms. Offering and taking them is accepted as they cause fewer issues with reliability and democratic access; e.g. a large number of students can take them without a trusted mobile device, internet, or battery. By contrast, correction of the quiz is a considerable obstacle. We suggest mitigating the issue with a novel image processing technique using harmonic spirals that aligns answer sheets with sub-pixel accuracy to read student identity and answers and to email results within minutes, all fully automatically. Using the described method, we carry out regular weekly examinations in two master courses at the mentioned centre without a significant workload increase. The employed solution also enables us to assign a unique identifier to each quiz (e.g. week 1, week 2...) while allowing us to have an individualised quiz for each student. © 2022 IEEE.

  • 7.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
Exploiting Periocular and RGB Information in Fake Iris Detection, 2014. In: 2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO): 26 – 30 May 2014 Opatija, Croatia: Proceedings / [ed] Petar Biljanovic, Zeljko Butkovic, Karolj Skala, Stjepan Golubic, Marina Cicin-Sain, Vlado Sruk, Slobodan Ribaric, Stjepan Gros, Boris Vrdoljak, Mladen Mauher & Goran Cetusic, Rijeka: Croatian Society for Information and Communication Technology, Electronics and Microelectronics - MIPRO, 2014, p. 1354-1359. Conference paper (Refereed)
    Abstract [en]

    Fake iris detection has been studied by several researchers. However, to date, the experimental setup has been limited to near-infrared (NIR) sensors, which provide grey-scale images. This work makes use of images captured in the visible range with color (RGB) information. We employ Gray-Level Co-Occurrence textural features and SVM classifiers for the task of fake iris detection. The best features are selected with the Sequential Forward Floating Selection (SFFS) algorithm. To the best of our knowledge, this is the first work evaluating spoofing attacks using color iris images in the visible range. Our results demonstrate that the use of features from the three color channels clearly outperforms the accuracy obtained from the luminance (gray-scale) image. Also, the R channel is found to be the best individual channel. Lastly, we analyze the effect of extracting features from selected (eye or periocular) regions only. The best performance is obtained when GLCM features are extracted from the whole image, highlighting that both the iris and the surrounding periocular region are relevant for fake iris detection. An added advantage is that no accurate iris segmentation is needed. This work is relevant due to the increasing prevalence of more relaxed scenarios where iris acquisition using NIR light is unfeasible (e.g. distant acquisition or mobile devices), which are putting high pressure on the development of algorithms capable of working with visible light. © 2014 MIPRO.

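A minimal sketch of the gray-level co-occurrence idea used in this and the related fake-iris papers: count how often pairs of gray levels occur at a fixed pixel offset, then derive a statistic such as contrast. This is illustrative only; the papers' actual feature set, offsets and quantisation are not reproduced here:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Joint probability of gray-level pairs at offset (dx, dy)."""
    p = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            p[img[y, x], img[y + dy, x + dx]] += 1
    return p / p.sum()

def contrast(p):
    """GLCM contrast: sum_ij (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# Tiny 4-level test image: four uniform quadrants.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
print(round(contrast(glcm(img)), 3))  # 0.333: 4 of 12 horizontal pairs differ
```

Features of this kind, computed per color channel, are what the SVM classifier separates into real vs. fake samples.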
  • 8.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
Eye Detection by Complex Filtering for Periocular Recognition, 2014. In: 2nd International Workshop on Biometrics and Forensics (IWBF2014): Valletta, Malta (27-28th March 2014), Piscataway, NJ: IEEE Press, 2014, article id 6914250. Conference paper (Refereed)
    Abstract [en]

    We present a novel system to localize the eye position based on symmetry filters. By using a 2D separable filter tuned to detect circular symmetries, detection is done with a few 1D convolutions. The detected eye center is used as input to our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the local power spectrum. This setup is evaluated with two databases of iris data, one acquired with a close-up NIR camera, and another in visible light with a web-cam. The periocular system shows high resilience to inaccuracies in the position of the detected eye center. The density of the sampling grid can also be reduced without sacrificing too much accuracy, allowing additional computational savings. We also evaluate an iris texture matcher based on 1D Log-Gabor wavelets. Despite the poorer performance of the iris matcher with the webcam database, its fusion with the periocular system results in improved performance. © 2014 IEEE.

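The speed-up from filter separability mentioned in the abstract rests on a standard identity: a rank-1 2D kernel equals the outer product of two 1D kernels, so one 2D convolution factors into a row pass followed by a column pass. A hedged sketch with generic kernels (not the paper's complex symmetry filters), verified on an impulse image:

```python
import numpy as np

def filter_rows(img, k):
    """1D convolution of every row with kernel k."""
    return np.array([np.convolve(row, k, mode="same") for row in img])

def separable_filter(img, k_row, k_col):
    """Two 1D passes instead of one full 2D convolution."""
    return filter_rows(filter_rows(img, k_row).T, k_col).T

# Impulse response check: filtering a delta image must reproduce the
# outer product of the two 1D kernels around the impulse position.
delta = np.zeros((5, 5))
delta[2, 2] = 1.0
out = separable_filter(delta, np.array([1.0, 2.0, 1.0]),
                              np.array([1.0, 2.0, 1.0]))
print(out[1:4, 1:4])  # the 3x3 outer product [[1,2,1],[2,4,2],[1,2,1]]
```

For a k x k kernel on an N x N image this turns O(N^2 k^2) multiplications into O(N^2 k), which is why the eye detector needs only "a few 1D convolutions".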
  • 9.
    Alonso-Fernandez, Fernando
    et al.
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
Fake Iris Detection: A Comparison Between Near-Infrared and Visible Images, 2014. In: Proceedings: 10th International Conference on Signal-Image Technology and Internet-Based Systems, SITIS 2014 / [ed] Kokou Yetongnon, Albert Dipanda & Richard Chbeir, Piscataway, NJ: IEEE Computer Society, 2014, p. 546-553. Conference paper (Refereed)
    Abstract [en]

    Fake iris detection has been studied so far using near-infrared (NIR) sensors, which provide grey-scale images, i.e. with luminance information only. Here, we incorporate into the analysis images captured in the visible range, with color information, and perform comparative experiments between the two types of data. We employ Gray-Level Co-occurrence textural features and SVM classifiers. These features analyze various image properties related to contrast, pixel regularity, and pixel co-occurrence statistics. We select the best features with the Sequential Forward Floating Selection (SFFS) algorithm. We also study the effect of extracting features from selected (eye or periocular) regions only. Our experiments are done with fake samples obtained from printed images, which are then presented to the same sensor as the real ones. Results show that fake images captured in the NIR range are easier to detect than visible images (even if we downsample the NIR images to equate the average size of the iris region between the two databases). We also observe that the best performance with both sensors can be obtained with features extracted from the whole image, showing that not only the eye region, but also the surrounding periocular texture, is relevant for fake iris detection. An additional source of improvement with the visible sensor also comes from the use of the three RGB channels, in comparison with the luminance image only. A further analysis also reveals that some features are better suited to one particular sensor than to the others. © 2014 IEEE

  • 10.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
Halmstad University submission to the First ICB Competition on Iris Recognition (ICIR2013), 2013. Other (Other academic)
  • 11.
    Alonso-Fernandez, Fernando
    et al.
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
Iris Boundaries Segmentation Using the Generalized Structure Tensor: A Study on the Effects of Image Degradation, 2012. In: Biometrics: Theory, Applications and Systems (BTAS), 2012 IEEE Fifth International Conference on, Piscataway, N.J.: IEEE Press, 2012, p. 426-431, article id 6374610. Conference paper (Refereed)
    Abstract [en]

    We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST), which also includes an eyelid detection step. It is compared with traditional segmentation systems based on Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database. Segmentation performance under different degrees of image defocus and motion blur is also evaluated. Reported results show the effectiveness of the proposed algorithm, with similar performance to the others in pupil detection, and clearly better performance for sclera detection at all levels of degradation. Verification results using 1D Log-Gabor wavelets are also given, showing the benefits of the eyelid removal step. These results point out the validity of the GST as an alternative to other iris segmentation systems. © 2012 IEEE.

  • 12.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
Iris Pupil Detection by Structure Tensor Analysis, 2011. Conference paper (Other academic)
    Abstract [en]

    This paper presents a pupil detection/segmentation algorithm for iris images based on Structure Tensor analysis. Eigenvalues of the structure tensor matrix have been observed to be high at pupil boundaries and specular reflections of iris images. We exploit this fact to detect the specular reflection regions and the boundary of the pupil in a sequential manner. Experimental results are given using the CASIA-IrisV3-Interval database (249 contributors, 396 different eyes, 2,639 iris images). Results show that our algorithm works especially well in detecting the specular reflections (98.98% success rate), and pupil boundary detection is correctly done in 84.24% of the images.

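The observation that structure-tensor eigenvalues peak at boundaries such as the pupil edge can be sketched numerically. This is illustrative only (a synthetic dark disc stands in for the pupil; the patch locations are arbitrary assumptions, not the paper's method):

```python
import numpy as np

def tensor_eigs(patch):
    """Eigenvalues of the structure tensor accumulated over a patch:
    J = sum of [[Ix^2, IxIy], [IxIy, Iy^2]] over all pixels."""
    gy, gx = np.gradient(patch.astype(float))
    j = np.array([[(gx * gx).sum(), (gx * gy).sum()],
                  [(gx * gy).sum(), (gy * gy).sum()]])
    return np.linalg.eigvalsh(j)  # ascending order

# Synthetic "pupil": dark disc on a bright background.
yy, xx = np.mgrid[0:64, 0:64]
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2, 0.0, 1.0)

flat = img[28:37, 28:37]   # interior of the disc: no gradients at all
edge = img[12:22, 27:38]   # straddles the boundary: strong gradients
print(tensor_eigs(flat).max(), tensor_eigs(edge).max())
```

Scanning such a response over the image and thresholding it is one simple way to flag candidate boundary and reflection pixels before the sequential detection the abstract describes.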
  • 13.
    Alonso-Fernandez, Fernando
    et al.
Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
Iris Segmentation Using the Generalized Structure Tensor, 2012. Conference paper (Other academic)
    Abstract [en]

    We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST). We compare this approach with traditional iris segmentation systems based on Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database with respect to a segmentation made manually by a human expert. The proposed algorithm outperforms the baseline approaches, pointing out the validity of the GST as an alternative to classic iris segmentation systems. We also detect the cross positions between the eyelids and the outer iris boundary. Verification results using a publicly available iris recognition system based on 1D Log-Gabor wavelets are also given, showing the benefits of the eyelid removal step.

  • 14.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
Near-infrared and visible-light periocular recognition with Gabor features using frequency-adaptive automatic eye detection, 2015. In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 4, no 2, p. 74-89. Article in journal (Refereed)
    Abstract [en]

    Periocular recognition has gained attention recently due to demands for increased robustness of face or iris in less controlled scenarios. We present a new system for eye detection based on complex symmetry filters, which has the advantage of not needing training. Also, separability of the filters allows faster detection via one-dimensional convolutions. This system is used as input to a periocular algorithm based on retinotopic sampling grids and Gabor spectrum decomposition. The evaluation framework is composed of six databases acquired both with near-infrared and visible sensors. The experimental setup is complemented with four iris matchers, used for fusion experiments. The eye detection system presented shows very high accuracy with near-infrared data, and reasonably good accuracy with one visible database. Regarding the periocular system, it exhibits great robustness to small errors in locating the eye centre, as well as to scale changes of the input image. The density of the sampling grid can also be reduced without sacrificing accuracy. Lastly, despite the poorer performance of the iris matchers with visible data, fusion with the periocular system can provide an improvement of more than 20%. The six databases used have been manually annotated, with the annotation made publicly available. © The Institution of Engineering and Technology 2015.

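The EER figures quoted throughout these abstracts summarise how well genuine and impostor score distributions separate. A simple illustrative computation on toy scores (any real evaluation would use the full score sets, not four values per class):

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate: the operating point where the false accept rate
    (impostors scoring at or above the threshold) matches the false
    reject rate (genuine comparisons scoring below it)."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    best_gap, best_rate = np.inf, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = float(np.mean(impostor >= t))
        frr = float(np.mean(genuine < t))
        if abs(far - frr) < best_gap:  # keep the most balanced threshold
            best_gap, best_rate = abs(far - frr), (far + frr) / 2
    return best_rate

print(eer([0.9, 0.8, 0.7, 0.4], [0.5, 0.3, 0.2, 0.1]))  # 0.25
```

Score-level fusion of periocular and iris matchers, as in this paper, aims precisely to lower this rate below what either matcher achieves alone.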
  • 15.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
Periocular Biometrics: Databases, Algorithms and Directions, 2016. In: 2016 4th International Workshop on Biometrics and Forensics (IWBF): Proceedings: 3-4 March, 2016, Limassol, Cyprus, Piscataway, NJ: IEEE, 2016, article id 7449688. Conference paper (Refereed)
    Abstract [en]

    Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing insight into the most relevant issues and giving thorough coverage of the existing literature. Future research trends are also briefly discussed. © 2016 IEEE.

  • 16.
    Alonso-Fernandez, Fernando
    et al.
Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
Periocular Recognition Using Retinotopic Sampling and Gabor Decomposition, 2012. In: Computer Vision – ECCV 2012: Workshops and demonstrations: Florence, Italy, October 7-13, 2012, Proceedings. Part II / [ed] Fusiello, Andrea; Murino, Vittorio; Cucchiara, Rita, Berlin: Springer, 2012, Vol. 7584, p. 309-318. Conference paper (Refereed)
    Abstract [en]

    We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum. A number of aspects are studied, including: 1) grid adaptation to the dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, 4) rotation compensation between query and test images, and 5) comparison with an iris machine expert. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, allowing this step to be removed for computational efficiency. Also, the performance is not affected substantially if we use a grid of fixed dimensions, and is even better in certain situations, avoiding the need for accurate detection of the iris region. © 2012 Springer-Verlag.

  • 17.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
Periocular Biometrics: Databases, Algorithms and Directions, 2016. Conference paper (Other academic)
    Abstract [en]

    Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing insight into the most relevant issues and giving thorough coverage of the existing literature. Future research trends are also briefly discussed.

  • 18.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Quality Factors Affecting Iris Segmentation and Matching (2013). In: Proceedings – 2013 International Conference on Biometrics, ICB 2013 / [ed] Julian Fierrez, Ajay Kumar, Mayank Vatsa, Raymond Veldhuis & Javier Ortega-Garcia, Piscataway, N.J.: IEEE conference proceedings, 2013, article id 6613016. Conference paper (Refereed)
    Abstract [en]

    Image degradations can affect the different processing steps of iris recognition systems. Although several quality factors have been proposed for iris images, their specific effect on segmentation accuracy is often overlooked, with most efforts focused on their impact on recognition accuracy. Accordingly, we evaluate the impact of eight quality measures on the performance of iris segmentation. We use a database acquired with a close-up iris sensor and a built-in quality-checking process. Despite the latter, we report differences in behavior, with some measures clearly predicting segmentation performance, while others give inconclusive results. Recognition experiments with two matchers also show that segmentation and matching performance are not necessarily affected by the same factors. The resilience of one matcher to segmentation inaccuracies also suggests that segmentation errors due to low image quality are not necessarily revealed by the matcher, pointing out the importance of a separate evaluation of segmentation accuracy. © 2013 IEEE.

  • 19.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Englund, Cristofer
    RISE Viktoria, Gothenburg, Sweden.
    Expression Recognition Using the Periocular Region: A Feasibility Study (2018). In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir, Los Alamitos: IEEE, 2018, p. 536-541. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this task by analyzing the whole face. Periocular is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data, including spatio-temporal information that is often not available.

  • 20.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology.
    Bigun, Josef
    Halmstad University, School of Information Technology.
    Fierrez, Julian
    Universidad Autónoma de Madrid, Madrid, Spain.
    Damer, Naser
    Fraunhofer Institute for Computer Graphics Research, Darmstadt, Germany.
    Proenca, Hugo
    University of Beira Interior, Covilhã, Portugal.
    Ross, Arun
    Michigan State University, East Lansing, United States.
    Periocular Biometrics: A Modality for Unconstrained Scenarios (2024). In: Computer, ISSN 0018-9162, E-ISSN 1558-0814, Vol. 57, no 6, p. 40-49. Article in journal (Refereed)
    Abstract [en]

    This article discusses the state of the art in periocular biometrics, presenting an overall framework encompassing the field's most significant research aspects, which include ocular definition, acquisition, and detection; identity recognition; and ocular soft-biometric analysis. © 1970-2012 IEEE.

  • 21.
    Alonso-Fernandez, Fernando
    et al.
    ATVS/Biometric Recognition Group, Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Fierrez, Julian
    ATVS/Biometric Recognition Group, Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fronthaler, Hartwig
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Kollreider, Klaus
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Ortega-Garcia, Javier
    ATVS/Biometric Recognition Group, Escuela Politecnica Superior, Univ. Autonoma de Madrid, Spain.
    Fingerprint Recognition (2009). In: Guide to Biometric Reference Systems and Performance Evaluation / [ed] Dijana Petrovska-Delacrétaz, Gérard Chollet, Bernadette Dorizzi, London: Springer London, 2009, p. 51-88. Chapter in book (Other academic)
    Abstract [en]

    First, an overview of the state of the art in fingerprint recognition is presented, including current issues and challenges. Fingerprint databases and evaluation campaigns are also summarized. This is followed by the description of the BioSecure Benchmarking Framework for Fingerprints, using the NIST Fingerprint Image Software (NFIS2), the publicly available MCYT-100 database, and two evaluation protocols. Two research systems are compared within the proposed framework. The evaluated systems follow different approaches for fingerprint processing and are discussed in detail. Fusion experiments involving different combinations of the presented systems are also given. The NFIS2 software is also used to obtain the fingerprint scores for the multimodal experiments conducted within the BioSecure Multimodal Evaluation Campaign (BMEC’2007) reported in Chap. 11.

  • 22.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Eigen-patch iris super-resolution for iris recognition improvement (2015). In: 2015 23rd European Signal Processing Conference (EUSIPCO), Piscataway, NJ: IEEE Press, 2015, p. 76-80, article id 7362348. Conference paper (Refereed)
    Abstract [en]

    Low image resolution will be a predominant factor in iris recognition systems as they evolve towards more relaxed acquisition conditions. Here, we propose a super-resolution technique to enhance iris images based on Principal Component Analysis (PCA) Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information and reducing artifacts. We validate the system using a database of 1,872 near-infrared iris images. Results show the superiority of the presented approach over bilinear or bicubic interpolation, with the eigen-patch method being more resilient to image resolution reduction. We also perform recognition experiments with an iris matcher based on 1D Log-Gabor features, demonstrating that verification rates degrade more rapidly with bilinear or bicubic interpolation. ©2015 IEEE
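    The eigen-transformation of local patches mentioned in this abstract can be sketched in a few lines. This is a minimal single-patch illustration under stated assumptions, not the authors' implementation: `lr_train` and `hr_train` are hypothetical arrays of co-registered low/high-resolution training patches (one flattened patch per row).

    ```python
    import numpy as np

    def eigen_patch_sr(lr_patch, lr_train, hr_train):
        """Eigen-transformation super-resolution of one patch: express the
        low-resolution input as a weighted combination of LR training patches
        (via PCA), then apply the same weights to the HR training patches."""
        mu_l, mu_h = lr_train.mean(axis=0), hr_train.mean(axis=0)
        A = lr_train - mu_l                      # centered LR training set
        U, S, Vt = np.linalg.svd(A, full_matrices=False)
        c = Vt @ (lr_patch - mu_l)               # PCA coefficients of the input
        w = U @ (c / S)                          # per-sample reconstruction weights
        return mu_h + w @ (hr_train - mu_h)      # same combination in HR space
    ```

    In practice small singular values would be truncated for stability, patches overlap, and each patch position keeps its own training set, which is what preserves local information.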

  • 23.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Improving Very Low-Resolution Iris Identification Via Super-Resolution Reconstruction of Local Patches (2017). In: 2017 International Conference of the Biometrics Special Interest Group (BIOSIG) / [ed] Arslan Brömme, Christoph Busch, Antitza Dantcheva, Christian Rathgeb & Andreas Uhl, Bonn: Gesellschaft für Informatik, 2017, Vol. P-270, article id 8053512. Conference paper (Refereed)
    Abstract [en]

    Relaxed acquisition conditions in iris recognition systems have significant effects on the quality and resolution of acquired images, which can severely affect performance if not addressed properly. Here, we evaluate two trained super-resolution algorithms in the context of iris identification. They are based on reconstruction of local image patches, where each patch is reconstructed separately using its own optimal reconstruction function. We employ a database of 1,872 near-infrared iris images (with 163 different identities for identification experiments) and three iris comparators. The trained approaches are substantially superior to bilinear or bicubic interpolations, with one of the comparators providing a Rank-1 performance of ∼88% with images of only 15×15 pixels, and an identification rate of 95% with a hit list size of only 8 identities. © 2017 Gesellschaft fuer Informatik.

  • 24.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Iris Super-Resolution Using Iterative Neighbor Embedding (2017). In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops / [ed] Lisa O’Conner, Los Alamitos: IEEE Computer Society, 2017, p. 655-663. Conference paper (Refereed)
    Abstract [en]

    Iris recognition research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this paper, we evaluate a super-resolution algorithm that reconstructs iris images based on iterative neighbor embedding of local image patches, which aims to represent input low-resolution patches while preserving the geometry of the original high-resolution space. To this end, the geometries of the low- and high-resolution manifolds are jointly considered during the reconstruction process. We validate the system with a database of 1,872 near-infrared iris images, while fusion of two iris comparators has been adopted to improve recognition performance. The presented approach is substantially superior to bilinear/bicubic interpolation at very low resolutions, and it also outperforms a previous PCA-based iris reconstruction approach which only considers the geometry of the low-resolution manifold during the reconstruction process. © 2017 IEEE

  • 25.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Reconstruction of Smartphone Images for Low Resolution Iris Recognition (2015). In: 2015 IEEE International Workshop on Information Forensics and Security (WIFS), Piscataway, NJ: IEEE Press, 2015, article id 7368600. Conference paper (Refereed)
    Abstract [en]

    As iris systems evolve towards more relaxed acquisition, low image resolution will be a predominant issue. In this paper we evaluate a super-resolution method to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. We employ a database of 560 images captured in the visible spectrum with two smartphones. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions. We also carry out recognition experiments with six iris matchers, showing that better performance can be obtained at low resolutions with the proposed eigen-patch reconstruction, with fusion of only two systems pushing the EER below 5-8% for down-sampling factors up to an image size of only 13×13. © 2015 IEEE.

  • 26.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Gonzalez-Sosa, Ester
    Nokia Bell-Labs, Madrid, Spain.
    A Survey of Super-Resolution in Iris Biometrics with Evaluation of Dictionary-Learning (2019). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 6519-6544. Article in journal (Refereed)
    Abstract [en]

    The lack of resolution has a negative impact on the performance of image-based biometrics. While many generic super-resolution methods have been proposed to restore low-resolution images, they usually aim to enhance their visual appearance. However, an overall visual enhancement of biometric images does not necessarily correlate with a better recognition performance. Reconstruction approaches thus need to incorporate specific information from the target biometric modality to effectively improve recognition performance. This paper presents a comprehensive survey of iris super-resolution approaches proposed in the literature. We have also adapted an Eigen-patches reconstruction method based on PCA Eigen-transformation of local image patches. The structure of the iris is exploited by building a patch-position dependent dictionary. In addition, image patches are restored separately, having their own reconstruction weights. This allows the solution to be locally optimized, helping to preserve local information. To evaluate the algorithm, we degraded high-resolution images from the CASIA Interval V3 database. Different restorations were considered, with 15 × 15 pixels being the smallest resolution evaluated. To the best of our knowledge, this is among the smallest resolutions employed in the literature. The experimental framework is complemented with six publicly available iris comparators, which were used to carry out biometric verification and identification experiments. Experimental results show that the proposed method significantly outperforms both bilinear and bicubic interpolation at very low resolutions. The performance of a number of comparators attains an impressive Equal Error Rate as low as 5%, and a Top-1 accuracy of 77-84% when considering iris images of only 15 × 15 pixels. These results clearly demonstrate the benefit of using trained super-resolution techniques to improve the quality of iris images prior to matching. © 2018, Emerald Publishing Limited.

  • 27.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Super-Resolution for Selfie Biometrics: Introduction and Application to Face and Iris (2019). In: Selfie Biometrics: Advances and Challenges / [ed] Ajita Rattani, Reza Derakhshani & Arun A. Ross, Cham: Springer, 2019, 1, p. 105-128. Chapter in book (Refereed)
    Abstract [en]

    Biometric research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this chapter, we give an overview of recent research in super-resolution reconstruction applied to biometrics, with a focus on face and iris images in the visible spectrum, two prevalent modalities in selfie biometrics. After an introduction to the generic topic of super-resolution, we investigate methods adapted to cater for the particularities of these two modalities. Through experiments, we show the benefits of incorporating super-resolution to improve the quality of biometric images prior to recognition. © Springer Nature AG 2019

  • 28.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Learning-Based Local-Patch Resolution Reconstruction of Iris Smart-phone Images (2017). Conference paper (Refereed)
    Abstract [en]

    Application of ocular biometrics in mobile and at-a-distance environments still has several open challenges, with the lack of quality and resolution being an evident issue that can severely affect performance. In this paper, we evaluate two trained image reconstruction algorithms in the context of smartphone biometrics. They are based on the use of coupled dictionaries to learn the mapping relations between low- and high-resolution images. In addition, reconstruction is done in local overlapped image patches, where up-scaling functions are modelled separately for each patch, allowing local details to be better preserved. The experimental setup is complemented with a database of 560 images captured with two different smartphones, and two iris comparators employed for verification experiments. We show that the trained approaches are substantially superior to bilinear or bicubic interpolation at very low resolutions (images of 13×13 pixels). Under such challenging conditions, an EER of ∼7% can be achieved using individual comparators, which is further pushed down to 4-6% after the fusion of the two systems. © 2017 IEEE

  • 29.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Very Low-Resolution Iris Recognition Via Eigen-Patch Super-Resolution and Matcher Fusion (2016). In: 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Piscataway: IEEE, 2016, article id 7791208. Conference paper (Refereed)
    Abstract [en]

    Current research in iris recognition is moving towards enabling more relaxed acquisition conditions. This has effects on the quality of acquired images, with low resolution being a predominant issue. Here, we evaluate a super-resolution algorithm used to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. Contrast enhancement is used to improve the reconstruction quality, while matcher fusion has been adopted to improve iris recognition performance. We validate the system using a database of 1,872 near-infrared iris images. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions, and the fusion of the two systems pushes the EER below 5% for down-sampling factors up to an image size of only 13×13.
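    The matcher (score-level) fusion mentioned in this and several neighboring entries can be illustrated with a common baseline: min-max normalization followed by the sum rule. This is a generic sketch of that standard recipe, not necessarily the exact scheme used in the paper.

    ```python
    import numpy as np

    def minmax_norm(scores):
        """Map comparator scores to [0, 1], a usual step before combining
        scores that live on different scales."""
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    def fuse_sum_rule(scores_a, scores_b):
        """Sum-rule fusion of two comparators after min-max normalization."""
        return (minmax_norm(scores_a) + minmax_norm(scores_b)) / 2
    ```

    The sum rule is popular because it is robust to noise in the individual comparators; the fused score is then thresholded exactly like a single-matcher score.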

  • 30.
    Alonso-Fernandez, Fernando
    et al.
    University de Madrid, Madrid, Spain.
    Fierrez, J.
    Universidad Autonoma de Madrid.
    Ortega-Garcia, J.
    Universidad Autónoma de Madrid.
    Gonzalez-Rodriguez, J.
    Universidad Autónoma de Madrid.
    Fronthaler, Hartwig
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Kollreider, Klaus
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    A Comparative Study of Fingerprint Image-Quality Estimation Methods (2007). In: IEEE Transactions on Information Forensics and Security, ISSN 1556-6013, E-ISSN 1556-6021, Vol. 2, no 4, p. 734-743. Article in journal (Refereed)
    Abstract [en]

    One of the open issues in fingerprint verification is the lack of robustness against image-quality degradation. Poor-quality images result in spurious and missing features, thus degrading the performance of the overall system. Therefore, it is important for a fingerprint recognition system to estimate the quality and validity of the captured fingerprint images. In this work, we review existing approaches for fingerprint image-quality estimation, including the rationale behind the published measures and visual examples showing their behavior under different quality conditions. We have also tested a selection of fingerprint image-quality estimation algorithms. For the experiments, we employ the BioSec multimodal baseline corpus, which includes 19,200 fingerprint images from 200 individuals acquired in two sessions with three different sensors. The behavior of the selected quality measures is compared, showing high correlation between them in most cases. The effect of low-quality samples on the verification performance is also studied for a widely available minutiae-based fingerprint matching system.

  • 31.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Quality Measures in Biometric Systems (2015). In: Encyclopedia of Biometrics / [ed] Stan Z. Li & Anil K. Jain, New York: Springer Science+Business Media B.V., 2015, 2, p. 1287-1297. Chapter in book (Refereed)
    Abstract [en]

    Synonyms

    Quality assessment; Biometric quality; Quality-based processing

    Definition

    Since the establishment of biometrics as a specific research area in the late 1990s, the biometric community has focused its efforts on the development of accurate recognition algorithms [1]. Nowadays, biometric recognition is a mature technology that is used in many applications, offering greater security and convenience than traditional methods of personal recognition [2].

    During the past few years, biometric quality measurement has become an important concern after a number of studies and technology benchmarks demonstrated how the performance of biometric systems is heavily affected by the quality of biometric signals [3]. This operationally important step has nevertheless been under-researched compared to the primary feature extraction and pattern recognition tasks [4]. One of the main challenges facing biometric technologies is performance degradation in less controlled situations, and the problem of biometric quality measurement has become even more pressing with the proliferation of portable handheld devices with at-a-distance and on-the-move acquisition capabilities. These will require robust algorithms capable of handling a range of changing characteristics [2]. Another important example is forensics, in which intrinsic operational factors further degrade recognition performance.

    A number of factors can affect the quality of biometric signals, and a quality measure can play numerous roles in the context of biometric systems. This section summarizes the state of the art in the biometric quality problem, giving an overall framework of the different challenges involved.

  • 32.
    Alonso-Fernandez, Fernando
    et al.
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Fierrez-Aguilar, Julian
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Fronthaler, Hartwig
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Kollreider, Klaus
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Ortega-Garcia, Javier
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Gonzalez-Rodriguez, Joaquin
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Combining multiple matchers for fingerprint verification: A case study in BioSecure Network of Excellence (2007). In: Annales des télécommunications, ISSN 0003-4347, E-ISSN 1958-9395, Vol. 62, no 1-2, p. 62-82. Article in journal (Refereed)
    Abstract [en]

    We report on experiments for the fingerprint modality conducted during the First BioSecure Residential Workshop. Two reference systems for fingerprint verification have been tested together with two additional non-reference systems. These systems follow different approaches of fingerprint processing and are discussed in detail. Fusion experiments involving different combinations of the available systems are presented. The experimental results show that the best recognition strategy involves both minutiae-based and correlation-based measurements. Regarding the fusion experiments, the best relative improvement is obtained when fusing systems that are based on heterogeneous strategies for feature extraction and/or matching. The best combinations of two/three/four systems always include the best individual systems whereas the best verification performance is obtained when combining all the available systems.

  • 33.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology.
    Buades, Jose M.
    University of Balearic Islands, Palma, Spain.
    Tiwari, Prayag
    Halmstad University, School of Information Technology.
    Bigun, Josef
    Halmstad University, School of Information Technology.
    An Explainable Model-Agnostic Algorithm for CNN-Based Biometrics Verification (2023). In: 2023 IEEE International Workshop on Information Forensics and Security (WIFS), Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper (Refereed)
    Abstract [en]

    This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate under a biometric verification setting. LIME was initially proposed for networks with the same output classes used for training, and it employs the softmax probability to determine which regions of the image contribute the most to classification. However, in a verification setting, the classes to be recognized have not been seen during training. In addition, instead of using the softmax output, face descriptors are usually obtained from a layer before the classification layer. The model is adapted to achieve explainability via cosine similarity between feature vectors of perturbed versions of the input image. The method is showcased for face biometrics with two CNN models based on MobileNetv2 and ResNet50. © 2023 IEEE.
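    The cosine-similarity adaptation of LIME described in this abstract can be sketched as follows. This is an illustrative outline, not the authors' code: `embed` stands in for the CNN descriptor layer and `segments` for a superpixel map, and both names (as well as the zero-occlusion perturbation) are assumptions for the sketch.

    ```python
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verification_lime(image, embed, segments, n_samples=500, rng=None):
        """LIME-style importance for verification: occlude random subsets of
        segments, score each perturbed image by the cosine similarity of its
        descriptor to the unperturbed one, then fit a linear surrogate whose
        per-segment coefficients act as importance scores."""
        rng = np.random.default_rng(rng)
        ref = embed(image)
        seg_ids = np.unique(segments)
        masks = rng.integers(0, 2, size=(n_samples, seg_ids.size))  # 1 = keep
        sims = np.empty(n_samples)
        for i, m in enumerate(masks):
            pert = image.copy()
            for sid, keep in zip(seg_ids, m):
                if not keep:
                    pert[segments == sid] = 0   # occlude the segment
            sims[i] = cosine(embed(pert), ref)
        # least-squares linear surrogate: one coefficient per segment + bias
        X = np.column_stack([masks, np.ones(n_samples)])
        coef, *_ = np.linalg.lstsq(X, sims, rcond=None)
        return dict(zip(seg_ids.tolist(), coef[:-1]))
    ```

    Segments whose presence raises the similarity most receive the largest coefficients, which mirrors how vanilla LIME ranks regions by their effect on the softmax probability.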

  • 34.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Buades Rubio, Jose Maria
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Palma, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning (2024). In: Progress in Artificial Intelligence and Pattern Recognition. IWAIPR 2023. / [ed] Hernández Heredia, Y.; Milián Núñez, V.; Ruiz Shulcloper, J., Cham: Springer, 2024, Vol. 14335, p. 349-361. Conference paper (Refereed)
    Abstract [en]

    The widespread use of mobile devices for various digital services has created a need for reliable and real-time person authentication. In this context, facial recognition technologies have emerged as a dependable method for verifying users due to the prevalence of cameras in mobile devices and their integration into everyday applications. The rapid advancement of deep Convolutional Neural Networks (CNNs) has led to numerous face verification architectures. However, these models are often large and impractical for mobile applications, reaching sizes of hundreds of megabytes with millions of parameters. We address this issue by developing SqueezerFaceNet, a light face recognition network with less than 1M parameters. This is achieved by applying a network pruning method based on Taylor scores, where filters with small importance scores are removed iteratively. Starting from an already small network (of 1.24M parameters) based on SqueezeNet, we show that it can be further reduced (by up to 40%) without an appreciable loss in performance. To the best of our knowledge, we are the first to evaluate network pruning methods for the task of face recognition. © 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.
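    Taylor-score pruning, as described here, ranks each filter by the first-order estimate of the loss change its removal would cause, i.e. |activation × gradient| accumulated over a batch. The following is a minimal numpy sketch of one pruning iteration under stated assumptions; the actual method operates on CNN filter maps with fine-tuning between iterations, and the function and variable names are illustrative.

    ```python
    import numpy as np

    def taylor_scores(acts, grads):
        """First-order Taylor importance of each unit/filter:
        |sum over the batch of activation * dLoss/dActivation|."""
        return np.abs((acts * grads).sum(axis=0))

    def prune_step(W_out, scores, frac=0.25):
        """One pruning iteration: zero the outgoing weights of the
        lowest-scoring fraction of units (they can then be removed)."""
        k = max(1, int(len(scores) * frac))
        drop = np.argsort(scores)[:k]
        W = W_out.copy()
        W[:, drop] = 0.0
        return W, drop
    ```

    Repeating score-prune-finetune cycles, rather than pruning once, is what lets the network shrink substantially without an appreciable accuracy loss.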

  • 35.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Ramis, Silvia
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Spain.
    Perales, Francisco J.
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Facial Masks and Soft-Biometrics: Leveraging Face Recognition CNNs for Age and Gender Prediction on Mobile Ocular Images2021In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 10, no 5, p. 562-580Article in journal (Refereed)
    Abstract [en]

    We address the use of selfie ocular images captured with smartphones to estimate age and gender. Partial face occlusion has become an issue due to the mandatory use of face masks. Also, the use of mobile devices has exploded, with the pandemic further accelerating the migration to digital services. However, state-of-the-art solutions in related tasks such as identity or expression recognition employ large Convolutional Neural Networks, whose use in mobile devices is infeasible due to hardware limitations and size restrictions of downloadable applications. To counteract this, we adapt two existing lightweight CNNs proposed in the context of the ImageNet Challenge, and two additional architectures proposed for mobile face recognition. Since datasets for soft-biometrics prediction using selfie images are limited, we counteract over-fitting by using networks pre-trained on ImageNet. Furthermore, some networks are further pre-trained for face recognition, for which very large training databases are available. Since both tasks employ similar input data, we hypothesize that such a strategy can be beneficial for soft-biometrics estimation. A comprehensive study of the effects of different pre-training over the employed architectures is carried out, showing that, in most cases, a better accuracy is obtained after the networks have been fine-tuned for face recognition. © The Authors

    Download full text (pdf)
    fulltext
  • 36.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Ramis, Silvia
    University of Balearic Islands, Spain.
    Perales, Francisco J.
    University of Balearic Islands, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Soft-Biometrics Estimation In the Era of Facial Masks2020In: 2020 International Conference of the Biometrics Special Interest Group (BIOSIG), Piscataway, N.J.: IEEE, 2020, p. 1-6Conference paper (Refereed)
    Abstract [en]

    We analyze the use of images from face parts to estimate soft-biometrics indicators. Partial face occlusion is common in unconstrained scenarios, and it has become mainstream during the COVID-19 pandemic due to the use of masks. Here, we apply existing pre-trained CNN architectures, proposed in the context of the ImageNet Large Scale Visual Recognition Challenge, to the tasks of gender, age, and ethnicity estimation. Experiments are done with 12007 images from the Labeled Faces in the Wild (LFW) database. We show that such off-the-shelf features can effectively estimate soft-biometrics indicators using only the ocular region. For completeness, we also evaluate images showing only the mouth region. In overall terms, the network providing the best accuracy only suffers accuracy drops of 2-4% when using the ocular region, in comparison to using the entire face. Our approach is also shown to outperform in several tasks two commercial off-the-shelf systems (COTS) that employ the whole face, even if we only use the eye or mouth regions. © 2020 German Computer Association (Gesellschaft für Informatik e.V.).

    Download full text (pdf)
    fulltext
  • 37.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Tiwari, Prayag
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Bigun, Josef
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition2024Conference paper (Refereed)
  • 38.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Mikaelyan, Anna
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Compact Multi-scale Periocular Recognition Using SAFE Features2016In: Proceedings - International Conference on Pattern Recognition, Washington: IEEE, 2016, p. 1455-1460, article id 7899842Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor, which encodes the presence of various symmetric curve families around image key points. We use the sclera center as single key point for feature extraction, highlighting the object-like identity properties that concentrate at this unique point of the eye. As demonstrated, such discriminative properties can be encoded with a reduced set of symmetric curves. Experiments are done with a database of periocular images captured with a digital camera. We test our system against reference periocular features, achieving top performance with a considerably smaller feature vector (given by the use of a single key point). All the systems tested also show a nearly steady correlation between acquisition distance and performance, and they are also able to cope well when enrolment and test images are not captured at the same distance. Fusion experiments among the available systems are also provided. © 2016 IEEE

  • 39.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Mikaelyan, Anna
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Comparison and Fusion of Multiple Iris and Periocular Matchers Using Near-Infrared and Visible Images2015In: 3rd International Workshop on Biometrics and Forensics, IWBF 2015, Piscataway, NJ: IEEE Press, 2015, article id 7110234Conference paper (Refereed)
    Abstract [en]

    Periocular refers to the facial region in the eye vicinity. It can be easily obtained with existing face and iris setups, and it appears in iris images, so its fusion with the iris texture has a potential to improve the overall recognition. It is also suggested that iris is more suited to near-infrared (NIR) illumination, whereas the periocular modality is best for visible (VW) illumination. Here, we evaluate three periocular and three iris matchers based on different features. As experimental data, we use five databases, three acquired with a close-up NIR camera, and two in VW light with a webcam and a digital camera. We observe that the iris matchers perform better than the periocular matchers with NIR data, and the opposite with VW data. However, in both cases, their fusion can provide additional performance improvements. This is especially relevant with VW data, where the iris matchers perform significantly worse (due to low resolution), but they are still able to complement the periocular modality. © 2015 IEEE.

    Download full text (pdf)
    fulltext
  • 40.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Raja, Kiran B.
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Busch, Christoph
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Log-Likelihood Score Level Fusion for Improved Cross-Sensor Smartphone Periocular Recognition2017In: 2017 25th European Signal Processing Conference (EUSIPCO), Piscataway: IEEE, 2017, p. 281-285, article id 8081211Conference paper (Refereed)
    Abstract [en]

    The proliferation of cameras and personal devices results in a wide variability of imaging conditions, producing large intra-class variations and a significant performance drop when images from heterogeneous environments are compared. However, many applications regularly need to deal with data from different sources, thus needing to overcome these interoperability problems. Here, we employ fusion of several comparators to improve periocular performance when images from different smartphones are compared. We use a probabilistic fusion framework based on linear logistic regression, in which fused scores tend to be log-likelihood ratios, obtaining a reduction in cross-sensor EER of up to 40% due to the fusion. Our framework also provides an elegant and simple solution to handle signals from different devices, since same-sensor and cross-sensor score distributions are aligned and mapped to a common probabilistic domain. This allows the use of Bayes thresholds for optimal decision making, eliminating the need for sensor-specific thresholds, which is essential in operational conditions because the threshold setting critically determines the accuracy of the authentication process in many applications. © EURASIP 2017
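
    The fusion scheme this abstract describes can be sketched as follows. The weights and comparator scores below are hypothetical placeholders; in the paper the fusion parameters are trained with linear logistic regression on development data, so that the fused score behaves as a log-likelihood ratio and a Bayes threshold can be applied directly.

    ```python
    import math

    # Illustrative sketch of score-level fusion via linear logistic
    # regression. The fused score s = b + sum_i w_i * s_i tends toward a
    # log-likelihood ratio (LLR), so the decision threshold follows from
    # Bayes theory instead of being tuned per sensor.

    def fuse(scores, weights, bias):
        """Linear fusion of comparator scores into a single LLR."""
        return bias + sum(w * s for w, s in zip(weights, scores))

    def bayes_decision(llr, prior=0.5, c_miss=1.0, c_fa=1.0):
        """Accept if the LLR exceeds the Bayes threshold; with equal
        priors and costs the threshold is exactly 0."""
        threshold = math.log((c_fa * (1 - prior)) / (c_miss * prior))
        return llr > threshold

    weights, bias = [1.2, 0.8, 0.5], -0.3           # hypothetical trained values
    genuine = fuse([2.1, 1.5, 0.9], weights, bias)   # scores from 3 comparators
    impostor = fuse([-1.0, -0.4, 0.2], weights, bias)
    ```

    The practical benefit is that same-sensor and cross-sensor scores land in the same probabilistic range, so one threshold serves all device pairings.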

  • 41.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology.
    Raja, Kiran B.
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Raghavendra, R.
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Busch, Christoph
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Bigun, Josef
    Halmstad University, School of Information Technology.
    Vera-Rodriguez, Ruben
    Universidad Autónoma de Madrid, Madrid, Spain.
    Fierrez, Julian
    Universidad Autónoma de Madrid, Madrid, Spain.
    Cross-sensor periocular biometrics in a global pandemic: Comparative benchmark and novel multialgorithmic approach2022In: Information Fusion, ISSN 1566-2535, E-ISSN 1872-6305, Vol. 83-84, p. 110-130Article in journal (Refereed)
    Abstract [en]

    The massive availability of cameras and personal devices results in a wide variability between imaging conditions, producing large intra-class variations and a significant performance drop if images from heterogeneous environments are compared for person recognition purposes. However, as biometric solutions are extensively deployed, it will be common to replace acquisition hardware as it is damaged or newer designs appear or to exchange information between agencies or applications operating in different environments. Furthermore, variations in imaging spectral bands can also occur. For example, face images are typically acquired in the visible (VIS) spectrum, while iris images are usually captured in the near-infrared (NIR) spectrum. However, cross-spectrum comparison may be needed if, for example, a face image obtained from a surveillance camera needs to be compared against a legacy database of iris imagery. Here, we propose a multialgorithmic approach to cope with periocular images captured with different sensors. With face masks in the front line to fight against the COVID-19 pandemic, periocular recognition is regaining popularity since it is the only region of the face that remains visible. As a solution to the mentioned cross-sensor issues, we integrate different biometric comparators using a score fusion scheme based on linear logistic regression. This approach is trained to improve the discriminating ability and, at the same time, to encourage that fused scores are represented by log-likelihood ratios. This allows easy interpretation of output scores and the use of Bayes thresholds for optimal decision-making since scores from different comparators are in the same probabilistic range. We evaluate our approach in the context of the 1st Cross-Spectral Iris/Periocular Competition, whose aim was to compare person recognition approaches when periocular data from visible and near-infrared images is matched. 
The proposed fusion approach achieves reductions in the error rates of up to 30%–40% in cross-spectral NIR–VIS comparisons with respect to the best individual system, leading to an EER of 0.2% and a FRR of just 0.47% at FAR = 0.01%. It also represents the best overall approach of the mentioned competition. Experiments are also reported with a database of VIS images from two different smartphones as well, achieving even bigger relative improvements and similar performance numbers. We also discuss the proposed approach from the point of view of template size and computation times, with the most computationally heavy comparator playing an important role in the results. Lastly, the proposed method is shown to outperform other popular fusion approaches in multibiometrics, such as the average of scores, Support Vector Machines, or Random Forest. © 2022 The Authors

  • 42.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Sharon Belvisi, Nicole Mariah
    Halmstad University, School of Information Technology.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Muhammad, Naveed
    Institute of Computer Science, University of Tartu, Tartu , Estonia.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Writer Identification Using Microblogging Texts for Social Media Forensics2021In: IEEE Transactions on Biometrics, Behavior, and Identity Science, E-ISSN 2637-6407, Vol. 3, no 3, p. 405-426Article in journal (Refereed)
    Abstract [en]

    Establishing authorship of online texts is fundamental to combat cybercrimes. Unfortunately, text length is limited on some platforms, making the challenge harder. We aim at identifying the authorship of Twitter messages limited to 140 characters. We evaluate popular stylometric features, widely used in literary analysis, and specific Twitter features like URLs, hashtags, replies or quotes. We use two databases with 93 and 3957 authors, respectively. We test varying sized author sets and varying amounts of training/test texts per author. Performance is further improved by feature combination via automatic selection. With a large amount of training Tweets (>500), a good accuracy (Rank-5>80%) is achievable with only a few dozens of test Tweets, even with several thousands of authors. With smaller sample sizes (10-20 training Tweets), the search space can be diminished by 9-15% while keeping a high chance that the correct author is retrieved among the candidates. In such cases, automatic attribution can provide significant time savings to experts in suspect search. For completeness, we report verification results. With few training/test Tweets, the EER is above 20-25%, which is reduced to < 15% if hundreds of training Tweets are available. We also quantify the computational complexity and time permanence of the employed features. © 2019 IEEE.
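
    The kinds of features this abstract mentions can be illustrated with a small extractor: classic stylometric counts plus Twitter-specific markers such as URLs, hashtags, and mentions. The feature set below is a toy illustration; the paper's actual set is much larger and is refined by automatic feature selection.

    ```python
    import re
    from collections import Counter

    # Illustrative stylometric feature extraction for short texts.
    # Character n-grams are among the strongest features in authorship
    # attribution; the platform-specific counts capture writing habits.

    def tweet_features(text):
        words = re.findall(r"[A-Za-z']+", text)
        feats = {
            "n_chars": len(text),
            "n_words": len(words),
            "avg_word_len": sum(map(len, words)) / max(len(words), 1),
            "n_urls": len(re.findall(r"https?://\S+", text)),
            "n_hashtags": text.count("#"),
            "n_mentions": text.count("@"),
            "upper_ratio": sum(c.isupper() for c in text) / max(len(text), 1),
        }
        # Most frequent character trigrams of this text.
        feats["top_char_3grams"] = Counter(
            text[i:i + 3] for i in range(len(text) - 2)
        ).most_common(3)
        return feats

    f = tweet_features("Check this out https://t.co/x #news @editor")
    ```

    In an attribution pipeline, vectors like these would be computed per author from training Tweets and compared against the features of the disputed text.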

    Download full text (pdf)
    fulltext
  • 43.
    Assabie, Yaregal
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    A comprehensive Dataset for Ethiopic Handwriting Recognition2009In: Proceedings SSBA '09: Symposium on Image Analysis, Halmstad University, Halmstad, March 18-20, 2009 / [ed] Josef Bigun & Antanas Verikas, Halmstad: Halmstad University , 2009, p. 41-43Chapter in book (Other academic)
    Abstract [en]

    Ethiopic script is used by several languages in Ethiopia for writing. We present a comprehensive dataset of handwritten Ethiopic script called DEHR (Dataset for Ethiopic Handwriting Recognition) captured both offline and online. The offline dataset includes isolated characters, Ethiopian church documents and ordinary handwritten texts dealing with various real-life issues. The ordinary texts and isolated characters were freely written by several participants. The church documents are written in Geez and Amharic languages whereas the language for ordinary texts is Amharic only. The online dataset was collected by using two Digimemo devices of different sizes. For isolated characters and online dataset, all the 265 character samples used by Amharic language are included. The dataset is intended to set a benchmark for training and/or testing handwriting recognition, character and word segmentation, and text line detection. The dataset can be accessed by contacting the authors or via http://www.hh.se/staff/josef/.

  • 44.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa, Ethiopia .
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    A Hybrid System for Robust Recognition of Ethiopic Script2007In: Ninth International Conference on Document Analysis and Recognition: proceedings : Curtiba, Paraná, Brazil, September 23-26, 2007 / [ed] IEEE Computer Society, Los Alamitos, Calif.: IEEE Computer Society, 2007, p. 556-560Conference paper (Refereed)
    Abstract [en]

    In real life, documents contain several font types, styles, and sizes. However, many character recognition systems show good results for specific type of documents and fail to produce satisfactory results for others. Over the past decades, various pattern recognition techniques have been applied with the aim to develop recognition systems insensitive to variations in the characteristics of documents. In this paper, we present a robust recognition system for Ethiopic script using a hybrid of classifiers. The complex structures of Ethiopic characters are structurally and syntactically analyzed, and represented as a pattern of simpler graphical units called primitives. The pattern is used for classification of characters using similarity-based matching and neural network classifier. The classification result is further refined by using template matching. A pair of directional filters is used for creating templates and extracting structural features. The recognition system is tested by real life documents and experimental results are reported.

    Download full text (pdf)
    FULLTEXT01
  • 45.
    Assabie, Yaregal
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    A neural network approach for multifont and size-independent recognition of ethiopic characters2007In: Progress in pattern recognition / [ed] Singh, S, Singh, M, London: Springer London, 2007, p. 129-137Conference paper (Refereed)
    Abstract [en]

    Artificial neural networks are one of the most commonly used tools for character recognition problems, and usually they take gray values of 2D character images as inputs. In this paper, we propose a novel neural network classifier whose input is 1D string patterns generated from the spatial relationships of primitive structures of Ethiopic characters. The spatial relationships of primitives are modeled by a special tree structure from which a unique set of string patterns are generated for each character. Training the neural network with string patterns of different font types and styles enables the classifier to handle variations in font types, sizes, and styles. We use a pair of directional filters for extracting primitives and their spatial relationships. The robustness of the proposed recognition system is tested by real life documents and experimental results are reported.

  • 46.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa, Ethiopia .
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Ethiopic Character Recognition Using Direction Field Tensor2006In: The 18th International Conference on Pattern Recognition: proceedings : 20-24 August, 2006, Hong Kong, Los Alamitos, Calif.: IEEE Computer Society, 2006, p. 284-287Conference paper (Refereed)
    Abstract [en]

    Many languages in Ethiopia use a unique alphabet called Ethiopic for writing. However, there is no OCR system developed to date. In an effort to develop automatic recognition of Ethiopic script, a novel system is designed by applying structural and syntactic techniques. The recognition system is developed by extracting primitive structural features and their spatial relationships. A special tree structure is used to represent the spatial relationship of primitive structures. For each character, a unique string pattern is generated from the tree and recognition is achieved by matching the string against a stored knowledge base of the alphabet. To implement the recognition system, we use direction field tensor as a tool for character segmentation, and extraction of structural features and their spatial relationships. Experimental results are reported.

    Download full text (pdf)
    FULLTEXT01
  • 47.
    Assabie, Yaregal
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Ethiopic Document Image Database for Testing Character Recognition Systems2006Report (Other academic)
    Abstract [en]

    In this paper we describe the acquisition and content of a large database of Ethiopic documents for testing and evaluating character recognition systems. The Ethiopic Document Image Database (EDIDB) contains documents written in Amharic and Geez languages. The database was built from a variety of documents such as printouts, books, newspapers, and magazines. Documents written in various font types, sizes and styles were included in the database. Degraded and poor quality documents were also included in the database to represent the real life situation. A total of 1,204 pages were scanned at a resolution of 300 dpi and saved as grayscale images of JPEG format. We also describe an evaluation protocol for standardizing the comparison of recognition systems and their results. The database is made available to the research community through http://www.hh.se/staff/josef/.

  • 48.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa Ethiopia.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    HMM-Based Handwritten Amharic Word Recognition with Feature Concatenation2009In: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, New York: IEEE Press, 2009, p. 961-965Conference paper (Refereed)
    Abstract [en]

    Amharic is the official language of Ethiopia and uses Ethiopic script for writing. In this paper, we present writer-independent HMM-based Amharic word recognition for offline handwritten text. The underlying units of the recognition system are a set of primitive strokes whose combinations form handwritten Ethiopic characters. For each character, possibly occurring sequences of primitive strokes and their spatial relationships, collectively termed as primitive structural features, are stored as feature list. Hidden Markov models for Amharic words are trained with such sequences of structural features of characters constituting words. The recognition phase does not require segmentation of characters but only requires text line detection and extraction of structural features in each text line. Text lines and primitive structural features are extracted by making use of direction field tensor. The performance of the recognition system is tested by a database of unconstrained handwritten documents collected from various sources.
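
    The word-model decoding this abstract describes can be illustrated with a toy Viterbi decoder scoring a sequence of observed stroke primitives against an HMM. The states, primitive classes, and probabilities below are invented for illustration; the paper trains word-level HMMs on sequences of primitive structural features extracted with direction field tensors.

    ```python
    # Illustrative Viterbi decoding of a primitive-stroke sequence.
    # States and probabilities are toy values, not trained parameters.

    def viterbi(obs, states, start_p, trans_p, emit_p):
        """Return (probability, path) of the most likely state sequence."""
        V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
        for o in obs[1:]:
            layer = {}
            for s in states:
                p, path = max(
                    ((V[-1][ps][0] * trans_p[ps][s], V[-1][ps][1])
                     for ps in states),
                    key=lambda t: t[0],
                )
                layer[s] = (p * emit_p[s][o], path + [s])
            V.append(layer)
        return max(V[-1].values(), key=lambda t: t[0])

    states = ("vertical", "appendage")          # hypothetical primitive classes
    start_p = {"vertical": 0.7, "appendage": 0.3}
    trans_p = {"vertical": {"vertical": 0.4, "appendage": 0.6},
               "appendage": {"vertical": 0.6, "appendage": 0.4}}
    emit_p = {"vertical": {"long": 0.8, "short": 0.2},
              "appendage": {"long": 0.3, "short": 0.7}}
    prob, path = viterbi(["long", "short", "long"],
                         states, start_p, trans_p, emit_p)
    ```

    In the paper, one such model per word is scored against the extracted feature sequence, so no character segmentation is needed, only text line detection and feature extraction.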

  • 49.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa Ethiopia .
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Lexicon-based Offline Recognition of Amharic Words in Unconstrained Handwritten Text2008In: 19th International Conference on Pattern Recognition: (ICPR 2008) ; Tampa, Florida, USA 8-11 December 2008, New York: IEEE Computer Society, 2008, article id 4761145Conference paper (Refereed)
    Abstract [en]

    This paper describes an offline handwriting recognition system for Amharic words based on lexicon. The system computes direction fields of scanned handwritten documents, from which pseudo-characters are segmented. The pseudo-characters are organized based on their proximity and direction to form text lines. Words are then segmented by analyzing the relative gap between subsequent pseudo-characters in text lines. For each segmented word image, the structural characteristics of pseudo-characters are syntactically analyzed to predict a set of plausible characters forming the word. The most likely word is finally selected among candidates by matching against the lexicon. The system is tested by a database of unconstrained handwritten Amharic documents collected from various sources. The lexicon is prepared from words appearing in the collected database.
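
    The final lexicon-matching step can be sketched with minimum edit distance, one common way to realize this kind of candidate selection; the lexicon entries below are hypothetical romanized stand-ins, and the abstract does not specify the exact matching criterion used.

    ```python
    # Illustrative lexicon-based word selection: the recognizer proposes a
    # (possibly erroneous) character string, and the closest lexicon entry
    # by Levenshtein edit distance is chosen.

    def edit_distance(a, b):
        """Classic Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def best_match(candidate, lexicon):
        """Pick the lexicon word closest to the recognized string."""
        return min(lexicon, key=lambda w: edit_distance(candidate, w))

    lexicon = ["selam", "semay", "saba"]   # hypothetical romanized entries
    word = best_match("selan", lexicon)    # one substitution from "selam"
    ```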

  • 50.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa, Ethiopia .
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Multifont size-resilient recognition system for Ethiopic script2007In: International Journal on Document Analysis and Recognition, ISSN 1433-2833, E-ISSN 1433-2825, Vol. 10, no 2, p. 85-100Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel framework for recognition of Ethiopic characters using structural and syntactic techniques. Graphically complex characters are represented by the spatial relationships of less complex primitives which form a unique set of patterns for each character. The spatial relationship is represented by a special tree structure which is also used to generate string patterns of primitives. Recognition is then achieved by matching the generated string pattern against each pattern in the alphabet knowledge-base built for this purpose. The recognition system tolerates variations on the parameters of characters like font type, size and style. Direction field tensor is used as a tool to extract structural features.
