hh.se Publications
1 - 50 of 384
  • 1.
    Abiri, Najmeh
    et al.
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Linse, Björn
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Edén, Patrik
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Ohlsson, Mattias
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Establishing strong imputation performance of a denoising autoencoder in a wide range of missing data problems (2019). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 365, p. 137-146. Article in journal (Refereed)
    Abstract [en]

    Dealing with missing data in data analysis is inevitable. Although powerful imputation methods that address this problem exist, there is still much room for improvement. In this study, we examined single imputation based on deep autoencoders, motivated by the apparent success of deep learning to efficiently extract useful dataset features. We have developed a consistent framework for both training and imputation. Moreover, we benchmarked the results against state-of-the-art imputation methods on different data sizes and characteristics. The work was not limited to the one-type variable dataset; we also imputed missing data with multi-type variables, e.g., a combination of binary, categorical, and continuous attributes. To evaluate the imputation methods, we randomly corrupted the complete data, with varying degrees of corruption, and then compared the imputed and original values. In all experiments, the developed autoencoder obtained the smallest error for all ranges of initial data corruption. © 2019 Elsevier B.V.
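    The evaluation protocol described in this abstract (randomly corrupt complete data, impute, compare the imputed values against the originals) can be sketched as follows. Mean imputation stands in for the paper's denoising autoencoder, which is not reproduced here; all names and sizes are illustrative.

```python
import random

def corrupt(data, frac, seed=0):
    """Randomly mask a fraction of entries (None = missing),
    mimicking the paper's corruption of complete data."""
    rng = random.Random(seed)
    masked = [row[:] for row in data]
    cells = [(i, j) for i in range(len(data)) for j in range(len(data[0]))]
    for i, j in rng.sample(cells, int(frac * len(cells))):
        masked[i][j] = None
    return masked

def mean_impute(data):
    """Baseline single imputation: column means stand in for the
    denoising autoencoder used in the paper."""
    cols = range(len(data[0]))
    means = []
    for j in cols:
        vals = [row[j] for row in data if row[j] is not None]
        means.append(sum(vals) / len(vals))
    return [[means[j] if row[j] is None else row[j] for j in cols]
            for row in data]

def rmse(original, imputed, masked):
    """Compare imputed and original values on the corrupted cells only."""
    errs = [(original[i][j] - imputed[i][j]) ** 2
            for i in range(len(original)) for j in range(len(original[0]))
            if masked[i][j] is None]
    return (sum(errs) / len(errs)) ** 0.5

original = [[float(i + j) for j in range(4)] for i in range(50)]
masked = corrupt(original, frac=0.2)
error = rmse(original, mean_impute(masked), masked)
```

Varying `frac` reproduces the "varying degrees of corruption" axis of the evaluation.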

  • 2.
    Aein, Mohamad Javad
    et al.
    Department for Computational Neuroscience at the Bernstein Center Göttingen (Inst. of Physics 3) & Leibniz Science Campus for Primate Cognition, Georg-August-Universität Göttingen, Göttingen, Germany.
    Aksoy, Eren
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Wörgötter, Florentin
    Department for Computational Neuroscience at the Bernstein Center Göttingen (Inst. of Physics 3) & Leibniz Science Campus for Primate Cognition, Georg-August-Universität Göttingen, Göttingen, Germany.
    Library of actions: Implementing a generic robot execution framework by using manipulation action semantics (2019). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 38, no 8, p. 910-934. Article in journal (Refereed)
    Abstract [en]

    Drive-thru-Internet is a scenario in cooperative intelligent transportation systems (C-ITSs), where a road-side unit (RSU) provides multimedia services to vehicles that pass by. Performance of the drive-thru-Internet depends on various factors, including data traffic intensity, vehicle traffic density, and radio-link quality within the coverage area of the RSU, and must be evaluated at the stage of system design in order to fulfill the quality-of-service requirements of the customers in C-ITS. In this paper, we present an analytical framework that models downlink traffic in a drive-thru-Internet scenario by means of a multidimensional Markov process: the packet arrivals in the RSU buffer constitute Poisson processes and the transmission times are exponentially distributed. Taking into account the state space explosion problem associated with multidimensional Markov processes, we use iterative perturbation techniques to calculate the stationary distribution of the Markov chain. Our numerical results reveal that the proposed approach yields accurate estimates of various performance metrics, such as the mean queue content and the mean packet delay for a wide range of workloads. © 2019 IEEE.
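    The stationary-distribution computation this abstract relies on can be illustrated with plain power iteration on a small birth-death chain; the paper's multidimensional Markov model and its iterative perturbation techniques are not reproduced here, and the rates and buffer size below are invented.

```python
def stationary(P, iters=2000):
    """Stationary distribution of a finite Markov chain by power
    iteration (a simple stand-in for the perturbation techniques the
    abstract applies to much larger multidimensional chains)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy birth-death chain for an RSU buffer: arrival prob 0.3,
# service prob 0.5, buffer capacity 3 packets (all numbers invented).
lam, mu, K = 0.3, 0.5, 4
P = [[0.0] * K for _ in range(K)]
for s in range(K):
    if s + 1 < K:
        P[s][s + 1] = lam          # packet arrival
    if s - 1 >= 0:
        P[s][s - 1] = mu           # packet departure
    P[s][s] = 1.0 - sum(P[s])      # stay put
pi = stationary(P)
mean_queue = sum(s * p for s, p in enumerate(pi))   # mean queue content
```

Detailed balance for a birth-death chain gives the ratio `pi[s+1]/pi[s] = lam/mu`, which the iteration recovers.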

  • 3.
    Aksoy, Eren
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Baci, Saimir
    Volvo Technology AB, Volvo Group Trucks Technology, Vehicle Automation, Gothenburg, Sweden.
    Cavdar, Selcuk
    Volvo Technology AB, Volvo Group Trucks Technology, Vehicle Automation, Gothenburg, Sweden.
    SalsaNet: Fast Road and Vehicle Segmentation in LiDAR Point Clouds for Autonomous Driving (2020). In: IEEE Intelligent Vehicles Symposium: IV2020, Piscataway, N.J.: IEEE, 2020, p. 926-932. Conference paper (Refereed)
    Abstract [en]

    In this paper, we introduce a deep encoder-decoder network, named SalsaNet, for efficient semantic segmentation of 3D LiDAR point clouds. SalsaNet segments the road, i.e. drivable free-space, and vehicles in the scene by employing the Bird-Eye-View (BEV) image projection of the point cloud. To overcome the lack of annotated point cloud data, in particular for the road segments, we introduce an auto-labeling process which transfers automatically generated labels from the camera to LiDAR. We also explore the role of image-like projection of LiDAR data in semantic segmentation by comparing BEV with spherical-front-view projection and show that SalsaNet is projection-agnostic. We perform quantitative and qualitative evaluations on the KITTI dataset, which demonstrate that the proposed SalsaNet outperforms other state-of-the-art semantic segmentation networks in terms of accuracy and computation time. Our code and data are publicly available at https://gitlab.com/aksoyeren/salsanet.git.
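    The Bird-Eye-View projection mentioned in the abstract can be sketched roughly as follows; a max-height-per-cell encoding is assumed here, and SalsaNet's actual input channels and coordinate ranges may differ.

```python
def bev_projection(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=1.0):
    """Project 3D LiDAR points (x, y, z) onto a Bird-Eye-View grid,
    keeping the maximum height per cell (one common BEV encoding)."""
    H = int((x_range[1] - x_range[0]) / cell)
    W = int((y_range[1] - y_range[0]) / cell)
    grid = [[0.0] * W for _ in range(H)]
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            i = int((x - x_range[0]) / cell)   # forward-distance bin
            j = int((y - y_range[0]) / cell)   # lateral-distance bin
            grid[i][j] = max(grid[i][j], z)
    return grid

cloud = [(5.2, -1.0, 0.4), (5.7, -1.2, 1.6), (35.0, 10.0, 0.2)]
bev = bev_projection(cloud)
```

The resulting 2D grid is what a 2D encoder-decoder network can then segment as an image.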

  • 4.
    Ali Hamad, Rebeen
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Järpe, Eric
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Lundström, Jens
    JeCom Consulting, Halmstad, Sweden.
    Stability analysis of the t-SNE algorithm for human activity pattern data (2018). Conference paper (Refereed)
    Abstract [en]

    Health technological systems learning from and reacting to how humans behave in sensor-equipped environments are today being commercialized. These systems rely on the assumptions that training data and testing data share the same feature space and the same underlying distribution, which is commonly unrealistic in real-world applications. Instead, the use of transfer learning could be considered. In order to transfer knowledge between a source and a target domain, these should be mapped to a common latent feature space. In this work, the dimensionality reduction algorithm t-SNE is used to map data to a similar feature space and is further investigated through a proposed novel analysis of output stability. The proposed analysis, Normalized Linear Procrustes Analysis (NLPA), extends the existing Procrustes and Local Procrustes algorithms for aligning manifolds. The methods are tested on data reflecting human behaviour patterns collected in a smart home environment. Results show high partial output stability of the t-SNE algorithm for the tested input data, for which NLPA is able to detect clusters that are individually aligned and compared. The results highlight the importance of understanding output stability before incorporating dimensionality reduction algorithms into further computation, e.g. for transfer learning.
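    The classical 2D orthogonal Procrustes alignment that NLPA builds on might look like the sketch below (translation, uniform scale, and closed-form 2D rotation). This is not the paper's NLPA itself; the normalization and names are illustrative.

```python
import math

def procrustes_2d(X, Y):
    """Align 2D point set Y to X by translation, uniform scaling and
    rotation (classical orthogonal Procrustes). Returns the aligned
    points and the residual disparity after alignment."""
    def center(P):
        cx = sum(p[0] for p in P) / len(P)
        cy = sum(p[1] for p in P) / len(P)
        return [(x - cx, y - cy) for x, y in P]
    def normalize(P):
        s = math.sqrt(sum(x * x + y * y for x, y in P))
        return [(x / s, y / s) for x, y in P]
    Xn, Yn = normalize(center(X)), normalize(center(Y))
    # Closed-form optimal rotation angle in 2D.
    a = sum(x1 * y1 + x2 * y2 for (x1, x2), (y1, y2) in zip(Xn, Yn))
    b = sum(x2 * y1 - x1 * y2 for (x1, x2), (y1, y2) in zip(Xn, Yn))
    t = math.atan2(b, a)
    Yr = [(math.cos(t) * y1 - math.sin(t) * y2,
           math.sin(t) * y1 + math.cos(t) * y2) for y1, y2 in Yn]
    disparity = sum((p - q) ** 2 + (r - s) ** 2
                    for (p, r), (q, s) in zip(Xn, Yr))
    return Yr, disparity
```

A low disparity between two aligned t-SNE runs is what "output stability" would mean in this simplified setting.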

  • 5.
    Ali Hamad, Rebeen
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Kimura, Masashi
    Convergence Lab, Tokyo, Japan.
    Lundström, Jens
    Convergia Consulting, Halmstad, Sweden.
    Efficacy of Imbalanced Data Handling Methods on Deep Learning for Smart Homes Environments (2020). In: SN Computer Science, ISSN 2661-8907, Vol. 1, no 4, article id 204. Article in journal (Refereed)
    Abstract [en]

    Human activity recognition as an engineering tool as well as an active research field has become fundamental to many applications in various fields such as health care, smart home monitoring and surveillance. However, delivering sufficiently robust activity recognition systems from sensor data recorded in a smart home setting is a challenging task. Moreover, human activity datasets are typically highly imbalanced because generally certain activities occur more frequently than others. Consequently, it is challenging to train classifiers from imbalanced human activity datasets. Deep learning algorithms perform well on balanced datasets, yet their performance cannot be guaranteed on imbalanced datasets. Therefore, we aim to address the problem of class imbalance in deep learning for smart home data. We assess it on an Activities of Daily Living recognition dataset based on binary sensors. This paper proposes a data-level perspective combined with a temporal window technique to handle imbalanced human activities from smart homes in order to make the learning algorithms more sensitive to the minority class. The experimental results indicate that handling imbalanced human activities at the data level outperforms algorithm-level handling and improves the classification performance. © The Author(s) 2020
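    One data-level strategy of the kind the paper compares is random oversampling of minority-class windows until classes are balanced; the function and label names below are invented for illustration.

```python
import random
from collections import Counter

def oversample(windows, labels, seed=0):
    """Random oversampling: duplicate minority-class temporal windows
    until every class matches the majority count."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_w, out_l = list(windows), list(labels)
    for cls, n in counts.items():
        idx = [i for i, lab in enumerate(labels) if lab == cls]
        for _ in range(target - n):
            i = rng.choice(idx)
            out_w.append(windows[i])
            out_l.append(labels[i])
    return out_w, out_l

# Eight 'sleep' windows vs two 'cook' windows: heavily imbalanced.
windows = [[i] for i in range(10)]
labels = ['sleep'] * 8 + ['cook'] * 2
bal_w, bal_l = oversample(windows, labels)
```

After balancing, a classifier sees the minority activity as often as the majority one during training.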

  • 6.
    Ali Hamad, Rebeen
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Salguero Hidalgo, Alberto
    University of Cádiz, Cádiz, Spain.
    Bouguelia, Mohamed-Rafik
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Estevez, Macarena Espinilla
    University of Jaén, Jaén, Spain.
    Quero, Javier Medina
    University of Jaén, Jaén, Spain.
    Efficient Activity Recognition in Smart Homes Using Delayed Fuzzy Temporal Windows on Binary Sensors (2020). In: IEEE Journal of Biomedical and Health Informatics, ISSN 2168-2194, E-ISSN 2168-2208, Vol. 24, no 2, p. 387-395. Article in journal (Refereed)
    Abstract [en]

    Human activity recognition has become an active research field over the past few years due to its wide application in various fields such as health-care, smart home monitoring, and surveillance. Existing approaches for activity recognition in smart homes have achieved promising results. Most of these approaches evaluate real-time recognition of activities using only sensor activations that precede the evaluation time (where the decision is made). However, in several critical situations, such as diagnosing people with dementia, “preceding sensor activations” are not always sufficient to accurately recognize the inhabitant's daily activities at each evaluated time. To improve performance, we propose a method that delays the recognition process in order to include some sensor activations that occur after the point in time where the decision needs to be made. For this, the proposed method uses multiple incremental fuzzy temporal windows to extract features from both preceding and some oncoming sensor activations. The proposed method is evaluated with two temporal deep learning models (convolutional neural network and long short-term memory), on a binary sensor dataset of real daily living activities. The experimental evaluation shows that the proposed method achieves significantly better results than the real-time approach, and that the representation with fuzzy temporal windows enhances performance within deep learning models. © Copyright 2020 IEEE
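    The fuzzy temporal window idea (weighting both preceding and oncoming sensor activations around the evaluation time) can be illustrated with a single triangular membership function; the paper uses multiple incremental windows, and the shapes and parameters below are assumptions.

```python
def fuzzy_window_feature(activations, t_eval, center, width):
    """Aggregate binary-sensor activation times with a triangular fuzzy
    membership centred `center` seconds relative to the evaluation time
    (negative = preceding, positive = oncoming/delayed activations)."""
    def membership(dt):
        return max(0.0, 1.0 - abs(dt - center) / width)
    return sum(membership(t - t_eval) for t in activations)

acts = [95.0, 100.0, 104.0, 112.0]   # sensor activation timestamps (s)
preceding = fuzzy_window_feature(acts, t_eval=105.0, center=-5.0, width=10.0)
oncoming = fuzzy_window_feature(acts, t_eval=105.0, center=5.0, width=10.0)
```

The `oncoming` feature is only available once the decision is delayed past the evaluation time, which is the trade-off the paper studies.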

  • 7.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Barrachina, Javier
    Facephi Biometria, Alicante, Spain.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    SqueezeFacePoseNet: Lightweight Face Verification Across Different Poses for Mobile Platforms (2021). In: Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10-15, 2021, Proceedings, Part VIII / [ed] Alberto Del Bimbo, Rita Cucchiara, Stan Sclaroff, Giovanni Maria Farinella, Tao Mei, Marco Bertini, Hugo Jair Escalante, Roberto Vezzani, Berlin: Springer, 2021, p. 139-153. Conference paper (Refereed)
    Abstract [en]

    Ubiquitous and real-time person authentication has become critical after the breakthrough of all kinds of services provided via mobile devices. In this context, face technologies can provide reliable and robust user authentication, given the availability of cameras in these devices, as well as their widespread use in everyday applications. The rapid development of deep Convolutional Neural Networks (CNNs) has resulted in many accurate face verification architectures. However, their typical size (hundreds of megabytes) makes them infeasible to incorporate in downloadable mobile applications, where the entire file typically may not exceed 100 Mb. Accordingly, we address the challenge of developing a lightweight face recognition network of just a few megabytes that can operate with sufficient accuracy in comparison to much larger models. The network also should be able to operate under different poses, given the variability naturally observed in uncontrolled environments where mobile devices are typically used. In this paper, we adapt the lightweight SqueezeNet model, of just 4.4MB, to effectively provide cross-pose face recognition. After training on the MS-Celeb-1M and VGGFace2 databases, our model achieves an EER of 1.23% on the difficult frontal vs. profile comparison, and 0.54% on profile vs. profile images. Under less extreme variations involving frontal images in any of the enrolment/query images pair, EER is pushed down to <0.3%, and the FRR at FAR=0.1% to less than 1%. This makes our light model suitable for face recognition where at least acquisition of the enrolment image can be controlled. At the cost of a slight degradation in performance, we also test an even lighter model (of just 2.5MB) where regular convolutions are replaced with depth-wise separable convolutions. © 2021, Springer Nature Switzerland AG.
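    The parameter saving behind the lighter 2.5 MB variant (regular convolutions replaced with depth-wise separable ones) is easy to verify by counting weights; the layer sizes below are arbitrary examples, not the network's actual dimensions.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution layer (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k filter per input channel plus a 1 x 1 pointwise
    convolution -- the substitution behind the lighter model variant."""
    return c_in * k * k + c_in * c_out

# Arbitrary example layer: 64 -> 128 channels, 3 x 3 kernel.
regular = conv_params(64, 128, 3)
separable = depthwise_separable_params(64, 128, 3)
```

For this example layer the separable version needs roughly an eighth of the weights, which is why the model file shrinks so sharply.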

  • 8.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    A survey on periocular biometrics research (2016). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 82, part 2, p. 92-105. Article in journal (Refereed)
    Abstract [en]

    Periocular refers to the facial region in the vicinity of the eye, including eyelids, lashes and eyebrows. While face and irises have been extensively studied, the periocular region has emerged as a promising trait for unconstrained biometrics, following demands for increased robustness of face or iris systems. With a surprisingly high discrimination ability, this region can be easily obtained with existing setups for face and iris, and the requirement of user cooperation can be relaxed, thus facilitating the interaction with biometric systems. It is also available over a wide range of distances even when the iris texture cannot be reliably obtained (low resolution) or under partial face occlusion (close distances). Here, we review the state of the art in periocular biometrics research. A number of aspects are described, including: (i) existing databases, (ii) algorithms for periocular detection and/or segmentation, (iii) features employed for recognition, (iv) identification of the most discriminative regions of the periocular area, (v) comparison with iris and face modalities, (vi) soft-biometrics (gender/ethnicity classification), and (vii) impact of gender transformation and plastic surgery on the recognition accuracy. This work is expected to provide insight into the most relevant issues in periocular biometrics, giving a comprehensive coverage of the existing literature and current state of the art. © 2015 Elsevier B.V. All rights reserved.

  • 9.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    An Overview of Periocular Biometrics (2017). In: Iris and Periocular Biometric Recognition / [ed] Christian Rathgeb & Christoph Busch, London: The Institution of Engineering and Technology, 2017, p. 29-53. Chapter in book (Refereed)
    Abstract [en]

    Periocular biometrics specifically refers to the externally visible skin region of the face that surrounds the eye socket. Its utility is especially pronounced when the iris or the face cannot be properly acquired, being the ocular modality requiring the least constrained acquisition process. It appears over a wide range of distances, even under partial face occlusion (close distance) or low resolution iris (long distance), making it very suitable for unconstrained or uncooperative scenarios. It also avoids the need of iris segmentation, an issue in difficult images. In such situations, identifying a suspect where only the periocular region is visible is one of the toughest real-world challenges in biometrics. The richness of the periocular region in terms of identity is so high that the whole face can even be reconstructed only from images of the periocular region. The technological shift to mobile devices has also resulted in many identity-sensitive applications becoming prevalent on these devices.

  • 10.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Best Regions for Periocular Recognition with NIR and Visible Images (2014). In: 2014 IEEE International Conference on Image Processing (ICIP), Piscataway, NJ: IEEE Press, 2014, p. 4987-4991. Conference paper (Refereed)
    Abstract [en]

    We evaluate the most useful regions for periocular recognition. For this purpose, we employ our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the spectrum. We use both NIR and visible iris images. The best regions are selected via Sequential Forward Floating Selection (SFFS). The iris neighborhood (including sclera and eyelashes) is found as the best region with NIR data, while the surrounding skin texture (which is over-illuminated in NIR images) is the most discriminative region in visible range. To the best of our knowledge, only one work in the literature has evaluated the influence of different regions in the performance of periocular recognition algorithms. Our results point in the same direction, despite the use of completely different matchers. We also evaluate an iris texture matcher, providing fusion results with our periocular system as well. © 2014 IEEE.
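    The Sequential Forward Floating Selection (SFFS) procedure used to pick the best regions can be sketched as follows; the floating step is simplified, and the toy scoring function with its redundancy penalty is invented purely for illustration.

```python
def sffs(features, score, k):
    """Sequential Forward Floating Selection, simplified: greedy forward
    inclusion plus a conditional backward (floating) exclusion step.
    `score` evaluates any candidate subset."""
    selected = []
    while len(selected) < k:
        # Forward: add the feature that improves the criterion most.
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        # Floating: drop any feature whose removal raises the score.
        improved = True
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected):
                reduced = [g for g in selected if g != f]
                if score(reduced) > score(selected):
                    selected = reduced
                    improved = True
    return selected

# Invented scoring: per-region value minus a redundancy penalty for
# overlapping regions ('a' and 'b' carry nearly the same information).
values = {'a': 3.0, 'b': 3.0, 'c': 2.0, 'd': 1.0}
overlap = {frozenset(('a', 'b')): 4.0}
def score(S):
    s = sum(values[f] for f in S)
    for i in range(len(S)):
        for j in range(i + 1, len(S)):
            s -= overlap.get(frozenset((S[i], S[j])), 0.0)
    return s

best_regions = sffs(list(values), score, 2)
```

Note how the redundancy penalty steers the search away from picking two overlapping regions even though each scores well alone.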

  • 11.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Biometric Recognition Using Periocular Images (2013). Conference paper (Other academic)
    Abstract [en]

    We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum at different frequencies and orientations. A number of aspects are studied, including: 1) grid adaptation to dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, and 4) rotation compensation between query and test images. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, thus allowing this step to be removed for computational efficiency. Also, the performance is not affected substantially if we use a grid of fixed dimensions, and is even better in certain situations, avoiding the need for accurate detection of the iris region.
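    The retinotopic sampling grid mentioned in point 1 can be sketched as a log-polar set of sample points around the eye centre; the geometric ring spacing and all parameters below are assumptions, and Gabor responses would then be sampled at the returned points.

```python
import math

def retinotopic_grid(cx, cy, r_min, r_max, n_rings, n_points):
    """Log-polar ('retinotopic') sampling grid around a detected eye
    centre: ring radii grow geometrically outward, mimicking dense
    foveal sampling near the centre and sparse sampling far from it."""
    pts = []
    for i in range(n_rings):
        r = r_min * (r_max / r_min) ** (i / (n_rings - 1))
        for j in range(n_points):
            a = 2.0 * math.pi * j / n_points
            pts.append((cx + r * math.cos(a), cy + r * math.sin(a)))
    return pts

grid = retinotopic_grid(0.0, 0.0, 2.0, 16.0, 4, 8)
```

A fixed-size grid corresponds to freezing `r_min`/`r_max` instead of adapting them to the detected eye dimensions.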

  • 12.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Exploiting Periocular and RGB Information in Fake Iris Detection (2014). In: 2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO): 26-30 May 2014, Opatija, Croatia: Proceedings / [ed] Petar Biljanovic, Zeljko Butkovic, Karolj Skala, Stjepan Golubic, Marina Cicin-Sain, Vlado Sruk, Slobodan Ribaric, Stjepan Gros, Boris Vrdoljak, Mladen Mauher & Goran Cetusic, Rijeka: Croatian Society for Information and Communication Technology, Electronics and Microelectronics - MIPRO, 2014, p. 1354-1359. Conference paper (Refereed)
    Abstract [en]

    Fake iris detection has been studied by several researchers. However, to date, the experimental setup has been limited to near-infrared (NIR) sensors, which provide grey-scale images. This work makes use of images captured in visible range with color (RGB) information. We employ Gray-Level Co-Occurrence textural features and SVM classifiers for the task of fake iris detection. The best features are selected with the Sequential Forward Floating Selection (SFFS) algorithm. To the best of our knowledge, this is the first work evaluating spoofing attacks using color iris images in visible range. Our results demonstrate that the use of features from the three color channels clearly outperforms the accuracy obtained from the luminance (gray-scale) image. Also, the R channel is found to be the best individual channel. Lastly, we analyze the effect of extracting features from selected (eye or periocular) regions only. The best performance is obtained when GLCM features are extracted from the whole image, highlighting that both the iris and the surrounding periocular region are relevant for fake iris detection. An added advantage is that no accurate iris segmentation is needed. This work is relevant due to the increasing prevalence of more relaxed scenarios where iris acquisition using NIR light is unfeasible (e.g. distant acquisition or mobile devices), which are putting high pressure on the development of algorithms capable of working with visible light. © 2014 MIPRO.
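    A minimal version of the Gray-Level Co-occurrence features mentioned here, computed for one channel (in the paper's setting, one GLCM per R, G and B channel); only the contrast feature is shown, and the offset and level count are illustrative.

```python
def glcm(channel, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one image channel: joint
    frequencies of value pairs at a fixed pixel offset, normalised
    to probabilities."""
    M = [[0.0] * levels for _ in range(levels)]
    H, W = len(channel), len(channel[0])
    n = 0
    for i in range(H):
        for j in range(W):
            if 0 <= i + dy < H and 0 <= j + dx < W:
                M[channel[i][j]][channel[i + dy][j + dx]] += 1
                n += 1
    return [[v / n for v in row] for row in M]

def contrast(M):
    """One Haralick-style textural feature of the kind fed to the SVM."""
    return sum(M[a][b] * (a - b) ** 2
               for a in range(len(M)) for b in range(len(M)))

# Tiny 4-level example channel.
channel = [[0, 0, 1, 1],
           [0, 0, 1, 1],
           [2, 2, 3, 3],
           [2, 2, 3, 3]]
G = glcm(channel, levels=4)
```

Concatenating such features from each color channel yields the richer RGB descriptor the abstract credits for the accuracy gain.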

  • 13.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Eye Detection by Complex Filtering for Periocular Recognition (2014). In: 2nd International Workshop on Biometrics and Forensics (IWBF2014): Valletta, Malta (27-28th March 2014), Piscataway, NJ: IEEE Press, 2014, article id 6914250. Conference paper (Refereed)
    Abstract [en]

    We present a novel system to localize the eye position based on symmetry filters. By using a 2D separable filter tuned to detect circular symmetries, detection is done with a few 1D convolutions. The detected eye center is used as input to our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the local power spectrum. This setup is evaluated with two databases of iris data, one acquired with a close-up NIR camera, and another in visible light with a web-cam. The periocular system shows high resilience to inaccuracies in the position of the detected eye center. The density of the sampling grid can also be reduced without sacrificing too much accuracy, allowing additional computational savings. We also evaluate an iris texture matcher based on 1D Log-Gabor wavelets. Despite the poorer performance of the iris matcher with the webcam database, its fusion with the periocular system results in improved performance. © 2014 IEEE.
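    The separability trick mentioned above (applying a 2D filter as 1D convolutions over rows and then columns) can be sketched as follows; the smoothing kernel used here is an arbitrary stand-in for the paper's symmetry filter.

```python
def conv1d(signal, kernel):
    """'Same'-size 1D filtering with zero padding (kernel assumed
    symmetric, so correlation equals convolution)."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

def separable_filter(image, kernel):
    """Apply a separable 2D filter as a 1D pass over the rows followed
    by a 1D pass over the columns -- the trick that lets a 2D filter
    run with a few 1D convolutions instead of a full 2D one."""
    rows = [conv1d(row, kernel) for row in image]
    cols = [conv1d(col, kernel) for col in map(list, zip(*rows))]
    return [list(r) for r in zip(*cols)]

img = [[0.0] * 4 for _ in range(4)]
img[1][1] = 9.0
smoothed = separable_filter(img, [0.25, 0.5, 0.25])
```

For an n x n kernel this replaces n*n multiply-adds per pixel with 2*n, which is where the computational saving comes from.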

  • 14.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Fake Iris Detection: A Comparison Between Near-Infrared and Visible Images (2014). In: Proceedings: 10th International Conference on Signal-Image Technology and Internet-Based Systems, SITIS 2014 / [ed] Kokou Yetongnon, Albert Dipanda & Richard Chbeir, Piscataway, NJ: IEEE Computer Society, 2014, p. 546-553. Conference paper (Refereed)
    Abstract [en]

    Fake iris detection has been studied so far using near-infrared (NIR) sensors, which provide grey-scale images, i.e. with luminance information only. Here, we incorporate into the analysis images captured in visible range, with color information, and perform comparative experiments between the two types of data. We employ Gray-Level Co-occurrence textural features and SVM classifiers. These features analyze various image properties related to contrast, pixel regularity, and pixel co-occurrence statistics. We select the best features with the Sequential Forward Floating Selection (SFFS) algorithm. We also study the effect of extracting features from selected (eye or periocular) regions only. Our experiments are done with fake samples obtained from printed images, which are then presented to the same sensor as the real ones. Results show that fake images captured in NIR range are easier to detect than visible images (even if we downsample NIR images to equate the average size of the iris region between the two databases). We also observe that the best performance with both sensors can be obtained with features extracted from the whole image, showing that not only the eye region, but also the surrounding periocular texture, is relevant for fake iris detection. An additional source of improvement with the visible sensor also comes from the use of the three RGB channels, in comparison with the luminance image only. A further analysis also reveals that some features are better suited to one particular sensor than to the others. © 2014 IEEE

  • 15.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Halmstad University submission to the First ICB Competition on Iris Recognition (ICIR2013) (2013). Other (Other academic)
  • 16.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Near-infrared and visible-light periocular recognition with Gabor features using frequency-adaptive automatic eye detection (2015). In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 4, no 2, p. 74-89. Article in journal (Refereed)
    Abstract [en]

    Periocular recognition has gained attention recently due to demands of increased robustness of face or iris in less controlled scenarios. We present a new system for eye detection based on complex symmetry filters, which has the advantage of not needing training. Also, separability of the filters allows faster detection via one-dimensional convolutions. This system is used as input to a periocular algorithm based on retinotopic sampling grids and Gabor spectrum decomposition. The evaluation framework is composed of six databases acquired both with near-infrared and visible sensors. The experimental setup is complemented with four iris matchers, used for fusion experiments. The eye detection system presented shows very high accuracy with near-infrared data, and reasonably good accuracy with one visible database. Regarding the periocular system, it exhibits great robustness to small errors in locating the eye centre, as well as to scale changes of the input image. The density of the sampling grid can also be reduced without sacrificing accuracy. Lastly, despite the poorer performance of the iris matchers with visible data, fusion with the periocular system can provide an improvement of more than 20%. The six databases used have been manually annotated, with the annotation made publicly available. © The Institution of Engineering and Technology 2015.

  • 17.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Periocular Biometrics: Databases, Algorithms and Directions2016In: 2016 4th International Workshop on Biometrics and Forensics (IWBF): Proceedings : 3-4 March, 2016, Limassol, Cyprus, Piscataway, NJ: IEEE, 2016, article id 7449688Conference paper (Refereed)
    Abstract [en]

    Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing insight into the most relevant issues and giving a thorough coverage of the existing literature. Future research trends are also briefly discussed. © 2016 IEEE.

  • 18.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Periocular Recognition Using Retinotopic Sampling and Gabor Decomposition2012In: Computer Vision – ECCV 2012: Workshops and demonstrations : Florence, Italy, October 7-13, 2012, Proceedings. Part II / [ed] Fusiello, Andrea; Murino, Vittorio; Cucchiara, Rita, Berlin: Springer, 2012, Vol. 7584, p. 309-318Conference paper (Refereed)
    Abstract [en]

    We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum. A number of aspects are studied, including: 1) grid adaptation to the dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, 4) rotation compensation between query and test images, and 5) comparison with an iris machine expert. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, thus allowing this step to be removed for computational efficiency. Also, the performance is not affected substantially if we use a grid of fixed dimensions, and it is even better in certain situations, avoiding the need for accurate detection of the iris region. © 2012 Springer-Verlag.
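The retinotopic sampling grids compared in the abstract can be sketched as points on concentric rings around the eye centre, denser near the centre and sparser at the periphery. The snippet below is an illustrative construction only; all parameter names and values are assumptions, not taken from the paper.

```python
import numpy as np

def retinotopic_grid(center, n_rings=5, points_per_ring=16, r0=4.0, growth=1.6):
    """Sample points on concentric rings whose radius grows geometrically,
    mimicking foveal (dense-centre) retinotopic sampling.
    Parameters are illustrative defaults, not the paper's values."""
    cx, cy = center
    pts = [(cx, cy)]                      # the grid centre itself ("fovea")
    for k in range(n_rings):
        r = r0 * growth ** k              # ring radius grows with eccentricity
        for a in np.linspace(0.0, 2 * np.pi, points_per_ring, endpoint=False):
            pts.append((cx + r * np.cos(a), cy + r * np.sin(a)))
    return np.array(pts)

grid = retinotopic_grid((0.0, 0.0))
print(grid.shape)   # (81, 2): 1 centre point + 5 rings x 16 points
```

In a full system, a Gabor filter response would be sampled at each grid point to form the feature vector; a grid of fixed dimensions (as the abstract discusses) corresponds to keeping `r0` and `growth` constant regardless of the detected eye size.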

  • 19.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Periocular Biometrics: Databases, Algorithms and Directions2016Conference paper (Other academic)
    Abstract [en]

    Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing insight into the most relevant issues and giving a thorough coverage of the existing literature. Future research trends are also briefly discussed.

  • 20.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Quality Factors Affecting Iris Segmentation and Matching2013In: Proceedings – 2013 International Conference on Biometrics, ICB 2013 / [ed] Julian Fierrez, Ajay Kumar, Mayank Vatsa, Raymond Veldhuis & Javier Ortega-Garcia, Piscataway, N.J.: IEEE conference proceedings, 2013, article id 6613016Conference paper (Refereed)
    Abstract [en]

    Image degradations can affect the different processing steps of iris recognition systems. Although several quality factors have been proposed for iris images, their specific effect on segmentation accuracy is often overlooked, with most efforts focused on their impact on recognition accuracy. Accordingly, we evaluate the impact of 8 quality measures on the performance of iris segmentation. We use a database acquired with a close-up iris sensor with a built-in quality checking process. Despite the latter, we report differences in behavior, with some measures clearly predicting the segmentation performance, while others give inconclusive results. Recognition experiments with two matchers also show that segmentation and matching performance are not necessarily affected by the same factors. The resilience of one matcher to segmentation inaccuracies also suggests that segmentation errors due to low image quality are not necessarily revealed by the matcher, pointing out the importance of a separate evaluation of segmentation accuracy. © 2013 IEEE.

  • 21.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Englund, Cristofer
    RISE Viktoria, Gothenburg, Sweden.
    Expression Recognition Using the Periocular Region: A Feasibility Study2018In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir, Los Alamitos: IEEE, 2018, p. 536-541Conference paper (Refereed)
    Abstract [en]

    This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this by analyzing the whole face. Periocular is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, thus making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits orders of magnitude more spatio-temporal data that is often not available.

  • 22.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Eigen-patch iris super-resolution for iris recognition improvement2015In: 2015 23rd European Signal Processing Conference (EUSIPCO), Piscataway, NJ: IEEE Press, 2015, p. 76-80, article id 7362348Conference paper (Refereed)
    Abstract [en]

    Low image resolution will be a predominant factor in iris recognition systems as they evolve towards more relaxed acquisition conditions. Here, we propose a super-resolution technique to enhance iris images based on Principal Component Analysis (PCA) Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information and reducing artifacts. We validate the system using a database of 1,872 near-infrared iris images. Results show the superiority of the presented approach over bilinear or bicubic interpolation, with the eigen-patch method being more resilient to image resolution reduction. We also perform recognition experiments with an iris matcher based on 1D Log-Gabor features, demonstrating that verification rates degrade more rapidly with bilinear or bicubic interpolation. ©2015 IEEE

  • 23.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Improving Very Low-Resolution Iris Identification Via Super-Resolution Reconstruction of Local Patches2017In: 2017 International Conference of the Biometrics Special Interest Group (BIOSIG) / [ed] Arslan Brömme, Christoph Busch, Antitza Dantcheva, Christian Rathgeb & Andreas Uhl, Bonn: Gesellschaft für Informatik, 2017, Vol. P-270, article id 8053512Conference paper (Refereed)
    Abstract [en]

    Relaxed acquisition conditions in iris recognition systems have significant effects on the quality and resolution of acquired images, which can severely affect performance if not addressed properly. Here, we evaluate two trained super-resolution algorithms in the context of iris identification. They are based on reconstruction of local image patches, where each patch is reconstructed separately using its own optimal reconstruction function. We employ a database of 1,872 near-infrared iris images (with 163 different identities for identification experiments) and three iris comparators. The trained approaches are substantially superior to bilinear or bicubic interpolations, with one of the comparators providing a Rank-1 performance of ∼88% with images of only 15×15 pixels, and an identification rate of 95% with a hit list size of only 8 identities. © 2017 Gesellschaft fuer Informatik.

  • 24.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Iris Super-Resolution Using Iterative Neighbor Embedding2017In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops / [ed] Lisa O’Conner, Los Alamitos: IEEE Computer Society, 2017, p. 655-663Conference paper (Refereed)
    Abstract [en]

    Iris recognition research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this paper, we evaluate a super-resolution algorithm used to reconstruct iris images based on iterative neighbor embedding of local image patches which tries to represent input low-resolution patches while preserving the geometry of the original high-resolution space. To this end, the geometry of the low- and high-resolution manifolds are jointly considered during the reconstruction process. We validate the system with a database of 1,872 near-infrared iris images, while fusion of two iris comparators has been adopted to improve recognition performance. The presented approach is substantially superior to bilinear/bicubic interpolations at very low resolutions, and it also outperforms a previous PCA-based iris reconstruction approach which only considers the geometry of the low-resolution manifold during the reconstruction process. © 2017 IEEE

  • 25.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Reconstruction of Smartphone Images for Low Resolution Iris Recognition2015In: 2015 IEEE International Workshop on Information Forensics and Security (WIFS), Piscataway, NJ: IEEE Press, 2015, article id 7368600Conference paper (Refereed)
    Abstract [en]

    As iris systems evolve towards a more relaxed acquisition, low image resolution will be a predominant issue. In this paper we evaluate a super-resolution method to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. We employ a database of 560 images captured in the visible spectrum with two smartphones. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions. We also carry out recognition experiments with six iris matchers, showing that better performance can be obtained at low resolutions with the proposed eigen-patch reconstruction, with fusion of only two systems pushing the EER below 5-8% for down-sampling down to a size of only 13x13. © 2015 IEEE.

  • 26.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Gonzalez-Sosa, Ester
    Nokia Bell-Labs, Madrid, Spain.
    A Survey of Super-Resolution in Iris Biometrics with Evaluation of Dictionary-Learning2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 6519-6544Article in journal (Refereed)
    Abstract [en]

    The lack of resolution has a negative impact on the performance of image-based biometrics. While many generic super-resolution methods have been proposed to restore low-resolution images, they usually aim to enhance their visual appearance. However, an overall visual enhancement of biometric images does not necessarily correlate with better recognition performance. Reconstruction approaches thus need to incorporate specific information from the target biometric modality to effectively improve recognition performance. This paper presents a comprehensive survey of iris super-resolution approaches proposed in the literature. We have also adapted an Eigen-patches reconstruction method based on PCA Eigentransformation of local image patches. The structure of the iris is exploited by building a patch-position-dependent dictionary. In addition, image patches are restored separately, each having its own reconstruction weights. This allows the solution to be locally optimized, helping to preserve local information. To evaluate the algorithm, we degraded high-resolution images from the CASIA Interval V3 database. Different restorations were considered, with 15 × 15 pixels being the smallest resolution evaluated. To the best of our knowledge, this is among the smallest resolutions employed in the literature. The experimental framework is complemented with six publicly available iris comparators, which were used to carry out biometric verification and identification experiments. Experimental results show that the proposed method significantly outperforms both bilinear and bicubic interpolation at very low resolution. The performance of a number of comparators attains an impressive Equal Error Rate as low as 5%, and a Top-1 accuracy of 77-84% when considering iris images of only 15 × 15 pixels. These results clearly demonstrate the benefit of using trained super-resolution techniques to improve the quality of iris images prior to matching. © 2018, Emerald Publishing Limited.
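The eigen-patch idea recurring in these super-resolution papers (a PCA basis learned from training patches, with each patch reconstructed via its own weights) can be sketched for a single patch position as follows. This is a minimal illustrative version with invented data and dimensions, not the authors' pipeline, which additionally builds one dictionary per patch position and works on low/high-resolution pairs.

```python
import numpy as np

# Illustrative setup: 200 flattened 8x8 "training patches" for one patch position.
rng = np.random.default_rng(1)
train = rng.standard_normal((200, 64))

# Learn the eigen-patch basis with PCA (via SVD of the centred data).
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:20]                         # keep the top 20 principal components

def reconstruct(patch):
    """Reconstruct a flattened patch as its projection onto the eigen-patch basis;
    the projection coefficients play the role of per-patch reconstruction weights."""
    w = basis @ (patch - mean)          # this patch's own reconstruction weights
    return mean + basis.T @ w

degraded = train[0] + 0.1 * rng.standard_normal(64)   # a noisy/degraded input patch
restored = reconstruct(degraded)
print(restored.shape)                   # (64,) -> reshape to 8x8 for display
```

Restoring every patch independently with its own weights, and using a different basis per patch position, is what lets the method preserve local iris texture instead of imposing a single global reconstruction.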

  • 27.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Super-Resolution for Selfie Biometrics: Introduction and Application to Face and Iris2019In: Selfie Biometrics: Advances and Challenges / [ed] Ajita Rattani, Reza Derakhshani & Arun A. Ross, Cham: Springer, 2019, 1, p. 105-128Chapter in book (Refereed)
    Abstract [en]

    Biometric research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this chapter, we give an overview of recent research in super-resolution reconstruction applied to biometrics, with a focus on face and iris images in the visible spectrum, two prevalent modalities in selfie biometrics. After an introduction to the generic topic of super-resolution, we investigate methods adapted to cater for the particularities of these two modalities. By experiments, we show the benefits of incorporating super-resolution to improve the quality of biometric images prior to recognition. © Springer Nature AG 2019

  • 28.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Learning-Based Local-Patch Resolution Reconstruction of Iris Smart-phone Images2017Conference paper (Refereed)
    Abstract [en]

    Application of ocular biometrics in mobile and at-a-distance environments still has several open challenges, with the lack of quality and resolution being an evident issue that can severely affect performance. In this paper, we evaluate two trained image reconstruction algorithms in the context of smart-phone biometrics. They are based on the use of coupled dictionaries to learn the mapping relations between low- and high-resolution images. In addition, reconstruction is made in local overlapped image patches, where up-scaling functions are modelled separately for each patch, better preserving local details. The experimental setup is complemented with a database of 560 images captured with two different smart-phones, and two iris comparators employed for verification experiments. We show that the trained approaches are substantially superior to bilinear or bicubic interpolations at very low resolutions (images of 13×13 pixels). Under such challenging conditions, an EER of ∼7% can be achieved using individual comparators, which is further pushed down to 4-6% after the fusion of the two systems. © 2017 IEEE

  • 29.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Very Low-Resolution Iris Recognition Via Eigen-Patch Super-Resolution and Matcher Fusion2016In: 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Piscataway: IEEE, 2016, article id 7791208Conference paper (Refereed)
    Abstract [en]

    Current research in iris recognition is moving towards enabling more relaxed acquisition conditions. This has effects on the quality of acquired images, with low resolution being a predominant issue. Here, we evaluate a super-resolution algorithm used to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. Contrast enhancement is used to improve the reconstruction quality, while matcher fusion has been adopted to improve iris recognition performance. We validate the system using a database of 1,872 near-infrared iris images. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions, and the fusion of the two systems pushes the EER below 5% for down-sampling down to an image size of only 13×13.

  • 30.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Quality Measures in Biometric Systems2015In: Encyclopedia of Biometrics / [ed] Stan Z. Li & Anil K. Jain, New York: Springer Science+Business Media B.V., 2015, 2, p. 1287-1297Chapter in book (Refereed)
    Abstract [en]

    Synonyms

    Quality assessment; Biometric quality; Quality-based processing

    Definition

    Since the establishment of biometrics as a specific research area in the late 1990s, the biometric community has focused its efforts on the development of accurate recognition algorithms [1]. Nowadays, biometric recognition is a mature technology that is used in many applications, offering greater security and convenience than traditional methods of personal recognition [2].

    During the past few years, biometric quality measurement has become an important concern after a number of studies and technology benchmarks demonstrated how the performance of biometric systems is heavily affected by the quality of biometric signals [3]. This operationally important step has nevertheless been under-researched compared to the primary feature extraction and pattern recognition tasks [4]. One of the main challenges facing biometric technologies is performance degradation in less controlled situations, and the problem of biometric quality measurement has become even more pressing with the proliferation of portable handheld devices with at-a-distance and on-the-move acquisition capabilities. These will require robust algorithms capable of handling a range of changing characteristics [2]. Another important example is forensics, in which intrinsic operational factors further degrade recognition performance.

    There are a number of factors that can affect the quality of biometric signals, and a quality measure can play numerous roles in the context of biometric systems. This section summarizes the state of the art on the biometric quality problem, giving an overall framework of the different challenges involved.

  • 31.
    Alonso-Fernandez, Fernando
    et al.
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Fierrez-Aguilar, Julian
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Fronthaler, Hartwig
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Kollreider, Klaus
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Ortega-Garcia, Javier
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Gonzalez-Rodriguez, Joaquin
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Combining multiple matchers for fingerprint verification: A case study in biosecure network of excellence2007In: Annales des télécommunications, ISSN 0003-4347, E-ISSN 1958-9395, Vol. 62, no 1-2, p. 62-82Article in journal (Refereed)
    Abstract [en]

    We report on experiments for the fingerprint modality conducted during the First BioSecure Residential Workshop. Two reference systems for fingerprint verification have been tested together with two additional non-reference systems. These systems follow different approaches of fingerprint processing and are discussed in detail. Fusion experiments involving different combinations of the available systems are presented. The experimental results show that the best recognition strategy involves both minutiae-based and correlation-based measurements. Regarding the fusion experiments, the best relative improvement is obtained when fusing systems that are based on heterogeneous strategies for feature extraction and/or matching. The best combinations of two/three/four systems always include the best individual systems whereas the best verification performance is obtained when combining all the available systems.

  • 32.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Ramis, Silvia
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Spain.
    Perales, Francisco J.
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Facial Masks and Soft-Biometrics: Leveraging Face Recognition CNNs for Age and Gender Prediction on Mobile Ocular Images2021In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 10, no 5, p. 562-580Article in journal (Refereed)
    Abstract [en]

    We address the use of selfie ocular images captured with smartphones to estimate age and gender. Partial face occlusion has become an issue due to the mandatory use of face masks. Also, the use of mobile devices has exploded, with the pandemic further accelerating the migration to digital services. However, state-of-the-art solutions in related tasks such as identity or expression recognition employ large Convolutional Neural Networks, whose use in mobile devices is infeasible due to hardware limitations and size restrictions of downloadable applications. To counteract this, we adapt two existing lightweight CNNs proposed in the context of the ImageNet Challenge, and two additional architectures proposed for mobile face recognition. Since datasets for soft-biometrics prediction using selfie images are limited, we counteract over-fitting by using networks pre-trained on ImageNet. Furthermore, some networks are further pre-trained for face recognition, for which very large training databases are available. Since both tasks employ similar input data, we hypothesize that such a strategy can be beneficial for soft-biometrics estimation. A comprehensive study of the effects of different pre-training over the employed architectures is carried out, showing that, in most cases, a better accuracy is obtained after the networks have been fine-tuned for face recognition. © The Authors

  • 33.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Ramis, Silvia
    University of Balearic Islands, Spain.
    Perales, Francisco J.
    University of Balearic Islands, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Soft-Biometrics Estimation In the Era of Facial Masks2020In: 2020 International Conference of the Biometrics Special Interest Group (BIOSIG), Piscataway, N.J.: IEEE, 2020, p. 1-6Conference paper (Refereed)
    Abstract [en]

    We analyze the use of images of face parts to estimate soft-biometrics indicators. Partial face occlusion is common in unconstrained scenarios, and it has become mainstream during the COVID-19 pandemic due to the use of masks. Here, we apply existing pre-trained CNN architectures, proposed in the context of the ImageNet Large Scale Visual Recognition Challenge, to the tasks of gender, age, and ethnicity estimation. Experiments are done with 12007 images from the Labeled Faces in the Wild (LFW) database. We show that such off-the-shelf features can effectively estimate soft-biometrics indicators using only the ocular region. For completeness, we also evaluate images showing only the mouth region. Overall, the network providing the best accuracy suffers drops of only 2-4% when using the ocular region instead of the entire face. Our approach is also shown to outperform, in several tasks, two commercial off-the-shelf systems (COTS) that employ the whole face, even when we only use the eye or mouth regions. © 2020 German Computer Association (Gesellschaft für Informatik e.V.).

  • 34.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Tiwari, Prayag
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Bigun, Josef
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition2024Conference paper (Refereed)
  • 35.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Mikaelyan, Anna
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Compact Multi-scale Periocular Recognition Using SAFE Features2016In: Proceedings - International Conference on Pattern Recognition, Washington: IEEE, 2016, p. 1455-1460, article id 7899842Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor, which encodes the presence of various symmetric curve families around image key points. We use the sclera center as a single key point for feature extraction, highlighting the object-like identity properties that concentrate at this unique point of the eye. As demonstrated, such discriminative properties can be encoded with a reduced set of symmetric curves. Experiments are done with a database of periocular images captured with a digital camera. We test our system against reference periocular features, achieving top performance with a considerably smaller feature vector (owing to the use of a single key point). All the tested systems also show a nearly steady correlation between acquisition distance and performance, and they cope well when enrolment and test images are not captured at the same distance. Fusion experiments among the available systems are also provided. © 2016 IEEE

  • 36.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Mikaelyan, Anna
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Comparison and Fusion of Multiple Iris and Periocular Matchers Using Near-Infrared and Visible Images2015In: 3rd International Workshop on Biometrics and Forensics, IWBF 2015, Piscataway, NJ: IEEE Press, 2015, article id 7110234Conference paper (Refereed)
    Abstract [en]

    Periocular refers to the facial region in the eye vicinity. It can be easily obtained with existing face and iris setups, and it appears in iris images, so its fusion with the iris texture has the potential to improve overall recognition. It is also suggested that iris is better suited to near-infrared (NIR) illumination, whereas the periocular modality is best for visible (VW) illumination. Here, we evaluate three periocular and three iris matchers based on different features. As experimental data, we use five databases: three acquired with a close-up NIR camera, and two in VW light with a webcam and a digital camera. We observe that the iris matchers perform better than the periocular matchers with NIR data, and the opposite with VW data. However, in both cases, their fusion can provide additional performance improvements. This is especially relevant with VW data, where the iris matchers perform significantly worse (due to low resolution) but are still able to complement the periocular modality. © 2015 IEEE.

  • 37.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Raja, Kiran B.
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Busch, Christoph
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Log-Likelihood Score Level Fusion for Improved Cross-Sensor Smartphone Periocular Recognition2017In: 2017 25th European Signal Processing Conference (EUSIPCO), Piscataway: IEEE, 2017, p. 281-285, article id 8081211Conference paper (Refereed)
    Abstract [en]

    The proliferation of cameras and personal devices results in a wide variability of imaging conditions, producing large intra-class variations and a significant performance drop when images from heterogeneous environments are compared. However, many applications require dealing with data from different sources regularly, and thus need to overcome these interoperability problems. Here, we employ fusion of several comparators to improve periocular performance when images from different smartphones are compared. We use a probabilistic fusion framework based on linear logistic regression, in which fused scores tend to be log-likelihood ratios, obtaining a reduction in cross-sensor EER of up to 40% due to the fusion. Our framework also provides an elegant and simple solution for handling signals from different devices, since same-sensor and cross-sensor score distributions are aligned and mapped to a common probabilistic domain. This allows the use of Bayes thresholds for optimal decision making, eliminating the need for sensor-specific thresholds, which is essential in operational conditions because the threshold setting critically determines the accuracy of the authentication process in many applications. © EURASIP 2017
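    The linear logistic regression fusion summarized above can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the names `train_llr_fusion` and `fuse`, and the plain gradient-descent training, are assumptions for the example.

```python
import numpy as np

def train_llr_fusion(scores, labels, lr=0.1, epochs=500):
    """Fit linear logistic regression weights so that the fused score
    w . s + b behaves like a log-likelihood ratio (hypothetical sketch,
    trained here with plain gradient descent on the logistic loss)."""
    X = np.asarray(scores, dtype=float)   # shape (n, m): m comparator scores
    y = np.asarray(labels, dtype=float)   # 1 = genuine pair, 0 = impostor pair
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of fused score
        g = p - y                                # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def fuse(scores, w, b):
    """Map raw comparator scores to a fused, LLR-like score."""
    return np.asarray(scores) @ w + b
```

    Because the fused score approximates a log-likelihood ratio, a single Bayes threshold (e.g. 0 for equal priors and costs) can in principle replace sensor-specific thresholds, which is the operational advantage the abstract describes.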

  • 38.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Sharon Belvisi, Nicole Mariah
    Halmstad University, School of Information Technology.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Muhammad, Naveed
    Institute of Computer Science, University of Tartu, Tartu , Estonia.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Writer Identification Using Microblogging Texts for Social Media Forensics2021In: IEEE Transactions on Biometrics, Behavior, and Identity Science, E-ISSN 2637-6407, Vol. 3, no 3, p. 405-426Article in journal (Refereed)
    Abstract [en]

    Establishing the authorship of online texts is fundamental to combat cybercrime. Unfortunately, text length is limited on some platforms, making the challenge harder. We aim at identifying the authorship of Twitter messages limited to 140 characters. We evaluate popular stylometric features, widely used in literary analysis, and specific Twitter features like URLs, hashtags, replies or quotes. We use two databases with 93 and 3957 authors, respectively. We test varying sizes of author sets and varying amounts of training/test texts per author. Performance is further improved by feature combination via automatic selection. With a large amount of training Tweets (>500), good accuracy (Rank-5 > 80%) is achievable with only a few dozen test Tweets, even with several thousand authors. With smaller sample sizes (10-20 training Tweets), the search space can be reduced by 9-15% while keeping a high chance that the correct author is retrieved among the candidates. In such cases, automatic attribution can provide significant time savings to experts in suspect search. For completeness, we report verification results. With few training/test Tweets, the EER is above 20-25%, which is reduced to <15% if hundreds of training Tweets are available. We also quantify the computational complexity and time permanence of the employed features. © 2019 IEEE.
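    The Twitter-specific features mentioned above (URLs, hashtags, mentions) alongside basic stylometric counts can be illustrated with a toy extractor. This is a minimal sketch of the idea only; `twitter_style_features` is a hypothetical name, and the real feature set in the paper is far richer.

```python
import re

def twitter_style_features(text):
    """Toy stylometric feature vector for one short message: simple
    character/word counts plus Twitter-specific markers."""
    words = text.split()
    return {
        "n_chars": len(text),
        "n_words": len(words),
        "n_urls": len(re.findall(r"https?://\S+", text)),  # URL occurrences
        "n_hashtags": text.count("#"),
        "n_mentions": text.count("@"),
        "upper_ratio": sum(c.isupper() for c in text) / max(len(text), 1),
    }
```

    In an attribution pipeline, such per-message vectors would be aggregated per candidate author and compared against the vector of the disputed message.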

  • 39.
    Aloulou, Hamdi
    et al.
    Institut Mines Telecom, Paris, France & Laboratory of Informatics, Robotics and Microelectronics, Montpellier, France.
    Abdulrazak, Bessam
    Laboratory of Informatics, Robotics and Microelectronics, Montpellier, France & University of Sherbrooke, Sherbrooke, Canada.
    Endelin, Romain
    Institut Mines Telecom, Paris, France & Laboratory of Informatics, Robotics and Microelectronics, Montpellier, France.
    Bentes, João
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. Image and Pervasive Access Laboratory, Singapore, Singapore.
    Tiberghien, Thibaut
    Institut Mines Telecom, Paris, France & Image and Pervasive Access Laboratory, Singapore, Singapore.
    Bellmunt, Joaquim
    Institut Mines Telecom, Paris, France & Image and Pervasive Access Laboratory, Singapore, Singapore.
    Simplifying Installation and Maintenance of Ambient Intelligent Solutions Toward Large Scale Deployment2016In: Inclusive Smart Cities and Digital Health: 14th International Conference on Smart Homes and Health Telematics, ICOST 2016, Wuhan, China, May 25-27, 2016. Proceedings / [ed] Chang C.K., Jin H., Cao Y., Aloulou H., Mokhtari M., Chiari L., Heidelberg: Springer, 2016, p. 121-132Conference paper (Refereed)
    Abstract [en]

    Simplifying the deployment and maintenance of Ambient Intelligence solutions is important to enable large-scale deployment and to maximize their use and benefit. More mature Ambient Intelligence solutions are emerging on the market as a result of intensive investment in research, which mainly targets the accuracy, usefulness, and usability of the solutions. Still, the ability to adapt to different environments and the ease of deployment and maintenance remain open problems in Ambient Intelligence. Existing solutions require an expert to travel on-site to install or maintain the systems. Therefore, we present in this paper our attempt to enable quick large-scale deployment. We discuss lessons learned from our approach to automating the deployment process so that it can be performed by ordinary people. We also introduce a solution for simplifying the monitoring and maintenance of installed systems. © Springer International Publishing Switzerland 2016.

  • 40.
    Altarabichi, Mohammed Ghaith
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Fan, Yuantao
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Pashami, Sepideh
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Sheikholharam Mashhadi, Peyman
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Nowaczyk, Sławomir
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Extracting Invariant Features for Predicting State of Health of Batteries in Hybrid Energy Buses2021In: 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA), Porto, Portugal, 6-9 Oct., 2021, IEEE, 2021, p. 1-6Conference paper (Refereed)
    Abstract [en]

    Batteries are safety-critical and the most expensive component of electric vehicles (EVs). To ensure the reliability of EVs in operation, it is crucial to monitor the state of health of these batteries. Monitoring their deterioration is also relevant to the sustainability of transport solutions, through creating an efficient strategy for utilizing the remaining capacity of the battery and its second life. Electric buses, like other EVs, come in many different variants, including different configurations and operating conditions. Developing a new degradation model for each existing combination of settings can be challenging for several reasons: unavailability of failure data for novel settings, heterogeneity in data, the small amount of data available for less popular configurations, and lack of sufficient engineering knowledge. Therefore, being able to automatically transfer a machine learning model to new settings is crucial. More concretely, the aim of this work is to extract features that are invariant across different settings.

    In this study, we propose an evolutionary method, called genetic algorithm for domain invariant features (GADIF), that selects a set of features to be used for training machine learning models, in such a way as to maximize the invariance across different settings. A Genetic Algorithm, with each chromosome being a binary vector signaling selection of features, is equipped with a specific fitness function encompassing both the task performance and domain shift. We contrast the performance, in migrating to unseen domains, of our method against a number of classical feature selection methods without any transfer learning mechanism. Moreover, in the experimental result section, we analyze how different features are selected under different settings. The results show that using invariant features leads to a better generalization of the machine learning models to an unseen domain.
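    The chromosome encoding described above (a binary vector marking which features are selected, scored by a fitness function) can be sketched with a minimal genetic algorithm. This is an illustrative sketch in the spirit of GADIF, not the authors' algorithm; `gadif_sketch` is a hypothetical name, and the real fitness combines task performance with a domain-shift term, both of which are abstracted here into the supplied `fitness` callable.

```python
import random

def gadif_sketch(n_features, fitness, pop=20, gens=30, pmut=0.1):
    """Minimal GA over binary feature masks: truncation selection,
    one-point crossover, per-gene bit-flip mutation, elitist survival."""
    random.seed(0)  # deterministic for the example
    popn = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(popn, key=fitness, reverse=True)[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < pmut else g for g in child]
            children.append(child)
        popn = parents + children                        # elitism: parents survive
    return max(popn, key=fitness)
```

    With a fitness that rewards invariant features and penalizes setting-specific ones, the returned mask approximates the invariant feature subset the paper aims for.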

  • 41.
    Altarabichi, Mohammed Ghaith
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Nowaczyk, Sławomir
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Pashami, Sepideh
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Sheikholharam Mashhadi, Peyman
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Surrogate-Assisted Genetic Algorithm for Wrapper Feature Selection2021In: 2021 IEEE Congress on Evolutionary Computation (CEC), IEEE, 2021, p. 776-785Conference paper (Refereed)
    Abstract [en]

    Feature selection is an intractable problem; therefore, practical algorithms often trade off solution accuracy against computation time. In this paper, we propose a novel multi-stage feature selection framework utilizing multiple levels of approximations, or surrogates. Such a framework allows wrapper approaches to be used in a much more computationally efficient way, significantly increasing the quality of the feature selection solutions achievable, especially on large datasets. We design and evaluate a Surrogate-Assisted Genetic Algorithm (SAGA) which utilizes this concept to guide the evolutionary search during the early phase of exploration. SAGA switches to evaluating the original function only in the final exploitation phase.

    We prove that the run-time upper bound of SAGA's surrogate-assisted stage is at worst equal to that of the wrapper GA, and that it scales better for induction algorithms whose complexity is of high order in the number of instances. We demonstrate, using 14 datasets from the UCI ML repository, that in practice SAGA significantly reduces the computation time compared to a baseline wrapper Genetic Algorithm (GA), while converging to solutions of significantly higher accuracy. Our experiments show that SAGA can arrive at near-optimal solutions three times faster than a wrapper GA, on average. We also showcase the importance of the evolution control approach, designed to prevent surrogates from misleading the evolutionary search towards false optima.
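    The core idea above, spending cheap surrogate evaluations first and reserving the expensive wrapper evaluation for a shortlist, can be sketched in a few lines. This is a generic sketch of surrogate-assisted selection, not SAGA itself; `surrogate_assisted_select` is a hypothetical name.

```python
def surrogate_assisted_select(candidates, surrogate, wrapper, k=3):
    """Two-stage evaluation: rank all candidates with a cheap surrogate
    function, then spend the expensive wrapper evaluation only on the
    top-k shortlist, returning the wrapper-best candidate."""
    shortlist = sorted(candidates, key=surrogate, reverse=True)[:k]
    return max(shortlist, key=wrapper)
```

    The cost saving comes from calling `wrapper` k times instead of once per candidate; the risk, which SAGA's evolution control addresses, is that a poor surrogate may exclude the true optimum from the shortlist.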

  • 42.
    Amoozegar, Maryam
    et al.
    School of Computer Engineering, Iran University of Science and Technology, Narmak, Tehran, 1684613114, Iran.
    Minaei-Bidgoli, Behrouz
    School of Computer Engineering, Iran University of Science and Technology, Narmak, Tehran, 1684613114, Iran.
    Mansoor, Rezghi
    Department of Computer Science, Tarbiat Modares University, Tehran, 14115-175, Iran.
    Fanaee Tork, Hadi
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Extra-adaptive robust online subspace tracker for anomaly detection from streaming networks2020In: Engineering applications of artificial intelligence, ISSN 0952-1976, E-ISSN 1873-6769, Vol. 94, article id 103741Article in journal (Refereed)
    Abstract [en]

    Anomaly detection in time-evolving networks has many applications, for instance, traffic analysis in transportation networks and intrusion detection in computer networks. One group of popular methods for anomaly detection in evolving networks is robust online subspace trackers. However, these methods suffer from insensitivity to drastic changes in the evolving subspace. To solve this problem, we propose a new robust online subspace and anomaly tracker that is more adaptive and robust against sudden drastic changes in the subspace. More accurate estimation of the low-rank and sparse components by this tracker leads to more accurate anomaly detection. We evaluate the accuracy of our method on real-world dynamic network data sets with varying sparsity levels. The results are promising, and our method outperforms the state of the art.

  • 43.
    Andreasson, Henrik
    et al.
    Örebro University, Örebro, Sweden.
    Bouguerra, Abdelbaki
    Örebro University, Örebro, Sweden.
    Åstrand, Björn
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Rögnvaldsson, Thorsteinn
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Gold-fish SLAM: An application of SLAM to localize AGVs2014In: Field and Service Robotics: Results of the 8th International Conference / [ed] Kazuya Yoshida & Satoshi Tadokoro, Heidelberg: Springer, 2014, p. 585-598Conference paper (Refereed)
    Abstract [en]

    The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where there are static landmarks that are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to treat the goods that enter and leave the environment as temporary landmarks that can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution is tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system, running at speeds up to 3 m/s. The paper also includes a general discussion of how SLAM can be used in industrial applications with AGVs. © Springer-Verlag Berlin Heidelberg 2014.

  • 44.
    Aramrattana, Maytheewat
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Detournay, J.
    Swedish National Transport Research Institute, Gothenburg, SE-402 78, Sweden.
    Englund, Cristofer
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Frimodig, Viktor
    Halmstad University, School of Information Technology.
    Jansson, Oscar Uddman
    Swedish National Transport Research Institute, Gothenburg, SE-402 78, Sweden.
    Larsson, Tony
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Mostowski, Wojciech
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Díez Rodríguez, Víctor
    Halmstad University, School of Information Technology.
    Rosenstatter, Thomas
    Halmstad University, School of Information Technology.
    Shahanoor, Golam
    Halmstad University, School of Information Technology.
    Team Halmstad Approach to Cooperative Driving in the Grand Cooperative Driving Challenge 20162018In: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, no 4, p. 1248-1261Article in journal (Refereed)
    Abstract [en]

    This paper is an experience report of team Halmstad from participation in the Grand Cooperative Driving Challenge 2016, a competition organised by the i-GAME project. The competition was held in Helmond, The Netherlands, during the last weekend of May 2016. We give an overview of our car’s control and communication system, which was developed for the competition following the requirements and specifications of the i-GAME project. In particular, we describe our implementation of cooperative adaptive cruise control, our solution to the communication and logging requirements, as well as the high-level decision making support. For the actual competition we did not manage to completely reach all of the goals set by the organizers and by ourselves. However, this did not prevent us from outperforming the competing teams. Moreover, the competition allowed us to collect data for further evaluation of our solutions to cooperative driving. Thus, we discuss what we believe were the strong points of our system, and we discuss the post-competition evaluation of the developments that were not fully integrated into our system at competition time. © 2000-2011 IEEE.

  • 45.
    Aramrattana, Maytheewat
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES). The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Englund, Cristofer
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. RISE Viktoria, Göteborg, Sweden.
    Jansson, Jonas
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Larsson, Tony
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Nåbo, Arne
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Safety Analysis of Cooperative Adaptive Cruise Control in Vehicle Cut-in Situations2017In: Proceedings of 2017 4th International Symposium on Future Active Safety Technology towards Zero-Traffic-Accidents (FAST-zero), Society of Automotive Engineers of Japan , 2017, article id 20174621Conference paper (Refereed)
    Abstract [en]

    Cooperative adaptive cruise control (CACC) is a cooperative intelligent transport systems (C-ITS) function which, especially when used in platooning applications, offers many expected benefits, including efficient road space utilization and reduced fuel consumption. Cut-in manoeuvres in platoons can potentially reduce those benefits and are not desired from a safety point of view. Unfortunately, in realistic traffic scenarios cut-in manoeuvres can be expected, especially from non-connected vehicles. In this paper two different controllers for platooning are explored, aiming at maintaining the safety of the platoon while a vehicle cuts in from the adjacent lane. A realistic scenario, in which a human driver performs the cut-in manoeuvre, is used to demonstrate the effectiveness of the controllers. A safety analysis of the CACC controllers under such situations, using time to collision (TTC), is presented. The analysis indicates that, although the potential risks are always high in CACC applications such as platooning due to the small inter-vehicular distances, dangerous TTC values (TTC < 6 seconds) are not frequent. Future research directions are also discussed along with the results.
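    The TTC indicator used in the analysis above follows the standard textbook definition: the remaining gap divided by the closing speed. The sketch below illustrates that definition only; `time_to_collision` is a hypothetical name, not the authors' implementation.

```python
def time_to_collision(gap_m, v_follower, v_leader):
    """Time-to-collision in seconds for a follower approaching a leader.
    Returns infinity when the follower is not closing the gap."""
    closing_speed = v_follower - v_leader  # m/s; positive when closing
    if closing_speed <= 0:
        return float('inf')
    return gap_m / closing_speed
```

    For example, a 30 m gap closed at 5 m/s gives a TTC of exactly 6 s, the boundary of the dangerous region (TTC < 6 s) used in the paper's analysis.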

  • 46.
    Aramrattana, Maytheewat
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES). The Swedish National Road and Transport Research Institute (VTI), Göteborg, Sweden.
    Englund, Cristofer
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. RISE Viktoria, Göteborg, Sweden.
    Larsson, Tony
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Jansson, Jonas
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Nåbo, Arne
    The Swedish National Road and Transport Research Institute (VTI), Göteborg, Sweden.
    Safety Evaluation of Highway Platooning Under a Cut-In Situation Using SimulationManuscript (preprint) (Other (popular science, discussion, etc.))
    Abstract [en]

    Platooning refers to an application, where a group of connected and automated vehicles follow a lead vehicle autonomously, with short inter-vehicular distances. At merging points on highways such as on-ramp, platoons could encounter manually driven vehicles, which are merging on to the highways. In some situations, the manually driven vehicles could end up between the platooning vehicles. Such situations are expected and known as “cut-in” situations. This paper presents a simulation study of a cut-in situation, where a platoon of five vehicles encounter a manually driven vehicle at a merging point of a highway. The manually driven vehicle is driven by 37 test persons using a driving simulator. For the platooning vehicles, two longitudinal controllers with four gap settings between the platooning vehicles, i.e. 15 meters, 22.5 meters, 30 meters, and 42.5 meters, are evaluated. Results summarizing cut-in behaviours and how the participants perceived the situation are presented. Furthermore, the situation is assessed using safety indicators based on time-to-collision.

  • 47.
    Aramrattana, Maytheewat
    et al.
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Larsson, Tony
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Englund, Cristofer
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. RISE Viktoria, Gothenburg, Sweden.
    Jansson, Jonas
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Nåbo, Arne
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    A Novel Risk Indicator for Cut-In Situations2020In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Piscataway, NJ: IEEE, 2020, article id 9294315Conference paper (Refereed)
    Abstract [en]

    Cut-in situations occur when a vehicle intentionally changes lanes and ends up in front of another vehicle or in between two vehicles. In such situations, having a method to indicate the collision risk prior to the cut-in manoeuvre could potentially reduce the number of sideswipe and rear-end collisions caused by cut-in manoeuvres. This paper proposes a new risk indicator, the cut-in risk indicator (CRI), as a way to indicate and potentially foresee collision risks in cut-in situations. As an example use case, we applied CRI to data from a driving simulation experiment involving a manually driven vehicle and an automated platoon in a highway merging situation. We then compared the results with time-to-collision (TTC) and suggest that CRI can indicate collision risks correctly and more effectively. CRI can be computed for all vehicles involved in a cut-in situation, not only the vehicle that is cutting in. This makes it possible for other vehicles to estimate the collision risk: if a cut-in from another vehicle occurs, the surrounding vehicles can be warned and have the possibility to react in order to potentially avoid or mitigate accidents. © 2020 IEEE.

  • 48.
    Aramrattana, Maytheewat
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES). The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Larsson, Tony
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Englund, Cristofer
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. RISE Viktoria, Gothenburg, Sweden.
    Jansson, Jonas
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Nåbo, Arne
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Simulation of Cut-In by Manually Driven Vehicles in Platooning Scenarios, 2017. In: 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Piscataway, NJ: IEEE, 2017, p. 1-6. Conference paper (Refereed)
    Abstract [en]

    In the near future, Cooperative Intelligent Transport System (C-ITS) applications are expected to be deployed. To support this, simulation is often used to design and evaluate the applications during the early development phases. Simulations of C-ITS scenarios often assume a fleet of homogeneous vehicles within the transportation system. In contrast, once C-ITS is deployed, traffic scenarios will consist of a mixture of connected and non-connected vehicles, which, in addition, can be driven manually or automatically. Such mixed cases are rarely analysed, especially those involving manually driven vehicles. Therefore, this paper presents a C-ITS simulation framework that incorporates a manually driven car through a driving simulator interacting with a traffic simulator and a communication simulator, which together enable modelling and analysis of C-ITS applications and scenarios. Furthermore, example usages are presented for scenarios where a manually driven vehicle cuts in to a platoon of vehicles equipped with Cooperative Adaptive Cruise Control (CACC). © 2017 IEEE.
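    The coupling described above (driving simulator, traffic simulator, and communication simulator advanced together) can be sketched as a lockstep co-simulation loop. This is our own minimal structure with stub simulators, not the paper's framework:

    ```python
    class StubSimulator:
        """Stand-in for a coupled simulator (driving, traffic, or network)."""

        def __init__(self, name: str):
            self.name = name
            self.t = 0.0

        def step(self, dt: float) -> dict:
            # A real simulator would advance its model here and return the
            # state it shares (vehicle positions, V2V messages, ...).
            self.t += dt
            return {"time": self.t}

    def run(sims, dt=0.1, n_steps=10):
        """Advance all simulators in lockstep, exchanging state each step."""
        shared = {}
        for _ in range(n_steps):
            # Exchange point: each simulator's output becomes visible to the rest.
            shared = {sim.name: sim.step(dt) for sim in sims}
        return shared

    sims = [StubSimulator(n) for n in ("driving", "traffic", "network")]
    final_state = run(sims)
    ```

    Real frameworks differ mainly in how state is exchanged at each step (shared memory, middleware, or network sockets) and in how the simulators' clocks are synchronized.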

  • 49.
    Aramrattana, Maytheewat
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES). The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Larsson, Tony
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Centre for Research on Embedded Systems (CERES).
    Jansson, Jonas
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Englund, Cristofer
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research. Viktoria Swedish ICT, Gothenburg, Sweden.
    Dimensions of Cooperative Driving, ITS and Automation, 2015. In: 2015 IEEE Intelligent Vehicles Symposium (IV), Piscataway, NJ: IEEE Press, 2015, p. 144-149. Conference paper (Refereed)
    Abstract [en]

    Wireless technology supporting vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication allows vehicles and infrastructure to exchange information and cooperate. Cooperation among the actors in an intelligent transport system (ITS) can introduce several benefits, for instance increased safety, comfort, and efficiency. Automation has also evolved in vehicle control and active-safety functions. Combining cooperation and automation would enable more advanced functions, such as automated highway merging and negotiating right-of-way in a cooperative intersection. However, the combination influences the structure of the overall transport system as well as its behaviour. In order to provide a common understanding of such systems, this paper presents an analysis of cooperative ITS (C-ITS) with regard to dimensions of cooperation. It also presents possible influences on driving behaviour and challenges in the deployment and automation of C-ITS.

  • 50.
    Ashfaq, Awais
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Predicting clinical outcomes via machine learning on electronic health records, 2019. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The rising complexity in healthcare, exacerbated by an ageing population, results in ineffective decision-making, with detrimental effects on care quality, and escalates care costs. Consequently, there is a need for smart decision support systems that can empower clinicians to make better-informed care decisions: decisions that are not only based on general clinical knowledge and personal experience, but also rest on personalised and precise insights about future patient outcomes. A promising approach is to leverage the ongoing digitization of healthcare, which generates unprecedented amounts of clinical data stored in Electronic Health Records (EHRs), and couple it with a modern Machine Learning (ML) toolset for clinical decision support while simultaneously expanding the evidence base of medicine. As promising as it sounds, assimilating complete clinical data that provides a rich perspective of the patient's health state comes with a multitude of data-science challenges that impede efficient learning of ML models. This thesis primarily focuses on learning comprehensive patient representations from EHRs. The key challenges of heterogeneity and temporality in EHR data are addressed using human-derived features appended to contextual embeddings of clinical concepts and Long Short-Term Memory (LSTM) networks, respectively. The developed models are empirically evaluated in the context of predicting adverse clinical outcomes such as mortality or hospital readmissions. We also present evidence that, surprisingly, different ML models primarily designed for non-EHR analysis (such as language processing and time-series prediction) can be combined and adapted into a single framework to efficiently represent EHR data and predict patient outcomes.
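    The representation described in the abstract can be sketched in a few lines: each clinical concept gets a learned embedding, human-derived features are appended to it, and the resulting time-ordered sequence is what an LSTM would consume. The names, concepts, and feature choices below are illustrative assumptions, not the thesis code:

    ```python
    import random

    random.seed(0)
    EMB_DIM = 4
    # Stand-in for a learned embedding table over clinical concepts.
    embedding = {c: [random.random() for _ in range(EMB_DIM)]
                 for c in ("dx:heart_failure", "rx:metoprolol", "lab:creatinine")}

    def encode_event(concept: str, handcrafted) -> list:
        """Concatenate a concept embedding with human-derived features."""
        return embedding[concept] + list(handcrafted)

    # One patient = a time-ordered sequence of encoded events (the LSTM input).
    patient_sequence = [
        encode_event("dx:heart_failure", [72.0, 1.0]),  # e.g. age, prior admission
        encode_event("lab:creatinine", [1.4, 0.0]),     # e.g. value, abnormal flag
    ]
    ```

    The append step is what lets domain knowledge (the handcrafted features) coexist with learned context (the embeddings) in a single input vector per event.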
