hh.se Publications
1 - 50 of 383
  • 1.
    Abiri, Najmeh
    et al.
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Linse, Björn
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Edén, Patrik
    Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Ohlsson, Mattias
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab). Department of Astronomy and Theoretical Physics, Lund University, Lund, Sweden.
    Establishing strong imputation performance of a denoising autoencoder in a wide range of missing data problems (2019). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 65, pp. 137-146. Article in journal (Peer reviewed)
    Abstract [en]

    Dealing with missing data in data analysis is inevitable. Although powerful imputation methods that address this problem exist, there is still much room for improvement. In this study, we examined single imputation based on deep autoencoders, motivated by the apparent success of deep learning in efficiently extracting useful dataset features. We have developed a consistent framework for both training and imputation. Moreover, we benchmarked the results against state-of-the-art imputation methods on different data sizes and characteristics. The work was not limited to datasets with a single variable type; we also imputed missing data with multi-type variables, e.g., a combination of binary, categorical, and continuous attributes. To evaluate the imputation methods, we randomly corrupted the complete data, with varying degrees of corruption, and then compared the imputed and original values. In all experiments, the developed autoencoder obtained the smallest error for all ranges of initial data corruption. © 2019 Elsevier B.V.
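
The evaluation protocol described in the abstract above (randomly corrupt complete data, impute, then compare imputed against original values) can be sketched as below. The column-mean imputer is only an illustrative stand-in for the paper's denoising autoencoder, and all function names are ours:

```python
import numpy as np

def corrupt(X, frac, rng):
    """Set a random fraction of entries to NaN (missing completely at random)."""
    mask = rng.random(X.shape) < frac
    Xc = X.copy()
    Xc[mask] = np.nan
    return Xc, mask

def mean_impute(Xc):
    """Illustrative stand-in imputer: fill each column's NaNs with the column mean."""
    col_means = np.nanmean(Xc, axis=0)
    return np.where(np.isnan(Xc), col_means, Xc)

def imputation_rmse(X, frac, imputer, seed=0):
    """Corrupt X, impute, and score by RMSE on the held-out (corrupted) entries."""
    rng = np.random.default_rng(seed)
    Xc, mask = corrupt(X, frac, rng)
    Xi = imputer(Xc)
    return float(np.sqrt(np.mean((Xi[mask] - X[mask]) ** 2)))
```

Any imputer with the same signature (including a trained autoencoder) can be dropped into `imputation_rmse` and compared across corruption levels.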

  • 2.
    Aein, Mohamad Javad
    et al.
    Department for Computational Neuroscience at the Bernstein Center Göttingen (Inst. of Physics 3) & Leibniz Science Campus for Primate Cognition, Georg-August-Universität Göttingen, Göttingen, Germany.
    Aksoy, Eren
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Wörgötter, Florentin
    Department for Computational Neuroscience at the Bernstein Center Göttingen (Inst. of Physics 3) & Leibniz Science Campus for Primate Cognition, Georg-August-Universität Göttingen, Göttingen, Germany.
    Library of actions: Implementing a generic robot execution framework by using manipulation action semantics (2019). In: The International Journal of Robotics Research, ISSN 0278-3649, E-ISSN 1741-3176, Vol. 38, no. 8, pp. 910-934. Article in journal (Peer reviewed)
    Abstract [en]

    Drive-thru-Internet is a scenario in cooperative intelligent transportation systems (C-ITSs), where a road-side unit (RSU) provides multimedia services to vehicles that pass by. Performance of the drive-thru-Internet depends on various factors, including data traffic intensity, vehicle traffic density, and radio-link quality within the coverage area of the RSU, and must be evaluated at the stage of system design in order to fulfill the quality-of-service requirements of the customers in C-ITS. In this paper, we present an analytical framework that models downlink traffic in a drive-thru-Internet scenario by means of a multidimensional Markov process: the packet arrivals in the RSU buffer constitute Poisson processes and the transmission times are exponentially distributed. Taking into account the state space explosion problem associated with multidimensional Markov processes, we use iterative perturbation techniques to calculate the stationary distribution of the Markov chain. Our numerical results reveal that the proposed approach yields accurate estimates of various performance metrics, such as the mean queue content and the mean packet delay for a wide range of workloads. © 2019 IEEE.
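
The stationary distribution that the abstract computes with iterative techniques can be illustrated with plain fixed-point iteration on a row-stochastic transition matrix; this is a minimal sketch, not the perturbation method used in the paper:

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Iterate pi <- pi P until convergence; P is row-stochastic."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi
```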

  • 3.
    Aksoy, Eren
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Baci, Saimir
    Volvo Technology AB, Volvo Group Trucks Technology, Vehicle Automation, Gothenburg, Sweden.
    Cavdar, Selcuk
    Volvo Technology AB, Volvo Group Trucks Technology, Vehicle Automation, Gothenburg, Sweden.
    SalsaNet: Fast Road and Vehicle Segmentation in LiDAR Point Clouds for Autonomous Driving (2020). In: IEEE Intelligent Vehicles Symposium: IV2020, Piscataway, N.J.: IEEE, 2020, pp. 926-932. Conference paper (Peer reviewed)
    Abstract [en]

    In this paper, we introduce a deep encoder-decoder network, named SalsaNet, for efficient semantic segmentation of 3D LiDAR point clouds. SalsaNet segments the road, i.e. drivable free-space, and vehicles in the scene by employing the Bird-Eye-View (BEV) image projection of the point cloud. To overcome the lack of annotated point cloud data, in particular for the road segments, we introduce an auto-labeling process which transfers automatically generated labels from the camera to LiDAR. We also explore the role of image-like projection of LiDAR data in semantic segmentation by comparing BEV with spherical-front-view projection and show that SalsaNet is projection-agnostic. We perform quantitative and qualitative evaluations on the KITTI dataset, which demonstrate that the proposed SalsaNet outperforms other state-of-the-art semantic segmentation networks in terms of accuracy and computation time. Our code and data are publicly available at https://gitlab.com/aksoyeren/salsanet.git.

    Full text (pdf)
    SalsaNet
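
A minimal sketch of the bird's-eye-view projection step SalsaNet applies to the point cloud; the ranges, cell size, and max-height encoding below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def bev_projection(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), cell=0.5):
    """Project an (N, 3) LiDAR cloud to a top-down max-height grid.
    Points outside the ranges are dropped; empty cells stay 0
    (ground level assumed at z = 0)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]
    H = int(round((x_range[1] - x_range[0]) / cell))
    W = int(round((y_range[1] - y_range[0]) / cell))
    grid = np.zeros((H, W))
    r = ((x - x_range[0]) / cell).astype(int)
    c = ((y - y_range[0]) / cell).astype(int)
    np.maximum.at(grid, (r, c), z)    # keep the highest point per cell
    return grid
```

A real BEV encoder typically stacks several such channels (height, intensity, density); the single max-height channel keeps the sketch short.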
  • 4.
    Ali Hamad, Rebeen
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Järpe, Eric
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Lundström, Jens
    JeCom Consulting, Halmstad, Sweden.
    Stability analysis of the t-SNE algorithm for human activity pattern data (2018). Conference paper (Peer reviewed)
    Abstract [en]

    Health technological systems learning from and reacting on how humans behave in sensor equipped environments are today being commercialized. These systems rely on the assumption that training data and testing data share the same feature space and come from the same underlying distribution, which is commonly unrealistic in real-world applications. Instead, the use of transfer learning could be considered. In order to transfer knowledge between a source and a target domain, these should be mapped to a common latent feature space. In this work, the dimensionality reduction algorithm t-SNE is used to map data to a similar feature space and is further investigated through a proposed novel analysis of output stability. The proposed analysis, Normalized Linear Procrustes Analysis (NLPA), extends the existing Procrustes and Local Procrustes algorithms for aligning manifolds. The methods are tested on data reflecting human behaviour patterns collected in a smart home environment. Results show high partial output stability for the t-SNE algorithm on the tested input data, for which NLPA is able to detect clusters that are individually aligned and compared. The results highlight the importance of understanding output stability before incorporating dimensionality reduction algorithms into further computation, e.g. for transfer learning.

    Full text (pdf)
    tsne-stability
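
The Procrustes alignment underlying the proposed stability analysis can be illustrated with ordinary orthogonal Procrustes; the paper's NLPA extends this idea, so the sketch below is a simplified baseline, not the proposed method:

```python
import numpy as np

def procrustes_align(A, B):
    """Align embedding B to A by translation, scaling and rotation;
    return the aligned copy and the residual disparity (0 = same shape)."""
    A0 = A - A.mean(axis=0)
    B0 = B - B.mean(axis=0)
    A0 = A0 / np.linalg.norm(A0)      # normalise both to unit Frobenius norm
    B0 = B0 / np.linalg.norm(B0)
    U, s, Vt = np.linalg.svd(B0.T @ A0)
    R = U @ Vt                        # optimal orthogonal map for B0
    B_aligned = s.sum() * (B0 @ R)    # s.sum() is the optimal scale
    disparity = float(np.sum((A0 - B_aligned) ** 2))
    return B_aligned, disparity
```

Running two t-SNE outputs of the same data through such an alignment and inspecting the disparity is the essence of an output-stability check.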
  • 5.
    Ali Hamad, Rebeen
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Kimura, Masashi
    Convergence Lab, Tokyo, Japan.
    Lundström, Jens
    Convergia Consulting, Halmstad, Sweden.
    Efficacy of Imbalanced Data Handling Methods on Deep Learning for Smart Homes Environments (2020). In: SN Computer Science, ISSN 2661-8907, Vol. 1, no. 4, article id 204. Article in journal (Peer reviewed)
    Abstract [en]

    Human activity recognition as an engineering tool as well as an active research field has become fundamental to many applications in various fields such as health care, smart home monitoring and surveillance. However, delivering sufficiently robust activity recognition systems from sensor data recorded in a smart home setting is a challenging task. Moreover, human activity datasets are typically highly imbalanced because generally certain activities occur more frequently than others. Consequently, it is challenging to train classifiers from imbalanced human activity datasets. Deep learning algorithms perform well on balanced datasets, yet their performance cannot be guaranteed on imbalanced datasets. Therefore, we aim to address the problem of class imbalance in deep learning for smart home data. We assess it on an Activities of Daily Living recognition dataset based on binary sensors. This paper proposes a data-level perspective combined with a temporal window technique to handle imbalanced human activities from smart homes, in order to make the learning algorithms more sensitive to the minority class. The experimental results indicate that handling imbalanced human activities at the data level outperforms algorithm-level methods and improves the classification performance. © The Author(s) 2020

    Full text (pdf)
    fulltext
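
A data-level balancing step of the kind argued for above can be sketched as temporal windowing followed by random oversampling of minority classes; this is a generic illustration, not the paper's exact procedure:

```python
import numpy as np

def temporal_windows(series, width):
    """Cut a 1-D sensor sequence into overlapping windows (stride 1)."""
    return np.lib.stride_tricks.sliding_window_view(series, width)

def random_oversample(X, y, seed=0):
    """Duplicate random minority-class samples until every class
    matches the majority-class count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [X], [y]
    for c, n in zip(classes, counts):
        if n < n_max:
            idx = rng.choice(np.flatnonzero(y == c), size=n_max - n, replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)
```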
  • 6.
    Ali Hamad, Rebeen
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Salguero Hidalgo, Alberto
    University of Cádiz, Cádiz, Spain.
    Bouguelia, Mohamed-Rafik
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Estevez, Macarena Espinilla
    University of Jaén, Jaén, Spain.
    Quero, Javier Medina
    University of Jaén, Jaén, Spain.
    Efficient Activity Recognition in Smart Homes Using Delayed Fuzzy Temporal Windows on Binary Sensors (2020). In: IEEE Journal of Biomedical and Health Informatics, ISSN 2168-2194, E-ISSN 2168-2208, Vol. 24, no. 2, pp. 387-395. Article in journal (Peer reviewed)
    Abstract [en]

    Human activity recognition has become an active research field over the past few years due to its wide application in various fields such as health-care, smart home monitoring, and surveillance. Existing approaches for activity recognition in smart homes have achieved promising results. Most of these approaches evaluate real-time recognition of activities using only sensor activations that precede the evaluation time (where the decision is made). However, in several critical situations, such as diagnosing people with dementia, “preceding sensor activations” are not always sufficient to accurately recognize the inhabitant's daily activities at each evaluated time. To improve performance, we propose a method that delays the recognition process in order to include some sensor activations that occur after the point in time where the decision needs to be made. For this, the proposed method uses multiple incremental fuzzy temporal windows to extract features from both preceding and some oncoming sensor activations. The proposed method is evaluated with two temporal deep learning models (convolutional neural network and long short-term memory), on a binary sensor dataset of real daily living activities. The experimental evaluation shows that the proposed method achieves significantly better results than the real-time approach, and that the representation with fuzzy temporal windows enhances performance within deep learning models. © Copyright 2020 IEEE
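
The fuzzy temporal window idea (membership functions over preceding and oncoming sensor activations, relative to the evaluation time) can be sketched as follows; the trapezoidal memberships and max aggregation are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def trapezoid(t, a, b, c, d):
    """Trapezoidal fuzzy membership over a time offset t (a < b <= c < d)."""
    return np.clip(np.minimum((t - a) / (b - a), (d - t) / (d - c)), 0.0, 1.0)

def fuzzy_window_features(times, activations, t_eval, windows):
    """One feature per fuzzy window: the maximum membership-weighted
    activation. Negative offsets are preceding activations, positive
    offsets the delayed (oncoming) ones."""
    offsets = times - t_eval
    feats = []
    for (a, b, c, d) in windows:
        mu = trapezoid(offsets, a, b, c, d)
        feats.append(float(np.max(mu * activations, initial=0.0)))
    return np.array(feats)
```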

  • 7.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Barrachina, Javier
    Facephi Biometria, Alicante, Spain.
    Hernandez-Diaz, Kevin
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    SqueezeFacePoseNet: Lightweight Face Verification Across Different Poses for Mobile Platforms (2021). In: Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10-15, 2021, Proceedings, Part VIII / [ed] Alberto Del Bimbo, Rita Cucchiara, Stan Sclaroff, Giovanni Maria Farinella, Tao Mei, Marco Bertini, Hugo Jair Escalante, Roberto Vezzani, Berlin: Springer, 2021, pp. 139-153. Conference paper (Peer reviewed)
    Abstract [en]

    Ubiquitous and real-time person authentication has become critical after the breakthrough of all kinds of services provided via mobile devices. In this context, face technologies can provide reliable and robust user authentication, given the availability of cameras in these devices, as well as their widespread use in everyday applications. The rapid development of deep Convolutional Neural Networks (CNNs) has resulted in many accurate face verification architectures. However, their typical size (hundreds of megabytes) makes them infeasible to incorporate in downloadable mobile applications, where the entire file typically may not exceed 100 Mb. Accordingly, we address the challenge of developing a lightweight face recognition network of just a few megabytes that can operate with sufficient accuracy in comparison to much larger models. The network should also be able to operate under different poses, given the variability naturally observed in uncontrolled environments where mobile devices are typically used. In this paper, we adapt the lightweight SqueezeNet model, of just 4.4MB, to effectively provide cross-pose face recognition. After training on the MS-Celeb-1M and VGGFace2 databases, our model achieves an EER of 1.23% on the difficult frontal vs. profile comparison, and 0.54% on profile vs. profile images. Under less extreme variations involving frontal images in any of the enrolment/query images pair, EER is pushed down to <0.3%, and the FRR at FAR=0.1% to less than 1%. This makes our light model suitable for face recognition where at least acquisition of the enrolment image can be controlled. At the cost of a slight degradation in performance, we also test an even lighter model (of just 2.5MB) where regular convolutions are replaced with depth-wise separable convolutions. © 2021, Springer Nature Switzerland AG.

    Full text (pdf)
    fulltext
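
The EER and FAR/FRR operating points quoted above come from genuine and impostor score distributions; a minimal sketch of how EER is read off a threshold sweep (naming is ours):

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate: the operating point where the false acceptance
    rate (impostors accepted) and false rejection rate (genuines
    rejected) are closest."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```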
  • 8.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    A survey on periocular biometrics research (2016). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 82, part 2, pp. 92-105. Article in journal (Peer reviewed)
    Abstract [en]

    Periocular refers to the facial region in the vicinity of the eye, including eyelids, lashes and eyebrows. While the face and iris have been extensively studied, the periocular region has emerged as a promising trait for unconstrained biometrics, following demands for increased robustness of face or iris systems. With a surprisingly high discrimination ability, this region can be easily obtained with existing setups for face and iris, and the requirement of user cooperation can be relaxed, thus facilitating the interaction with biometric systems. It is also available over a wide range of distances even when the iris texture cannot be reliably obtained (low resolution) or under partial face occlusion (close distances). Here, we review the state of the art in periocular biometrics research. A number of aspects are described, including: (i) existing databases, (ii) algorithms for periocular detection and/or segmentation, (iii) features employed for recognition, (iv) identification of the most discriminative regions of the periocular area, (v) comparison with iris and face modalities, (vi) soft-biometrics (gender/ethnicity classification), and (vii) impact of gender transformation and plastic surgery on the recognition accuracy. This work is expected to provide insight into the most relevant issues in periocular biometrics, giving a comprehensive coverage of the existing literature and current state of the art. © 2015 Elsevier B.V. All rights reserved.

  • 9.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    An Overview of Periocular Biometrics (2017). In: Iris and Periocular Biometric Recognition / [ed] Christian Rathgeb & Christoph Busch, London: The Institution of Engineering and Technology, 2017, pp. 29-53. Chapter in book, part of anthology (Peer reviewed)
    Abstract [en]

    Periocular biometrics specifically refers to the externally visible skin region of the face that surrounds the eye socket. Its utility is especially pronounced when the iris or the face cannot be properly acquired, being the ocular modality requiring the least constrained acquisition process. It appears over a wide range of distances, even under partial face occlusion (close distance) or low resolution iris (long distance), making it very suitable for unconstrained or uncooperative scenarios. It also avoids the need for iris segmentation, an issue in difficult images. In such situations, identifying a suspect where only the periocular region is visible is one of the toughest real-world challenges in biometrics. The richness of the periocular region in terms of identity is so high that the whole face can even be reconstructed only from images of the periocular region. The technological shift to mobile devices has also resulted in many identity-sensitive applications becoming prevalent on these devices.

  • 10.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Best Regions for Periocular Recognition with NIR and Visible Images (2014). In: 2014 IEEE International Conference on Image Processing (ICIP), Piscataway, NJ: IEEE Press, 2014, pp. 4987-4991. Conference paper (Peer reviewed)
    Abstract [en]

    We evaluate the most useful regions for periocular recognition. For this purpose, we employ our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the spectrum. We use both NIR and visible iris images. The best regions are selected via Sequential Forward Floating Selection (SFFS). The iris neighborhood (including sclera and eyelashes) is found to be the best region with NIR data, while the surrounding skin texture (which is over-illuminated in NIR images) is the most discriminative region in the visible range. To the best of our knowledge, only one work in the literature has evaluated the influence of different regions on the performance of periocular recognition algorithms. Our results point in the same direction, despite the use of completely different matchers. We also evaluate an iris texture matcher, providing fusion results with our periocular system as well. © 2014 IEEE.

    Full text (pdf)
    fulltext
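
Sequential Forward Floating Selection, used above to pick the best regions, grows a subset greedily by the best single addition; the sketch below shows the plain forward variant (SFFS additionally allows conditional backward removals, omitted here for brevity):

```python
def sfs(score, n_items, k):
    """Greedy forward selection: repeatedly add the item whose inclusion
    maximises score(subset); score maps an index tuple to a float."""
    selected = []
    for _ in range(k):
        candidates = [i for i in range(n_items) if i not in selected]
        best = max(candidates, key=lambda i: score(tuple(selected + [i])))
        selected.append(best)
    return selected
```

Here `score` would be a verification-accuracy estimate for a candidate set of periocular regions.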
  • 11.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Biometric Recognition Using Periocular Images (2013). Conference paper (Other academic)
    Abstract [en]

    We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum at different frequencies and orientations. A number of aspects are studied, including: 1) grid adaptation to dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, and 4) rotation compensation between query and test images. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, thus allowing this step to be removed for computational efficiency. Also, the performance is not affected substantially if we use a grid of fixed dimensions, or is even better in certain situations, avoiding the need for accurate detection of the iris region.

    Full text (pdf)
    fulltext
  • 12.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Exploiting Periocular and RGB Information in Fake Iris Detection (2014). In: 2014 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO): 26-30 May 2014, Opatija, Croatia: Proceedings / [ed] Petar Biljanovic, Zeljko Butkovic, Karolj Skala, Stjepan Golubic, Marina Cicin-Sain, Vlado Sruk, Slobodan Ribaric, Stjepan Gros, Boris Vrdoljak, Mladen Mauher & Goran Cetusic, Rijeka: Croatian Society for Information and Communication Technology, Electronics and Microelectronics - MIPRO, 2014, pp. 1354-1359. Conference paper (Peer reviewed)
    Abstract [en]

    Fake iris detection has been studied by several researchers. However, to date, the experimental setup has been limited to near-infrared (NIR) sensors, which provide grey-scale images. This work makes use of images captured in the visible range with color (RGB) information. We employ Gray-Level Co-Occurrence textural features and SVM classifiers for the task of fake iris detection. The best features are selected with the Sequential Forward Floating Selection (SFFS) algorithm. To the best of our knowledge, this is the first work evaluating spoofing attacks using color iris images in the visible range. Our results demonstrate that the use of features from the three color channels clearly outperforms the accuracy obtained from the luminance (gray-scale) image. Also, the R channel is found to be the best individual channel. Lastly, we analyze the effect of extracting features from selected (eye or periocular) regions only. The best performance is obtained when GLCM features are extracted from the whole image, highlighting that both the iris and the surrounding periocular region are relevant for fake iris detection. An added advantage is that no accurate iris segmentation is needed. This work is relevant due to the increasing prevalence of more relaxed scenarios where iris acquisition using NIR light is unfeasible (e.g. distant acquisition or mobile devices), which are putting high pressure on the development of algorithms capable of working with visible light. © 2014 MIPRO.

    Full text (pdf)
    fulltext
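
The Gray-Level Co-Occurrence features mentioned above start from a co-occurrence matrix of gray-level pairs at a fixed pixel offset; a minimal sketch with the contrast statistic, assuming the image is already quantized to `levels` integer gray values:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Co-occurrence counts of gray-level pairs (i, j) at offset (dx, dy),
    normalised to a joint probability (dx, dy >= 0 assumed)."""
    h, w = img.shape
    src = img[:h - dy, :w - dx]
    dst = img[dy:, dx:]
    P = np.zeros((levels, levels))
    np.add.at(P, (src.ravel(), dst.ravel()), 1)
    return P / P.sum()

def glcm_contrast(P):
    """Contrast feature: sum over (i - j)^2 * P(i, j)."""
    i, j = np.indices(P.shape)
    return float(np.sum((i - j) ** 2 * P))
```

Other textural statistics (energy, homogeneity, correlation) are computed from the same P matrix and would feed the SVM classifier.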
  • 13.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Eye Detection by Complex Filtering for Periocular Recognition (2014). In: 2nd International Workshop on Biometrics and Forensics (IWBF2014): Valletta, Malta (27-28th March 2014), Piscataway, NJ: IEEE Press, 2014, article id 6914250. Conference paper (Peer reviewed)
    Abstract [en]

    We present a novel system to localize the eye position based on symmetry filters. By using a 2D separable filter tuned to detect circular symmetries, detection is done with a few 1D convolutions. The detected eye center is used as input to our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the local power spectrum. This setup is evaluated with two databases of iris data, one acquired with a close-up NIR camera, and another in visible light with a web-cam. The periocular system shows high resilience to inaccuracies in the position of the detected eye center. The density of the sampling grid can also be reduced without sacrificing too much accuracy, allowing additional computational savings. We also evaluate an iris texture matcher based on 1D Log-Gabor wavelets. Despite the poorer performance of the iris matcher with the webcam database, its fusion with the periocular system results in improved performance. © 2014 IEEE.

    Full text (pdf)
    fulltext
  • 14.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Laboratoriet för intelligenta system.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Fake Iris Detection: A Comparison Between Near-Infrared and Visible Images (2014). In: Proceedings: 10th International Conference on Signal-Image Technology and Internet-Based Systems, SITIS 2014 / [ed] Kokou Yetongnon, Albert Dipanda & Richard Chbeir, Piscataway, NJ: IEEE Computer Society, 2014, pp. 546-553. Conference paper (Peer reviewed)
    Abstract [en]

    Fake iris detection has been studied so far using near-infrared (NIR) sensors, which provide grey-scale images, i.e. with luminance information only. Here, we incorporate into the analysis images captured in the visible range, with color information, and perform comparative experiments between the two types of data. We employ Gray-Level Co-Occurrence textural features and SVM classifiers. These features analyze various image properties related to contrast, pixel regularity, and pixel co-occurrence statistics. We select the best features with the Sequential Forward Floating Selection (SFFS) algorithm. We also study the effect of extracting features from selected (eye or periocular) regions only. Our experiments are done with fake samples obtained from printed images, which are then presented to the same sensor as the real ones. Results show that fake images captured in the NIR range are easier to detect than visible images (even if we downsample NIR images to equate the average size of the iris region between the two databases). We also observe that the best performance with both sensors can be obtained with features extracted from the whole image, showing that not only the eye region, but also the surrounding periocular texture, is relevant for fake iris detection. An additional source of improvement with the visible sensor also comes from the use of the three RGB channels, in comparison with the luminance image only. A further analysis also reveals that some features are better suited to one particular sensor than to the others. © 2014 IEEE

    Full text (pdf)
    fulltext
  • 15.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Halmstad University submission to the First ICB Competition on Iris Recognition (ICIR2013) (2013). Other (Other academic)
    Full text (pdf)
    fulltext
  • 16.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Near-infrared and visible-light periocular recognition with Gabor features using frequency-adaptive automatic eye detection (2015). In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 4, no. 2, pp. 74-89. Article in journal (Peer reviewed)
    Abstract [en]

    Periocular recognition has gained attention recently due to demands for increased robustness of face or iris in less controlled scenarios. We present a new system for eye detection based on complex symmetry filters, which has the advantage of not needing training. Also, separability of the filters allows faster detection via one-dimensional convolutions. This system is used as input to a periocular algorithm based on retinotopic sampling grids and Gabor spectrum decomposition. The evaluation framework is composed of six databases acquired both with near-infrared and visible sensors. The experimental setup is complemented with four iris matchers, used for fusion experiments. The eye detection system presented shows very high accuracy with near-infrared data, and reasonably good accuracy with one visible database. Regarding the periocular system, it exhibits great robustness to small errors in locating the eye centre, as well as to scale changes of the input image. The density of the sampling grid can also be reduced without sacrificing accuracy. Lastly, despite the poorer performance of the iris matchers with visible data, fusion with the periocular system can provide an improvement of more than 20%. The six databases used have been manually annotated, with the annotation made publicly available. © The Institution of Engineering and Technology 2015.

    Full text (pdf)
    fulltext
  • 17.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Periocular Biometrics: Databases, Algorithms and Directions (2016). In: 2016 4th International Workshop on Biometrics and Forensics (IWBF): Proceedings, 3-4 March 2016, Limassol, Cyprus. Piscataway, NJ: IEEE, 2016, article id 7449688. Conference paper (Refereed)
    Abstract [en]

    Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing insight into the most relevant issues and giving thorough coverage of the existing literature. Future research trends are also briefly discussed. © 2016 IEEE.

  • 18.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Laboratoriet för intelligenta system.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Periocular Recognition Using Retinotopic Sampling and Gabor Decomposition (2012). In: Computer Vision – ECCV 2012: Workshops and Demonstrations, Florence, Italy, October 7-13, 2012, Proceedings, Part II / [ed] Fusiello, Andrea; Murino, Vittorio; Cucchiara, Rita. Berlin: Springer, 2012, Vol. 7584, pp. 309-318. Conference paper (Refereed)
    Abstract [en]

    We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum. A number of aspects are studied, including: 1) grid adaptation to the dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, 4) rotation compensation between query and test images, and 5) comparison with an iris machine expert. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, allowing this step to be removed for computational efficiency. Performance is not substantially affected, and in certain situations is even better, if we use a grid of fixed dimensions, avoiding the need for accurate detection of the iris region. © 2012 Springer-Verlag.
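    As a rough illustration of a retinotopic sampling grid of the kind described above: points are placed on concentric rings around the eye centre, with geometrically growing radii so that sampling is densest near the centre. All parameter values here are illustrative assumptions, not those used in the paper:

    ```python
    import numpy as np

    def retinotopic_grid(center, n_rings=5, n_points=16, r_min=4.0, r_max=64.0):
        """Return (n_rings * n_points, 2) sample coordinates on concentric
        rings with geometrically spaced radii (dense near the centre)."""
        cx, cy = center
        # radii grow geometrically from r_min to r_max
        radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
        angles = 2 * np.pi * np.arange(n_points) / n_points
        return np.array([(cx + r * np.cos(a), cy + r * np.sin(a))
                         for r in radii for a in angles])
    ```

    In a system like the one described, the local Gabor power spectrum would then be sampled at each grid point to build the feature vector.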

    Full text (pdf)
    2012_WIAF_Periocular_Retinotopic_Gabor_Alonso
  • 19.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Periocular Biometrics: Databases, Algorithms and Directions (2016). Conference paper (Other academic)
    Abstract [en]

    Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing insight into the most relevant issues and giving thorough coverage of the existing literature. Future research trends are also briefly discussed.

  • 20.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Quality Factors Affecting Iris Segmentation and Matching (2013). In: Proceedings – 2013 International Conference on Biometrics, ICB 2013 / [ed] Julian Fierrez, Ajay Kumar, Mayank Vatsa, Raymond Veldhuis & Javier Ortega-Garcia. Piscataway, N.J.: IEEE conference proceedings, 2013, article id 6613016. Conference paper (Refereed)
    Abstract [en]

    Image degradations can affect the different processing steps of iris recognition systems. Although several quality factors have been proposed for iris images, their specific effect on segmentation accuracy is often overlooked, with most efforts focused on their impact on recognition accuracy. Accordingly, we evaluate the impact of eight quality measures on the performance of iris segmentation. We use a database acquired with a close-up iris sensor and a built-in quality-checking process. Despite the latter, we report differences in behavior, with some measures clearly predicting segmentation performance, while others give inconclusive results. Recognition experiments with two matchers also show that segmentation and matching performance are not necessarily affected by the same factors. The resilience of one matcher to segmentation inaccuracies also suggests that segmentation errors due to low image quality are not necessarily revealed by the matcher, pointing out the importance of evaluating segmentation accuracy separately. © 2013 IEEE.

    Full text (pdf)
    fulltext
  • 21.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Englund, Cristofer
    RISE Viktoria, Gothenburg, Sweden.
    Expression Recognition Using the Periocular Region: A Feasibility Study (2018). In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir. Los Alamitos: IEEE, 2018, pp. 536-541. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this task by analyzing the whole face. The periocular area is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, thus making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data in the form of spatio-temporal sequences that are often not available.

    Full text (pdf)
    fulltext
  • 22.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Eigen-patch iris super-resolution for iris recognition improvement (2015). In: 2015 23rd European Signal Processing Conference (EUSIPCO). Piscataway, NJ: IEEE Press, 2015, pp. 76-80, article id 7362348. Conference paper (Refereed)
    Abstract [en]

    Low image resolution will be a predominant factor in iris recognition systems as they evolve towards more relaxed acquisition conditions. Here, we propose a super-resolution technique to enhance iris images based on Principal Component Analysis (PCA) Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information and reducing artifacts. We validate the system using a database of 1,872 near-infrared iris images. Results show the superiority of the presented approach over bilinear or bicubic interpolation, with the eigen-patch method being more resilient to image resolution reduction. We also perform recognition experiments with an iris matcher based on 1D Log-Gabor, demonstrating that verification rates degrade more rapidly with bilinear or bicubic interpolation. © 2015 IEEE

    Full text (pdf)
    fulltext
  • 23.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Improving Very Low-Resolution Iris Identification Via Super-Resolution Reconstruction of Local Patches (2017). In: 2017 International Conference of the Biometrics Special Interest Group (BIOSIG) / [ed] Arslan Brömme, Christoph Busch, Antitza Dantcheva, Christian Rathgeb & Andreas Uhl. Bonn: Gesellschaft für Informatik, 2017, Vol. P-270, article id 8053512. Conference paper (Refereed)
    Abstract [en]

    Relaxed acquisition conditions in iris recognition systems have significant effects on the quality and resolution of acquired images, which can severely affect performance if not addressed properly. Here, we evaluate two trained super-resolution algorithms in the context of iris identification. They are based on reconstruction of local image patches, where each patch is reconstructed separately using its own optimal reconstruction function. We employ a database of 1,872 near-infrared iris images (with 163 different identities for identification experiments) and three iris comparators. The trained approaches are substantially superior to bilinear or bicubic interpolations, with one of the comparators providing a Rank-1 performance of ∼88% with images of only 15×15 pixels, and an identification rate of 95% with a hit list size of only 8 identities. © 2017 Gesellschaft fuer Informatik.

    Full text (pdf)
    fulltext
  • 24.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Iris Super-Resolution Using Iterative Neighbor Embedding (2017). In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops / [ed] Lisa O’Conner. Los Alamitos: IEEE Computer Society, 2017, pp. 655-663. Conference paper (Refereed)
    Abstract [en]

    Iris recognition research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this paper, we evaluate a super-resolution algorithm used to reconstruct iris images based on iterative neighbor embedding of local image patches which tries to represent input low-resolution patches while preserving the geometry of the original high-resolution space. To this end, the geometry of the low- and high-resolution manifolds are jointly considered during the reconstruction process. We validate the system with a database of 1,872 near-infrared iris images, while fusion of two iris comparators has been adopted to improve recognition performance. The presented approach is substantially superior to bilinear/bicubic interpolations at very low resolutions, and it also outperforms a previous PCA-based iris reconstruction approach which only considers the geometry of the low-resolution manifold during the reconstruction process. © 2017 IEEE

  • 25.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Reconstruction of Smartphone Images for Low Resolution Iris Recognition (2015). In: 2015 IEEE International Workshop on Information Forensics and Security (WIFS). Piscataway, NJ: IEEE Press, 2015, article id 7368600. Conference paper (Refereed)
    Abstract [en]

    As iris systems evolve towards more relaxed acquisition, low image resolution will be a predominant issue. In this paper we evaluate a super-resolution method to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. We employ a database of 560 images captured in the visible spectrum with two smartphones. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions. We also carry out recognition experiments with six iris matchers, showing that better performance can be obtained at low resolutions with the proposed eigen-patch reconstruction, with fusion of only two systems pushing the EER below 5-8% for down-sampling to an image size of only 13×13. © 2015 IEEE.

  • 26.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Gonzalez-Sosa, Ester
    Nokia Bell-Labs, Madrid, Spain.
    A Survey of Super-Resolution in Iris Biometrics with Evaluation of Dictionary-Learning (2019). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, pp. 6519-6544. Journal article (Refereed)
    Abstract [en]

    The lack of resolution has a negative impact on the performance of image-based biometrics. While many generic super-resolution methods have been proposed to restore low-resolution images, they usually aim to enhance their visual appearance. However, an overall visual enhancement of biometric images does not necessarily correlate with better recognition performance. Reconstruction approaches thus need to incorporate specific information from the target biometric modality to effectively improve recognition performance. This paper presents a comprehensive survey of iris super-resolution approaches proposed in the literature. We have also adapted an Eigen-patches reconstruction method based on PCA Eigen-transformation of local image patches. The structure of the iris is exploited by building a patch-position-dependent dictionary. In addition, image patches are restored separately, each having its own reconstruction weights. This allows the solution to be locally optimized, helping to preserve local information. To evaluate the algorithm, we degraded high-resolution images from the CASIA Interval V3 database. Different restorations were considered, with 15 × 15 pixels being the smallest resolution evaluated. To the best of our knowledge, this is among the smallest resolutions employed in the literature. The experimental framework is complemented with six publicly available iris comparators, which were used to carry out biometric verification and identification experiments. Experimental results show that the proposed method significantly outperforms both bilinear and bicubic interpolation at very low resolution. A number of comparators attain an impressive Equal Error Rate as low as 5%, and a Top-1 accuracy of 77-84%, when considering iris images of only 15 × 15 pixels. These results clearly demonstrate the benefit of using trained super-resolution techniques to improve the quality of iris images prior to matching. © 2018, Emerald Publishing Limited.
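    The eigen-transformation idea described above can be sketched as follows: express the centred low-resolution patch as a weighted combination of the low-resolution training patches (weights obtained through the SVD/PCA of the training set), then transfer the same per-sample weights to the coupled high-resolution patches. This is a simplified single-patch sketch under assumed data layouts (one training patch per row), not the paper's full pipeline:

    ```python
    import numpy as np

    def eigen_patch_reconstruct(lr_patch, lr_train, hr_train):
        """Reconstruct a high-res patch from a low-res one via
        eigen-transformation of a coupled training set.
        lr_train: (n, d_lr) low-res training patches (one per row).
        hr_train: (n, d_hr) corresponding high-res patches."""
        lr_mu, hr_mu = lr_train.mean(axis=0), hr_train.mean(axis=0)
        X = lr_train - lr_mu                    # centred low-res data
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        keep = S > 1e-10 * S[0]                 # drop numerically null modes
        U, S, Vt = U[:, keep], S[keep], Vt[keep]
        # min-norm weights w over training samples: X.T @ w ~= lr_patch - lr_mu
        w = U @ ((Vt @ (lr_patch - lr_mu)) / S)
        # apply the same per-sample weights to the high-res patches
        return (hr_train - hr_mu).T @ w + hr_mu
    ```

    If the high-res patches are an exact linear function of the low-res ones, a training patch is mapped back to its own high-res counterpart; in practice the mapping is only approximate, and (as in the survey) each patch position gets its own dictionary.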

  • 27.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Farrugia, Reuben A.
    University of Malta, Msida, Malta.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Super-Resolution for Selfie Biometrics: Introduction and Application to Face and Iris (2019). In: Selfie Biometrics: Advances and Challenges / [ed] Ajita Rattani, Reza Derakhshani & Arun A. Ross. Cham: Springer, 2019, 1st ed., pp. 105-128. Book chapter, part of anthology (Refereed)
    Abstract [en]

    Biometric research is heading towards enabling more relaxed acquisition conditions. This has effects on the quality and resolution of acquired images, severely affecting the accuracy of recognition systems if not tackled appropriately. In this chapter, we give an overview of recent research in super-resolution reconstruction applied to biometrics, with a focus on face and iris images in the visible spectrum, two prevalent modalities in selfie biometrics. After an introduction to the generic topic of super-resolution, we investigate methods adapted to cater for the particularities of these two modalities. Through experiments, we show the benefits of incorporating super-resolution to improve the quality of biometric images prior to recognition. © Springer Nature AG 2019

  • 28.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Learning-Based Local-Patch Resolution Reconstruction of Iris Smart-phone Images (2017). Conference paper (Refereed)
    Abstract [en]

    Application of ocular biometrics in mobile and at-a-distance environments still has several open challenges, with the lack of quality and resolution being an evident issue that can severely affect performance. In this paper, we evaluate two trained image reconstruction algorithms in the context of smartphone biometrics. They are based on the use of coupled dictionaries to learn the mapping relations between low- and high-resolution images. In addition, reconstruction is done in local overlapped image patches, where up-scaling functions are modelled separately for each patch, allowing local details to be better preserved. The experimental setup is complemented with a database of 560 images captured with two different smartphones, and two iris comparators employed for verification experiments. We show that the trained approaches are substantially superior to bilinear or bicubic interpolation at very low resolutions (images of 13×13 pixels). Under such challenging conditions, an EER of ∼7% can be achieved using individual comparators, which is further pushed down to 4-6% after fusion of the two systems. © 2017 IEEE

    Full text (pdf)
    fulltext
  • 29.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Very Low-Resolution Iris Recognition Via Eigen-Patch Super-Resolution and Matcher Fusion (2016). In: 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS). Piscataway: IEEE, 2016, article id 7791208. Conference paper (Refereed)
    Abstract [en]

    Current research in iris recognition is moving towards enabling more relaxed acquisition conditions. This has effects on the quality of acquired images, with low resolution being a predominant issue. Here, we evaluate a super-resolution algorithm used to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. Contrast enhancement is used to improve the reconstruction quality, while matcher fusion has been adopted to improve iris recognition performance. We validate the system using a database of 1,872 near-infrared iris images. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions, and the fusion of the two systems pushes the EER below 5% for down-sampling to an image size of only 13×13.
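    The matcher fusion and EER evaluation used in experiments like these can be sketched in a few lines. The sum rule after min-max normalization is a common simple fusion scheme; the abstract does not state the exact scheme, so the normalization and function names here are assumptions for illustration:

    ```python
    import numpy as np

    def min_max_sum_fusion(scores_a, scores_b):
        """Sum-rule fusion of two matchers after min-max normalization."""
        def norm(s):
            s = np.asarray(s, dtype=float)
            return (s - s.min()) / (s.max() - s.min())
        return norm(scores_a) + norm(scores_b)

    def eer(genuine, impostor):
        """Equal Error Rate for similarity scores (higher = more genuine):
        the operating point where false accept and false reject rates meet."""
        genuine, impostor = np.asarray(genuine), np.asarray(impostor)
        thresholds = np.unique(np.concatenate([genuine, impostor]))
        far = np.array([(impostor >= t).mean() for t in thresholds])
        frr = np.array([(genuine < t).mean() for t in thresholds])
        i = np.argmin(np.abs(far - frr))
        return (far[i] + frr[i]) / 2.0
    ```

    With genuine and impostor score distributions from each matcher, one would fuse the two matchers' scores per comparison and recompute the EER on the fused scores.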

  • 30.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Laboratoriet för intelligenta system.
    Fierrez, Julian
    Universidad Autonoma de Madrid, Madrid, Spain.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Quality Measures in Biometric Systems (2015). In: Encyclopedia of Biometrics / [ed] Stan Z. Li & Anil K. Jain. New York: Springer Science+Business Media B.V., 2015, 2nd ed., pp. 1287-1297. Book chapter, part of anthology (Refereed)
    Abstract [en]

    Synonyms

    Quality assessment; Biometric quality; Quality-based processing

    Definition

    Since the establishment of biometrics as a specific research area in the late 1990s, the biometric community has focused its efforts in the development of accurate recognition algorithms [1]. Nowadays, biometric recognition is a mature technology that is used in many applications, offering greater security and convenience than traditional methods of personal recognition [2].

    During the past few years, biometric quality measurement has become an important concern, after a number of studies and technology benchmarks demonstrated how the performance of biometric systems is heavily affected by the quality of biometric signals [3]. This operationally important step has nevertheless been under-researched compared to the primary feature extraction and pattern recognition tasks [4]. One of the main challenges facing biometric technologies is performance degradation in less controlled situations, and the problem of biometric quality measurement has become even more pressing with the proliferation of portable handheld devices with at-a-distance and on-the-move acquisition capabilities. These will require robust algorithms capable of handling a range of changing characteristics [2]. Another important example is forensics, in which intrinsic operational factors further degrade recognition performance.

    There are a number of factors that can affect the quality of biometric signals, and a quality measure can play numerous roles in the context of biometric systems. This section summarizes the state of the art on the biometric quality problem, giving an overall framework of the different challenges involved.

    Full text (pdf)
    fulltext
  • 31.
    Alonso-Fernandez, Fernando
    et al.
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Fierrez-Aguilar, Julian
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Fronthaler, Hartwig
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS).
    Kollreider, Klaus
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS).
    Ortega-Garcia, Javier
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Gonzalez-Rodriguez, Joaquin
    ATVS, Escuela Politecnica Superior, Campus de Cantoblanco, Avda. Francisco Tomas y Valiente 11, 28049 Madrid, Spain.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Combining multiple matchers for fingerprint verification: A case study in the BioSecure Network of Excellence (2007). In: Annales des télécommunications, ISSN 0003-4347, E-ISSN 1958-9395, Vol. 62, no. 1-2, pp. 62-82. Journal article (Refereed)
    Abstract [en]

    We report on experiments for the fingerprint modality conducted during the First BioSecure Residential Workshop. Two reference systems for fingerprint verification have been tested together with two additional non-reference systems. These systems follow different approaches of fingerprint processing and are discussed in detail. Fusion experiments involving different combinations of the available systems are presented. The experimental results show that the best recognition strategy involves both minutiae-based and correlation-based measurements. Regarding the fusion experiments, the best relative improvement is obtained when fusing systems that are based on heterogeneous strategies for feature extraction and/or matching. The best combinations of two/three/four systems always include the best individual systems whereas the best verification performance is obtained when combining all the available systems.

    Full text (pdf)
    fulltext
  • 32.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Hernandez-Diaz, Kevin
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Ramis, Silvia
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Spain.
    Perales, Francisco J.
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Spain.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Facial Masks and Soft-Biometrics: Leveraging Face Recognition CNNs for Age and Gender Prediction on Mobile Ocular Images (2021). In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 10, no. 5, pp. 562-580. Journal article (Refereed)
    Abstract [en]

    We address the use of selfie ocular images captured with smartphones to estimate age and gender. Partial face occlusion has become an issue due to the mandatory use of face masks. Also, the use of mobile devices has exploded, with the pandemic further accelerating the migration to digital services. However, state-of-the-art solutions in related tasks such as identity or expression recognition employ large Convolutional Neural Networks, whose use in mobile devices is infeasible due to hardware limitations and size restrictions of downloadable applications. To counteract this, we adapt two existing lightweight CNNs proposed in the context of the ImageNet Challenge, and two additional architectures proposed for mobile face recognition. Since datasets for soft-biometrics prediction using selfie images are limited, we counteract over-fitting by using networks pre-trained on ImageNet. Furthermore, some networks are further pre-trained for face recognition, for which very large training databases are available. Since both tasks employ similar input data, we hypothesize that such strategy can be beneficial for soft-biometrics estimation. A comprehensive study of the effects of different pre-training over the employed architectures is carried out, showing that, in most cases, a better accuracy is obtained after the networks have been fine-tuned for face recognition. © The Authors

    Full text (pdf)
    fulltext
  • 33.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Hernandez-Diaz, Kevin
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Ramis, Silvia
    University of Balearic Islands, Spain.
    Perales, Francisco J.
    University of Balearic Islands, Spain.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Soft-Biometrics Estimation In the Era of Facial Masks2020Inngår i: 2020 International Conference of the Biometrics Special Interest Group (BIOSIG), Piscataway, N.J.: IEEE, 2020, s. 1-6Konferansepaper (Fagfellevurdert)
    Abstract [en]

    We analyze the use of images from face parts to estimate soft-biometrics indicators. Partial face occlusion is common in unconstrained scenarios, and it has become mainstream during the COVID-19 pandemic due to the use of masks. Here, we apply existing pre-trained CNN architectures, proposed in the context of the ImageNet Large Scale Visual Recognition Challenge, to the tasks of gender, age, and ethnicity estimation. Experiments are done with 12007 images from the Labeled Faces in the Wild (LFW) database. We show that such off-the-shelf features can effectively estimate soft-biometrics indicators using only the ocular region. For completeness, we also evaluate images showing only the mouth region. Overall, the network providing the best accuracy suffers accuracy drops of only 2-4% when using the ocular region, in comparison to using the entire face. Our approach is also shown to outperform, in several tasks, two commercial off-the-shelf (COTS) systems that employ the whole face, even when we use only the eye or mouth regions. © 2020 German Computer Association (Gesellschaft für Informatik e.V.).

    Fulltekst (pdf)
    fulltext
  • 34.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Mikaelyan, Anna
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Compact Multi-scale Periocular Recognition Using SAFE Features2016Inngår i: Proceedings - International Conference on Pattern Recognition, Washington: IEEE, 2016, s. 1455-1460, artikkel-id 7899842Konferansepaper (Fagfellevurdert)
    Abstract [en]

    In this paper, we present a new approach to periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor, which encodes the presence of various symmetric curve families around image key points. We use the sclera center as the single key point for feature extraction, highlighting the object-like identity properties that concentrate at this unique point of the eye. As demonstrated, such discriminative properties can be encoded with a reduced set of symmetric curves. Experiments are done with a database of periocular images captured with a digital camera. We test our system against reference periocular features, achieving top performance with a considerably smaller feature vector (given by the use of a single key point). All the systems tested also show a nearly steady correlation between acquisition distance and performance, and they are able to cope well when enrolment and test images are not captured at the same distance. Fusion experiments among the available systems are also provided. © 2016 IEEE

  • 35.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Mikaelyan, Anna
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Comparison and Fusion of Multiple Iris and Periocular Matchers Using Near-Infrared and Visible Images2015Inngår i: 3rd International Workshop on Biometrics and Forensics, IWBF 2015, Piscataway, NJ: IEEE Press, 2015, s. Article number: 7110234-Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Periocular refers to the facial region in the eye vicinity. It can be easily obtained with existing face and iris setups, and it appears in iris images, so its fusion with the iris texture has the potential to improve overall recognition. It has also been suggested that iris is more suited to near-infrared (NIR) illumination, whereas the periocular modality is best for visible (VW) illumination. Here, we evaluate three periocular and three iris matchers based on different features. As experimental data, we use five databases: three acquired with a close-up NIR camera, and two in VW light with a webcam and a digital camera. We observe that the iris matchers perform better than the periocular matchers with NIR data, and the opposite with VW data. However, in both cases, their fusion can provide additional performance improvements. This is especially relevant with VW data, where the iris matchers perform significantly worse (due to low resolution) but are still able to complement the periocular modality. © 2015 IEEE.

    Fulltekst (pdf)
    fulltext
  • 36.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Raja, Kiran B.
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Busch, Christoph
    Norwegian University of Science and Technology, Gjøvik, Norway.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Log-Likelihood Score Level Fusion for Improved Cross-Sensor Smartphone Periocular Recognition2017Inngår i: 2017 25th European Signal Processing Conference (EUSIPCO), Piscataway: IEEE, 2017, s. 281-285, artikkel-id 8081211Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The proliferation of cameras and personal devices results in a wide variability of imaging conditions, producing large intra-class variations and a significant performance drop when images from heterogeneous environments are compared. However, many applications require dealing with data from different sources regularly, and thus need to overcome these interoperability problems. Here, we employ fusion of several comparators to improve periocular performance when images from different smartphones are compared. We use a probabilistic fusion framework based on linear logistic regression, in which fused scores tend to be log-likelihood ratios, obtaining a reduction in cross-sensor EER of up to 40% due to the fusion. Our framework also provides an elegant and simple solution to handle signals from different devices, since same-sensor and cross-sensor score distributions are aligned and mapped to a common probabilistic domain. This allows the use of Bayes thresholds for optimal decision making, eliminating the need for sensor-specific thresholds, which is essential in operational conditions because the threshold setting critically determines the accuracy of the authentication process in many applications. © EURASIP 2017
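    As a rough sketch of the fusion idea described in this abstract (not the paper's implementation; the function names and the toy training loop are assumptions), a linear logistic regression can be fit over comparator scores so that the fused score behaves like a log-likelihood ratio:

    ```python
    import math
    import random

    def train_llr_fusion(scores, labels, lr=0.1, epochs=200):
        """Fit fused = w0 + w1*s1 + ... + wk*sk by logistic regression
        (plain SGD), so the fused score approximates a log-likelihood
        ratio of the genuine vs. impostor hypotheses."""
        dim = len(scores[0])
        w = [0.0] * (dim + 1)  # bias plus one weight per comparator
        for _ in range(epochs):
            for x, y in zip(scores, labels):
                z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
                p = 1.0 / (1.0 + math.exp(-max(min(z, 30.0), -30.0)))
                g = p - y  # gradient of the logistic loss w.r.t. z
                w[0] -= lr * g
                for i, xi in enumerate(x):
                    w[i + 1] -= lr * g * xi
        return w

    def fuse(w, x):
        """Fused score for one comparison; with calibrated weights this
        can be thresholded at a common Bayes threshold instead of a
        sensor-specific one."""
        return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

    # Toy example: two comparators whose genuine scores are shifted up.
    random.seed(0)
    genuine = [(random.gauss(2.0, 1.0), random.gauss(1.5, 1.0)) for _ in range(200)]
    impostor = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(200)]
    w = train_llr_fusion(genuine + impostor, [1] * 200 + [0] * 200)
    ```

    Once scores are mapped to this common probabilistic domain, a single application-dependent threshold (0 for equal priors and costs) can replace per-sensor thresholds.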

  • 37.
    Alonso-Fernandez, Fernando
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Sharon Belvisi, Nicole Mariah
    Högskolan i Halmstad, Akademin för informationsteknologi.
    Hernandez-Diaz, Kevin
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Muhammad, Naveed
    Institute of Computer Science, University of Tartu, Tartu , Estonia.
    Bigun, Josef
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Writer Identification Using Microblogging Texts for Social Media Forensics2021Inngår i: IEEE Transactions on Biometrics, Behavior, and Identity Science, E-ISSN 2637-6407, Vol. 3, nr 3, s. 405-426Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Establishing the authorship of online texts is fundamental to combating cybercrime. Unfortunately, text length is limited on some platforms, making the challenge harder. We aim to identify the authorship of Twitter messages limited to 140 characters. We evaluate popular stylometric features, widely used in literary analysis, and specific Twitter features such as URLs, hashtags, replies, or quotes. We use two databases with 93 and 3957 authors, respectively. We test varying author set sizes and varying amounts of training/test texts per author. Performance is further improved by feature combination via automatic selection. With a large amount of training Tweets (>500), good accuracy (Rank-5 > 80%) is achievable with only a few dozen test Tweets, even with several thousand authors. With smaller sample sizes (10-20 training Tweets), the search space can be reduced by 9-15% while keeping a high chance that the correct author is retrieved among the candidates. In such cases, automatic attribution can provide significant time savings to experts in suspect search. For completeness, we report verification results. With few training/test Tweets, the EER is above 20-25%, which is reduced to <15% if hundreds of training Tweets are available. We also quantify the computational complexity and time permanence of the employed features. © 2019 IEEE.
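    As a minimal, hypothetical illustration of one common family of stylometric features (character n-grams; the paper's actual feature set is richer), texts can be profiled and compared as follows:

    ```python
    from collections import Counter

    def char_ngrams(text, n=3):
        """Character n-gram counts: a simple stylometric representation
        that works even on very short texts such as Tweets."""
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def cosine(a, b):
        """Cosine similarity between two sparse count vectors."""
        dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
        na = sum(v * v for v in a.values()) ** 0.5
        nb = sum(v * v for v in b.values()) ** 0.5
        return dot / (na * nb) if na and nb else 0.0
    ```

    Attribution then amounts to comparing a questioned text's profile against per-author profiles and ranking the authors by similarity.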

    Fulltekst (pdf)
    fulltext
  • 38.
    Aloulou, Hamdi
    et al.
    Institut Mines Telecom, Paris, France & Laboratory of Informatics, Robotics and Microelectronics, Montpellier, France.
    Abdulrazak, Bessam
    Laboratory of Informatics, Robotics and Microelectronics, Montpellier, France & University of Sherbrooke, Sherbrooke, Canada.
    Endelin, Romain
    Institut Mines Telecom, Paris, France & Laboratory of Informatics, Robotics and Microelectronics, Montpellier, France.
    Bentes, João
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab). Image and Pervasive Access Laboratory, Singapore, Singapore.
    Tiberghien, Thibaut
    Institut Mines Telecom, Paris, France & Image and Pervasive Access Laboratory, Singapore, Singapore.
    Bellmunt, Joaquim
    Institut Mines Telecom, Paris, France & Image and Pervasive Access Laboratory, Singapore, Singapore.
    Simplifying Installation and Maintenance of Ambient Intelligent Solutions Toward Large Scale Deployment2016Inngår i: Inclusive Smart Cities and Digital Health: 14th International Conference on Smart Homes and Health Telematics, ICOST 2016, Wuhan, China, May 25-27, 2016. Proceedings / [ed] Chang C.K., Jin H., Cao Y., Aloulou H., Mokhtari M., Chiari L., Heidelberg: Springer, 2016, s. 121-132Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Simplifying the deployment and maintenance of Ambient Intelligence solutions is important to enable large-scale deployment and to maximize their use and benefit. More mature Ambient Intelligence solutions are emerging on the market as a result of intensive investment in research. This research mainly targets the accuracy, usefulness, and usability of the solutions. Still, the ability to adapt to different environments and the ease of deployment and maintenance remain open problems for Ambient Intelligence. Existing solutions require an expert to travel on-site in order to install or maintain systems. Therefore, we present in this paper our attempt to enable quick large-scale deployment. We discuss lessons learned from our approach to automating the deployment process so that it can be performed by ordinary people. We also introduce a solution for simplifying the monitoring and maintenance of installed systems. © Springer International Publishing Switzerland 2016.

  • 39.
    Altarabichi, Mohammed Ghaith
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Fan, Yuantao
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Pashami, Sepideh
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Sheikholharam Mashhadi, Peyman
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Nowaczyk, Sławomir
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Extracting Invariant Features for Predicting State of Health of Batteries in Hybrid Energy Buses2021Inngår i: 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA), Porto, Portugal, 6-9 Oct., 2021, IEEE, 2021, s. 1-6Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Batteries are safety-critical and the most expensive component of electric vehicles (EVs). To ensure the reliability of EVs in operation, it is crucial to monitor the state of health of those batteries. Monitoring their deterioration is also relevant to the sustainability of transport solutions, through enabling an efficient strategy for utilizing the remaining capacity of the battery and its second life. Electric buses, like other EVs, come in many different variants, including different configurations and operating conditions. Developing new degradation models for each existing combination of settings can become challenging from different perspectives, such as the unavailability of failure data for novel settings, heterogeneity in the data, the small amount of data available for less popular configurations, and the lack of sufficient engineering knowledge. Therefore, being able to automatically transfer a machine learning model to new settings is crucial. More concretely, the aim of this work is to extract features that are invariant across different settings.

    In this study, we propose an evolutionary method, called genetic algorithm for domain invariant features (GADIF), that selects a set of features to be used for training machine learning models, in such a way as to maximize the invariance across different settings. A Genetic Algorithm, with each chromosome being a binary vector signaling selection of features, is equipped with a specific fitness function encompassing both the task performance and domain shift. We contrast the performance, in migrating to unseen domains, of our method against a number of classical feature selection methods without any transfer learning mechanism. Moreover, in the experimental result section, we analyze how different features are selected under different settings. The results show that using invariant features leads to a better generalization of the machine learning models to an unseen domain.
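    The GA machinery described above (binary chromosomes marking selected features, evolved against a fitness function) can be sketched as follows; the placeholder fitness used in the test is a toy, whereas GADIF's actual fitness combines task performance with a domain-shift term, which is not reproduced here:

    ```python
    import random

    def genetic_feature_selection(n_features, fitness, pop_size=30,
                                  generations=40, p_mut=0.05, seed=0):
        """Minimal binary-chromosome GA: parents drawn from the top half
        of the ranked population, one-point crossover, bit-flip mutation,
        and elitism (the two best chromosomes survive unchanged)."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_features)]
               for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            next_pop = ranked[:2]  # elitism
            while len(next_pop) < pop_size:
                a, b = rng.sample(ranked[:pop_size // 2], 2)
                cut = rng.randrange(1, n_features)
                child = a[:cut] + b[cut:]  # one-point crossover
                child = [bit ^ (rng.random() < p_mut) for bit in child]
                next_pop.append(child)
            pop = next_pop
        return max(pop, key=fitness)
    ```

    In GADIF each chromosome would instead be scored by training a model on the selected features and penalizing the performance gap across domains.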

  • 40.
    Altarabichi, Mohammed Ghaith
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Nowaczyk, Sławomir
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Pashami, Sepideh
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Sheikholharam Mashhadi, Peyman
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Surrogate-Assisted Genetic Algorithm for Wrapper Feature Selection2021Inngår i: 2021 IEEE Congress on Evolutionary Computation (CEC), IEEE, 2021, s. 776-785Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Feature selection is an intractable problem; therefore, practical algorithms often trade off solution accuracy against computation time. In this paper, we propose a novel multi-stage feature selection framework utilizing multiple levels of approximations, or surrogates. Such a framework allows wrapper approaches to be used in a much more computationally efficient way, significantly increasing the quality of the feature selection solutions achievable, especially on large datasets. We design and evaluate a Surrogate-Assisted Genetic Algorithm (SAGA) which utilizes this concept to guide the evolutionary search during the early phase of exploration. SAGA only switches to evaluating the original function in the final exploitation phase.

    We prove that the run-time upper bound of SAGA's surrogate-assisted stage is at worst equal to that of the wrapper GA, and that it scales better for induction algorithms whose complexity is of high order in the number of instances. We demonstrate, using 14 datasets from the UCI ML repository, that in practice SAGA significantly reduces the computation time compared to a baseline wrapper Genetic Algorithm (GA), while converging to solutions of significantly higher accuracy. Our experiments show that SAGA can arrive at near-optimal solutions three times faster than a wrapper GA, on average. We also showcase the importance of the evolution-control approach designed to prevent surrogates from misleading the evolutionary search towards false optima.
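    One way to realize such a surrogate (an assumption for illustration; the paper's surrogates and induction algorithms differ) is to score feature subsets on a small random subsample of the data during exploration, reserving full-data evaluation for the final exploitation phase:

    ```python
    import random

    def wrapper_accuracy(mask, X, y):
        """Toy wrapper evaluation: training-set accuracy of a
        nearest-centroid classifier restricted to the selected features."""
        feats = [i for i, m in enumerate(mask) if m]
        if not feats:
            return 0.0
        cent = {}
        for cls in set(y):
            rows = [x for x, lab in zip(X, y) if lab == cls]
            cent[cls] = [sum(r[i] for r in rows) / len(rows) for i in feats]
        def pred(x):
            return min(cent, key=lambda c: sum((x[i] - ci) ** 2
                                               for i, ci in zip(feats, cent[c])))
        return sum(pred(x) == lab for x, lab in zip(X, y)) / len(y)

    def surrogate_accuracy(mask, X, y, frac=0.2, seed=0):
        """Cheap low-fidelity stand-in: the same wrapper evaluation on a
        small random subsample of the instances."""
        rng = random.Random(seed)
        idx = rng.sample(range(len(X)), max(2, int(frac * len(X))))
        return wrapper_accuracy(mask, [X[i] for i in idx], [y[i] for i in idx])
    ```

    The early GA generations would call `surrogate_accuracy`, switching to `wrapper_accuracy` only for the final exploitation phase.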

  • 41.
    Amoozegar, Maryam
    et al.
    School of Computer Engineering, Iran University of Science and Technology, Narmak, Tehran, 1684613114, Iran.
    Minaei-Bidgoli, Behrouz
    School of Computer Engineering, Iran University of Science and Technology, Narmak, Tehran, 1684613114, Iran.
    Mansoor, Rezghi
    Department of Computer Science, Tarbiat Modares University, Tehran, 14115-175, Iran.
    Fanaee Tork, Hadi
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Extra-adaptive robust online subspace tracker for anomaly detection from streaming networks2020Inngår i: Engineering applications of artificial intelligence, ISSN 0952-1976, E-ISSN 1873-6769, Vol. 94, artikkel-id 103741Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Anomaly detection in time-evolving networks has many applications, for instance, traffic analysis in transportation networks and intrusion detection in computer networks. One group of popular methods for anomaly detection in evolving networks is robust online subspace trackers. However, these methods suffer from insensitivity to drastic changes in the evolving subspace. To solve this problem, we propose a new robust online subspace and anomaly tracker that is more adaptive and robust against sudden drastic changes in the subspace. More accurate estimation of the low-rank and sparse components by this tracker leads to more accurate anomaly detection. We evaluate the accuracy of our method on real-world dynamic network data sets with varying sparsity levels. The results are promising, and our method outperforms the state of the art.

  • 42.
    Andreasson, Henrik
    et al.
    Örebro University, Örebro, Sweden.
    Bouguerra, Abdelbaki
    Örebro University, Örebro, Sweden.
    Åstrand, Björn
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Rögnvaldsson, Thorsteinn
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Gold-fish SLAM: An application of SLAM to localize AGVs2014Inngår i: Field and Service Robotics: Results of the 8th International Conference / [ed] Kazuya Yoshida & Satoshi Tadokoro, Heidelberg: Springer, 2014, s. 585-598Konferansepaper (Fagfellevurdert)
    Abstract [en]

    The main focus of this paper is to present a case study of a SLAM solution for Automated Guided Vehicles (AGVs) operating in real-world industrial environments. The studied solution, called Gold-fish SLAM, was implemented to provide localization estimates in dynamic industrial environments, where there are static landmarks that are only rarely perceived by the AGVs. The main idea of Gold-fish SLAM is to consider the goods that enter and leave the environment as temporary landmarks that can be used in combination with the rarely seen static landmarks to compute online estimates of AGV poses. The solution is tested and verified in a paper factory using an eight-ton diesel truck retrofitted with an AGV control system running at speeds up to 3 m/s. The paper also includes a general discussion on how SLAM can be used in industrial applications with AGVs. © Springer-Verlag Berlin Heidelberg 2014.

  • 43.
    Aramrattana, Maytheewat
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES).
    Detournay, J.
    Swedish National Transport Research Institute, Gothenburg, SE-402 78, Sweden.
    Englund, Cristofer
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Frimodig, Viktor
    Högskolan i Halmstad, Akademin för informationsteknologi.
    Jansson, Oscar Uddman
    Swedish National Transport Research Institute, Gothenburg, SE-402 78, Sweden.
    Larsson, Tony
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES).
    Mostowski, Wojciech
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES).
    Díez Rodríguez, Víctor
    Högskolan i Halmstad, Akademin för informationsteknologi.
    Rosenstatter, Thomas
    Högskolan i Halmstad, Akademin för informationsteknologi.
    Shahanoor, Golam
    Högskolan i Halmstad, Akademin för informationsteknologi.
    Team Halmstad Approach to Cooperative Driving in the Grand Cooperative Driving Challenge 20162018Inngår i: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, nr 4, s. 1248-1261Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    This paper is an experience report of team Halmstad from its participation in a competition organised by the i-GAME project, the Grand Cooperative Driving Challenge 2016. The competition was held in Helmond, The Netherlands, during the last weekend of May 2016. We give an overview of our car's control and communication system that was developed for the competition following the requirements and specifications of the i-GAME project. In particular, we describe our implementation of cooperative adaptive cruise control, our solution to the communication and logging requirements, as well as the high-level decision-making support. For the actual competition we did not manage to completely reach all of the goals set out by the organizers and by ourselves. However, this did not prevent us from outperforming the competition. Moreover, the competition allowed us to collect data for further evaluation of our solutions to cooperative driving. Thus, we discuss what we believe were the strong points of our system, and discuss the post-competition evaluation of the developments that were not fully integrated into our system during competition time. © 2000-2011 IEEE.

  • 44.
    Aramrattana, Maytheewat
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES). The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Englund, Cristofer
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab). RISE Viktoria, Göteborg, Sweden.
    Jansson, Jonas
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Larsson, Tony
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES).
    Nåbo, Arne
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Safety Analysis of Cooperative Adaptive Cruise Control in Vehicle Cut-in Situations2017Inngår i: Proceedings of 2017 4th International Symposium on Future Active Safety Technology towards Zero-Traffic-Accidents (FAST-zero), Society of Automotive Engineers of Japan , 2017, artikkel-id 20174621Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Cooperative adaptive cruise control (CACC) is a cooperative intelligent transport systems (C-ITS) function which, especially when used in platooning applications, possesses many expected benefits, including efficient road space utilization and reduced fuel consumption. Cut-in manoeuvres in platoons can potentially reduce those benefits, and are not desired from a safety point of view. Unfortunately, in realistic traffic scenarios, cut-in manoeuvres can be expected, especially from non-connected vehicles. In this paper two different controllers for platooning are explored, aiming at maintaining the safety of the platoon while a vehicle is cutting in from the adjacent lane. A realistic scenario, where a human driver performs the cut-in manoeuvre, is used to demonstrate the effectiveness of the controllers. A safety analysis of the CACC controllers using time to collision (TTC) in such situations is presented. The analysis using TTC indicates that, although potential risks are always high in CACC applications such as platooning due to the small inter-vehicular distances, dangerous TTC (TTC < 6 seconds) is not frequent. Future research directions are also discussed along with the results.
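    TTC itself is a standard indicator: the inter-vehicle gap divided by the closing speed. A minimal sketch with an assumed signature (the paper's analysis pipeline is not reproduced here):

    ```python
    def time_to_collision(gap_m, v_follower_ms, v_leader_ms):
        """Time-to-collision in seconds: gap divided by closing speed.
        When the follower is not closing in on the leader, no collision
        is projected, so TTC is reported as infinity."""
        closing = v_follower_ms - v_leader_ms
        return gap_m / closing if closing > 0 else float("inf")

    # A 30 m gap closed at 5 m/s sits exactly at the 6 s level the
    # abstract uses as the boundary for dangerous TTC.
    print(time_to_collision(30.0, 25.0, 20.0))  # -> 6.0
    ```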

    Fulltekst (pdf)
    fulltext
  • 45.
    Aramrattana, Maytheewat
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES). The Swedish National Road and Transport Research Institute (VTI), Göteborg, Sweden.
    Englund, Cristofer
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab). RISE Viktoria, Göteborg, Sweden.
    Larsson, Tony
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES).
    Jansson, Jonas
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Nåbo, Arne
    The Swedish National Road and Transport Research Institute (VTI), Göteborg, Sweden.
    Safety Evaluation of Highway Platooning Under a Cut-In Situation Using Simulation2018Inngår i: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Platooning refers to an application, where a group of connected and automated vehicles follow a lead vehicle autonomously, with short inter-vehicular distances. At merging points on highways such as on-ramp, platoons could encounter manually driven vehicles, which are merging on to the highways. In some situations, the manually driven vehicles could end up between the platooning vehicles. Such situations are expected and known as “cut-in” situations. This paper presents a simulation study of a cut-in situation, where a platoon of five vehicles encounter a manually driven vehicle at a merging point of a highway. The manually driven vehicle is driven by 37 test persons using a driving simulator. For the platooning vehicles, two longitudinal controllers with four gap settings between the platooning vehicles, i.e. 15 meters, 22.5 meters, 30 meters, and 42.5 meters, are evaluated. Results summarizing cut-in behaviours and how the participants perceived the situation are presented. Furthermore, the situation is assessed using safety indicators based on time-to-collision.

  • 46.
    Aramrattana, Maytheewat
    et al.
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Larsson, Tony
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES).
    Englund, Cristofer
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab). RISE Viktoria, Gothenburg, Sweden.
    Jansson, Jonas
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Nåbo, Arne
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    A Novel Risk Indicator for Cut-In Situations2020Inngår i: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Piscataway, NJ: IEEE, 2020, artikkel-id 9294315Konferansepaper (Fagfellevurdert)
    Abstract [en]

    Cut-in situations occur when a vehicle intentionally changes lane and ends up in front of another vehicle or in between two vehicles. In such situations, having a method to indicate the collision risk prior to the cut-in manoeuvre could potentially reduce the number of sideswipe and rear-end collisions caused by cut-in manoeuvres. This paper proposes a new risk indicator, namely the cut-in risk indicator (CRI), as a way to indicate and potentially foresee collision risks in cut-in situations. As an example use case, we applied CRI to data from a driving simulation experiment involving a manually driven vehicle and an automated platoon in a highway merging situation. We then compared the results with time-to-collision (TTC), and suggest that CRI can correctly indicate collision risks in a more effective way. CRI can be computed for all vehicles involved in a cut-in situation, not only for the vehicle that is cutting in, making it possible for other vehicles to estimate the collision risk. For example, if a cut-in from another vehicle occurs, the surrounding vehicles could be warned and given the possibility to react, potentially avoiding or mitigating accidents. © 2020 IEEE.

  • 47.
    Aramrattana, Maytheewat
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES). The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Larsson, Tony
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES).
    Englund, Cristofer
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab). RISE Viktoria, Gothenburg, Sweden.
    Jansson, Jonas
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Nåbo, Arne
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Simulation of Cut-In by Manually Driven Vehicles in Platooning Scenarios (2017). In: 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Piscataway, NJ: IEEE, 2017, pp. 1-6. Conference paper (Refereed)
    Abstract [en]

    In the near future, Cooperative Intelligent Transport System (C-ITS) applications are expected to be deployed. To support this, simulation is often used to design and evaluate the applications during the early development phases. Simulations of C-ITS scenarios often assume a fleet of homogeneous vehicles within the transportation system. In contrast, once C-ITS is deployed, traffic scenarios will consist of a mixture of connected and non-connected vehicles, which, in addition, can be driven manually or automatically. Such mixed cases are rarely analysed, especially those involving manually driven vehicles. Therefore, this paper presents a C-ITS simulation framework that incorporates a manually driven car through a driving simulator interacting with a traffic simulator and a communication simulator, which together enable modelling and analysis of C-ITS applications and scenarios. Furthermore, example usages are presented for scenarios where a manually driven vehicle cuts in to a platoon of vehicles equipped with Cooperative Adaptive Cruise Control (CACC). © 2017 IEEE.
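    For context on the platoon side of this scenario: CACC followers are commonly modelled with a constant time-gap spacing policy, where each vehicle regulates toward a gap that grows with its own speed and adds a feedforward of the predecessor's acceleration received over V2V. A minimal sketch of that common control law (gains and parameter values are illustrative assumptions, not the controller used in the paper):

    ```python
    def cacc_acceleration(gap, ego_speed, gap_rate, leader_accel,
                          headway=0.6, standstill=2.0, kp=0.45, kd=0.25):
        """One step of a constant time-gap CACC follower controller.

        gap          -- current distance to the predecessor [m]
        ego_speed    -- own speed [m/s]
        gap_rate     -- rate of change of the gap [m/s]
        leader_accel -- predecessor acceleration received via V2V [m/s^2]
        """
        desired_gap = standstill + headway * ego_speed
        spacing_error = gap - desired_gap   # > 0 means gap too large
        # Feedback on spacing error and gap rate, plus V2V feedforward.
        return kp * spacing_error + kd * gap_rate + leader_accel
    ```

    A cut-in by a manually driven, non-connected vehicle suddenly shrinks `gap` and removes the V2V feedforward, which is exactly why such mixed scenarios stress CACC designs.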

    Full text (pdf)
  • 48.
    Aramrattana, Maytheewat
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES). The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Larsson, Tony
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), Centrum för forskning om inbyggda system (CERES).
    Jansson, Jonas
    The Swedish National Road and Transport Research Institute (VTI), Linköping, Sweden.
    Englund, Cristofer
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab). Viktoria Swedish ICT, Gothenburg, Sweden.
    Dimensions of Cooperative Driving, ITS and Automation (2015). In: 2015 IEEE Intelligent Vehicles Symposium (IV), Piscataway, NJ: IEEE Press, 2015, pp. 144-149. Conference paper (Refereed)
    Abstract [en]

    Wireless technology supporting vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication allows vehicles and infrastructure to exchange information and cooperate. Cooperation among the actors in an intelligent transport system (ITS) can introduce several benefits, for instance increased safety, comfort, and efficiency. Automation has also evolved in vehicle control and active safety functions. Combining cooperation and automation would enable more advanced functions such as automated highway merging and negotiating right-of-way in a cooperative intersection. However, the combination influences the structure of the overall transport system as well as its behaviour. In order to provide a common understanding of such systems, this paper presents an analysis of cooperative ITS (C-ITS) with regard to dimensions of cooperation. It also presents possible influences on driving behaviour and challenges in the deployment and automation of C-ITS.

    Full text (pdf)
  • 49.
    Ashfaq, Awais
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab).
    Predicting clinical outcomes via machine learning on electronic health records (2019). Licentiate thesis, with papers (Other academic)
    Abstract [en]

    The rising complexity in healthcare, exacerbated by an ageing population, results in ineffective decision-making with detrimental effects on care quality, and escalates care costs. Consequently, there is a need for smart decision support systems that can empower clinicians to make better-informed care decisions: decisions that are not only based on general clinical knowledge and personal experience, but that also rest on personalised and precise insights about future patient outcomes. A promising approach is to leverage the ongoing digitization of healthcare, which generates unprecedented amounts of clinical data stored in Electronic Health Records (EHRs), and couple it with a modern Machine Learning (ML) toolset for clinical decision support, simultaneously expanding the evidence base of medicine. As promising as it sounds, assimilating complete clinical data that provides a rich perspective of the patient's health state comes with a multitude of data-science challenges that impede efficient learning of ML models. This thesis primarily focuses on learning comprehensive patient representations from EHRs. The key challenges of heterogeneity and temporality in EHR data are addressed using human-derived features appended to contextual embeddings of clinical concepts, and Long Short-Term Memory networks, respectively. The developed models are empirically evaluated in the context of predicting adverse clinical outcomes such as mortality or hospital readmissions. We also present evidence that, surprisingly, different ML models primarily designed for non-EHR analysis (such as language processing and time-series prediction) can be combined and adapted into a single framework to efficiently represent EHR data and predict patient outcomes.

    Full text (pdf)
  • 50.
    Ashfaq, Awais
    et al.
    Högskolan i Halmstad, Akademin för informationsteknologi, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR Centrum för tillämpade intelligenta system (IS-lab). KTH Royal Institute of Technology, Stockholm, Sweden.
    Adler, Jonas
    KTH Royal Institute of Technology, Stockholm, Sweden & Elekta Instrument AB, Stockholm, Sweden.
    A modified fuzzy C means algorithm for shading correction in craniofacial CBCT images (2017). In: CMBEBIH 2017: Proceedings of the International Conference on Medical and Biological Engineering 2017 / [ed] Almir Badnjevic, Singapore: Springer, 2017, Vol. 62, pp. 531-538. Conference paper (Refereed)
    Abstract [en]

    CBCT images suffer from acute shading artifacts primarily due to scatter. Numerous image-domain correction algorithms proposed in the literature use patient-specific planning CT images to estimate shading contributions in CBCT images. However, in the context of radiosurgery applications such as gamma knife, planning images are often acquired through MRI, which impedes the use of polynomial fitting approaches for shading correction. We present a new shading correction approach that is independent of planning CT images. Our algorithm is based on the assumption that true CBCT images follow a uniform volumetric intensity distribution per material, and that scatter perturbs this uniform texture by contributing cupping and shading artifacts in the image domain. The framework combines fuzzy C-means, coupled with a neighborhood regularization term, and Otsu's method. Experimental results on artificially simulated craniofacial CBCT images demonstrate the effectiveness of the algorithm. Spatial non-uniformity is reduced from 16% to 7% in soft tissue and from 44% to 8% in bone regions. With shading correction, thresholding-based segmentation accuracy for bone pixels improves from 85% to 91% compared to thresholding without shading correction. The proposed algorithm is thus practical and qualifies as a plug-and-play extension to any CBCT reconstruction software for shading correction. © Springer Nature Singapore Pte Ltd. 2017.
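    The core iteration the paper builds on is standard fuzzy C-means: alternate between membership-weighted cluster centers and memberships that fall off with distance to each center. The neighborhood regularization term and the Otsu coupling are the paper's additions and are not shown here. A minimal 1-D sketch of the plain FCM iteration (variable names and data are illustrative):

    ```python
    import numpy as np

    def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
        """Plain fuzzy C-means on 1-D intensities.

        x : 1-D array of intensities; c : number of clusters;
        m : fuzziness exponent (> 1). Returns (centers, memberships),
        where memberships has shape (len(x), c) and rows sum to 1.
        """
        rng = np.random.default_rng(seed)
        u = rng.random((len(x), c))
        u /= u.sum(axis=1, keepdims=True)        # rows sum to 1
        for _ in range(iters):
            w = u ** m
            centers = (w.T @ x) / w.sum(axis=0)  # weighted means
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12
            # u_ik proportional to d_ik^(-2/(m-1)), then normalized
            u = 1.0 / (d ** (2.0 / (m - 1.0)))
            u /= u.sum(axis=1, keepdims=True)
        return centers, u
    ```

    On well-separated intensities (e.g. soft tissue vs. bone), the centers converge near the per-material means, which is what makes the per-material uniformity assumption usable for shading estimation.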
