hh.se Publications
1 - 50 of 104
  • 1.
    Albinsson, John
    et al.
    Lund Univ, Dept Biomed Engn, S-22100 Lund, Sweden.
    Brorsson, Sofia
    Halmstad University, School of Business, Engineering and Science, The Rydberg Laboratory for Applied Sciences (RLAS).
    Rydén Ahlgren, Åsa
    Lund Univ, Dept Clin Sci, Clin Physiol & Nucl Med Unit, Malmo, Sweden.
    Cinthio, Magnus
    Lund Univ, Dept Biomed Engn, S-22100 Lund, Sweden.
    Improved tracking performance of Lagrangian block-matching methodologies using block expansion in the time domain: In silico, phantom and in vivo evaluations. 2014. In: Ultrasound in Medicine and Biology, ISSN 0301-5629, E-ISSN 1879-291X, Vol. 40, no 10, p. 2508-2520. Article in journal (Refereed)
    Abstract [en]

    The aim of this study was to evaluate tracking performance when an extra reference block is added to a basic block-matching method, where the two reference blocks originate from two consecutive ultrasound frames. The use of an extra reference block was evaluated for two putative benefits: (i) an increase in tracking performance while maintaining the size of the reference blocks, evaluated using in silico and phantom cine loops; (ii) a reduction in the size of the reference blocks while maintaining the tracking performance, evaluated using in vivo cine loops of the common carotid artery where the longitudinal movement of the wall was estimated. The results indicated that tracking accuracy improved (mean - 48%, p<0.005 [in silico]; mean - 43%, p<0.01 [phantom]), and there was a reduction in size of the reference blocks while maintaining tracking performance (mean - 19%, p<0.01 [in vivo]). This novel method will facilitate further exploration of the longitudinal movement of the arterial wall. © 2014 World Federation for Ultrasound in Medicine & Biology.
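
    A minimal sketch of the underlying idea follows: a candidate block in a new frame is scored against two reference blocks taken from the two preceding frames, and the displacement minimising the combined cost is kept. The sum-of-squared-differences criterion and all function and parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized blocks."""
    d = a.astype(float) - b.astype(float)
    return float(np.sum(d * d))

def track_with_two_references(frame, ref_prev, ref_curr, top_left, search=8):
    """Find the displacement of a block in `frame` by matching it against two
    reference blocks taken from the two preceding frames and summing the costs.

    frame    : 2D array, the frame in which the block is searched for
    ref_prev : reference block cut out of frame t-1
    ref_curr : reference block cut out of frame t
    top_left : (row, col) position of the block in frame t
    search   : half-width of the search window in pixels
    """
    h, w = ref_curr.shape
    r0, c0 = top_left
    best, best_cost = (0, 0), np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + h > frame.shape[0] or c + w > frame.shape[1]:
                continue
            cand = frame[r:r + h, c:c + w]
            cost = ssd(cand, ref_curr) + ssd(cand, ref_prev)  # combine both references
            if cost < best_cost:
                best_cost, best = cost, (dr, dc)
    return best, best_cost
```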

  • 2.
    Ali, Hani
    et al.
    Halmstad University, School of Information Technology.
    Sunnergren, Pontus
    Halmstad University, School of Information Technology.
    Scenanalys - Övervakning och modellering [Scene analysis - monitoring and modelling]. 2021. Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Autonomous vehicles can decrease traffic congestion and reduce the number of traffic-related accidents. As there will be millions of autonomous vehicles in the future, a better understanding of the environment will be required. This project aims to create an external automated traffic system that can detect and track 3D objects within a complex traffic situation and then send these objects' behaviour to a larger-scale project that models the traffic situation in 3D. The project uses the TensorFlow framework and the YOLOv3 algorithm, together with a camera to record traffic situations and a Linux-operated computer. Methods commonly used to create an automated traffic management system were evaluated. The final results show that the system is relatively unstable and can sometimes fail to recognize certain objects. If more images are used for the training process, a more robust and much more reliable system could be developed using a similar methodology.
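
    The detection itself relies on a pre-trained YOLOv3 model, so the sketch below only illustrates the tracking-by-detection step that follows: associating detected bounding boxes across frames by nearest centroid. The (x, y, w, h) box format, the distance threshold and the function names are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def centroids(boxes):
    """Centres of (x, y, w, h) bounding boxes returned by a detector."""
    b = np.asarray(boxes, dtype=float)
    return b[:, :2] + b[:, 2:4] / 2.0

def update_tracks(tracks, boxes, next_id, max_dist=50.0):
    """Greedy nearest-centroid association of new detections with tracks.

    tracks  : dict track_id -> last known centroid (2-vector)
    boxes   : list of (x, y, w, h) detections in the current frame
    next_id : first unused track id
    Returns the updated tracks and the next free id.
    """
    if len(boxes) == 0:
        return tracks, next_id
    dets = centroids(boxes)
    unmatched = set(range(len(dets)))
    for tid, last in list(tracks.items()):
        if not unmatched:
            break
        dist = {i: np.linalg.norm(dets[i] - last) for i in unmatched}
        i_best = min(dist, key=dist.get)
        if dist[i_best] < max_dist:
            tracks[tid] = dets[i_best]
            unmatched.discard(i_best)
    for i in unmatched:                     # start new tracks for leftover detections
        tracks[next_id] = dets[i]
        next_id += 1
    return tracks, next_id
```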

  • 3.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Eye Detection by Complex Filtering for Periocular Recognition. 2014. In: 2nd International Workshop on Biometrics and Forensics (IWBF2014): Valletta, Malta (27-28th March 2014), Piscataway, NJ: IEEE Press, 2014, article id 6914250. Conference paper (Refereed)
    Abstract [en]

    We present a novel system to localize the eye position based on symmetry filters. By using a 2D separable filter tuned to detect circular symmetries, detection is done with a few 1D convolutions. The detected eye center is used as input to our periocular algorithm based on retinotopic sampling grids and Gabor analysis of the local power spectrum. This setup is evaluated with two databases of iris data, one acquired with a close-up NIR camera, and another in visible light with a web-cam. The periocular system shows high resilience to inaccuracies in the position of the detected eye center. The density of the sampling grid can also be reduced without sacrificing too much accuracy, allowing additional computational savings. We also evaluate an iris texture matcher based on 1D Log-Gabor wavelets. Despite the poorer performance of the iris matcher with the webcam database, its fusion with the periocular system results in improved performance. © 2014 IEEE.
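
    A rough sketch of the separable filtering idea is given below, assuming a grey-scale image as a NumPy array. The exact filter family and parameters used in the paper differ; the Gaussian-derivative gradients and the complex detection filter here are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_1d(sigma):
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return x, g / g.sum()

def conv_sep(img, kx, ky):
    """Separable 2D convolution: 1D kernel kx along columns, ky along rows."""
    return convolve1d(convolve1d(img, kx, axis=1), ky, axis=0)

def circular_symmetry_map(image, sigma_grad=1.5, sigma_filt=6.0):
    """Response map whose magnitude peaks near circularly symmetric patterns
    (e.g. pupil/iris). Gradients and the complex detection filter
    (x + i*y) * Gaussian are both applied with a few 1D convolutions only."""
    img = image.astype(float)
    x, g = gaussian_1d(sigma_grad)
    dg = -x / sigma_grad ** 2 * g                 # Gaussian derivative kernel
    gx = conv_sep(img, dg, g)                     # d/dx with smoothing in y
    gy = conv_sep(img, g, dg)                     # d/dy with smoothing in x
    z = (gx + 1j * gy) ** 2                       # squared complex gradient
    xf, gf = gaussian_1d(sigma_filt)
    hr_x, hr_y = xf * gf, gf                      # real filter part: x*g(x)*g(y)
    hi_x, hi_y = gf, xf * gf                      # imag filter part: y*g(x)*g(y)
    re = conv_sep(z.real, hr_x, hr_y) - conv_sep(z.imag, hi_x, hi_y)
    im = conv_sep(z.real, hi_x, hi_y) + conv_sep(z.imag, hr_x, hr_y)
    return np.abs(re + 1j * im)                   # argmax ~ eye-centre estimate
```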

  • 4.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Periocular Biometrics: Databases, Algorithms and Directions. 2016. In: 2016 4th International Workshop on Biometrics and Forensics (IWBF): Proceedings: 3-4 March, 2016, Limassol, Cyprus, Piscataway, NJ: IEEE, 2016, article id 7449688. Conference paper (Refereed)
    Abstract [en]

    Periocular biometrics has been established as an independent modality due to concerns about the performance of iris or face systems in uncontrolled conditions. Periocular refers to the facial region in the eye vicinity, including eyelids, lashes and eyebrows. It is available over a wide range of acquisition distances, representing a trade-off between the whole face (which can be occluded at close distances) and the iris texture (which does not have enough resolution at long distances). Since the periocular region appears in face or iris images, it can also be used in conjunction with these modalities. Features extracted from the periocular region have also been used successfully for gender classification and ethnicity classification, and to study the impact of gender transformation or plastic surgery on recognition performance. This paper presents a review of the state of the art in periocular biometric research, providing insight into the most relevant issues and giving a thorough coverage of the existing literature. Future research trends are also briefly discussed. © 2016 IEEE.

  • 5.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Englund, Cristofer
    RISE Viktoria, Gothenburg, Sweden.
    Expression Recognition Using the Periocular Region: A Feasibility Study. 2018. In: 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) / [ed] Gabriella Sanniti di Baja, Luigi Gallo, Kokou Yetongnon, Albert Dipanda, Modesto Castrillón-Santana & Richard Chbeir, Los Alamitos: IEEE, 2018, p. 536-541. Conference paper (Refereed)
    Abstract [en]

    This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this by analyzing the whole face. Periocular is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, thus making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0%/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach employs only one frame to predict the expression, in contrast to the state of the art, which exploits several orders of magnitude more data comprising spatio-temporal information that is often not available.

  • 6.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Farrugia, Reuben
    University of Malta, Msida, Malta.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Very Low-Resolution Iris Recognition Via Eigen-Patch Super-Resolution and Matcher Fusion. 2016. In: 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), Piscataway: IEEE, 2016, article id 7791208. Conference paper (Refereed)
    Abstract [en]

    Current research in iris recognition is moving towards enabling more relaxed acquisition conditions. This has effects on the quality of acquired images, with low resolution being a predominant issue. Here, we evaluate a super-resolution algorithm used to reconstruct iris images based on Eigen-transformation of local image patches. Each patch is reconstructed separately, allowing better quality of enhanced images by preserving local information. Contrast enhancement is used to improve the reconstruction quality, while matcher fusion has been adopted to improve iris recognition performance. We validate the system using a database of 1,872 near-infrared iris images. The presented approach is superior to bilinear or bicubic interpolation, especially at lower resolutions, and the fusion of the two systems pushes the EER to below 5% for down-sampling factors up to an image size of only 13×13.
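
    The eigen-transformation step can be sketched with plain NumPy as follows; patch extraction, contrast enhancement and matcher fusion are omitted, and the variable names and the choice of k are assumptions.

```python
import numpy as np

def eigen_patch_sr(lr_patch, lr_train, hr_train, k=30):
    """Illustrative eigen-transformation super-resolution of one patch.

    lr_train, hr_train : (n, d_lr) and (n, d_hr) arrays of vectorised,
    co-registered low-/high-resolution training patches. The input LR patch is
    expressed as a weighted combination of LR training patches in PCA space,
    and the same weights are transferred to the HR training patches.
    A simplified sketch, not the paper's code.
    """
    mu_l, mu_h = lr_train.mean(axis=0), hr_train.mean(axis=0)
    U, s, Vt = np.linalg.svd((lr_train - mu_l).T, full_matrices=False)
    U, s, Vt = U[:, :k], s[:k], Vt[:k]
    coeffs = U.T @ (lr_patch - mu_l)          # project input onto eigen-patches
    w = Vt.T @ (coeffs / s)                   # weights over training examples
    return (hr_train - mu_h).T @ w + mu_h     # transfer the weights to HR patches
```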

  • 7.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Center for Applied Intelligent Systems Research (CAISR).
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology.
    Buades, Jose M.
    University of Balearic Islands, Palma, Spain.
    Tiwari, Prayag
    Halmstad University, School of Information Technology.
    Bigun, Josef
    Halmstad University, School of Information Technology.
    An Explainable Model-Agnostic Algorithm for CNN-Based Biometrics Verification. 2023. In: 2023 IEEE International Workshop on Information Forensics and Security (WIFS), Institute of Electrical and Electronics Engineers (IEEE), 2023. Conference paper (Refereed)
    Abstract [en]

    This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate under a biometric verification setting. LIME was initially proposed for networks with the same output classes used for training, and it employs the softmax probability to determine which regions of the image contribute the most to classification. However, in a verification setting, the classes to be recognized have not been seen during training. In addition, instead of using the softmax output, face descriptors are usually obtained from a layer before the classification layer. The model is adapted to achieve explainability via cosine similarity between feature vectors of perturbed versions of the input image. The method is showcased for face biometrics with two CNN models based on MobileNetv2 and ResNet50. © 2023 IEEE.
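
    A minimal sketch of the adapted explanation procedure, assuming an arbitrary embedding function and a simple grid perturbation in place of superpixels (both assumptions made for brevity, not the paper's exact setup):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def explain_verification(img, ref_emb, embed, grid=(4, 4), n_samples=200, seed=0):
    """LIME-style explanation for a verification setting (illustrative sketch).

    Instead of a softmax score, the cosine similarity between the embedding of
    a perturbed probe image and a fixed reference embedding is regressed on the
    on/off pattern of image regions. `embed` is any function mapping an image
    to a feature vector; a regular grid stands in for superpixels.
    """
    rng = np.random.default_rng(seed)
    H, W = img.shape[:2]
    gh, gw = grid
    masks = rng.integers(0, 2, size=(n_samples, gh * gw))
    sims = np.empty(n_samples)
    for i, m in enumerate(masks):
        pert = img.copy().astype(float)
        for cell, keep in enumerate(m):
            if not keep:                      # grey out the switched-off cell
                r, c = divmod(cell, gw)
                pert[r*H//gh:(r+1)*H//gh, c*W//gw:(c+1)*W//gw] = img.mean()
        sims[i] = cosine(embed(pert), ref_emb)
    # Linear surrogate: region weights that best explain the similarity scores.
    X = np.hstack([masks, np.ones((n_samples, 1))])
    weights, *_ = np.linalg.lstsq(X, sims, rcond=None)
    return weights[:-1].reshape(gh, gw)       # importance map over the grid
```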

  • 8.
    Alonso-Fernandez, Fernando
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Hernandez-Diaz, Kevin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Ramis, Silvia
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Spain.
    Perales, Francisco J.
    Computer Graphics and Vision and AI Group, University of Balearic Islands, Spain.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Facial Masks and Soft-Biometrics: Leveraging Face Recognition CNNs for Age and Gender Prediction on Mobile Ocular Images. 2021. In: IET Biometrics, ISSN 2047-4938, E-ISSN 2047-4946, Vol. 10, no 5, p. 562-580. Article in journal (Refereed)
    Abstract [en]

    We address the use of selfie ocular images captured with smartphones to estimate age and gender. Partial face occlusion has become an issue due to the mandatory use of face masks. Also, the use of mobile devices has exploded, with the pandemic further accelerating the migration to digital services. However, state-of-the-art solutions in related tasks such as identity or expression recognition employ large Convolutional Neural Networks, whose use in mobile devices is infeasible due to hardware limitations and size restrictions of downloadable applications. To counteract this, we adapt two existing lightweight CNNs proposed in the context of the ImageNet Challenge, and two additional architectures proposed for mobile face recognition. Since datasets for soft-biometrics prediction using selfie images are limited, we counteract over-fitting by using networks pre-trained on ImageNet. Furthermore, some networks are further pre-trained for face recognition, for which very large training databases are available. Since both tasks employ similar input data, we hypothesize that such a strategy can be beneficial for soft-biometrics estimation. A comprehensive study of the effects of different pre-training over the employed architectures is carried out, showing that, in most cases, a better accuracy is obtained after the networks have been fine-tuned for face recognition. © The Authors

  • 9.
    Assabie Lake, Yaregal
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Multifont Recognition System for Ethiopic Script. 2006. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis, we present a general framework for a multi-font, multi-size and multi-style Ethiopic character recognition system. We propose structural and syntactic techniques for recognition of Ethiopic characters, where the graphically complex characters are represented by less complex primitive structures and their spatial interrelationships. For each Ethiopic character, the primitive structures and their spatial interrelationships form a unique set of patterns.

    The interrelationships of primitives are represented by a special tree structure which resembles a binary search tree in the sense that it groups child nodes as left and right, and keeps the spatial position of primitives in an orderly manner. For better computational efficiency, the primitive tree is converted into a string pattern using in-order traversal, which generates a base of the alphabet that stores possibly occurring string patterns for each character. The recognition of characters is then achieved by matching the generated patterns with each pattern in a stored knowledge base of characters.

    Structural features are extracted using the direction field tensor, which is also used for character segmentation. In general, the recognition system does not need size normalization, thinning or other preprocessing procedures. The only parameter that needs to be adjusted during the recognition process is the size of the Gaussian window, which should be chosen optimally in relation to font sizes. We also constructed an Ethiopic Document Image Database (EDIDB) from real-life documents, and the recognition system is tested with respect to variations in font type, size, style, document skewness and document type. Experimental results are reported.

  • 10.
    Assabie, Yaregal
    et al.
    Addis Ababa University, Department of Computer Science, Addis Ababa, Ethiopia .
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    A Hybrid System for Robust Recognition of Ethiopic Script. 2007. In: Ninth International Conference on Document Analysis and Recognition: proceedings: Curtiba, Paraná, Brazil, September 23-26, 2007 / [ed] IEEE Computer Society, Los Alamitos, Calif.: IEEE Computer Society, 2007, p. 556-560. Conference paper (Refereed)
    Abstract [en]

    In real life, documents contain several font types, styles, and sizes. However, many character recognition systems show good results for specific types of documents and fail to produce satisfactory results for others. Over the past decades, various pattern recognition techniques have been applied with the aim of developing recognition systems insensitive to variations in the characteristics of documents. In this paper, we present a robust recognition system for Ethiopic script using a hybrid of classifiers. The complex structures of Ethiopic characters are structurally and syntactically analyzed, and represented as a pattern of simpler graphical units called primitives. The pattern is used for classification of characters using similarity-based matching and a neural network classifier. The classification result is further refined by using template matching. A pair of directional filters is used for creating templates and extracting structural features. The recognition system is tested on real-life documents and experimental results are reported.

  • 11.
    Baerveldt, Albert-Jan
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    A vision system for object verification and localization based on local features. 2001. In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 34, no 2-3, p. 83-92. Article in journal (Refereed)
    Abstract [en]

    An object verification and localization system should answer the question whether an expected object is present in an image or not, i.e. verification, and if present, where it is located. Such a system would be very useful for mobile robots, e.g. for landmark recognition or for the fulfilment of certain tasks. In this paper, we present an object verification and localization system specially adapted to the needs of mobile robots. The object model is based on a collection of local features derived from a small neighbourhood around automatically detected interest points. The learned representation of the object is then matched with the image under consideration. The tests, based on 81 images, showed a very satisfying tolerance to scale changes of up to 50%, to viewpoint variations of 20°, to occlusion of up to 80% and to major background changes as well as to local and global illumination changes. The tests also showed that the verification capabilities are very good and that similar objects did not trigger any false verification.

  • 12.
    Bengtsson, Lars
    et al.
    Halmstad University, School of Information Technology.
    Svensson, Bertil
    Halmstad University, School of Information Technology. Chalmers University of Technology, Gothenburg, Sweden.
    Wiberg, Per-Arne
    Halmstad University, School of Information Technology.
    Brains for Autonomous Robots: Hardware and Surgery Tools. 1994. In: Proceedings of PerAc '94. From Perception to Action / [ed] P. Gaussier; J-D. Nicoud, Los Alamitos: IEEE, 1994, p. 436-439. Conference paper (Refereed)
    Abstract [en]

    This paper presents a hardware architecture and a software tool needed for future autonomous robots. Specific attention is given to the execution of artificial neural networks and to the need for a good inspection and visualization tool when developing this kind of system. Achievable performance using state-of-the-art technology is estimated and module miniaturization issues are discussed. © 1994 IEEE.

  • 13.
    Bengtsson, Ola
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Baerveldt, Albert-Jan
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Localization in changing environments by matching laser range scans. 1999. In: 1999 Third European Workshop on Advanced Mobile Robots (Eurobot'99): Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 1999, p. 169-176. Conference paper (Refereed)
    Abstract [en]

    We present a novel scan matching algorithm, IDC-S (Iterative Dual Correspondence-Sector), that matches range scans. The algorithm is based on the known Iterative Dual Correspondence (IDC) algorithm, which has shown good performance in real environments. The improvement is that IDC-S is able to deal with relatively large changes in the environment. It divides the scan into several sectors, detects and removes those sectors that have changed, and matches the scans using only the unchanged sectors. IDC-S and other variants of IDC are extensively simulated and evaluated. The simulations show that IDC-S is very robust and can localize in many different kinds of environments. We also show that it is possible to effectively combine the existing IDC algorithms with IDC-S, thus obtaining an algorithm that performs very well in both rectilinear and non-rectilinear environments, even when as much as 65% of the environment has changed. © 1999 IEEE.
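
    The sector-rejection idea can be sketched as follows. For brevity, the correspondence search of IDC is replaced by point-index correspondence between the two scans, so this is only an illustration of how changed sectors can be detected and discarded, not the IDC-S algorithm itself.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def sector_robust_match(scan_a, scan_b, n_sectors=12, reject_frac=0.3, iters=3):
    """Fit a rigid transform, score each angular sector by its mean residual,
    drop the worst sectors (assumed to be changed parts of the environment),
    and refit using only the remaining sectors."""
    angles = np.arctan2(scan_a[:, 1], scan_a[:, 0])
    sector = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    keep = np.ones(len(scan_a), dtype=bool)
    n_bad = max(1, int(round(reject_frac * n_sectors)))
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        R, t = rigid_fit(scan_a[keep], scan_b[keep])
        res = np.linalg.norm(scan_a @ R.T + t - scan_b, axis=1)
        sec_err = np.array([res[sector == s].mean() if np.any(sector == s) else 0.0
                            for s in range(n_sectors)])
        keep = ~np.isin(sector, np.argsort(sec_err)[-n_bad:])
    return R, t, keep
```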

  • 14.
    Bergman, Lars
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Verikas, Antanas
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Intelligent Monitoring of the Offset Printing Process. 2004. In: Proceedings of the IASTED International Conference on Neural Networks and Computational Intelligence, ACTA Press, 2004, p. 173-178. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a neural networks and image analysis based approach to assessing colour deviations in an offset printing process from direct measurements on halftone multicoloured pictures--there are no measuring areas printed solely to assess the deviations. A committee of neural networks is trained to assess the ink proportions in a small image area. From only one measurement the trained committee is capable of estimating the actual amount of printing inks dispersed on paper in the measuring area. To match the measured image area of the printed picture with the corresponding area of the original image, when comparing the actual ink proportions with the targeted ones, properties of the 2-D Fourier transform are exploited.

  • 15.
    Bergman, Lars
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Verikas, Antanas
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Bacauskiene, M.
    Department of Applied Electronics, Kaunas University of Technology.
    Unsupervised colour image segmentation applied to printing quality assessment. 2005. In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 23, no 4, p. 417-425. Article in journal (Refereed)
    Abstract [en]

    We present an option for colour image segmentation applied to printing quality assessment in offset lithographic printing by measuring an average ink dot size in halftone pictures. The segmentation is accomplished in two stages through classification of image pixels. In the first stage, rough image segmentation is performed. The results of the first segmentation stage are then utilized to collect a balanced training data set for learning refined parameters of the decision rules. The developed software is successfully used in a printing shop to assess the ink dot size on paper and printing plates.

  • 16.
    Bernard, Florian
    et al.
    MPI Informatics, Saarland Informatics Campus, Saarbrücken, Germany.
    Thunberg, Johan
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Goncalves, Jorge
    LCSB, University of Luxembourg, Esch-sur-Alzette, Luxembourg.
    Theobalt, Christian
    MPI Informatics, Saarland Informatics Campus, Saarbrücken, Germany.
    Synchronisation of partial multi-matchings via non-negative factorisations. 2019. In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 92, p. 146-155. Article in journal (Refereed)
    Abstract [en]

    In this work we study permutation synchronisation for the challenging case of partial permutations, which plays an important role for the problem of matching multiple objects (e.g. images or shapes). The term synchronisation refers to the property that the set of pairwise matchings is cycle-consistent, i.e. in the full matching case all compositions of pairwise matchings over cycles must be equal to the identity. Motivated by clustering and matrix factorisation perspectives of cycle-consistency, we derive an algorithm to tackle the permutation synchronisation problem based on non-negative factorisations. In order to deal with the inherent non-convexity of the permutation synchronisation problem, we use an initialisation procedure based on a novel rotation scheme applied to the solution of the spectral relaxation. Moreover, this rotation scheme facilitates a convenient Euclidean projection to obtain a binary solution after solving our relaxed problem. In contrast to state-of-the-art methods, our approach is guaranteed to produce cycle-consistent results. We experimentally demonstrate the efficacy of our method and show that it achieves better results compared to existing methods. © 2019 Elsevier Ltd
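
    The cycle-consistency property referred to above can be made concrete with a small check over triples of objects (full-matching case; illustrative code, not the paper's factorisation algorithm):

```python
import numpy as np

def is_cycle_consistent(P, tol=1e-9):
    """Check cycle-consistency of a set of full pairwise permutation matrices.

    P maps an ordered pair (i, j) to the permutation matrix taking object i's
    points to object j's points. Consistency over all 3-cycles,
    P[(j, k)] @ P[(i, j)] == P[(i, k)], implies consistency over longer cycles
    in the full-matching case.
    """
    objs = sorted({i for i, _ in P} | {j for _, j in P})
    for i in objs:
        for j in objs:
            for k in objs:
                if (i, j) in P and (j, k) in P and (i, k) in P:
                    if not np.allclose(P[(j, k)] @ P[(i, j)], P[(i, k)], atol=tol):
                        return False
    return True

# Tiny example: three objects with consistent pairwise permutations.
I = np.eye(3)
swap = I[:, [1, 0, 2]]
print(is_cycle_consistent({(0, 1): I, (1, 2): swap, (0, 2): swap}))  # True
```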

  • 17.
    Bigun, Josef
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Fronthaler, Hartwig
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Kollreider, Klaus
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Assuring liveness in biometric identity authentication by real-time face tracking. 2004. In: CIHSPS 2004: proceedings of the 2004 IEEE International Conference on Computational Intelligence for Homeland Security and Personal Safety: S. Giuliano, Venice, Italy, 21-22 July 2004 / [ed] IEEE, Piscataway, N.J.: IEEE Press, 2004, p. 104-111. Conference paper (Refereed)
    Abstract [en]

    A system that combines real-time face tracking as well as the localization of facial landmarks in order to improve the authenticity of fingerprint recognition is introduced. The intended purpose of this application is to assist in securing public areas and individuals, in addition to enforcing that the collected sensor data in a multi-modal person authentication system originate from present persons, i.e., that the system is not under a so-called playback attack. Facial features are extracted with the help of Gabor filters and classified by SVM experts. For real-time performance, selected points from a retinotopic grid are used to form regional face models. Additionally, only a subset of the Gabor decomposition is used for different face regions. The second modality presented is texture-based fingerprint recognition, exploiting linear symmetry. Experimental results on the proposed system are presented.

  • 18.
    Bigun, Josef
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Gustavsson, Tomas
    Chalmers University of Technology, Department of Signals and Systems, Gothenburg, Sweden.
    Image analysis: 13th Scandinavian Conference, SCIA 2003, Halmstad, Sweden, June 29-July 2, 2003, Proceedings. 2003. Conference proceedings (editor) (Other academic)
    Abstract [en]

    This book constitutes the refereed proceedings of the 13th Scandinavian Conference on Image Analysis, SCIA 2003, held in Halmstad, Sweden in June/July 2003. The 148 revised full papers presented together with 6 invited contributions were carefully reviewed and selected for presentation. The papers are organized in topical sections on feature extraction, depth and surface, shape analysis, coding and representation, motion analysis, medical image processing, color analysis, texture analysis, indexing and categorization, and segmentation and spatial grouping.

  • 19.
    Bigun, Josef
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Malmqvist, Kerstin
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Proceedings: Symposium on image analysis, Halmstad March 7-8, 2000. 2000. Conference proceedings (editor) (Other academic)
  • 20.
    Bigun, Josef
    et al.
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Verikas, Antanas
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems' laboratory.
    Proceedings SSBA '09: Symposium on Image Analysis, Halmstad University, Halmstad, March 18-20, 2009. 2009. Conference proceedings (editor) (Other academic)
  • 21.
    Bouguerra, Abdelbaki
    et al.
    Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden.
    Andreasson, Henrik
    Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden.
    Lilienthal, Achim J.
    Centre for Applied Autonomous Sensor Systems (AASS), Örebro University, Sweden.
    Åstrand, Björn
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Rögnvaldsson, Thorsteinn
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    An autonomous robotic system for load transportation. 2009. In: IEEE Conference on Emerging Technologies & Factory Automation, 2009. ETFA 2009, Piscataway, N.J.: IEEE Press, 2009, p. 1-4. Conference paper (Refereed)
    Abstract [en]

    This paper presents an overview of an autonomous robotic system for material handling. The system is being developed by extending the functionalities of traditional AGVs to be able to operate reliably and safely in highly dynamic environments. Traditionally, the reliable functioning of AGVs relies on the availability of adequate infrastructure to support navigation. In the target environments of our system, such infrastructure is difficult to set up in an efficient way. Additionally, the locations of objects to handle are unknown, which requires runtime object detection and tracking. Another requirement to be fulfilled by the system is the ability to generate trajectories dynamically, which is uncommon in industrial AGV systems. ©2009 IEEE.

  • 22.
    Calvo, Rodrigo
    et al.
    Department of Computer Science and Statistics, University of São Paulo, Sao Carlos - SP, Brazil.
    Figueiredo, Maurício Fernandes
    Department of Computer Science, State University of Maringa, Maringa - PR, Brazil.
    Antonelo, Eric Aislan
    Halmstad University, School of Information Technology.
    Evolutionary fuzzy system for architecture control in a constructive neural network. 2005. In: Proceedings 2005 IEEE International Symposium on Computational Intelligence in Robotics and Automation: CIRA 2005. June 27-30, 2005. Espoo, Finland, New York, NY: Institute of Electrical and Electronics Engineers (IEEE), 2005, p. 541-546. Conference paper (Refereed)
    Abstract [en]

    This work describes an evolutionary system to control the growth of a constructive neural network for autonomous navigation. A classifier system generates Takagi-Sugeno fuzzy rules and controls the architecture of a constructive neural network. The performance of the mobile robot guides the evolutionary learning mechanism. Experiments show the efficiency of the classifier fuzzy system for analyzing if it is worth inserting a new neuron into the architecture. ©2005 IEEE.

  • 23.
    Cooney, Martin
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Berck, Peter
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Designing a Robot Which Paints With a Human: Visual Metaphors to Convey Contingency and Artistry. 2019. Conference paper (Refereed)
    Abstract [en]

    Socially assistive robots could contribute to fulfilling an important need for interaction in contexts where human caregivers are scarce–such as art therapy, where peers, or patients and therapists, can make art together. However, current art-making robots typically generate art either by themselves, or as tools under the control of a human artist; how to make art together with a human in a good way has not yet received much attention, possibly because some concepts related to art, such as emotion and creativity, are not yet well understood. The current work reports on our use of a collaborative prototyping approach to explore this concept of a robot which can paint together with people. The result is a proposed design, based on an idea of using visual metaphors to convey contingency and artistry. Our aim is that the identified considerations will help support next steps, toward supporting positive experiences for people through art-making with a robot.

  • 24.
    Cooney, Martin
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    PastVision+: Thermovisual Inference of Recent Medicine Intake by Detecting Heated Objects and Cooled Lips. 2017. In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 4, article id 61. Article in journal (Refereed)
    Abstract [en]

    This article addresses the problem of how a robot can infer what a person has done recently, with a focus on checking oral medicine intake in dementia patients. We present PastVision+, an approach showing how thermovisual cues in objects and humans can be leveraged to infer recent unobserved human-object interactions. Our expectation is that this approach can provide enhanced speed and robustness compared to existing methods, because our approach can draw inferences from single images without needing to wait to observe ongoing actions and can deal with short-lasting occlusions; when combined, we expect a potential improvement in accuracy due to the extra information from knowing what a person has recently done. To evaluate our approach, we obtained some data in which an experimenter touched medicine packages and a glass of water to simulate intake of oral medicine, for a challenging scenario in which some touches were conducted in front of a warm background. Results were promising, with a detection accuracy of touched objects of 50% at the 15 s mark and 0% at the 60 s mark, and a detection accuracy of cooled lips of about 100 and 60% at the 15 s mark for cold and tepid water, respectively. Furthermore, we conducted a follow-up check for another challenging scenario in which some participants pretended to take medicine or otherwise touched a medicine package: accuracies of inferring object touches, mouth touches, and actions were 72.2, 80.3, and 58.3% initially, and 50.0, 81.7, and 50.0% at the 15 s mark, with a rate of 89.0% for person identification. The results suggested some areas in which further improvements would be possible, toward facilitating robot inference of human actions, in the context of medicine intake monitoring.

  • 25.
    Cooney, Martin
    et al.
    Halmstad University, School of Information Technology.
    Järpe, Eric
    Halmstad University, School of Information Technology.
    Vinel, Alexey
    Halmstad University, School of Information Technology.
    “Robot Steganography”: Opportunities and Challenges. 2022. In: Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART / [ed] Ana Paula Rocha; Luc Steels; Jaap van den Herik, Setúbal: SciTePress, 2022, p. 200-207. Conference paper (Refereed)
    Abstract [en]

    Robots are being designed to communicate with people in various public and domestic venues in a perceptive, helpful, and discreet way. Here, we use a speculative prototyping approach to shine light on a new concept of robot steganography (RS): that a robot could seek to help vulnerable populations by discreetly warning of potential threats: We first identify some potentially useful scenarios for RS related to safety and security– concerns that are estimated to cost the world trillions of dollars each year–with a focus on two kinds of robots, a socially assistive robot (SAR) and an autonomous vehicle (AV). Next, we propose that existing, powerful, computer-based steganography (CS) approaches can be adopted with little effort in new contexts (SARs), while also pointing out potential benefits of human-like steganography (HS): Although less efficient and robust than CS, HS represents a currently-unused form of RS that could also be used to avoid requiring a computer to receive messages, detection by more technically advanced adversaries, or a lack of alternative connectivity (e.g., if a wireless channel is being jammed). Some unique challenges of RS are also introduced, that arise from message generation, indirect perception, and effects of perspective. Finally, we confirm the feasibility of the basic concept for RS, that messages can be hidden in a robot’s behaviors, via a simplified, initial user study, also making available some code and a video. The immediate implication is that RS could potentially help to improve people’s lives and mitigate some costly problems, as robots become increasingly prevalent in our society–suggesting the usefulness of further discussion, ideation, and consideration by designers.

  • 26.
    Cooney, Martin
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Karlsson, Stefan M.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Impressions of Size-Changing in a Companion Robot. 2015. In: PhyCS 2015 – 2nd International Conference on Physiological Computing Systems, Proceedings / [ed] Hugo Plácido da Silva, Pierre Chauvet, Andreas Holzinger, Stephen Fairclough & Dennis Majoe, SciTePress, 2015, p. 118-123. Conference paper (Refereed)
    Abstract [en]

    Physiological data such as head movements can be used to intuitively control a companion robot to perform useful tasks. We believe that some tasks such as reaching for high objects or getting out of a person’s way could be accomplished via size changes, but such motions should not seem threatening or bothersome. To gain insight into how size changes are perceived, the Think Aloud Method was used to gather typical impressions of a new robotic prototype which can expand in height or width based on a user’s head movements. The results indicate promise for such systems, also highlighting some potential pitfalls.

  • 27.
    Cooney, Martin
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Leister, Wolfgang
    Norsk Regnesentral, Oslo, Norway.
    Using the Engagement Profile to Design an Engaging Robotic Teaching Assistant for Students. 2019. In: Robotics, E-ISSN 2218-6581, Vol. 8, no 1, article id 21. Article in journal (Refereed)
    Abstract [en]

    We report on an exploratory study conducted at a graduate school in Sweden with a humanoid robot, Baxter. First, we describe a list of potentially useful capabilities for a robot teaching assistant derived from brainstorming and interviews with faculty members, teachers, and students. These capabilities consist of reading educational materials out loud, greeting, alerting, allowing remote operation, providing clarifications, and moving to carry out physical tasks. Secondly, we present feedback on how the robot's capabilities, demonstrated in part with the Wizard of Oz approach, were perceived, and iteratively adapted over the course of several lectures, using the Engagement Profile tool. Thirdly, we discuss observations regarding the capabilities and the development process. Our findings suggest that using a social robot as a teaching assistant is promising with the chosen capabilities and the Engagement Profile tool. We find that enhancing the robot's autonomous capabilities and further investigating the role of embodiment are some important topics to be considered in future work. © 2019 by the authors.

  • 28.
    Cooney, Martin
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Pashami, Sepideh
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Järpe, Eric
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Ashfaq, Awais
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Ong, Linda
    I+ srl, Florence, Italy.
    Avoiding Improper Treatment of Persons with Dementia by Care Robots. 2019. Conference paper (Refereed)
    Abstract [en]

    The phrase “most cruel and revolting crimes” has been used to describe some poor historical treatment of vulnerable impaired persons by precisely those who should have had the responsibility of protecting and helping them. We believe we might be poised to see history repeat itself, as increasingly humanlike aware robots become capable of engaging in behavior which we would consider immoral in a human–either unknowingly or deliberately. In the current paper we focus in particular on exploring some potential dangers affecting persons with dementia (PWD), which could arise from insufficient software or external factors, and describe a proposed solution involving rich causal models and accountability measures: Specifically, the Consequences of Needs-driven Dementia-compromised Behaviour model (C-NDB) could be adapted to be used with conversation topic detection, causal networks and multi-criteria decision making, alongside reports, audits, and deterrents. Our aim is that the considerations raised could help inform the design of care robots intended to support well-being in PWD.

  • 29.
    Cortinhal, Tiago
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Kurnaz, Fatih
    Middle East Technical University, Ankara, Turkey.
    Aksoy, Eren
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Semantics-aware Multi-modal Domain Translation: From LiDAR Point Clouds to Panoramic Color Images. 2021. In: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Los Alamitos: IEEE Computer Society, 2021, p. 3032-3041. Conference paper (Refereed)
    Abstract [en]

    In this work, we present a simple yet effective framework to address the domain translation problem between different sensor modalities with unique data formats. By relying only on the semantics of the scene, our modular generative framework can, for the first time, synthesize a panoramic color image from a given full 3D LiDAR point cloud. The framework starts with semantic segmentation of the point cloud, which is initially projected onto a spherical surface. The same semantic segmentation is applied to the corresponding camera image. Next, our new conditional generative model adversarially learns to translate the predicted LiDAR segment maps to the camera image counterparts. Finally, generated image segments are processed to render the panoramic scene images. We provide a thorough quantitative evaluation on the SemanticKitti dataset and show that our proposed framework outperforms other strong baseline models. Our source code is available at https://github.com/halmstad-University/TITAN-NET. © 2021 IEEE.
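
    The initial spherical projection of the point cloud can be illustrated in a few lines of NumPy. The image resolution and vertical field-of-view values below are typical sensor assumptions, not necessarily the settings used in the paper.

```python
import numpy as np

def spherical_projection(points, h=64, w=2048, fov_up=3.0, fov_down=-25.0):
    """Project a 3D LiDAR point cloud (N x 3) onto an (h, w) range image.

    A generic spherical projection as commonly used for SemanticKITTI-style
    data, sketched to illustrate the first stage of the pipeline.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-12
    yaw = np.arctan2(y, x)                         # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                       # elevation angle
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (pitch - fd) / (fu - fd)) * (h - 1)).round().astype(int)
    v = ((0.5 * (1.0 - yaw / np.pi)) * (w - 1)).round().astype(int)
    u, v = np.clip(u, 0, h - 1), np.clip(v, 0, w - 1)
    image = np.zeros((h, w), dtype=float)
    image[u, v] = r                                # store range as pixel value
    return image
```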

  • 30.
    David, Jennifer
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Valencia, Rafael
    Carnegie Mellon University, Pittsburgh, USA.
    Philippsen, Roland
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Bosshard, Pascal
    Autonomous System Lab, ETH Zurich, Switzerland.
    Iagnemma, Karl
    Massachusetts Institute of Technology, Cambridge, MA, USA.
    Gradient Based Path Optimization Method for Autonomous Driving. 2017. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Piscataway, NJ: IEEE, 2017, p. 4501-4508. Conference paper (Refereed)
    Abstract [en]

    This paper discusses the possibilities of extending and adapting the CHOMP motion planner to work with a non-holonomic vehicle such as an autonomous truck with a single trailer. A detailed study has been done to find out the different ways of implementing these constraints on the motion planner. CHOMP, which is a successful motion planner for articulated robots, produces very fast and collision-free trajectories. This property is important for a local path adaptor in multi-vehicle path planning, where path conflicts must be resolved very quickly, and hence CHOMP was adapted. Secondly, this paper also details the experimental integration of the modified CHOMP with the sensor fusion and control system of an autonomous Volvo FH-16 truck. Integration experiments were conducted in a real-time environment with the developed autonomous truck. Finally, additional simulations were also conducted to compare the performance of the different approaches developed, to study the feasibility of employing CHOMP for autonomous vehicles. ©2017 IEEE

  • 31.
    David, Jennifer
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Valencia, Rafael
    Carnegie Mellon University, Pittsburgh, USA.
    Philippsen, Roland
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Center for Applied Intelligent Systems Research.
    Iagnemma, Karl
    Massachusetts Institute of Technology, Cambridge, USA.
    Local Path Optimizer for an Autonomous Truck in a Harbour Scenario. 2017. In: Field and Service Robotics: Results of the 11th International Conference / [ed] Marco Hutter; Roland Siegwart, Springer Publishing Company, 2017. Conference paper (Refereed)
    Abstract [en]

    Recently, functional gradient algorithms like CHOMP have been very successful in producing locally optimal motion plans for articulated robots. In this paper, we have adapted CHOMP to work with a non-holonomic vehicle such as an autonomous truck with a single trailer and a differential drive robot. An extended CHOMP with rolling constraints has been implemented on both of these setups, which yielded feasible curvatures. This paper details the experimental integration of the extended CHOMP motion planner with the sensor fusion and control system of an autonomous Volvo FH-16 truck. It also explains the experiments conducted on the differential-drive robot. Initial experimental investigations and results conducted in a real-world environment show that CHOMP can produce smooth and collision-free trajectories for mobile robots and vehicles as well. In conclusion, this paper discusses the feasibility of employing CHOMP for mobile robots. © 2018, Springer International Publishing AG.

  • 32.
    Ejnarsson, Marcus
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Nilsson, Carl Magnus
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Verikas, Antanas
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    A Kernel based multi-resolution time series analysis for screening deficiencies in paper production. 2006. In: Advances in neural networks - ISNN 2006: third International Symposium on Neural Networks, Chengdu, China, May 28 - June 1, 2006; proceedings. III / [ed] Jun Wang, Berlin: Springer Berlin/Heidelberg, 2006, p. 1111-1116. Conference paper (Refereed)
    Abstract [en]

    This paper is concerned with a multi-resolution tool for analysis of a time series aiming to detect abnormalities in various frequency regions. The task is treated as kernel-based novelty detection applied to a multi-level time series representation obtained from the discrete wavelet transform. Having a priori knowledge that the abnormalities manifest themselves in several frequency regions, a committee of detectors utilizing data-dependent aggregation weights is built by combining outputs of detectors operating in those regions.
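
    The following sketch shows the idea with PyWavelets and scikit-learn: one kernel novelty detector per detail level of the discrete wavelet transform, with the detector outputs aggregated by (here fixed) weights. The band-wise features and all names are assumptions chosen for brevity, not the paper's exact construction.

```python
import numpy as np
import pywt
from sklearn.svm import OneClassSVM

def train_detectors(series_list, wavelet="db4", level=4):
    """Fit one kernel novelty detector per wavelet detail level (illustrative).

    Each normal training series is decomposed with the DWT, and a One-Class
    SVM is fitted to simple statistics of every detail band.
    """
    feats = {lvl: [] for lvl in range(1, level + 1)}
    for s in series_list:
        coeffs = pywt.wavedec(s, wavelet, level=level)   # [cA, cD_level, ..., cD_1]
        for lvl, d in enumerate(reversed(coeffs[1:]), start=1):
            feats[lvl].append([np.mean(np.abs(d)), np.std(d), np.max(np.abs(d))])
    return {lvl: OneClassSVM(kernel="rbf", nu=0.05).fit(np.array(f))
            for lvl, f in feats.items()}

def committee_score(series, detectors, wavelet="db4", level=4, weights=None):
    """Aggregate band-wise decision scores; a negative output flags an anomaly."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    scores = []
    for lvl, d in enumerate(reversed(coeffs[1:]), start=1):
        f = np.array([[np.mean(np.abs(d)), np.std(d), np.max(np.abs(d))]])
        scores.append(detectors[lvl].decision_function(f)[0])
    w = np.ones(len(scores)) / len(scores) if weights is None else np.asarray(weights)
    return float(np.dot(w, scores))
```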

  • 33.
    Elofsson, Max
    et al.
    Halmstad University, School of Information Technology.
    Larsson, Victor
    Halmstad University, School of Information Technology.
    Reusage classification of damaged Paper Cores using Supervised Machine Learning. 2023. Independent thesis Basic level (university diploma), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This paper describes a project exploring the possibility of assessing paper core reusability by measuring chuck damage with a 3D sensor and using Machine Learning to classify reuse. The paper cores are part of a rolling/unrolling system at a paper mill, where a chuck is used to slow and eventually stop the revolving paper core, which creates damage that at a certain point is too severe for reuse. The 3D sensor used is a TriSpector1008 from SICK, based on active triangulation through laser line projection and optic sensing. A number of paper cores with damages varying in severity, labeled approved or unapproved for further use, was provided. Supervised Learning in the form of K-NN, Support Vector Machine, Decision Trees and Random Forest was used to binary-classify the dataset based on readings from the sensor. Features were extracted from these readings based on the spatial and frequency domain of each reading in an experimental way. Classification of reuse was previously done through thresholding on internal features in the sensor software. The goal of the project is to unify the decision-making protocol/system, with economic, environmental and sustainable waste management benefits. K-NN was found to be best suited in our case. Features for the standard deviation of the calculated depth obtained from the readings performed best and led to a zero false positive rate and a recall score of 99.14%, outperforming the compared threshold system.
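
    A compact scikit-learn sketch of such a pipeline is shown below. The feature set, the data layout and the train/test split are assumed for illustration and differ from the project's exact features and evaluation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import recall_score, confusion_matrix

def depth_features(profile):
    """Toy spatial/frequency features from one 1D depth profile; the real
    project extracts features from TriSpector1008 readings."""
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    return [profile.std(), np.ptp(profile), spectrum[1:10].sum()]

def train_reuse_classifier(profiles, labels, k=5):
    """Fit a k-NN approved/unapproved classifier on hand-crafted depth features."""
    X = np.array([depth_features(p) for p in profiles])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                              stratify=labels, random_state=0)
    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    clf.fit(X_tr, y_tr)
    y_hat = clf.predict(X_te)
    return clf, recall_score(y_te, y_hat), confusion_matrix(y_te, y_hat)
```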

  • 34.
    Englund, Cristofer
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Modelling and controlling an offset lithographic printing process. 2007. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The objective of this thesis is to provide methods for print quality enhancements in an offset lithographic printing process. Various parameters characterising the print quality are recognised; however, in this work print quality is defined as the deviation of the amount of ink in a sample image from the reference print.

    The methods developed are model-based, and historical data collected at the printing press are used to build the models. Inherent in the historical process data are outliers owing to sensor faults, measurement errors and impurity of the material used. It is essential to detect and remove these outliers to avoid using them to update the process models. A process-model-based outlier detection tool has been proposed. Several diagnostic measures are combined via a neural network to achieve robust data categorisation into inlier and outlier classes.

    To cope with the slow variation in printing process data, a SOM-based data mining and adaptive modelling technique has been proposed. The technique continuously updates the data set characterising the process, and the process models if they become out of date. A SOM-based approach to model combination has been proposed to permit the creation of adaptive, data-dependent committees.

    A multiple models-based controller, which employs the process models developed, is combined with an integrating controller to achieve robust ink feed control. Results have shown that the robust ink feed controller is capable of controlling the ink feed in the newspaper printing press according to the desired process output. Based on the process modelling, techniques have also been developed for initialising the printing press in order to reduce the time needed to achieve the desired print quality. The use of the developed methods and tools at a print shop in Halmstad, Sweden, resulted in higher print quality and lower ink and paper waste.

  • 35.
    Englund, Cristofer
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Modelling the offset lithographic printing process2006Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    A concept for data management and adaptive modelling of the offset lithographic printing process is proposed. Artificial neural networks built from historical process data are used to model the offset printing process aiming to develop tools for online ink flow control.

    Inherent in the historical data are outliers owing to sensor faults, measurement errors and impurity of the materials used. It is fundamental to identify outliers in process data in order to avoid using these data points for updating the model. In this work, a hybrid, process-model- and network-based technique for outlier detection is proposed. Several diagnostic measures are aggregated via a neural network to categorize the data points into the outlier or inlier classes. Experimentally it was demonstrated that a fuzzy expert can be configured to label data for training the categorization neural network.

    A SOM-based model combination strategy, allowing the creation of adaptive - data dependent - committees, is proposed to build models used for printing press initialization. Both the number of models included in a committee and the aggregation weights are specific to each input data point analyzed.

    The printing process is constantly changing due to wear, seasonal changes, duration of print jobs, etc. Consequently, models trained on historical data become out of date with time and need to be updated. Therefore, a data mining and adaptive modelling approach has been proposed. The experimental investigations performed have shown that the tools developed can follow the process changes and make appropriate adaptations of the data set and the process models. A low process modelling error has been obtained by employing data dependent committees.
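
    The data-dependent committee idea above can be sketched as follows, with KMeans prototypes standing in for a SOM grid (a plainly simplified substitute): each query is predicted by local models whose aggregation weights decay with the distance from the query to the corresponding prototype. All data and model choices are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        X = rng.uniform(-3, 3, (300, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

        # Partition the input space into prototype regions (KMeans stands in for a SOM)
        # and train one local model per region.
        proto = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
        members = [LinearRegression().fit(X[proto.labels_ == k], y[proto.labels_ == k])
                   for k in range(5)]

        def committee_predict(x):
            # Data-dependent aggregation: weights depend on the query's distance to
            # each prototype, so the effective committee is specific to each query.
            d = np.linalg.norm(proto.cluster_centers_ - x, axis=1)
            w = np.exp(-d ** 2)
            w /= w.sum()
            return sum(wk * m.predict(x.reshape(1, -1))[0] for wk, m in zip(w, members))

        print(committee_predict(np.array([1.0])))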

  • 36.
    Englund, Cristofer
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Verikas, Antanas
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    A hybrid approach to outlier detection in the offset lithographic printing process2005In: Engineering applications of artificial intelligence, ISSN 0952-1976, E-ISSN 1873-6769, Vol. 18, no 6, p. 759-768Article in journal (Refereed)
    Abstract [en]

    Artificial neural networks are used to model the offset printing process aiming to develop tools for on-line ink feed control. Inherent in the modelling data are outliers owing to sensor faults, measurement errors and impurity of materials used. It is fundamental to identify outliers in process data in order to avoid using these data points for updating the model. We present a hybrid, process-model- and network-based technique for outlier detection. The outliers can then be removed to improve the process model. Several diagnostic measures are aggregated via a neural network to categorize data points into the outlier and inlier classes. We demonstrate experimentally that a soft fuzzy expert can be configured to label data for training the categorization neural network.
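
    A hedged sketch of the aggregation step described above: a handful of invented diagnostic measures are combined by a small neural network into inlier/outlier decisions, with training labels produced by a simple graded scorer standing in for the fuzzy expert. All measures, thresholds and weights are illustrative assumptions, not the paper's configuration.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(2)
        n = 1000
        # Illustrative diagnostic measures per data point, e.g. model residual,
        # distance to the nearest training sample, and a sensor plausibility score.
        residual = np.abs(rng.normal(0, 1, n)) + rng.binomial(1, 0.05, n) * rng.uniform(3, 6, n)
        novelty = rng.exponential(1.0, n)
        plaus = rng.uniform(0, 1, n)
        D = np.column_stack([residual, novelty, plaus])

        # Graded scorer standing in for the fuzzy expert: soft memberships are
        # combined and thresholded to produce training labels (1 = outlier).
        score = (1 / (1 + np.exp(-(residual - 2.5)))) * 0.6 \
                + (1 - plaus) * 0.2 + np.tanh(novelty / 3) * 0.2
        labels = (score > 0.5).astype(int)

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(D, labels)
        print("flagged outliers:", int(clf.predict(D).sum()), "of", n)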

  • 37.
    Ericson, Stefan
    et al.
    University of Skövde, Skövde, Sweden.
    Åstrand, Björn
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Algorithms for Visual Odometry in Outdoor Field Environment2007In: Proceedings of the 13th IASTED International Conference on Robotics and Applications: August 29 - 31, 2007, Würzburg, Germany / [ed] Schilling, K, Anaheim, Calif.: ACTA Press, 2007, p. 287-292Conference paper (Refereed)
    Abstract [en]

    In this paper, different algorithms for visual odometry are evaluated for navigating an agricultural weeding robot in an outdoor field environment. Today an encoder wheel keeps track of the weeding tool's position relative to the camera, but the system suffers from wheel slippage and errors caused by the uneven terrain. To overcome these difficulties the aim is to replace the encoders with visual odometry using the plant recognition camera. Four different optical flow algorithms are tested on four different surfaces: indoor carpet, outdoor asphalt, grass and soil. The tests are performed on an experimental platform. The results show that the errors consist mainly of dropouts caused by exceeding the maximum speed, and of calibration errors due to uneven ground. The number of dropouts can be reduced by limiting the maximum speed and detecting missing frames. The calibration problem can be solved using stereo cameras; this gives a height measurement, and the calibration will be given by the camera mounting. The algorithm using normalized cross-correlation shows the best result concerning number of dropouts, accuracy and calculation time.
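
    The normalized cross-correlation approach highlighted above can be illustrated with a minimal block-matching sketch using OpenCV's template matching; the block size, the synthetic frames and the known shift are assumptions for demonstration, not the paper's implementation.

        import numpy as np
        import cv2

        def ncc_displacement(prev_frame, next_frame, block=64):
            # Estimate the dominant displacement between two grayscale frames by
            # matching a central block with normalized cross-correlation.
            h, w = prev_frame.shape
            y0, x0 = (h - block) // 2, (w - block) // 2
            template = prev_frame[y0:y0 + block, x0:x0 + block]
            ncc = cv2.matchTemplate(next_frame, template, cv2.TM_CCORR_NORMED)
            _, _, _, (bx, by) = cv2.minMaxLoc(ncc)      # best-match location (x, y)
            return bx - x0, by - y0                     # displacement in pixels

        # Synthetic test: shift a random texture by (dx, dy) = (3, 5) pixels.
        frame = np.random.rand(240, 320).astype(np.float32)
        shifted = np.roll(np.roll(frame, 5, axis=0), 3, axis=1)
        print(ncc_displacement(frame, shifted))         # expected roughly (3, 5)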

  • 38.
    Faraj, Maycel Isaac
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Lip-motion and speech biometrics in person recognition2006Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Biometric identification techniques are frequently used to improve security, e.g. in financial transactions, computer networks and secure critical locations. The purpose of biometric authentication systems is to verify an individual by her biological characteristics, including those generating characteristic behaviour. It is not only fingerprints that are used for authentication; our lips, eyes, speech, signatures and even facial temperature are now being used to identify us. This presumably increases security since these traits are harder to copy, steal or lose.

    This thesis presents an effective scheme to extract discriminative features based on a novel motion estimation algorithm for lip movement. Motion is defined as the distribution of apparent velocities in the changes of brightness patterns in an image. The velocity components of a lip sequence are computed by the well-known 3D structure tensor using 1D processing, in 2D manifolds. Since the velocities are computed without extracting the speaker's lip contours, more robust visual features can be obtained. The velocity estimation is performed in rectangular lip regions, which affords increased computational efficiency.

    To investigate the usefulness of the proposed motion features we implement a person authentication system based on lip-movement information with (and without) speech information. It yields a speaker verification rate of 98% with lip and speech information. Comparisons are made with an alternative motion estimation technique, and a description of our proposed feature fusion technique is given. Besides its value in authentication, the technique can be used naturally to evaluate liveness, i.e. to determine whether the biometric data are captured from a legitimate, live user who is physically present at the point of acquisition, as it can be used in a text-prompted dialogue.
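
    A hedged sketch related to the velocity estimation described above: a least-squares velocity estimate from spatio-temporal gradients in a small window, which is the standard 2D structure-tensor formulation and only a simplified stand-in for the thesis's 1D-processed 3D tensor; the test images and the shift are synthetic assumptions.

        import numpy as np

        def local_velocity(prev_img, next_img, y, x, half=7):
            # Least-squares flow in a window from spatio-temporal gradients
            # (the 2D analogue of structure-tensor-based velocity estimation).
            Iy, Ix = np.gradient(prev_img.astype(float))
            It = next_img.astype(float) - prev_img.astype(float)
            sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
            A = np.column_stack([Ix[sl].ravel(), Iy[sl].ravel()])
            b = -It[sl].ravel()
            v, *_ = np.linalg.lstsq(A, b, rcond=None)
            return v                                     # (vx, vy) in pixels/frame

        # Smooth synthetic pattern shifted 0.5 pixel to the right.
        yy, xx = np.mgrid[0:64, 0:64].astype(float)
        img = np.sin(0.2 * xx) + np.cos(0.15 * yy)
        moved = np.sin(0.2 * (xx - 0.5)) + np.cos(0.15 * yy)
        print(local_velocity(img, moved, 32, 32))        # roughly (0.5, 0)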

  • 39.
    Faraj, Maycel Isaac
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Lip-motion biometrics for audio-visual identity recognition2008Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Biometric recognition systems have been established as powerful security tools to prevent unknown users from entering high-risk systems and areas. They are increasingly being utilized in surveillance and access management (city centers, banks, etc.) by using individuals' physical or biological characteristics. The present study reports on the use of lip motion as a stand-alone biometric modality as well as a modality integrated with audio speech for identity and digit recognition. First, we estimate motion vectors from a sequence of lip-movement images. The motion is modelled as the distribution of apparent line velocities in the movement of brightness patterns in an image. Then, we construct compact lip-motion features from the regional statistics of the local velocities. These can be used alone or merged with audio features to recognize individuals or speech (digits). In this work, we utilized two classifiers for identification and verification of identity as well as for digit recognition. Although the study is focused on processing lip movements in a video sequence, significant speech processing is a prerequisite, given that the contribution of video analysis to speech analysis is studied in conjunction with recognition of humans and what they say (digits). Such integration is necessary to understand multimodal biometric systems, to the benefit of recognition performance and robustness against noise. Extensive experiments utilizing one of the largest available databases, XM2VTS, are presented.

  • 40.
    Faraj, Maycel Isaac
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Audio–visual person authentication using lip-motion from orientation maps2007In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 28, no 11, p. 1368-1382Article in journal (Refereed)
    Abstract [en]

    This paper describes a new identity authentication technique by a synergetic use of lip-motion and speech. The lip-motion is defined as the distribution of apparent velocities in the movement of brightness patterns in an image and is estimated by computing the velocity components of the structure tensor by 1D processing, in 2D manifolds. Since the velocities are computed without extracting the speaker’s lip-contours, more robust visual features can be obtained in comparison to motion features extracted from lip-contours. The motion estimations are performed in a rectangular lip-region, which affords increased computational efficiency. A person authentication implementation based on lip-movements and speech is presented along with experiments exhibiting a recognition rate of 98%. Besides its value in authentication, the technique can be used naturally to evaluate the “liveness” of someone speaking as it can be used in text-prompted dialogue. The XM2VTS database was used for performance quantification as it is currently the largest publicly available database (≈300 persons) containing both lip-motion and speech. Comparisons with other techniques are presented.

  • 41.
    Faraj, Maycel Isaac
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Lip Biometrics for Digit Recognition2007In: Computer Analysis of Images and Patterns, Proceedings, Berlin: Springer Berlin/Heidelberg, 2007, Vol. 4673, p. 360-365Conference paper (Refereed)
    Abstract [en]

    This paper presents a speaker-independent audio-visual digit recognition system that utilizes speech and visual lip signals. The extracted visual features are based on line-motion estimation obtained from video sequences with low resolution (128 × 128 pixels) to increase the robustness of audio recognition. The core experiments investigate lip-motion biometrics as a stand-alone as well as a merged modality in a speech recognition system. The system uses Support Vector Machines, showing favourable experimental results with digit recognition rates of 83% to 100% on the XM2VTS database, depending on the amount of available visual information.

  • 42.
    Fierrez-Aguilar, Julian
    et al.
    Escuela Politecnica Superior, Universidad Autonoma De Madrid, Spain.
    Ortega-Garcia, Javier
    Escuela Politecnica Superior, Universidad Autonoma De Madrid, Spain.
    Gonzalez-Rodriguez, Joaquin
    Escuela Politecnica Superior, Universidad Autonoma De Madrid, Spain.
    Bigun, Josef
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS).
    Discriminative multimodal biometric authentication based on quality measures2005In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 38, no 5, p. 777-779Article in journal (Refereed)
    Abstract [en]

    A novel score-level fusion strategy based on quality measures for multimodal biometric authentication is presented. In the proposed method, the fusion function is adapted every time an authentication claim is performed based on the estimated quality of the sensed biometric signals at this time. Experimental results combining written signatures and quality-labelled fingerprints are reported. The proposed scheme is shown to outperform significantly the fusion approach without considering quality signals. In particular, a relative improvement of approximately 20% is obtained on the publicly available MCYT bimodal database.
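
    A minimal sketch of quality-dependent score fusion in the spirit of the paper: each matcher's score is weighted by the estimated quality of its signal at claim time. The linear weighting rule and the example numbers are illustrative assumptions, not the paper's learned discriminative fusion.

        import numpy as np

        def fuse_scores(scores, qualities):
            # Quality-weighted score-level fusion: low-quality signals contribute
            # less to the fused authentication score for this particular claim.
            scores = np.asarray(scores, float)
            qualities = np.asarray(qualities, float)
            w = qualities / qualities.sum()
            return float(np.dot(w, scores))

        # Signature score from a clean sample, fingerprint score from a smudged print.
        print(fuse_scores([0.82, 0.35], [0.9, 0.2]))     # fingerprint is down-weighted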

  • 43.
    Fronthaler, Hartwig
    et al.
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Kollreider, Klaus
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Bigun, Josef
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Automatic Image Quality Assessment with Application in Biometrics2006In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition / [ed] Cordelia Schmid, Stefano Soatto, Carlo Tomasi., Los Alamitos, Calif.: IEEE Computer Society, 2006, p. 7-Conference paper (Refereed)
    Abstract [en]

    A method using local features to assess the quality of an image, with a demonstration in biometrics, is proposed. Recently, image quality awareness has been found to increase recognition rates and to support decisions in multimodal authentication systems significantly. Nevertheless, automatic quality assessment is still an open issue, especially with regard to general tasks. Indicators of perceptual quality like noise, lack of structure, blur, etc. can be retrieved from the orientation tensor of an image, but there are few studies reporting on this. Here we study the orientation tensor with a set of symmetry descriptors, which can be varied according to the application. Allowed classes of local shapes are generically provided by the user, but no training or explicit reference information is required. Experimental results are given for fingerprint images. Furthermore, we indicate the applicability of the proposed method to face images.
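
    One simple orientation-tensor-derived quality indicator is local coherence, sketched below; it is not the paper's symmetry-descriptor scheme, only an assumed illustration of how well-oriented structure versus flat or noisy regions can be read from the tensor.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def orientation_coherence(img, sigma=3.0):
            # Coherence of the orientation (structure) tensor: close to 1 in
            # strongly oriented regions (e.g. clear ridges), close to 0 in
            # flat or noisy regions.
            gy, gx = np.gradient(img.astype(float))
            jxx = gaussian_filter(gx * gx, sigma)
            jyy = gaussian_filter(gy * gy, sigma)
            jxy = gaussian_filter(gx * gy, sigma)
            lam_diff = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)   # λ1 - λ2
            return lam_diff / (jxx + jyy + 1e-12)                 # (λ1-λ2)/(λ1+λ2)

        yy, xx = np.mgrid[0:64, 0:64]
        ridges = np.sin(0.5 * xx)                                 # oriented pattern
        noise = np.random.default_rng(0).random((64, 64))
        print(orientation_coherence(ridges).mean(), orientation_coherence(noise).mean())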

    Download full text (pdf)
    FULLTEXT01
  • 44.
    Gelzinis, A.
    et al.
    Department of Applied Electronics, Kaunas University of Technology, Lithuania.
    Verikas, Antanas
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    Bacauskiene, M.
    Department of Applied Electronics, Kaunas University of Technology, Lithuania.
    Automated speech analysis applied to laryngeal disease categorization2008In: Computer Methods and Programs in Biomedicine, ISSN 0169-2607, E-ISSN 1872-7565, Vol. 91, no 1, p. 36-47Article in journal (Refereed)
    Abstract [en]

    The long-term goal of the work is a decision support system for diagnostics of laryngeal diseases. Colour images of vocal folds, a voice signal, and questionnaire data are the information sources to be used in the analysis. This paper is concerned with automated analysis of a voice signal applied to screening of laryngeal diseases. The effectiveness of 11 different feature sets in classification of voice recordings of the sustained phonation of the vowel sound /a/ into a healthy and two pathological classes, diffuse and nodular, is investigated. A k-NN classifier, SVM, and a committee built using various aggregation options are used for the classification. The study was made using a mixed-gender database containing 312 voice recordings. A correct classification rate of 84.6% was achieved when using an SVM committee consisting of four members. The pitch and amplitude perturbation measures, cepstral energy features, autocorrelation features as well as linear prediction cosine transform coefficients were amongst the feature sets providing the best performance. In the case of two-class classification, using recordings from 79 subjects representing the pathological and 69 the healthy class, a correct classification rate of 95.5% was obtained from a five-member committee. Again the pitch and amplitude perturbation measures provided the best performance.
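
    The pitch and amplitude perturbation measures singled out above can be sketched in simplified form as local jitter and shimmer computed from cycle-to-cycle periods and peak amplitudes; the input values below are synthetic assumptions, not recordings from the study.

        import numpy as np

        def jitter_shimmer(periods, amplitudes):
            # Local pitch perturbation (jitter) and amplitude perturbation (shimmer):
            # mean absolute difference between consecutive cycles, relative to the mean.
            periods = np.asarray(periods, float)
            amplitudes = np.asarray(amplitudes, float)
            jitter = np.mean(np.abs(np.diff(periods))) / periods.mean()
            shimmer = np.mean(np.abs(np.diff(amplitudes))) / amplitudes.mean()
            return jitter, shimmer

        # Cycle-to-cycle periods (s) and peak amplitudes from a sustained /a/ phonation.
        rng = np.random.default_rng(0)
        T = 1 / 120 + rng.normal(0, 5e-5, 200)     # ~120 Hz voice with small perturbation
        A = 1.0 + rng.normal(0, 0.03, 200)
        print(jitter_shimmer(T, A))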

  • 45.
    Gelzinis, Adas
    et al.
    Kaunas University of Technology, Kaunas, Lithuania.
    Vaiciukynas, Evaldas
    Kaunas University of Technology, Kaunas, Lithuania.
    Bacauskiene, Marija
    Kaunas University of Technology, Kaunas, Lithuania.
    Verikas, Antanas
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent systems (IS-lab).
    Sulcius, Sigitas
    Coastal Research and Planning Institute, Klaipeda University, Klaipeda, Lithuania.
    Paskauskas, Ricardas
    Coastal Research and Planning Institute, Klaipeda University, Klaipeda, Lithuania.
    Oleninaz, Irina
    Department of Marine Research, Environmental Protection Agency, Klaipeda, Lithuania.
    Boosting performance of the edge-based active contour model applied to phytoplankton images2012In: Proceedings of the 13th IEEE International Symposium on Computational Intelligence and Informatics, Piscataway, NJ: IEEE Press, 2012, p. 273-277Conference paper (Refereed)
    Abstract [en]

    Automated contour detection for objects representing the Prorocentrum minimum (P. minimum) species in phytoplankton images is the core goal of this study. The species is known to cause harmful blooms in many estuarine and coastal environments. Active contour model (ACM)-based image segmentation is the approach adopted here as a potential solution. Currently, the main research in the ACM area is highly focused on the development of various energy functions having some physical intuition. This work, by contrast, advocates the idea of rich and diverse image preprocessing before segmentation. The advantage of the proposed preprocessing is demonstrated experimentally by comparing it to six well-known active contour techniques applied to the task of cell segmentation in microscopy imagery. © 2012 IEEE.
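
    A hedged sketch of the preprocess-then-segment idea: simple Gaussian denoising (far poorer than the paper's rich preprocessing pipeline) followed by an edge-based active contour from scikit-image on a synthetic cell; the parameter values follow common scikit-image examples and are assumptions, not the paper's settings.

        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        # Synthetic "cell": a bright disc centred in a noisy background image.
        yy, xx = np.mgrid[0:200, 0:200]
        img = (np.hypot(yy - 100, xx - 100) < 40).astype(float)
        img += np.random.default_rng(0).normal(0, 0.3, img.shape)

        # Preprocessing before segmentation (the point advocated in the paper).
        smooth = gaussian(img, sigma=3)

        # Edge-based active contour initialised as a circle around the object.
        theta = np.linspace(0, 2 * np.pi, 200)
        init = np.column_stack([100 + 70 * np.sin(theta), 100 + 70 * np.cos(theta)])
        snake = active_contour(smooth, init, alpha=0.015, beta=10, gamma=0.001)
        print(snake.shape)          # (200, 2) contour points hugging the disc boundary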

  • 46.
    Gelzinis, Adas
    et al.
    Department of Electrical Power Systems, Kaunas University of Technology, Kaunas, Lithuania.
    Verikas, Antanas
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory. Department of Electrical Power Systems, Kaunas University of Technology, Kaunas, Lithuania.
    Vaiciukynas, Evaldas
    Department of Electrical Power Systems & Department of Information Systems, Kaunas University of Technology, Kaunas, Lithuania.
    Bacauskiene, Marija
    Department of Electrical Power Systems, Kaunas University of Technology, Kaunas, Lithuania.
    A novel technique to extract accurate cell contours applied to analysis of phytoplankton images2015In: Machine Vision and Applications, ISSN 0932-8092, E-ISSN 1432-1769, Vol. 26, no 2-3, p. 305-315Article in journal (Refereed)
    Abstract [en]

    Active contour model (ACM) is an image segmentation technique widely applied for object detection. Most of the research in the ACM area is dedicated to the development of various energy functions based on physical intuition. Here, instead of constructing a new energy function, we manipulate values of ACM parameters to generate a multitude of potential contours, score them using a machine-learned ranking technique, and select the best contour for each object in question. Several learning-to-rank (L2R) methods are evaluated with the goal of choosing the most accurate in assessing the quality of generated contours. Superiority of the proposed segmentation approach over the original boosted edge-based ACM and three ACM implementations using the level-set framework is demonstrated for the task of detecting Prorocentrum minimum cells in phytoplankton images. Experiments show that a diverse set of contour features, with grading learned by a variant of multiple additive regression trees (λ-MART), helped to extract a precise contour for 87.6% of the cells tested.
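
    The contour-ranking step can be sketched with a pointwise gradient-boosted regressor standing in for λ-MART (a plainly named substitution): candidate contours generated for each object are scored by predicted quality and the best one is kept. The features, quality scores and dimensions below are invented for illustration.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(0)

        # For each object, several candidate contours are generated by varying ACM
        # parameters; each candidate gets a feature vector and, for training, a
        # quality score (e.g. overlap with a reference contour).
        n_objects, n_candidates, n_features = 50, 8, 6
        X = rng.normal(size=(n_objects * n_candidates, n_features))
        quality = X[:, 0] * 0.7 + X[:, 1] * 0.3 + rng.normal(0, 0.1, len(X))

        # Pointwise gradient-boosted trees stand in here for the paper's λ-MART ranker.
        ranker = GradientBoostingRegressor(random_state=0).fit(X, quality)

        def best_candidate(candidate_features):
            # Score all generated contours for one object and return the best index.
            return int(np.argmax(ranker.predict(candidate_features)))

        print(best_candidate(rng.normal(size=(n_candidates, n_features))))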

  • 47.
    Gucciardi, Daniel F.
    et al.
    Curtin University, Perth, Australia.
    Lines, Robin L.J.
    Curtin University, Perth, Australia.
    Ntoumanis, Nikos
    Halmstad University, School of Health and Welfare. University Of Southern Denmark, Odense, Denmark; Curtin University, Perth, Australia.
    Handling effect size dependency in meta-analysis2021In: International Review of Sport and Exercise Psychology, ISSN 1750-984X, E-ISSN 1750-9858, Vol. 15, no 1, p. 152-178Article in journal (Refereed)
    Abstract [en]

    The statistical synthesis of quantitative effects within primary studies via meta-analysis is an important analytical technique in the scientific toolkit of modern researchers. As with any scientific method or technique, knowledge of the weaknesses that might render findings limited or potentially erroneous as well as strategies by which to mitigate these biases is essential for high-quality scientific evidence. In this paper, we focus on one prevalent consideration for meta-analytical investigations, namely dependency among effects. We provide readers with a non-technical introduction to and overview of statistical solutions for handling dependent effects for their efforts to integrate evidence within primary studies. This goal is achieved via a series of seven reflective questions that scholars might consider when planning and executing a meta-analysis in which some degree of dependency among effect sizes from primary studies may exist. We also provide an example application of the recommendations with real-world data, including an analytical script that readers can adapt for their own purposes. © 2021 Informa UK Limited, trading as Taylor & Francis Group.
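
    One of the simpler strategies for dependent effects, aggregating effect sizes within each study before pooling, is sketched below with inverse-variance weighting; the assumed within-study correlation and the example numbers are illustrative only, and the paper discusses several alternative strategies (e.g. multilevel models and robust variance estimation).

        import numpy as np

        def aggregate_within_study(effects, variances, study_ids):
            # Handle dependency by averaging effect sizes within each study
            # before inverse-variance pooling across studies.
            effects = np.asarray(effects, float)
            variances = np.asarray(variances, float)
            study_ids = np.asarray(study_ids)
            agg_e, agg_v = [], []
            for s in np.unique(study_ids):
                m = study_ids == s
                agg_e.append(effects[m].mean())
                # Crude variance of the mean assuming equally correlated effects (r = 0.5).
                k, r = m.sum(), 0.5
                agg_v.append(variances[m].mean() * (1 + (k - 1) * r) / k)
            agg_e, agg_v = np.array(agg_e), np.array(agg_v)
            w = 1 / agg_v                               # inverse-variance weights
            pooled = np.sum(w * agg_e) / np.sum(w)
            se = np.sqrt(1 / np.sum(w))
            return pooled, se

        # Three studies; study "A" contributes two dependent effect sizes.
        print(aggregate_within_study([0.30, 0.42, 0.15, 0.55],
                                     [0.02, 0.02, 0.03, 0.04],
                                     ["A", "A", "B", "C"]))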

  • 48.
    Guzaitis, Jonas
    et al.
    Kaunas Univ Technol, Dept Appl Elect, Kaunas, Lithuania.
    Verikas, Antanas
    Halmstad University, School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS).
    ANN and ICA sparse code shrinkage de-noising based defect detection in pavement tiles2008In: Information Technologies' 2008, Proceedings, Kaunas, Lithuania: Kaunas University of Technology Press , 2008, p. 62-71Conference paper (Refereed)
    Abstract [en]

    This paper is concerned with the problem of image-analysis-based detection of local defects embedded in pavement tile surfaces. The technique developed is based on ICA sparse code shrinkage de-noising, the local 2D discrete Walsh transform and an ANN. To reduce random noise, ICA sparse code shrinkage de-noising is applied. Next, robust local features characterizing the surface texture are extracted based on the 2D Walsh transform and then analyzed by an artificial neural network. A 100% correct classification rate was obtained when testing the proposed technique on a set of surface images recorded from 400 tiles.
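
    The local 2D Walsh transform features can be sketched with the closely related Walsh–Hadamard transform applied blockwise; the patch size and the use of coefficient magnitudes as features are assumptions for illustration, not the paper's exact feature construction.

        import numpy as np
        from scipy.linalg import hadamard

        def walsh_block_features(block):
            # Magnitudes of the 2D Walsh-Hadamard transform of an N x N block
            # (N a power of two); these, or energies pooled over them, can be
            # fed to a neural network for defect classification.
            n = block.shape[0]
            H = hadamard(n)
            coeffs = H @ block @ H / n          # 2D transform via separability
            return np.abs(coeffs).ravel()

        # Illustrative 8x8 surface patch from a tile image.
        patch = np.random.default_rng(0).random((8, 8))
        print(walsh_block_features(patch)[:8])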

  • 49.
    Karami, Saeed
    et al.
    Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran.
    Saberi-Movahed, Farid
    Graduate University of Advanced Technology, Kerman, Iran.
    Tiwari, Prayag
    Halmstad University, School of Information Technology. Aalto University, Espoo, Finland.
    Marttinen, Pekka
    Aalto University, Espoo, Finland.
    Vahdati, Sahar
    Nature-Inspired Machine Intelligence-InfAI, Dresden, Germany.
    Unsupervised feature selection based on variance–covariance subspace distance2023In: Neural Networks, ISSN 0893-6080, E-ISSN 1879-2782, Vol. 166, p. 188-203Article in journal (Refereed)
    Abstract [en]

    Subspace distance is an invaluable tool exploited in a wide range of feature selection methods. The power of subspace distance is that it can identify a representative subspace, including a group of features that can efficiently approximate the space of original features. On the other hand, employing intrinsic statistical information of data can play a significant role in a feature selection process. Nevertheless, most of the existing feature selection methods founded on the subspace distance are limited in properly fulfilling this objective. To fill this void, we propose a framework that takes into account a subspace distance called “Variance–Covariance subspace distance”. The approach gains advantages from the correlation of information included in the features of data, and thus determines all the feature subsets whose corresponding Variance–Covariance matrix has the minimum norm property. Consequently, a novel, yet efficient unsupervised feature selection framework is introduced based on the Variance–Covariance distance to handle both the dimensionality reduction and subspace learning tasks. The proposed framework has the ability to exclude those features that have the least variance from the original feature set. Moreover, an efficient update algorithm is provided along with its associated convergence analysis to solve the optimization side of the proposed approach. An extensive number of experiments on nine benchmark datasets are conducted to assess the performance of our method; the results demonstrate its superiority over a variety of state-of-the-art unsupervised feature selection methods. The source code is available at https://github.com/SaeedKarami/VCSDFS. © 2023 The Author(s)
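
    As a generic illustration of subspace-distance feature selection (not the paper's Variance–Covariance criterion, for which the linked repository holds the actual code), the sketch below scores feature subsets by how well projection onto the selected feature columns reconstructs the full data matrix; the data and subset size are assumptions.

        import numpy as np
        from itertools import combinations

        def subspace_distance(X, idx):
            # Residual after projecting X onto the span of the selected feature
            # columns: small residual means the subset represents the full space well.
            S = X[:, list(idx)]
            P = S @ np.linalg.pinv(S)            # orthogonal projector onto span(S)
            return np.linalg.norm(X - P @ X)

        rng = np.random.default_rng(0)
        informative = rng.normal(size=(200, 3))
        redundant = informative @ rng.normal(size=(3, 4)) + 0.01 * rng.normal(size=(200, 4))
        X = np.hstack([informative, redundant])

        # Exhaustively score all 3-feature subsets and keep the most representative one.
        best = min(combinations(range(X.shape[1]), 3),
                   key=lambda idx: subspace_distance(X, idx))
        print("selected features:", best)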

  • 50.
    Karlsson, Stefan
    Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), Halmstad Embedded and Intelligent Systems Research (EIS), Intelligent Systems´ laboratory.
    Real-Time optical flow2013Other (Other academic)