Fake iris detection has so far been studied using near-infrared (NIR) sensors, which provide grey-scale images, i.e., with luminance information only. Here, we incorporate into the analysis images captured in the visible range, with color information, and perform comparative experiments between the two types of data. We employ Gray-Level Co-occurrence textural features and SVM classifiers. These features analyze various image properties related to contrast, pixel regularity, and pixel co-occurrence statistics. We select the best features with the Sequential Forward Floating Selection (SFFS) algorithm. We also study the effect of extracting features from selected (eye or periocular) regions only. Our experiments are done with fake samples obtained from printed images, which are then presented to the same sensor as the real ones. Results show that fake images captured in the NIR range are easier to detect than visible images (even if we downsample the NIR images to equate the average size of the iris region between the two databases). We also observe that the best performance with both sensors is obtained with features extracted from the whole image, showing that not only the eye region but also the surrounding periocular texture is relevant for fake iris detection. An additional source of improvement with the visible sensor comes from the use of the three RGB channels, in comparison with the luminance image only. A further analysis also reveals that some features are better suited to one particular sensor than to the other. © 2014 IEEE
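As a minimal sketch of the pipeline described above, the snippet below computes Gray-Level Co-occurrence features with scikit-image and feeds them to an SVM; the distances, angles, gray-level quantization and property set are illustrative assumptions, not the exact configuration used in the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_img, levels=64):
    """Co-occurrence statistics (contrast, homogeneity, energy,
    correlation) at several offsets; assumes 8-bit grey-scale input."""
    # Quantize to fewer gray levels to keep the co-occurrence matrix small.
    img = (gray_img / (256 // levels)).astype(np.uint8)
    glcm = graycomatrix(img,
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# For a visible-range sensor, features can be extracted per RGB channel and
# stacked, which is one of the sources of improvement reported above.
# X = np.array([glcm_features(img) for img in training_images])
# clf = SVC(kernel="rbf").fit(X, y)   # y: 0 = real, 1 = fake
```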
We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST), which also includes an eyelid detection step. It is compared with traditional segmentation systems based on the Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database. Segmentation performance under different degrees of image defocus and motion blur is also evaluated. Reported results show the effectiveness of the proposed algorithm, with performance similar to the others in pupil detection, and clearly better performance in sclera detection for all levels of degradation. Verification results using 1D Log-Gabor wavelets are also given, showing the benefits of the eyelid removal step. These results point out the validity of the GST as an alternative to other iris segmentation systems. © 2012 IEEE.
We present a new iris segmentation algorithm based on the Generalized Structure Tensor (GST). We compare this approach with traditional iris segmentation systems based on Hough transform and integro-differential operators. Results are given using the CASIA-IrisV3-Interval database with respect to a segmentation made manually by a human expert. The proposed algorithm outperforms the baseline approaches, pointing out the validity of the GST as an alternative to classic iris segmentation systems. We also detect the cross positions between the eyelids and the outer iris boundary. Verification results using a publicly available iris recognition system based on 1D Log-Gabor wavelets are also given, showing the benefits of the eyelids removal step.
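For concreteness, here is a hedged sketch of the Hough-transform baseline that the GST method is compared against, detecting the pupil circle on an edge map with scikit-image; the radius range and edge-detector sigma are assumptions for CASIA-like images, not values from the paper.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_pupil(gray_img):
    """Circle Hough transform on a Canny edge map."""
    edges = canny(gray_img, sigma=2.0)
    radii = np.arange(20, 60, 2)          # assumed pupil radius range (px)
    accum = hough_circle(edges, radii)
    accums, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=1)
    return cx[0], cy[0], r[0]             # pupil centre and radius
```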
We present a new system for biometric recognition using periocular images based on retinotopic sampling grids and Gabor analysis of the local power spectrum. A number of aspects are studied, including: 1) grid adaptation to the dimensions of the target eye vs. grids of constant size, 2) comparison between circular- and rectangular-shaped grids, 3) use of Gabor magnitude vs. phase vectors for recognition, 4) rotation compensation between query and test images, and 5) comparison with an iris machine expert. Results show that our system achieves competitive verification rates compared with other periocular recognition approaches. We also show that top verification rates can be obtained without rotation compensation, thus allowing this step to be removed for computational efficiency. Also, the performance is not substantially affected if we use a grid of fixed dimensions, and is even better in certain situations, avoiding the need for accurate detection of the iris region. © 2012 Springer-Verlag.
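The following sketch illustrates the core idea of sampling Gabor responses on a circular retinotopic grid centred on the eye; the grid geometry, filter frequencies and orientations are assumptions for illustration, not the paper's exact design.

```python
import numpy as np
from skimage.filters import gabor

def circular_grid(cx, cy, n_rings=4, n_points=16, r0=10.0, growth=1.6):
    """Sample points on concentric rings whose radii grow geometrically."""
    pts = []
    for k in range(n_rings):
        r = r0 * growth ** k
        for j in range(n_points):
            a = 2 * np.pi * j / n_points
            pts.append((int(cy + r * np.sin(a)), int(cx + r * np.cos(a))))
    return pts

def gabor_grid_features(gray_img, cx, cy):
    """Gabor magnitude responses sampled at the grid points."""
    pts = circular_grid(cx, cy)
    feats = []
    for freq in (0.05, 0.1, 0.2):                      # assumed frequencies
        for theta in np.linspace(0, np.pi, 4, endpoint=False):
            real, imag = gabor(gray_img, frequency=freq, theta=theta)
            mag = np.hypot(real, imag)                 # magnitude vector
            feats.extend(mag[y, x] for (y, x) in pts
                         if 0 <= y < mag.shape[0] and 0 <= x < mag.shape[1])
    return np.asarray(feats)
```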
Synonyms
Fingerprint benchmark; Fingerprint corpora; Fingerprint dataset
Definition
Fingerprint databases are structured collections of fingerprint data mainly used for either evaluation or operational recognition purposes.
Fingerprint data in databases for evaluation are usually detached from the identity of the corresponding individuals. These databases are publicly available for research purposes, and they usually consist of raw fingerprint images acquired with live-scan sensors or digitized from inked fingerprint impressions on paper. Databases for evaluation are the basis for research in automatic fingerprint recognition and, together with specific experimental protocols, for a number of technology evaluations and benchmarks. This is the type of fingerprint database further covered here.
On the other hand, fingerprint databases for operational recognition are typically proprietary; they usually incorporate personal information about the enrolled individuals together with the fingerprint data, and they can contain either raw fingerprint image data or some form of distinctive fingerprint descriptors, such as minutiae templates. These fingerprint databases constitute one of the modules in operational automated fingerprint recognition systems, and they will not be addressed here.
Quality assessment; Biometric quality; Quality-based processing
Since the establishment of biometrics as a specific research area in the late 1990s, the biometric community has focused its efforts on the development of accurate recognition algorithms [1]. Nowadays, biometric recognition is a mature technology that is used in many applications, offering greater security and convenience than traditional methods of personal recognition [2].
During the past few years, biometric quality measurement has become an important concern, following a number of studies and technology benchmarks demonstrating how the performance of biometric systems is heavily affected by the quality of biometric signals [3]. This operationally important step has nevertheless been under-researched compared to the primary feature extraction and pattern recognition tasks [4]. One of the main challenges facing biometric technologies is performance degradation in less controlled situations, and the problem of biometric quality measurement has become even more pressing with the proliferation of portable handheld devices with at-a-distance and on-the-move acquisition capabilities. These will require robust algorithms capable of handling a range of changing characteristics [2]. Another important example is forensics, in which intrinsic operational factors further degrade recognition performance.
There are a number of factors that can affect the quality of biometric signals, and a quality measure can play numerous roles in the context of biometric systems. This section summarizes the state of the art in the biometric quality problem, giving an overall framework of the different challenges involved.
Biometric technology has been increasingly deployed in the last decade, offering greater security and convenience than traditional methods of personal recognition. But although the performance of biometric systems is heavily affected by the quality of biometric signals, prior work on quality evaluation is limited. Quality assessment is a critical issue in the security arena, especially in challenging scenarios (e.g. surveillance cameras, forensics, portable devices or remote access through Internet). Different questions regarding the factors influencing biometric quality and how to overcome them, or the incorporation of quality measures in the context of biometric systems have to be analyzed first. In this paper, a review of the state-of-the-art in these matters is provided, giving an overall framework of the main factors related to the challenges associated with biometric quality.
This paper describes two approaches to Amharic word recognition in unconstrained handwritten text using HMMs. The first approach builds word models from concatenated features of the constituent characters; in the second, HMMs of the constituent characters are concatenated to form the word model. In both cases, the features used for training and recognition are a set of primitive strokes and their spatial relationships. The recognition system does not require segmentation of characters, but it does require text line detection and extraction of structural features, which is done by making use of the direction field tensor. The performance of the recognition system is tested on a dataset of unconstrained handwritten documents collected from various sources, and promising results are obtained. (C) 2011 Elsevier B.V. All rights reserved.
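The second approach, concatenating character HMMs into a word model, can be sketched in plain numpy as below; the block-diagonal construction and exit probability are standard textbook devices, and the actual feature model (primitive strokes and their spatial relations) is omitted here.

```python
import numpy as np

def concat_hmms(char_transmats, p_exit=0.3):
    """Build a word-level transition matrix from left-to-right character
    HMMs.  char_transmats: list of (n_i x n_i) transition matrices, one
    per character, in word order.  p_exit is an assumed probability of
    leaving a character's final state."""
    sizes = [A.shape[0] for A in char_transmats]
    N = sum(sizes)
    T = np.zeros((N, N))
    offset = 0
    for i, A in enumerate(char_transmats):
        n = sizes[i]
        T[offset:offset + n, offset:offset + n] = A
        if i + 1 < len(char_transmats):
            # final state of character i may jump to first state of i+1;
            # its original self-transition is replaced accordingly
            T[offset + n - 1, offset + n] = p_exit
            T[offset + n - 1, offset + n - 1] = 1.0 - p_exit
        offset += n
    return T / T.sum(axis=1, keepdims=True)   # re-normalize rows
```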
With the introduction of low-cost wireless communication, many new applications have become possible; applications where systems can collaboratively learn and get wiser without human supervision. One potential application is automated monitoring for fault isolation in mobile mechatronic systems, such as commercial vehicles. This paper proposes an agent design based on uploading software agents to a fleet of mechatronic systems. Each agent searches for interesting state representations of its system and reports them to a central server application. The states from the fleet of systems can then be used to form a consensus from which it is possible to detect deviations and even locate a fault.
Socially assistive robots are increasingly being designed to interact with humans in various therapeutic scenarios. We believe that one useful scenario is providing exercise coaching for Persons with Dementia (PWD), which involves unique challenges related to memory and communication. We present a design for a robot that can seek to help a PWD to conduct exercises by recognizing their behaviors and providing appropriate feedback, in an online, multimodal, and engaging way. Additionally, following a mid-fidelity prototyping approach, we report on some observations from an exploratory user study using a Baxter robot; although limited by the sample size and our simplified approach, the results suggested the usefulness of the general scenario, and that the degree to which a robot provides feedback (occasional or continuous) could moderate impressions of attentiveness or fun. Some possibilities for future improvement are outlined, touching on richer recognition and behavior generation strategies based on deep learning and haptic feedback, toward informing next designs. © 2020 IEEE.
Managing the maintenance of a commercial vehicle fleet is an attractive application domain of ubiquitous knowledge discovery. Cost effective methods for predictive maintenance are progressively demanded in the automotive industry. The traditional diagnostic paradigm that requires human experts to define models is not scalable to today's vehicles with hundreds of computing units and thousands of control and sensor signals streaming through the on-board controller area network. A more autonomous approach must be developed. In this paper we evaluate the performance of the COSMO approach for automatic detection of air pressure related faults on a fleet of city buses. The method is both generic and robust. Histograms of a single pressure signal are collected and compared across the fleet and deviations are matched against workshop maintenance and repair records. It is shown that the method can detect several of the cases when compressors fail on the road, well before the failure. The work is based on data from a three year long field study involving 19 buses operating in and around a city on the west coast of Sweden. © The Authors. Published by Elsevier B.V.
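The fleet-level deviation test can be sketched as follows: each bus contributes a normalized histogram of the pressure signal, and a bus whose histogram is consistently far from the rest of the fleet is flagged. The distance measure (Hellinger) and the z-score threshold below are assumptions for illustration, not necessarily the paper's choices.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two normalized histograms."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def flag_deviating(histograms, z_thresh=2.0):
    """histograms: (n_buses, n_bins) array of normalized histograms,
    one per vehicle, collected over the same time window."""
    n = len(histograms)
    # mean distance from each bus to the rest of the fleet
    d = np.array([np.mean([hellinger(histograms[i], histograms[j])
                           for j in range(n) if j != i])
                  for i in range(n)])
    z = (d - d.mean()) / d.std()
    return np.where(z > z_thresh)[0]          # indices of deviating buses
```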
In the automotive industry, cost-effective methods for predictive maintenance are increasingly in demand. The traditional approach for developing diagnostic methods for commercial vehicles relies heavily on the knowledge of human experts, and thus it does not scale well to modern vehicles with many components and subsystems. In previous work we have presented a generic self-organising approach called COSMO that can detect, in an unsupervised manner, many different faults. In a study based on a commercial fleet of 19 buses operating in Kungsbacka, we have been able to predict, for example, fifty percent of the compressors that break down on the road, in many cases weeks before the failure.
In this paper we compare those results with a state-of-the-art approach currently used in the industry, and we investigate how features suggested by experts for detecting compressor failures can be incorporated into the COSMO method. We perform several experiments, using both real and synthetic data, to identify issues that need to be considered to improve the accuracy. The final results show that the COSMO method outperforms the expert method.
In this paper, we propose a quality function for unsupervised neural classification. The function is based on third-order polynomials. Its objective is to find regions of the input space that are sparse in data points. By maximising the quality function, we find the decision boundary between data clusters instead of the centres of the clusters. The shape and location of the decision boundary are rather insensitive to the magnitude of the weight vector established during the maximisation process. In our experiments, the proposed quality function outperformed other similar functions as well as the conventional clustering algorithms tested. It has also been successfully used for colour image segmentation.
Active contour model (ACM) is an image segmentation technique widely applied for object detection. Most of the research in the ACM area is dedicated to the development of various energy functions based on physical intuition. Here, instead of constructing a new energy function, we manipulate the values of ACM parameters to generate a multitude of potential contours, score them using a machine-learned ranking technique, and select the best contour for each object in question. Several learning-to-rank (L2R) methods are evaluated with the goal of choosing the most accurate in assessing the quality of generated contours. Superiority of the proposed segmentation approach over the original boosted edge-based ACM and three ACM implementations using the level-set framework is demonstrated for the task of detecting Prorocentrum minimum cells in phytoplankton images. Experiments show that a diverse set of contour features, with grading learned by a variant of multiple additive regression trees (λ-MART), helped to extract a precise contour for 87.6 % of the cells tested.
Objective speech intelligibility measurement techniques like the AI (Articulation Index) and the AI-based STI (Speech Transmission Index) fail to assess speech intelligibility in modern telecommunication networks that use several non-linear processing techniques for enhancing speech. Moreover, these techniques do not allow prediction of the intelligibility scores of single individual CVC (Consonant Vowel Consonant) words. The ITU-T P.863 standard [1], which was developed for assessing speech quality, is used as a starting point to develop a simple new model for predicting the subjective speech intelligibility of individual CVC words. Subjective intelligibility measurements were carried out for a large set of speech degradations. The subjective test uses single CVC word presentations in an eight-alternative closed response set experiment. Subjects assess individual degraded CVC words, and the average rate of correct recognition is used as the intelligibility score for a particular CVC word. The first subjective database uses CVC words that have variations in the first consonant, i.e. /C/ous (represented as "kæʊs" using International Phonetic Association phonetic alphabets). This database is used for developing the objective model, while a new database based on VC words (Vowel Consonant) that uses variations in the second consonant (a/C/, e.g. aH, aL) is used for validating the model.
ITU-T P.863 shows very poor results with a correlation of 0.30 for the first subjective database. A first extension to make P.863 suited for intelligibility prediction is done by restructuring speech material to meet the temporal structure requirements (speech+silence+speech) set for standard P.863 measurements. The restructuring is done by concatenating every original and degraded CVC word with itself. There is no significant improvement in correlation (0.34) when using P.863 on the restructured first subjective database (speech material meets temporal requirements). In this thesis a simple model based on P.863 is developed for assessing intelligibility of individual CVC words. The model uses a linear combination of a simple time clipping indicator (missing speech parts) and a “Good frame count” indicator which is based on the local perceptual (frame by frame) signal to noise ratio. Using this model on the restructured first database, a reasonably good correlation of 0.81 is seen between subjective scores and the model output values. For the validation database, a correlation of around 0.76 is obtained. Further validation on an existing database at TNO, which uses time clipping degradation only, shows an excellent correlation of 0.98.
Although a reasonably good correlation is seen on the first database and the validation database, it is too low for reliable measurements. Further validation and development are required; nevertheless, the results show that a perception-based technique that uses internal representations of signals can be used for predicting subjective intelligibility scores of individual CVC words.
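A hedged reconstruction of the two-indicator model is sketched below: a time-clipping indicator (fraction of reference speech frames missing from the degraded signal) and a good-frame indicator based on per-frame SNR, combined linearly. The frame length, thresholds and weights a, b are placeholders to be fitted on the subjective databases, not values from the thesis.

```python
import numpy as np

def frame_energy(x, frame=320):               # 20 ms frames at 16 kHz (assumed)
    n = len(x) // frame
    return np.array([np.sum(x[i*frame:(i+1)*frame] ** 2) for i in range(n)])

def intelligibility(ref, deg, snr_db_min=5.0, a=1.0, b=1.0):
    """Linear combination of a time-clipping indicator and a
    good-frame-count indicator, both computed over reference speech frames."""
    e_ref, e_deg = frame_energy(ref), frame_energy(deg)
    n = min(len(e_ref), len(e_deg))
    e_ref, e_deg = e_ref[:n], e_deg[:n]
    speech = e_ref > 0.01 * e_ref.max()        # active speech in reference
    n_speech = max(speech.sum(), 1)
    # missing speech parts: speech present in reference, gone in degraded
    clip = np.sum(speech & (e_deg < 1e-4 * e_ref)) / n_speech
    # crude local SNR per frame, from the energy mismatch
    snr = 10 * np.log10(e_ref / np.maximum(np.abs(e_ref - e_deg), 1e-12))
    good = np.sum(speech & (snr > snr_db_min)) / n_speech
    return -a * clip + b * good                # linear combination of indicators
```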
This article describes results of the work on knowledge representation techniques chosen for use in the European project SIARAS (Skill-Based Inspection and Assembly for Reconfigurable Automation Systems). Its goal was to create an intelligent support system for reconfiguration and adaptation of robot-based manufacturing cells. Declarative knowledge is represented first of all in an ontology expressed in OWL, for generic taxonomical reasoning, and in a number of special-purpose reasoning modules specific to the application domain. The domain-dependent modules are organized in a blackboard-like architecture. © 2011 The authors and IOS Press. All rights reserved.
When water is removed from the paper during paper making, a dimensional change occurs in which the paper shrinks in the direction perpendicular to the direction of processing. The dimensional changes vary across the web and influence, e.g., the surface and compression properties of the paper; they also complicate the control of the paper machine. In this article, a robust method for estimating the relative shrinkage profile is presented. The method is based on a one-dimensional recording of the imprints from the forming fabric, using a fluorescence technique. The recording is transformed into a time-frequency spectrum, on which three different frequency estimators have been evaluated. In simulations on synthetic data and measurements on paper profiles the estimator that maximizes the correlation energy showed the most robust and accurate performance of the methods evaluated, even at a low signal-to-noise ratio.
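The estimator that maximizes the correlation energy can be illustrated roughly as follows: compute a time-frequency spectrum of the recorded profile and track the spectral peak around the nominal fabric imprint frequency. The window sizes and the nominal frequency below are assumptions for the sketch, not the paper's settings.

```python
import numpy as np
from scipy.signal import spectrogram

def shrinkage_profile(profile, fs, f_nominal, band=0.2):
    """profile: 1-D fluorescence recording of forming-fabric imprints.
    Returns local relative shrinkage along the web."""
    f, x, S = spectrogram(profile, fs=fs, nperseg=256, noverlap=192)
    # restrict the search to a band around the nominal imprint frequency
    sel = (f > (1 - band) * f_nominal) & (f < (1 + band) * f_nominal)
    # frequency with maximal energy in each window of the time-frequency map
    f_hat = f[sel][np.argmax(S[sel, :], axis=0)]
    # shrinkage expressed relative to the nominal imprint frequency
    return (f_hat - f_nominal) / f_nominal
```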
This article presents an approach to designing an adaptive, data-dependent committee of models applied to the prediction of several financial attributes for assessing a company's future performance. Current liabilities/Current assets, Total liabilities/Total assets, Net income/Total assets, and Operating income/Total liabilities are the attributes used in this paper. A self-organizing map (SOM) used for data mapping and analysis enables building committees that are specific (in committee size and aggregation weights) to each SOM node. The number of basic models aggregated into a committee and the aggregation weights depend on the accuracy of the basic models and their ability to generalize in the vicinity of the SOM node. A random forest is used as the basic model in this study. The developed technique was tested on data concerning companies from ten sectors of the healthcare industry of the United States and compared with results obtained from averaging and weighted averaging committees. The proposed adaptivity of committee size and aggregation weights led to a statistically significant increase in prediction accuracy compared to the other types of committees. © 2012 Elsevier Ltd. All rights reserved.
The major problem associated with the walking of humanoid robots is maintaining dynamic equilibrium while walking. To achieve this, one must detect gait instability during walking in order to apply proper fall avoidance schemes and bring the robot back into stable equilibrium. A good approach to detecting gait instability is to study the evolution of the attitude of the humanoid's trunk. Most attitude estimation techniques use information from inertial sensors positioned at the trunk. However, inertial sensors like accelerometers and gyros are highly prone to noise, which leads to poor attitude estimates that can cause false fall detections and falsely trigger fall avoidance schemes. In this paper we present a novel way to access the information from the joint encoders present in the legs and fuse it with the information from the inertial sensors to provide a highly improved attitude estimate during humanoid walking. Furthermore, if the joint encoders' attitude measure is compared separately with the IMU's attitude estimate, it is observed that they differ when there is a change of contact between the stance leg and the ground. This may be used to detect a loss of contact and can be verified by the information from the force sensors present at the feet of the robot. The propositions are validated by experiments performed on the humanoid robot NAO. Copyright © 2013 by World Scientific Publishing Co. Pte. Ltd.
Detecting gait events is key to many gait analysis applications, which would benefit immensely if the analysis could be carried out using wearable sensors in uncontrolled outdoor environments, enabling continuous monitoring and long-term analysis. This would open new frontiers in gait analysis by making more data available, and would empower individuals, especially patients, to enjoy the benefits of gait analysis in their everyday lives. Previous gait event detection algorithms impose many restrictions, as they have been developed from data collected in controlled, indoor environments. This paper proposes a robust algorithm that utilizes a priori knowledge of gait in conjunction with continuous wavelet transform analysis to accurately identify heel strike and toe off from noisy accelerometer signals collected during indoor and outdoor walking. The accuracy of the algorithm is evaluated using footswitches as ground truth, and the results are compared with another recently published algorithm.
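A minimal sketch of this kind of detector is given below: the vertical acceleration is smoothed with a single scale of a continuous wavelet transform (implemented here as convolution with a Ricker wavelet), and a priori knowledge of gait enters as a minimum stride time when picking peaks. The wavelet scale and thresholds are assumptions, not the paper's tuning.

```python
import numpy as np
from scipy.signal import find_peaks

def ricker(points, width):
    """'Mexican hat' wavelet, the classic choice for CWT event detection."""
    t = np.arange(points) - (points - 1) / 2.0
    a = width
    return ((2 / (np.sqrt(3 * a) * np.pi ** 0.25))
            * (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2)))

def heel_strikes(acc_vertical, fs, min_stride_s=0.8):
    # single-scale CWT = convolution with the scaled wavelet (~100 ms scale)
    w = ricker(int(fs), width=0.05 * fs)
    smooth = np.convolve(acc_vertical - acc_vertical.mean(), w, mode="same")
    # a priori gait knowledge: events cannot be closer than one stride
    peaks, _ = find_peaks(smooth, distance=int(min_stride_s * fs),
                          height=np.std(smooth))
    return peaks / fs                           # candidate heel-strike times (s)
```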
Applying machine learning methods in scenarios involving smart homes is a complex task. The many possible variations of sensors, feature representations, machine learning algorithms, middleware architectures, reasoning/decision schemes, and interactive strategies make research and development tasks non-trivial to solve. In this paper, the use of a portable, flexible and holistic smart home demonstrator is proposed to facilitate iterative development and the acquisition of feedback when testing with regard to the above-mentioned issues. Specifically, the focus in this paper is on scenarios involving anomaly detection and response. First, a model for anomaly detection is trained with simulated data representing a priori knowledge pertaining to a person living in an apartment. Then, a reasoning mechanism uses the trained model to infer and plan a reaction to deviating activities. Reactions are carried out by a mobile interactive robot to investigate whether a detected anomaly constitutes a true emergency. The implemented demonstrator was able to detect and respond properly in 18 of 20 trials featuring normal and deviating activity patterns, suggesting the feasibility of the proposed approach for such scenarios. © IEEE 2015
Variations in offset print quality relate to numerous parameters of the printing press and paper. To maintain constant product quality, press operators need to assess, explore and monitor print quality. This paper presents a novel system for assessing and predicting values of print quality attributes, where the adopted random forests (RF)-based modeling approach also allows quantifying the influence of different parameters. In contrast to other print quality assessment systems, this system utilizes common print marks known as double grey-bars. A novel virtual sensor for assessing the degree of mis-registration of printing plates using images of double grey-bars is presented. The inferred influence of paper and printing press parameters on print quality shows correlation with known print quality conditions.
Variations in offset print quality relate to numerous parameters of the printing press and paper. To maintain a constant high print quality, press operators need to assess, explore and monitor the quality of prints. Today, assessment is mainly done manually. This paper presents a novel system for assessing and predicting values of print quality attributes, where the adopted random forests (RFs)-based modeling approach also allows quantifying the influence of different paper and press parameters on print quality. In contrast to other print quality assessment systems, the proposed system utilises common, simple print marks known as double grey-bars. Novel virtual sensors assessing print quality attributes using images of double grey-bars are presented. The inferred influence of paper and printing press parameters on the quality of colour prints shows a clear relation with known print quality conditions. A thorough analysis and categorisation of related work is also given in the paper. (C) 2012 Elsevier Ltd. All rights reserved.
The goal of creating machines that autonomously perform useful work in a safe, robust and intelligent manner continues to motivate robotics research. Achieving this autonomy requires capabilities for understanding the environment, physically interacting with it, predicting the outcomes of actions and reasoning with this knowledge. Such intelligent physical interaction was at the centre of early robotic investigations and remains an open topic.
In this paper, we build on the fruit of decades of research to explore this question further in the context of autonomous construction in unknown environments with scarce resources. Our scenario involves a miniature mobile robot that autonomously maps an environment and uses cubes to bridge ditches and build vertical structures according to high-level goals given by a human.
Based on a "real but contrived" experimental design, our results encompass practical insights for future applications that also need to integrate complex behaviours under hardware constraints, and shed light on the broader question of the capabilities required for intelligent physical interaction with the real world.
Smart grids are advanced power grids that use modern hardware and software technologies to provide clean, safe, secure, reliable, efficient and sustainable energy. However, there are many challenges in the field of smart grids in terms of communication, reliability, interoperability, and big data that should be considered. In this paper we present a brief overview of some of the challenges and solutions in smart grids, focusing especially on the Swedish point of view. We discuss thirty articles, from 2006 until 2013, with the main interest on data-related challenges.
Underground power cables are one of the fundamental elements in power grids, but also one of the more difficult ones to monitor. Those cables are heavily affected by ionization, as well as thermal and mechanical stresses. At the same time, both pinpointing and repairing faults is very costly and time consuming. This has caused many power distribution companies to search for ways of predicting cable failures based on available historical data.
In this paper, we investigate five different models estimating the probability of failures for in-service underground cables. In particular, we focus on a methodology for evaluating how well different models fit the historical data. In many practical cases, the amount of data available is very limited, and it is difficult to know how much confidence one should have in the goodness-of-fit results.
We use two goodness-of-fit measures, a commonly used one based on mean square error and a new one based on calculating the probability of generating the data from a given model. The corresponding results for a real data set can then be interpreted by comparing against confidence intervals obtained from synthetic data generated according to different models.
Our results show that the goodness-of-fit of several commonly used failure rate models, such as linear, piecewise linear and exponential, is virtually identical. In addition, these models do not explain the data as well as a new model we introduce: piecewise constant.
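The two goodness-of-fit measures can be illustrated with a small numpy example: mean square error between model and observed yearly failure counts, and a Poisson log-likelihood as a stand-in for the probability of generating the data from a given model. The models and synthetic data below are purely illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import poisson

age = np.arange(1, 31)                         # cable age in years
counts = np.random.poisson(2 + 0.1 * age)      # synthetic failure counts

models = {
    "constant":  np.full_like(age, counts.mean(), dtype=float),
    "linear":    np.polyval(np.polyfit(age, counts, 1), age),
    "piecewise": np.where(age < 15, counts[age < 15].mean(),
                                    counts[age >= 15].mean()),
}
for name, rate in models.items():
    rate = np.maximum(rate, 1e-9)              # keep rates strictly positive
    mse = np.mean((counts - rate) ** 2)        # MSE-based goodness of fit
    loglik = poisson.logpmf(counts, rate).sum()  # probability-based measure
    print(f"{name:10s}  MSE={mse:6.2f}  logL={loglik:8.2f}")
```

Confidence intervals for either measure can then be obtained, as described above, by repeating the fit on many synthetic data sets drawn from each candidate model.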
This paper presents a method for mining nonlinear relationships in machine data with the purpose of using such relationships to detect faults, isolate faults and predict wear and maintenance needs. The method is based on the symmetrical uncertainty measure from information theory, hierarchical clustering and self-organizing maps. It is demonstrated on synthetic data sets, where it is shown to detect interesting signal relations and outperform linear methods. It is also demonstrated on real data sets, where it is considerably harder to select small feature sets, and where the detected relationships are shown to carry information about system wear and system faults. The work is part of a long-term research project with the aim to construct a self-organizing autonomic computing system for self-monitoring of mechatronic systems.
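The symmetrical uncertainty measure underlying the relationship mining, SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)), can be estimated from a 2-D histogram of two signals as in the following compact sketch; the bin count is an assumption.

```python
import numpy as np

def symmetrical_uncertainty(x, y, bins=32):
    """SU in [0, 1]: 0 for independent signals, 1 for a deterministic
    (possibly nonlinear) relationship."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def H(p):                                  # Shannon entropy in bits
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    mi = H(px) + H(py) - H(pxy.ravel())        # mutual information I(X;Y)
    return 2.0 * mi / (H(px) + H(py))
```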
Predictive maintenance is becoming more and more important in many industries, especially taking into account the increasing focus on offering uptime guarantees to the customers. However, in automotive industry, there is a limitation on the engineering effort and sensor capabilities available for that purpose. Luckily, it has recently become feasible to analyse large amounts of data on-board vehicles in a timely manner. This allows approaches based on data mining and pattern recognition techniques to augment existing, hand crafted algorithms.
Automated deviation detection offers both broader applicability, by virtue of detecting unexpected faults and cross-analysing data from different subsystems, as well as higher sensitivity, due to its ability to take into account specifics of a selected, small set of vehicles used in a particular way under similar conditions.
In a project called Redi2Service we work towards developing methods for autonomous and unsupervised relationship discovery, algorithms for detecting deviations within those relationships (both considering different moments in time, and different vehicles in a fleet), as well as ways to correlate those deviations to known and unknown faults. In this paper we present the type of data we are working with, justify why we believe relationships between signals are a good knowledge representation, and show results of early experiments where supervised learning was used to evaluate discovered relations.
We have implemented an algorithm for detection and segmentation of protein spots in 2-D gel electrophoresis images using symmetry derivative features computed with low-level image processing operations. The implementation was compared with a previously published Watershed segmentation method and a commercial software package. Our algorithm was found to yield segmentation results that were either better than or comparable to the other solutions, while having fewer free parameters and a low computational cost. © Springer-Verlag Berlin Heidelberg 2005.
Whole-body operational space control is a powerful compliant control approach for robots that physically interact with their environment. The underlying mathematical and algorithmic principles have been laid in a large body of published work, and novel research keeps advancing its formulation and variations. However, the lack of a reusable and robust shared implementation has hindered its widespread adoption. To fill this gap, we present an open-source implementation of whole-body operational space control that provides runtime configurability, ease of reuse and extension, and independence from specific middlewares or operating systems. Our libraries are highly portable. Decoupling from specific runtime platforms (such as RTAI or ROS) is achieved by containing application code in a thin adaptation layer. In this paper, we briefly survey the foundations of whole-body control for mobile manipulation, describe the structure of our software, very briefly present experiments on two quite different robots, and then delve into the bundled tutorials to help prospective new users.
Motion analysis deals with determining what activities are being performed by a subject and how, through the use of sensors. The process of answering the what question is commonly known as classification, and answering the how question is here referred to as characterization. Frequently, combinations of inertial sensors such as accelerometers and gyroscopes are used for motion analysis. These sensors are cheap, small, and can easily be incorporated into wearable systems.
The overall goal of this thesis was to improve the processing of inertial sensor data for the characterization of movements. This thesis presents a framework for the development of motion analysis systems that targets movement characterization, and describes an implementation of the framework for gait analysis. One substantial aspect of the framework is symbolization, which transforms the sensor data into strings of symbols. Another aspect of the framework is the inclusion of human expert knowledge, which facilitates the connection between data and human concepts, and clarifies the analysis process to a human expert.
The proposed implementation was compared to state of practice gait analysis systems, and evaluated in a clinical environment. Results showed that expert knowledge can be successfully used to parse symbolic data and identify the different phases of gait. In addition, the symbolic representation enabled the creation of new gait symmetry and gait normality indices. The proposed symmetry index was superior to many others in detecting movement asymmetry in early-to-mid-stage Parkinson's Disease patients. Furthermore, the normality index showed potential in the assessment of patient recovery after hip-replacement surgery. In conclusion, this implementation of the gait analysis system illustrated that the framework can be used as a road map for the development of movement analysis systems.
Movement asymmetry is one of the motor symptoms associated with Parkinson's Disease (PD). Therefore, being able to detect and measure movement symmetry is important for monitoring the patient's condition.
The present paper introduces a novel symbol-based symmetry index calculated from inertial sensor data. The method is explained, evaluated and compared to six other symmetry measures. These measures were used to determine the symmetry of both upper and lower limbs during walking in 11 early-to-mid-stage PD patients and 15 control subjects. The patients included in the study showed minimal motor abnormalities according to the Unified Parkinson's Disease Rating Scale (UPDRS).
The symmetry indices were used to classify subjects into two different groups corresponding to PD or control. The proposed method presented high sensitivity and specificity, with an area under the Receiver Operating Characteristic (ROC) curve of 0.872, 9% greater than the second best method. The proposed method also showed an excellent Intraclass Correlation Coefficient (ICC) of 0.949, 55% greater than the second best method. Results suggest that the proposed symmetry index is appropriate for this particular group of patients.
Symbolization of time-series has successfully been used to extract temporal patterns from experimental data. Segmentation is an unavoidable step of the symbolization process, and segmentation methods may be characterized in two domains: amplitude and time. Each group of methods has its own advantages and disadvantages. Can their performance be estimated a priori based on signal characteristics? This paper evaluates the performance of SAX, Persist and ACA on 47 different time-series with respect to signal periodicity. Results show that SAX tends to perform best on random signals, whereas ACA may outperform the other methods on highly periodic signals. However, the results do not support the conclusion that the most adequate method can be determined a priori.
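To make the symbolization step concrete, here is a short SAX sketch (z-normalization, piecewise aggregate approximation, Gaussian breakpoints); the segment count and alphabet size are illustrative.

```python
import numpy as np
from scipy.stats import norm

def sax(ts, n_segments=16, alphabet_size=4):
    """Symbolize a 1-D time series into a string of letters."""
    z = (ts - ts.mean()) / ts.std()            # z-normalize
    # piecewise aggregate approximation: mean of each temporal segment
    paa = np.array([seg.mean() for seg in np.array_split(z, n_segments)])
    # breakpoints that split N(0, 1) into equiprobable regions
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.searchsorted(breakpoints, paa)
    return "".join(chr(ord("a") + s) for s in symbols)
```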
The gold standard for gait analysis, in-lab 3D motion capture, is not routinely used for clinical assessment due to limitations in availability, cost and required training. Inexpensive alternatives for quantitative gait analysis are needed to increase its adoption. Inertial sensors such as accelerometers and gyroscopes are promising tools for the development of wearable gait analysis (WGA) systems. The present study evaluates the use of a WGA system on hip-arthroplasty patients in a real clinical setting. The system provides information about gait symmetry and normality. Results show that the normality measurements are well correlated with various quantitative and qualitative measures of recovery and health status.
Gait analysis (GA) is an important tool in the assessment of several physical and cognitive conditions. The lack of simple and economically viable quantitative GA systems has hindered the routine clinical use of GA in many areas. As a result, patients may be receiving sub-optimal treatment. The present study introduces and evaluates measures of gait symmetry and gait normality calculated from inertial sensor data. These indices support the creation of mobile, cheap and easy to use quantitative GA systems. The proposed method was compared to measures of symmetry and normality derived from 3D kinematic data. Results show that the proposed method is well correlated to the kinematic analysis in both symmetry (r=0.84, p<0.0001) and normality (r=0.81, p<0.0001). In addition, the proposed indices can be used to classify normal from abnormal gait.
We investigate controllers for mobile humanoid robots that maneuver in irregular terrains while performing accurate physical interactions with the environment and with human operators and test them on Dreamer, our new robot with a humanoid upper body (torso, arm, head) and a holonomic mobile base (triangularly arranged Omni wheels). All its actuators are torque controlled, and the upper body provides redundant degrees of freedom. We developed new dynamical models and created controllers that stabilize the robot in the presence of slope variations, while it compliantly interacts with humans.
This paper considers underactuated free-body dynamics with contact constraints between the wheels and the terrain. Moreover, Dreamer incorporates a biarticular mechanical transmission that we model as a force constraint. Using these tools, we develop new compliant multiobjective skills and include self-motion stabilization for the highly redundant robot. © 2013 Massachusetts Institute of Technology.
Evaluating the health condition of a material that could potentially contain micro-flaws is a common and important application within the field of non-destructive testing. Examples of such micro-defects include dislocations, fatigue cracks or impurities, and they are often hard to detect. The ability to precisely measure their type, size and position is a prerequisite for estimating the remaining useful life of the component. One technique that has been shown successful in the past is based on traditional ultrasonic testing methods. In most cases, inner micro-flaws induce slight changes in the components of the acoustic wave spectrum. However, these changes are often difficult to detect directly, as they tend to exhibit features that are most naturally analyzed using statistical and probabilistic methods. In this paper we apply the Consensus Self-Organizing Models (COSMO) method to detect micro-flaws in metallic material. This approach is essentially an unsupervised deviation detection method based on the concept of the "wisdom of the crowd". The method is used to analyze the spectrum of acoustic waves received by a transducer attached to the surface of the material being analyzed. We have modeled a steel board with micro-cracks and collected time series of the acoustic echo response at different positions on the material's surface. The experimental results show that the COSMO method is able to detect and locate micro-flaws. © 2016 IEEE
The battery cells are an important part of electric and hybrid vehicles, and their deterioration due to aging or malfunction directly affects the life cycle and performance of the whole battery system. Therefore, early detection of deviations in the performance of battery cells is an important task, and its correct solution could significantly improve the performance of the whole vehicle. This paper presents a computational strategy for detecting battery cells that deviate due to aging or malfunction. The detection is based on periodically processing a predetermined number of data collected in blocks during the real operation of the vehicle. The first step is data compression, in which the original large amount of data is reduced to a smaller number of cluster centers. This is done by a newly proposed sequential clustering algorithm that arranges the clusters in decreasing order of their volumes. The next step uses a fuzzy inference procedure for weighted approximation of the cluster centers to create a one-dimensional model of the voltage-current relationship for each battery cell. This creates an equal basis for the subsequent comparison of the battery cells. Finally, the detection of deviating battery cells is treated as a similarity-analysis problem, in which the pairwise distances between all battery cells are estimated by analyzing the voltage estimates from the respective fuzzy models. All three steps of the computational procedure are explained in the paper and applied to real experimental data for the detection of deviations among five battery cells. Discussions and suggestions are made for a practical application aimed at designing a monitoring system for the detection of deviations. © 2013 Wiley Periodicals, Inc.
In this paper, identification of laryngeal disorders using cepstral parameters of the human voice is investigated. Mel-frequency cepstral coefficients (MFCC), extracted from audio recordings, are further approximated using three strategies: sampling, averaging, and estimation. SVM and LS-SVM classifiers categorize the pre-processed data into normal, nodular, and diffuse classes. Since it is a three-class problem, various combination schemes are explored. The constructed custom kernels outperformed the popular non-linear RBF kernel. The combination of features estimated with GMMs and SVM kernels designed to exploit this information is an interesting fusion of probabilistic and discriminative models for human-voice-based classification of larynx pathology.
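The averaging approximation strategy can be sketched as below, collapsing the MFCC time axis into per-coefficient statistics before SVM classification; librosa defaults are assumed, and the custom GMM-based kernels from the paper are not reproduced.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_averaged(path, n_mfcc=13):
    """One fixed-length vector per recording: mean and standard deviation
    of each MFCC coefficient over time (the 'averaging' strategy)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.hstack([mfcc.mean(axis=1), mfcc.std(axis=1)])

# X = np.array([mfcc_averaged(f) for f in recordings])
# clf = SVC(kernel="rbf").fit(X, labels)   # normal / nodular / diffuse
```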
This paper presents a general framework for designing a fuzzy rule-based classifier. The structure and parameters of the classifier are evolved through a two-stage genetic search. To reduce the search space, the classifier structure is constrained by a tree created using the evolving SOM tree algorithm. Salient input variables are specific to each fuzzy rule and are found during the genetic search process. It is shown through computer simulations of four real-world problems that a large number of rules and input variables can be eliminated from the model without deteriorating the classification accuracy. By contrast, the classification accuracy on unseen data is increased due to the elimination.
A method for robust tuning of individual cylinders' air-fuel ratios is proposed. The fuel injection is adjusted so that each cylinder has the same air-fuel ratio in inner control loops, and the resulting air-fuel ratio in the exhaust pipe is controlled with an exhaust gas oxygen (EGO) sensor in an outer control loop to achieve a stoichiometric air-fuel ratio. Correction factors that provide cylinder-individual fuel injection timing are calculated based on measurements of the ion currents for the individual cylinders. An implementation in a production vehicle is shown, with results from driving on the highway. © 2005 SAE International.