Health technological systems that learn from and react to how humans behave in sensor-equipped environments are today being commercialized. These systems rely on the assumption that training data and testing data share the same feature space and originate from the same underlying distribution, which is commonly unrealistic in real-world applications. Instead, the use of transfer learning could be considered. In order to transfer knowledge between a source and a target domain, these should be mapped to a common latent feature space. In this work, the dimensionality reduction algorithm t-SNE is used to map data to a similar feature space and is further investigated through a proposed novel analysis of output stability. The proposed analysis, Normalized Linear Procrustes Analysis (NLPA), extends the existing Procrustes and Local Procrustes algorithms for aligning manifolds. The methods are tested on data reflecting human behaviour patterns collected in a smart home environment. Results show high partial output stability of the t-SNE algorithm for the tested input data, for which NLPA is able to detect clusters that are individually aligned and compared. The results highlight the importance of understanding output stability before incorporating dimensionality reduction algorithms into further computation, e.g. for transfer learning.
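As background for the stability analysis, the following minimal sketch (using scikit-learn and SciPy, not the proposed NLPA method) shows how two independent t-SNE runs on the same data can be compared after ordinary Procrustes alignment; the digits dataset is used purely as a stand-in for the smart home data.

```python
# Minimal sketch (not the NLPA algorithm itself): align two t-SNE runs of the
# same data with ordinary Procrustes analysis and measure their disparity.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from scipy.spatial import procrustes

X, _ = load_digits(return_X_y=True)

# Two embeddings of the same data differ by rotation, scale and sign because
# of random initialisation, so raw coordinates cannot be compared directly.
emb_a = TSNE(n_components=2, random_state=0).fit_transform(X)
emb_b = TSNE(n_components=2, random_state=1).fit_transform(X)

# Procrustes removes translation, scaling and rotation before comparison;
# 'disparity' is the remaining sum of squared point-wise differences.
aligned_a, aligned_b, disparity = procrustes(emb_a, emb_b)
print(f"Procrustes disparity between the two t-SNE maps: {disparity:.4f}")
```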
Robots are being designed to communicate with people in various public and domestic venues in a perceptive, helpful, and discreet way. Here, we use a speculative prototyping approach to shine light on a new concept of robot steganography (RS): that a robot could seek to help vulnerable populations by discreetly warning of potential threats. We first identify some potentially useful scenarios for RS related to safety and security (concerns that are estimated to cost the world trillions of dollars each year), with a focus on two kinds of robots: a socially assistive robot (SAR) and an autonomous vehicle (AV). Next, we propose that existing, powerful, computer-based steganography (CS) approaches can be adopted with little effort in new contexts (SARs), while also pointing out potential benefits of human-like steganography (HS): although less efficient and robust than CS, HS represents a currently unused form of RS that could be applied when no computer is available to receive messages, when detection by more technically advanced adversaries must be avoided, or when alternative connectivity is lacking (e.g., if a wireless channel is being jammed). Some unique challenges of RS are also introduced, arising from message generation, indirect perception, and effects of perspective. Finally, we confirm the feasibility of the basic concept for RS, that messages can be hidden in a robot's behaviors, via a simplified, initial user study, also making available some code and a video. The immediate implication is that RS could potentially help to improve people's lives and mitigate some costly problems as robots become increasingly prevalent in our society, suggesting the usefulness of further discussion, ideation, and consideration by designers.
What if an autonomous vehicle (AV) could secretly warn of potential threats? "Steganography", the hiding of messages, is a vital way for vulnerable populations to communicate securely and get help. Here, we shine light on the concept of vehicular steganography (VS) using a speculative approach: we identify some key scenarios, highlighting unique challenges that arise from indirect perception, message generation, and effects of perspective, as well as potential carrier signals and message generation considerations. One observation is that, despite challenges to transmission rates and robustness, physical signals such as locomotion or sound could offer a complementary, currently unused alternative to traditional methods. The immediate implication is that VS could help to mitigate some costly safety problems, suggesting the benefit of further discussion and ideation. © 2021. All Rights Reserved.
The phrase "most cruel and revolting crimes" has been used to describe some poor historical treatment of vulnerable impaired persons by precisely those who should have had the responsibility of protecting and helping them. We believe we might be poised to see history repeat itself, as increasingly humanlike, aware robots become capable of engaging in behavior that we would consider immoral in a human, either unknowingly or deliberately. In the current paper we focus in particular on exploring some potential dangers affecting persons with dementia (PWD), which could arise from insufficient software or external factors, and describe a proposed solution involving rich causal models and accountability measures: specifically, the Consequences of Needs-driven Dementia-compromised Behaviour model (C-NDB) could be adapted to be used with conversation topic detection, causal networks and multi-criteria decision making, alongside reports, audits, and deterrents. Our aim is that the considerations raised could help inform the design of care robots intended to support well-being in PWD.
The Diffie–Hellman protocol, ingenious in its simplicity, remains, after an impressive number of decades, the major solution for generating a shared secret in cryptographic protocols for e-trading and many other applications. Lately, however, the threat from a future quantum computer has prompted the development of successors resilient to quantum-computer-based attacks. Here, an algorithm similar to Diffie–Hellman is presented. In contrast to the classic Diffie–Hellman, it involves floating point numbers of arbitrary size in the generation of a shared secret. This can, in turn, be used for encrypted communication based on symmetric ciphers. The validity of the algorithm is verified by proving that a vital part of the algorithm satisfies a one-way property. The decimal part is deployed for the one-way function in a way that makes the protocol a post-quantum key generation procedure. This is concluded from the fact that there is, as yet, no quantum computer algorithm that reverse engineers the one-way function. An example illustrating the use of the protocol in combination with XOR encryption is given. © 2020 MDPI (Basel, Switzerland)
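For reference, a minimal sketch of the classic Diffie–Hellman exchange followed by XOR encryption with the derived shared secret is given below; the paper's floating-point variant is not reproduced, and the prime, generator and key-derivation step are toy choices for illustration only.

```python
# Toy illustration of classic Diffie–Hellman followed by XOR encryption.
# The paper's floating-point variant is NOT implemented here; the prime,
# generator and key derivation are deliberately small toy choices.
import hashlib
import secrets

p = 0xFFFFFFFB  # toy prime (2**32 - 5); real deployments use far larger parameters
g = 5           # toy generator

a = secrets.randbelow(p - 2) + 1          # Alice's private exponent
b = secrets.randbelow(p - 2) + 1          # Bob's private exponent
A = pow(g, a, p)                          # Alice's public value
B = pow(g, b, p)                          # Bob's public value

shared_alice = pow(B, a, p)               # both parties derive the same secret
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob

# Derive a keystream from the shared secret and XOR it with the message.
key = hashlib.sha256(str(shared_alice).encode()).digest()
msg = b"meet at noon"
cipher = bytes(m ^ key[i % len(key)] for i, m in enumerate(msg))
plain = bytes(c ^ key[i % len(key)] for i, c in enumerate(cipher))
assert plain == msg
```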
A model which possesses both spatial and time dependence is the Markov chain Markov field (see X. Guyon, 1995). Here, inference about the parameter for spatio-temporal interaction in a special case of a Markov chain Markov field model is considered. A statistic which is minimal sufficient for the interaction parameter and its asymptotic distribution are derived. A condition for stationarity of the sufficient statistic process and the stationary distribution are given. Likelihood-based inference such as estimation, hypothesis testing and monitoring is briefly examined.
The subject of cryptology, i.e. encryption, decryption and code breaking, encompasses mathematics and computer programming as well as general ingenuity. This book covers the Caesar cipher, substitution ciphers, the Vigenère cipher, the RSA cipher and the underlying mathematics (equation solving, arithmetic with exponential expressions, modular arithmetic, prime number theory and recursion), as well as adjacent mathematics (such as combinatorics and statistical methods for, e.g., detecting encrypted code and computing transmission quality). The book also contains a sufficiently broad review of elementary algebra (set theory, logic, trigonometry, complex numbers and recurrence equations) and calculus (functions of one variable, derivatives and integrals) that it can be used for introductory mathematics studies in a wide range of degree programmes. The book aims to give an introduction to cryptological problem solving and to demonstrate the large synergy effects achieved by applying a well-balanced combination of basic mathematics and elementary programming in the field. The hope is also that the reader, spurred by the newly gained insights into cryptology, is enticed into further study. The book is primarily aimed at future IT forensics specialists who may need the competence to encrypt and break ciphers in their professional role but who do not have an extensive mathematical background. It is also aimed at students at Sweden's technical colleges and universities.
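To give a flavour of the book's combination of modular arithmetic and elementary programming, a minimal Caesar-cipher example (illustrative only, not taken from the book) could look as follows.

```python
# Minimal Caesar-cipher example: shifting letters modulo 26 illustrates the
# interplay between modular arithmetic and elementary programming.
def caesar(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

secret = caesar("ATTACK AT DAWN", 3)            # encrypt with shift 3
assert caesar(secret, -3) == "ATTACK AT DAWN"   # decrypting is shifting back
```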
Surveillance to detect changes of spatial patterns is of interest in many areas such as environmental control and regional analysis. Here, the interaction parameter of the Ising model is considered. A minimal sufficient statistic and its asymptotic distribution are used. It is demonstrated that the convergence to the normal distribution is rapid. The main result is that when the lattice is large, all approximations are better in several respects. It is shown that, for large lattice sizes, earlier results on surveillance of a normally distributed random variable can be used in cases of most interest. The expected delay of an alarm at a fixed level of false alarm probability is examined for some examples. Copyright © 1999 by Marcel Dekker, Inc.
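For concreteness, under the usual parameterisation of the Ising model the statistic in question is the sum of products of neighbouring spins; a minimal computation sketch on an illustrative lattice is given below (the lattice and its size are placeholders, not data from the study).

```python
# Sketch: the sum of products of nearest-neighbour spins on a lattice, which
# under the usual parameterisation is sufficient for the Ising interaction
# parameter. The random lattice below is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(64, 64))   # illustrative spin configuration

# Horizontal and vertical nearest-neighbour pair products (free boundaries).
horizontal = np.sum(lattice[:, :-1] * lattice[:, 1:])
vertical = np.sum(lattice[:-1, :] * lattice[1:, :])
sufficient_statistic = horizontal + vertical
print("Neighbour-pair statistic:", sufficient_statistic)
```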
Discriminating between encrypted and non-encrypted information is desired for many purposes. Much of the effort in this direction in the literature is focused on deploying machine learning methods for discrimination in streamed data transmitted in packets over communication networks. Here, however, the focus and the methods are different. The retrieval of data from computer hard drives that have been seized in police busts against suspected criminals is sometimes not straightforward. Typically the incriminating code, which may be important evidence in subsequent trials, is encrypted and quick-deleted. The cryptanalysis of what can be recovered from such hard drives is then subject to time-consuming brute forcing and password guessing. To this end, methods for accurate classification of what is encrypted code and what is not are of the essence. Here, a procedure for discriminating encrypted code from non-encrypted is derived. Two methods to detect where encrypted data is located on a hard disk drive are detailed, using passive change-point detection. Measures of performance of such methods are discussed and a new property for evaluation is suggested. The methods are then evaluated and discussed according to the performance measures.
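One simple way to operationalise such discrimination, sketched below under assumptions that are not taken from the paper, is to compute the byte entropy of fixed-size blocks of a disk image and raise an alarm when a cumulative-sum statistic signals a shift towards the near-maximal entropy that is typical of encrypted data.

```python
# Sketch (assumptions are illustrative, not the paper's exact procedure):
# slide over a disk image in fixed-size blocks, compute byte entropy per block,
# and flag a change point where a simple CUSUM statistic indicates a shift
# towards high entropy.
import numpy as np

def block_entropy(block: bytes) -> float:
    counts = np.bincount(np.frombuffer(block, dtype=np.uint8), minlength=256)
    p = counts[counts > 0] / len(block)
    return float(-(p * np.log2(p)).sum())          # bits per byte, maximum 8

def first_high_entropy_block(image: bytes, block_size=4096,
                             target=7.9, baseline=6.0, threshold=20.0):
    """Return the block index at which the CUSUM for high entropy alarms."""
    s = 0.0
    for i in range(0, len(image) - block_size + 1, block_size):
        h = block_entropy(image[i:i + block_size])
        # positive increment when the entropy is closer to 'target' than 'baseline'
        s = max(0.0, s + (h - (baseline + target) / 2))
        if s > threshold:
            return i // block_size
    return None

# Illustrative data: text-like bytes followed by pseudo-random ("encrypted") bytes.
rng = np.random.default_rng(1)
plain = bytes(rng.integers(97, 123, size=200_000, dtype=np.uint8))
crypt = bytes(rng.integers(0, 256, size=200_000, dtype=np.uint8))
print("Alarm at block:", first_high_entropy_block(plain + crypt))
```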
A new method for musical steganography for the MIDI format is presented. The MIDI standard is a user-friendly music technology protocol that is frequently deployed by composers of different levels of ambition. To the author's knowledge, there is no fully implemented, rigorously specified and publicly available method for MIDI steganography. The goal of this study is therefore to investigate how a novel MIDI steganography algorithm can be implemented by manipulation of the velocity attribute, subject to restrictions of capacity and security. Many of today's MIDI steganography methods, less rigorously described in the literature, fail to be resilient to steganalysis. Traces that could catch the eye of a scrutinizing steganalyst, such as artefacts in the MIDI code which would not occur by the mere generation of MIDI music (MIDI file size inflation, radical changes in mean absolute error or peak signal-to-noise ratio of certain kinds of MIDI events, or even audible effects in the stego MIDI file), are side-effects of many current methods described in the literature. Resilience to steganalysis is an imperative property of a steganography method. By restricting the carrier MIDI files to classical organ and harpsichord pieces, the problem of velocities following the mood of the music can be avoided. The proposed method, called Velody 2, is found to be on par with or better than the cutting-edge alternative methods regarding capacity and inflation while still possessing better resilience against steganalysis. An audibility test was conducted to check that there are no signs of audible traces in the stego MIDI files. © 2020 by the author. Licensee MDPI, Basel, Switzerland.
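The Velody 2 algorithm itself is specified in the paper; purely to illustrate where velocity bits live in a MIDI file, the sketch below performs naive least-significant-bit embedding in note-on velocities using the mido library. This is not the proposed method and would not share its steganalysis resilience.

```python
# Naive illustration of velocity-based MIDI embedding with the mido library.
# This is NOT the Velody 2 algorithm; it only shows where velocity bits live.
import mido

def embed_bits(infile: str, outfile: str, bits: str) -> None:
    mid = mido.MidiFile(infile)
    it = iter(bits)
    for track in mid.tracks:
        for msg in track:
            if msg.type == 'note_on' and msg.velocity > 0:
                bit = next(it, None)
                if bit is None:
                    break
                # overwrite the least significant bit of the velocity
                msg.velocity = (msg.velocity & ~1) | int(bit)
    mid.save(outfile)

def extract_bits(infile: str, n_bits: int) -> str:
    mid = mido.MidiFile(infile)
    bits = []
    for track in mid.tracks:
        for msg in track:
            if msg.type == 'note_on' and msg.velocity > 0 and len(bits) < n_bits:
                bits.append(str(msg.velocity & 1))
    return ''.join(bits)
```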
We study, by means of simulations, the performance of the Shewhart method, the Cusum method, the Shiryaev-Roberts method and the likelihood ratio method in the case when the true shift differs from the shift for which the methods are optimal. The methods are compared for a fixed expected time until false alarm. The comparisons are made with respect to some measures associated with power such as probability of alarm when the change occurs immediately, expected delay of true alarm and predictive value of an alarm. Copyright © 2000 by Marcel Dekker, Inc.
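As a concrete reference for two of the compared schemes, the sketch below implements a Shewhart rule and a one-sided CUSUM for an upward mean shift in a unit-variance Gaussian stream; the thresholds and the simulated shift are illustrative values, not the configurations used in the study.

```python
# Illustrative Shewhart and one-sided CUSUM detectors for an upward mean shift
# in a unit-variance Gaussian stream. Thresholds and the simulated shift size
# are arbitrary choices, not the configurations compared in the study.
import numpy as np

def shewhart_alarm(x, threshold=3.0):
    """First index where a single observation exceeds the threshold."""
    hits = np.flatnonzero(x > threshold)
    return int(hits[0]) if hits.size else None

def cusum_alarm(x, k=0.5, h=5.0):
    """One-sided CUSUM tuned (via the reference value k) for a shift of size 2*k."""
    s = 0.0
    for t, xt in enumerate(x):
        s = max(0.0, s + xt - k)
        if s > h:
            return t
    return None

rng = np.random.default_rng(0)
tau = 200                                            # true change point
x = np.concatenate([rng.normal(0, 1, tau), rng.normal(1.0, 1, 300)])
print("Shewhart alarm:", shewhart_alarm(x), " CUSUM alarm:", cusum_alarm(x))
```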
Blood lactate accumulation is a crucial fatigue indicator during sports training. Previous studies have predicted cycling fatigue using surface electromyography (sEMG) to non-invasively estimate lactate concentration in blood. This study used sEMG to predict muscle fatigue while running and proposes a novel method for the automatic classification of running fatigue based on sEMG. Data were acquired from 12 runners during an incremental treadmill running test using sEMG sensors placed on the vastus lateralis, vastus medialis, biceps femoris, semitendinosus, and gastrocnemius muscles of the right and left legs. Blood lactate samples of each runner were collected every two minutes during the test. A change-point segmentation algorithm labeled each sample with a class of fatigue level as (1) aerobic, (2) anaerobic, or (3) recovery. Three separate random forest models were trained to classify fatigue using 36 frequency, 51 time-domain, and 36 time-event sEMG features. The models were optimized using a forward sequential feature elimination algorithm. Results showed that the random forest trained using the distributive power frequency of the sEMG signal of the vastus lateralis muscle alone could classify fatigue with high accuracy. Importantly, for this feature the group-mean ranks were significantly different (p < 0.01) between fatigue classes. Findings support using this model for monitoring fatigue levels during running. © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
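A minimal sketch of the classification step with scikit-learn is shown below; it assumes a pre-computed per-window feature matrix and fatigue labels (feature extraction from the raw sEMG signals is not shown), and the file names are placeholders.

```python
# Sketch of the fatigue-classification step with a random forest.
# Assumes an already-extracted feature matrix X (per-window frequency,
# time-domain and time-event features) and labels y in {aerobic, anaerobic,
# recovery}; the file names below are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.load("semg_features.npy")        # placeholder path: windows x features
y = np.load("fatigue_labels.npy")       # placeholder path: one class per window

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("Cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```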
A system for detecting deviating human behaviour in a smart home environment is the long-term goal of this work. It is believed that such systems will be very important in ambient assisted living services. Three types of deviations are considered in this work: deviation in activity intensity, deviation in time and deviation in space. Detection of deviations in activity intensity is formulated as the on-line quickest detection of a parameter shift in a sequence of independent Poisson random variables. Random forests trained in an unsupervised fashion are used to learn the spatial and temporal structure of data representing normal behaviour and are thereafter utilised to find deviations. The experimental investigations have shown that the Page and Shiryaev change-point detection methods are preferable in terms of expected delay of a motivated alarm. Interestingly, only a little is lost when the methods are specified with estimated intensity parameters rather than the true intensity values, which are not available in a real situation. As to the spatial and temporal deviations, they can be revealed through analysis of a 2D map of high-dimensional data. It was demonstrated that such a map is stable in terms of the number of clusters formed. We have shown that the data clusters can be understood and explored by finding the most important variables and by analysing the structure of the most representative tree.
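For the intensity-shift part, a minimal sketch of Page's CUSUM applied to Poisson counts is given below; the pre- and post-change intensities and the threshold are illustrative values, not those estimated in the study.

```python
# Sketch of Page's CUSUM for a shift in Poisson intensity (illustrative
# intensities and threshold, not the estimates used in the study).
import numpy as np

def poisson_cusum(counts, lam0, lam1, h=8.0):
    """Alarm index for a shift from intensity lam0 to lam1, or None."""
    llr_slope = np.log(lam1 / lam0)
    llr_const = lam0 - lam1
    s = 0.0
    for t, x in enumerate(counts):
        s = max(0.0, s + llr_slope * x + llr_const)   # log-likelihood ratio step
        if s > h:
            return t
    return None

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(2.0, 100), rng.poisson(5.0, 100)])
print("Alarm at interval:", poisson_cusum(counts, lam0=2.0, lam1=5.0))
```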
A system for detecting deviating human behaviour in a smart home environment is the long-term goal of this work. Clearly, such systems will be very important in ambient assisted living services. A new approach to modelling human behaviour patterns is suggested in this paper. The approach reveals promising results in unsupervised modelling of human behaviour and detection of deviations by using such a model. Human behaviour/activity in a short time interval is represented in a novel fashion by responses of simple non-intrusive sensors. Deviating behaviour is revealed through data clustering and analysis of associations between clusters and data vectors representing adjacent time intervals (analysing transitions between clusters). To obtain clusters of human behaviour patterns, first, a random forest is trained without using predefined teacher signals. Then, information collected in the random forest data proximity matrix is mapped onto a 2D space and data clusters are revealed there by agglomerative clustering. Transitions between clusters are modelled by a third-order Markov chain.
Three types of deviations are considered: deviation in time, deviation in space and deviation in the transition between clusters of similar behaviour patterns.
The proposed modelling approach does not make any assumptions about the position, type, and relationship of sensors but is nevertheless able to successfully create and use a model for deviation detection; this is claimed as a significant result in the area of expert and intelligent systems. Results show that spatial and temporal deviations can be revealed through analysis of a 2D map of high-dimensional data. It is demonstrated that such a map is stable in terms of the number of clusters formed. We show that the data clusters can be understood and explored by finding the most important variables and by analysing the structure of the most representative tree. © 2016 Elsevier Ltd. All rights reserved.
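One way to realise this pipeline with standard tools is sketched below: a random forest is trained to separate the real data from a column-permuted synthetic copy (Breiman's trick for unsupervised forests), proximities are derived from shared leaf membership, the proximity matrix is mapped to 2D, and clusters are found by agglomerative clustering. The choice of MDS for the mapping and all parameter values are illustrative assumptions, not necessarily those of the published work.

```python
# Sketch of the pipeline with standard tools; the mapping method (MDS) and all
# parameter values are illustrative choices, not necessarily those of the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering

def unsupervised_forest_proximity(X, n_estimators=200, random_state=0):
    """Breiman's trick: contrast the real data against a column-permuted copy."""
    rng = np.random.default_rng(random_state)
    synthetic = np.column_stack([rng.permutation(col) for col in X.T])
    data = np.vstack([X, synthetic])
    labels = np.r_[np.ones(len(X)), np.zeros(len(X))]
    forest = RandomForestClassifier(n_estimators=n_estimators,
                                    random_state=random_state).fit(data, labels)
    leaves = forest.apply(X)                 # leaf index per tree and sample
    # proximity = fraction of trees in which two samples share a leaf
    return np.mean(leaves[:, None, :] == leaves[None, :, :], axis=2)

X = np.random.default_rng(1).random((300, 20))   # placeholder sensor features
prox = unsupervised_forest_proximity(X)
coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(1.0 - prox)
clusters = AgglomerativeClustering(n_clusters=4).fit_predict(coords)
```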
Development, testing and validation of algorithms for smart home applications are often complex, expensive and tedious processes. Research on simulation of resident activity patterns in smart homes is an active research area and facilitates the development of algorithms for smart home applications. However, the simulation of passive infrared (PIR) sensors is often handled in a static fashion by generating equidistant events while an intended occupant is within sensor proximity. This paper suggests the combination of avatar-based control and probabilistic sampling in order to increase the realism of the simulated data. The number of PIR events during a time interval is assumed to be Poisson distributed, and this assumption is used in the simulation of smart home data. Results suggest that the proposed approach increases the realism of simulated data; however, results also indicate that improvements could be achieved using the geometric distribution as a model for the number of PIR events during a time interval. © IEEE 2015
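A minimal sketch of the sampling idea follows: instead of emitting equidistant PIR events while the simulated occupant is in range, the number of events per interval is drawn from a Poisson distribution (or, as the results suggest, a geometric distribution with the same mean could be tried instead). The intensity value is illustrative.

```python
# Sketch: sample the number of PIR events per time interval from a Poisson
# distribution instead of emitting equidistant events; the intensity is an
# illustrative value, not one estimated from real sensor data.
import numpy as np

rng = np.random.default_rng(0)
intervals_in_range = 120          # intervals in which the avatar is near the sensor
intensity = 3.2                   # illustrative mean number of events per interval

poisson_counts = rng.poisson(intensity, size=intervals_in_range)
# Alternative suggested by the results: a geometric model with the same mean
# (support starting at zero, hence the subtraction of one).
geometric_counts = rng.geometric(1.0 / (1.0 + intensity), size=intervals_in_range) - 1
```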
An anti-counterfeit and authentication method using time-controlled numeric tokens, enabling a secure logistic chain, is presented. Implementation of the method is illustrated with a pharmaceutical anti-counterfeit system. The method uses active RFID technology in combination with a product seal. Authenticity is verified by comparing time-controlled ID-codes, i.e. numeric tokens, stored in RFID tags with identical numeric tokens stored in a secure database. The pharmaceutical products are protected from the supplier to the pharmacist, with the possibility to extend the authentication out to the end customer. The ability of the method is analyzed by discussing several possible scenarios. It is shown that an accuracy of 99.9% in assuring the customer that she has an authentic product is achieved by the use of 11-bit ID-code strings.
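One plausible reading of the 11-bit figure, assuming the accuracy refers to the chance of a random token collision (an assumption, not stated above): a randomly guessed 11-bit token matches a stored token with probability 2^-11 = 1/2048, so a single successful comparison gives confidence 1 - 1/2048 ≈ 99.95%, consistent with the quoted 99.9%.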
Activity recognition in smart environments is essential for ensuring the wellbeing of older residents. By tracking activities of daily living (ADLs), a person's health status can be monitored over time. Nonetheless, accurate activity classification must overcome the fact that each person performs ADLs in different ways and in homes with different layouts. One possible solution is to obtain large amounts of data to train a supervised classifier. Data collection in real environments, however, is very expensive and cannot contain every possible variation of how different ADLs are performed. A more cost-effective solution is to generate a variety of simulated scenarios and synthesize large amounts of data. However, simulated data can be considerably different from real data. Therefore, this paper proposes the use of regression models to better approximate real observations based on simulated data. To achieve this, ADL data from a smart home were first compared with equivalent ADLs performed in a simulator. This comparison was undertaken considering the number of events per activity, the number of events per type of sensor per activity, and the activity duration. Then, different regression models were assessed for calculating real data based on simulated data. The results evidenced that simulated data can be transformed with a prediction accuracy of R² = 97.03%.
© Springer Science+Business Media, LLC, part of Springer Nature 2020
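A minimal sketch of the calibration step follows: a regression model is fitted that maps simulated activity summaries (events per activity, events per sensor type, duration) to their real counterparts. The file names and the ordinary-least-squares model choice are placeholders; the reported R² = 97.03% refers to the paper's own models, not to this sketch.

```python
# Sketch of the calibration step: fit a regression that maps simulated activity
# summaries to the corresponding real observations. File names, the feature
# layout and the ordinary-least-squares choice are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X_sim = np.load("simulated_activity_summaries.npy")   # placeholder path
y_real = np.load("real_activity_summaries.npy")        # placeholder path

X_tr, X_te, y_tr, y_te = train_test_split(X_sim, y_real, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("R^2 on held-out activities:", r2_score(y_te, model.predict(X_te)))
```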
The accurate recognition of activities is fundamental for following up on the health progress of people with dementia (PwD), thereby supporting subsequent diagnosis and treatments. When monitoring the activities of daily living (ADLs), it is feasible to detect behaviour patterns, parse out the disease evolution, and consequently provide effective and timely assistance. However, this task is affected by uncertainties derived from the differences in smart home configurations and the way in which each person undertakes the ADLs. One possible pathway is to train a supervised classification algorithm using large-sized datasets; nonetheless, obtaining real-world data is costly and characterized by a challenging recruitment process. The resulting activity data is then small and may not capture each person's intrinsic properties. Simulation approaches have risen as an alternative, efficient choice, but synthetic data can be significantly dissimilar compared to real data. Hence, this paper proposes the application of Partial Least Squares Regression (PLSR) to approximate the real activity duration of various ADLs based on synthetic observations. First, the real activity duration of each ADL is contrasted with the one derived from an intelligent environment simulator. Following this, different PLSR models were evaluated for estimating real activity duration based on synthetic variables. A case study including eight ADLs was considered to validate the proposed approach. The results revealed that simulated and real observations are significantly different in some ADLs (p-value < 0.05); nevertheless, synthetic variables can be further modified to predict the real activity duration with high accuracy (R²pred > 90%). © 2022 by the authors.
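A minimal sketch of the PLSR step with scikit-learn is given below, assuming matrices of synthetic predictors and real activity durations; the component count and file names are placeholders.

```python
# Sketch of the PLSR step: estimate real activity durations from synthetic
# simulator variables. The number of components and file names are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

X_synthetic = np.load("synthetic_adl_variables.npy")   # placeholder path
y_real_duration = np.load("real_adl_durations.npy")     # placeholder path

pls = PLSRegression(n_components=3)
y_hat = cross_val_predict(pls, X_synthetic, y_real_duration, cv=5).ravel()
print("Predictive R^2:", r2_score(y_real_duration, y_hat))
```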
Automatic detection and recognition of Activities of Daily Living (ADL) are crucial for providing effective care to frail older adults living alone. A step forward in addressing this challenge is the deployment of smart home sensors capturing the intrinsic nature of ADLs performed by these people. As the real-life scenario is characterized by a comprehensive range of ADLs and smart home layouts, deviations are expected in the number of sensor events per activity (SEPA), a variable often used for training activity recognition models. Such models, however, rely on the availability of suitable and representative data, the collection of which is habitually expensive and resource-intensive. Simulation tools are an alternative for tackling these barriers; nonetheless, an ongoing challenge is their ability to generate synthetic data representing the real SEPA. Hence, this paper proposes the use of Poisson regression modelling for transforming simulated data into a better approximation of real SEPA. First, synthetic and real data were compared to verify the equivalence hypothesis. Then, several Poisson regression models were formulated for estimating real SEPA using simulated data. The outcomes revealed that real SEPA can be better approximated (R²pred = 92.72%) if synthetic data is post-processed through Poisson regression incorporating dummy variables. © 2020 MDPI (Basel, Switzerland)
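A minimal sketch of the post-processing step follows, using a Poisson generalized linear model with activity dummy variables (statsmodels); the column names are placeholders and the exact covariates of the paper's models are not reproduced.

```python
# Sketch of the post-processing step: a Poisson regression of real SEPA on
# simulated SEPA plus activity dummy variables. Column names are placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sepa_observations.csv")   # placeholder: one row per activity instance
dummies = pd.get_dummies(df["activity"], prefix="act", drop_first=True)
X = sm.add_constant(pd.concat([df[["simulated_sepa"]], dummies], axis=1).astype(float))

model = sm.GLM(df["real_sepa"], X, family=sm.families.Poisson()).fit()
print(model.summary())
predicted_real_sepa = model.predict(X)       # calibrated event counts
```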
The Brahmi-descended Sinhala script is used by 75% of the 18 million population of Sri Lanka. To the best of our knowledge, none of the Brahmi-descended scripts used by hundreds of millions of people in South Asia possess commercial OCR products. In the process of implementing an OCR system for the printed Sinhala script which is easily adaptable to similar scripts [Premaratne, L., Assabie, Y., Bigun, J., 2004. Recognition of modification-based scripts using direction tensors. In: 4th Indian Conf. on Computer Vision, Graphics and Image Processing (ICVGIP2004), pp. 587–592], a segmentation-free recognition method using orientation features has been proposed in [Premaratne, H.L., Bigun, J., 2004. A segmentation-free approach to recognise printed Sinhala script using linear symmetry. Pattern Recognition 37, 2081–2089]. Due to the limitations of image analysis techniques, the character-level accuracy of the results directly produced by the proposed character recognition algorithm saturates at 94%. The false rejections from the recognition algorithm are initially identified only as 'missing character positions' or 'blank characters'. It is necessary to identify suitable substitutes for such 'missing character positions' and optimise the accuracy of words to an acceptable level. This paper proposes a novel method that explores the lexicon in association with hidden Markov models to improve the rate of accuracy of the recognised script. The proposed method could easily be extended, with minor changes, to other modification-based scripts consisting of confusing characters. The word-level accuracy, which was at 81.5%, is improved to 88.5% by the proposed optimisation algorithm.
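Purely to illustrate how lexicon statistics can fill a rejected ("blank") character position, the toy sketch below selects the character that maximises bigram likelihood around a single missing position (the degenerate single-unknown case of a Viterbi pass); the probabilities are toy values and the real system's hidden Markov models are considerably richer.

```python
# Toy illustration of filling a rejected character position using a character
# bigram model derived from a lexicon. Probabilities are toy values; the real
# system's hidden Markov models are considerably richer.
import math

def best_fill(word_with_gap, bigram_logp, alphabet):
    """Return the most probable character for the single '_' position."""
    i = word_with_gap.index("_")
    prev_ch = word_with_gap[i - 1] if i > 0 else None
    next_ch = word_with_gap[i + 1] if i + 1 < len(word_with_gap) else None

    def score(c):
        s = 0.0
        if prev_ch is not None:
            s += bigram_logp.get((prev_ch, c), math.log(1e-6))
        if next_ch is not None:
            s += bigram_logp.get((c, next_ch), math.log(1e-6))
        return s

    return max(alphabet, key=score)

bigram_logp = {("t", "h"): math.log(0.4), ("h", "e"): math.log(0.5)}  # toy counts
print(best_fill("t_e", bigram_logp, alphabet="abcdefghijklmnopqrstuvwxyz"))
```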
Deviation detection is important for self-monitoring systems. To perform deviation detection well requires methods that, given only "normal" data from a distribution of unknown parametric form, can produce a reliable statistic for rejecting the null hypothesis, i.e. evidence for deviating data. One measure of the strength of this evidence based on the data is the p-value, but few deviation detection methods utilize p-value estimation. We compare three methods that can be used to produce p-values: the one-class support vector machine (OCSVM), conformal anomaly detection (CAD), and a simple "most central pattern" (MCP) algorithm. The OCSVM and the CAD method should be able to handle a distribution of any shape. The methods are evaluated on synthetic data sets to test and illustrate their strengths and weaknesses, and on data from a real-life self-monitoring scenario with a city bus fleet in normal traffic. The OCSVM has a Gaussian kernel for the synthetic data and a Hellinger kernel for the empirical data. The MCP method uses the Mahalanobis metric for the synthetic data and the Hellinger metric for the empirical data. The CAD uses the same metrics as the MCP method and has a k-nearest neighbour (kNN) non-conformity measure for both sets. The conclusion is that all three methods give reasonable, and quite similar, results on the real-life data set but that they have clear strengths and weaknesses on the synthetic data sets. The MCP algorithm is quick and accurate when the "normal" data distribution is unimodal and symmetric (with the chosen metric) but not otherwise. The OCSVM is a bit cumbersome to use to create (quantized) p-values but is accurate and reliable when the data distribution is multimodal and asymmetric. The CAD is also accurate for multimodal and asymmetric distributions. The experiment on the vehicle data illustrates how algorithms like these can be used in a self-monitoring system that uses a fleet of vehicles to conduct deviation detection without supervision and without prior knowledge about what is being monitored. © 2014 IEEE.
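As an illustration of the p-value construction that CAD provides, the sketch below computes conformal p-values from a k-nearest-neighbour non-conformity measure on "normal" training data; the synthetic data and parameter choices are placeholders, and for the bus fleet data the Euclidean metric would be replaced by a Hellinger distance.

```python
# Sketch of conformal anomaly detection with a k-nearest-neighbour
# non-conformity measure. Data and parameters are placeholders; for the bus
# fleet data the Euclidean metric would be swapped for a Hellinger distance.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def conformal_pvalues(train, test, k=5):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(train)
    # calibration scores: mean distance to k nearest neighbours, excluding self
    d_train, _ = nn.kneighbors(train)
    alpha_train = d_train[:, 1:].mean(axis=1)
    # test scores: mean distance to the k nearest training points
    d_test, _ = nn.kneighbors(test, n_neighbors=k)
    alpha_test = d_test.mean(axis=1)
    # conformal p-value: fraction of calibration scores at least as extreme
    return np.array([(np.sum(alpha_train >= a) + 1) / (len(alpha_train) + 1)
                     for a in alpha_test])

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 2))            # "normal" operation data
queries = np.array([[0.1, -0.2], [4.0, 4.0]])       # typical vs. deviating point
print(conformal_pvalues(normal, queries))            # a small p-value flags deviation
```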
Quantitative analysis of the evolution of innovations at national systems level is not always possible due to the lack of reliable, comprehensive and adequate data sets. Therefore, managerial practice among organizations as well as policy decision making are often myopic and uninformed about actual dynamics. In the Swedish case, there are promising data sets, even if the adequacy of existing variable definitions needs to be explored and debated. Official data collected by the central statistics authority SCB (Statistics Sweden) includes several potentially relevant variables on all private and public organizations in Sweden and their employees. These data are compiled into time series for a number of years which allows for longitudinal analysis. Data can also be merged with other data sets on the environmental goods and services sector and energy consumption data and therefore allow for a detailed "demographic" or "population ecology" analysis of environmentally oriented or friendly innovation since at least 2003. Halmstad University has recently gained full access to these data. In this paper, these databases are described in some detail. Problems of definitions and measurement are particularly discussed, and some initial descriptive statistics are presented. Further, the paper advocates the use of models inspired by population ecology and demography in analyzing existing data. In particular it is suggested that interactive diffusion models may enhance the understanding of the evolution of green innovations and their dynamics. It is also suggested that multi-level regression analysis is applicable in estimating the power of factors that bring progress to the "greening" of the Swedish innovation system. Together, such models are potentially useful in forecasting the development of innovation systems. The models can also be used in generating, testing by simulating and thus evaluating approaches to management of innovation and innovation policy implementation. A dynamic understanding of the "greening" of the innovation system is a critical asset in the development of tools to be used for continuous improvements in both policy making and the management of innovation in organizations.
Most countries aim to transform towards becoming greener societies. In parallel, many companies struggle with the question of how to build more sustainable operations while at the same time sustaining or developing their competitive advantage. Research has, up until today, however, largely failed to provide solid explanations for how to achieve these aims, from which policy and managerial decision-making can be deduced. One reason for this failure is that quantitative analysis of "green" innovation at national systems level is not always possible due to the lack of reliable, comprehensive and adequate data sets. In the Swedish case, there are promising data sets, even if one can always debate the adequacy of existing variable definitions. Official data collected by Statistics Sweden (SCB) includes several interesting variables on all private and public organizations in Sweden and all employees, compiled into time series for a number of years. These can be merged with other data sets on the environmental goods and services sector and energy consumption data and therefore allow for a detailed "demographic" or "population ecology" analysis of environmentally oriented or friendly innovation since at least 2003. In this paper, these databases are described in some detail. Problems of definitions and measurement are particularly discussed. Initial explorations describe the shift from fossil to non-fossil energy sources in the Swedish innovation system. Further, we also suggest some models inspired by demography and population ecology, as well as multi-level models. In particular, it is suggested that diffusion models could be applied, including models in which diffusion processes interact in micro-level systems. It is suggested to apply multi-level regression analysis in order to estimate the power of factors affecting the "greening" of the Swedish innovation system.
This study describes a new method for musical steganography utilizing the MIDI format. MIDI is a standard music technology protocol that is used around the world to create music and make it available for listening. Since no publicly available method for MIDI steganography has been found (even though there are a few methods described in the literature), the study investigates how a new algorithm for MIDI steganography can be designed so that it satisfies capacity and security criteria. As part of the study, a method for using velocity values to hide information in music has been designed and evaluated, during which the capacity of the method was found to be comparable with similar methods. In an audibility test, it was observed that audible impact on the music cannot be distinguished at any reasonable significance level, which means that a security criterion is also met. © 2017 IEEE.
Extortion using digital platforms is an increasing form of crime. A commonly seen problem is extortion in the form of an infection by a Crypto Ransomware that encrypts the files of the target and demands a ransom to recover the locked data. By analyzing the four most common Crypto Ransomwares at the time of writing, a clear vulnerability is identified: all infections rely on tools available on the target system in order to prevent a simple recovery after the attack has been detected. By renaming the system tool that handles shadow copies, it is possible to recover from infections by all four of the most common Crypto Ransomwares. The solution is packaged in a single, easy-to-use script. © 2016 IEEE.
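A minimal sketch of the mitigation idea follows, assuming the tool in question is Windows' vssadmin.exe (the shadow-copy utility commonly abused by ransomware; the paper itself packages the countermeasure as a single script). Renaming a file under System32 requires administrative privileges and ownership of the file, which the sketch does not handle.

```python
# Sketch of the mitigation idea: rename the shadow-copy administration tool so
# that ransomware cannot invoke it by its expected name. Assumes the tool is
# vssadmin.exe; requires administrator rights and taking ownership of the file,
# which this sketch does not handle.
import os
import shutil

SYSTEM32 = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"), "System32")
ORIGINAL = os.path.join(SYSTEM32, "vssadmin.exe")
RENAMED = os.path.join(SYSTEM32, "vssadmin_protected.exe")  # illustrative new name

if os.path.exists(ORIGINAL):
    shutil.move(ORIGINAL, RENAMED)   # fails without elevated privileges/ownership
    print("Shadow-copy tool renamed; administrators can still use the new name.")
else:
    print("Tool not found under its expected name; nothing to do.")
```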