This paper presents a decomposition technique for Hennessy-Milner logic with past and its extension with recursively defined formulae. In order to highlight the main ideas and technical tools, processes are described using a subset of CCS with parallel composition, nondeterministic choice, action prefixing and the inaction constant. The study focuses on developing decompositional reasoning techniques for parallel contexts in that language. © 2012 Springer-Verlag.
This paper presents a general technique for obtaining new results pertaining to the non-finite axiomatizability of behavioural (pre)congruences over process algebras from old ones. The proposed technique is based on a variation on the classic idea of reduction mappings. In this setting, such reductions are translations between languages that preserve sound (in)equations and (in)equational provability over the source language, and reflect families of (in)equations responsible for the non-finite axiomatizability of the target language. The proposed technique is applied to obtain a number of new non-finite axiomatizability theorems in process algebra via reduction to Moller's celebrated non-finite axiomatizability result for CCS. The limitations of the reduction technique are also studied. In particular, it is shown that prebisimilarity is not finitely based over CCS with the divergent process Ω, but that this result cannot be proved by a reduction to the non-finite axiomatizability of CCS modulo bisimilarity. This negative result is the inspiration for the development of a sharpened reduction method that is powerful enough to show that prebisimilarity is not finitely based over CCS with the divergent process Ω. © 2010 Springer-Verlag.
Determinism is a semantic property of (a fragment of) a language that specifies that a program cannot evolve operationally in several different ways. Idempotence is a property of binary composition operators requiring that the composition of two identical specifications or programs results in a specification or program that is equivalent to the original components. In this paper, we propose (related) meta-theorems for guaranteeing the determinism and idempotence of binary operators. These meta-theorems are formulated in terms of syntactic templates for operational semantics, called rule formats. In order to obtain a powerful rule format for idempotence, we make use of the determinism of certain transition relations in its definition. We show the applicability of our formats by applying them to various operational semantics from the literature. © 2010 Elsevier B.V. All rights reserved.
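To make the two properties concrete, here is a standard instance of each in CCS-like notation (an illustration, not taken verbatim from the paper): nondeterministic choice is idempotent modulo bisimilarity, and a transition relation is deterministic when the target of a transition is uniquely determined by its source and label (up to the chosen equivalence).

\[ x + x = x \qquad\qquad p \xrightarrow{a} p' \ \wedge\ p \xrightarrow{a} p'' \ \implies\ p' = p'' \]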
Decompositional reasoning aims at automatically decomposing a global property of a composite system into local properties of (possibly unknown) components. In concurrency theory, decompositional reasoning techniques date back to the seminal work of Larsen and Liu in the late 1980s and early 1990s. However, we are not aware of any such decomposition technique that applies to reasoning about the "past". In this paper, we address this problem and present a decomposition technique for Hennessy-Milner logic with past and its extension with recursively defined formulae. As a language for processes, we use a subset of Milner's CCS with parallel composition, non-deterministic choice, action prefixing and the inaction constant. We focus on developing decompositional reasoning techniques for parallel contexts in that language.
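As a minimal illustration of decompositional reasoning over a parallel context (our example, not a statement from the paper), consider a single diamond modality for a visible action a in CCS: the parallel composition satisfies it exactly when one of its components does, since a visible action of p | q originates in exactly one of p and q.

\[ p \mid q \models \langle a\rangle \mathit{tt} \iff p \models \langle a\rangle \mathit{tt} \ \text{ or }\ q \models \langle a\rangle \mathit{tt} \]

Nested formulae, synchronization (tau-transitions) and the past modalities require the more general decomposition machinery developed in the paper.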
This paper proposes rule formats for Structural Operational Semantics guaranteeing that certain binary operators are left distributive with respect to a set of binary operators. Examples of left-distributivity laws from the literature are shown to be instances of the provided formats. Some conditions ensuring the invalidity of the left-distributivity law are also offered. © 2012 Elsevier B.V. All rights reserved.
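For concreteness, a binary operator \(\otimes\) is left distributive with respect to + when the following equation is valid in the semantics of interest (a generic statement, supplied here for illustration):

\[ (x + y) \otimes z = (x \otimes z) + (y \otimes z) \]

A classic instance is the left-merge operator of ACP/CCS-like languages, which is left distributive over nondeterministic choice modulo strong bisimilarity.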
This paper proposes rule formats for Structural Operational Semantics guaranteeing that certain constants act as left or right unit/zero elements for a set of binary operators. Examples of left and right zero, as well as unit, elements from the literature are shown to fit the rule formats offered in this study. © 2011 Elsevier B.V. All rights reserved.
This paper proposes a rule format for Structural Operational Semantics guaranteeing that certain constants act as left or right zero elements for a set of binary operators. Our design approach is also applied to reformulate an earlier rule format for unit elements developed by some of the authors. Examples of left and right zero, as well as unit, elements from the literature are shown to be checkable using the provided formats.
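Typical instances of such laws, stated here only as illustration, are that the inaction constant 0 is a unit element for nondeterministic choice and that the deadlock constant \(\delta\) of ACP-like languages is a left zero element for sequential composition:

\[ \mathbf{0} + x = x = x + \mathbf{0} \qquad\qquad \delta \cdot x = \delta \]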
This paper proposes rule formats for Structural Operational Semantics guaranteeing that certain binary operators are left distributive with respect to a set of binary operators. Examples of left-distributivity laws from the literature are shown to be instances of the provided formats.
This paper proposes rule formats for Structural Operational Semantics guaranteeing that certain binary operators are left distributive with respect to a set of binary operators. Examples of left-distributivity laws from the literature are shown to be instances of the provided formats. © 2011 Springer-Verlag.
This paper presents a general technique for obtaining new results pertaining to the non-finite axiomatizability of behavioral semantics over process algebras from old ones. The proposed technique is based on a variation on the classic idea of reduction mappings. In this setting, such reductions are translations between languages that preserve sound (in)equations and (in)equational proofs over the source language, and reflect families of (in)equations responsible for the non-finite axiomatizability of the target language. The proposed technique is applied to obtain a number of new non-finite axiomatizability theorems in process algebra via reduction to Moller’s celebrated non-finite axiomatizability result for CCS. The limitations of the reduction technique are also studied.
In the field of structural operational semantics (SOS), there have been several proposals both for syntactic rule formats guaranteeing the validity of algebraic laws, and for algorithms for automatically generating ground-complete axiomatizations. However, there has been no synergy between these two types of results. This paper takes the first steps in marrying these two areas of research in the meta-theory of SOS and shows that taking algebraic laws into account in the mechanical generation of axiomatizations results in simpler axiomatizations. The proposed theory is applied to a paradigmatic example from the literature, showing that, in this case, the generated axiomatization coincides with a classic hand-crafted one. © 2013 Springer-Verlag Berlin Heidelberg.
Algebraic properties specify some natural properties of programming and specification constructs. This paper provides an overview of techniques to guarantee or generate algebraic properties of language constructs by investigating the syntactic shape of the deduction rules defining their operational semantics.
We study the equational theory of Timed CCS as proposed by Wang Yi in CONCUR'90. As in Wang Yi's paper, we focus on a class of linearly ordered time domains exemplified by the positive real or rational numbers. We show that, even when the set of basic actions is a singleton, there are parallel Timed CCS processes that do not have any sequential equivalent; this improves on the Gap Theorem for Timed CCS presented by Godskesen and Larsen in FSTTCS'92. Furthermore, we show that timed bisimilarity is not finitely based for both single-sorted and two-sorted presentations of Timed CCS. We further strengthen this result by showing that, unlike in some other process algebras, adding the untimed or the timed left-merge operator to the syntax and semantics of Timed CCS does not solve the axiomatizability problem.
This paper offers a meta-theorem for languages with a Structural Operational Semantics (SOS) in the style of Plotkin. Namely, it proposes a generic rule format for SOS guaranteeing that certain constants act as left- or right-unit elements for a set of binary operators. We show the generality of our format by applying it to a wide range of operators from the literature on process calculi.
Process algebra provides abstract and rigorous means for studying communicating concurrent systems. Coordination languages also provide abstract means for specifying and programming the communication of components. Hence, the two fields seem to have very much in common, and the link between these two research areas has been established formally by means of several translations, mainly from coordination languages to process algebras. There have also been proposals of process algebras whose communication policy is inspired by the one underlying coordination languages. The aim of this workshop was to push the state of the art in the study of the connections between process algebra and coordination languages by bringing together experts as well as young researchers from the two fields to communicate their ideas and findings. This volume includes both contributed and invited papers presented during the one-day meeting on Process Algebra and Coordination (PACO 2011), which took place on June 9, 2011 in Reykjavik, Iceland.
We report on a tool prototype for model-based testing of cyber-physical systems. Our starting point is a hybrid-system model specified in a domain-specific language called Acumen. Our prototype tool is implemented in Matlab and covers three stages of model-based testing, namely, test-case generation, test-case execution, and conformance analysis. We have applied our implementation to a number of typical examples of cyber-physical systems in order to analyze its applicability. In this paper, we report on the result of applying the prototype tool on a DC-DC boost converter. © Springer International Publishing Switzerland 2015
Cyber-physical systems (CPSs) are the result of the integration of connected computer systems with the physical world. They feature complex interactions that go beyond traditional communication schemes and protocols in computer systems. One distinguishing feature of such complex interactions is the tight coupling between discrete and continuous interactions, captured by hybrid system models.
Due to the complexity of CPSs, providing rigorous and model-based analysis methods and tools for verifying correctness of such systems is of the utmost importance. Model-based testing (MBT) is one such verification technique that can be used for checking the conformance of an implementation of a system to its specification (model).
In this chapter, we first review the main concepts and techniques in MBT. We then review the most common modeling formalisms for CPSs, with a focus on hybrid system models. Finally, we provide a brief overview of conformance relations and conformance testing techniques for CPSs. © 2017 Elsevier Inc. All rights reserved.
We present a survey of the recent research efforts in integrating model learning with model-based testing. We distinguish two strands of work in this domain, namely test-based learning (also called test-based modeling) and learning-based testing. We classify the results in terms of their underlying models, their test purposes and techniques, and their target domains. © Springer International Publishing AG
Conformance testing is a formal and structured approach to verifying system correctness. We propose a conformance testing algorithm for cyber-physical systems, based on the notion of hybrid conformance by Abbas and Fainekos. We show how the dynamics of system specification and the sampling rate play an essential role in making sound verdicts. We specify and prove error bounds that lead to sound test-suites for a given specification and a given sampling rate. We use reachability analysis to find such bounds and implement the proposed approach using the CORA toolbox in Matlab. We apply the implemented approach on a case study from the automotive domain. © 2017 The Author(s).
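Roughly, and glossing over the treatment of hybrid time and discrete jumps, the hybrid conformance of Abbas and Fainekos compares output signals y1 and y2 up to a time margin \(\tau\) and a value margin \(\varepsilon\): every point of one signal must be matched by a point of the other that is close in both time and value (and symmetrically), over the test horizon. A simplified sketch of the condition:

\[ \forall t.\ \exists t'.\ |t - t'| \le \tau \ \wedge\ \| y_1(t) - y_2(t') \| \le \varepsilon \]

The error bounds mentioned above quantify how finite sampling of the continuous signals may perturb \(\tau\) and \(\varepsilon\) without invalidating the test verdicts.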
We present a process for sound conformance testing of cyber-physical systems, which involves functional as well as non-functional aspects. The process starts with a hybrid model of the cyber-physical system in which the correct behavior of the system (at its interface level) is specified. Such a model captures both the discrete behavior and the evolution of the continuous dynamics of the system in time. Since conformance testing inherently involves comparing continuous dynamics, the key parameters of the process are (1) the conformance bounds defining when two signals are sufficiently close to each other, and (2) the permitted error margin in the conformance analysis introduced by sampling of continuous signals. The final parameter of this process is (3) the sampling rate of the dynamic behavior, which may need to be found or adjusted. In the specified process, we provide alternatives for fixing the error margin of the conformance analysis if the sampling rate is fixed, establishing the sampling rate if the error margin is fixed, and finding conformance bounds once the sampling rate and the error margin are fixed. © 2017 IEEE.
This paper provides some background and the roadmap of the AUTO-CAAS project, a 3-year project financed by the Swedish Knowledge Foundation and run jointly by three academic and industrial partners. The aim of the project is to exploit the formal models of the AUTOSAR standard, developed by the project's industrial partner Quviq AB, in order to predict possible future failures in concrete implementations of components. To this end, deviations from the formal specification will be exploited to generate test cases that can push concrete components to the corners where such deviations result in observable failures. The same information will also be used in the diagnosis of otherwise detected failures in order to pinpoint their root causes.
In this paper, we present a process-algebraic specification of the group membership protocols specified in [Y. Amir, D. Dolev, S. Kramer and D. Malki, Membership Algorithms for Multicast Communication Groups, Springer-Verlag, 1992]. In order to formalise the protocol and its properties, we disambiguate the informal specification provided by that paper. This requires trying different possible interpretations in the formal model, checking the consistency of the assumptions, and formally verifying the correctness properties. We thus present a formal reconstruction of the membership algorithms and model-check our reconstruction.
XY-simulation is a generalization of bisimulation that is parameterized with two subsets of actions. XY-simulation is known in the literature under different names such as modal refinement, partial bisimulation, and alternating simulation. In this paper, we propose a precongruence rule format for XY-simulation. The format allows for checking compositionality of XY-simulation for an arbitrary language with structural operational semantics, by performing very simple checks on the syntactic shape of the rules. We apply our format to derive concrete compositionality results for different notions of behavioral pre-order with respect to different process calculi in the literature. © IFIP International Federation for Information Processing 2015
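Roughly, for subsets X and Y of the action set, a relation R on processes is an XY-simulation when, for every pair (p, q) in R, actions in X are simulated from left to right and actions in Y from right to left:

\[
\begin{array}{l}
p \xrightarrow{a} p',\ a \in X \ \implies\ \exists q'.\ q \xrightarrow{a} q' \ \wedge\ (p', q') \in R,\\
q \xrightarrow{a} q',\ a \in Y \ \implies\ \exists p'.\ p \xrightarrow{a} p' \ \wedge\ (p', q') \in R.
\end{array}
\]

Taking both X and Y to be the full action set yields strong bisimulation, while taking Y to be empty yields plain simulation; other choices of X and Y give the refinement-like preorders mentioned above.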
We extend the theory of input-output conformance testing to the setting of software product lines. In particular, we allow for input-output featured transition systems to be used as the basis for generating test suites and test cases. We introduce refinement operators both at the level of models and at the level of test suites that allow for projecting them into a specific product configuration (or a product sub-line). We show that the two sorts of refinement are consistent and lead to the same set of test-cases. © Copyright 2014 ACM
We extend the theory of input-output conformance (IOCO) testing to accommodate behavioral models of software product lines (SPLs). We present the notions of residual and spinal testing. These notions allow for structuring the test process for SPLs by taking variability into account and extracting separate test suites for common and specific features of an SPL. The introduced notions of residual and spinal test suites allow for focusing on the newly introduced behavior and avoiding unnecessary re-testing of the old one. Residual test suites are conservative in that they still require retesting the old behavior that can reach the new behavior. Spinal test suites, in contrast, prune the old tests more aggressively and keep only those test sequences that are necessary for reaching the new behavior. We show that residual testing is complete but does not usually lead to much reduction in the test suite. In contrast, spinal testing is not necessarily complete but does reduce the test suite. We give sufficient conditions on the implementation to guarantee completeness of spinal testing. Finally, we specify and analyze an example regarding the Ceiling Speed Monitoring Function from the European Train Control System. (C) 2016 The Author(s). Published by Elsevier Inc.
A major challenge in testing software product lines is efficiency. In particular, testing a product line should take less effort than testing each and every product individually. We address this issue in the context of input-output conformance testing, which is a formal theory of model-based testing. We extend the notion of conformance testing on input-output featured transition systems with the novel concept of spinal test suites. We show how this concept dispenses with retesting the common behavior among different, but similar, products of a software product line. © H. Beohar & M.R. Mousavi.
In order to provide a rigorous foundation for Software Product Lines (SPLs), several fundamental approaches have been proposed to their formal behavioral modeling. In this paper, we provide a structured overview of those formalisms based on labeled transition systems and compare their expressiveness in terms of the set of products they can specify. Moreover, we define the notion of tests for each of these formalisms and show that our notions of testing precisely capture product derivation, i.e., all valid products will pass the set of test cases of the product line and each invalid product fails at least one test case of the product line. © 2015 The Authors.
Regression testing is a means to assure that a change in the software, or its execution environment, does not introduce new defects. It involves the expensive undertaking of rerunning test cases. Several techniques have been proposed to reduce the number of test cases to execute in regression testing; however, there is no research on how to assess the industrial relevance and applicability of such techniques. We conducted a systematic literature review with two goals: firstly, to enable researchers to design and present regression testing research with a focus on industrial relevance and applicability, and secondly, to facilitate the industrial adoption of such research by addressing the attributes of concern from the practitioners' perspective. Using a reference-based search approach, we identified 1068 papers on regression testing. We then reduced the scope to include only papers with explicit discussions about relevance and applicability (i.e. mainly studies involving industrial stakeholders). Uniquely in this literature review, practitioners were consulted at several steps to increase the likelihood of achieving our aim of identifying factors important for relevance and applicability. We have summarised the results of these consultations and an analysis of the literature in three taxonomies, which capture aspects of the industrial relevance of regression testing techniques. Based on these taxonomies, we mapped 38 papers reporting the evaluation of 26 regression testing techniques in industrial settings. © The Author(s) 2019
In this paper we introduce a notion of counterfactual causality in the Halpern and Pearl sense that is compositional with respect to the interleaving of transition systems. The formal framework for reasoning about what caused the violation of a safety property is established in the context of labeled transition systems and Hennessy-Milner logic. The compositionality results are devised for non-communicating systems.
Transition rules with negative premises are needed in the structural operational semantics of programming and specification constructs such as priority and interrupt, as well as in timed extensions of specification languages. The well-known proof-theoretic semantics for transition system specifications involving such rules is based on well-supported proofs for closed transitions. Dealing with open formulae by considering all closed instances is inherently non-modular - proofs are not necessarily preserved by disjoint extensions of the transition system specification. Here, we conservatively extend the notion of well-supported proof to open transition rules. We prove that the resulting semantics is modular, consistent, and closed under instantiation. Our results provide the foundations for modular notions of bisimulation such that equivalence can be proved with reference only to the relevant rules, without appealing to all existing closed instantiations of terms. © 2013 Springer-Verlag.
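A standard example of a rule with a negative premise is the one for the priority operator \(\theta\), which allows an action only when no action of higher priority is enabled (recalled here for illustration; the exact shape varies across presentations):

\[
\frac{x \xrightarrow{a} x' \qquad x \not\xrightarrow{\,b\,} \ \text{for all } b > a}{\theta(x) \xrightarrow{a} \theta(x')}
\]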
Plotkin’s style of Structural Operational Semantics (SOS) has become a de facto standard in giving operational semantics to formalisms and process calculi. In many such formalisms and calculi, the concepts of names, variables and binders are essential ingredients. In this paper, we propose a formal framework for dealing with names in SOS. The framework is based on the Nominal Logic of Gabbay and Pitts and hence is called Nominal SOS. We define nominal bisimilarity, an adaptation of the notion of bisimilarity that is aware of binding. We provide evidence of the expressiveness of the framework by formulating the early π-calculus and Abramsky’s lazy λ-calculus within Nominal SOS. For both calculi we establish the operational correspondence with the original calculi. Moreover, in the context of the π-calculus, we prove that nominal bisimilarity coincides with Sangiorgi’s open bisimilarity and in the context of the λ-calculus we prove that nominal bisimilarity coincides with Abramsky’s applicative bisimilarity. © 2012 Elsevier B.V.
Input/Output Transition Systems (IOTSs) have been widely used as test models in model-based testing. Traditionally, input-output conformance testing (IOCO) has been used to generate random test cases from IOTSs. A recent test case generation method for IOTSs, called Complete IOCO, applies fault models to obtain complete test suites with guaranteed fault coverage for IOTSs. This paper measures the efficiency of Complete IOCO in comparison with the traditional IOCO test case generation implemented in the JTorX tool. To this end, we use a case study involving five specification models from the automotive and the railway domains. Faulty mutations of the specifications were produced in order to compare the efficiency of both test generation methods in killing them. The results indicate that Complete IOCO is more efficient in detecting deep faults in large state spaces, while IOCO is more efficient in detecting shallow faults in small state spaces. © 2016 ACM.
We propose a rule format that guarantees associativity of binary operators with respect to all notions of behavioral equivalence that are defined in terms of (im)possibility of transitions, e.g., the notions below strong bisimilarity in van Glabbeek's spectrum. The initial format is a subset of the De Simone format. We show that all trivial generalizations of our format are bound to fail. We further extend the format in a few directions and illustrate its application to several formalisms in the literature. A subset of the format is studied to obtain associativity with respect to graph isomorphism.
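For illustration, the target laws of such a format have the shape below; familiar instances, valid modulo strong bisimilarity, are nondeterministic choice and parallel composition in CCS-like languages:

\[ (x \oplus y) \oplus z = x \oplus (y \oplus z) \]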
Process algebras have been developed as formalisms for specifying the behavioral aspects of protocols. Interpreted systems have been proposed as a semantic model for multi-agent communication. In this paper, we connect these two formalisms by defining an interpreted systems semantics for a generic process algebraic formalism. This allows us to translate and compare the vast body of knowledge and results for each of the two formalisms to the other and perform epistemic reasoning, e.g., using model-checking tools for interpreted systems, on process algebraic specifications. Based on our translation we formulate and prove some results about the interpreted systems generated by process algebraic specifications. © 2013 Springer-Verlag.
Operational models of protocols, on one hand, are readable and conveniently match their implementation, at a certain abstraction level. Epistemic models, on the other hand, are appropriate for specifying knowledge-related properties such as anonymity. These two approaches to specification and analysis have so far developed in parallel and one has either to define ad hoc correctness criteria for the operational model or use complicated epistemic models to specify the operational behavior. We work towards bridging this gap by proposing a combined framework which allows modeling the behavior of a protocol in a process language with an operational semantics and supports reasoning about properties expressed in a rich logic with temporal and epistemic operators.
Operational models of (security) protocols, on one hand, are readable and conveniently match their implementation (at a certain abstraction level). Epistemic models, on the other hand, are appropriate for specifying knowledge-related properties such as anonymity or secrecy. These two approaches to specification and verification have so far developed in parallel and one has either to define ad hoc correctness criteria for the operational model or use complicated epistemic models to specify the operational behavior. We work towards bridging this gap by proposing a combined framework which allows for modeling the behavior of a protocol in a process language with an operational semantics and supports reasoning about properties expressed in a rich logic which combines temporal and epistemic operators.
Automatic generation of random test inputs is an approach that can alleviate the challenges of manual test case design. However, random test cases may be ineffective in fault detection and increase testing cost, especially in systems where test execution is resource- and time-consuming. To remedy this, the domain knowledge of test engineers can be exploited to select potentially effective test cases. To this end, test selection constraints suggested by domain experts can be utilized either for filtering randomly generated test inputs or for direct generation of inputs using constraint solvers. In this article, we propose a domain-specific language (DSL) for formalizing locality-based test selection constraints of autonomous agents and discuss the impact of test selection filters, specified in our DSL, on randomly generated test cases. We study and compare the performance of the filtering and constraint solving approaches in generating selective test cases for different test scenario parameters and discuss the role of these parameters in test generation performance. Through our study, we provide criteria for the suitability of the random data filtering approach versus the constraint solving one under the varying size and complexity of our testing problem. We formulate the corresponding research questions and answer them by designing and conducting experiments using QuickCheck for random test data generation with filtering and Z3 for constraint solving. Our observations and statistical analysis indicate that applying filters can significantly improve the test efficiency of randomly generated test cases. Furthermore, we observe that test scenario parameters affect the performance of the filtering and constraint solving approaches differently. In particular, our results indicate that the two approaches have complementary strengths: random generation with filtering works best for large numbers of agents and long paths, while its performance degrades for larger grid sizes and stricter constraints. On the contrary, constraint solving has robust performance for large grid sizes and strict constraints, while its performance degrades with more agents and longer paths. © 2023 Copyright held by the owner/author(s).
The automatic generation of random test inputs offers a potential solution to the challenges associated with manual test case design. However, the use of random test cases may prove ineffective for fault detection and can escalate testing costs, particularly in systems where test execution demands significant resources and time. To address this issue, leveraging the domain knowledge of test engineers becomes crucial for selecting test cases with the potential for effectiveness. One approach involves utilizing test selection constraints recommended by domain experts, which can be applied to generate targeted test inputs. In our previous paper, we introduced a domain-specific language (DSL) designed to formalize locality-based test selection constraints specifically tailored for autonomous agents. In this work, we devise an extended DSL for specifying more detailed test scenarios for a more elaborate model of autonomous agents and their environment. We design a questionnaire to elicit several experts' opinions on the usefulness of the DSL, and we also design an experiment to compare the efficiency, in terms of time needed to reach a failure, of the extended DSL with the initially proposed one. The questionnaire results show that some features of the extended DSL appear useful in the experts' opinion, and the experiment results show that testing with the extended DSL can considerably improve the efficiency of the testing process.
Automated random testing is useful in finding faulty corner cases that are difficult to find by using manually defined, fixed test suites. However, random test inputs can be inefficient in finding faults, particularly in systems where test execution is time- and resource-consuming. Hence, filtering out less-effective test cases by applying domain knowledge constraints can contribute to test effectiveness and efficiency. In this paper, we provide a domain-specific language (DSL) for formalising locality-based test selection constraints for autonomous agents. We use this DSL for filtering randomly generated test inputs. We evaluate our approach on a simple case study of autonomous agents using the QuickCheck tool. The results of our experiments show that using domain knowledge and applying test selection filters significantly reduces the required number of potentially expensive test executions needed to discover still-existing faults. We have also identified the need for applying filters earlier during test data generation. This observation shows the need for a more formal connection between data generation and DSL-based filtering, which will be addressed in future work. © 2022, IFIP International Federation for Information Processing.
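The filtering approach described in the three abstracts above can be sketched, in a deliberately simplified form, with the Haskell QuickCheck library (the data types, the locality predicate and the property below are illustrative assumptions, not the DSL or agent models used in these papers, which may rely on a different QuickCheck implementation): random agent positions are generated freely, and a locality-based selection constraint is then imposed with suchThat.

import Test.QuickCheck

type Pos = (Int, Int)

-- Random position on an n-by-n grid (a toy stand-in for an agent's initial location).
gridPos :: Int -> Gen Pos
gridPos n = (,) <$> choose (0, n - 1) <*> choose (0, n - 1)

manhattan :: Pos -> Pos -> Int
manhattan (x1, y1) (x2, y2) = abs (x1 - x2) + abs (y1 - y2)

-- A hypothetical locality-based selection constraint: at least one pair of
-- distinct agents starts within Manhattan distance 2 of each other.
localityOK :: [Pos] -> Bool
localityOK ps = or [ manhattan p q <= 2
                   | (i, p) <- zip [0 :: Int ..] ps
                   , (j, q) <- zip [0 ..] ps
                   , i < j ]

-- Filtering approach: generate freely, then discard inputs violating the constraint.
selectedAgents :: Int -> Int -> Gen [Pos]
selectedAgents numAgents gridSize =
  vectorOf numAgents (gridPos gridSize) `suchThat` localityOK

-- Placeholder property; a real test would drive the system under test with the
-- selected initial positions and check its observable behaviour.
prop_withinGrid :: Property
prop_withinGrid =
  forAll (selectedAgents 3 10) $ \ps ->
    all (\(x, y) -> x >= 0 && x < 10 && y >= 0 && y < 10) ps

main :: IO ()
main = quickCheck prop_withinGrid

The constraint-solving alternative studied above would instead hand a formal encoding of the localityOK predicate to a solver such as Z3 and obtain satisfying assignments directly, which the reported experiments indicate pays off for large grid sizes and strict constraints.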
To test a Software Product Line (SPL), the test artifacts and the techniques must be extended to support variability. In general, when new SPL products are developed, more tests are generated to cover new or modified features. A dominant source of extra effort for such tests is the concretization of newly generated tests. Thus, minimizing the amount of new non-concretized tests required to perform conformance testing on new products reduces the overall test effort. In this paper, we propose a test reuse strategy for conformance testing of SPL products that aims at reducing test effort. We use incremental test generation methods based on finite state machines (FSMs) to maximize test reuse. We combine these methods with a selection algorithm used to identify non-redundant concretized tests. We illustrate our strategy using examples and a case study with an embedded mobile SPL. The results indicate that our strategy can save up to 36% of test effort in comparison to current test reuse strategies for the same fault detection capability. © 2017 IEEE.
Variants of the finite state machine (FSM) model have been extensively used to describe the behaviour of reactive systems. In particular, several model-based testing techniques have been developed to support test case generation and test case execution from FSMs. Most such techniques require several validation properties to hold for the underlying test models. In this paper, we propose an extension of the FSM test model for software product lines (SPLs), named featured finite state machine (FFSM). As the first step towards using FFSMs as test models, we define feature-oriented variants of basic test model validation criteria. We show how the high-level validation properties coincide with the necessary properties on the product FSMs. Moreover, we provide a mechanised tool prototype for checking the feature-oriented properties using satisfiability modulo theories (SMT) solvers. We investigate the applicability of our approach by applying it both to randomly generated FFSMs and to those from a realistic case study (the Body Comfort System). The results of our study show that, for random FFSMs over 16 independent non-mandatory features, our technique provides substantial efficiency gains for the set of proposed validity checks. © Springer International Publishing AG 2017
There exists a rich literature of rule formats guaranteeing different algebraic properties for formalisms with a Structural Operational Semantics. Moreover, there exist a few approaches for automatically deriving axiomatizations characterizing strong bisimilarity of processes. To our knowledge, this literature has never been extended to the setting with data (e.g. to model storage and memory). We show how the rule formats for algebraic properties can be exploited in a generic manner in the setting with data. Moreover, we introduce a new approach for deriving sound and ground-complete axiom schemata for a notion of bisimilarity with data, called stateless bisimilarity, based on intuitive auxiliary function symbols for handling the store component. We do restrict, however, the axiomatization to the setting where the store component is only given in terms of constants. © Gebler, Goriac & Mousavi.