Test Automation for Grid-Based Multiagent Autonomous Systems
Halmstad University, School of Information Technology. ORCID iD: 0000-0002-8929-1750
2024 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Traditional software testing usually relies on manually defined test cases. This manual process can be time-consuming, tedious, and incomplete, easily missing important corner cases that are hard to identify. Automatic generation of random test cases is one strategy for mitigating the challenges of manual test case design. However, random test cases may be ineffective at detecting faults, leading to increased testing costs, particularly in systems where test execution demands substantial resources and time. Leveraging the domain knowledge of test experts can guide the automatic random generation of test cases towards more effective regions of the input space. In this thesis, we target the quality assurance of multiagent autonomous systems and aim to automate test generation for them by applying the domain knowledge of test experts.

To formalize the domain expert's knowledge, we introduce a small Domain Specific Language (DSL) for the formal specification of particular locality-based constraints for grid-based multiagent systems. We initially employ this DSL to filter randomly generated test inputs, and we evaluate the effectiveness of the resulting test cases through an experiment on a case study of autonomous agents. Statistical analysis of the experimental results demonstrates that using domain knowledge to specify test selection criteria for filtering randomly generated test cases significantly reduces the number of potentially costly test executions needed to identify the persisting faults.
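To make the generate-and-filter idea concrete, here is a minimal sketch in Python: random grid positions are drawn for a set of agents, and a candidate input is kept only if it satisfies a locality constraint (all agents starting within a given Chebyshev distance of one another). The grid size, agent count, distance bound, and function names are illustrative assumptions, not the thesis's actual encoding, and the thesis uses QuickCheck rather than hand-rolled Python.

```python
import random

GRID_SIZE = 20     # hypothetical N x N grid
NUM_AGENTS = 4     # hypothetical number of agents
MAX_DISTANCE = 3   # hypothetical locality bound, as a DSL constraint might specify

def random_input():
    """Draw one random test input: a start cell for each agent."""
    return [(random.randrange(GRID_SIZE), random.randrange(GRID_SIZE))
            for _ in range(NUM_AGENTS)]

def locality_filter(positions, bound=MAX_DISTANCE):
    """Accept inputs where every agent pair starts within `bound`
    cells of each other (Chebyshev distance on the grid)."""
    return all(max(abs(x1 - x2), abs(y1 - y2)) <= bound
               for i, (x1, y1) in enumerate(positions)
               for (x2, y2) in positions[i + 1:])

def generate_selected_input(max_tries=100_000):
    """Rejection sampling: redraw until the filter accepts a candidate."""
    for _ in range(max_tries):
        candidate = random_input()
        if locality_filter(candidate):
            return candidate
    raise RuntimeError("filter too strict for rejection sampling")

print(generate_selected_input())
```

Rejection sampling of this kind is cheap while the constraint accepts a reasonable fraction of random candidates; as the next paragraph notes, its performance degrades as constraints tighten, which motivates the constraint-solving alternative.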

Domain knowledge of experts can also be used to generate test inputs directly with constraint solvers. We conduct a comprehensive study comparing the performance of filtering random test cases against constraint solving for generating selective test cases across various test scenario parameters. Examining these parameters yields criteria for choosing between random data filtering and constraint solving under the varying size and complexity of the test input generation constraint. For our experiments, we use the QuickCheck tool for random test data generation with filtering, and we employ Z3 for constraint solving. The findings, supported by observations and statistical analysis, reveal that test scenario parameters affect the performance of the filtering and constraint-solving approaches differently. Specifically, the results indicate complementary strengths: the random generation and filtering approach excels for systems with many agents and long agent paths but degrades for larger grid sizes and stricter constraints. Conversely, the constraint-solving approach performs robustly for large grid sizes and strict constraints but degrades as the number of agents and the lengths of their paths grow.
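For contrast, a minimal sketch of the constraint-solving route using Z3's Python bindings: the same hypothetical locality constraint is handed to the solver, which constructs a satisfying assignment directly instead of filtering random candidates. The encoding below is an illustrative assumption, not the one used in the thesis.

```python
# pip install z3-solver
from z3 import Ints, Solver, And, If, sat

GRID_SIZE = 20     # same hypothetical parameters as the filtering sketch
NUM_AGENTS = 4
MAX_DISTANCE = 3

def z3_abs(e):
    """Absolute value over integer terms, encoded with If."""
    return If(e >= 0, e, -e)

xs = Ints(" ".join(f"x{i}" for i in range(NUM_AGENTS)))
ys = Ints(" ".join(f"y{i}" for i in range(NUM_AGENTS)))

solver = Solver()
for i in range(NUM_AGENTS):
    # Every agent must start on the grid.
    solver.add(And(0 <= xs[i], xs[i] < GRID_SIZE,
                   0 <= ys[i], ys[i] < GRID_SIZE))
for i in range(NUM_AGENTS):
    for j in range(i + 1, NUM_AGENTS):
        # Locality: bounded Chebyshev distance between every agent pair.
        solver.add(z3_abs(xs[i] - xs[j]) <= MAX_DISTANCE)
        solver.add(z3_abs(ys[i] - ys[j]) <= MAX_DISTANCE)

if solver.check() == sat:
    m = solver.model()
    print([(m[xs[i]].as_long(), m[ys[i]].as_long())
           for i in range(NUM_AGENTS)])
```

A plain solver call like this tends to return the same, often degenerate, assignment on every run; realistic test generation would add diversity, for example by blocking previously returned models or randomizing the search.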

Our initially proposed DSL is limited in its features and can only specify particular locality-based constraints. To specify more elaborate test scenarios, we extend that DSL based on a more intricate model of autonomous agents and their environment. With the extended DSL, we can specify test oracles and test scenarios for a dynamic grid environment and for agents with several attributes. To assess the extended DSL's utility, we design a questionnaire to gather the opinions of several experts and also run an experiment comparing the efficiency of the extended DSL with that of the initial one. The questionnaire results indicate that the extended DSL successfully specified several scenarios that the experts found more useful than those specified by the initial DSL. Moreover, the experimental results demonstrate that testing with the extended DSL can significantly reduce the number of test executions needed to detect system faults, leading to a more efficient testing process.
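The extended DSL's concrete syntax is defined in the thesis; purely as a flavour of what a scenario-plus-oracle specification can look like, here is a hypothetical embedded-DSL sketch in Python. All names, agent attributes, and the `trace` interface are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    speed: int                 # example agent attribute
    start: tuple[int, int]     # start cell on the grid

@dataclass
class Scenario:
    grid: tuple[int, int]
    agents: list[Agent]
    selection: list = field(default_factory=list)  # test selection constraints
    oracles: list = field(default_factory=list)    # pass/fail checks on a run

# Hypothetical scenario: two agents, a locality selection constraint,
# and a no-collision oracle. `trace` stands for a recorded run of the
# system under test; its methods are invented for this sketch.
scenario = Scenario(
    grid=(20, 20),
    agents=[Agent("a1", speed=2, start=(0, 0)),
            Agent("a2", speed=1, start=(2, 1))],
    selection=[lambda trace: trace.max_pairwise_distance(step=0) <= 3],
    oracles=[lambda trace: not trace.any_collision()],
)
```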

Place, publisher, year, edition, pages
Halmstad: Halmstad University Press, 2024, p. 21
Series
Halmstad University Dissertations ; 116
Keywords [en]
Test Input Generation, Domain Specific Languages, Test Selection, Autonomous Agents, Scenario-based Testing
National Category
Software Engineering; Computer Sciences
Identifiers
URN: urn:nbn:se:hh:diva-53268
Libris ID: m5xh7xszk503gljx
ISBN: 978-91-89587-49-6 (print)
ISBN: 978-91-89587-48-9 (electronic)
OAI: oai:DiVA.org:hh-53268
DiVA, id: diva2:1855298
Presentation
2024-05-24, Wigforss salen, building J, Kristian IV:s väg 3, Halmstad, 13:00 (English)
Projects
Safety of Connected Intelligent Vehicles in Smart Cities – SafeSmart
Funder
Knowledge Foundation
Available from: 2024-04-30. Created: 2024-04-30. Last updated: 2024-05-03. Bibliographically approved.
List of papers
1. Locality-Based Test Selection for Autonomous Agents
2022 (English). In: Testing Software and Systems: 33rd IFIP WG 6.1 International Conference on Testing Software Systems, ICTSS 2021, London, UK, November 10-12, 2021, Proceedings / [ed] Clark D., Menendez H., Cavalli A.R., Springer Science+Business Media B.V., 2022, Vol. 13045, p. 73-89. Conference paper, Published paper (Refereed)
Abstract [en]

Automated random testing is useful for finding faulty corner cases that are difficult to find with manually defined, fixed test suites. However, random test inputs can be inefficient in finding faults, particularly in systems where test execution is time- and resource-consuming. Hence, filtering out less effective test cases by applying domain knowledge constraints can improve test effectiveness and efficiency. In this paper, we provide a domain specific language (DSL) for formalising locality-based test selection constraints for autonomous agents, and we use this DSL to filter randomly generated test inputs. We evaluate our approach on a simple case study of autonomous agents using the QuickCheck tool. The results of our experiments show that using domain knowledge and applying test selection filters significantly reduces the required number of potentially expensive test executions to discover still existing faults. We have also identified the need to apply filters earlier, during test data generation. This observation shows the need for a more formal connection between data generation and DSL-based filtering, which will be addressed in future work. © 2022, IFIP International Federation for Information Processing.

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 13045
Keywords
Autonomous agents, Domain specific languages, Model-based testing, Scenario-based testing, Test input generation, Test selection
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-49111 (URN)
10.1007/978-3-031-04673-5_6 (DOI)
000870696000006 ()
2-s2.0-85130232932 (Scopus ID)
9783031046728 (ISBN)
Conference
33rd IFIP WG 6.1 International Conference on Testing Software Systems, ICTSS 2021, London, UK, November 10-12, 2021
Available from: 2023-01-05. Created: 2023-01-05. Last updated: 2024-04-30. Bibliographically approved.
2. Automated and Efficient Test-Generation for Grid-Based Multiagent Systems: Comparing Random Input Filtering versus Constraint Solving
2023 (English). In: ACM Transactions on Software Engineering and Methodology, ISSN 1049-331X, E-ISSN 1557-7392, Vol. 33, no. 1, article id 12. Article in journal (Refereed). Published.
Abstract [en]

Automatic generation of random test inputs is an approach that can alleviate the challenges of manual test case design. However, random test cases may be ineffective in fault detection and increase testing cost, especially in systems where test execution is resource- and time-consuming. To remedy this, the domain knowledge of test engineers can be exploited to select potentially effective test cases. To this end, test selection constraints suggested by domain experts can be utilized either for filtering randomly generated test inputs or for direct generation of inputs using constraint solvers. In this article, we propose a domain specific language (DSL) for formalizing locality-based test selection constraints of autonomous agents and discuss the impact of test selection filters, specified in our DSL, on randomly generated test cases. We study and compare the performance of the filtering and constraint solving approaches in generating selective test cases for different test scenario parameters and discuss the role of these parameters in test generation performance. Through our study, we provide criteria for the suitability of the random data filtering approach versus the constraint solving one under the varying size and complexity of our testing problem. We formulate the corresponding research questions and answer them by designing and conducting experiments using QuickCheck for random test data generation with filtering and Z3 for constraint solving. Our observations and statistical analysis indicate that applying filters can significantly improve the test efficiency of randomly generated test cases. Furthermore, we observe that test scenario parameters affect the performance of the filtering and constraint solving approaches differently. In particular, our results indicate that the two approaches have complementary strengths: random generation and filtering works best for large agent numbers and long paths, while its performance degrades for larger grid sizes and stricter constraints. On the contrary, constraint solving has robust performance for large grid sizes and strict constraints, while its performance degrades with more agents and longer paths. © 2023 Copyright held by the owner/author(s).
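As a purely illustrative micro-benchmark (not the article's experimental setup), the sketch below measures how long rejection sampling takes to produce one accepted input as the locality bound tightens, using the same hypothetical parameters as the earlier sketches; the solver side could be timed analogously with the Z3 encoding above.

```python
import random
import time

GRID_SIZE, NUM_AGENTS = 20, 4  # hypothetical scenario parameters

def one_filtered_input(bound):
    """Rejection-sample until all agent pairs lie within `bound` cells."""
    while True:
        pos = [(random.randrange(GRID_SIZE), random.randrange(GRID_SIZE))
               for _ in range(NUM_AGENTS)]
        if all(max(abs(ax - bx), abs(ay - by)) <= bound
               for i, (ax, ay) in enumerate(pos)
               for (bx, by) in pos[i + 1:]):
            return pos

# Stricter bounds mean fewer acceptances, hence more redraws per input.
for bound in (10, 6, 4, 2):
    trials = 20
    t0 = time.perf_counter()
    for _ in range(trials):
        one_filtered_input(bound)
    avg = (time.perf_counter() - t0) / trials
    print(f"bound={bound}: {avg:.4f} s per accepted input")
```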

Place, publisher, year, edition, pages
New York, NY: Association for Computing Machinery (ACM), 2023
Keywords
autonomous agents, constraint solving, domain specific languages, grid-based systems, multiagent systems, test input filtering, test input generation, test selection
National Category
Computer Sciences
Identifiers
urn:nbn:se:hh:diva-52732 (URN)
10.1145/3624736 (DOI)
2-s2.0-85183761518 (Scopus ID)
Available from: 2024-02-21. Created: 2024-02-21. Last updated: 2024-04-30. Bibliographically approved.
3. Domain Specific Language for Testing Grid-based Multiagent Autonomous Systems
(English). Manuscript (preprint) (Other academic)
Abstract [en]

The automatic generation of random test inputs offers a potential solution to the challenges associated with manual test case design. However, random test cases may prove ineffective for fault detection and can escalate testing costs, particularly in systems where test execution demands significant resources and time. To address this issue, the domain knowledge of test engineers can be leveraged to select potentially effective test cases; one approach is to apply test selection constraints recommended by domain experts to generate targeted test inputs. In our previous paper, we introduced a domain-specific language (DSL) designed to formalize locality-based test selection constraints tailored for autonomous agents. In this work, we devise an extended DSL for specifying more detailed test scenarios over a more elaborate model of autonomous agents and their environment. We design a questionnaire to gather several experts' opinions on the usefulness of the DSL, and we design an experiment to compare the efficiency, in terms of the time needed to reach a failure, of the extended DSL with the initially proposed one. The questionnaire results show that the experts found some features of the extended DSL useful, and the experiment results show that testing with the extended DSL can considerably improve the efficiency of the testing process.

Keywords
Test Input Generation, Domain Specific Languages, Test Selection, Autonomous Agents, Scenario-based Testing
National Category
Computer Systems
Research subject
Smart Cities and Communities
Identifiers
urn:nbn:se:hh:diva-53181 (URN)
Projects
Safety of Connected Intelligent Vehicles in Smart Cities – SafeSmart
Funder
Knowledge Foundation
Note

As manuscript in thesis

Available from: 2024-04-12. Created: 2024-04-12. Last updated: 2024-11-13. Bibliographically approved.

Open Access in DiVA

Sina_Entekhabi_Lic (689 kB), 148 downloads
File information
File name: FULLTEXT01.pdf
File size: 689 kB
Checksum (SHA-512): 5a87e2a5aa3d309704b95b4e054b75348e24a1ed604a19a899f4e9326e75061b24dbe0281246bc7500d92f09005634fb59c5ca7e2a845e488595c5b8d90fa2c4
Type: fulltext
Mimetype: application/pdf

Authority records

Entekhabi, Sina
