
Theme: Model Learning and Model-Based Testing

Within our group we work on the design of algorithms that allow computers to learn complex state diagrams by providing inputs and observing outputs. This is an exciting area in which we collaborate with the Digital Security group to learn models of security protocols, and with several companies (e.g. ASML, Philips Healthcare, Océ, Thermo Fisher and Axini). We refer to the review article by Frits Vaandrager in Communications of the ACM for a general introduction. In model-based testing, a model is the starting point for testing. This model expresses precisely and completely what a system under test should do, and should not do, and is consequently a good basis for systematically generating test cases. Model learning and model-based testing are complementary techniques: in particular, model-based testing can be used to find counterexamples for candidate models produced by model learning techniques. There are several opportunities for theoretical, tool/programming-oriented, and application-oriented projects related to model learning and model-based testing:
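To make the complementarity concrete: a model-based tester can act as an equivalence oracle that searches for an input sequence on which a learned hypothesis and the system under test disagree. The sketch below (in Python) compares two hypothetical Mealy machines, given as transition tables, by exhaustively testing short input words; the machines, state names, and alphabet are all illustrative, and real MBT tools use far smarter test selection:

```python
from itertools import product

# Hypothetical Mealy machines: (state, input) -> (output, next state).
SUT = {("q0", "a"): ("1", "q1"), ("q0", "b"): ("0", "q0"),
       ("q1", "a"): ("0", "q0"), ("q1", "b"): ("0", "q1")}
HYP = {("q0", "a"): ("1", "q1"), ("q0", "b"): ("0", "q0"),
       ("q1", "a"): ("0", "q0"), ("q1", "b"): ("1", "q1")}  # wrong output for (q1, b)

def run(machine, word, start="q0"):
    """Feed an input word to a machine and collect the produced outputs."""
    state, outputs = start, []
    for symbol in word:
        out, state = machine[(state, symbol)]
        outputs.append(out)
    return outputs

def find_counterexample(sut, hyp, alphabet=("a", "b"), max_len=4):
    """Return a short input word on which the two machines disagree, if any."""
    for n in range(1, max_len + 1):
        for word in product(alphabet, repeat=n):
            if run(sut, word) != run(hyp, word):
                return word
    return None
```

Here `find_counterexample(SUT, HYP)` returns `("a", "b")`: the hypothesis predicts output `1` for input `b` in state `q1`, while the system produces `0`.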

a) Theory

Did you like the Languages and Automata course and are you interested in theory? Well, we can define dozens of projects on automata learning that build further on the classical theory of languages and automata. How can we make automata learning algorithms more efficient (for instance by integrating learning and testing of hypotheses)? How can we extend Angluin's original L* algorithm for learning finite automata to richer classes of automata that can model data, time, nondeterminism, and probabilistic choice? (contact Frits Vaandrager or Jurriaan Rot).
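As a starting point for such projects, the core of Angluin's L* fits in a short program. The sketch below is a minimal, unoptimized rendering of the algorithm: it maintains an observation table over access strings S and distinguishing suffixes E, restores closedness and consistency, builds a hypothesis DFA from the distinct rows, and processes a counterexample by adding all its prefixes to S. The brute-force equivalence oracle is a stand-in for a proper conformance tester:

```python
from itertools import product

def lstar(alphabet, member, equiv):
    """Minimal sketch of Angluin's L* (illustrative, not optimized)."""
    S, E = {""}, {""}   # access strings (rows) and distinguishing suffixes (columns)
    T = {}              # observation table: word -> membership query result

    def fill():
        for s in set(S) | {s + a for s in S for a in alphabet}:
            for e in E:
                if s + e not in T:
                    T[s + e] = member(s + e)

    def row(s):
        return tuple(T[s + e] for e in sorted(E))

    while True:
        fill()
        # Closedness: every one-letter extension must match some row of S.
        unclosed = next((s + a for s in S for a in alphabet
                         if row(s + a) not in {row(t) for t in S}), None)
        if unclosed is not None:
            S.add(unclosed)
            continue
        # Consistency: equal rows must remain equal after appending a letter.
        incons = next((a + e for s1 in sorted(S) for s2 in sorted(S)
                       if s1 < s2 and row(s1) == row(s2)
                       for a in alphabet for e in sorted(E)
                       if T[s1 + a + e] != T[s2 + a + e]), None)
        if incons is not None:
            E.add(incons)
            continue
        # Hypothesis: states are the distinct rows of the table.
        reps = {}
        for s in sorted(S, key=len):
            reps.setdefault(row(s), s)

        def accepts(w, reps=dict(reps)):
            state = reps[row("")]
            for a in w:
                state = reps[row(state + a)]
            return T[state]

        cex = equiv(accepts)
        if cex is None:
            return accepts, len(reps)
        for i in range(len(cex) + 1):   # add all prefixes of the counterexample
            S.add(cex[:i])

def brute_force_equiv(alphabet, member, max_len=6):
    """Stand-in equivalence oracle: exhaustively compare short words."""
    def equiv(accepts):
        for n in range(max_len + 1):
            for w in map("".join, product(alphabet, repeat=n)):
                if accepts(w) != member(w):
                    return w
        return None
    return equiv
```

For example, learning the language "even number of a's" over {a, b} yields a two-state hypothesis; extending the learner to data, time, or probabilistic choice means replacing the table, the queries, and the hypothesis construction, which is exactly where these projects start.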

b) Benchmark Repository

Model learning and model-based testing are popular research topics, and during recent years numerous researchers have proposed new learning and testing algorithms for slightly different modelling frameworks, which are then evaluated on a few case studies. However, in order to make real scientific progress, it is important to have a big, shared collection of benchmarks that can be used for the evaluation of all these algorithms, supported by tools for translating between the different modelling frameworks. In 2019, we set up a repository with a publicly available set of benchmarks of state machines that model real protocols and embedded systems, to allow researchers to compare the performance of learning and testing algorithms. Our benchmarks have already been used by several groups, for example by researchers from Université Grenoble Alpes, the University of São Paulo, the University of Sheffield, the University of Leicester, Universidad Complutense de Madrid, Sabanci University in Turkey, Graz University of Technology, IST Austria, the Indian Institute of Science Bangalore, and Kansas State University. There is still much work to do in extending this repository with new benchmarks and software, and in properly comparing the performance of existing learning and testing algorithms. This is an area where, as a student, you may have real scientific impact and do work that is much appreciated by scientists around the globe. (contact Frits Vaandrager).

c) Learning & Testing Tools

If you like programming and enjoyed Algorithms & Data Structures (esp. graph algorithms), there are many opportunities to develop and implement new automata learning algorithms. (contact Frits Vaandrager).

For model-based testing, we are working on the TorXakis tool. Here, too, there are several possibilities for tool-oriented projects. Examples are connecting TorXakis to other tools, such as constraint solvers for test-data generation, model checkers for validation of test models, and industrial and popular testing frameworks (SoapUI, Robot Framework, Behave, ...) for combining model-based testing with existing testing practices. Other possibilities include improving the performance of TorXakis, extending the TorXakis modelling language, visualizing TorXakis models and test results, and implementing new test generation algorithms. (contact Jan Tretmans).

d) Combining Black Box and White Box Learning

Thus far, in Nijmegen, we have concentrated mostly on black-box automata learning techniques. Applying such techniques makes sense if we either have no access to the code of the system under learning, or if this code is too complicated to analyze or understand. Recently, interesting progress has been made in the area of white-box learning, where the goal is to derive abstract models of external behavior directly from the code. We are exploring how symbolic execution (KLEE) and taint analysis (read our paper!) can be used to speed up model learning and scale it to industrial code. (contact Frits Vaandrager or Bharat Garhewal).

e) Learning Models of Network Protocols

Did you like the Computer Networks course, and do you find it interesting to understand the details of network protocols? There are many possibilities for projects in which the goal is to learn models of protocol components. We are particularly interested in learning models of security-related protocols (such as TCP, SSH, TLS, and EMV). Here model learning may help us to find specific sequences of input events that expose security vulnerabilities. (contact Frits Vaandrager or Erik Poll).

f) Generalizing Algorithms for FSM Conformance Testing to an Ioco Setting

Theorists have developed a powerful set of algorithms for conformance testing of deterministic Finite State Machine (FSM) models; see e.g. the survey by Lee & Yannakakis. A problem is that, in practice, systems are often nondeterministic and do not exhibit the strict alternation of inputs and outputs of the FSM model: sometimes an input is not followed by any output at all, and sometimes an input is followed by a series of outputs. Jan Tretmans has developed a popular theory of testing (ioco) for this general class of systems, but there are still very few algorithms for efficient conformance testing in this general setting. Recently, we have published papers on combining Tretmans' theory with the theory of Finite State Machines, see here and here, but this theory requires some next steps to make it applicable and to implement it in the model-based testing tool TorXakis. The goal of this project would be to extend, adapt, and apply this theory. (contact Frits Vaandrager or Jan Tretmans).
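To illustrate the classical FSM side of this line of work: one standard construction, Chow's W-method, derives a test suite from a transition cover concatenated with a characterizing set W of separating sequences. The sketch below applies it to a hypothetical two-state deterministic Mealy machine; the machine, state names, and the chosen W are all illustrative:

```python
# Illustrative deterministic Mealy machine: (state, input) -> (output, next state).
SPEC = {("s0", "a"): ("x", "s1"), ("s0", "b"): ("y", "s0"),
        ("s1", "a"): ("y", "s0"), ("s1", "b"): ("x", "s1")}
STATES, INPUTS = ("s0", "s1"), ("a", "b")

def access_sequences(spec, states, inputs, start="s0"):
    """BFS for a shortest input word reaching each state (state cover)."""
    seqs, frontier = {start: ()}, [start]
    while frontier:
        nxt = []
        for s in frontier:
            for i in inputs:
                _, t = spec[(s, i)]
                if t not in seqs:
                    seqs[t] = seqs[s] + (i,)
                    nxt.append(t)
        frontier = nxt
    return seqs

def w_method_suite(spec, states, inputs, W=(("a",),)):
    """Transition cover (reach each state, apply each input) concatenated
    with each separating sequence from W; here W = {a} distinguishes
    s0 (output x on a) from s1 (output y on a)."""
    cover = access_sequences(spec, states, inputs)
    suite = set()
    for s in states:
        for i in inputs:
            for w in W:
                suite.add(cover[s] + (i,) + w)
    return sorted(suite)
```

For this spec the suite contains four input words, e.g. `("a", "b", "a")`, which reaches state s1, exercises its b-transition, and then applies the separating sequence. Generalizing such constructions to nondeterministic systems with unforced outputs is precisely what the ioco setting requires.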

g) Model-based Concolic Testing

Concolic testing is a combination of symbolic and concrete execution used for white-box testing, which has successfully been applied by, among others, Microsoft. Model-based testing (MBT) is a promising approach for black-box testing. The goal of this project is to combine these two approaches by applying the concepts of concolic testing to (ioco-based) MBT, in order to improve test selection for MBT. (contact Jan Tretmans).

h) Minimal Counterexamples in Model-based Testing

We defined Domain Specific Languages to express logical properties of pure functions and the allowed behaviour of state-based systems. Based on these specifications, the automatic test system generates test cases, executes the associated tests, and gives a verdict about the system under test. Current research focuses on finding smaller test cases that show non-conformance, and on challenging applications of model-based testing. (contact Pieter Koopman).
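A common ingredient in finding smaller failing test cases is shrinking: greedily dropping events from a failing trace as long as the shrunk trace still fails. The sketch below (in Python; the trace, event names, and failure predicate are hypothetical) shows the idea:

```python
def shrink(trace, fails):
    """Greedily drop events from a failing trace while it keeps failing."""
    i = 0
    while i < len(trace):
        candidate = trace[:i] + trace[i + 1:]
        if fails(candidate):
            trace = candidate   # the shorter trace still fails; keep it
        else:
            i += 1              # this event is needed; move on
    return trace

def fails(trace):
    """Toy failure predicate: the system misbehaves whenever an 'open'
    event is (eventually) followed by a 'write' event."""
    seen_open = False
    for event in trace:
        seen_open = seen_open or event == "open"
        if event == "write" and seen_open:
            return True
    return False
```

Shrinking the failing trace `["init", "open", "read", "close", "write", "stop"]` yields the minimal counterexample `["open", "write"]`: no single event can be removed without the failure disappearing.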

i) Using Usage Profiles in Model-Based Testing

Current software systems are huge. Scaling model-based testing to such large systems is a challenge, in particular the selection of appropriate test cases. One approach is to use usage (operational) profiles as the basis for testing, i.e., using users' scenarios as the basis for test selection. The goal of this project is to combine usage-profile-based testing with ioco model-based testing. (contact Jan Tretmans).
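In its simplest form, profile-based test selection draws test steps according to the observed frequency of user actions, so that testing effort concentrates on the behaviour users actually exercise. A minimal sketch (the profile and action names are made up for illustration):

```python
import random

# Hypothetical usage profile: relative frequency of each user action.
PROFILE = {"browse": 0.60, "search": 0.25, "buy": 0.10, "admin": 0.05}

def generate_test_case(profile, length, seed=None):
    """Draw a sequence of user actions distributed according to the profile."""
    rng = random.Random(seed)
    actions, weights = zip(*profile.items())
    return rng.choices(actions, weights=weights, k=length)
```

In a full approach, the drawn actions would be mapped to inputs of an ioco-based tester rather than replayed directly, which is where the combination studied in this project comes in.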

j) Internships at Thermo Fisher

Thermo Fisher develops high-end electron microscopes. To build the control software for these microscopes, Thermo Fisher is continuously looking for ways to optimize its software engineering process. Two internships are currently available. The first concerns the application of model learning to learn a model of a legacy software component of an electron microscope (contact Frits Vaandrager).

k) Internships at Canon Production Printing

There are several opportunities for internships at Canon Production Printing (formerly Océ), for instance on applying model-based testing. These opportunities may involve technical and tool aspects (e.g., connecting TorXakis to the Océ test environment), process aspects (e.g., how to embed model-based testing in the Océ testing process), as well as human aspects (e.g., how to support test engineers in using model-based testing). (contact Jan Tretmans).

l) Internships at Axini

Axini is a spin-off of the University of Twente, based on research by Jan Tretmans and his students on model-based testing. The company specializes in test automation and aims at the application of formal methods in practice. There are several MSc thesis projects available at Axini.

m) Internships at other companies

In the context of our research on model-based testing and automata learning, we are collaborating with several other companies that are interested in the potential of these new techniques, for instance ASML, Philips Healthcare, TomTom, Vanderlande, InTraffic, SpronQ, or the users from the TicToc project on testing (contact Jozef Hooman, Jan Tretmans or Frits Vaandrager).