
Vision and Mission

Software has shaped every aspect of our modern lives, including our work, our communication, our social life, our transportation, our financial systems, our healthcare, and even our food production. Ensuring that software is correct and secure, and can be adapted to our needs, is both a major scientific challenge and an enterprise with enormous societal relevance.

The Software Science group of iCIS studies models for the design and analysis of software, where a model is defined as any description of a system that is not the thing-in-itself (das Ding an sich in Kantian philosophy). We study models both from an engineering perspective ("Can we build a computer system whose behavior matches that of a given model?") and from a science perspective ("Can we build a model whose behavior matches that of a given computer system?"). We believe that both perspectives are crucial, and that by properly combining them we can build better software.

Engineering models of software can be defined by domain experts using domain-specific languages. In our research, we work on the design of domain-specific languages and on the challenge of generating applications fully automatically from a high-level model, with the aim of reducing software development time and increasing system reliability and performance. Domain-specific languages must offer powerful abstraction mechanisms that allow users to abstract from low-level details. For this reason we believe it is better to embed these languages in an advanced general-purpose programming language. This also makes it easier to extend the modeling language with new constructs. For over two decades, Nijmegen has maintained a leading role in the development of efficient functional programming languages.
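
To make the embedding idea concrete, here is a minimal sketch (not one of our actual languages): a tiny domain-specific language for arithmetic over named inputs, embedded in a functional host language. Haskell is used purely for illustration, and all names in the sketch are hypothetical.

  -- A minimal sketch of an embedded domain-specific language (DSL):
  -- a tiny language for arithmetic over named inputs, embedded in Haskell.
  -- All names here are illustrative, not part of any existing system.
  module EmbeddedDSL where

  import qualified Data.Map as Map

  -- The DSL is just a Haskell data type ...
  data Expr
    = Lit Int            -- constant
    | Var String         -- named input supplied by the domain expert
    | Add Expr Expr      -- addition
    | Mul Expr Expr      -- multiplication
    deriving Show

  -- ... and its meaning is an ordinary Haskell function. Because the DSL
  -- lives inside the host language, we reuse its abstraction mechanisms
  -- (functions, modules, types) for free.
  eval :: Map.Map String Int -> Expr -> Maybe Int
  eval _   (Lit n)   = Just n
  eval env (Var x)   = Map.lookup x env
  eval env (Add a b) = (+) <$> eval env a <*> eval env b
  eval env (Mul a b) = (*) <$> eval env a <*> eval env b

  -- Extending the modeling language with a new construct amounts to adding
  -- a constructor and a clause, without writing a separate parser or compiler.
  example :: Maybe Int
  example = eval (Map.fromList [("price", 12), ("qty", 3)])
                 (Mul (Var "price") (Var "qty"))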

Systematic testing is important for software quality, but it is also error-prone, expensive, and time-consuming. Our group is known for pioneering work on model-based testing, a technology that can improve the effectiveness and efficiency of the testing process. In model-based testing, the goal is to automatically test whether the behavior of a given computer system conforms to the behavior of a given (engineering or scientific) model.
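
The core loop of model-based testing can be sketched as follows. This is a simplified illustration under the assumption that the model is a Mealy-style state machine and that the system under test is a black box mapping input sequences to output sequences; all names are hypothetical.

  -- A minimal sketch of model-based testing: inputs are derived from a model,
  -- executed on the system under test (SUT), and the observed outputs are
  -- compared with those predicted by the model.
  module MBTSketch where

  -- A model as a Mealy machine: from a state and an input we obtain
  -- an output and a successor state.
  data Model s i o = Model
    { initial :: s
    , step    :: s -> i -> (o, s)
    }

  -- Run one input sequence through the model to obtain the expected outputs.
  expectedOutputs :: Model s i o -> [i] -> [o]
  expectedOutputs m = go (initial m)
    where
      go _ []       = []
      go s (x : xs) = let (o, s') = step m s x in o : go s' xs

  -- The system conforms on a test case if its outputs match the model's
  -- outputs for the same input sequence.
  conformsOn :: Eq o => Model s i o -> ([i] -> [o]) -> [i] -> Bool
  conformsOn m sut inputs = sut inputs == expectedOutputs m inputs

  -- A test campaign is then a search for an input sequence on which the
  -- implementation and the model disagree.
  firstFailure :: Eq o => Model s i o -> ([i] -> [o]) -> [[i]] -> Maybe [i]
  firstFailure m sut = foldr pick Nothing
    where
      pick t rest = if conformsOn m sut t then rest else Just t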

An exciting new research area, in which our group is internationally leading, is the development of machine learning algorithms that automatically construct (scientific, behavioral) models of computer systems. These model learning algorithms can be used, for instance, to find bugs in implementations. We can also use a learned (scientific) model of a legacy software component to check the conformance of a refactored implementation.
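
The following sketch illustrates this black-box setting in a highly simplified form: real model learning algorithms construct automata and use far more sophisticated query strategies, so this is only meant to convey the idea, and all names are hypothetical.

  -- A highly simplified sketch of the model-learning idea: the system is
  -- treated as a black box that answers queries, and the "learned model"
  -- here is just a table of observed responses.
  module LearningSketch where

  -- For each queried input word we record the output word produced.
  type Observations i o = [([i], [o])]

  -- "Learn" by querying the black box on a set of input words.
  observe :: ([i] -> [o]) -> [[i]] -> Observations i o
  observe blackBox queries = [ (q, blackBox q) | q <- queries ]

  -- Use the observations of a legacy component to check that a refactored
  -- implementation behaves the same on all recorded queries.
  conformsToObservations :: Eq o => Observations i o -> ([i] -> [o]) -> Bool
  conformsToObservations obs refactored =
    all (\(q, expected) -> refactored q == expected) obs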

With the rapidly growing application of artificial intelligence and machine learning, the need for verified guarantees against potentially fatal flaws is self-evident. An important question, then, is how confidence in system behaviors obtained from machine learning can be transferred to formal verification. Conversely, when we apply formal verification to challenging applications, e.g., from industry, scalability is often still an issue. Leveraging the capabilities of machine learning to assess large data sets will make it easier to verify realistic systems.

Our group has a strong reputation in studying the mathematical foundations of software modelling frameworks and of the algorithms and tools used to analyze software models (e.g., type theory, proof assistants, automata theory, concurrency theory, coalgebras, and term rewriting). We want to bridge the gap between theory and practice through collaboration with stakeholders from industry and other application areas, and we want to understand how the developed methods can contribute to solving real-world problems. In particular, we apply our methods and techniques to cyber-physical systems in joint projects with industrial partners.