Mathematical models of computation are at the core of many areas of computer science, and there are many different types of models - think, for instance, of the different kinds of automata that you might have encountered in Languages & Automata! Taking a step back and studying how to specify, analyse, and compute with these kinds of models is interesting and important, for instance, to help analyse real software and hardware systems. Research projects in this direction typically involve a combination of mathematics and computer science.

Did you enjoy Languages & Automata, and would you like to study automata theory further? There are various extensions of automata: for instance, weighted automata, which add numbers (typically costs, distances, ...); probabilistic automata (which, as you might guess, add probabilities to transitions); register automata (which can store values and compute over infinite alphabets!); timed automata; and many more. Each of these extensions poses interesting questions, similar to those we ask in Languages & Automata: which different classes are there, which languages do they accept, can we determinise them, can we minimise them, and so on. These questions can become really interesting for some of these extended types of automata! Contact me for a more concrete project in this direction. Such a project can be really theoretical in nature, but can also be a mix: for instance, the efficient implementation and comparison of automata theory algorithms.
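To give a feel for weighted automata, here is a minimal sketch (purely illustrative - the automaton, state names, and costs are made up) of a nondeterministic weighted automaton over the tropical (min, +) semiring, where the weight of a word is the cost of the cheapest accepting run:

```python
INF = float("inf")

# Transitions: (state, letter) -> list of (next_state, cost).
# A toy two-state automaton over the alphabet {'a', 'b'}.
transitions = {
    (0, 'a'): [(0, 1), (1, 3)],
    (0, 'b'): [(0, 2)],
    (1, 'a'): [(1, 0)],
    (1, 'b'): [(1, 1)],
}
initial = {0: 0}  # initial state with its initial weight
final = {1: 0}    # accepting state with its final weight

def weight(word):
    """Cheapest cost of an accepting run on `word` (INF if there is none)."""
    current = dict(initial)  # reachable states with their cheapest cost so far
    for letter in word:
        nxt = {}
        for state, cost in current.items():
            for target, w in transitions.get((state, letter), []):
                nxt[target] = min(nxt.get(target, INF), cost + w)
        current = nxt
    return min((cost + final[s] for s, cost in current.items() if s in final),
               default=INF)
```

For example, weight("ab") compares the runs through states 0 and 1 and returns the cheapest accepting one; questions like determinisation and minimisation become much subtler for this kind of automaton than for plain DFAs.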

Contact: Jurriaan Rot

Automata learning is a family of methods and algorithms that allow us to construct models of systems by making observations. These techniques are particularly useful for analysing "black-box" systems, for which we don't have the code (or where the code is too complex to analyse). Automata learning is a fascinating area! Read more about it here. The problems in automata learning lead to nice theoretical questions - in particular: which types of automata can we learn? At what level of generality can these techniques be phrased? Many of these methods rely on some form of testing - what formal guarantees can we give on how much testing is enough? What happens in the case of quantitative systems, which feature probabilities, for instance? In case you followed the course Category theory and Coalgebra: can we phrase learning at the general level of state-based systems modelled as coalgebras?
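The testing side of automata learning can be sketched in a few lines. The example below is hypothetical (the black box, the hypothesis DFA, and all names are made up for illustration): a learner has guessed a hypothesis automaton and searches, by bounded exhaustive testing, for a word on which the hypothesis and the black-box system disagree - a counterexample that would drive the next learning round.

```python
import itertools

def black_box(word):
    # The system under learning: accepts words over {'a','b'} with an even
    # number of 'a's. In practice this would be real software we cannot inspect.
    return word.count('a') % 2 == 0

# A (deliberately wrong) hypothesis DFA: it accepts only the empty word.
hyp_init = 'q0'
hyp_accept = {'q0'}
hyp_delta = {('q0', 'a'): 'q1', ('q0', 'b'): 'q1',
             ('q1', 'a'): 'q1', ('q1', 'b'): 'q1'}

def hyp_accepts(word):
    state = hyp_init
    for letter in word:
        state = hyp_delta[(state, letter)]
    return state in hyp_accept

def find_counterexample(max_len=4):
    """Return a word on which hypothesis and black box disagree, or None."""
    for n in range(max_len + 1):
        for word in map(''.join, itertools.product('ab', repeat=n)):
            if hyp_accepts(word) != black_box(word):
                return word
    return None
```

Here find_counterexample() quickly finds a disagreement ("b" is accepted by the black box but not by the hypothesis). The theoretical question raised above is precisely what guarantees such bounded or randomised testing gives - when is the absence of a counterexample enough to stop?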

Contact: Jurriaan Rot

Kleene algebra is the algebra of regular languages; it forms a complete axiomatisation, that is, any two expressions that denote the same language are provably equal in Kleene algebra. There has been quite some recent work on Kleene algebra as a foundation for reasoning about programs, with for instance the variant NetKAT, which allows one to analyse network programs, or variants for concurrent programs. There are several potential projects in this direction: for instance, can we use Kleene algebra to reason about programs with pointers? What are the connections with other formalisms such as process algebra? What about the decidability of some of the recent extensions?
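As a concrete taste of the kind of equation Kleene algebra proves, consider (a + b)* = (a* b*)*. The sketch below (illustrative only, using Python's standard re module rather than any Kleene algebra tool) checks the equation empirically by comparing the two expressions on all words up to a bounded length:

```python
import itertools
import re

def bounded_language(pattern, alphabet='ab', max_len=5):
    """All words over `alphabet` up to length `max_len` matching `pattern`."""
    regex = re.compile(pattern)
    return {w for n in range(max_len + 1)
              for w in map(''.join, itertools.product(alphabet, repeat=n))
              if regex.fullmatch(w)}

lhs = bounded_language(r'(a|b)*')
rhs = bounded_language(r'(a*b*)*')
assert lhs == rhs  # the languages agree up to the bound
```

Of course, such a bounded check is only a sanity test; the point of Kleene algebra is that this equation, and every other valid one, follows from a small set of axioms by equational reasoning.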

Contact: Jurriaan Rot

What you've seen in Languages and Automata for string languages can be extended in various ways. Some examples: (1) Tree languages and hedge languages. These are ways to capture XML languages, which have context-free aspects, like the requirement of matching brackets, but are otherwise pretty regular. (2) Parsing Expression Grammars (PEGs) are a formalism that captures more than the regular languages, with a special focus on obtaining simple parsing algorithms. PEGs are used a lot in practice. (3) Conjunctive and Boolean grammars extend context-free grammars with conjunction and complementation operations, and thus go well beyond context-free languages. You could for instance work on implementing part of the theory, or even better: solve one of the open problems from the list by Okhotin, the inventor of these grammars!
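To illustrate the flavour of PEGs, here is a toy sketch (hand-written for this page, not from any parsing library) of the grammar S <- '(' S ')' S / eps, compiled into a recursive-descent recogniser for balanced parentheses. The '/' is PEG ordered choice: the first alternative is tried in full, and only if it fails does the second (here, the empty string) apply.

```python
def parse_S(text, pos=0):
    """Try to match S at `pos`; return the new position (never fails,
    since the second alternative matches the empty string)."""
    # First alternative: '(' S ')' S
    if pos < len(text) and text[pos] == '(':
        inner = parse_S(text, pos + 1)
        if inner < len(text) and text[inner] == ')':
            return parse_S(text, inner + 1)
    # Second alternative: eps - succeed without consuming input.
    return pos

def matches(text):
    """The whole input is balanced iff S consumes all of it."""
    return parse_S(text) == len(text)
```

For example, matches("()()") holds while matches("(()") does not. Implementing a fuller PEG interpreter, or parts of the theory of conjunctive and Boolean grammars, would be a natural starting point for a project here.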

Contact: Herman Geuvers