There is a lot of work on this by, for example, van Leeuwen and Wiedermann; Baeten, Luttik and van Tilburg; Wegner; Goldin; Smolka; and many others. Many questions about the universality of these models and their relative strength remain open. A bachelor thesis on this topic has been written by Rick Erkens, but there is still a lot to be studied!

Contact: Geuvers; stream: General Theory

Give a semantics to a programming language you like, or to a programming paradigm, as you've studied in "Semantiek en Correctheid" or "Berekeningsmodellen". An example is the bachelor thesis of Tom Nikken, who studied iTasks, supervised by Peter Achten and Herman Geuvers.

Contact: Geuvers; stream: General Theory

What you've seen in "Talen en Automaten" for string languages can be extended in various ways:

- To tree languages and hedge languages. These are ways to capture XML languages, which have context-free aspects, like the requirement of matching brackets, but are otherwise pretty regular. An example question: how can we (automatically) learn an XML grammar from a collection of sample XML documents? A bachelor thesis on this topic has been written by Tom Evers, but there is still a lot to be looked into!
- Parsing Expression Grammars (PEGs) form a formalism that captures more than the regular languages, with a special focus on obtaining simple parsing algorithms. PEGs are widely used in practice.
- Boolean grammars extend context-free grammars with conjunction and complementation operations, and thus go well beyond context-free languages. You could for instance work on implementing part of the theory, or even better: solve one of the open problems from Okhotin's list (the inventor of Boolean grammars)!
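To give a feel for the PEG item above: the simplicity of PEG parsing comes from recursive descent with *ordered* choice, which commits to the first alternative that succeeds. A minimal sketch (all helper names here are ours, not from any standard library):

```python
# Minimal PEG-style combinators: a parser maps (input, position) to the new
# position, or None on failure.

def lit(c):
    def p(s, i):
        return i + 1 if i < len(s) and s[i] == c else None
    return p

def seq(*ps):
    def p(s, i):
        for q in ps:
            i = q(s, i)
            if i is None:
                return None
        return i
    return p

def choice(*ps):
    def p(s, i):
        for q in ps:
            j = q(s, i)
            if j is not None:
                return j   # ordered choice: first success wins, no backtracking into it
        return None
    return p

def empty(s, i):
    return i

# S <- '(' S ')' S / empty   (balanced parentheses)
def S(s, i):
    return choice(seq(lit('('), S, lit(')'), S), empty)(s, i)

def matches(s):
    return S(s, 0) == len(s)
```

Note that, unlike a context-free grammar, a PEG never reconsiders a choice that succeeded; this is what keeps the parsing algorithm simple.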

Contact: Geuvers; stream: General Theory

Together with Tonny Hurkens I have been working on finding a general procedure to derive deduction rules from truth tables, both for classical and for constructive logic. We have written a paper on this, but lots of questions remain and lots of generalizations are still possible, e.g. how to describe cut-elimination (proof reduction) in general, how to define a general proof term interpretation, how to extend this to predicate logic, ...

Contact: Geuvers; stream: Logic and Type Theory

The Curry-Howard formulas-as-types isomorphism is between proofs in constructive logic and typed lambda terms, which can be viewed as functional programs. Extending this isomorphism to classical logic, we can interpret classical proofs as "functional programs with jumps", which can be modeled as escape or call-cc constructs. A lot of work has been done on the extension of formulas-as-types to classical logic, giving rise to new presentations of the logical systems, new typed lambda calculi, new reduction rules and new reduction strategies. In all these areas (or in their combination) there are still many open questions left.
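To make the "programs with jumps" reading concrete, here is a sketch that simulates a call-cc/escape construct in Python using exceptions (the names `call_cc` and `first_even` are ours, purely for illustration):

```python
# Simulating call/cc with exceptions: call_cc hands its argument an escape
# continuation k; invoking k(v) "jumps" out of the computation with value v.

class _Escape(Exception):
    def __init__(self, value):
        self.value = value

def call_cc(f):
    """Call f with an escape continuation; if f never escapes, return f's result."""
    class E(_Escape):
        pass
    def k(v):
        raise E(v)
    try:
        return f(k)
    except E as e:
        return e.value

# Example use: return the first even number, jumping out as soon as it is found.
def first_even(xs):
    def body(escape):
        for x in xs:
            if x % 2 == 0:
                escape(x)      # the "jump": a use of the continuation
        return None
    return call_cc(body)
```

Under formulas-as-types for classical logic, it is exactly such escape constructs that inhabit classically-but-not-constructively provable types such as Peirce's law ((A -> B) -> A) -> A.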

Contact: Geuvers; stream: Type Theory

The latest advancement in type theory is HoTT (Homotopy Type Theory), which aims at giving a "Univalent Foundations of Mathematics". It also aims at giving a better treatment of various fundamental concepts from computer science, like "data types with laws" and "equivalence of data". An important concept for working with these is that of so-called "higher inductive types". A preliminary treatment of what these are is given here, but there is still a lot to be studied!

Contact: Geuvers; stream: Type Theory

In inconsistent type theories like lambdaU, lambdaU- or lambda*, we can find a closed term of type bot. We can also find a closed term of the type of the fixed point combinator: forall A:*, (A->A)->A. It is known that we can make a looping combinator of this type, but it is an open question whether there is a fixed point combinator of this type. (It is generally believed there is none.) Also, the proof of bot can be given in various ways, usually by interpreting some form of the Burali-Forti paradox, saying that we cannot well-order the collection of all well-orderings. But can we also interpret the much simpler Russell paradox? (Stating that the collection of all sets that are not members of themselves is not a set.)

Contact: Geuvers; stream: Type Theory

We propose the following method for defining the partial recursive functions (over natural numbers) as functions from streams to streams.

We consider streams over {0,S,c}, where c denotes a computation step (a tau step in process algebra terminology). The number n is represented by c...c S c...c S ... S c...c 0 ..., i.e., n times an S followed by a 0, separated by c's and followed by anything. A function takes streams and produces a stream; given a (code for a) partial recursive function f of arity m, we define its interpretation as a function (in Coq) taking m streams and producing a stream. All this should be defined in Coq.
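The formalization itself should be done in Coq, but the encoding is easy to prototype. A sketch using Python generators as streams (all function names are ours):

```python
# Numbers as streams over {0, S, c}: n becomes n S's followed by a 0, with
# computation steps 'c' interleaved, and an arbitrary tail afterwards.
from itertools import repeat

def encode(n, pad=1):
    """Stream for n: each S (and the final 0) preceded by `pad` c-steps,
    then an arbitrary tail (here: c's forever)."""
    for _ in range(n):
        yield from ['c'] * pad + ['S']
    yield from ['c'] * pad + ['0']
    yield from repeat('c')          # "followed by anything"

def decode(stream):
    """Read the number back, skipping computation steps. If the stream never
    produces a '0', this loops forever -- exactly how partiality shows up."""
    n = 0
    for tok in stream:
        if tok == 'S':
            n += 1
        elif tok == '0':
            return n

def succ(stream):
    """Successor as a stream transformer: emit one extra S up front."""
    yield 'S'
    yield from stream
```

The point of the c-steps is that a stream transformer can keep producing output (c's) while "computing", so a diverging computation is a stream of c's that never reaches a 0.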

Now we want to prove formally (in Coq) that the set of total functions is not enumerable: there is no function g that takes a (code of a) function f and returns true if f is total and false if f is not.

Contact: Geuvers; stream: Formalization

Formalize the 100 prisoners and one light bulb problem; look here for details. There are two goals that you want a solution to have:

- When it terminates you want to be certain all prisoners have been in the room.
- Have a low expected running time.

The algorithms can get pretty complex, so that computing the expected running time becomes extremely difficult. So people write simulations to estimate the expected running time (2). However, the simulations don't provide confidence in (1): any broken algorithm that takes several (simulated) years to run will work in every tested case with near certainty, but the goal is to have absolute certainty, so a proof is required for (1). This is a nice example of a problem where you want to write an algorithm and prove its correctness (1), then extract the algorithm and execute it to estimate (2), and compare the solutions to the puzzle.
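As a baseline, here is a simulation sketch of the well-known single-counter protocol (one designated prisoner counts; everyone else switches the light on exactly once). The claim that the counter's announcement is correct is what goal (1) asks you to *prove*; a run like this only estimates goal (2):

```python
# Single-counter protocol for the 100 prisoners and one light bulb puzzle.
import random

def simulate(n, rng):
    light = False
    signalled = [False] * n        # prisoner i has already used the light
    count = 0                      # signals seen by the counter (prisoner 0)
    days = 0
    while True:
        days += 1
        p = rng.randrange(n)       # a uniformly random prisoner enters
        if p == 0:                 # the counter
            if light:
                light = False
                count += 1
                if count == n - 1:
                    return days    # counter announces everyone has been in
        elif not light and not signalled[p]:
            light = True           # each other prisoner signals exactly once
            signalled[p] = True

def estimate_expected_days(n=100, runs=20, seed=0):
    rng = random.Random(seed)
    return sum(simulate(n, rng) for _ in range(runs)) / runs
```

For n = 100 this simple protocol already takes on the order of decades of simulated days, which is why the literature studies more complex (and much harder to analyze) protocols.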

Contact: Geuvers; stream: Formalization

We had the FTA project, in which we formalized the proof of the Fundamental Theorem of Algebra. However, we never got a reasonable algorithm out of that. It would be very nice to go the other way: start with an algorithm for finding zeroes of complex polynomials, and then formalize a proof of its correctness in the CoRN framework (possibly using that we already know that FTA holds, if necessary). The algorithm I would look at first for this would be the Jenkins–Traub algorithm. Wikipedia states that that is "practically a standard in black-box polynomial root-finders".
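Jenkins–Traub itself is intricate; to get a feel for the "algorithm first, proof second" direction, here is a sketch of the much simpler Durand–Kerner (Weierstrass) iteration, which also finds all complex roots simultaneously (this is *not* Jenkins–Traub, just an easier stand-in):

```python
# Durand-Kerner iteration for a monic polynomial x^n + a_{n-1} x^{n-1} + ... + a_0.
from functools import reduce
from operator import mul

def poly(coeffs, x):
    """Evaluate the monic polynomial with coeffs [a_{n-1}, ..., a_0] at x."""
    r = 1
    for a in coeffs:
        r = r * x + a
    return r

def durand_kerner(coeffs, steps=200):
    n = len(coeffs)
    # standard trick: start from powers of a non-real point
    z = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(steps):
        z = [zi - poly(coeffs, zi) /
                  reduce(mul, (zi - zj for j, zj in enumerate(z) if j != i), 1)
             for i, zi in enumerate(z)]
    return z
```

A correctness proof for even this simple iteration (convergence, simple-root assumptions, behaviour near clustered roots) is already nontrivial, which illustrates why the project is interesting.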

Contact: Wiedijk; stream: General Theory

The correctness of the Kenzo computer algebra software has been a hot topic for a while now. Generally people want either to (a) formalize the algebraic topology constructively, hoping that a reasonable executable program would flow from that, or (b) formalize the correctness of fragments of the Lisp code of the Kenzo program. I think both approaches are misguided: you really want to prove an _explicit_ program correct, but if you do this in type theory, that program really should be written in a typed functional programming language and not in Lisp. So, the project would be to take a core part of the Kenzo system and rewrite that in a typed language like OCaml or Haskell. And then the next stage (if there was time for it) would be to "port" that to Coq and prove properties of it.

Contact: Wiedijk; stream: Proof Assistant Technology

In our project to write a formal version of the C standard there are parts that can be easily isolated. Examples are the C preprocessor, or the floating point arithmetic. It would be great if a student would write a formalization of such a part of the standard as a Bachelor's or Master's thesis project. Another possible project related to the Formalin project would be to define a framework for comparing different versions of C. Yet another possibility would be to do a "mini-Formalin", by writing a semantics for the very small specific subset of C called MISRA C (a restricted C standard for the automobile industry.)

Contact: Wiedijk; stream: Formalization

There are various systems for proving imperative programs correct, mostly based on Hoare logic (or, nowadays, on its extension, separation logic). A nice project would be either to "compare" these systems by formalizing the correctness of a small example in a couple of them, or to "stress test" one of these systems by formalizing the correctness of a not-so-small example in it.
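For a flavour of what such a "small example" looks like: a Hoare-style specification is a precondition, a loop invariant and a postcondition. In the sketch below (our own toy example) the assertions are merely *checked at run time*; a Hoare logic tool discharges them statically, for all inputs at once:

```python
# Specification of a summation loop, Hoare-triple style:
# {n >= 0}  s := 0; i := 0; while i < n do s, i := s + i, i + 1  {s = n(n-1)/2}

def sum_below(n):
    assert n >= 0                      # precondition
    s, i = 0, 0
    while i < n:
        assert s == i * (i - 1) // 2   # loop invariant, checked each iteration
        s, i = s + i, i + 1
    assert s == n * (n - 1) // 2       # postcondition
    return s
```

The interesting proof obligations are exactly the ones a verification system generates: the invariant holds initially, is preserved by the loop body, and together with the negated guard implies the postcondition.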

Contact: Wiedijk; stream: Formalization

There is still (to me) interesting unfinished work by Herman Geuvers and me: we had a paper about "explicit conversions" in which we defined two type theories, which we called lambda-H and lambda-F. Proving the main theorem for lambda-H was much easier than for lambda-F, and therefore we (= Herman) only did that. However, I think that lambda-F is the nicer system, so the project would be to prove the analogous theorems for lambda-F. Alternatively, the project could be to look into other variations of these systems, and to prove properties of _those_ type theories.

Contact: Wiedijk; stream: Type Theory

Something that _I_ would like very much: de Bruijn, who was a genius, once defined a system called Lambda-Delta (or Delta-Lambda, depending on the paper that you look at :-)). However, this was before type theory got its present form, and therefore this system does not resemble the current systems at all. Now Philippe de Groote once had a paper in which he gave a "more modern" version of this system, but that was rather skimpy. It would be a nice project to (a) explicitly prove that the system by de Bruijn indeed corresponds to the system by de Groote and (b) to implement this system efficiently. Also, it would be very interesting to relate this system to the modern type theories (it's quite close to LF aka lambda-P), or to create a variant that is closer to the more modern systems.

Contact: Wiedijk; stream: Type Theory

Automated theorem provers like Otter, Prover9, Vampire and the E prover (random examples) can prove first-order formulas automatically. However, they use skolemization, and therefore they don't give you a proof of the _original_ formula. Of course the proof that they _do_ give can be transformed into a proof of the original formula, but I don't think that this was ever implemented in general. There is _some_ work on this by Hans de Nivelle, but that really deskolemizes the proofs that _his_ theorem prover Bliksem finds, not arbitrary proofs of the skolemized formula. Also, I don't think there's a generic implementation of this that works on proofs in a common proof format (like, for example, Ivy or the TPTP proof format). This would be very nice to have implemented, even if only as a proof of concept (the proofs probably would blow up exponentially, so I don't know whether it would be too useful in practice).
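To recall what is being undone: skolemization replaces each existential quantifier by a fresh function symbol of the enclosing universal variables, for instance

```latex
\forall x\, \exists y\, P(x,y)
\quad\leadsto\quad
\forall x\, P(x, f(x))
```

A resolution proof found by the prover then speaks about the Skolem function $f$, and deskolemization must transform it into a proof of the original formula, in which $f$ does not occur at all.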

Contact: Wiedijk; stream: Automated Reasoning

There's the declarative miz3 proof language for HOL Light. There are proof transformers for that framework that can be implemented, analogous to the "relprem" and "relinfer" tools of the Mizar system. Working on a tool like that would make a great master's thesis project.

Contact: Wiedijk; stream: Proof Assistant Technology

It would be very nice to add a "Poincaré principle" to HOL, i.e., a way for HOL to "import" information from the execution of a program as a theorem of the system. This might be essential to bring the Flyspeck project to a close. There are two main things to work on: one is the specific "execution model" that you want this extension of HOL to be about; the other (more important and more difficult) is to set this up in such a way that you can reasonably easily prove things about the programs. Because you do want to get meaningful information out of the fact that the execution of a program gave a certain result.

Contact: Wiedijk; stream: Proof Assistant Technology

Minimization of deterministic automata is an important and classical problem. Many possible constructions and algorithms exist, and it is still not always clear which perform well in practice, and what the fundamental connections are. There are lots of exciting possible projects in this area: on understanding the connections between various approaches, on complexity, on experimental evaluation, or on extending existing approaches to new kinds of automata. Erin van der Veen wrote his bachelor thesis on experimental evaluation, but there is still a lot to be done!
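As a reference point among the "many possible constructions": the classical partition-refinement approach (Moore's algorithm) starts from the accepting/rejecting split and refines until stable. A sketch (naive in complexity; Hopcroft's algorithm is the faster variant, and the function names here are ours):

```python
# DFA minimization by partition refinement (Moore-style, unoptimized).

def minimize_classes(states, alphabet, delta, accepting):
    """delta: dict mapping (state, letter) -> state. Returns the partition of
    states into Myhill-Nerode equivalence classes."""
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [b for b in partition if b]          # drop an empty block
    while True:
        def block_of(q):
            return next(i for i, b in enumerate(partition) if q in b)
        new = {}
        for q in states:
            # two states stay together iff they are in the same block and,
            # per letter, their successors are in the same block
            sig = (block_of(q), tuple(block_of(delta[q, a]) for a in alphabet))
            new.setdefault(sig, set()).add(q)
        if len(new) == len(partition):               # no block was split: stable
            return list(new.values())
        partition = list(new.values())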

Contact: Jurriaan Rot

Deciding equivalence of models of computation is a fundamental problem, underlying many problems and techniques in the analysis and verification of programs. Among the various notions of equivalence, weak bisimilarity (and its variants) is practically very relevant, and is in use in various verification tools. Weak bisimilarity abstracts away from internal computation steps, so that two programs are equivalent if they have the same observable behaviour. The aim of this project is to develop proof techniques for weak bisimilarity of potentially infinite-state programs.

Contact: Jurriaan Rot

Pointer programs are notoriously difficult to analyze. One of the basic questions is reachability: is a variable x reachable from some pointer y (potentially via various steps) at some point in the program? The idea of this project is to approach the problem of reachability using regular expressions with tests.

Contact: Jurriaan Rot

In 2015, using formal verification, we learned that the sorting algorithm in the standard library of Java had a bug, which could make it crash! The bug itself may be slightly worrying, but it is also a success story: it shows how formal verification can really help, even with widely used software such as the Java library. There are many possible projects in this direction: you could for instance verify other parts of the Java library with the same tools, or investigate other tools that can help with such verification efforts.

Contact: Jurriaan Rot

In this project, you will look at programs on streams (infinite sequences). Is it possible to prove automatically that two stream programs behave the same? Or to prove, for instance, that a given program always returns an increasing stream?
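To see why this is interesting: with Python generators as stream programs, the naive approach of comparing finite prefixes can *refute* equality or monotonicity, but can never *prove* a property of all infinitely many elements; that requires coinductive reasoning, which is what the project is about. A sketch (names are ours):

```python
# Two definitions of the stream of natural numbers, and prefix-based checks.
from itertools import count, islice

def nats():                      # 0, 1, 2, ... directly
    yield from count(0)

def nats2():                     # the same stream, defined by recursion:
    yield 0                      # nats2 = 0 : map (+1) nats2
    for n in nats2():
        yield n + 1

def prefix_equal(s, t, k):
    """Compare the first k elements only -- can refute, never prove, equality."""
    return list(islice(s, k)) == list(islice(t, k))

def prefix_increasing(s, k):
    xs = list(islice(s, k))
    return all(a < b for a, b in zip(xs, xs[1:]))
```

An automatic prover for such properties would instead construct a bisimulation between the two stream programs, covering all positions at once.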

Contact: Jurriaan Rot

For several puzzles one can wonder how hard they are to solve. A standard level of hardness is NP-completeness, and for many puzzles one can prove that they are NP-complete. Picking a puzzle of your own choice and investigating its complexity is a nice topic for a bachelor project. Similarly, one may investigate whether some configuration in a particular game has a winning strategy. In 2017 the PSPACE-completeness of a particular variant of five-in-a-row was investigated by Laurens Kuiper in his bachelor project, and in 2018 the exponential behavior of Sokoban was investigated by Jan Potma in his bachelor project.

Contact: Hans Zantema