
Code Generation

We leverage code generation techniques to provide programmers with tools that raise their productivity, simplify code maintenance, and deliver high performance across a wide spectrum of architectures.

Most of our research is validated in the context of two tool chains:

  • The Clean compiler and infrastructure
  • The SaC compiler sac2c and its infrastructure

Both these languages are purely functional, yet they target different application areas. Clean is a fully-fledged lazy language similar in expressiveness to Haskell; here, the key focus of our work is on productivity and maintainability. SaC is an array programming language where we strive for competitive parallel performance on a variety of hardware architectures without any target-specific code. Our current expertise covers various aspects relevant to achieving our overarching goals, including:
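To give a flavour of the expressiveness that lazy evaluation offers, the following sketch approximates a Clean-style infinite data structure with a Python generator. This is only an analogy (Python is eager by default, and the names are hypothetical), but it conveys the idea of demand-driven evaluation:

```python
from itertools import islice

def fibs():
    """A conceptually infinite stream of Fibonacci numbers.

    In a lazy language such as Clean this could be written as a
    self-referential infinite list; a generator mimics that
    on-demand evaluation in Python."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# Only as many elements as the consumer demands are ever computed.
first_ten = list(islice(fibs(), 10))
print(first_ten)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The producer never terminates on its own; it is the consumer (`islice`) that bounds the computation, which is exactly the separation of concerns lazy evaluation enables.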

Functional language design and implementation

Driven by practical examples, we continuously investigate new programming concepts and explore ways to implement them efficiently.

Compiler optimisation

Code generation for declarative languages comes with many challenges when aiming for performance. These challenges lie at different levels of abstraction: some arise at the declarative level, others at the classical, lower level. Advancing the state of the art on both levels is essential for achieving competitive runtimes across a variety of applications.
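As a small illustration of an optimisation at the declarative level, consider fusion: two consecutive element-wise traversals can be combined into a single one, eliminating the intermediate array. The sketch below (plain Python with hypothetical names, not SaC) shows the before and after of such a transformation, which an optimising compiler performs automatically:

```python
def square(x):
    return x * x

def inc(x):
    return x + 1

def unfused(xs):
    # Two traversals; the intermediate list `tmp` is materialised.
    tmp = [square(x) for x in xs]
    return [inc(y) for y in tmp]

def fused(xs):
    # One traversal; the functions are composed per element,
    # avoiding the intermediate structure entirely.
    return [inc(square(x)) for x in xs]

data = [1, 2, 3, 4]
assert unfused(data) == fused(data) == [2, 5, 10, 17]
```

Beyond saving a pass over the data, eliminating the intermediate array reduces memory traffic, which on modern hardware is often the dominant cost.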

Runtime systems for parallel execution

Particularly when generating code for a variety of architectures that may be used in concert, the design and implementation of a suitable runtime layer is essential. We aim to meet the challenges posed by an ever-changing landscape of available hardware.
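To give a flavour of one task such a runtime layer performs, the following minimal sketch (hypothetical Python, not our actual runtime) partitions a data-parallel operation into chunks and schedules them over a pool of worker threads, the kind of decision a runtime system makes when mapping one and the same program onto machines with differing core counts:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(f, xs, workers=4):
    """Split xs into roughly equal chunks, one per worker,
    and apply f to every element in parallel."""
    n = max(1, len(xs))
    chunk = (n + workers - 1) // workers  # ceiling division
    chunks = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves chunk order, so results can simply
        # be concatenated afterwards.
        results = pool.map(lambda c: [f(x) for x in c], chunks)
    return [y for part in results for y in part]

squares = parallel_map(lambda x: x * x, list(range(8)))
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

A real runtime layer additionally has to choose chunk sizes dynamically, balance load between heterogeneous devices, and manage data placement; the sketch only shows the basic partition-and-schedule pattern.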

Code generation for novel hardware

Portability of code implies the ability to efficiently utilise novel hardware whenever it has properties that are deemed beneficial. A lot of our work looks at possible ways to leverage novel hardware for parallel execution. This includes not only clusters, shared-memory multi-core systems, and GPUs, but also TPUs, FPGAs, and systems on chip.

Application exploration

All our research is driven by practical needs from concrete applications. Unsurprisingly, this often raises questions about how to formulate algorithms in the first place.