## Lambda Calculus: the Generalized Finite Development Theorem

**Lecturer**: Jean-Jacques Lévy (Université Paris-Diderot - Paris 7 & École Polytechnique, France)

**Slides**: 📄 Download | **Video**: ▶️ Watch

The type-free lambda calculus is not normalizing: reductions may be infinite. Typed lambda calculi, by contrast, are strongly normalizing: all reductions are finite, but the proof of strong normalization is often quite complex. Non-normalization is due to the infinite creation of new redexes. The generalized finite development theorem states that the finiteness of redex families induces the finiteness of reductions. It generalizes Curry and Feys's finite development theorem for residuals of redexes.
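The infinite creation of new redexes can be seen concretely on the term Ω = (λx.x x)(λx.x x). Below is a minimal sketch, not from the lecture, using a tuple encoding of terms and naive substitution (adequate here, since the example has no variable capture):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg).
# Naive substitution (no capture avoidance; fine for this closed example).
def subst(t, x, s):
    kind = t[0]
    if kind == 'var':
        return s if t[1] == x else t
    if kind == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def step(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                      # beta redex: (λx.b) a → b[x:=a]
            return subst(f[2], f[1], a)
        r = step(f)
        if r is not None:
            return ('app', r, a)
        r = step(a)
        return None if r is None else ('app', f, r)
    if t[0] == 'lam':
        r = step(t[2])
        return None if r is None else ('lam', t[1], r)
    return None

delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))   # λx. x x
omega = ('app', delta, delta)                               # Ω = (λx.x x)(λx.x x)
print(step(omega) == omega)   # True: each step recreates the same redex
```

Contracting the unique redex of Ω yields Ω again, so the reduction never terminates: a new copy of the redex is created at every step.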

We therefore review residuals of redexes, equivalence by permutations, the labeled calculus, redex families, canonical representatives, stability of redexes, the GFD theorem, and connections with first-order typed calculi and second-order type theory.

## Compiler Construction in Maude

**Lecturer**: Christiano Braga (Universidade Federal Fluminense, Brazil)

**Slides**: 📄 Download | **Video**: ▶️ Watch

Term rewriting, and in particular rewriting modulo, is an expressive and powerful technique for formally describing program transformers by relating the initial algebras of the given languages. Moreover, the close relationship between term rewriting systems, automata, and Kripke models allows for a powerful approach to compilation and modal-logic-based program reasoning.

In this short course we will discuss how this approach can be effectively implemented in the Maude language. We will go through each component of a compiler and study how it can be implemented in Maude, including parsing and code generation. The course relies on a formally specified intermediate language whose semantics is directly captured as a term rewriting system. A compiler is then represented as a transformation from the concrete syntax of a given programming language into the intermediate representation. From the intermediate representation, code is generated and logical program analysis may be performed.

We plan to introduce the necessary concepts of compiler construction, the Maude language, and program analysis on an as-needed basis and by example. No prior background is assumed. This practical approach should allow us to cover the many topics involved in this short course.
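The compiler-as-transformation idea can be sketched in a few lines. The following toy illustration is in Python rather than Maude, and all names (`num`, `add`, `PUSH`, `ADD`) are invented for this sketch: source terms are translated into a stack-machine intermediate representation whose semantics is itself a rewriting of stacks.

```python
# Hypothetical toy compiler: expression terms → stack-machine code.
def compile_expr(t):
    """Translate an expression term into stack-machine instructions."""
    if t[0] == 'num':
        return [('PUSH', t[1])]
    if t[0] == 'add':
        return compile_expr(t[1]) + compile_expr(t[2]) + [('ADD',)]
    raise ValueError(f'unknown term: {t!r}')

def run(code):
    """Reference semantics of the intermediate language, as stack rewriting."""
    stack = []
    for instr in code:
        if instr[0] == 'PUSH':
            stack.append(instr[1])
        else:                      # ADD pops two values, pushes their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

prog = ('add', ('num', 2), ('add', ('num', 3), ('num', 4)))
print(run(compile_expr(prog)))    # → 9
```

In the Maude setting, both `compile_expr` and `run` would be sets of (conditional) rewrite rules over the initial algebras of the two languages, and correctness becomes a relation between the two rewrite systems.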

The objectives of this short course are threefold:

- Discuss the relationship between program transformation and term-rewriting using compiler construction as a case study.
- Discuss how metaprogramming can be done in term rewriting using the Maude language.
- Illustrate how the Maude language and system can be effectively used in compiler construction, exploring, in particular, its metaprogramming facilities.

## An Introduction to Deep Inference Methodology in Proof Theory

**Lecturer**: Paola Bruscoli (University of Bath, United Kingdom)

**Slides**: 📄 Download | **Video**: ▶️ Watch

Deep inference has emerged during the past 20 years as a methodology in proof theory to design proof formalisms and proof systems for various logics, addressing aspects of computational relevance. While at first sight it can be regarded as a combination of equational reasoning and term rewriting systems, its value comes from the rich theory of normalisation and new methods of proof composition that arise from a careful design of systems. The range of logics re-thought through the lens of deep inference is broad, and includes substructural logics related to process algebras that cannot be otherwise directly represented in sequent calculi: this opens the possibility of transferring those theoretical methods towards static analysis.

## Unification and Narrowing in Maude 3.0

**Lecturer**: Santiago Escobar (Universitat Politècnica de València, Spain)

**Slides**: 📄 Download | **Video**: ▶️ Watch

Maude is a language and a system based on rewriting logic. It is a mathematical modeling language thanks to its logical basis and its initial model semantics. Maude can be used in three, mutually reinforcing ways: as a declarative programming language, as an executable formal specification language, and as a formal verification system. Logical reasoning capabilities have been recently added to Maude and, in this course, we give an overview of the different unification and narrowing techniques available in Maude 3.0, focusing on some of the programming, modeling, and verification aspects of Maude.
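At the core of the narrowing techniques discussed here is syntactic unification. The sketch below is an illustration in Python, not Maude's implementation (which also supports unification modulo axioms such as associativity and commutativity); terms are `('var', name)` or `(symbol, arg, ...)` tuples:

```python
# Illustrative syntactic unification with occurs check.
def walk(t, s):
    """Follow variable bindings in substitution s."""
    while t[0] == 'var' and t[1] in s:
        t = s[t[1]]
    return t

def occurs(x, t, s):
    t = walk(t, s)
    if t[0] == 'var':
        return t[1] == x
    return any(occurs(x, a, s) for a in t[1:])

def unify(t1, t2, s=None):
    """Return a most general unifier as a dict, or None on failure."""
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if t1[0] == 'var':
        return None if occurs(t1[1], t2, s) else {**s, t1[1]: t2}
    if t2[0] == 'var':
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None                     # symbol clash
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

# f(X, g(a)) =? f(g(Y), X)
# bindings: X ↦ g(Y) and Y ↦ a, i.e. both sides become f(g(a), g(a))
print(unify(('f', ('var', 'X'), ('g', ('a',))),
            ('f', ('g', ('var', 'Y')), ('var', 'X'))))
```

Narrowing then applies such unifiers between a subterm of a goal and the left-hand side of a rewrite rule, instantiating logical variables instead of merely matching them.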

## Nominal Rewriting

**Lecturer**: Maribel Fernandez (King’s College London, United Kingdom)

**Slides**: 📄 Download | **Video**: ▶️ Watch

The nominal approach to the specification of languages with binding operators, introduced by Gabbay and Pitts, has its roots in nominal set theory. In this course, we will start by defining nominal syntax (including the notions of fresh atoms and alpha-equivalence) and develop matching and unification algorithms which take into account the alpha-equivalence relation. We will then define nominal rewriting, a generalisation of first-order rewriting that provides in-built support for alpha-equivalence following the nominal-set approach. We will show how good properties of first-order rewriting survive the extension, by giving an efficient rewriting algorithm, a critical pair lemma, and a confluence theorem for orthogonal systems. Finally, we will discuss typed versions of nominal rewriting and notions of rewriting modulo AC operators.
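The nominal treatment of alpha-equivalence via atom swapping can be sketched as follows. This is a small illustration invented for this note, not the course's formal development: two abstractions are alpha-equivalent when one body, with the binders transposed, equals the other, provided the new binder is fresh for the old body.

```python
# Atoms are strings; terms are ('lam', atom, body) or (symbol, arg, ...).
def swap(a, b, t):
    """Apply the transposition (a b) to every atom in t, binders included."""
    if isinstance(t, str):
        return b if t == a else a if t == b else t
    if t[0] == 'lam':
        return ('lam', swap(a, b, t[1]), swap(a, b, t[2]))
    return (t[0],) + tuple(swap(a, b, x) for x in t[1:])

def atoms(t):
    if isinstance(t, str):
        return {t}
    return set().union(*(atoms(x) for x in t[1:]))

def alpha_eq(t1, t2):
    if isinstance(t1, str) or isinstance(t2, str):
        return t1 == t2
    if t1[0] != t2[0] or len(t1) != len(t2):
        return False
    if t1[0] == 'lam':
        a, b = t1[1], t2[1]
        if a == b:
            return alpha_eq(t1[2], t2[2])
        # b must be fresh for t1's body; then compare under the swapping (a b)
        return b not in atoms(t1[2]) and alpha_eq(swap(a, b, t1[2]), t2[2])
    return all(alpha_eq(x, y) for x, y in zip(t1[1:], t2[1:]))

t1 = ('lam', 'a', ('f', 'a', 'b'))   # λa. f(a, b)
t2 = ('lam', 'c', ('f', 'c', 'b'))   # λc. f(c, b) — same term up to renaming
t3 = ('lam', 'b', ('f', 'b', 'b'))   # λb. f(b, b) — captures the free b
print(alpha_eq(t1, t2), alpha_eq(t1, t3))   # → True False
```

The freshness side condition is exactly what nominal matching and unification must carry along as constraints, which is why they operate on terms-in-freshness-contexts rather than bare terms.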

## Automated Complexity Analysis for Term Rewriting

**Lecturer**: Carsten Fuhs (University of London, United Kingdom)

**Slides**: 📄 Download | **Video**: ▶️ Watch

Complexity analysis for term rewrite systems (TRSs) investigates the length of the longest rewrite sequence (or: longest derivation) as a function of the size of its start term. If the set of start terms is not restricted, this complexity function is known as the derivational complexity of the TRS. It is in general not computable; that is why the development of techniques and tools has focused on sufficient criteria for the inference of (not necessarily tight) upper and lower asymptotic bounds of the complexity function.

Research in this area initially dealt with upper bounds on the derivational complexity induced by termination proof techniques: if termination of a given TRS is proved using a particular technique, this implies that the derivational complexity of the TRS cannot exceed a bound specific to the proof technique and its parameters.

However, from a program analysis perspective, derivational complexity is not a suitable complexity measure: even a simple TRS with the two rules `double(0) → 0` and `double(s(x)) → s(s(double(x)))` to double a natural number has exponential derivational complexity. The issue is that derivational complexity considers start terms of arbitrary shape; in particular, start terms with nested defined symbols like `double(double(... double(s(0)) ...))` are allowed and may cause exponential derivational complexity. In contrast, computing the double of a natural number `double(s(...s(0)...))` (intuitively: applying the function double to data) has linear complexity. This latter notion, where defined symbols in the start term are allowed only at its root, is significantly closer to complexity analysis for conventional programming languages. The corresponding complexity function is known as the runtime complexity of a TRS.
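The gap between the two measures for the `double` TRS can be checked by counting rule applications. The small script below (an illustration written for this note, not a tool from the course) counts the rewrite steps needed to fully normalize the two families of start terms:

```python
# Counting rewrite steps for the TRS
#   double(0) → 0 ,  double(s(x)) → s(s(double(x)))
def steps_double(n):
    """Steps to normalize double(s^n(0)): the second rule fires n times,
    then double(0) → 0 fires once."""
    return n + 1

def steps_nested(k):
    """Steps to normalize double(double(... double(s(0)) ...)) with k doubles:
    each layer doubles the size of the numeral the next layer must traverse."""
    total, size = 0, 1          # size = current numeral, starting from s(0) = 1
    for _ in range(k):
        total += size + 1       # doubling s^size(0) costs size + 1 steps
        size *= 2
    return total

print([steps_double(n) for n in range(5)])   # linear: [1, 2, 3, 4, 5]
print([steps_nested(k) for k in range(5)])   # exponential growth: [0, 2, 5, 10, 19]
```

With start terms of size proportional to n the first family gives linear runtime complexity, while the nested family of size proportional to k needs on the order of 2^k steps, matching the exponential derivational complexity claimed above.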

In this course, we will discuss a selection of automated techniques for inferring asymptotic lower and upper bounds on the derivational and runtime complexity of TRSs. For upper bounds, the dependency pair method from termination analysis has been adapted to allow for a certain modularity in the analysis. Further techniques for finding upper bounds include transformations between different complexity problems for rewriting, or from rewriting to programs on integer numbers. As witnesses for lower bounds, one can find “decreasing loops” or identify a family of start terms tₙ, parametric in the term size n, for which one can prove inductively that corresponding rewrite sequences of a certain length exist.

If time permits, we may also briefly look at the application of complexity analysis tools for term rewriting as backends for the complexity analysis of programming languages such as Prolog, OCaml, and Java.

## Inversion and Term Rewriting Systems

**Lecturers**: Maja H. Kirkeby (Roskilde University, Denmark) and Robert Glück (University of Copenhagen, Denmark)

**Slides**: 📄 Download | **Video**: ▶️ Watch

Inversion is an important and useful program transformation and has been studied in various programming language paradigms. For example, the transformation from a zip program into an unzip program is an inversion. Dijkstra appears to have been the first to study program inversion, which he did in a guarded-command language. Subsequently, program inversion algorithms were developed for different forms of inversion and for different programming languages. In particular, in the last decade program inversion has been studied in the context of term rewriting systems.

In these lectures, we use conditional constructor term rewriting systems as a framework to study program inversion. They can model first-order functional programs, logic programs, and functional logic programs, and are suitable for observing and discussing common problems arising from inversion without considering language specifications. We present and discuss inverters developed for rewrite systems; these algorithms yield different types of program inversions, namely full, partial and semi-inversion.

Starting with a gentle introduction to inversion, we will present the inversion algorithms step by step, moving from full inversion to semi-inversion of conditional constructor term rewriting systems. The inverters will be illustrated with examples, and to support understanding the lectures will be interleaved with exercises.
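The zip/unzip example mentioned above can be made concrete. The following is a toy illustration in Python, not the lecturers' inverter: reading each rewrite rule of the zip program right-to-left yields its full inverse.

```python
# zip as a constructor TRS:
#   zip(x:xs, y:ys) → (x,y) : zip(xs, ys)
#   zip([], [])     → []
def zip_(xs, ys):
    if not xs and not ys:
        return []
    return [(xs[0], ys[0])] + zip_(xs[1:], ys[1:])

# Full inversion: swap input and output patterns of each rule.
#   unzip((x,y) : rest) → (x:xs, y:ys)  where (xs, ys) = unzip(rest)
#   unzip([])           → ([], [])
def unzip(ps):
    if not ps:
        return [], []
    (x, y), rest = ps[0], ps[1:]
    xs, ys = unzip(rest)
    return [x] + xs, [y] + ys

pairs = zip_([1, 2, 3], ['a', 'b', 'c'])
print(unzip(pairs) == ([1, 2, 3], ['a', 'b', 'c']))   # → True
```

This case is benign because zip is injective on equal-length lists, so the rule-swapped system is again a deterministic program; the interesting problems the lectures address arise when the swapped rules overlap or lose information, which is where partial and semi-inversion come in.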

## Lambda Calculi with Patterns

**Lecturers**: Luigi Liquori (INRIA Sophia Antipolis - Méditerranée, France) and Vincent van Oostrom (University of Innsbruck, Austria)

**Slides**: 📄 Download | **Video**: ▶️ Watch

Pattern matching has been at the core of functional programming since the 1980s and is currently making its way into other programming language paradigms, e.g. into Java. In this course, we present pattern matching calculi from the perspective of rewriting, in the form of lambda calculi with patterns: where the lambda calculus is the calculus of parameter passing, pattern matching calculi add the ability to test whether the argument matches some pattern.

1. Pattern matching lambda calculi are presented together with their design features and their expressiveness, based on examples of pattern matching programs.
2. Rewrite properties of the lambda calculus, such as confluence, termination, and standardisation, are discussed based on the examples introduced.
3. Typing pattern calculi is discussed from the perspective of the Curry-Howard isomorphism, and the static guarantees types can give, such as termination.
4. Several inroads towards establishing the rewrite properties in (2) are presented, and their (dis)advantages discussed:
   - Embedding into a known format, e.g. CRSs/HRSs
   - Instantiating axiomatised proof methods, e.g. AxRS
   - Making the matching theory explicit, e.g. commutativity or associativity, and establishing properties modulo it.
5. Open problems and themes for further research are presented.

## From Rewriting to Matching Logic

**Lecturers:** Dorel Lucanu (Alexandru Ioan Cuza University, Romania), Xiaohong Chen (University of Illinois at Urbana-Champaign, USA), Grigore Rosu (University of Illinois at Urbana-Champaign, USA) and Andrei Arusoaie (Alexandru Ioan Cuza University, Romania)

**Slides**: 📄 Download | **Video**: ▶️ Watch

Matching logic (ML) refers to a family of logics designed to supply a suitable framework for specifying and reasoning about both static structures and dynamic properties by means of patterns and pattern matching. One of its sources of inspiration is the Rewriting Logic Semantics Project. A language definition in ML comprises both the operational and the axiomatic aspects, making no distinction between the two. The semantics of a programming language is given by a set of rewrite rules, which are special patterns in ML. A language-independent proof system can then be used to derive both operational behaviors and formal properties of a program. In other words, one matching logic semantics fulfills the roles of both operational and axiomatic semantics in a uniform manner. In particular, ML subsumes unification and anti-unification, both of which are special instances of ML reasoning (on conjunctive and disjunctive patterns, respectively). This opens a window to apply term rewriting techniques in ML.

Term rewriting and its related tools can be used both in experimenting with the operational aspects, e.g., to derive interpreters, and in the proving process, to deliver proof certificates for the pattern equivalences they establish. For instance, syntactic unifiers can be used to transform the conjunction of two structural patterns into an equivalent conjunction of a structural pattern and a constraint, which is the preferred form for program analysis/verification tools. Moreover, the unification algorithm can supply a proof certificate for the equivalence of the two patterns.
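That transformation can be illustrated schematically. The sketch below is invented for this note and handles only flat terms: the conjunction f(X, a) ∧ f(b, Y) becomes the structural pattern f(b, a) together with the constraint X = b ∧ Y = a.

```python
# Schematic conjunction-to-constraint transformation for flat patterns.
# Patterns are (symbol, arg, ...); uppercase strings are variables.
def conjunction_to_constraint(p1, p2):
    """Return (structural pattern, list of equations), or None if unsatisfiable."""
    if p1[0] != p2[0] or len(p1) != len(p2):
        return None                               # symbol clash: conjunction is ⊥
    structure, constraint = [p1[0]], []
    for a, b in zip(p1[1:], p2[1:]):
        if a == b:
            structure.append(a)
        elif isinstance(a, str) and a.isupper():  # a is a variable
            structure.append(b)
            constraint.append((a, b))
        elif isinstance(b, str) and b.isupper():  # b is a variable
            structure.append(a)
            constraint.append((b, a))
        else:
            return None
    return tuple(structure), constraint

print(conjunction_to_constraint(('f', 'X', 'a'), ('f', 'b', 'Y')))
# → (('f', 'b', 'a'), [('X', 'b'), ('Y', 'a')])
```

A full unification algorithm plays this role for nested terms; the record of unification steps is precisely the proof certificate for the equivalence of the original conjunction and the resulting pattern-plus-constraint.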

## Session Types for Message-Passing Concurrency

**Lecturer:** Jorge A. Pérez (University of Groningen, The Netherlands)

**Slides**: 📄 Download | **Video**: ▶️ Watch

This lecture will offer ISR participants an overview of the key concepts of session types, a type-based approach for communication correctness in message-passing concurrent programs.

Session types have been intensively developed within different research communities (notably, concurrency theory, programming languages, and software engineering) in the last 20 years. The lecture describes these developments in two parts. In the first part, participants will be exposed to basic concepts of session types, in binary and multiparty formulations. The second part analyzes these concepts from the perspective of “propositions as sessions”, the correspondence that elegantly connects session types and linear logic in the style of the Curry-Howard isomorphism.
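The basic binary idea can be conveyed with a tiny hypothetical sketch, invented for this note: a channel endpoint carries a session type, here !int . ?str . end, and each communication consumes one step of it. A session type system enforces this statically; the sketch below merely checks it at runtime (and elides the transport, passing the incoming value directly).

```python
# Runtime sketch of one endpoint of a binary session of type !int . ?str . end.
class Endpoint:
    def __init__(self, session):
        # session: list of ('send', type) / ('recv', type) actions, in order
        self.session = list(session)
        self.queue = []
    def send(self, value):
        op, typ = self.session.pop(0)
        assert op == 'send' and isinstance(value, typ), 'protocol violation'
        self.queue.append(value)
    def recv(self, incoming):
        op, typ = self.session.pop(0)
        assert op == 'recv' and isinstance(incoming, typ), 'protocol violation'
        return incoming

c = Endpoint([('send', int), ('recv', str)])
c.send(42)                 # ok: the first action of the session is !int
print(c.recv('done'))      # ok: the second action is ?str  → done
```

Sending a string first, or receiving before sending, trips the assertion; under "propositions as sessions" the same discipline is imposed by linear-logic typing, so well-typed processes simply cannot reach such states.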

After the lecture, participants will have a high-level understanding of how session types offer a rigorous, paradigm-independent verification methodology for message-passing programs. They will also have an essential understanding of the key literature and open research problems in this area.

## Pathway Logic: Using Rewriting Logic to Understand How Cells Work

**Lecturer:** Carolyn Talcott (SRI International, USA)

**Slides**: 📄 Download | **Video**: ▶️ Watch

Biological cells sense their external environment and internal state, and propagate and process the sensed information via concurrent processes involving proteins, metabolites, and genes. As a result, genes can be turned on or off, proteins and other biochemicals can be secreted (sending messages to neighboring cells), and cells may decide to proliferate or die.

Formal executable models of cells and their response to perturbations can help to understand how cells work, how the different processes coordinate or compete, and how to intercede when things go wrong (therapeutics).

Pathway Logic (PL) is a logical system based on rewriting logic for building such executable models (in Maude) and carrying out in silico experiments.

This talk will describe how cell signaling processes are modeled in rewriting logic and how Maude's support for reflection is used to provide an interactive graphical interface for exploring models, and will illustrate the system with a variety of examples.

Pathway Logic Web Site: pl.csl.sri.com