
Combinator Calculus EDSLs in C#

A bit of a departure from my usual practical posts in C#, I decided to try my hand at an abstract computer science topic. I've become interested in concatenative programming, and combinator calculi are right up that alley.

The quintessential combinator calculus is the well-known SKI calculus, which features two core combinators, S and K, and one optional combinator I. SKI is quite simple, and it's pretty straightforward to implement as an EDSL. I suspect the key stumbling block for most programmers unfamiliar with functional programming will be the pervasive partial application inherent to combinators.

Partial Application

Partial application is a pretty simple concept for anyone who's done even a little programming with LINQ. C#/.NET features first-class functions called "delegates", with types like:

public delegate T2 Func<T0, T1, T2>(T0 arg0, T1 arg1);

That's the type declaration for a delegate that accepts two arguments of types T0 and T1, and returns a value of type T2. Partial application is simply an operation that applies a function to only some of its arguments, with the result of the partial application being another function that accepts the remaining arguments.

We can simulate a partial application operation in any language featuring first-class functions as follows:

public static class Function
{
  public static Func<T1, T2> Partial<T0, T1, T2>(Func<T0, T1, T2> func,
                                                 T0 arg0)
  {
    return arg1 => func(arg0, arg1);
  }
  public static Func<T0, T2> Partial<T0, T1, T2>(Func<T0, T1, T2> func,
                                                 T1 arg1)
  {
    return arg0 => func(arg0, arg1);
  }
}

The above code sample simulates partial application for the two-arg delegate. For every delegate type accepting arguments of types T0, T1, ... Tn, partial application requires a set of Function.Partial overloads for partially applying 1 argument, 2 arguments, 3 arguments, ..., up to n-1 arguments. This reduces to the formula:

(n choose 1) + (n choose 2) + ... + (n choose n-1) = 2^n - 2

Normally, partial application is provided by a language's compiler, so all these overloads aren't typically required in a language that has strong support for functional programming.
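To make the Function.Partial helpers above concrete, here's a hypothetical usage sketch. Note that distinct argument types are used deliberately: when T0 and T1 are the same type, the two overloads become ambiguous and the call won't compile.

```csharp
using System;

// Assumes the Function class defined above is in scope.
Func<string, int, string> pad = (s, width) => s.PadLeft(width);

var padHi = Function.Partial(pad, "hi"); // fixes arg0; yields Func<int, string>
var pad5  = Function.Partial(pad, 5);    // fixes arg1; yields Func<string, string>

Console.WriteLine(padHi(5));   // "   hi"
Console.WriteLine(pad5("hi")); // "   hi"
```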

There is also another way to support something like partial application in a language without needing copious overloads: currying.

Aside: Currying

Most .NET languages (F# being the only exception I'm aware of) feature uncurried functions by default, which is to say that functions can take multiple arguments at once. Curried functions only ever take a single argument and return a single value, no exceptions. Functions that accept multiple arguments can be simulated by functions that return other single-argument functions that encapsulate the previous arguments.

For instance, consider simple decimal addition, which can be represented by the following delegate:

Func<decimal, decimal, decimal> add = decimal.Add;

If C# featured curried functions, this would instead look like:

Func<decimal, Func<decimal, decimal>> add = decimal.Add;

As you can see, add has now become a function that accepts a single argument and returns a single value: it accepts an argument of type decimal, and returns a function that accepts a decimal argument and returns a decimal. In the abstract, applying an N-argument function is then a series of N-1 partial applications terminated by a final application that invokes the underlying operation:

Func<decimal, Func<decimal, decimal>> add = decimal.Add;
Func<decimal, decimal> add1 = add(1M);
Func<decimal, decimal> add2 = add(2M);
Func<decimal, decimal> add3 = add(3M);
...
Console.WriteLine(add2(0));   // output: 2
Console.WriteLine(add(2)(0)); // output: 2

Currying is a pretty powerful concept because it reduces functions to a standard form, taking and returning single values, which allows easy composition of functions in higher-order patterns, like we find in LINQ.
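The transformation from an uncurried delegate to its curried form can itself be captured by a helper, one overload per arity. A minimal sketch for the two-argument case (the class and method names here are my own, not standard .NET API):

```csharp
using System;

public static class Currying
{
    // Converts an uncurried two-argument delegate into its curried
    // form: a function that returns a function.
    public static Func<T0, Func<T1, T2>> Curry<T0, T1, T2>(Func<T0, T1, T2> func)
    {
        return arg0 => arg1 => func(arg0, arg1);
    }
}
```

With this helper, `Currying.Curry<decimal, decimal, decimal>(decimal.Add)` gives exactly the curried add used in the examples below.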

Languages with currying by default avoid all the above parentheses by simply using whitespace for application. For instance, the above in F# looks something like:

let add x y = decimal.Add (x, y)
let add1 = add 1M
let add2 = add 2M
let add3 = add 3M
...
Console.WriteLine (add2 0M)
Console.WriteLine (add 2M 0M)

This syntax closely matches the concepts you'll find in combinator calculi like the SKI calculus.

The SKI Calculus

Only a minimal understanding of partial application is needed to see how to implement the SKI calculus in C#. Basically, you can view the S, K and I combinators either as curried functions, or as operators supporting partial application by default. In the abstract, the SKI combinators are described as:

let x,y,z be variables representing any subexpression in the SKI calculus:

I(x)     = x
K(x,y)   = x
S(x,y,z) = x(z)(y(z))

Basically, I is a function that accepts one argument and simply returns it. K is a function that accepts two arguments and simply returns the first. S is a function accepting three arguments: it applies the first argument to the third, expecting a function back, and then applies that function to the result of applying the second argument to the third.

It's important to note here that all function applications above can be partial applications. So you can have a legitimate expression like "S x", which supplies S with only one argument, even though S takes 3 arguments; it's simply a partial application. The result of "S x" is then a function that accepts at least two more arguments.

This is where currying comes in handy, because partial application semantics is pervasive. The SKI calculus in a language with curried functions would look like:

let x,y,z be variables representing any subexpression in the SKI calculus:

I x     = x
K x y   = x
S x y z = x z (y z)

This is how the raw SKI calculus is generally presented in the academic literature.
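As a small worked example of these rules, I can be defined in terms of S and K alone, which is why I is often described as optional. Applying S K K to any expression e:

```
  S K K e
= K e (K e)    -- S rule: x z (y z), with x = K, y = K, z = e
= e            -- K rule: return the first argument
```

In fact S K y reduces to the identity for any y, so S K S works just as well.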

Given that the SKI calculus supports pervasive partial application, we need to create a class hierarchy that represents the S, K and I terms, and all possible partial applications. We can do this naively in standard object-oriented fashion with the following set of classes:

public abstract class SK
{
    public abstract SK Apply(SK term);
}
public class I : SK
{
    internal static I instance = new I();
    public override SK Apply(SK term)
    {
        return term;
    }
}
public class K : SK
{
    internal static K instance = new K();
    public override SK Apply(SK term)
    {
        return new K1 { First = term };
    }
}
public class S : SK
{
    internal static S instance = new S();
    public override SK Apply(SK term)
    {
        return new S1 { First = term };
    }
}

This is a standard object-oriented representation of curried SKI combinators. Each combinator takes a single argument of type SK, and returns a single value of type SK. This is all well and good for combinators like I, which only takes a single argument, but K and S take more. This is where partial application comes in: we return intermediate terms K1 and S1 for partial applications of K and S, respectively:

public class K1 : SK
{
    public SK First { get; set; }
    public override SK Apply(SK term)
    {
        return First;
    }
}
public class S1 : SK
{
    public SK First { get; set; }
    public override SK Apply(SK term)
    {
        return new S2 { First = First, Second = term };
    }
}

Since S requires a third argument before the operation can reduce, applying S1 returns yet another intermediate value, representing the partial application of two arguments to S:

public class S2 : SK
{
    public SK First { get; set; }
    public SK Second { get; set; }
    public override SK Apply(SK term)
    {
        return First.Apply(term).Apply(Second.Apply(term));
    }
}
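As a quick sanity check of this encoding: since S K K reduces to the identity, applying it to any term should hand back that very term. A sketch, assuming the classes above are in scope:

```csharp
using System;

// S K K term = K term (K term) = term
var skk = new S().Apply(new K()).Apply(new K());
var term = new I();
Console.WriteLine(ReferenceEquals(skk.Apply(term), term)); // True
```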

And thus we have the full SKI combinator calculus. An example SKI term:

var SKSK = new S().Apply(new K()).Apply(new S()).Apply(new K());

This is pretty awkward, and we can do much better. You might have noticed that the core S, K, I classes were singletons, and that's because we only ever need the one instance since they contain no state. Combined with some additional properties on the base SK class, embedded SKI expressions can be pretty convenient:

public abstract class SK
{
    public abstract SK Apply(SK term);
    public SK S
    {
        get { return Apply(Combinators.S.instance); }
    }
    public SK I
    {
        get { return Apply(Combinators.I.instance); }
    }
    public SK K
    {
        get { return Apply(Combinators.K.instance); }
    }
}

The previous example then becomes:

var SKSK = new S().K.S.K;

The SKI calculus is actually Turing complete, so any computable function can be computed by some SKI expression. For instance, here's a simple infinite loop:

var SII_SII = new S().I.I.Apply(new S().I.I); // loops forever

The Functional Representation

The above is a canonical object-oriented representation of terms. C# provides us with some other options though, including a more functional encoding. Simply stated, we convert the class hierarchy into an enum of opcodes plus an argument array, forming a disjoint union, and evaluation becomes a switch over the possible cases instead of virtual method dispatch:

public enum SkiCombinators
{
    I, K, S
}
public struct SK
{
    SkiCombinators op;
    SK[] args;
    public SK Apply(SK term)
    {
        switch (op)
        {
            case SkiCombinators.I:
                return term;
            case SkiCombinators.K:
                return args != null
                     ? args[0]
                     : new SK { op = op, args = new[] { term } };
            case SkiCombinators.S:
                return args == null      ? new SK { op = op, args = new[] { term } }:
                       args.Length == 1  ? new SK { op = op, args = new[] { args[0], term } }:
                                           args[0].Apply(term).Apply(args[1].Apply(term));
            default: throw new InvalidOperationException("Unknown combinator.");
        }
    }
}

Combinators are here represented in a form that's closer to an efficient instruction encoding for an abstract machine. We have an opcode representing the combinator, and a list of pending arguments represented as an array. The I combinator never accumulates arguments, since I x reduces immediately. The K combinator accumulates at most one argument, corresponding to the partial application K1. And the S combinator accumulates at most two arguments, corresponding to the partial applications S1 and S2; given its third argument, it reduces.

I tend to find this representation more convenient to work with, since the evaluation function is defined concisely in one place. It's also a small step from this encoding to an opcode array representing the instruction stream of an SKI machine (I'll write about it if there's interest).

We can make embedded SKI expressions convenient with the following helper properties:

public struct SK
{
    ...
    public static SK Begin_I
    {
        get { return new SK { op = SkiCombinators.I }; }
    }
    public static SK Begin_K
    {
        get { return new SK { op = SkiCombinators.K }; }
    }
    public static SK Begin_S
    {
        get { return new SK { op = SkiCombinators.S }; }
    }

    public SK S
    {
        get { return Apply(Begin_S); }
    }
    public SK I
    {
        get { return Apply(Begin_I); }
    }
    public SK K
    {
        get { return Apply(Begin_K); }
    }
}

The previous SKI examples now look like this:

var SKSK = SK.Begin_S.K.S.K;
var SII_SII = SK.Begin_S.I.I.Apply(SK.Begin_S.I.I);
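The same S K K sanity check works against this encoding too, and with it we can sketch a taste of the calculus's expressiveness via Church-style booleans. Note that the equality checks below lean on default struct equality, which compares the op field and the args reference; this happens to suffice here because the checked terms carry null argument arrays:

```csharp
using System;

// Assumes the SK struct and Begin_ helpers above are in scope.
var skk  = SK.Begin_S.K.K;  // S K K = I
var term = SK.Begin_I;
Console.WriteLine(skk.Apply(term).Equals(term)); // True

// Church-style booleans as SKI terms:
var tru = SK.Begin_K;       // K x y = x
var fls = SK.Begin_S.K;     // (S K) x y = K y (x y) = y
Console.WriteLine(tru.Apply(SK.Begin_I).Apply(SK.Begin_K).Equals(SK.Begin_I)); // True
Console.WriteLine(fls.Apply(SK.Begin_I).Apply(SK.Begin_K).Equals(SK.Begin_K)); // True
```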

This representation is also similar to the algebraic representation you'd find in a functional language like F#. In F#, this would look something like:

type ski = S | K | I | K1 of ski | S1 of ski | S2 of ski * ski
let rec apply e arg0 =
    match e with
    | S -> S1 arg0
    | K -> K1 arg0
    | I -> arg0
    | K1 x -> x
    | S1 x -> S2 (x, arg0)
    | S2 (x, y) -> apply (apply x arg0) (apply y arg0)

This is the direct algebraic equivalent of the first object-oriented representation of the SKI calculus, with values representing the partial applications of combinators.

What's It All For?

This was really just a bit of fun in exploring abstract computer science concepts with C#, and hopefully can serve as a rudimentary introduction to convenient functional encodings for language terms. Combinator calculi are a good introduction because they are simple and concise, yet computationally complete.

Despite being simple and studied for a long time, there's still interesting work being done with these calculi. For instance, a great feature of some dynamically typed languages, one that still isn't well understood in typed terms, is metacircularity: such languages can represent their own programs as values, inspect those values, and execute them (Lisp is the canonical example).

However, doing this in a statically typed language is still an open problem... except for typed combinator calculi! I believe the SFB combinator calculus is the first example of a statically typed language that can inspect and evaluate its own programs, and it doesn't need anything beyond the types of System F, which we also have in C#.

Like the SK calculus, the SFB calculus can represent lambda abstraction and application, and so in theory, we actually have a metacircular statically typed language! The downside is that the metacircularity is at the level of the combinator terms, not at the lambda abstraction level, so it's allegedly a little awkward to use for programs written using lambda abstraction.

Still, concatenative languages exist, and some are quite popular, like Factor, so perhaps there's some application of this approach to concatenative languages. My next post will probably be a C# encoding of the SFB calculus, which I've already written, along with some interesting terms showing its expressive power.
