
ML Modules in C# - Sorely Missing Polymorphic Type Constructors

As Chung-chieh Shan pointed out, my encoding of modules in C# is somewhat limited. In particular, I cannot abstract over type constructors, which is to say, C# is missing generics over generics. Consider the Orc.NET interpreter:
class Orc {
class Exp<T> { ... }

public Exp<U> Seq<T,U>(Exp<T> e1, Func<T, Exp<U>> e2) { ... }
public Exp<T> Par<T>(Exp<T> e1, Exp<T> e2) { ... }
public Exp<T> Where<T>(Exp<T> e1, Exp<Promise<T>> e2) { ... }
}

This is the result of my translation, which was necessitated by the "Where" method. Where introduces a dependency which currently cannot be expressed with ordinary C# constraints, so the module encoding is necessary.

The above interface is a direct, faithful implementation of the Orc semantics. The implementation I currently have is an interpreter for those semantics. What if I want to provide a compiler instead? The interface should remain the same, but the implementation should differ. This is a classic abstraction problem.
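To make the problem concrete, here is a sketch of two structurally identical implementations that C# gives us no way to unify under a single interface. The names InterpreterOrc and CompilerOrc and the stub bodies are my own invention for illustration, not the actual Orc.NET code:

```csharp
using System;

// Hypothetical sketch: two Orc implementations with identical shapes.
// InterpreterOrc would evaluate expressions directly; CompilerOrc would
// build an expression tree for later compilation. Bodies are stubs.
class InterpreterOrc {
    public class Exp<T> { }  // innards: a suspended computation
    public Exp<U> Seq<T, U>(Exp<T> e1, Func<T, Exp<U>> e2) { return new Exp<U>(); }
    public Exp<T> Par<T>(Exp<T> e1, Exp<T> e2) { return new Exp<T>(); }
}

class CompilerOrc {
    public class Exp<T> { }  // innards: an expression tree
    public Exp<U> Seq<T, U>(Exp<T> e1, Func<T, Exp<U>> e2) { return new Exp<U>(); }
    public Exp<T> Par<T>(Exp<T> e1, Exp<T> e2) { return new Exp<T>(); }
}
```

Any shared interface would have to mention Exp abstractly, as E<T> for some generic parameter E, and that is exactly the generics-over-generics feature C# lacks: as far as the type system is concerned, the two Exp<T> types above are unrelated.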

It's clear that the implementations can be changed at compile-time, but providing a different Orc implementation at runtime, something very natural for OO languages, does not seem possible. The reason is that C# lacks the requisite polymorphism.

A generic type is a type constructor, which means that from an abstract type definition, Exp<T>, you can construct a concrete type by providing a concrete type T, such as Exp<int>. But, if Exp<T> is not a concrete type until you provide it a concrete type T, what is the type of the abstract definition Exp<T>?
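Interestingly, the CLR does reify this distinction at runtime, even though the static type system cannot quantify over it. A small reflection sketch, using List<> as a stand-in since Exp<T> here is hypothetical:

```csharp
using System;
using System.Collections.Generic;

class TypeConstructorDemo {
    static void Main() {
        // typeof(List<>) denotes the type constructor itself,
        // an "open" generic type with its parameter unapplied.
        Type constructor = typeof(List<>);
        Console.WriteLine(constructor.IsGenericTypeDefinition);  // True

        // Applying the constructor to int yields a concrete type.
        Type concrete = constructor.MakeGenericType(typeof(int));
        Console.WriteLine(concrete == typeof(List<int>));        // True
    }
}
```

So at the runtime level, the unapplied Exp<T> is a "generic type definition"; the trouble is that this notion exists only dynamically, with no static, polymorphic counterpart.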

Reasoning about type constructors requires the introduction of something called "kinds". As types classify values, so kinds classify types. Concrete types, such as int or Exp<int>, have kind *. Type constructors, such as Exp<T> itself, have kind *⇒*, which is to say they are "compile-time functions" accepting a concrete type and producing a concrete type.

Now consider that we have multiple Exp<T> implementations, say Interpreter.Orc.Exp and Compiler.Orc.Exp, all with different innards, but all defining the same operations and thus theoretically interchangeable. We would like to abstract over the different implementations of Exp<T> so that we can use whichever one is most appropriate. In other words, we want to make our code polymorphic over a set of generic types Exp<T>.

This necessitates abstraction over type constructors, which has the kind (*⇒*)⇒*: a compile-time function that accepts a type constructor of kind *⇒*, such as Interpreter.Orc.Exp or Compiler.Orc.Exp, and produces a concrete type. Note that this is different from *⇒*⇒*, which is merely a constructor taking two type arguments in turn, like Dictionary<K,V>.
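If C# did permit such abstraction, it might hypothetically look like the following. This is not valid C#, just a sketch of the missing feature, with invented notation for the higher-kinded constraint:

```
// NOT valid C#: generic parameters cannot themselves be generic.
// "E : * => *" is invented notation for a higher-kinded constraint.
interface IOrc<E> /* where E : * => * */
{
    E<U> Seq<T, U>(E<T> e1, Func<T, E<U>> e2);  // error: E<T> is illegal
    E<T> Par<T>(E<T> e1, E<T> e2);
    E<T> Where<T>(E<T> e1, E<Promise<T>> e2);
}
```

With this, client code could be written once against IOrc<E> and instantiated with either Interpreter.Orc.Exp or Compiler.Orc.Exp, swapping implementations even at runtime.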

At the moment, I can't think of a way to encode such polymorphism in C#. Functional languages such as Haskell and the ML family (via functors) provide this level of polymorphism, and some even provide higher-order kinds, meaning type constructors can be arguments to other type constructors, thus achieving entirely new levels of expressiveness.

Comments

Matt Hellige said…
As I'm sure you know, Scala has added this kind of polymorphism (no pun intended). I would be curious to see whether your approach works in that setting, since it's fairly close to C# in spirit.
Sandro Magi said…
Is the Scala implementation type-safe Java without casts?
Matt Hellige said…
I'm not sure I understand what you mean. I think your idea can be implemented in a type-safe way in Scala, so yes, I believe it would be type-safe in Scala with no casts.

But I'm not sure what this has to do with Java... The Scala compiler is written in Scala, and produces JVM bytecode. JVM classes compiled from Scala are callable from Java, but generic types are erased, so if the client code is in Java, I think it would require casts.

But I'm not sure that answers your question... Can you clarify what you meant?
Sandro Magi said…
I was asking whether using Scala code from Java was type-safe. Clearly Scala has some semantics-preserving translation from their more expressive calculus to the Java calculus. I was wondering whether this translation required casts, and so is "unsafe" for Java clients to use naively.

They will also require a similar translation for a .NET implementation of Scala, with a possibly important difference: .NET preserves runtime types, so type-safety is assured unless they erase everything to System.Object, as with Java.

My goal is to understand whether Scala's translation can be used to embed more powerful idioms than my translation is capable of.
Matt Hellige said…
Thanks, now I do see what you mean.

Scala's translation to JVM byte-code does use erasure, and so does introduce casts (as I mentioned). But then, the translation of regular Java to JVM byte-code uses the same technique. I do not believe there is any complete translation from Scala to the Java language (as opposed to bytecode), but I suspect such a translation would require casts.

The .NET backend for Scala is incomplete, so I do not know whether the CLR's non-erased generics would eliminate the need for casts. In short, I don't think the Scala compiler implements the kind of type-safe encoding you're looking for, but I could be wrong.

In any case, looking only at Scala source code, I would still be curious to see whether your pattern can be expressed. Mainly I'm interested in probing the stylistic and technical differences between Scala, Haskell and OCaml. If I can find time, maybe I'll look at your code and give it a try. I'm 90% sure that Scala can express this.
Sandro Magi said…
The approach I took with Orc is the tagless, staged interpreters approach, so that's probably the more interesting question.
