Sunday, October 18, 2015

Versioning Domain Entities with Ordinary SQL

In any system with mutable entities saved in an object store, it's inevitable that you'll want to audit the changes made to a particular entity. Explicitly versioning your entities solves this problem, and it's often pretty useful to end users of a sophisticated application, say for undo purposes. However, it's not always so clear how to transform a mutable entity into a versioned entity.

There are also a number of complications to versioning mutable objects:

  1. Handling parent-child relations such that a child change does not propagate to every parent
  2. Supporting efficient queries on history
  3. Supporting efficient queries for the newest version
  4. Space complexity of data representation
  5. Concurrent changes

Some of the above properties are also in natural tension, i.e. supporting efficient queries often requires storing more data, which worsens space complexity. I'm going to provide an overview here of a very simple schema for versioned entities with the following properties:

  1. Queries on history are just as efficient as queries on current data
  2. Queries on history have the same form as queries on current data, i.e. no separate history table that must be handled specially -- in fact, you can use the exact same queries for both history and current data, thus eliminating all query duplication!
  3. Parents and children can evolve independently, i.e. data update is "strictly disjoint access parallel" in transactional concurrency parlance: as long as two updates don't update the same object, they will both commit successfully


The schema changes required are quite straightforward for any entity with a primary key (which should be all of them!):

  1. Add a version number column of type int or long: this is a monotonically increasing version number, where version=0 is used when the object is first created, version=1 is after the first change is committed, etc.
  2. Ensure the PK index orders the version number in DESCENDING order, because the highest version number is the most current version
  3. Change the primary key to be composite over the existing PK and the new version number -- the original PK is now called the "logical id"
  4. Add a revisionDate column storing the date the version was committed
  5. Add an index over the logical id and the revisionDate, again with revisionDate in DESCENDING order

That's it! The original entity's PK is now a logical id rather than an actual instance id, so accessing a particular version of an entity now requires both the logical id and the version. The latest version is simply the entry with the largest version number.


Suppose we have an entity Foo whose PK is called "id", and that :paramName is the syntax for a named query parameter. Ordinary queries on entities now become the following queries on versioned entities:

  • Insert a new entity:
    INSERT INTO Foo(id, ..., version, revisionDate)
    VALUES (:id, ..., 0, GETDATE())
  • Get the most current version:
    WHERE id=:id
    ORDER BY version DESC
  • Get a version before a specific date/time:
    WHERE id=:id AND revisionDate <= :rev
    ORDER BY version DESC
  • Update an entity:
    INSERT INTO Foo(id, ..., version, revisionDate)
    VALUES (:id, ..., :cur_version_plus_1, GETDATE())

Updates on entities are now inserts, so the entities at the data store level are now effectively immutable and the entity table is an append-only log of all changes to those entity types. Since the indexes are ordered so that the latest version is always first, the impact on accessing the most current version should be minimal. Accessing older versions is similarly easy, consisting of a query on the revisionDate column.
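To make the lifecycle concrete, here's a minimal sketch using SQLite from Python. The Foo table, its columns, and the literal dates are all illustrative (the queries above assume SQL Server's GETDATE()):

```python
import sqlite3

# A sketch of the versioned schema in SQLite; table, columns, and dates
# are illustrative, not prescriptive.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE Foo (
        id           INTEGER NOT NULL,  -- the "logical id"
        name         TEXT NOT NULL,
        version      INTEGER NOT NULL,  -- 0 on creation, then increments
        revisionDate TEXT NOT NULL,
        PRIMARY KEY (id, version DESC)  -- latest version sorts first
    )""")

# Creating an entity inserts version 0.
db.execute("INSERT INTO Foo VALUES (1, 'first', 0, '2015-10-01')")

# "Updating" the entity is just another insert with version + 1.
db.execute("INSERT INTO Foo VALUES (1, 'second', 1, '2015-10-05')")

# The most current version is the first row in version-descending order.
latest = db.execute(
    "SELECT name FROM Foo WHERE id = ? ORDER BY version DESC LIMIT 1",
    (1,)).fetchone()
print(latest[0])  # second
```

Note that no UPDATE statement ever appears: the table only ever grows.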

We use the revisionDate and not the version number when querying over history so that we can easily support queries over more than one entity. Since each entity evolves its version independently of all others, the version numbers of two connected entities, say Foo[id=1] and Foo[id=2], are completely unrelated. The former might have version=999, and the latter version=1. The change that incremented Foo[id=1] from version 998 to 999 could have been the same change that took Foo[id=2] from version 0 to 1. So if you look at Foo[id=1] at version=998, the only way to see the correct version of Foo[id=2] is to return its latest version whose revisionDate is <= the revisionDate of Foo[id=1] at version=998.

Fortunately, this makes the queries extremely simple, since a complicated query for a whole set of interconnected entities needs only a single date parameter. In other words, you can compose two independent queries using the same revisionDate parameter name via simple concatenation into a multiquery, and it just works.

Furthermore, your UI then only needs to pass around an optional revisionDate parameter to switch between queries on the latest data and queries on history. In fact, switching the query isn't even needed, because you can use the same query for both!

WHERE id=:id AND (:rev IS NULL OR revisionDate <= :rev)

As promised, this query unifies the queries for the most current version and any historical version of an entity. If the :rev parameter is set to NULL, the most recent version is returned, and the query optimizer can easily eliminate the redundant WHERE clause, so there should be no performance impact. If :rev is present, the redundant NULL check should similarly be eliminated, and the last change before the given date/time is returned. Alternatively, you can always use the query with the revisionDate and simply pass in DateTime.Now, which will always return the latest result.
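A quick sketch of this unified query, again using SQLite from Python (table and column names illustrative), shows one parameterized query serving both cases:

```python
import sqlite3

# Illustrative setup: one entity with two versions.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE Foo (id INTEGER, name TEXT, version INTEGER,
              revisionDate TEXT, PRIMARY KEY (id, version DESC))""")
db.executemany("INSERT INTO Foo VALUES (?, ?, ?, ?)",
               [(1, 'first', 0, '2015-10-01'),
                (1, 'second', 1, '2015-10-05')])

def get(db, id, rev=None):
    # One query serves both cases: a NULL :rev disables the date filter,
    # returning the most current version; a non-NULL :rev returns the
    # last version committed on or before that date.
    row = db.execute(
        """SELECT name FROM Foo
           WHERE id = :id AND (:rev IS NULL OR revisionDate <= :rev)
           ORDER BY version DESC LIMIT 1""",
        {"id": id, "rev": rev}).fetchone()
    return row[0]

print(get(db, 1))                 # second (current version)
print(get(db, 1, '2015-10-02'))   # first (historical version)
```

The caller only ever decides whether to pass a revision date; the query text never changes.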


Being strictly disjoint access parallel, this schema on mutable entities is pretty much as good as it gets concurrency-wise. The entity table is essentially an append-only log of changes, and the new composite primary key ensures that any entity can undergo only one change event at a time. This ensures that any invariants in the application's domain model are preserved once committed to the entity store.

This is also the ideal optimistic concurrency model for any CRUD application.

Space Complexity

The space complexity is obviously sub-optimal since every version is stored. However, managing this space is a breeze, since you can easily write a query to delete all entity history before the current version, or keep the N latest versions, or basically any storage policy that makes sense for your business.

One serious concern is "large" columns in an entity, say an unbounded text column, or binary data. In this case, you should perform an additional refactoring of your schema to model git's way of doing things: add a Blob table that holds the data indexed by its hash, then store just the hash in the entity table. This keeps the entity table compact, and ensures you only ever store one copy of a binary blob, which you should probably be doing anyway.
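A minimal sketch of that content-addressed Blob table; the names and the choice of SHA-256 here are mine, not prescribed above:

```python
import hashlib
import sqlite3

# A sketch of the git-style Blob table; store_blob and the hash choice
# are illustrative assumptions.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Blob (hash TEXT PRIMARY KEY, data BLOB)")

def store_blob(db, data):
    # Content-addressed: identical payloads share a single row, so an
    # unchanged blob costs nothing across entity versions.
    h = hashlib.sha256(data).hexdigest()
    db.execute("INSERT OR IGNORE INTO Blob VALUES (?, ?)", (h, data))
    return h  # the entity row stores this hash, not the data itself

h1 = store_blob(db, b"a large attachment")
h2 = store_blob(db, b"a large attachment")  # unchanged in a later version
assert h1 == h2
assert db.execute("SELECT COUNT(*) FROM Blob").fetchone()[0] == 1
```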


There are a number of variations on this schema, like normalizing the revision date into a separate Revisions table to which you can add a userId to blame for the change, etc. But then the Revision table becomes a central bottleneck to insert performance and updates are no longer strictly disjoint access parallel. Furthermore, all select queries must now join on this Revisions table to find the revision date. I think the schema I've laid out provides the optimal performance tradeoff with little enough space overhead.

Versioned ORM

Standardizing this schema as the default operating mode for an ORM would be ideal. This sort of ubiquitous schema management should be automatic, and now that ORMs are moving to code-first designs where they manage the whole schema anyway, this should be a snap, right?

The API interacting with the ORM remains largely unchanged, but all change submissions transparently become inserts, and all select queries automatically take the universal query form described above. A query on history would then simply be a query that starts with a date or version parameter, i.e. instead of context.Foo.Where(...), it would be something like context.Foo.Before(DateTime?).Where(...).


I'm not the first to describe this schema for versioning, but I haven't seen a description quite so comprehensive, detailing the queries and its concurrency properties.

These days most of the buzz is around CQRS and event sourcing. This schema has some connections to event sourcing, since entity tables are now append-only logs with the entire history remaining intact. However, the entity change events aren't domain events as they would be in event sourcing.

CRUD applications don't get much attention anymore, but I agree with Martin Fowler that most simple domains are still better served by a CRUD model than by CQRS, at least given our current tools. Domain entities usually define proper concurrency boundaries (and if they don't, your domain model is probably wrong!), so this schema is a natural fit that still ensures a domain model can naively enforce its invariants in ordinary code, and if those changes succeed then those invariants are preserved in the entity store.

Sunday, October 4, 2015

Efficient Curried Application

I just wanted to jot down some thoughts on compiling eval/apply curried application using an efficient register-based calling convention. This translation is actually quite simple once you get the trick. Leroy's Zinc abstract machine showed the way: evaluate arguments right-to-left instead of the typical left-to-right. The ZAM wrote these arguments to a stack, but in our case we want them to go to the register file first.

So we evaluate arguments right-to-left and write them into registers left-to-right, i.e. argN goes in reg0, argN-1 goes in reg1, ..., arg0 goes in regN. At some threshold, we'll run out of registers. If your program is in direct-style with a stack, this threshold is often defined by the caller-save registers of the native calling convention. If your program is in CPS form, then you can use the entire register file.

Once you run out of registers, you have to spill the remaining arguments to the stack. However, the stack is simply a register pointing to some base address, with all other values accessed at an offset from that base. But so is the closure record! If you lay out your closure record in memory so it has the same layout as the stack frame would, then you can just treat it like a small stack. This can save many memory access operations moving data between closures and registers.

Suppose our register spill threshold is 2 registers (reg0 and reg1), with a third register left for the remaining arguments (reg2), and consider the following C-style function:

int foo(int x, void *y, int z);

Here we have three arguments, one of which will need to be spilled to the stack since we can only allocate two when calling foo. Arguments z and y will go in registers, while argument x will go on the stack. However, the compilation of foo will require a certain layout in memory for optimal efficiency as well (assuming 32-bit pointers):

  loadi reg0, reg2 // reg0 = *reg2
  subi  reg2, 4    // reg2 -= sizeof(int)
  loadi reg1, reg2 // reg1 = *reg2
  subi  reg2, 4    // reg2 -= sizeof(void*)
  ...              // foo: reg0=arg2, reg1=arg1, reg2=&struct{arg0}

Hopefully the pseudo-assembly is clear. Function foo gets several "prologues", one per argument, which are used for partial applications. Each prologue loads an argument from the "stack"/closure record into a register and decrements the pointer so it points to the next argument. By the time all the registers are used up, you should have exactly the "stack frame" that you need left, and the register-passed arguments are all in their proper place.

For instance, when you partially apply foo to 1 argument, the function pointer value of the closure record is actually pointing to foo_papp1, and when you partially apply with two arguments, it's pointing to foo_papp2. This pattern is the same for any number of arguments, and the code will all have the same size for each argument, which we will need later when building partially applied closures.

Now consider the following curried applications of foo:

int(void*,int) foo_1 = foo(0);
int(int) foo_2 = foo(0, somePointer);

Hopefully the pseudo C-style closure syntax is clear, i.e. int(char) is a closure taking a char and returning an int. The above are two different partial applications, foo_1 which applies only the first argument, and foo_2 which partially applies the first and second arguments. These are what the closures will look like in memory:

foo_1 →
function = &foo_papp1
arity = 2
applied = 1
x = 0
foo_2 →
function = &foo_papp2
arity = 1
applied = 2
y = somePointer
x = 0

Now when you try to apply values foo_1 and foo_2 to the remaining arguments, this actually performs a dynamic check to see how many arguments are required. For instance, fully applying a closure with the signature int(int, int) elaborates to the following pseudo-code:

obj_t returnValue;
RETRY:
switch (closure->arity) {
case 0:
  closure = closure->function();
  goto RETRY;
case 2:
  returnValue = closure->function(arg0, arg1);
  // reg0=arg1, reg1=arg0, reg2=&closure->arity + 1
  break;
case 1:
  closure = closure->function(arg0);
  returnValue = closure->function(arg1);
  break;
}

You have to dynamically check if you're applying all remaining arguments. I handle the case of arity=0 to show that intermediate closures do exist, but you don't typically need that specific case.

When performing a partial application, you also have to check the arity and build an intermediate closure referencing the proper function address, which changes depending on how many arguments have been applied. So given a 3 argument closure to which you're applying n < 3 arguments:

closure nclo;
obj_t returnValue;
if (closure->arity == n) {
  returnValue = closure->function(arg0, ..., argn);
} else {
  nclo = malloc(3 * sizeof(void*) + (closure->applied + n) * sizeof(void*));
  nclo->arity = closure->arity - n;
  nclo->applied = closure->applied + n;
  nclo->function = closure->function - n * SIZE_OF_FUNCTION_PROLOGUE; // new entry point: one prologue per newly applied argument
  nclo->arg0 = arg0;
  ...
  nclo->argn = argn;
  // copy the previously applied arguments after the newly applied ones
  memcpy(&nclo->argn + 1, &closure->applied + 1, closure->applied * sizeof(void*));
}

The most general case to handle is when you're in fully polymorphic code for which you cannot know if applying the closure you have is a partial application. This is a combination of the last two elaborations, but still requires only a simple switch statement to handle all cases.
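The general case can be modeled at a high level in Python, ignoring registers and prologues entirely; Closure and apply here are illustrative stand-ins for the compiled representation, not the actual scheme described above:

```python
# A high-level model of the dynamic dispatch only: partial application
# builds a new closure, saturating application calls the function, and
# over-application keeps applying the result.
class Closure:
    def __init__(self, fn, arity, applied=()):
        self.fn = fn            # the underlying n-ary function
        self.arity = arity      # arguments still required
        self.applied = applied  # arguments captured so far

def apply(clo, *args):
    while True:
        if len(args) < clo.arity:
            # Partial application: build a new closure, don't call fn.
            return Closure(clo.fn, clo.arity - len(args), clo.applied + args)
        # Saturating application: call with all captured + new arguments.
        result = clo.fn(*(clo.applied + args[:clo.arity]))
        if len(args) == clo.arity:
            return result
        # Over-application: the result is itself a closure; keep applying.
        clo, args = result, args[clo.arity:]

foo = Closure(lambda x, y, z: x + y + z, 3)
foo_1 = apply(foo, 1)       # partial application: 2 arguments remain
print(apply(foo_1, 2, 3))   # 6
```

The compiled version replaces this interpretive loop with a switch on the arity field and direct jumps to the appropriate prologue.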

Now the most boring case: when a function is not a closure and it's being fully applied with all arguments at a call site. In this case you fill the registers as usual, then push the remaining arguments on the real program stack, and pass the pointer to that activation frame as the "stack" argument. Simple!

This is only a high-level overview for myself, so no doubt I've skipped some important details, or gotten them wrong while jotting these notes down. Please let me know if you spot something!


Hopefully that didn't ramble so much that the core idea was obscured. The core idea is that all closures are functions taking a fixed number of arguments defined by the number of caller-save registers. These arguments are passed in registers, and a last register is reserved for a "stack frame" holding the remaining arguments. The closure record can act as this stack frame, so you don't have to copy values between records and the stack on closure application.

Furthermore, each function has a "prologue" generated for each argument laid out prior to the actual function being executed in memory. These prologues are used when applying partially applied functions, and they load any previously applied arguments from the closure record into registers, assuming all the reserved registers aren't already used.

This is basically an implementation of the eval/apply strategy, with a few tweaks to reduce the amount of redundant code generated and reuse as much memory as possible. I don't think there's anything particularly novel, but I haven't seen all the details laid out like this (although I may of course have omitted some details I consider "obvious", even if they aren't to others). If you have any further suggestions, I'd love to hear them!


I don't discuss any of the complexities surrounding unboxed types of non-pointer size, which considerably complicate calling conventions. Most functional languages require a universal representation for this reason, usually of pointer size, which is what I assume here. See my series on Garbage Collection Representations to get a feel for how these values translate to machine code, and how this would impact closure representations.

Wednesday, August 26, 2015

Algebra.NET: A Simple Algebra eDSL for .NET

Algebra.NET is a simple library designed to facilitate easy expression and manipulation of algebraic functions. For instance, here's a simple function:

Function<Func<double, double>> a = Algebra.Function(x => 2 * x + 1);

We can compile such a function to efficient IL:

Func<double, double> func = a.Compile("times2plus1");

Or we can apply some algebraic identities to rewrite it:

Identity commutative = Algebra.Identity(x => x + 1 == 1 + x);
Identity mulEqAdd = Algebra.Identity(x => 2 * x == x + x);
Console.WriteLine(a.Rewrite(1, commutative, mulEqAdd));

// Prints:
// ((2 * x) + 1)
// (1 + (x + x))

Rewrites can sometimes loop forever (consider "x + y == y + x"), so the Rewrite method takes a number indicating the maximum number of iterations to perform all the rewrites.

All the usual arithmetic operations are available, including an extension method for exponentiation:

var f = Algebra.Function(x => x.Pow(3));

// Prints:
// (x ^ (3))


As of this writing, Algebra.NET is a functional example of a simple term rewriting system. Term rewriting is usually pretty awkward to express in an object-oriented language, and I banged my head against the keyboard to figure out a nice way to do it, until I hit on just doing unification (of course!).

So I reused the term language and added an equality operator to generate an identity that conceptually maps one term to another. I then perform unification on the left hand side, and generate a set of substitutions to transform the matching term into the right hand side of the identity.

It was ultimately quite simple, consisting of 3 methods on Term:

Term Rewrite(Identity e, Term[] bindings)
bool TryUnify(Term e, Term[] bindings)
Term Substitute(Term[] bindings)

Rewrite tries to recursively unify the Identity's left hand side with the current term using TryUnify. On success, the 'bindings' array will have been populated by TryUnify with the substitutions to perform, so it substitutes the bindings into the identity's right hand side to generate the new term.

There are only 3 term types: constants, variables and binary operations. Negation is handled as a binary operation "0 - x" for simplicity. The unification methods on each of the term types are only a few lines of code each, but are quite powerful!
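To give a feel for how little machinery this takes, here's a toy version of those three methods in Python, with terms as plain tuples. This is an illustrative sketch, not Algebra.NET's actual implementation:

```python
# Terms are tuples: ('const', n), ('var', name), or (op, left, right).
def try_unify(pattern, term, bindings):
    # Match the identity's left-hand side against 'term', recording
    # variable substitutions in 'bindings' on success.
    if pattern[0] == 'var':
        if pattern[1] in bindings:
            return bindings[pattern[1]] == term
        bindings[pattern[1]] = term
        return True
    if pattern[0] == 'const':
        return pattern == term
    return (term[0] == pattern[0]
            and try_unify(pattern[1], term[1], bindings)
            and try_unify(pattern[2], term[2], bindings))

def substitute(template, bindings):
    # Instantiate the identity's right-hand side with the bindings.
    if template[0] == 'var':
        return bindings[template[1]]
    if template[0] == 'const':
        return template
    return (template[0],
            substitute(template[1], bindings),
            substitute(template[2], bindings))

def rewrite(term, lhs, rhs):
    # Rewrite at the root only; the real Rewrite recurses over subterms.
    bindings = {}
    return substitute(rhs, bindings) if try_unify(lhs, term, bindings) else term

# The identity 2 * x == x + x applied to the term 2 * y:
lhs = ('*', ('const', 2), ('var', 'x'))
rhs = ('+', ('var', 'x'), ('var', 'x'))
print(rewrite(('*', ('const', 2), ('var', 'y')), lhs, rhs))
```

Unification populates the bindings, substitution replays them into the right-hand side; everything else is plumbing.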

So if you want to understand expression compilation to CIL, unification, or term rewriting, this is pretty much as simple as it gets.

Algebra.NET doesn't perform any term simplification at this point, only term rewriting. Some rewrites may of course be simplifications, but a term like "0 - 3" will not be simplified to "-3".

Future Work

As mentioned, Algebra.NET doesn't perform simplification, so that's a big one. I started developing this to work on symbolic and automatic differentiation for numerical optimization problems. I'm aware of other .NET libraries for this, but I didn't like how clumsy it was to write algebraic expressions, nor did they have nice and extensible facilities for rewriting expressions. So I created this in my spare time and intend to continue fleshing it out as needed.

Monday, August 10, 2015

Sasa v0.17.0 Released

A few new features in this release, and a few bugfixes and enhancements in MIME parsing. As before, the online docs are available on my site, the full release including experimental assemblies on Sourceforge, and the production assemblies on Nuget. The changelog:

 * sasametal can now replace columns and FK associations with enums that are
   provided separately
 * sasametal can now output an interface file, which generates an interface
   describing the database entities
 * many MIME fixes thanks to a few bug reports on Sasa discussion list
 * added System.Data, which provides useful extensions to ADO.NET and
   IQueryable, like a simple interface for batched SQL queries
 * MIME parser now accepts an optional parsing parameter that controls various
   parsing options, like how strictly e-mail address formatting is accepted,
   and whether HTML views should be parsed into MailMessage.Body
 * added enum as a primitive to Sasa.Dynamics IReducer and IBuilder
 * fixed Sasa.Enums.IsSequential check

Sasa.Dynamics probably isn't used much, but in my experiments I found it increasingly important to handle enums as a primitive, so type reduction and construction (IReduce and IBuild respectively) now includes it as such.

I created sasametal to ease the pain of maintaining several legacy systems that depend upon Microsoft's Linq2Sql. It first renames some associations to make them more meaningful, since Linq2Sql does a bad job of this in some cases. This made some integrations with ORMs like EntityFramework and NHibernate a little easier, but there still remained some painful points.

One of these pains was handling enums, i.e. an entity set whose members are effectively fixed. Sasametal now takes enums into account via a command-line option, /enum:[file path], that accepts a file listing table-to-enum rewrites, one per line. For instance, if an entity Foo had a StatusId column with a FooStatus enum for the status, we could call sasametal like so:

C:\somepath> sasametal ...[other option]... /code:FooDb.cs /enum:enums.txt

and the enums.txt contains merely:


You should provide the full table name, including the schema, as sqlmetal would generate it, along with the fully qualified name of the enum you want to substitute. Sasametal will then eliminate the FooStatus entity and any FK associations linked to that entity, then update the type of any properties for that foreign key to use the enum type. The entities generated by sasametal are thus somewhat more efficient, and I've found it easier to use Linq2Sql as-is for quick projects against legacy systems.

When combined with sasametal's other new feature, generating interfaces describing the entities, it also eases the transition to more sophisticated ORMs that already handle enums.

Monday, June 15, 2015

Sasa v0.16.1 Released

Just a bugfix release, mainly for the MIME parsing. Changelog:

* added support for 8bit transfer encoding, even if not supported by .NET <4.5
* support content types whose prologue is empty
* added support for arbitrarily nested multipart messages
* added alternative to Enum.HasFlag that invokes Sasa's faster enum code

As usual, online docs are available, or a CHM file is available on Sourceforge.

Saturday, June 6, 2015

C# Enums with [Flags]

I've had numerous posts here describing instances where C# has come so close to getting it right, and yet misses the mark by an inch. The original C# enums have a simple semantics:

enum SomeEnum
{
  First,   // compiler implicitly assigns 0
  Second,  // compiler implicitly assigns 1
  Third,   // compiler implicitly assigns 2
  Fourth,  // compiler implicitly assigns 3
}

This worked nicely as a concise expression of a need for a set of distinct values, but without caring what they are. C# later introduced the [Flags] attribute, which signals to the compiler that a particular enum isn't actually a set of disjoint values, but a set of bit flags. However, the compiler doesn't actually change its behaviour given this change of semantics. For instance, the following enum's values are completely unchanged, despite the semantically meaningful change to a set of bitwise flags:

[Flags]
enum SomeEnum
{
  First,   // compiler still implicitly assigns 0
  Second,  // compiler still implicitly assigns 1
  Third,   // compiler still implicitly assigns 2
  Fourth,  // compiler still implicitly assigns 3
}

SomeEnum.First is now not a valid flag, and SomeEnum.Fourth is now equivalent to (SomeEnum.Second | SomeEnum.Third). The native enum behaviour is useless as a set of enumerated flags.

I suspect most people would argue that if you're interested in bitwise flags, you generally need to explicitly specify what the flag values ought to be. I don't think this is true. In fact, the only place where this is true is where the flags are defined by an external system, for instance read/write flags when opening file streams.

Exactly the same argument could be levelled against the native enum behaviour too. Just as native enums provide a correct default semantics for when you don't care about the values, so flags should provide a correct default semantics for when you don't care.
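For comparison, some languages do ship such a default. Python's enum.Flag, for example, assigns each member a fresh power of two, which is the behaviour being argued for here (shown only as a point of comparison, not as part of C#):

```python
from enum import Flag, auto

# Python's enum.Flag: auto() defaults members to distinct powers of two,
# so an enum declared with no explicit values is already a valid flag set.
class SomeEnum(Flag):
    First = auto()   # 1
    Second = auto()  # 2
    Third = auto()   # 4
    Fourth = auto()  # 8

# Every member is a distinct bit, so combinations never collide with
# other members:
assert SomeEnum.Fourth != SomeEnum.Second | SomeEnum.Third
print(SomeEnum.First.value, SomeEnum.Fourth.value)  # 1 8
```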

Monday, April 13, 2015

Sasa v0.16.0 Released

Mainly a bugfix release, with only a few new features. As always, the docs are available here online, or as a compiled CHM file on sourceforge. Here's the changelog:

 * a few bugfixes to MIME parsing for header and word decoding (Thanks Evan!)
 * added combined SubstringSplit function for more space efficient parsing
 * explicitly parse e-mail addresses for more flexible address separators
 * NonNull now throws an exception if the encapsulated value is null, which
   can happen when the instance is invalidly constructed (Thanks Mauricio!)
 * build now checks for the presence of the signing key and ignores it if
   it's not present
 * a more efficient Enum.HasFlag using codegen
 * added a new ImmutableAttribute which Sasa's runtime analyses respects
 * FilePath's encapsulated string now exposed as a property

Thanks to Evan/iaiken for a number of bug reports, and to Mauricio for feedback on the NonNull<T> pattern.

Monday, March 2, 2015

Sasa v0.15.0 Released

This release features a few new conveniences, a few bugfixes and a vector that's faster than any other I've managed to find for .NET. Here's the changelog since the last v0.14.0 release:

 * more elaborate benchmarking and testing procedures
 * added a minor optimization to purely functional queues that caches
   the reversed enqueue list
 * FingerTree was switched to use arrays as opposed to a Node type
 * optimized FingerTree enumerator
 * FingerTree array nodes are now reused whenever possible
 * Sasa.Collections no longer depends on Sasa.Binary
 * MIME message's body encoding is now defaulted to ASCII if none specified
 * added convenient ExceptNull to filter sequences of optional values
 * NonNull<T> no longer requires T to be a class type, which inhibited its
   use in some scenarios
 * fixed null error when Weak<T> improperly constructed
 * added variance annotations on some base interfaces
 * added a super fast Vector<T> to Sasa.Collections
 * sasametal: fixed member name generated for compound keys
 * Sasa's Option<T> now integrates more seamlessly in LINQ queries
   with IEnumerable<T> thanks to two new SelectMany overloads
 * added two new array combinators to extract values without exceptions
 * added a few efficient Enumerables.Interleave extensions to interleave
   the values of two or more sequences
 * decorated many collection methods with purity attributes
 * fixed a bug with the Enums<T>.Flags property

A forthcoming post on the recent optimizations I've applied to Sasa's immutable collections will show the dramatic performance improvements compared to Microsoft's and F#'s immutable collections. Sasa's vector is an order of magnitude faster than F#'s, and Sasa's trie is actually faster than the standard mutable System.Collections.Generic.Dictionary up to around 100 elements.

As usual, you can find the individual packages on nuget, or download the full release via Sourceforge. The documentation is available in full as a .CHM file from Sourceforge, or you can peruse it online.

Tuesday, February 24, 2015

µKanren.NET - Featherweight Relational Logic Programming in C#

The µKanren paper is a nice introduction to a lightweight logic programming language which is a simplification of the miniKanren family of languages. The existing µKanren implementation in C# was a translation from Scheme, and thus is verbose, untyped with lots of casts, and non-idiomatic. I also found most of the other Kanren implementations unnecessarily obscure, heavily relying on native idioms that aren't clear to newcomers.

uKanren.NET provides a clear presentation of the core principles of µKanren using only IEnumerable<T> and lambdas, showing that µKanren's search is fundamentally just a set of combinators for transforming sequences of states. The values of the sequence are sets of bound variables that satisfy a set of equations. For instance, given the following expression:

Kanren.Exists(x => x == 5 | x == 6)

You can read it off as saying there exists an integer value to which we can bind variable x, such that x equals either 5 or 6 [1]. Solving this equation produces this sequence of values:

x[0] = 5,
x[0] = 6,

where each line represents a different value that satisfies the equation. These equations can be arbitrarily nested or chained using | and &, just like ordinary logical expressions, and µKanren will find a solution for all variables, if a solution exists. A further benefit of this implementation is that you cannot accidentally mix up boolean equality expressions with relational logic expressions, because relational equality, conjunction and disjunction operators all operate on non-boolean types.

Logic Variables, Goals & States

We start with the simple core, which is just a logic variable identified uniquely by an integer, and with a convenient local name:

public sealed class Kanren
{
    internal int id;

    // the variable's public name, extracted from the delegate parameter
    public string Name { get; internal set; }

    public static Goal operator ==(object left, Kanren right)
    {
        return Equals(left, right);
    }

    public static Goal operator ==(Kanren left, object right)
    {
        return Equals(left, right);
    }

    public static Goal operator ==(Kanren left, Kanren right)
    {
        return Equals(left, right);
    }
}

µKanren.NET variables override the equality operators to generate "Goals". A Goal is represented by a delegate that takes a State and generates a sequence of States:

public delegate IEnumerable<State> Goal(State state);

A State is simply a set of variable bindings that satisfy the equations described by the Goal:

public sealed class State
{
    internal Trie<Var, Kanren> subs;
    internal int next = 0;
    internal Func<Goal> incomplete;

    internal Kanren Get(Var x)
    {
        Kanren v;
        return subs.TryGetValue(x, out v) ? v : null;
    }

    internal State Extend(Var x, Kanren v)
    {
        return new State
        {
            subs = subs.Add(x, v),
            next = next,
            incomplete = incomplete,
        };
    }

    internal State Next()
    {
        return new State
        {
            subs = subs,
            next = next + 1,
            incomplete = incomplete,
        };
    }
}

Each new variable introduced by Kanren.Exists increments the unique variable counter tracked in the State, so each variable gets a unique number. Variable bindings are tracked in an immutable hash map provided by Sasa.Collections.Trie<TKey, TValue> (a fast immutable IDictionary).

µKanren Operators

There are 3 core operations on Goals, constructing a Goal with Kanren.Exists, and combining Goals using either disjunction or conjunction:

public sealed class Kanren
{
    public static Goal Exists(Func<Kanren, Goal> body)
    {
        var arg = body.Method.GetParameters()[0].Name;
        return state =>
        {
            var goal = body(new Kanren { id = state.next, Name = arg });
            return goal(state.Next());
        };
    }

    public static Goal Conjunction(Goal left, Goal right)
    {
        return state => left(state).SelectMany(x => right(x));
    }

    public static Goal Disjunction(Goal left, Goal right)
    {
        return state => left(state).Concat(right(state));
    }

    public static Goal Recurse(Func<Kanren, Goal> body, Kanren x)
    {
        return state => new[]
        {
            new State
            {
                subs = state.subs,
                next = state.next,
                incomplete = () => body(x)
            }
        };
    }
}

Kanren.Exists just creates a fresh variable using the next number in the sequence, and invokes the function body that produces the Goal. The Goal is then evaluated using the given State, but with an updated variable counter to ensure variables are unique.

Conjunction consists of equations like x == 6 & y == 7. For every State generated by the 'left' Goal, conjunction generates a set of States built from applying the 'right' Goal to that same state. So, x == 6 produces a binding assigning x to 6, which is then passed to the Goal for y == 7, which then adds a binding for y = 7. Thus, "x[0] = 6, y[1] = 7" is produced.

Disjunction consists of equations like x == 6 | x == 7. These Goals are independent and don't interfere with each other, so any States generated by 'right' are simply appended to the States generated by 'left'. The 'left' Goal produces x = 6, and 'right' produces x = 7, so the result will be:

x[0] = 6,
x[0] = 7,
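The stream plumbing behind both combinators can be seen in isolation. The sketch below uses toy types of my own (a Dictionary standing in for State), not the library's: conjunction is SelectMany (monadic bind) over the state stream, and disjunction is Concat:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class CombinatorSketch
{
    // Toy goal over a toy state (variable name -> value).
    public delegate IEnumerable<Dictionary<string, int>> SimpleGoal(Dictionary<string, int> s);

    // A goal asserting x == n: succeeds with an extended state if x is
    // unbound or already n, otherwise fails by yielding no states.
    public static SimpleGoal Eq(string x, int n)
    {
        return s =>
        {
            int v;
            if (s.TryGetValue(x, out v))
                return v == n ? new[] { s } : Enumerable.Empty<Dictionary<string, int>>();
            return new[] { new Dictionary<string, int>(s) { [x] = n } };
        };
    }

    // Conjunction: feed every state produced by 'left' through 'right'.
    public static SimpleGoal And(SimpleGoal left, SimpleGoal right)
    {
        return s => left(s).SelectMany(x => right(x));
    }

    // Disjunction: both alternatives are explored from the same input state.
    public static SimpleGoal Or(SimpleGoal left, SimpleGoal right)
    {
        return s => left(s).Concat(right(s));
    }
}
```

Running And(Eq("x", 6), Eq("y", 7)) against an empty state yields the single state {x = 6, y = 7}; Or(Eq("x", 6), Eq("x", 7)) yields two states; And(Eq("x", 6), Eq("x", 7)) is a contradiction and yields none.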

Because C# features eager evaluation, we have to handle recursion in Kanren expressions manually. We do this via Kanren.Recurse, which takes a Goal constructor and a kanren variable, and returns an "incomplete" State, which basically means a State that was only partially evaluated and may produce further states if you resume evaluation:

public sealed class State
{
    // True if this state is final, such that all bindings have values,
    // false if some computation remains to be done.
    public bool IsComplete
    {
        get { return incomplete == null; }
    }

    // Get the pairs of bound variables and their values.
    public IEnumerable<KeyValuePair<Kanren, object>> GetValues()
    {
        return subs;
    }

    // Return the remaining stream of states, if any.
    public IEnumerable<State> Continue()
    {
        if (IsComplete) throw new InvalidOperationException("State is complete.");
        return incomplete()(this);
    }
}

The Kanren client is currently responsible for handling incomplete States while iterating over the results. It's possible to handle this in a more uniform manner with a little more execution overhead, but uKanren.NET doesn't currently do so.
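The same deferral trick can be shown without any Kanren machinery. In an eager language a directly recursive stream definition would loop forever at construction time; wrapping the recursive call in a delegate suspends it until a consumer forces more results, which is roughly what an incomplete State does. A self-contained sketch (my own toy types, not the library's):

```csharp
using System;
using System.Collections.Generic;

// A stream as a head plus a suspended tail: a rough analogue of an
// "incomplete" State, where the thunk plays the role of 'incomplete'.
public sealed class Thunked<T>
{
    public readonly T Head;
    public readonly Func<Thunked<T>> Tail;  // null marks end of stream
    public Thunked(T head, Func<Thunked<T>> tail) { Head = head; Tail = tail; }
}

public static class ThunkSketch
{
    // An infinite stream of fives: the recursive call is wrapped in a
    // lambda, so it only runs when a consumer forces the tail.
    public static Thunked<int> Fives()
    {
        return new Thunked<int>(5, () => Fives());
    }

    // Force only the first n elements of a possibly-infinite stream.
    public static List<int> Take(Thunked<int> s, int n)
    {
        var result = new List<int>();
        while (s != null && result.Count < n)
        {
            result.Add(s.Head);
            s = s.Tail == null ? null : s.Tail();
        }
        return result;
    }
}
```

Take(Fives(), 3) terminates despite Fives being conceptually infinite, because only three tails are ever forced.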


Finally, the real meat of logic programming is resolving equality between logic variables and values. Logic programming solves equations using an algorithm called "unification":

public sealed class Kanren
{
    static object Walk(object uvar, State env)
    {
        // search for final Kanren binding in env
        Kanren v;
        while (true)
        {
            v = uvar as Kanren;
            if (ReferenceEquals(v, null)) break;
            var tmp = env.Get(v);
            if (ReferenceEquals(tmp, null)) break;
            uvar = tmp;
        }
        return uvar;
    }

    static State Unify(object uorig, object vorig, State s)
    {
        var u = Walk(uorig, s);
        var v = Walk(vorig, s);
        var uvar = u as Kanren;
        var vvar = v as Kanren;
        if (!ReferenceEquals(uvar, null) && !ReferenceEquals(vvar, null)
            && uvar.Equals(vvar))
            return s;
        if (!ReferenceEquals(uvar, null))
            return s.Extend(uvar, v);
        if (!ReferenceEquals(vvar, null))
            return s.Extend(vvar, u);
        if (u.Equals(v))
            return s;
        return null;
    }
}

Unification brings together all equality declarations between variables and values and ensures they are all consistent. If two variables are unified, then they are either the same variable, or the variable values must unify. If a variable and value are unified, then that value is bound to that variable in the State by extending it. If two values are unified, then they must be equal to each other. If none of these conditions hold, then a null State is returned, indicating an inconsistent set of equality conditions.
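As a standalone illustration of the same walk-then-unify algorithm (using my own toy variable type and a plain dictionary of bindings, not the post's State):

```csharp
using System;
using System.Collections.Generic;

static class UnifySketch
{
    // Toy logic variable; compared by reference, like the post's Kanren vars.
    public sealed class LVar
    {
        public readonly string Name;
        public LVar(string name) { Name = name; }
    }

    // Follow variable-to-variable bindings to a final value or unbound var.
    public static object Walk(object term, Dictionary<LVar, object> subs)
    {
        var v = term as LVar;
        object bound;
        while (v != null && subs.TryGetValue(v, out bound))
        {
            term = bound;
            v = term as LVar;
        }
        return term;
    }

    // Returns an extended substitution if u and v can be made equal,
    // the input substitution if they already are, or null on conflict.
    public static Dictionary<LVar, object> Unify(
        object u, object v, Dictionary<LVar, object> subs)
    {
        u = Walk(u, subs);
        v = Walk(v, subs);
        var uv = u as LVar;
        var vv = v as LVar;
        if (uv != null && vv != null && ReferenceEquals(uv, vv)) return subs;
        if (uv != null) return new Dictionary<LVar, object>(subs) { [uv] = v };
        if (vv != null) return new Dictionary<LVar, object>(subs) { [vv] = u };
        return u.Equals(v) ? subs : null;
    }
}
```

Unifying x with 5 binds x; then unifying y with x makes y walk to 5 as well; a later attempt to unify y with 6 returns null, flagging the inconsistency.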

Wrap Up

The real uKanren.NET implementation provides a few tweaks to interleave certain streams, and to provide convenient operators on Goals so you can easily combine more sophisticated expressions:

static Goal Fives(Kanren x)
{
    return x == 5 | Kanren.Recurse(Fives, x);
}

static Goal Sixes(Kanren x)
{
    return 6 == x | Kanren.Recurse(Sixes, x);
}

static Goal FivesAndSixes()
{
    return Kanren.Exists(x => Fives(x) | Sixes(x));
}

// output of FivesAndSixes:
// x[0] = 5,
// x[0] = 6,
// x[0] = 5, x[0] = 5, ... [infinite stream of fives]
// x[0] = 6, x[0] = 6, ... [infinite stream of sixes]

However, the implementation described above covers the core operation of µKanren, and the rest is just syntactic sugar to make it nice in C#.

[1] The "Exists" operator was called "call/fresh" in the original paper and in most other implementations. I found it easier to read off uKanren expressions as English with the form I settled on.

Wednesday, January 21, 2015

NHibernate: Associations with Composite Primary Keys as Part of a Composite Primary Key

NHibernate is a pretty useful tool, but occasionally it's not documented in a way that makes its flexibility evident. Composite keys are a particularly difficult area in this regard, as evidenced by the numerous articles on the topic.

Most of the existing articles cover this simply enough, but there is one uncommon corner case I have yet to see explained anywhere: a composite primary key, one of whose key properties is an association with its own composite key. This is probably pretty uncommon, and there are ways around it, hence the lack of examples, but as a testament to NHibernate's flexibility, it's possible! Here's the example in code, listing only the primary keys:

public class MotorType
{
  public Horsepower Horsepower { get; protected set; }
  public VoltageType VoltageType { get; protected set; }
}

public class Motor
{
  public MotorType MotorType { get; protected set; }
  public Efficiency Efficiency { get; protected set; }
}

The tables look like this:

Motors:     HorsepowerId, VoltageTypeId, EfficiencyId  (composite PK over all three columns)
MotorTypes: HorsepowerId, VoltageTypeId                (composite PK over both columns)
Most people would probably map this with the Motor class having the three primary key properties, one for each column, and an additional many-to-one association referencing MotorType, but that shouldn't actually be necessary. Composite primary keys are possible, and the primary key can contain an association, therefore it should be possible, in principle, for that association to itself need a composite key.

And here's how it's done for the Motors table:

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Motor, ExampleProject" table="Motors">
    <composite-id>
      <key-many-to-one name="MotorType">
        <column name="HorsepowerId" />
        <column name="VoltageTypeId" />
      </key-many-to-one>
      <key-property name="Efficiency" column="EfficiencyId" />
    </composite-id>
  </class>
</hibernate-mapping>

The MotorType mapping is a straightforward composite primary key:

<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="MotorType, ExampleProject" table="MotorTypes">
    <composite-id>
      <key-property name="Horsepower" column="HorsepowerId" />
      <key-property name="VoltageType" column="VoltageTypeId" />
    </composite-id>
  </class>
</hibernate-mapping>
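One caveat worth flagging, from NHibernate's documentation on composite identifiers rather than anything specific to this example: classes that serve as (or participate in) a composite id must override Equals and GetHashCode, since NHibernate compares identifier values with them. A hedged sketch of what that looks like, using plain int stand-ins for the association ids instead of the real Horsepower/VoltageType entities:

```csharp
using System;

public class MotorType
{
    // Stand-in key components; in the real model these are associations.
    public virtual int HorsepowerId { get; protected set; }
    public virtual int VoltageTypeId { get; protected set; }

    protected MotorType() { }  // NHibernate needs a no-arg constructor
    public MotorType(int horsepowerId, int voltageTypeId)
    {
        HorsepowerId = horsepowerId;
        VoltageTypeId = voltageTypeId;
    }

    // Value equality over all key components, so two instances denoting
    // the same row compare equal regardless of reference identity.
    public override bool Equals(object obj)
    {
        var other = obj as MotorType;
        return other != null
            && other.HorsepowerId == HorsepowerId
            && other.VoltageTypeId == VoltageTypeId;
    }

    public override int GetHashCode()
    {
        unchecked { return HorsepowerId * 397 ^ VoltageTypeId; }
    }
}
```

Without these overrides, session-level identity checks on composite keys can silently misbehave, because two logically identical keys would hash and compare as distinct objects.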