

Showing posts from September, 2013

CLR: The Cost of Dynamic Type Tests

I recently came across Vance Morrison's blog post on the relative costs of dynamic type tests on the CLR, and I was struck by how much my experience differed from the results he received. In past tests, I had concluded that operations and comparisons using System.Type were just as fast as operations on System.RuntimeTypeHandle . This struck me as a little odd at the time, but numbers don't lie. Vance helpfully provided the code he used for his benchmarks, so I decided to see if perhaps I was mistaken. Lo and behold, the numbers I received from running his code matched my past results, i.e. RuntimeTypeHandle provided no advantage. This seemed extremely odd, and after digging a little deeper, it turns out that Vance and I are both right. I've been doing most of my development on 64-bit x64 machines, and I suspect Vance was running x86 at the time. It turns out that the x64 runtime for the CLR woefully underperforms the x86 runtime, at least for this type of code…
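For reference, the two styles of dynamic type test being compared look roughly like this (a minimal sketch; the method names are mine, not from the post or its benchmarks):

```csharp
using System;

static class TypeTestSketch
{
    // Exact-type check via System.Type: two full Type objects are compared.
    public static bool ViaType(object o) =>
        o.GetType() == typeof(string);

    // Exact-type check via RuntimeTypeHandle: compares the raw handles,
    // which on x86 historically avoided materializing the Type objects.
    // The post's measurements suggest the x64 JIT of the era erased this gap.
    public static bool ViaHandle(object o) =>
        Type.GetTypeHandle(o).Equals(typeof(string).TypeHandle);
}
```

Both return the same answers; the question the benchmarks probe is only which one the JIT compiles to cheaper code.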

CLR Concurrency: Preventing Torn Reads Without Locks

The CLR's value types are incredibly useful for reducing the memory usage of programs, but they have a severe limitation in concurrent scenarios: structs larger than the atomic type on a given machine can suffer from torn reads . Most typical applications won't encounter this because their concurrent accesses, both reads and writes, are protected by locks. However, there are some scenarios where locks just aren't viable for various reasons. For instance, if a few shared variables are read an order of magnitude more often than they're written, then roughly 90% of lock acquisitions are wasted work among readers that aren't performing any updates. In principle, the lock is really there to permit only one writer to modify the variable at a time. This means we can possibly use some other signalling mechanism to notify readers that a write is taking place, or has taken place. Ideally, this mechanism shouldn't cause contention among readers, thus permitting more read parallel…
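One well-known shape for such a signalling mechanism is a seqlock-style version counter: the writer bumps a counter before and after the non-atomic write, and readers retry if they observe an odd or changed version. This is a minimal single-writer sketch under my own assumptions, not the post's actual implementation:

```csharp
using System;
using System.Threading;

// 16 bytes: wider than the atomic word on x64, so plain copies can tear.
struct Pair { public long A, B; }

class VersionedBox
{
    Pair _value;
    int _version;   // even = stable, odd = write in progress

    // Assumes a single writer; concurrent writers would need a lock or CAS.
    public void Write(Pair v)
    {
        Interlocked.Increment(ref _version);   // version becomes odd
        _value = v;                            // non-atomic struct copy
        Interlocked.Increment(ref _version);   // version becomes even again
    }

    public Pair Read()
    {
        while (true)
        {
            int before = Volatile.Read(ref _version);
            Pair copy = _value;                // may tear; detected below
            int after = Volatile.Read(ref _version);
            // Consistent only if the version was even and unchanged around the copy.
            if (before == after && (before & 1) == 0)
                return copy;
        }
    }
}
```

Readers never block each other; they only pay a retry when a write happens to overlap their copy.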

On Confluence and Type Parameter Unification in C#

A while back I had written about a type unification nuisance I had run into. In a nutshell, the problem occurs when a class with two type parameters tries to implement the same interface twice, once for each type parameter:

// compiler error:
// 'Foo<T0,T1>' cannot implement both 'IEnumerable<T0>' and
// 'IEnumerable<T1>' because they may unify for some type
// parameter substitutions
public class Foo<T0, T1> : IEnumerable<T0>, IEnumerable<T1>
{
}

As Doug McClean pointed out in the comments, the reason behind this error is because the two implementations of the interfaces may not be confluent, i.e. behaviourally identical, in which case there's no legitimate way to choose between the two. The application I had in mind at the time used marker interfaces, i.e. interfaces with no methods or properties, so they were guaranteed to be confluent. I also had a sneaking suspicion that C# already permitted this structure el…
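A conventional way to sidestep the restriction (my sketch, not a workaround proposed in the post) is to expose each would-be interface implementation through a distinct member instead of implementing both on the class itself:

```csharp
using System.Collections.Generic;

// Foo<T0, T1> : IEnumerable<T0>, IEnumerable<T1> is rejected because the two
// interfaces unify when T0 = T1. Exposing each sequence as a property keeps
// the two views distinct even under that substitution.
public class Foo<T0, T1>
{
    private readonly List<T0> _first = new List<T0>();
    private readonly List<T1> _second = new List<T1>();

    public void AddFirst(T0 item) => _first.Add(item);
    public void AddSecond(T1 item) => _second.Add(item);

    public IEnumerable<T0> First => _first;
    public IEnumerable<T1> Second => _second;
}
```

The cost is that callers must name the view they want rather than relying on interface dispatch, which is precisely the ambiguity the compiler was objecting to.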

Combinator Calculus EDSLs in C#

A bit of a departure from my usual practical posts in C#, I decided to try my hand at an abstract computer science topic. I've become interested in concatenative programming, and combinator calculi are right up this alley. The quintessential combinator calculus is the well-known SKI calculus , which features two core combinators, S and K, and one optional combinator I. SKI is quite simple, and it's pretty straightforward to implement as an EDSL. I suspect the key stumbling block for most programmers unfamiliar with functional programming will be the pervasive partial application inherent to combinators.

Partial Application

Partial application is a pretty simple concept for anyone who's done even a little programming with LINQ. C#/.NET features first-class functions called "delegates" with types like:

public delegate T2 Func<T0, T1, T2>(T0 arg0, T1 arg1);

That's the type declaration for a delegate that accepts two arguments of types T0 and T1, and retu…
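Partial application itself can be sketched in one line against the BCL's standard Func delegates (this helper is my illustration, not code from the post):

```csharp
using System;

static class PartialApplication
{
    // Fix the first argument of a two-argument function, yielding a
    // one-argument function. This is the essence of how curried combinators
    // like S and K consume their arguments one at a time.
    public static Func<T1, TResult> Partial<T0, T1, TResult>(
        Func<T0, T1, TResult> f, T0 arg0) =>
        arg1 => f(arg0, arg1);
}
```

For example, partially applying addition to 5 produces a function that adds 5 to its remaining argument:

```csharp
Func<int, int, int> add = (x, y) => x + y;
var add5 = PartialApplication.Partial(add, 5);
// add5(3) evaluates to 8
```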