
C# for Haskell and ML Programmers

An interesting question was posed to /r/Haskell today: is there a quick intro to C# for Haskell programmers?

Well, there are plenty of explanations of C#'s basic syntax, classes, structs, and so on, but nothing that specifically addresses the functional mindset a Haskell programmer would be starting from. So I wrote a reply linking to the familiar functional programming concepts found in C#, and describing various caveats that might surprise a Haskell user. I'll reproduce the post here for posterity:

I think there's a reasonable C# subset for functional programming, so if you stick to that you should be able to pick it up relatively quickly. Read up on:

  • LINQ -- you can use the query comprehension syntax, or the regular method syntax with first-class functions. The former is sugar for the latter (see the sketch after this list).
  • Tuples
  • Lambdas and delegates (delegates are first-class functions)
  • Parametric polymorphism is known as generics. You can place generic parameters on methods/functions and on type declarations.
  • The standard System.Func* and System.Action* delegates. These are delegates with generic parameters. Most first-class functions you deal with will be one of these two families of delegates.
  • Type constraints
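
To tie these together, here's a minimal sketch (the names are illustrative, not from the original post) showing both LINQ syntaxes, tuples, lambdas bound to Func*/Action* delegates, and a generic method with a type constraint:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class FunctionalBasics
    {
        // A generic method with a type constraint: T must implement IComparable<T>.
        static T Max<T>(T x, T y) where T : IComparable<T> =>
            x.CompareTo(y) >= 0 ? x : y;

        static void Main()
        {
            var xs = new[] { 3, 1, 4, 1, 5, 9 };

            // Query comprehension syntax...
            IEnumerable<(int Value, int Square)> q1 =
                from x in xs
                where x > 2
                select (Value: x, Square: x * x);        // a tuple per element

            // ...is sugar for the method syntax using first-class functions (lambdas).
            IEnumerable<(int Value, int Square)> q2 =
                xs.Where(x => x > 2)
                  .Select(x => (Value: x, Square: x * x));

            // Func<int, bool> is a delegate type: a first-class function int -> bool.
            Func<int, bool> isEven = x => x % 2 == 0;

            // Action<T> is a delegate that returns void.
            Action<(int Value, int Square)> print =
                t => Console.WriteLine($"{t.Value}^2 = {t.Square}");

            foreach (var t in q2.Where(t => isEven(t.Value)))
                print(t);

            Console.WriteLine(Max(3, 7));          // T inferred as int
            Console.WriteLine(Max("foo", "bar"));  // T inferred as string
        }
    }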

Caveats:

  • Lambdas and delegates are nominally typed, so you can't implicitly or explicitly coerce a delegate of one type into a delegate with a compatible signature. For example, System.Predicate<int> is signature-compatible with System.Func<int, bool>, but they are not interconvertible without doing some magic like I do in my Sasa.Func.Coerce library function (see the sketch after this list).
  • Methods and delegates are multi-parameter, not curried as in OCaml and Haskell, hence all the Func* and Action* overloads.
  • The "void" return type is not a first-class type, so it can't be used as a generic argument. Hence the need for all the Action* delegate types distinct from the Func* delegate types; the Action* delegates differ only in that they return void.
  • Generic parameters on methods are strictly more general than generic parameters on delegates (which are types). Method generics support first-class polymorphism, while generics on type declarations do not.
  • Classes are always implicitly option types, i.e. they are nullable, while struct types always have a "valid" value, i.e. are not nullable. Struct types are therefore useful for eliminating null reference exceptions in programs, as long as the default struct value is meaningful.
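
And a minimal sketch of a few of the caveats; the ToFunc helper below is just an illustrative wrapper for this post, not the Sasa.Func.Coerce implementation:

    using System;

    struct Point                       // a struct: never null, default value is (0, 0)
    {
        public int X, Y;
        public Point(int x, int y) { X = x; Y = y; }
    }

    class Caveats
    {
        // Delegates are nominally typed: Predicate<int> and Func<int, bool> share a
        // signature but are distinct types, so one must be wrapped to get the other.
        static Func<int, bool> ToFunc(Predicate<int> p) => x => p(x);

        static void Main()
        {
            Predicate<int> isPositive = x => x > 0;
            // Func<int, bool> f = isPositive;       // compile error: no implicit conversion
            Func<int, bool> f = ToFunc(isPositive);  // explicit wrapping works
            Console.WriteLine(f(5));                 // True

            // void is not a type, so Func<int, void> is illegal; use Action<int> instead.
            Action<int> report = x => Console.WriteLine($"got {x}");
            report(42);

            // Classes are implicitly nullable; structs always hold a value.
            string s = null;                         // fine: reference types are nullable
            Point p = default;                       // fine: p is (0, 0), never null
            // Point q = null;                       // compile error: structs are not nullable
            Console.WriteLine($"{s == null}, ({p.X}, {p.Y})");
        }
    }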
