

Showing posts from 2012

AspQ - A JavaScript Event Queue for ASP.NET

A nearly universal problem in ASP.NET is handling multiple postbacks. There are various server-side and client-side solutions, but they can only be understood once you already have the domain knowledge to solve the problem yourself, and you then end up repeating the pattern in every project. Programming is about not repeating yourself and automating repetitive tasks, so I've released a reusable abstraction that handles multiple postbacks on the client side. Because it's a client-side solution it only works in browsers with JavaScript enabled, but that's by far the most common case. AspQ is a small JavaScript object that hooks into the standard ASP.NET and AJAX JavaScript runtimes. It queues all sync and async postback requests so they don't interfere with one another, and applies the updates in event order. I believe this provides a superior user experience to simply preventing a submit until the previous postback…

Can't Catch Exceptions When Invoking Methods Via Reflection in .NET 4

I just tried updating the Sasa build to .NET 4 and ran into a bizarre problem. Basically, the following code throws an exception when running under .NET 4 that isn't caught by a general exception handler:

```csharp
static int ThrowError()
{
    throw new InvalidOperationException(); // debugger breaks here
}

public static void Main(string[] args)
{
    var ethrow = new Func<int>(ThrowError).Method;
    try
    {
        ethrow.Invoke(null, null);
    }
    catch (Exception)
    {
        // general exception handler doesn't work
    }
}
```

Turns out this is a Visual Studio setting. Given the description there, whatever hooks VS has into the runtime when transitioning between native and managed code changed their behaviour from .NET 3.5: VS isn't aware of the general exception handler further up the stack, and breaks immediately. I can see this being handy if you're writing reflection-heavy code, as it breaks at the actual source of an exception instead of…
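For reference, outside the debugger the exception is perfectly catchable, but note that MethodInfo.Invoke wraps whatever the target throws in a TargetInvocationException, so a handler normally has to unwrap InnerException to get at the real cause. A minimal sketch:

```csharp
using System;
using System.Reflection;

static class ReflectionCatch
{
    static int ThrowError() { throw new InvalidOperationException(); }

    static void Main()
    {
        var ethrow = new Func<int>(ThrowError).Method;
        try
        {
            ethrow.Invoke(null, null);
        }
        catch (TargetInvocationException e)
        {
            // Invoke wraps the original exception; the real cause is here.
            Console.WriteLine(e.InnerException.GetType().Name);
            // prints "InvalidOperationException"
        }
    }
}
```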

Managed Data for .NET

Ensō is an interesting new language being developed by Alex Loh, William R. Cook, and Tijs van der Storm. The overarching goal is to significantly raise the level of abstraction, partly via declarative data models. They recently published a paper on this subject for Onwards! 2012 titled Managed Data: Modular Strategies for Data Abstraction. Instead of having programmers define concrete classes, managed data requires the programmer to define a schema describing the data model: a description of the set of fields and their types. Actual implementations of this schema are provided by "data managers", which interpret the schema and add custom behaviour. This is conceptually similar to aspect-oriented programming, but with a safer, more principled foundation. A data manager can implement any sort of field-like behaviour. The paper describes a few basic variants: BasicRecord, which implements a simple record with getters and setters, and LockableRecord, which implements locking on…
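A rough C# analogue of the idea (my own sketch, not code from the paper, which uses Ensō): the "schema" is just a set of declared fields, and the data manager interprets every get and set against that schema at runtime, so different managers can layer on different behaviours.

```csharp
using System;
using System.Collections.Generic;

// Interprets a schema at runtime: a simple record with getters/setters.
class BasicRecord
{
    readonly HashSet<string> schema;
    readonly Dictionary<string, object> fields = new Dictionary<string, object>();

    public BasicRecord(params string[] fieldNames)
    {
        schema = new HashSet<string>(fieldNames);
    }

    public virtual object Get(string field)
    {
        Check(field);
        object value;
        return fields.TryGetValue(field, out value) ? value : null;
    }

    public virtual void Set(string field, object value)
    {
        Check(field);
        fields[field] = value;
    }

    void Check(string field)
    {
        if (!schema.Contains(field))
            throw new ArgumentException("No such field: " + field);
    }
}
```

A LockableRecord in this style would subclass or wrap BasicRecord, adding locking around Get/Set without touching the schema itself.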

M3U.NET: Parsing and Output of .m3u files in .NET

I've been reorganizing my media library using the very cool MusicBrainz Picard, but of course all my m3u files broke. So I wrote the free M3U.NET library, and then a utility called FixM3U that regenerates an M3U file by searching your music folder for the media files, based on whatever extended M3U information is available:

```
> FixM3u.exe /order:title,artist foo.m3u bar.m3u ...
```

The M3U.NET library itself has a fairly simple interface:

```csharp
// Parsing M3U files.
public static class M3u
{
    // Write a media list to an extended M3U file.
    public static string Write(IEnumerable<MediaFile> media);

    // Parse an M3U file.
    public static IEnumerable<MediaFile> Parse(
        string input, DirectiveOrder order);

    // Parse an M3U file.
    public static IEnumerable<MediaFile> Parse(
        IEnumerable<string> lines, DirectiveOrder order);
}
```

The three exported types are straightforward. A MediaFile just has a full path to the file itself and…

Delete Duplicate Files From the Command-line with .NET

Having run into a scenario where I had directories with many duplicate files, I hacked up a simple command-line solution based on crypto signatures: the same idea used in source control systems like Git and Mercurial, basically the SHA-1 hash of a file's contents. Sample usage:

```
DupDel.exe [target-directory]
```

The utility recursively analyzes any sub-directories under the target directory and builds an index of all files based on their content. Once complete, duplicates are processed interactively, with the user presented a choice of which duplicate to keep:

```
Keep which of the following duplicates:
1. \Some foo.txt
2. \bar\some other foo.doc
>
```

The types of files under the target directory don't matter, so you can pass in directories of documents, music files, pictures, etc. My computer churned through 30 GB of data in about 5 minutes, so it's reasonably fast.
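The indexing step can be sketched in a few lines (DupDel's actual internals aren't shown here, so this is only a minimal illustration of the approach): hash every file's contents and group paths by hash, so any group with more than one entry is a set of duplicates.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class Dedup
{
    // Group all files under 'root' by the SHA-1 hash of their contents.
    public static Dictionary<string, List<string>> IndexByHash(string root)
    {
        var index = new Dictionary<string, List<string>>();
        using (var sha1 = SHA1.Create())
        {
            foreach (var path in Directory.EnumerateFiles(
                root, "*", SearchOption.AllDirectories))
            {
                string hash;
                using (var stream = File.OpenRead(path))
                    hash = BitConverter.ToString(sha1.ComputeHash(stream));
                List<string> dupes;
                if (!index.TryGetValue(hash, out dupes))
                    index[hash] = dupes = new List<string>();
                dupes.Add(path);
            }
        }
        return index;
    }
}
```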

Simple, Extensible IoC in C#

I just committed the core of a simple dependency injection container to a standalone assembly, Sasa.IoC. The interface is pretty straightforward:

```csharp
public static class Dependency
{
    // static, type-indexed operations
    public static T Resolve<T>();
    public static void Register<T>(Func<T> create);
    public static void Register<TInterface, TRegistrant>()
        where TRegistrant : TInterface, new();

    // dynamic, runtime type operations
    public static object Resolve(Type registrant);
    public static void Register(Type publicInterface, Type registrant,
        params Type[] dependencies);
}
```

If you were ever curious about IoC, the Dependency class is only about 100 lines of code. You can even skip the dynamic operations, and then it's only ~50 lines; the dynamic operations just use reflection to invoke the typed operations. Dependency uses static generic fields, so resolution is pretty much just a field access + invoking a…
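The static-generic-field trick the excerpt alludes to can be sketched as follows (a minimal illustration of the technique, not Sasa's actual code): the CLR gives each closed generic type its own copy of a static field, so the field itself acts as a type-indexed registry, and resolution is a field access plus a delegate invocation.

```csharp
using System;

public static class Dependency
{
    static class Store<T>
    {
        // One static field per closed type T: this *is* the registry.
        internal static Func<T> Create;
    }

    public static void Register<T>(Func<T> create)
    {
        Store<T>.Create = create;
    }

    public static void Register<TInterface, TRegistrant>()
        where TRegistrant : TInterface, new()
    {
        Store<TInterface>.Create = () => new TRegistrant();
    }

    public static T Resolve<T>()
    {
        return Store<T>.Create();
    }
}
```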

Hash Array Mapped Trie for C# - Feature Complete

I finally got around to finishing the immutable HAMT implementation I wrote about in my last post. The only missing features were tree merging and hash collision handling. Both are now implemented with unit tests, and the whole branch has been merged back into "default". It now also conforms to Sasa's standard collection semantics: the publicly exported type is a struct, so null reference errors are impossible, and it provides an atomic swap operation for concurrent use. Here's the API:

```csharp
/// <summary>
/// An immutable hash-array mapped trie.
/// </summary>
/// <typeparam name="K">The type of keys.</typeparam>
/// <typeparam name="T">The type of values.</typeparam>
public struct Tree<K, T> : IEnumerable<KeyValuePair<K, T>>, IAtomic<Tree<K, T>>
{
    /// <summary>
    /// The empty tree.
    /// </summary>
    public static …
```

Immutable Hash Array Mapped Trie in C#

I just completed an implementation of an immutable hash array mapped trie (HAMT) in C#. The HAMT is an ingenious hash tree first described by Phil Bagwell. It's used in many different domains because of its time and space efficiency, although only some languages use the immutable variant. For instance, Clojure uses immutable HAMTs to implement arrays/vectors, which are essential to its concurrency. The linked implementation is pretty much the bare minimum supporting add, remove and lookup operations, so if you're interested in learning more about it, it's a good starting point. Many thanks also to krukow's fine article, which helped me quickly grasp the bit-twiddling needed for the HAMT. The tree interface is basically this:

```csharp
/// <summary>
/// An immutable hash-array mapped trie.
/// </summary>
/// <typeparam name="K">The type of keys.</typeparam>
/// <typeparam name="T">The type of values.</typeparam>
public class …
```
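The core bit-twiddling is a standard HAMT technique (sketched here from the general literature rather than from this particular implementation): each 5-bit chunk of the hash selects one of 32 logical slots, a 32-bit bitmap records which slots are occupied, and the physical index into the node's packed array is the population count of the occupied bits below the selected one.

```csharp
static class HamtBits
{
    // Physical index of logical slot 'bit' in a packed array: count the
    // occupied slots below it. E.g. bitmap with bits {1, 4, 9} set and
    // bit = 9 gives index 2.
    public static int SparseIndex(uint bitmap, int bit)
    {
        return BitCount(bitmap & ((1u << bit) - 1));
    }

    // Standard 32-bit popcount via parallel bit summing.
    public static int BitCount(uint v)
    {
        v = v - ((v >> 1) & 0x55555555);
        v = (v & 0x33333333) + ((v >> 2) & 0x33333333);
        return (int)((((v + (v >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24);
    }
}
```

This is what lets a node store only its occupied entries while still supporting O(1) slot lookup.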

Simplest Authentication in Lift

Lift has been an interesting experience so far, particularly since I'm learning Scala at the same time. Lift comes with quite a few built-in mechanisms for various features, such as authentication, authorization and role-based access control. A lot of the documentation puts these built-ins to good effect, but because the core mechanisms are completely skipped, you have no idea where to start if you have to roll your own authentication. A suggestion for Lift documentation: cover the basic introduction first, then show how Lift builds on that foundation. I present here the simplest possible authentication scheme for Lift, inspired by this page on the liftweb wiki:

```scala
object isLoggedIn extends SessionVar[Boolean](false)
...
// in Boot.scala
LiftRules.loggedInTest = Full(() => isLoggedIn.get)
```

That last line only needs to return a boolean. If you wish to include this with Lift's white-listed menu system, you merely need to add this sort of test:

```scala
val auth = If(() => …
```

Debugging Lift 2.4 with Eclipse

To continue my last post, launching a Lift program and debugging from Eclipse turns out to be straightforward. The starting point was this stackoverflow thread, which pointed out the RunJettyRun Eclipse plugin that can launch a Jetty instance from within Eclipse configured for remote debugging. Here are the steps to get launching and debugging working seamlessly:

1. Install RunJettyRun from within Eclipse the usual way, ie. menu Help > Install New Software, then copy-paste this link.
2. Once installed, go to menu Run > Debug Configurations, and double-click Jetty Webapp. This will create a new configuration for this project.
3. Click Apply to save this configuration, and you can now start debugging to your heart's content.

NOTE: running Jetty in SBT via ~container:start puts the web app in the root of the web server, ie. http://localhost:8080/, but this plugin defaults to http://localhost:8080/project_name. You can change this via the "Context"…

Getting Started with Scala Web Development - Lift 2.4 and SBT 0.11.2

Anyone following this blog knows I do most of my development in C#, but I recently had an opportunity for a Java project, so I'm taking the Scala plunge with Lift. It's been a bit of a frustrating experience so far, since all of the documentation on any Java web framework assumes prior experience with Java servlets or other Java web frameworks. Just a note: I'm not intending to bash anything, but I will point out some serious flaws in the tools or documentation that I encountered, which seriously soured me on them. This is not a reason to get defensive; it's an opportunity to see how accessible these tools are to someone with little bias or background in this environment. In this whole endeavour, the most frustrating part was trying to find a coherent explanation of the directory structures used by various build tools, JSPs and WARs. I finally managed to find a good intro for Lift that didn't assume I already knew how files are organized, and that started…

Oh C#, why must you make life so difficult?

Ran into a problem with C#'s implicit conversions, which don't seem to support generic types:

```csharp
class Foo<T>
{
    public T Value { get; set; }
    public static implicit operator Foo<T>(T value)
    {
        return new Foo<T> { Value = value };
    }
}

static class Program
{
    static void Main(string[] args)
    {
        // this is fine:
        Foo<IEnumerable<int>> x = new int[0];

        // this is not fine:
        Foo<IEnumerable<int>> y = Enumerable.Empty<int>();
        // Error 2: Cannot implicitly convert type 'IEnumerable<int>'
        // to 'Foo<IEnumerable<int>>'. An explicit conversion
        // exists (are you missing a cast?)
    }
}
```

So basically, the user-defined implicit conversion is not applied when the source expression's type is the interface itself, yet the array assignment works just fine, presumably because int[] first undergoes a built-in conversion to IEnumerable<int> before the operator is considered.
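One workaround sketch (the Create helper below is my own hypothetical addition, not part of the original class): a factory method with type inference constructs the wrapper directly and sidesteps the conversion operator entirely.

```csharp
static class Foo
{
    // Factory with type inference; no conversion operator involved.
    public static Foo<T> Create<T>(T value)
    {
        return new Foo<T> { Value = value };
    }
}

// usage:
Foo<IEnumerable<int>> y = Foo.Create(Enumerable.Empty<int>());
```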

Reusable Ad-Hoc Extensions for .NET

I posted awhile ago about a pattern for ad-hoc extensions in .NET using generics. Unfortunately, like every "design pattern", you had to manually ensure that your abstraction properly implements the pattern; there was no way to have the compiler enforce it, like conforming to an interface. It's common wisdom that "design patterns" are simply a crutch for languages with insufficient abstractive power. Fortunately, .NET's multicast delegates provide the abstractive power we need to eliminate the design pattern for ad-hoc extensions:

```csharp
/// <summary>
/// Dispatch cases to handlers.
/// </summary>
/// <typeparam name="T">The type of the handler.</typeparam>
public static class Pattern<T>
{
    static Dispatcher dispatch;
    static Action<T, object> any;

    delegate void Dispatcher(T func, object value, Type type, ref bool found);

    /// <summary>
    /// …
```
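To make the multicast idea concrete, here is a rough completion of how such a dispatcher might work. The Case/Match names and bodies below are my own assumptions, since the excerpt cuts off before the mechanism: each registered case appends a lambda to the multicast delegate, and dispatch invokes the whole chain, with each case testing the runtime type.

```csharp
using System;

public static class Pattern<T>
{
    delegate void Dispatcher(T handler, object value, Type type, ref bool found);
    static Dispatcher dispatch;

    // Register a case: appended to the multicast chain, it fires only
    // when the value's runtime type matches TCase.
    public static void Case<TCase>(Action<T, TCase> handler)
    {
        dispatch += (T h, object value, Type type, ref bool found) =>
        {
            if (!found && value is TCase)
            {
                found = true;
                handler(h, (TCase)value);
            }
        };
    }

    // Invoke the whole chain; returns whether any case matched.
    public static bool Match(T handler, object value)
    {
        bool found = false;
        var d = dispatch;
        if (d != null)
            d(handler, value, value == null ? null : value.GetType(), ref found);
        return found;
    }
}
```

The point is that the delegate type itself enforces the shape of every case, which is exactly what the hand-rolled design pattern could not guarantee.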

Why Sealed Classes Should Be Allowed In Type Constraints

One of my older posts on Stackoverflow listed some of what I consider to be flaws of C# and/or the .NET runtime. A recent reply posed a good question about one of those flaws, namely that sealed classes should be allowed as type constraints. That restriction seems sensible for C# at first, but there are legitimate programs it disallows. I figured others would have run into this problem at some point, but a quick Google search didn't turn up much, so I will document the actual problem with this rule. Consider the following interface:

```csharp
interface IFoo<T>
{
    void Bar<U>(U bar) where U : T;
}
```

The important part to notice here is the type constraint on the method, U : T. This means that whatever T we specify for IFoo<T>, we should be able to list as a type constraint on the method Bar. Of course, if T is a sealed class, we cannot do this:

```csharp
class Foo : IFoo<string>
{
    public void Bar<U>(U bar) where U : string // ERROR: stri…
```

Diff for IEnumerable<T>

I've just added a simple diff algorithm under Sasa.Linq. The signature is as follows:

```csharp
/// <summary>
/// Compute the set of differences between two sequences.
/// </summary>
/// <typeparam name="T">The type of sequence items.</typeparam>
/// <param name="original">The original sequence.</param>
/// <param name="updated">The updated sequence to compare to.</param>
/// <returns>
/// The smallest sequence of changes to transform
/// <paramref name="original"/> into <paramref name="updated"/>.
/// </returns>
public static IEnumerable<Change<T>> Difference<T>(
    this IEnumerable<T> original, IEnumerable<T> updated);

/// <summary>
/// Compute the set of differences between two sequences.
/// </summary>
/// <typeparam name="T">The type of sequence items.</typeparam>
/// <param name="origina…
```
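The classic way to compute such a minimal change set is via the longest common subsequence. The sketch below is a minimal LCS-based diff of my own, not Sasa's implementation: it fills the LCS table, then walks forward emitting which items to remove from the original and which to add from the updated sequence.

```csharp
using System;
using System.Collections.Generic;

static class SimpleDiff
{
    public static void Diff<T>(IList<T> a, IList<T> b)
    {
        // lcs[i, j] = length of the LCS of a[i..] and b[j..].
        var lcs = new int[a.Count + 1, b.Count + 1];
        for (int i = a.Count - 1; i >= 0; --i)
            for (int j = b.Count - 1; j >= 0; --j)
                lcs[i, j] = Equals(a[i], b[j])
                    ? lcs[i + 1, j + 1] + 1
                    : Math.Max(lcs[i + 1, j], lcs[i, j + 1]);

        // Walk the table: matching items are kept, otherwise emit the
        // removal or addition that preserves the longest subsequence.
        int x = 0, y = 0;
        while (x < a.Count && y < b.Count)
        {
            if (Equals(a[x], b[y])) { ++x; ++y; }
            else if (lcs[x + 1, y] >= lcs[x, y + 1])
                Console.WriteLine("- " + a[x++]);
            else
                Console.WriteLine("+ " + b[y++]);
        }
        while (x < a.Count) Console.WriteLine("- " + a[x++]);
        while (y < b.Count) Console.WriteLine("+ " + b[y++]);
    }
}
```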

Clavis - A Web Security Microframework

Web programming has been a pretty big deal for over 10 years now, but in some ways the tools web developers use haven't really progressed much, particularly when it comes to security. For instance, CSRF and clickjacking are instances of the Confused Deputy problem, a security problem known since at least 1988. Since these are both instances of the same underlying problem, in principle they should have the same or very similar solutions. However, the current solutions to these pervasive vulnerabilities are designed to address only the specific problem at hand, not the general problem of Confused Deputies. This means that if another HTML or Flash enhancement comes along that introduces another Confused Deputy, these solutions will not necessarily prevent its exploitation. However, if we solve the underlying Confused Deputy problem, then that solution will address all present and future Confused Deputies, assuming any new features don't violate the…