
Sasa v0.15.0 Released

This release features a few new conveniences, a few bug fixes, and a vector that's faster than any other I've managed to find for .NET. Here's the changelog since the v0.14.0 release:

 * more elaborate benchmarking and testing procedures
 * added a minor optimization to purely functional queues that caches
   the reversed enqueue list (see the queue sketch after this list)
 * FingerTree was switched to use arrays as opposed to a Node type
 * optimized FingerTree enumerator
 * FingerTree array nodes are now reused whenever possible
 * Sasa.Collections no longer depends on Sasa.Binary
 * a MIME message's body encoding now defaults to ASCII if none is specified
 * added a convenient ExceptNull to filter sequences of optional values
   (see the Option<T> sketch after this list)
 * NonNull<T> no longer requires T to be a class type, which inhibited its
   use in some scenarios
 * fixed a null error when Weak<T> was improperly constructed
 * added variance annotations on some base interfaces
 * added a super fast Vector<T> to Sasa.Collections
 * sasametal: fixed member name generated for compound keys
 * Sasa's Option<T> now integrates more seamlessly into LINQ queries
   with IEnumerable<T>, thanks to two new SelectMany overloads (also
   shown in the Option<T> sketch after this list)
 * added two new array combinators to extract values without exceptions
 * added a few efficient Enumerables.Interleave extensions to interleave
   the values of two or more sequences (see the Interleave sketch after
   this list)
 * decorated many collection methods with purity attributes
 * fixed a bug with the Enums<T>.Flags property
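
A quick sketch of the queue optimization: a persistent queue keeps a front list to dequeue from and a back list that accumulates enqueues, and the back list has to be reversed once the front runs dry. Caching that reversal means repeated dequeues from the same persistent version pay the O(n) cost only once. This is the general technique sketched with the BCL's ImmutableStack, not Sasa's actual representation:

    using System;
    using System.Collections.Immutable;

    // Sketch of a persistent two-stack queue (not Sasa's actual code):
    // 'front' holds items ready to dequeue, 'back' accumulates enqueues.
    // The Lazy field memoizes the reversal of 'back', so repeated dequeues
    // from the same persistent version share one reversal.
    public sealed class FunQueue<T>
    {
        public static readonly FunQueue<T> Empty =
            new FunQueue<T>(ImmutableStack<T>.Empty, ImmutableStack<T>.Empty);

        readonly ImmutableStack<T> front;
        readonly ImmutableStack<T> back;
        readonly Lazy<ImmutableStack<T>> reversedBack; // cached reversal

        FunQueue(ImmutableStack<T> front, ImmutableStack<T> back)
        {
            this.front = front;
            this.back = back;
            reversedBack = new Lazy<ImmutableStack<T>>(() =>
            {
                var r = ImmutableStack<T>.Empty;
                foreach (var x in back) r = r.Push(x); // newest-first in, oldest on top out
                return r;
            });
        }

        public bool IsEmpty => front.IsEmpty && back.IsEmpty;

        public FunQueue<T> Enqueue(T item) =>
            new FunQueue<T>(front, back.Push(item));

        public FunQueue<T> Dequeue(out T item)
        {
            // When 'front' is exhausted, switch to the cached reversal of 'back'.
            var f = front.IsEmpty ? reversedBack.Value : front;
            if (f.IsEmpty) throw new InvalidOperationException("queue is empty");
            var rest = f.Pop(out item);
            return new FunQueue<T>(rest, front.IsEmpty ? ImmutableStack<T>.Empty : back);
        }
    }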
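
The ExceptNull and SelectMany additions are easiest to see in use. Here's a self-contained sketch with a stand-in Option<T>; the member names mirror the changelog, but the shapes are my assumptions rather than Sasa's actual signatures:

    using System;
    using System.Collections.Generic;

    // Stand-in Option<T> for illustration only; Sasa's real Option<T> differs.
    public readonly struct Option<T>
    {
        public readonly bool HasValue;
        public readonly T Value;
        Option(T value) { HasValue = true; Value = value; }
        public static Option<T> Some(T value) => new Option<T>(value);
        public static readonly Option<T> None = default(Option<T>);
    }

    public static class OptionExtensions
    {
        // Assumed shape of ExceptNull: collapse a sequence of optional
        // values down to just the values that are present.
        public static IEnumerable<T> ExceptNull<T>(this IEnumerable<Option<T>> source)
        {
            foreach (var opt in source)
                if (opt.HasValue) yield return opt.Value;
        }

        // One of the two SelectMany overloads needed for LINQ integration:
        // elements whose selector produces None simply drop out of the query.
        public static IEnumerable<R> SelectMany<T, U, R>(
            this IEnumerable<T> source,
            Func<T, Option<U>> selector,
            Func<T, U, R> resultSelector)
        {
            foreach (var x in source)
            {
                var opt = selector(x);
                if (opt.HasValue) yield return resultSelector(x, opt.Value);
            }
        }
    }

    public static class OptionExample
    {
        static Option<int> TryParse(string s) =>
            int.TryParse(s, out var n) ? Option<int>.Some(n) : Option<int>.None;

        public static void Main()
        {
            var inputs = new[] { "1", "two", "3" };
            var parsed = from s in inputs
                         from n in TryParse(s) // failed parses drop out
                         select n * 10;
            Console.WriteLine(string.Join(", ", parsed)); // 10, 30
        }
    }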
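
And here's what interleaving does; again an assumed shape, since the real Enumerables.Interleave overloads cover two or more sequences and may treat uneven lengths differently:

    using System;
    using System.Collections.Generic;

    public static class InterleaveSketch
    {
        // Alternate the elements of two sequences, draining whichever runs longer.
        public static IEnumerable<T> Interleave<T>(IEnumerable<T> first, IEnumerable<T> second)
        {
            using (var e1 = first.GetEnumerator())
            using (var e2 = second.GetEnumerator())
            {
                bool has1 = e1.MoveNext(), has2 = e2.MoveNext();
                while (has1 || has2)
                {
                    if (has1) { yield return e1.Current; has1 = e1.MoveNext(); }
                    if (has2) { yield return e2.Current; has2 = e2.MoveNext(); }
                }
            }
        }

        public static void Main()
        {
            var xs = new[] { 0, 2, 4 };
            var ys = new[] { 1, 3, 5, 7 };
            Console.WriteLine(string.Join(", ", Interleave(xs, ys)));
            // 0, 1, 2, 3, 4, 5, 7
        }
    }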

A forthcoming post on the recent optimizations I've applied to Sasa's immutable collections will show the dramatic performance improvements over Microsoft's and F#'s immutable collections. Sasa's vector is an order of magnitude faster than F#'s, and Sasa's trie is actually faster than the standard mutable System.Collections.Generic.Dictionary up to around 100 elements.
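
To give a flavor of the methodology without preempting the numbers, the comparisons come down to micro-benchmarks shaped roughly like the sketch below. It times the mutable Dictionary against the BCL's ImmutableDictionary as a stand-in; substituting Sasa's vector or trie yields the comparisons above:

    using System;
    using System.Collections.Generic;
    using System.Collections.Immutable;
    using System.Diagnostics;

    // Micro-benchmark sketch: time N inserts and lookups on a mutable
    // dictionary vs. an immutable one. Swap in the Sasa collection under
    // test for the real comparison. A real benchmark would also warm up
    // the JIT and average many iterations.
    static class Bench
    {
        static long Time(Action action)
        {
            var sw = Stopwatch.StartNew();
            action();
            return sw.ElapsedTicks;
        }

        static void Main()
        {
            const int N = 100;
            var mutableTicks = Time(() =>
            {
                var d = new Dictionary<int, int>();
                for (int i = 0; i < N; ++i) d[i] = i;
                for (int i = 0; i < N; ++i) { var x = d[i]; }
            });
            var immutableTicks = Time(() =>
            {
                var d = ImmutableDictionary<int, int>.Empty;
                for (int i = 0; i < N; ++i) d = d.SetItem(i, i);
                for (int i = 0; i < N; ++i) { var x = d[i]; }
            });
            Console.WriteLine($"mutable: {mutableTicks} ticks, immutable: {immutableTicks} ticks");
        }
    }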

As usual, you can find the individual packages on NuGet, or download the full release via SourceForge. The documentation is available in full as a .CHM file from SourceForge, or you can peruse it online.
