
Coroutines in C

I've just uploaded a functional coroutine library for C, called libconcurrency. It's available under the LGPL. I think it's the most complete, flexible, and simple coroutine implementation I've seen, so hopefully it will find some use.
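
To give a flavor of the kind of control transfer a coroutine library provides, here is a minimal, self-contained sketch using POSIX ucontext. This is only an illustration: it is not libconcurrency's API, and libconcurrency itself uses a different, stack-copying implementation (see the comments below). It shows a generator coroutine yielding values back to its caller.

    /* Coroutine-style control transfer using POSIX ucontext.
     * NOT libconcurrency's API -- just an illustration of a
     * generator yielding values back to its caller. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, gen_ctx;
    static char gen_stack[64 * 1024];   /* stack for the generator coroutine */
    static int current;                 /* value handed back to the caller */

    /* Generator: produce the numbers 0..4, yielding after each one. */
    static void generator(void)
    {
        for (int i = 0; i < 5; i++) {
            current = i;
            /* Yield: save our context and resume the caller. */
            swapcontext(&gen_ctx, &main_ctx);
        }
    }

    int main(void)
    {
        /* Set up a context that runs generator() on its own stack. */
        getcontext(&gen_ctx);
        gen_ctx.uc_stack.ss_sp = gen_stack;
        gen_ctx.uc_stack.ss_size = sizeof gen_stack;
        gen_ctx.uc_link = &main_ctx;    /* where to go if generator() returns */
        makecontext(&gen_ctx, generator, 0);

        /* Resume the generator five times, printing each yielded value. */
        for (int i = 0; i < 5; i++) {
            swapcontext(&main_ctx, &gen_ctx);
            printf("got %d\n", current);
        }
        return 0;
    }

A coroutine library wraps these context switches (or, in libconcurrency's case, the equivalent stack copying) behind yield/resume calls so you don't manage stacks and contexts by hand.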

Next on the todo list are some more rigorous tests of the corner cases, and then extending libconcurrency to scale across CPUs. This will make it the C equivalent of Manticore for ML.

There is a rich opportunity for scalable concurrency in C. Of course, I only built this library to serve as a core component of a virtual machine I'm building, and that's all I'm going to say about that. ;-)

Comments

Chris said…
Have you looked at libcoroutine? It's by the guy who created Io. I haven't looked at it personally, but it sounds worth investigating if you're looking for libraries for such things.
Sandro Magi said…
I provide a comparison to existing libraries at the libconcurrency project page, including libCoroutine.
kjk said…
Not sure how strongly you feel about your license choice. Any chance of changing the license to MIT, BSD, or public domain? The LGPL is cumbersome when used in non-open-source applications, since you have to put the code in a separate *.dll just to satisfy the license. MIT/BSD don't require that.
Sandro Magi said…
I don't believe that's an accurate assessment of the LGPL requirements. The LGPL requires only that any modifications to libconcurrency code used in a distributed program be available under an LGPL-compatible license. It places no restrictions on how you link or include the code in your program. I think it's a fair compromise between the GPL and the MIT/BSD licenses.

I will probably make a dll available soon as well.
Aaron Denney said…
Sandro Magi: It's a bit stronger than that. Modifications have to be made available, and you have to afford your users the opportunity to use other versions of the LGPL software with the software that's using it. Actually, it even specifies one of two ways that you must use to do this: providing object files so they can relink, or using it as a DLL so they can replace the DLL.

See, e.g., section 4.d in the LGPLv3.

d) Do one of the following:

* 0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.
* 1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version.

Earlier versions have substantially similar restrictions, though they may be worded a bit more confusingly.
Sandro Magi said…
Thanks for the overview, Aaron. The libconcurrency build files are already set up to generate a dll, so I don't see this requirement as all that onerous. Do you or others really think it is?

I will take these additional restrictions under advisement though.
Aaron Denney said…
Oh, no, I personally don't think it's an issue at all.
bodhi said…
Nice post. I'm curious how you would use multiple cores (as in your todo list), since you are using a stack-copying technique.
Sandro Magi said…
If I still had time to work on it, I'd probably implement something like Apple's Grand Central Dispatch.
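
For context, Grand Central Dispatch is essentially a pool of worker threads servicing shared task queues. A rough, hypothetical sketch of that model in plain pthreads (not libconcurrency code, and not Apple's API) might look like:

    /* Toy dispatch queue in the spirit of Grand Central Dispatch:
     * a fixed pool of worker threads pulling tasks from a shared queue. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct task {
        void (*fn)(void *);
        void *arg;
        struct task *next;
    } task;

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static task *head, *tail;
    static int shutting_down;

    /* Enqueue a task for some worker thread to run. */
    static void dispatch(void (*fn)(void *), void *arg)
    {
        task *t = malloc(sizeof *t);
        t->fn = fn; t->arg = arg; t->next = NULL;
        pthread_mutex_lock(&lock);
        if (tail) tail->next = t; else head = t;
        tail = t;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

    /* Worker: run tasks until the queue is drained after shutdown. */
    static void *worker(void *unused)
    {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (!head && !shutting_down)
                pthread_cond_wait(&cond, &lock);
            if (!head) {                    /* drained and shutting down */
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            task *t = head;
            head = t->next;
            if (!head) tail = NULL;
            pthread_mutex_unlock(&lock);
            t->fn(t->arg);
            free(t);
        }
    }

    static void say(void *arg) { printf("task %ld\n", (long)arg); }

    int main(void)
    {
        pthread_t pool[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&pool[i], NULL, worker, NULL);
        for (long i = 0; i < 8; i++)
            dispatch(say, (void *)i);

        /* Ask workers to drain the queue and exit. */
        pthread_mutex_lock(&lock);
        shutting_down = 1;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
        for (int i = 0; i < 4; i++)
            pthread_join(pool[i], NULL);
        return 0;
    }

Scheduling coroutines on top of such a pool is the part that interacts with the stack-copying technique, and is left unspecified here.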
