There are some fairly nifty algorithms out there for calculating permutations. I recently ran into a permutation problem for which I couldn't find an existing algorithm, though I admit I didn't look too hard. Basically, I needed the permutations of the elements of a set of size N over K slots. However, the permutations should include duplicate elements from the set, as K > N is a valid configuration. This corresponds to N^K permutations. Most algorithms I found did not permit duplicate elements.
As an example of an application for such a permutation algorithm, imagine the set of all function signatures of arity K-1 over N types. This corresponds to K slots with N possible choices for each slot.
I devised a fairly simple implementation of such a permutation algorithm. Essentially, N forms the base of an arbitrary-precision integer of size K. In other words, we have an array of elements with a maximum of N which index our set. To permute, we simply increment the first element and propagate any overflow across the rest of the array. If the carry is 1 when we're done iterating over the whole array, then we're done generating permutations.
Calculating permutations has been reduced to counting! Here is the C code:
#include <stdio.h>
#include <stdlib.h>

void print(const unsigned *slots, const unsigned K)
{
    unsigned i;
    for (i = 0; i < K; ++i) {
        printf("%4u", slots[i]);
    }
    printf("\n");
}

unsigned incr(unsigned *slots, const unsigned K, const unsigned N) {
    unsigned i, carry;
    print(slots, K);
    for (i = 0, carry = 1; i < K; ++i) {
        unsigned b = slots[i] + carry;
        carry = b / N;
        slots[i] = b % N;
    }
    return !carry;
}

void count(const unsigned N, const unsigned K) {
    unsigned *slots = calloc(K, sizeof(unsigned));
    while (incr(slots, K, N)) {
    }
    free(slots);
}

//output:
// 0 0 0
// 1 0 0
// 0 1 0
// 1 1 0
// 0 0 1
// 1 0 1
// 0 1 1
// 1 1 1
int main(int argc, char **argv) {
    count(2, 3);
    getchar();
    return 0;
}
The only assumptions are that N is at least 1 (we divide by it) and less than UINT_MAX. Also, for some insane reason that I cannot fathom, I can't get the slots to print in reverse order. The trivial reversal of the print loop induces an access violation under Windows:
void print(const unsigned *slots, const unsigned K)
{
    unsigned i;
    for (i = K-1; i >= 0; --i) {
        printf("%4u", slots[i]);
    }
    printf("\n");
}
The mind boggles.
[Edit: a commenter pointed out the problem, so the solution can be found in the comments.]
Comments
The correct loop is of course:
void print(const unsigned *slots, const unsigned K)
{
    unsigned i;
    for (i = K-1; i != 0-1; --i) {
        printf("%4u", slots[i]);
    }
    printf("\n");
}
> should not print anything
> because K >= 0-1 forall K if
> you're using unsigned arithmetic.
Not true, as --i wraps around to UINT_MAX, which is greater than all K and hence never a valid index, so the loop terminates exactly when the index falls off the bottom. I was going to use UINT_MAX from limits.h, but (0-1) does the job too. This is portable: the C standard guarantees that unsigned arithmetic wraps modulo 2^n, regardless of the underlying architecture.