

Lambda the Ultimate on "Coders at Work" - hvs
http://lambda-the-ultimate.org/node/3613

======
randomwalker
I've always found Knuth's views curious. Is it just that he's from a different
generation, or am I too much of an amateur to understand his wisdom? I don't
know.

A couple of days ago he gave me a copy of his paper on Leaper graphs. (On dead
trees... quaint :-) His office is a few doors from mine.) It came with a
charming note about rewards for finding bugs. Anyway, Leaper graphs are the
graphs that describe the moves of a generalized knight on a generalized
chessboard. So I researched this some more, and found that Knuth was the first
to enumerate all the Knight's tours that are invariant under 180° rotation.
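To make "leaper graph" concrete, here's a minimal sketch of my own (nothing to do with Knuth's program): vertices are board cells, and an (a, b)-leaper connects cells that differ by ±a in one coordinate and ±b in the other; the ordinary knight is the (1, 2)-leaper.

```python
from itertools import product

def leaper_moves(a, b):
    # All (dr, dc) offsets of an (a, b)-leaper; the knight is the (1, 2)-leaper.
    return {(s1 * x, s2 * y) for x, y in ((a, b), (b, a))
            for s1 in (1, -1) for s2 in (1, -1)} - {(0, 0)}

def leaper_graph(n, a=1, b=2):
    # Adjacency map: cell -> set of cells reachable in one leap on an n x n board.
    g = {}
    for r, c in product(range(n), range(n)):
        g[(r, c)] = {(r + dr, c + dc) for dr, dc in leaper_moves(a, b)
                     if 0 <= r + dr < n and 0 <= c + dc < n}
    return g
```

For the knight on an 8×8 board this recovers the familiar facts: a corner cell has two moves, a central cell has eight, and the graph has 168 edges in total.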

At this point I'm thinking, this sounds like it could be an ICFP problem, and
a language like Haskell would be ideally suited for it. Nope, it turns out he
wrote the program in his C variant:
<http://sunburn.stanford.edu/~knuth/programs/sham.w> I'm amazed you can write
that in C and have it come out as short as it did. It also looks like it uses
gotos.

I really don't know what to take away from it.

~~~
mhansen
The chessboard has rotational symmetry: if you turn it 180 degrees it is the
same. Would not all tours be invariant under 180 degree rotation?

EDIT: Sorry, I understand now. If the knight's tour ends directly opposite
its starting position, it is invariant under 180 degree rotation.
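To make the EDIT concrete: the board is always symmetric, but a particular closed tour is invariant only if rotating every move of the tour by 180° gives back the same set of moves. A small sketch of that check (my own illustration, not Knuth's enumeration code):

```python
def rotate180(cell, n):
    # A 180-degree rotation of an n x n board maps (r, c) to (n-1-r, n-1-c).
    r, c = cell
    return (n - 1 - r, n - 1 - c)

def is_symmetric(closed_tour, n):
    # A closed tour is a cycle of cells; it is invariant under 180-degree
    # rotation iff rotating its set of edges reproduces the same edge set.
    edges = {frozenset((closed_tour[i], closed_tour[(i + 1) % len(closed_tour)]))
             for i in range(len(closed_tour))}
    rotated = {frozenset(rotate180(c, n) for c in e) for e in edges}
    return rotated == edges
```

For example, the 4-cycle visiting all cells of a 2×2 board passes the check, while a 3-cycle on the same board does not.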

~~~
randomwalker
This might help: <http://www.mathpuzzle.com/leapers.htm>

As you can see from those examples, a symmetric tour is a rather regular-
looking object; a random tour has an extremely low probability of being
symmetric.

~~~
mhansen
Ah, thank you!

------
sb
having the sources of important systems widely (and publicly!) available as
"common goods" would be a very good idea: it would let people read what has
been done in the past and learn how it was done.

it is admittedly very difficult to effectively grasp what was done more than
25 years ago (even from research papers), i.e., it is hard to deduce from the
papers alone how APL was implemented or what boyer-moore actually implemented
(heck, douglas comer even has a joke about criticizing theoretical work by
mentioning work by hartmanis)

EDIT: on further reflection, i think that without such an archive there is no
way around "those who do not understand history are doomed to repeat it."
which is particularly sad, since in computer science we face few of the
physical preservation problems known from conservation science...

~~~
jacquesm
This is the main reason why so many people are busy re-inventing the wheel.

To some extent that's the programmer's mentality of 'I can do that better', but
plenty of times it actually isn't better and the 'oldies' had it solved quite
nicely.

The one thing that came up recently where I thought something genuinely new
was happening was the thread on improving on quicksort (I can't seem to find
the thread :( ).

Another instance that comes to mind is the ever-improving compression ratio of
lossless compressors: just when you thought the lemon had been squeezed dry,
someone comes up with something extremely clever.

~~~
bad_user
> _This is the main reason for so many people being busy with re-inventing the
> wheel._

I don't know about other people, but I'm reinventing the wheel to learn
better.

An algorithm you reinvented, only to discover a better version later, will
stick in your head. And it's also fun doing it.

Also, a library or a tool you "reinvent" is often better suited to your needs,
considering that assembling software pieces is not actually like putting
together Lego blocks (although corporations keep trying).

Reinventing quicksort is also useful, because the standard qsort you've got
lying around might not behave as you expect (like glibc's qsort, which
actually uses merge sort instead) or it might not be optimized for your
datasets (depending on the partitioning function, you can end up in the
worst-case scenario quite a lot).
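To illustrate the worst-case point: a textbook quicksort that always picks the first element as pivot does quadratic work on already-sorted input, while a shuffled input stays near n log n. A throwaway sketch of mine, counting how many elements get partitioned as a rough proxy for comparisons:

```python
import random

def quicksort(xs, counter):
    # Naive quicksort with the first element as pivot; counter[0] tallies
    # how many elements are partitioned in total (a proxy for comparisons).
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    return quicksort(lo, counter) + [pivot] + quicksort(hi, counter)

# Sorted input: every partition is maximally lopsided -> quadratic work.
worst = [0]
quicksort(list(range(200)), worst)   # worst[0] == 200 * 199 / 2 == 19900

# Shuffled input: balanced partitions on average -> roughly n log n work.
random.seed(1)
shuffled = list(range(200))
random.shuffle(shuffled)
typical = [0]
quicksort(shuffled, typical)         # typical[0] is several times smaller
```

With a better pivot choice (median-of-three, or a random pivot) the sorted case stops being pathological, which is exactly the tuning the comment is getting at.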

~~~
jacquesm
> I don't know about other people, but I'm reinventing the wheel to learn
> better.

That's an excellent reason for doing so. But the GGP's point, I think, was
about people reinventing the wheel simply because they are not aware of what
was done in the past.

> An algorithm you reinvented only to discover a better version later will
> stick to your head. And it's also fun doing it.

That's very true. I played around with clustering for a while, then decided to
get serious about it, which meant downloading and studying all the papers I
could find in open-access journals or on the authors' websites. That way you
get the best of both worlds: first you gain the experience of doing it
yourself and come to appreciate the problems of that particular domain, then
you get to see how others dealt with those problems.

