Hacker News
Ask HN: What is the most beautiful piece of code you've ever read?
476 points by seisvelas 24 days ago | 379 comments
Preferably an elegant snippet rather than an entire (well engineered) codebase.

For me it's a BASIC one-liner which generates mazes

  10 PRINT CHR$ (205.5 + RND (1)); : GOTO 10
I found this specific one via slashdot[1], but something similar, which I've never managed to find/replicate, was used to generate mazes on the Atari 800 XL at my school when I was a kid.

[1] https://developers.slashdot.org/story/12/12/01/1847244/how-d...
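For the curious, here is a rough Python equivalent of the one-liner (an illustrative sketch, not from the thread; Unicode ╱ and ╲ stand in for the C64's PETSCII diagonals):

```python
import random

def ten_print(n):
    # CHR$(205.5 + RND(1)): RND(1) is in [0, 1), so the argument lies in
    # [205.5, 206.5) and truncates to PETSCII 205 or 206 -- the C64's two
    # diagonal glyphs. Unicode diagonals stand in for them here.
    return "".join(random.choice("╱╲") for _ in range(n))

# Print a few rows of "maze":
for _ in range(10):
    print(ten_print(40))
```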

No, it does not generate a maze. It generates a random sequence of \s and /s that can trigger a maze being generated in a typical user's mind.

This is the art of illusion.

What is a maze? One could argue that if the user perceives a maze, then it produces a maze.

> One could argue that if the user perceives a X, then it produces a X.

This can be stretched to turn anything into anything else. For example, why can't this comment be a maze? In that case 10 REM; 20 END is my even shorter, more elegant maze program because I see a maze in it.

Some constraints that are typically implied when people say "computer generated mazes":

- They are solvable (have a start and end)

- OR they loop endlessly with no dead ends.

- Walls and spaces consume 1 element on the grid.

- No space on the grid is surrounded by all 4 NWSE walls.

>> One could argue that if the user perceives a X, then it produces a X.

> This can be stretched to turn anything into anything else. For example, why can't this comment be a maze?

Because unlike the one-liner nobody perceives it as a maze, so the "this can be stretched to turn anything into anything else" argument is tenuous...

I believe your third criterion is actually met by this generator. The grid is just tilted 45 degrees.

> This can be stretched to turn anything into anything else. For example, why can't this comment be a maze?

Name one person who perceives your comment as a maze.

It must be perceived as a maze by people for it to apply.

As it stands, I’m not sure if people perceive your comment as a maze.

Maybe theoretically there’s a nonzero number of people who perceive the comment as a maze. But unless your audience is those few people, there wouldn’t be much utility in it. In the OP’s example this maze perception is not only highly widespread but it’s also leveraged for a particular intended effect.


The idea that words used in specific contexts can't have clear meanings is the more tedious thing.

Never expected to find myself on the path to enlightenment after reading a line of BASIC.

Maze has a starting point and a goal. The goal must be reachable from the start.

Only if you want to give the occupant of the maze a chance to escape :-)

So this would be more of a labyrinth than a maze?

No, a labyrinth is a single twisting route without branches. A maze has branches and thus can have dead-ends. Both must have an entry and exit point (technically mazes may have more than one exit and entry, but most don't).

That’s open for debate. https://en.wikipedia.org/wiki/Labyrinth:

”Although early Cretan coins occasionally exhibit branching (multicursal) patterns [...] both logic and literary descriptions make it clear that the Minotaur was trapped in a complex branching maze. [...] In English, the term labyrinth is generally synonymous with maze. As a result of the long history of unicursal representation of the mythological Labyrinth, however, many contemporary scholars and enthusiasts observe a distinction between the two.”

> a labyrinth is a single twisting route without branches. A maze has branches and thus can have dead-ends.

I was today years old when I learned this.

Jeopardy this: Only then can you realize, there is no maze.

Everything is an illusion. \s and /s are just liquid crystal pixels on the screen, filtering some light. However, it's easier to say that if something has a name and you can identify it, then it's probably that thing. If I see a maze, it means it's a maze :)

>If I see a maze, it means it's a maze :)

I'd suggest to not go bar-hopping in Thailand with that mindset...

Nice to see a bit of casual transphobia on Hacker News.

Nice to see confusing a joke reference to a regional fact (as to things that can seem X but be Y) with transphobia. Anybody told you that merely referencing trans women in Thailand is opposing them? Projecting much?

Zhyl 24 days ago [flagged]

I'm amazed that you're defending that comment.

coldtea 23 days ago [flagged]

Well, I am amazed that one can be so rude.

First, you come blazing with accusations of transphobia, because of a joking mention of Thailand's ladyboy scene, to challenge the parent's notion that everything is WYSIWYG. Do you even know me? Or do you have a hobby of randomly assuming things about people you don't know?

Second, when I don't take that quietly, you add the above comment about having the gall to defend myself, going for the "This animal is extremely vicious. When attacked, it defends itself" angle...

Perhaps you can go even lower against some stranger on the internet, but you won't be getting any replies from me in this thread.

Good on you for standing for what you say. I personally didn't see any transphobia in your comment, what you said is merely a joke on a fact of that part of the world. The "phobia" was added on by Zhyl

The transphobia isn't related to the 'fact' about Thailand. If anything, the fact that it's a reference point is the reason why the sentiment is distasteful. The implication/joke is:

* A person goes into a bar in Thailand, sees what they think is a girl and are surprised to find that things are not what they seem.

Broken down:

* Person sees a girl

* It is not a girl

* This is a bad thing

Now, it would be easy to claim (as indeed you and GP seem to be doing) that this isn't a slur on the second party in this encounter. It's a fact of the world. They exist. OP, the person who is saying that 'things are what they seem to be', would probably not want to find themselves in a situation where they were making an assumption that the person is female.

The fact that this is negative can be seen in the 'I hope you don't' words which set the tone and are the basis of most of the sarcasm. I have to say that even if GP had picked a different example, I don't like this kind of comment anyway. It's a bitter, jabby comment that I don't think furthers discussion and overall makes HN a more negative and uncomfortable place to be.

So an assumption I made, which of course is debatable, is that the setting of 'bar' implies a romantic or sexual intent on the first party. GP will probably deny or refute this, but really without this implication there isn't much of a joke to speak of. "I spoke to this girl and it turned out she was a dude! What larks we had!".

GP seems to be making this claim in this comment:

>Nice to see confusing a joke reference to a regional fact (as to things that can seem X but be Y) with transphobia. Anybody told you that merely referencing trans women in Thailand is opposing them? Projecting much?

If it was 'merely referencing' then it wouldn't be a joke. 'Merely referencing' would be to reword the comment as:

"Things are not always what they seem, like the commonly accepted phenomenon of Thai ladyboys".

I mean, not much different, and still in poor taste for a thread that is about beautiful code, but the tone of the comment is completely changed. It's now no longer a jab at the PC, but using a counterexample to illustrate a point.

And why is suggesting that someone might be shocked to discover a ladyboy transphobic anyway? Basically, it boils down to the fact that trans people have to suffer against the stigma that they're somehow 'out to trick' people and that unassuming straight males need to 'watch out'. This comment is a prime example of this. It's literally a warning. Sure, it's mild transphobia, but it's perpetuating this stigma and I feel like Hacker News generally, and this thread specifically, are not really places for this kind of comment.

I normally wouldn't write this much out about this topic, but GP seemed very upset about their comment being called transphobic. Note, I didn't call them transphobic at all. I didn't actually make any comment about them, their person, their intent or anything. Just that the comment was 'casual transphobia' which I firmly believe that it is.

Let's break down the rest of their comment:

>Well, I am amazed that one can be so rude.

In both cases they use my own words back to me. I'm not sure why. It seems to be a mix of defensiveness and sarcasm. I don't actually think I was being rude - certainly no ruder than they were being to who they were commenting to. I felt that their original comment was snide.

>First, you come blazing with accusations of transphobia

I called the comment transphobic. Addressed above.

>to challenge the parent's notion that everything is WYSIWYG

Not very well.

>Do you even know me? Or you have a hobby of randomly assuming things about people you don't know?

This is incredibly defensive. Especially the second part. It's not enough to call out transphobia, they have to then pile on unfounded accusations of 'randomly assuming things about people' I don't know. I think the irony of this is amazing. They're literally accusing me of doing exactly what they're doing. It's like a bad girlfriend stereotype. (If you want to do some whataboutism, why not pick apart the inherent sexism of that comment? I'll give you a starter for 10 by lampshading it).

>Second, when I don't take that quietly, you add the above comment about having the gall to defend myself, going for the "This animal is extremely vicious. When attacked, it defends itself" angle...

So GP edited this comment a couple of times. Originally this was basically all there was. 'Having the gall' to defend themselves is a laugh. There was no self reflection, no admission that what they wrote could have been seen as transphobic. Just immediately jump to a victim paradox. 'Poor old me being attacked by nasty SJW just for merely mentioning ladyboys. Is it a crime? What did I do wrong????'

>Perhaps you can go even lower against some stranger on the internet, but you won't be getting any replies from me in this thread.

I mean I love this sign off. It's the reason I knew I had gotten to them and the reason I didn't reply. They didn't want to hear it. It was clear they weren't going to learn, and in fact they're telling me that they don't want a response anyway. I love the way that they're so deep in the role of the victim that they think I'm going to go off and harass more strangers. It's a lovely thought, but I don't do that. I just calls em as I sees em.

So anyway, I'm glad that you don't see the transphobia, but I don't think I was wrong to call them out on what I saw as implied transphobia.

Is the user then not amazed?

A maze does not have to be solvable.

There’s actually an entire book on this at: https://10print.org/

A 324-page book originating from one line of code! Amazing.

Was kinda hoping the maze on the left was procedurally generated.

This book is some nice addition to "Computer Culture".

Perl version if you don't have a BASIC interpreter handy:

perl -C -e 'print chr(9585.5+rand) while 1'

  Wide character in print at -e line 1.

Oh, Windows maybe? add:


To the front.

Unicode on languages made before it existed is...argh.

What is the probability that it prints a completable[0] maze?

Expressed in terms of line width (w) and number of lines (n).

[0] Where there's a valid path from the first line to the last.

1/2^n, I believe (where n is the number of lines after the first one; if n counts all lines, it's 1/2^(n-1)). So halved every time: 100% with 1 line, 50% with 2 lines, 25% with 3 lines, and so on. Width is irrelevant (you can do this in your head comparing w=1 n=2 to w=2 n=2 for example).

Edited for clarity and accuracy.

One line, not completable:

Also, I don't believe it's independent of width. I think with constant height, the probability of a completable one should increase with width, and become almost 1 for very large widths.

Hm, good points, looks like I need more sleep. Looking at the video again, a path is valid if two slashes follow each other, with the same pattern shifted by 1 on the line below.

But I wasn't treating /\ as an invalid on the first line. Eg thinking of:

    /\\\\\\\ … and so on
(Using the /\ pattern, paths can go back up and come back down, so this requires a lot more thought than I instinctively put into it)
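For anyone who wants numbers: here is one way to estimate the probability empirically (a sketch, with assumptions not stated in the thread: the diagonals are walls, walls that touch at a lattice corner block passage, and the left/right edges are solid). Under that reading, a top-to-bottom route exists exactly when no connected chain of walls links the left edge to the right edge, which union-find over wall endpoints can check:

```python
import random

def _find(parent, x):
    # Union-find "find" with path compression.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def _union(parent, a, b):
    parent[_find(parent, a)] = _find(parent, b)

def solvable(rows):
    """rows: equal-length strings of '/' and '\\', read as walls.
    A '/' in cell (r, c) spans lattice points (r+1, c)-(r, c+1);
    a '\\' spans (r, c)-(r+1, c+1). A top-to-bottom route exists
    iff no chain of touching walls links the left edge to the right."""
    w = len(rows[0])
    LEFT, RIGHT = ("left",), ("right",)
    parent = {LEFT: LEFT, RIGHT: RIGHT}

    def pt(r, c):
        # Register a lattice point, anchoring boundary columns
        # to the virtual left/right edge nodes.
        if (r, c) not in parent:
            parent[(r, c)] = (r, c)
            if c == 0:
                _union(parent, (r, c), LEFT)
            if c == w:
                _union(parent, (r, c), RIGHT)
        return (r, c)

    for r, row in enumerate(rows):
        for c, ch in enumerate(row):
            if ch == '/':
                _union(parent, pt(r + 1, c), pt(r, c + 1))
            else:
                _union(parent, pt(r, c), pt(r + 1, c + 1))
    return _find(parent, LEFT) != _find(parent, RIGHT)

def estimate(w, n, trials=10000):
    # Monte Carlo estimate of P(completable) for an n-row, w-column grid.
    hits = sum(
        solvable(["".join(random.choice("/\\") for _ in range(w))
                  for _ in range(n)])
        for _ in range(trials))
    return hits / trials
```

On this model even a single row isn't always completable: an alternating /\/\ run forms one wall chain spanning the whole width, matching the one-line counterexample above.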

This, as a WebGL shader in shadertoy: https://www.shadertoy.com/view/ld23DW

Oh wow, this is a cool loop closure. I wrote a CHIP-8 emulator to learn stuff and my main test ROM was this program. I walked through the bytecode very slowly as I learned and debugged, and repeatedly wondered how the heck the program was so tiny.

Wow! This is really cool. I just entered it quickly in a C64 emulator and it ran, creating a maze. Not sure whether it would be a solvable one, but it is interesting to note that the paths are really connected!

> the paths are really connected!

What do you mean by that?

The paths make sense. I.e. you get uninterrupted lines, surrounded by walls, so it is a real maze and not just a lookalike or partial maze. Oh, and it's a-maze-ing! :-)

I'm still not sure what you mean. It just prints / and \ randomly. What is a maze lookalike and partial maze? Would you mind sharing any examples?

If we define "elegant" as getting a lot done with a little, another BASIC (kinda..) one: https://linusakesson.net/scene/a-mind-is-born/ (The last part of the song is so great, too.)

The mysterious maze algorithm of Atari game Entombed: https://www.bbc.com/future/article/20190919-the-maze-puzzle-...

That was great, thanks

while [ 1 ]; do for i in `seq 1 80`; do printf "\xE2\x95\xB$(( ( RANDOM % 2 ) + 1 ))"; done; printf "\n"; done

That is very cool, but does it produce no branches?

ohh wow,

I remember being fascinated with a Pascal program that made snow fall.

Basically, it was white dots on a blue background in an infinite loop. :)

Somewhere I have a copy of a BASIC program I typed into my TRS-80 Color Computer as a kid, from an issue of K-Power magazine.

I don't recall if it was one of the full issues, or one of the "insert issues" found within Family Computing of the era.

It was a special kind of "snow fall" generator; IIRC, there were only a couple of versions, one for the Apple IIe and the other for the PCjr - I likely used the PCjr version for the conversion since the BASIC language was nearly identical between the PCjr and my CoCo 2.

What was special was that it managed to seemingly move hundreds of pixels relatively rapidly on the screen; now, this might not seem like such a feat, but it really was considering we're talking about a computer system running an interpreted BASIC program, with a max speed below 1 MHz in standard mode (0.89 MHz - there was a so-called "speedup poke" that "doubled" the speed - POKE 65495,0 if anyone cares).

I don't recall what trick it used - or if it really used anything special at all - but I do remember being impressed by that bit of code.

I love it!

The one that blew my mind when I was in college was a simplified version of quicksort in Haskell. It's just so elegant and clean.

    quicksort :: Ord a => [a] -> [a]
    quicksort [] = []
    quicksort (p:xs) = quicksort lesser ++ [p] ++ quicksort greater
      where
        lesser  = filter (< p) xs
        greater = filter (>= p) xs
Now surely someone may come along and point out how this isn't a true quicksort[0] because it doesn't partition the elements in place, but it's more of the simplicity of the logic and its readability that showed me how beautiful functional code can be.

[0] https://stackoverflow.com/questions/7717691/why-is-the-minim...

Quick sort comes with a steep penalty. Worst case is O(n^2). The reason quicksort is good is because it's in place. Once you throw away the in place aspect of quick sort, it's straight up bad.

This implementation of quicksort is actually a great example of why functional programming sucks. It silently transforms an O(1) space algorithm into an O(n) space one, and adds an enormous constant time overhead.

Algorithms that are optimal under the mutable data assumption are different than algorithms that are optimal under the constant data assumption. So a normal programmer might sort via quicksort in Haskell because it's the optimal sort in imperative languages even though naive Haskell quicksort is objectively worse in every way than naive Haskell mergesort.

Performant programming in Haskell requires a much more intimate understanding of the underlying architecture than performant programming in, for instance, C++. And that's a very low bar.

Ok, I’ll bite.

If having to think a bit harder about your sorting algorithm is why functional programming sucks, can I give examples of every bullshit concurrency problem I’ve had in Java as an example of why imperative programming sucks?

Persistent data structures are a bit slower but largely become non issues if you deal with concurrency and avoid a lock that you would have otherwise required in C or Java.

It’s not like the people who wrote Haskell are idiots; most of these data structures end up sharing a lot of data, and a “new” structure is often only the diffs from the previous version. Not to mention that since you have a compile time guarantee that the data isn’t going to change, you effectively avoid any need for defensive copying.

I’m sure you can find some benchmark that proves C++ is faster in the most technical sense, but for network and multithreaded applications, it’s not even close; it’s so much easier to make sure a functional concurrent language is actually correct that the benchmarks become almost irrelevant.

I say this as a die hard FP fan: I agree with your parent comment. I think FP is generally good enough or sometimes even quite close. But I don't know how many times I ended up writing C in Haskell or scheme to make an algorithm really fast (once all other algorithms have been tried or discarded).

The quicksort example is a shitty quicksort, because it will be slow as molasses. A proper quicksort in Haskell will degrade you to writing C in Haskell (with the benefits of a large standard library) and then you have lost. C in C will always beat C in Haskell or any other language.

Is it a worthwhile tradeoff? I believe so. The times I am limited by speed in that way are few and far between, and most often it was because I thought something like O(n^2) would be good enough, which is easily fixable. Sometimes I just need to make optimal code faster. By then I always wish I were using an imperative language instead.

I'm genuinely curious; how often do you actually end up writing sorts in Haskell? I just use the built-in sort function, which is fast enough.

Maybe your use-case is different than mine, but I typically use Haskell (or another functional language) for network applications, which typically don't benefit from tight-loops since the network is almost always the bottleneck anyway. For these kinds of applications, having proper concurrency and IO handling matters a lot more than worrying about whether your loops result in a cache-miss.

I have had a couple of times where the overhead of idiomatic Haskell warranted rewriting it in a mutable imperative way. Sorting was never a problem, but things like functional heaps (leftist or pairing heaps) will never be as efficient as mutable, cache aware ones.

The lower level mucking around when you have to do those kinds of optimizations is, IMO, much more pleasant in imperative languages.

Sure, I won't argue with that. I've never used Haskell for that low-level of work, so I can't speak with any kind of expertise on that; I've never found Haskell's FFI to be terribly hard to use though, so you could conceivably get the best of both worlds.

Also, have you tried Liquid Haskell? It uses refinement types to let you use the "unsafe" and fast versions of functions to guarantee correctness while also increasing performance.

Is the level of my English lacking? The comment you're agreeing with clearly seems to argue that FP sucks as a whole because it's not as fast for tight-looped algorithms, not that it sucks only for them.

> most of these data structures end up sharing a lot of data, and a “new” structure is often only the diffs from the previous version.

Like everything else in life, those data structures come with tradeoffs. Accessing an array element is an indexing operation into a contiguous block of memory. The indexing is built into the instruction set as an addressing mode, the array's memory is contiguous (so better locality of reference for caching), and if you're traversing it predictably, prefetch hardware can bring it into cache before you issue the instructions that do the read.

The same operation in a persistent vector is a function call, several pointer dereferences, and a more complex on-memory structure with more overhead. It works elegantly, but it's a completely different (and likely slower) level of abstraction from a simple in-memory vector.
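To make the locality point concrete, here's a crude stand-in comparison (not the parent's benchmark; a linked chain of tuples plays the pointer-chasing role of a persistent structure):

```python
import timeit

N = 100_000
flat = list(range(N))

# Singly linked chain of (value, next) tuples: every step is a
# pointer dereference to a separately allocated node.
linked = None
for v in reversed(flat):
    linked = (v, linked)

def sum_flat():
    total = 0
    for v in flat:           # contiguous traversal, prefetch-friendly
        total += v
    return total

def sum_linked():
    total = 0
    node = linked
    while node is not None:  # pointer chasing on every element
        total += node[0]
        node = node[1]
    return total

print(timeit.timeit(sum_flat, number=20))
print(timeit.timeit(sum_linked, number=20))
```

The exact ratio varies by machine and runtime; the point is only that the same traversal pays extra for the indirection.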

A few years ago, I switched a toy search program away from ImmutableJS and over to naive copying of raw JS data structures. I don't remember the exact performance delta, but it was something like a 10-100x speedup, for code that was otherwise structurally identical:


In this case, I had a goal to achieve, a timeline on which to achieve it, and it was the ability to switch to simpler, closer-to-the-metal data structures that made it happen. Had I needed more optimization, the most logical approach would have been to remove even more copying and rely even more heavily on mutation. This shouldn't come as a surprise, because at their core, these machines are fundamentally built on mutation, and there's a cost to pretending otherwise.

Now, in fairness to the persistent structures, this is something like a worst case scenario for their use - the algorithm is almost entirely dominated by manipulating relatively small data structures without much opportunity for sharing. It was also single threaded, small, and developed by a single developer in a small amount of time, so the organizational benefits of immutability were not apparent either.

If this was a huge redux state shared across a team of 100 developers and updated only a few times a second, I could probably tell a completely different story. It's easy to imagine (based on firsthand experience!) the kind of havoc that can be inflicted on a project with an erroneous update.

I get the enthusiasm for functional programming and persistent structures, etc., but at the end of the day it's just an engineering approach. One of many, and there's room to do the work to choose between them.

> I get the enthusiasm for functional programming and persistent structures, etc., but at the end of the day it's just an engineering approach. One of many, and there's room to do the work to choose between them.

I don't disagree with this but that's not what you said initially. You said "this is why FP sucks".

I definitely think that if your work is doing tight-loops where every micro-second matters, an imperative language will usually be the correct approach. I don't think anyone is (or would) argue against that point, including most Haskellers.

However, functional languages do simplify things with network/concurrent applications. There's a reason that something like MapReduce is popular. Something like Spark or Onyx is just simpler to do correctly and work with than trying to achieve something equivalent using C or C++; maybe this is just me.

EDIT: My bad, I was responding to the wrong person, this person never said FP sucks. I apologize!

> You said "this is why FP sucks".

No, I did not.

You are absolutely right; I misread some usernames. I apologize for the misattribution.

No worries... I figured it might be something like that.

> It silently transforms an O(1) space algorithm into an O(n) space one, and adds an enormous constant time overhead.

Please tell me what imperative quicksort algorithm has O(1) space. All versions I've seen and could recall use recursion; although each recursive call uses O(1) space, in the worst case of bad pivot element selection each recursive call would only really sort one element resulting in a worst-case O(n) space. Use of randomization would result in a high probability of choosing a good pivot element, but even then you can expect approximately O(log n) space.

Also would like to see why you think the Haskell version has enormous constant time overhead. Where do you think this overhead comes from? If you are comparing to an equivalent program in C++ then sure allocations and stuff, but compared with the typical Haskell list-processing programs I don't see any significantly larger overhead.

I don't know that I would ever do this, since O(lg n) space is normally trivial, but couldn't you use an RNG that's based on, say, the depth and start position of a given "stack frame" of a recursion-less quicksort?

Like to "recurse" (not actually recurse, but pretend) you would increment depth, then update the start position, and calculate the new partition index based on the (depth, start) tuple?

And run that in reverse for going back up the stack.

edit: Hah. This is fun. There's a variant where you do tricks with the elements of the array to get constant space.


The idea is that you partition the elements, but then instead of storing the bounds of the left and right sides, you switch the element from the start of the right side with the partition element of the "stack frame" above you. This later serves as a flag indicating the end of the right side: since the pivot of the parent "stack frame" is greater than all elements on the right, you know you've hit the end of the right side when you see a number larger than the parent "stack frame"'s pivot.

The path towards O(1) space instead of O(log n) space is fraught with technicalities. At the bottom, you realize that no matter what algorithm you use, you always store the array size in lg(n) bits! So it makes more sense to say “less memory” rather than O(1) memory for sorts.

For non recursive quicksorts see code at bottom of this link: https://www.geeksforgeeks.org/iterative-quick-sort/

That link basically just simulates recursion by defining a stack in the function. It recognizes that in the recursive version among the stack variables only the two indexes need to be stored so it stores them. It is even less space efficient than the recursive version because it always allocates O(n) space.
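For reference, the standard way to bound the stack (a sketch, not the linked page's code) is to partition in place, loop on the smaller side, and push only the larger side, so the explicit stack never holds more than O(log n) entries:

```python
import random

def quicksort(a):
    """In-place quicksort with an explicit stack capped at O(log n):
    we always loop on the smaller partition and push only the larger,
    so each coexisting stack entry is at most half the previous one."""
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        while lo < hi:
            # Random pivot guards against worst-case-prepared inputs.
            p = a[random.randint(lo, hi)]
            i, j = lo, hi
            while i <= j:            # Hoare-style partition
                while a[i] < p:
                    i += 1
                while a[j] > p:
                    j -= 1
                if i <= j:
                    a[i], a[j] = a[j], a[i]
                    i += 1
                    j -= 1
            # Loop on the smaller half, push the larger half.
            if j - lo < hi - i:
                stack.append((i, hi))
                hi = j
            else:
                stack.append((lo, j))
                lo = i
    return a
```

Combined with the random pivot, this keeps auxiliary space at O(log n); the O(n^2) worst-case time remains possible, merely improbable.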

I don’t see how this is a great example of how functional programming sucks. It is easy in any programming language to build a slow sorting algorithm. The fact that you need to use different algorithms with immutable data is irrelevant. You need to use different algorithms for pretty much any type of data structure.

Functional programming makes a different set of tradeoffs than imperative programming. You have to work harder to get peak performance, but in many cases it is actually quite performant. Just look at how fast Elm is for web applications, and how that approach allows for a time-traveling debugger and virtually no crashes.

> It silently transforms an O(1) space algorithm into an O(n) space one, and adds an enormous constant time overhead.

It doesn't silently transform O(1) to O(n), the code is explicitly O(n^2) space worst-case. The only 'silent' transformation that could happen here is an optimization to improve performance. Also, I don't know where you got the 'enormous constant time overhead' part.

> Algorithms that are optimal under the mutable data assumption are different than algorithms that are optimal under the constant data assumption.

A normal programmer wouldn't be writing their own sort functions. A normal Haskell programmer would understand mutability in Haskell.

> Performant programming in Haskell requires a much more intimate understanding of the underlying architecture than performant programming in, for instance, C++.

It really depends on what you're writing and how much performance you actually need. Implementing a moderate-complexity parallel data processing algorithm in Haskell may result in a slightly slower but much simpler implementation than C++. An implementation of the same complexity in C++ may be slower than Haskell. Writing performant, parallel, safe code for a moderately complex algorithm in C++ is far from easy.

I remember one trick from olympiads that I thought was brilliant out-of-the-box thinking: shuffle the input before sorting to avoid worst-case-prepared inputs.

> Quick sort comes with a steep penalty. Worst case is O(n^2). The reason quicksort is good is because it's in place. Once you throw away the in place aspect of quick sort, it's straight up bad.

That's why you shuffle the list before you sort it :)

This is why Python uses Timsort: https://en.wikipedia.org/wiki/Timsort

I saw a similar version in erlang (in Joe Armstrong's Programming Erlang) and agree it beautifully illustrates the concept of quicksort. Apparently, Tony Hoare came up with it when he took a class teaching him recursion (i.e. he was a student).

But the flaw you note unfortunately undermines performance... the "quick" in "quicksort".

Thus, IMHO it's a compelling illustration of the profound strengths and weaknesses of fp.

Agreed that this is beautiful purely from the standpoint of how there is essentially no disconnect between the concept of the algorithm and the way it's expressed in code. It's almost the math/algo just translated directly into code (not commenting on the efficiency or other issues that may exist).

I won't comment on how quicksort-y this is, but the combination of pattern matching and recursion is bread-and-butter Haskell. A number of seemingly complex algorithms can be implemented in a similar way.

Having said that, it is elegant and clean and -- taken by itself -- does make Haskell very attractive.

When it comes to Haskell, this function that generates all the fibonacci numbers did it for me:

  fibs = 0 : scanl (+) 1 fibs
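It works because the list is defined in terms of itself: scanl (+) 1 fibs emits 1 and then keeps adding the next Fibonacci number to a running sum, and those partial sums are the Fibonacci numbers again. A non-lazy analogue for comparison (a Python sketch, trading the self-referential list for an explicit state pair):

```python
from itertools import islice

def fibs():
    # (a, b) plays the role of scanl's accumulator walking down
    # the stream: yield the current number, then shift the pair.
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(list(islice(fibs(), 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
```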

Another beaut!

The problem with this piece of code is not that it doesn't do it in-place. The problem is, if the input list is (almost) constant, then this code will (almost) certainly take quadratic time to sort it, even if it is shuffled before being fed to this function.

However, I get what you mean. It really is beautiful!

EDIT: grammar.

Though it's not a true quicksort, it's possible to explain the idea of the algorithm with code like this in just a few minutes, while the truest implementation in C with Hoare partitioning looks really confusing.

Unfortunately it's also incorrect! It discards NaNs, as they are neither < nor >=.

In the example there was no talking about floating point numbers. The ordered type _has_ a total ordering as shown by the Ord typeclass restriction.

I would argue that this is better than most alternatives.

Haskell blows your mind whatever ;)

I began coding in IBM/LCSI PC Logo. The first line of code I ever wrote was:

  FD 100
That's the "hello, world" of turtle graphics in Logo. While probably not as beautiful as the several splendid examples posted in this thread, that simple line of code changed my world. I could make stuff happen in an otherwise mostly blank monochrome CRT display. Until then I had seen CRTs in televisions, where I had very little control over what I saw on the screen. But now, I had control. The turtle became my toy and I could make it draw anything on a 320 x 250 canvas.

The next beautiful piece of code I came across in the same language was:

  REPEAT 360 [FD 1 RT 1]
The code above draws an approximation of a circle by combining 360 short line segments. It showed me how control flow can be used elegantly to express complex ideas in a simple expression. And then I came across this:

  REPEAT 20 [REPEAT 180 [FD 1 RT 2] RT 18]
The above code draws 20 overlapping circles. The output looks like this: https://susam.in/files/blog/dosbox-logo-1.png .

At an impressionable age of 9, reading and writing code like this, and using simple arithmetic, geometry, logic, and code to manipulate a two-dimensional world had a lasting effect on me. I like to believe that my passion for software engineering as well as my love for writing code, sharing code, and open source development are a result of coming across these beautiful code examples early in my life.
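If you want to see why `REPEAT 360 [FD 1 RT 1]` comes out as a circle without firing up Logo, the turtle's walk is easy to simulate. A small Python sketch (the `walk` helper is invented here, not a Logo API):

```python
import math

def walk(steps):
    """Trace (forward, turn-right-degrees) steps from the origin, heading east."""
    x = y = 0.0
    heading = 0.0
    points = []
    for dist, turn in steps:
        x += dist * math.cos(math.radians(heading))
        y += dist * math.sin(math.radians(heading))
        heading -= turn          # RT turns clockwise
        points.append((x, y))
    return points

# REPEAT 360 [FD 1 RT 1]: 360 unit steps, each followed by a 1-degree right turn
path = walk([(1, 1)] * 360)
```

The 360 vertices form a regular 360-gon: the turtle ends exactly where it started, and the figure's width is about 2 * 360/(2*pi), roughly 114.6 units, which is why it reads as a circle.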

The true beauty of Logo comes in the realization that you can add your own words to it, and that all these specialist words you devise are themselves first-class citizens of the language, of syntax and status equal to its generic builtins. Hence:

    TO CIRCLE
      REPEAT 360 [FD 1 RT 1]
    END

    REPEAT 20 [CIRCLE RT 18]
and so on ad infinitum, until you arrive at a complete custom vocabulary that concisely and precisely expresses the particular concepts and behaviors of interest and importance to you. In doing so, you move from thinking algorithmically (which is glorified spaghetti) to thinking compositionally, which is the key to scalability – managing complexity as your needs and ambitions grow.

Whereas Algol-y languages treat user-defined vocabulary as second-class citizens, beneath their own privileged built-ins. Which is a ridiculous inversion of status when you consider which is actually important to the user: precise, powerful, tailored words that describe their particular problem, or the crude, primitive, undifferentiated building blocks that the language dumps out of the box?

The beauty of bottom-up programming (as any Lisp fule kno:) is that it endlessly tests your own understanding of the problem domain: to define effective, productive words you must have some idea of what you’re talking about; you can’t help but learn the foundations of the problem space as you go. There’s a basic humility to this approach; there’s nowhere to hide laziness or ignorance.

Whereas in top-down programming it’s much too easy for highly-educated highly-paid absolute know-nothings to bullshit eternally, constructing great theatrical class architectures; grand mechanical castles in the sky that look all very difficult and impressive to observers while never saying anything relevant or useful.

That key switch from algorithmic to compositional thinking is not a natural conceptual leap for self-learners – it takes a carefully directed prod at a particular point in the learning curve to jump those rails – but it opens up worlds. #PlatosCave

Thanks for writing this. I think you've explained why I've always liked the ability to invoke functions without parentheses. I always thought it simply appealed to my sense of aesthetics, and I've had somewhat of a hard time defending it but now that you mention it, building my own first-class primitives is a big part of it.

Oh man, PC Logo takes me back down memory lane. We were introduced to PC Logo in 4th grade, about 9 years ago. I instantly fell in love with it. The amount of power I had over the computer - ordering it to draw what I want, how I want - was really exciting.

My father had installed Kubuntu on our home computer, which had an amazing suite of educational applications, including KTurtle, which was (almost) the same as PC Logo. I got so addicted to it that my father suggested I create a blog and regularly update it with the drawings and the code for them. And so I did [1]! I used Google's Blogspot (that was the thing back then). Good times :).

1: http://kturtlecommands.blogspot.com/

Edit: Just checked the blog after almost 6 years, it still gets 100+ weekly impressions, haha :D

I clicked on your blog and all the images are gray exclamation marks :(

Wayback Machine to the rescue: https://web.archive.org/web/20121023081714/https://kturtleco...

I recommend adding bookmarklets for Wayback Machine: https://en.wikipedia.org/wiki/Help:Using_the_Wayback_Machine...

Thanks for this! But none of the saved pages seem actually saved or clickable.

Yeah, the images got corrupted or lost in Google's servers somehow some time around 2013. Couldn't get them back :/

Logo was the best!

I still remember my father ‘explaining’ Pythagoras’ theorem to me when I was around 5, to show me how to draw the roof of a house on our hand-soldered Philips Apple II clone.

I’ve been hooked ever since.

The best thing was that 25 years later I opened a Logo emulator again and when faced with having to clean the screen somewhere deep, deep from my muscle memory the right command sprang forward: CLEAR

For all the Logo fans in this thread, I have created a Slack workspace here for us to get together: https://bit.ly/fd100slackinvite

Please do join it even if you don't remember Logo anymore. The intention here is not to discuss Logo but to share the joy of computing that we discovered through Logo and has remained in our lives. I hope to see you all there. :-)

By the way, there is also a #fd100 channel on Freenode IRC, but I am not sure whether most HN users prefer IRC or Slack.

In the same vein :

CALL -151

That was the entry point to Apple 2's "monitor" (ie bare bones assembler). Not the most beautiful, but very evocative, gave me a sense of power :-)

Oh my, so many memories. Especially the one where our teacher took us to the courtyard and had us act out the turtle in real life, with him giving the instructions.

For all you Logo fans out there, you should really play "Duskers". It's Logo meets Aliens.

Cool, we got Logo, BASIC and now Prolog. All one needs to begin programming, eh.

John Carmack's Fast Inverse Square Root: https://en.wikipedia.org/wiki/Fast_inverse_square_root#Overv.... The first time I really and truly felt that people approach problems differently from how I, by default, go about them.
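The whole trick fits in a few lines. A Python transliteration of the famous C (the `struct` round-trips stand in for the original's pointer cast; the constant and Newton step are as described on the Wikipedia page):

```python
import struct

def fast_inv_sqrt(x):
    """Approximate 1/sqrt(x) with the Quake III bit-level hack."""
    # reinterpret the float's bits as a 32-bit integer ("evil bit hack")
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # the magic constant minus a shifted exponent gives a rough first guess
    i = 0x5F3759DF - (i >> 1)
    # reinterpret the bits back as a float
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # one Newton-Raphson iteration sharpens the estimate
    return y * (1.5 - 0.5 * x * y * y)
```

Even with a single Newton iteration the relative error stays under about 0.2%.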

IIRC Carmack didn't write that code, as brilliant as he is.

It even says that he didn’t write it in that wikipedia page.

The most beautiful piece of code you've ever read contains a magic number with the comments "evil floating point bit level hacking" and "what the fuck?"... you're a madman.

CPUs have much faster (2-5 cycles latency) and much more precise (<0.04% relative error) version in hardware, since 1999: https://www.felixcloutier.com/x86/rsqrtps

I wonder which one used less energy in 1999?

This is still one of my favorites. In a list of magic numbers, sorted by magic-ness, 0x5F3759DF would sit pretty high.

it's the antithesis of maintainable code. But given that one is unlikely to want to change from calculating the inverse square root to inverse cube root or any other variation....

I disagree strongly. This code is very maintainable: it does not have dependencies, it is trivial to test, it is wickedly short, and with the appropriate comment there is no confusion regarding its purpose. Also, its field of applicability is clear from the context: replace this code with a call to fsqrt if you happen to have a fast hardware implementation of it. It is the most easily maintainable code, ever!

By "the appropriate comment" you mean, and I quote:

"// what the fuck? "

The one maintainability metric, and I think the most important one, is whether it's easy to modify (to make it do something slightly different).

You'd be hard pressed to write an inverse square cube without basically rewriting the whole function. There's nothing that can be reused. The only saving grace is that it is side-effect free, so replacement is trivial, unlike a lot of other code that's not maintainable.

Your metric is only applicable within the context of the problem domain. In 3-d graphics, the inverse square root (1 divided by the square root of x) is an extremely common operation (such as for normalizing vectors), but the inverse cube root pretty much isn't used, so it's safe to assume you will not need to extend it to arbitrary powers.

As the designer of the code, you would understand that the inverse square root is a standalone problem.

The question is how likely are you to need it to do something different, and what are the different things it might need to do? This will depend on the domain, but making things too flexible hampers maintainability too.

>inverse square cube

You mean inverse cube root? I believe the geometry ain’t easily upgradeable this way.

it's a NUMBER - there's zero maintenance required for a number of constant value.

If you enjoy this sort of thing, this book is full of bit twiddling algorithms you never thought possible: https://www.amazon.com/Hackers-Delight-2nd-Henry-Warren/dp/0...

How do I get a job in /this/ area? I couldn't care much to do JavaScript programming, but this is awesome.

I came here to post this. I remember back in the day when it was widely discussed. Still impressive to this day.

And it also made the guys a ton of money :) They found a way to take something useful in theory and other practical applications and use it to make something never before seen for the mass market. That one magic number alone is responsible for a good portion of his net worth, and of course his Ferrari.

Pick a random line from a file / stream without knowing how many lines there are to choose from in one pass without storing the lines that have been seen.

  perl -e 'while(<>){$x=$_ if rand()<=(1/$.)}print $x'
For each line, pick that line as your random line if a random number (0<=n<1) is less than the reciprocal of the number of lines read so far ($.).

It hits my elegant bone. Only one line... rand < 1/1, pick it. Two lines, same as one, but the second line has a 1/2 chance of replacing line one. Third line, same as before, but gets a 1/3 chance of taking the place of whichever line has survived the first two picks. At the end... you have your random line.

Same number of keystrokes, but IMHO more idiomatic & readable:

    perl -ne '$x=$_ if rand()<=(1/$.); END { print $x }'

Ha, that's how I originally wrote it, but I thought I should get rid of -n and END{} in an attempt to ward off the "eww, Perl" comments. Sigh.

But at least now I know that it's called Reservoir Sampling. I had wondered how to generalize it to wanting N lines.
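For anyone curious, the N-line generalization is the classic Algorithm R, only a few lines in Python (a sketch; the names are mine):

```python
import random

def reservoir_sample(stream, k):
    """Keep k uniformly random items from a stream in a single pass."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)       # fill the reservoir first
        else:
            j = random.randrange(i + 1)  # uniform in 0..i
            if j < k:
                reservoir[j] = item      # replace with probability k/(i+1)
    return reservoir
```

With k=1 this degenerates to exactly the Perl one-liner: replace the survivor with probability 1/n on the n-th line.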

Does this algorithm guarantee uniform probability for all lines? Seems like the original ordering of the lines is a factor.

The OP should have mentioned that it's this algorithm:


IMO it's a lot clearer if it's not in Perl ...

The pseudocode in Wikipedia also avoids division.

> IMO it’s a lot clearer if it’s not in Perl

What isn’t?

If you want to monte carlo it, https://gist.github.com/patio11/546c64c927c749c69964b18527fa... ; feel free to adjust the constants upwards and/or do a chi squared test after doing so.

It didn't sound plausible to me (because I misunderstood what the algorithm was; yay perl), hence quickly testing it, but it seems to work, and after appreciating what the algorithm actually is (see sibling replies), it not only works but it _should_ work. (There is a CS versus engineering joke in here, somewhere.)

Perhaps surprisingly, yes, it does! This is called "reservoir sampling", and a blog post with some more details is available at https://blog.plover.com/prog/weighted-reservoir-sampling.htm....

It does.

It's a special case of reservoir sampling.


Something super simple but that really entertained me when learning lisp:

  (loop (print (eval (read))))

to have a REPL. (Just reverse the letters, easy enough to remember).

That to me is elegance. It's simple yet powerful, and just 4 words really.
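The four words, reversed: read, eval, print, loop. A rough Python rendering of the same loop (a sketch; `eval` on untrusted input is dangerous, and the `read`/`write` parameters here are stand-ins I made up):

```python
def repl(read, write=print):
    """loop { print(eval(read())) }, i.e. REPL spelled backwards."""
    while True:
        line = read()
        if line is None:      # treat None as EOF and stop looping
            return
        write(eval(line))
```

Feed it `input` and `print` and you get a crude, expression-only Python prompt.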

Nice. I immediately had to try the same in PHP. To make a working REPL with newlines in the output etc., this is what I came up with:

    while(1) {eval(fgets(STDIN));echo "\n";};
I then tried it on the command line like this:

    php -r 'while(1) {eval(fgets(STDIN));echo "\n";};';
Hurray, it prompted me for input! So I typed:

    for ($i=0;$i<10;$i++) echo $i;
Which got me:

    0123456789

So far so good.

I wondered: Can we now run the repl in the repl? So I typed:

    while(1) {eval(fgets(STDIN));echo "\n";};
It kept prompting me for input. Am I in a REPL in a REPL now? I typed:

    echo "Hello from a REPL in a REPL!";
And the reply was:

    Hello from a REPL in a REPL!
I'm not totally sure if I believe it though.

It is that simple! It has practical benefits. For example, you can call that code from somewhere specific in your program, which is handy for debugging. If your `eval` function takes a lexical environment, you can also implement a debugger easily.

To test whether your logic is working properly, make your repl print out a different character when it prompts for input. For example, the toplevel repl can print out "> " whereas the inner repl prints out "repl> "

For bonus points, your repl should exit if you type Ctrl-D. That way you can go from the inner repl to the outer repl, and from there it should exit your program.

Sounds like a Nine Inch Nails song.

"I am in a REPL in a REPL in a REPL,

Running all my code in nested loops..."


This is the first example I've seen in this thread that I would consider beautiful. The rest are neat tricks, but quite difficult to understand, so I wouldn't be happy to see them in production code.

I mean... You really shouldn't have an unchecked repl in your production code either.

True but at least you'd understand what's going on :D

Don’t you have the cart before the horse; isn’t the acronym derived from this definition?

Yeah REPL comes from this definition of "reading, evaluation, printing, looping". But in lisp you write it in reverse order, "Loop, print, evaluate, read".

I'm not saying the acronym came before or after the LISP code :P

  (define eval-expr
    (lambda (expr env)
      (pmatch expr
        [`,x (guard (symbol? x))
          (env x)]
        [`(lambda (,x) ,body)
          (lambda (arg)
            (eval-expr body (lambda (y)
                              (if (eq? x y)
                                  arg
                                  (env y)))))]
        [`(,rator ,rand)
         ((eval-expr rator env)
          (eval-expr rand env))])))
There's an amazing talk about it: https://www.youtube.com/watch?v=OyfBQmvr2Hc

Great. In 90 minutes I may be able to know what you're talking about.

Is it a complete lisp interpreter, written in lisp?

Yes, it uses pmatch behind the scenes to match your expr against the evaluator.


I have to agree, this ranks #1 in my book!

This is exactly what came to mind when seeing the question.

I was just recently (re-)reading an article that goes in depth:

Lisp as the Maxwell’s equations of software


Once you've seen this one, it's difficult not to rank it first :)

Y Combinator?

Any of Norvig's notebooks, http://www.norvig.com/ipython/README.html, especially his Concrete Introduction to Probability https://nbviewer.jupyter.org/url/norvig.com/ipython/Probabil....

Oh my God, his Lisp interpreter in Python: amazing. At the time I was just getting a grip on Python and starting in Lisp; it came at just the right moment in my self-education.

Really really good, although for a lot of people here it might be a little elementary (but then, the best code always feels elementary even when doing something advanced!)

IMHO, the most beautiful lisp interpreter in a language that is not lisp is this one in prolog.

https://www.metalevel.at/lisprolog/ https://www.metalevel.at/lisprolog/lisprolog.pl

I'd have to think long and hard for the most beautiful code I've ever read, but I think the classic K&R "strcpy" comes pretty close:

  void strcpy(char *s, char *t) {
      while (*s++ = *t++);
  }
It's short, elegant, and quite readable to the trained eye–a bit sharp too, but if you use it right it's quite functional.

I dunno. The security ramifications of those few lines of code make me squirm. It's like looking at a very beautifully constructed foot gun.

I wonder how much damage that code has caused.

At the time the code was written, the security ramifications were not quite the same as they are now. Even now, I would suggest that there are times where such a construction would be just fine.

Nit, but an important one: the security ramifications were just as large then as now; they were just largely unknown at the time.

I would say it’s clever but not beautiful code.

> quite readable to the trained eye

This kind of code is cute, but awful for readability. The author could easily (more easily!) have written the function to be very readable, but went for the cute ultra-compact style instead.

The only reason it's at all readable is that it's solving such a simple problem. Write more complex functionality in that style, and you quickly get a nightmare.

I believe MISRA C outright bans this kind of thing (three assignments with no sequence point), as it has no place in a codebase of real consequence.

I understand that some C programmers pride themselves on being able to read this kind of code (far more impressive than being able to write it), but I see no reason for it in serious software work.

I've got a better code snippet: your exact same code snippet but for a language which will short circuit based on the lvalue of the assignment expression.

Then t would never be able to overflow s (nor even eat its null terminator)

The size of the buffer pointed to by s may be larger than the current string it holds. It may also be uninitialized.

Or point to the wrong thing, or point to unmapped memory, or be const, or…there's a lot of things that can go wrong with this function.

The things I listed are things the function does right as written but could not do if it worked the way suggested above.

The things you listed are general concerns in C which are unrelated to both the correct implementation and the suggested changes.

That’s what strncpy() is for

Watch out: what strncpy does is usually not what you'd want it to do.

That's what strlcpy() is for.

Or strncpy, if you'd like to stay within the standard and your strings are small.

No, it's not. strcpy is fine to use if the destination buffer is larger than you need or uninitialized. That's idiomatic...

This PNG parser in Elixir uses pattern matching in an elegant way.


Parsing binary data can be...nontrivial. The beauty is in the code that doesn't exist.
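For contrast, the same parse without binary pattern matching means explicit offsets and format strings. A Python sketch reading just the signature and the IHDR chunk (layout per the PNG spec; `parse_ihdr` is a name invented here):

```python
import struct

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

def parse_ihdr(data):
    """Pull width/height out of a PNG's IHDR chunk (the first chunk after the signature)."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError('not a PNG')
    # chunk layout: 4-byte big-endian length, 4-byte chunk type, then the payload
    length, ctype = struct.unpack('>I4s', data[8:16])
    if ctype != b'IHDR':
        raise ValueError('IHDR must come first')
    width, height, bit_depth, color_type = struct.unpack('>IIBB', data[16:26])
    return width, height, bit_depth, color_type
```

In Elixir the same thing is a single binary pattern in a function head, which is the elegance the parent is pointing at.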

Exploit code is often the most beautiful code I read: it is usually small, and it takes some dazzling brilliance to push the computer into doing what it wasn't meant to do.

I can remember the first code that showed how to exploit IFS, race conditions via symlink, the classic "smashing the stack", RTM's worm.

The beauty of code, to me, has nothing to do with the formatting, comments, or documentation, but everything to do with the mind that bent it into place. Most of the beautiful code I have ever seen would be classified as ugly, spaghetti, not production-worthy.

>RTM's worm

I had absolutely no idea until now that RTM co-founded Y Combinator. I remember him being mentioned in Bruce Sterling's The Hacker Crackdown and Clifford Stoll's The Cuckoo's Egg, as well as the occasional Phrack article.


He was one of PG's Viaweb cofounders as well. He also introduced PG to the third cofounder, Trevor Blackwell.

I have similar admiration of hardware hacks.

My current favorite is the one that dumped the SecureROM out of the iPhone 6 via PCI-e: http://ramtin-amin.fr/#nvmedma, http://ramtin-amin.fr/#nvmepcie

I'm partly impressed by the tooling used; most of the coolness (to me) is the author's self-confidence in his hunch that the SoC _didn't_ have its MMU set up properly, and the way he followed his nose in determining that he was probably right.

I still wonder exactly how much was sunk into the project, before it was possible to determine that the MMU was indeed broken. Heh.

The SQLite source tree and DRH's code in general are truly pieces of art. Almost every line of code is carefully commented. Despite the complexity of the project, you'll learn a lot of practical concepts, including expression tree generation, bytecode execution, how to test your code, and so forth.

Not to mention humor -- there's a reference in SQLite to Golding's Lord of the Flies -- and I'm guessing more.

I love the SQLite guy. His own version control, his own DB engine; you can tell from the site that he’s an individualist!

He mentioned on a podcast that he even uses his own text editor.

This reminds me of Chuck Moore (inventor of Forth). He designs his own CPUs, using his own CAD package, written in the programming language he invented!

Wow! Do you remember which podcast? Now I am interested in this interview!

Oh? What podcast?

Implementing FizzBuzz with explicit monoidal operations has been one of those things that I go back to and marvel at the simplicity and elegance of: http://dave.fayr.am/posts/2012-10-4-finding-fizzbuzz.html

It really inspired me to get better at seeking out the fundamental operations of whatever I was implementing.
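The post is in Haskell, but the kernel of the idea survives a loose Python sketch: each rule contributes a string under concatenation (the monoid), and the empty string (the identity) falls back to the number:

```python
def fizzbuzz(n, rules=((3, "Fizz"), (5, "Buzz"))):
    # fold the rule outputs under the string-concatenation monoid;
    # the empty result (the monoid identity) falls back to the number itself
    combined = "".join(word for d, word in rules if n % d == 0)
    return combined or str(n)
```

Adding a new rule like `(7, "Bazz")` is just another element to fold in; no branching logic changes.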

That is great.

I think c4 (a C compiler in four functions, hence the name) is pretty neat.


That's my choice too. It's a C-subset, but more than a compiler; it includes a bytecode VM interpreter too. The best part about it is that it's tiny, yet very readable for its size and functionality.

Considering the magnitude of its impact on science, engineering and culture (as well as the tremendous force released), this line from the IGNITION subroutine in the BURN_BABY_BURN module of the Apollo AGC source code does it for me:


I love the comments in that file. Especially these two:



For me it's the unix example in [1] "AT&T Archives: The UNIX Operating System".

    $ makewords sentences | lowercase | sort | unique | mismatch -
It reads a file called sentences then prints the words that are not spelled correctly.

To me it's concise, expressive, flexible, modular... which makes it beautiful...

[1] https://youtu.be/tc4ROCJYbm0

For me, the answer is - The code that never existed.

Not to sound cheeky, but eliminating code is a beautiful thing. Less code is easier to maintain and understand, and faster to run. So the less code you need, the better the software will be overall.

This brings me to a very nice regular expression. It is the one being recommended in

    RFC3986 "Uniform Resource Identifier (URI): Generic Syntax"
    by  T. Berners-Lee, R. Fielding and L. Masinter

to parse a URI. Having seen so many regular expressions that try to match it all, this regex tries to assume as little as possible while, at the same time, matching any string, because it can never fail to match. It does not define what to match but what not to match, making every component optional.

The corresponding part from the spec:

      ^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?
       12            3  4          5       6  7        8 9

    The numbers in the second line above are only to assist readability;
    they indicate the reference points for each subexpression (i.e., each
    paired parenthesis).  We refer to the value matched for subexpression
    <n> as $<n>.  For example, matching the above expression to

       http://www.ics.uci.edu/pub/ietf/uri/#Related
    results in the following subexpression matches:

      $1 = http:
      $2 = http
      $3 = //www.ics.uci.edu
      $4 = www.ics.uci.edu
      $5 = /pub/ietf/uri/
      $6 = <undefined>
      $7 = <undefined>
      $8 = #Related
      $9 = Related

    where <undefined> indicates that the component is not present, as is
    the case for the query component in the above example.  Therefore, we
    can determine the value of the five components as

      scheme    = $2
      authority = $4
      path      = $5
      query     = $7
      fragment  = $9
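That regex drops straight into any engine. A quick check in Python, reproducing the spec's own example (regex and expected captures are from RFC 3986, Appendix B):

```python
import re

# the URI-parsing regular expression from RFC 3986, Appendix B
URI = re.compile(r'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')

m = URI.match('http://www.ics.uci.edu/pub/ietf/uri/#Related')
scheme, authority, path, query, fragment = m.group(2, 4, 5, 7, 9)
```

Note that the match never fails: every component group is optional, so an unparseable "URI" just yields undefined components instead of an error.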

One of my favorite RFCs for sure, and it's a beautifully simple and useful regex. It's a shame RFCs seem to have fallen out of fashion, the WHATWG URL spec is an unreadable mess of a monster, not least the parsing section: https://url.spec.whatwg.org/#url-parsing

Deleting code is a beautiful feeling. Python's new walrus operator has gotten a lot of hate recently, but a few days ago I was writing a script and realized I could get rid of a few lines (AND have my meaning be clearer) using the walrus operator. So satisfying.
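A representative before/after (hypothetical snippet; the point is that the binding and the test collapse into one expression):

```python
import re

log_line = "error code 404"   # hypothetical input

# before: bind, then test, then use - three steps
m = re.search(r'\d+', log_line)
if m:
    code = m.group()

# after: the walrus operator folds the binding into the test
if (m := re.search(r'\d+', log_line)):
    code = m.group()
```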

As a fan of Erlang’s pattern matching, the walrus operator feels like an obvious win.

I do, however, resent Python 3 for removing pattern matching on tuples in function heads.

Reminds me of a saying I read from moderngpu library's wiki page, a highly optimized yet highly readable GPU basic primitive library: "Software is an asset, code a liability. On good days I'd add 200 or 300 lines to this repository. On great days I'd subtract 500." source: https://github.com/moderngpu/moderngpu/wiki/Introduction

The original `true` was a 0-byte executable file [0] (well, perhaps 1 byte for the newline character). Sadly, that is no longer the case.

[0] https://unix.stackexchange.com/questions/419697/why-are-true...

> For me, the answer is - The code that never existed.

> Not to sound cheeky but eliminating code, is a beautiful thing. Less code is easier to maintain, understand, and faster to run. So the less code you can achieve, the better overall the software will be.

Unless you think compressed/ minified code is beautiful, there must be additional factors involved other than minimizing LoC.

To me it's removing lines of code without hurting readability (or even improving it). Like when you don't know about a language feature, so you implement a hacky version, then find out there's already a very elegant standard way to do it, so you get to lance that hideous monstrosity off your code.

Duff's Device: https://en.wikipedia.org/wiki/Duff%27s_device

Very elegant use of the fall-through behavior of the switch statement.

For those unaware: *do not use this on modern systems*.

Normal for-loops are much faster than they were when Duff's Device was invented, since they take advantage of modern branch prediction.

That depends on how large the misprediction penalty is and how many times the branch executes. Branch prediction isn't magic. It never makes branches beneficial. The best it can do, with perfect accuracy, is bring the amortized misprediction penalty down to zero. The key is that Duff's Device also relies on branching, and it's a less predictable branch that's more likely to be mispredicted. On the other hand, one BP miss is still cheaper than several hits plus one miss at the end. On the other other hand, there are issues to consider like code size and cache turnover (favors a loop), handling of special cases where larger loads/stores can be used (favors DD), processor extensions, etc. If you're really trying to write an optimal memcpy-like function, you'll probably end up using a hybrid of several approaches - including Duff's Device.

Another thing to note is that loop unrolling can be harmful to performance, as it requires more instruction cache. Beyond branch prediction, modern compilers can also convert loops into vector/SIMD instructions, and other magic.

Duff's device is still useful for creating coroutines, though: a handy way of yielding, then returning to the yield point.

It's impossible to choose a favourite, but here's a beautiful snippet, courtesy of Joe Armstrong [0]

  universal_server() ->
      receive
          {become, F} ->
              F()
      end.
[0] https://joearms.github.io/published/2013-11-21-My-favorite-e...

Bash fork bomb (do not run, this will crash your system if it's not configured properly):

    :(){ :|:& };:

What is a proper configuration to avoid that?

That got my unix account suspended at university....

:-) DOS'ing the system is frowned upon.

I like the Windows one, too. It looks a lot simpler as well:

    %0|%0
I don't know Windows scripting, but that looks like something you'd have to put in a file and call for it to work, instead of just running it on the command line. You can do the same thing on Unix systems. Writing

    $0|$0&

in a file and running it should also work as a fork bomb.

The conceptually same thing will also work in the Bourne shell, but, as in the Windows case, only as a script stored in a file; the original, with its inline function definition, is a self-contained one-liner.

...and it smiles at you with its multiple faces

Faces I can interpret:

    :(   #sadface
    :()  #dopefish
    { :  #smiling one-toothed vampire
    { :| #blank stare with hat
    :|   #blank stare without hat
    |:   #other blank stare without hat
    |:&  #blank stare with bow
    };   #winking face with mustache
    };:  #winking four-eyed alien face with mustache
Only one of those was a smile, and you generally don't want a vampire smiling at you even if they're down a tooth! ;)


Changed the way I think about code

And to think that I once got a job interview question that asked me to invent a spell-checker in 45 minutes.

I completely forgot about this. This code is very beautiful too. Norvig doesn't take a brute-force approach; his code is worth studying, and his sudoku solver is just as good.

Thank you for the share! This was excellent.

Could you leave a sentence or two about how it made you "change the way [you] think about code"?

Because of Norvig's spelling corrector, my measure for good code is whether or not I can understand what it's supposed to be doing, and how it's supposed to work, from reading it in one pass.

This code passes that test. Before reading it I would have never thought that could be a goal— I couldn't imagine it was possible to know what a program does without comments and documentation.

When I heard the word "readable" from others, I understood it as "parsable". From an OO context, I thought readable meant neatly formatted lines that let you say "ah, yes, this is an if statement", "this is a constructor", and so on, but with no idea what the code actually does or means to do.

Norvig's spelling corrector is better read without comments. The names of the functions tell you what you need to know— their purpose, and their implementations tell you exactly what the author thinks they mean.

It's "declarative": "def correction(word):..." means exactly "the correction of a word is ___", "def candidates(word):..." means exactly "the candidate corrections of a word are ___". There's a strong functional/LISP influence here, which I only learned later.

Because this code demonstrated it was possible, my standard for all code is that it should tell you what it does from one reading. Most code (including my own!) doesn't come close, but aiming for that goal has tons of good effects on difficult code.

Thank you very much for your response!

Haskell allows you to define the full list of prime numbers in just two lines:

  module Primes where

  primes = 2 : filter isPrime [3..]
  isPrime x = all (\y -> mod x y /= 0) $ takeWhile (\y -> y * y <= x) primes
The beauty of this is that it's self-referential. The list `primes` is built by filtering all the natural numbers for prime numbers, using the predicate `isPrime` which itself uses the list `primes`. The only caveat is that we need to encode that 0 and 1 are not prime numbers, but 2 is a prime number, to provide the base cases for the recursion. Furthermore, `isPrime` uses that each non-prime number has at least one prime factor less than or equal to its square root, to ensure that it only needs to look at a finite number of possible prime factors.

If you have GHC in your repo, you can test this by putting it in a file and running `ghci` with the file as the only argument. It will give you a REPL where `primes` and `isPrime` are in scope:

  *Primes> isPrime 200
  False
  *Primes> take 10 primes
  [2,3,5,7,11,13,17,19,23,29]

It is also dirt slow. This is just trial division, done in a lazy fashion.

At least it is better than the one some Haskellers call the sieve of Eratosthenes (it isn't the sieve of Eratosthenes), which is so god-awful that I almost vomit every time I see it.

The real lazy sieve of Eratosthenes is a thing of wonder! There is a paper describing it, called "The Genuine Sieve of Eratosthenes", and it can be found by googling.
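The incremental idea also translates outside Haskell. A Python sketch of the same rolling sieve, using the well-known dict-of-composites formulation (not the paper's code):

```python
def primes():
    """Generate primes with an incremental sieve; no fixed upper bound needed."""
    composites = {}              # maps each known composite to the primes that hit it
    n = 2
    while True:
        if n not in composites:
            yield n                      # n was never marked, so it is prime
            composites[n * n] = [n]      # first composite this prime will cross off
        else:
            for p in composites.pop(n):  # slide each prime's marker forward
                composites.setdefault(n + p, []).append(p)
        n += 1
```

Each prime crosses off its multiples lazily, starting at its square, which is what the naive filter-chain version fails to do.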


  sieve :: Integral a => a -> [a]
  sieve l =
      sieve' [2..l] []
    where
      sieve' (p:ns) ps =
          sieve' (filter (\x -> rem x p /= 0) ns) (p : ps)
      sieve' [] ps =
          reverse ps

The lazy sieve I have seen, taken from the paper mentioned in my post:

    primes = sieve [2..]
    sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p > 0]
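The genuine sieve's core idea — keep a table mapping each upcoming composite to the primes that will cross it off — also fits in a few lines of Python. This is a common dict-based simplification of the idea, not the paper's actual priority-queue Haskell:

```python
from itertools import count

def gen_primes():
    # composites maps each known upcoming composite to its prime "witnesses".
    composites = {}
    for n in count(2):
        if n not in composites:
            yield n                      # never crossed off, so n is prime
            composites[n * n] = [n]      # n's first composite worth recording
        else:
            for p in composites.pop(n):  # advance each witness past n
                composites.setdefault(n + p, []).append(p)
```

Each composite is crossed off once per prime factor, instead of every candidate being trial-divided by every smaller prime.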

Curiously Recurring Template Pattern (CRTP) [0] in C++.

  class Derived : public Base<Derived>

[0] https://en.wikipedia.org/wiki/Curiously_recurring_template_p...

That's awesome. I ran into this situation in C#, and it looks like CRTP is useful in some cases there too.


tl;dr: "Generally, whenever you need the derived class information in the base class, you are probably looking for CRTP."

If you create an executable file with the following contents, and run it, it will delete itself.


There is a better example - original /bin/true source. Presented to you below.

Compare that with the version in GNU Coreutils[1] that is 50+ lines long so that false[2] can be written as:


#include "true.c"

[1] https://github.com/coreutils/coreutils/blob/master/src/true....

[2] https://github.com/coreutils/coreutils/blob/master/src/false...

Waiting for a keypress in ZX81 BASIC (maybe it's the opposite of elegant, but I remember it as the first time seeing a kind of easter egg in code):

    10 PAUSE 4E4

"stops computing & displays the picture for n frames of the television (at 50 frames per second, or 60 in America). n can be up to 32767, which gives you just under 11 minutes; if n is any bigger then it means 'PAUSE for ever'.

A pause can always be cut short by pressing a key"[0]

(4E4 = 4*10^4 = 40000)

[0] http://www.worldofspectrum.org/ZX81BasicProgramming/chap19.h...

This one is kind of contrived, but a work of mad genius. Mandelbrot code that resembles Mandelbrot, in Python 2:

    _                                      =   (
                                   V       ,B,c
                                 :c   and Y(V*V+B,B,  c
                   (              2+c-4*abs(V)**-0.4)/i
                     )  ;v,      x=1500,1000;C=range(v*x
                      );import  struct;P=struct.pack;M,\
                j  ='<QIIHHHH',open('M.bmp','wb').write
    for X in j('BM'+P(M,v*x*3+26,26,12,v,x,1,24))or C:
                i  ,Y=_;j(P('BBB',*(lambda T:(T*80+T**9
                      *i-950*T  **99,T*70-880*T**18+701*
                     T  **9     ,T*i**(1-T**45*2)))(sum(
                   [              Y(0,(A%3/3.+X%v+(X/v+
                                 /x   -2.7,i)**2 for  \
                                   A       in C
                                           )   )

That's a loaded, bait-like way to put the question. Why not ask for "the most elegant snippet" instead?

There are beautiful code bases like the DOOM source or SAT solvers that are supermodel-beautiful: complete and hard to improve. Marvel at them from a distance.

There is beautiful code in the libraries everybody uses all the time, all the lib* code. Somebody to marry. Discover its quiet beauty over time.

And then there is the beautiful snippet of clever code at the bar: quick to love, but once you get to know it, it's much more trouble than it's worth.

This is very common in Elm, but it blew my mind after years of programming in Python. In Python it is very easy to raise an IndexError by getting an element from a list by an index that doesn't exist, e.g.

    # python
    names = []

    names[0]  # <- raises IndexError
In Elm you are forced to always consider this possibility.

    -- Elm
    names = []

    case List.head names of
        Just name -> name
        Nothing -> "empty"

This always felt weird to me given that Python does a good job with dict:

    d = {}
    assert d.get("blah") is None  # no KeyError, unlike d["blah"]
I was impressed by Rust the first time I ran into:

    let mut last;
    for i in &[1, 2, 3] {
        last = i;
    }
    println!("{}", last);

  >> borrow of possibly uninitialized variable: `last`

Idiomatic python:

  v = names[0] if names else "empty"

Yes, but the point is you can forget the check in python but not in Elm.

Pike's code should be studied in schools.

Someone once posted a C program here on HN, where each line, even the comments, seemed to "line up" in 6- or 8- character blocks with a space in between. It made the whole program look sort of like a table.

I felt it was sort of like poetry. I unfortunately no longer have a link to it, and once looked very hard but couldn't find it.

I would be extremely appreciative of someone else saw it and had a link. If I recall correctly it was a type of interpreter, I remember it having code for parsing.

Most likely an IOCCC entry, but there are a lot that correspond to this description. Maybe you can find it back looking at previous winners of the contest: http://www.formation.jussieu.fr/ars/2000-2001/C/cours/COMPLE...

I appreciate your help, but I feel like it's unlikely to be in there since it wasn't really obfuscated - it was meant to be clear in some sense of the word and had a bunch of comments explaining it.

The comments sounded natural, but lined up so each Nth column was a space (or punctuation) all of the way down the code. None of the things I am seeing in that list seem to have quite the same layout.

There was some comment about it like "some people feel code should be beautiful to enjoy debugging" or something along those lines but I don't really remember.

I have been looking for a long time though and really wish I had saved it properly when I first saw it.

Bit tricks. Like

    public static int bitCount(int i) {
        i = i - ((i >>> 1) & 0x55555555);
        i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
        i = (i + (i >>> 4)) & 0x0f0f0f0f;
        i = i + (i >>> 8);
        i = i + (i >>> 16);
        return i & 0x3f;
    }
It was like magic for me when I encountered it first time.
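That snippet is Java (it's essentially the SWAR popcount used by `java.lang.Integer.bitCount`). Ported to Python — with an explicit 32-bit mask, since Python ints are unbounded — it's easy to sanity-check against a naive count; the name `bit_count` here is mine:

```python
def bit_count(i: int) -> int:
    # SWAR popcount: fold 1-bit counts into 2-bit fields, then 4-bit
    # fields, then sum the byte counts into the low 6 bits.
    i &= 0xFFFFFFFF
    i = i - ((i >> 1) & 0x55555555)
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333)
    i = (i + (i >> 4)) & 0x0F0F0F0F
    i = i + (i >> 8)
    i = i + (i >> 16)
    return i & 0x3F
```

The point of the trick is that it counts all 32 bits in a fixed number of arithmetic ops, with no loop and no branches.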

Came here to say this. I first saw it in the book Hacker's Delight, which is full of stuff that blew my mind back then. It really opened the door to writing early GPU shaders, where `if`s weren't allowed at the time, and to cutting branches from PS2/GameCube/Xbox code.

Check out the Othello implementation by Hans Wennborg with a bitboard at https://www.hanshq.net/othello.html

# Permutations of a list:

- haskell:

  perms [] = [[]]
  perms xs = [ x:ps | x <- xs , ps <- perms ( xs\\[x] ) ]
- js: (using https://github.com/tc39/proposal-slice-notation for conciseness)

  const perms = xs => xs.length === 0
    ? [[]]
    : xs.flatMap((xi, i) => perms([...xs[0:i], ...xs[i+1:]]).map(xsi => [xi, ...xsi]));

# Cartesian product of 2 or n lists

- haskell:

  cart2 xs ys = [(x,y) | x <- xs, y <- ys]

  cartn :: [[a]] -> [[a]]
  cartn [] = [[]]
  cartn (xs:xss) = [x:ys | x <- xs, ys <- yss]
                        where yss = cartn xss
- js:

  const cart2 = (xs, ys) => xs.flatMap(x => ys.map(y => [x,y]));

  const cartn = (...args) => args.reduce((yss, xs) => yss.flatMap(ys => xs.map(x => [...ys, x])), [[]]);
  // or recursive
  const cartn = (xs, ...xss) => xss.length === 0
    ? xs.map(x => [x])                 // wrap so every result is a flat array
    : xs.flatMap(x => cartn(...xss).map(ys => [x, ...ys]));

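For comparison, Python's standard library ships both operations directly, which makes a handy reference to test the snippets above against:

```python
from itertools import permutations, product

# Permutations of a list (tuples, in positional-lexicographic order)
assert [list(p) for p in permutations([1, 2, 3])] == [
    [1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]
]

# Cartesian product of two (or n) lists
assert list(product([1, 2], [3, 4])) == [(1, 3), (1, 4), (2, 3), (2, 4)]
assert next(iter(product([0, 1], repeat=3))) == (0, 0, 0)
```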
Until now I didn't know about flatMap() and flat(). Thanks for this!



20 GOTO 10

30 END


I remember typing this on an ASR-33 and being amazed. I made a computer do that. Then I hit Ctrl-C and learned how to make a paper tape with a "here is" leader, turning the paper punch off, then typing "LIST" and turning it back on before hitting RETURN.

A PDP-11/10 with 16K and 3 20mA TTYs connected, no disks, no storage except paper tape.

I've still got that paper tape somewhere, it's 42 years old now.

Had a KSR-33 wired up to a Synertek single-board 6502 kit... never did get the paper tape working, but man was it weirdly cool to be typing on a 50 pound behemoth to put numbers in the 2kB memory of a 6 ounce PC board.

A fully graphical calculator app in Rebol is just a few lines of code. The formatting looks pretty legible at the below link if you scroll down a bit.

REBOL [title: "Calculator"] view layout [ origin 0 space 0x0 across style btn btn 50x50 [append f/text face/text show f] f: field 200x40 font-size 20 return btn "1" btn "2" btn "3" btn " + " return btn "4" btn "5" btn "6" btn " - " return btn "7" btn "8" btn "9" btn " * " return btn "0" btn "." btn " / " btn "=" [ attempt [f/text: form do f/text show f] ] ]


Nice to see Rebol and Red mentioned!

Here's a Red version of the calculator: https://github.com/red/code/blob/master/Showcase/calculator.... You can see the syntax is very close. We strive for compatibility with Rebol, but are also changing things that we feel could be improved.

A small twist on the UI can use a `panel`, which supports a divider option to make grid layout easier. e.g., for 4 items across, you can do this:

  view [
     style b: button 50x50 bold
     panel 4 [
          b "1"  b "2"  b "3"  b "+"
          b "4"  b "5"  b "6"  b "-"
          b "7"  b "8"  b "9"  b "*"
          b "0"  b "."  b "/"  b "="
      ]
  ]
Red is fully open source, and can be compiled. Most code anyway. You can write code that is too dynamic to compile until we go JIT. Red is bootstrapped in Rebol, but will be self hosted next year. It is also its own, entire, toolchain. Red compiles to Red/System, a low (C) level dialect/EDSL, which compiles directly to machine code. You can mix and match the two in apps as well.

We're still alpha, and have a lot of work ahead: https://www.red-lang.org/

If you liked Rebol, check us out, and help create the future.

Our main chat is at https://gitter.im/red/red

Thanks for taking the time to format and preach about Red. I am cheering for y'all!

Code formatted:

  REBOL [title: "Calculator"]
  view layout [
      origin 0  space 0x0  across
      style btn btn 50x50 [append f/text face/text  show f]
      f: field 200x40 font-size 20 return
      btn "1"  btn "2"  btn "3"  btn " + "  return
      btn "4"  btn "5"  btn "6"  btn " - "  return
      btn "7"  btn "8"  btn "9"  btn " * "  return
      btn "0"  btn "."  btn " / "  btn "=" [
          attempt [f/text: form do f/text  show f]
      ]
  ]

REBOL and Red are so so cool. I wonder why hackers are not embracing them.

Rebol was closed source back when open source was really starting to take off with Perl and Linux. It is seriously cool (I agree), but the interpreter is a little slow.

Red still has a little ways to go, but could be a game changer some day. The project is insanely ambitious, but I'm optimistic.

The main (IMO sad) reason REBOL sits in a weird half-life position of "not quite dead, but..." is that, while REBOL Core is open, the GUI parts (which are crossplatform, and run on Linux) are still closed source.

It's a bit like .NET Core. Great to base a business on... not so great to tinker around with. Arguably the second (tinkering, learning) comes before the first (using what you've learned to build a business).

Mildly infuriating...

Just to clarify it's Rebol 2 (Core/etc) that is closed source.

Rebol 3 is open-sourced along with the GUI (R3-GUI)....

* https://github.com/rebol/rebol

* https://github.com/metaeducation/ren-c (community fork)

* https://github.com/zsx/r3-gui (Atronix fork)
