Ask HN: What is the most beautiful piece of code you've ever read?
476 points by seisvelas on Oct 28, 2019 | 380 comments
Preferably an elegant snippet rather than an entire (well engineered) codebase.



For me it's a BASIC one-liner which generates mazes

  10 PRINT CHR$ (205.5 + RND (1)); : GOTO 10
I found this specific one via slashdot[1], but something similar, which I've never managed to find/replicate, was used to generate mazes on the Atari 800 XL at my school when I was a kid.

[1] https://developers.slashdot.org/story/12/12/01/1847244/how-d...
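
A rough Python analogue, for anyone who wants to play along without an emulator (/ and \ standing in for the PETSCII diagonals; a sketch, not a faithful port):

    import random

    while True:
        print(random.choice('/\\'), end='')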




No, it does not generate a maze. It generates a random sequence of \s and /s, that can trigger a maze being generated in a typical user's mind.

This is the art of illusion.


What is a maze? One could argue that if the user perceives a maze, then it produces a maze.


> One could argue that if the user perceives a X, then it produces a X.

This can be stretched to turn anything into anything else. For example, why can't this comment be a maze? In that case 10 REM; 20 END is my even shorter, more elegant maze program because I see a maze in it.

Some constraints that are typically implied when people say "computer generated mazes":

- They are solvable (have a start and end)

- OR they loop endlessly with no dead ends.

- Walls and spaces consume 1 element on the grid.

- No space on the grid is surrounded by all 4 NWSE walls.


>> One could argue that if the user perceives a X, then it produces a X.

> This can be stretched to turn anything into anything else. For example, why can't this comment be a maze?

Because unlike the one-liner nobody perceives it as a maze, so the "this can be stretched to turn anything into anything else" argument is tenuous...


I believe your third criterion is actually met by this generator. The grid is just tilted 45 degrees.


> This can be stretched to turn anything into anything else. For example, why can't this comment be a maze?

Name one person who perceives your comment as a maze.

It must be perceived as a maze by people for it to apply.

As it stands, I’m not sure if people perceive your comment as a maze.

Maybe theoretically there’s a nonzero number of people who perceive the comment as a maze. But unless your audience is those few people, there wouldn’t be much utility in it. In the OP’s example this maze perception is not only highly widespread but it’s also leveraged for a particular intended effect.


Never expected to find myself on the path to enlightenment after reading a line of BASIC.


A maze has a starting point and a goal. The goal must be reachable from the start.


Only if you want to give the occupant of the maze a chance to escape :-)


So this would be more of a labyrinth than a maze?


No, a labyrinth is a single twisting route without branches. A maze has branches and thus can have dead-ends. Both must have an entry and exit point (technically mazes may have more than one exit and entry, but most don't).


That’s open for debate. https://en.wikipedia.org/wiki/Labyrinth:

”Although early Cretan coins occasionally exhibit branching (multicursal) patterns

[…]

both logic and literary descriptions make it clear that the Minotaur was trapped in a complex branching maze

[…]

In English, the term labyrinth is generally synonymous with maze. As a result of the long history of unicursal representation of the mythological Labyrinth, however, many contemporary scholars and enthusiasts observe a distinction between the two.”


> a labyrinth is a single twisting route without branches. A maze has branches and thus can have dead-ends.

I was today years old when I learned this.



Jeopardy this: Only then can you realize, there is no maze.


Everything is an illusion. \s and /s are just liquid-crystal pixels on the screen, filtering some light. However, it's easier to say that if something has a name and you can identify it, then it's probably that thing. If I see a maze, it means it's a maze :)


>If I see a maze, it means it's a maze :)

I'd suggest not going bar-hopping in Thailand with that mindset...


Nice to see a bit of casual transphobia on hackernews.


Nice to see confusing a joke reference to a regional fact (as to things that can seem X but be Y) with transphobia. Anybody told you that merely referencing trans women in Thailand is opposing them? Projecting much?


I'm amazed that you're defending that comment.


Well, I am amazed that one can be so rude.

First, you come blazing in with accusations of transphobia, because of a joking mention of Thailand's ladyboy scene, to challenge the parent's notion that everything is WYSIWYG. Do you even know me? Or do you have a hobby of randomly assuming things about people you don't know?

Second, when I don't take that quietly, you add the above comment about having the gall to defend myself, going for the "This animal is extremely vicious. When attacked, it defends itself" angle...

Perhaps you can go even lower against some stranger on the internet, but you won't be getting any replies from me in this thread.


Good on you for standing up for what you say. I personally didn't see any transphobia in your comment; what you said is merely a joke on a fact of that part of the world. The "phobia" was added on by Zhyl


The transphobia isn't related to the 'fact' about Thailand. If anything, the fact that it's a reference point is the reason why the sentiment is distasteful. The implication/joke is:

* A person goes into a bar in Thailand, sees what they think is a girl, and is surprised to find that things are not what they seem.

Broken down:

* Person sees a girl

* It is not a girl

* This is a bad thing

Now, it would be easy to claim (as indeed you and GP seem to be doing) that this isn't a slur on the second party in this encounter. It's a fact of the world. They exist. OP, the person who is saying that 'things are what they seem to be', would probably not want to find themselves in a situation where they're making an assumption that the person is female.

The fact that this is negative can be seen in the 'I hope you don't' words which set the tone and are the basis of most of the sarcasm. I have to say that even if GP had picked a different example, I don't like this kind of comment anyway. It's a bitter, jabby comment that I don't think furthers discussion and overall makes HN a more negative and uncomfortable place to be.

So an assumption I made, which of course is debatable, is that the setting of 'bar' implies a romantic or sexual intent on the first party. GP will probably deny or refute this, but really without this implication there isn't much of a joke to speak of. "I spoke to this girl and it turned out she was a dude! What larks we had!".

GP seems to be making this claim in this comment:

>Nice to see confusing a joke reference to a regional fact (as to things that can seem X but be Y) with transphobia. Anybody told you that merely referencing trans women in Thailand is opposing them? Projecting much?

If it was 'merely referencing' then it wouldn't be a joke. 'Merely referencing' would be to reword the comment as:

"Things are not always what they seem, like commonly accepted phenomena of Thaiwanese ladyboys".

I mean, not much different, and still in poor taste for a thread that is about beautiful code, but the tone of the comment is completely changed. It's now no longer a jab at the PC, but using a counterexample to illustrate a point.

And why is suggesting that someone being shocked to discover a ladyboy transphobic anyway? Basically, it boils down to the fact that trans people have to suffer against stigma that they're somehow 'out to trick' people and that unassuming straight males need to 'watch out'. This comment is a prime example of this. It's literally a warning. Sure, it's mild transphobia, but it's perpetuating this stigma and I feel like Hackernews generally, and this thread specifically, are not really places for this kind of comment.

I normally wouldn't write this much out about this topic, but GP seemed very upset about their comment being called transphobic. Note, I didn't call them transphobic at all. I didn't actually make any comment about them, their person, their intent or anything. Just that the comment was 'casual transphobia' which I firmly believe that it is.

Let's break down the rest of their comment:

>Well, I am amazed that one can be so rude.

In both cases they use my own words back to me. I'm not sure why. It seems to be a mix of defensiveness and sarcasm. I don't actually think I was being rude - certainly no ruder than they were being to who they were commenting to. I felt that their original comment was snide.

>First, you come blazing with accusations of transphobia

I called the comment transphobic. Addressed above.

>to challenge the parent's notion that everything is WYSIWYG

Not very well.

>Do you even know me? Or you have a hobby of randomly assuming things about people you don't know?

This is incredibly defensive. Especially the second part. It's not enough to call out transphobia, they have to then pile on unfounded accusations of 'randomly assuming things about people' I don't know. I think the irony of this is amazing. They're literally accusing me of doing exactly what they're doing. It's like a bad girlfriend stereotype. (If you want to do some whataboutism, why not pick apart the inherent sexism of that comment? I'll give you a starter for 10 by lampshading it).

>Second, when I don't take that quietly, you add the above comment about having the gall to defend myself, going for the "This animal is extremely vicious. When attacked, it defends itself" angle...

So GP edited this comment a couple of times. Originally this was basically all there was. 'Having the gall' to defend themselves is a laugh. There was no self-reflection, no admission that what they wrote could have been seen as transphobic. Just an immediate jump to a victim paradox. 'Poor old me being attacked by a nasty SJW just for merely mentioning ladyboys. Is it a crime? What did I do wrong????'

>Perhaps you can go even lower against some stranger on the internet, but you wont be getting any replies from me in this thread.

I mean I love this sign off. It's the reason I knew I had gotten to them and the reason I didn't reply. They didn't want to hear it. It was clear they weren't going to learn, and in fact they're telling me that they don't want a response anyway. I love the way that they're so deep in the role of the victim that they think I'm going to go off and harass more strangers. It's a lovely thought, but I don't do that. I just calls em as I sees em.

So anyway, I'm glad that you don't see the transphobia, but I don't think I was wrong to call them out on what I saw as implied transphobia.


Is the user then not amazed?


A maze does not have to be solvable.



There’s actually an entire book on this at: https://10print.org/


A 324-page book originating from 1 line of code! Amazing.


Was kinda hoping the maze on the left was procedurally generated.


This book is a nice addition to "Computer Culture".


Perl version if you don't have a BASIC interpreter handy:

perl -C -e 'print chr(9585.5+rand) while 1'


  Wide character in print at -e line 1.


Oh, Windows maybe? Add:

binmode(STDOUT);

To the front.

Unicode on languages made before it existed is...argh.


What is the probability that it prints a completable[0] maze?

Expressed in terms of line width (w) and number of lines (n).

[0] Where there's a valid path from the first line to the last.


1/2^n, I believe (where n is the number of lines after the first one, otherwise 1/2^n+1). So halved every time: 100% with 1 line, 50% with 2 lines, 25% with 3 lines, and so on. Width is irrelevant (you can do this in your head comparing w=1 n=2 to w=2 n=2 for example).

Edited for clarity and accuracy.


One line, not completable:

    /\/\/\/\
Also, I don't believe it's independent of width. I think with constant height, the probability of a completable one should increase with width, and become almost 1 for very large widths.


Hm, good points, looks like I need more sleep. Looking at the video again, a path is valid if two slashes follow each other, with the same pattern shifted by 1 the line below.

But I wasn't treating /\ as invalid on the first line. Eg thinking of:

    /\/\/\/\
    \\//////
    /\\\\\\\ … and so on
(Using the /\ pattern, paths can go back up and come back down, so this requires a lot more thought than I instinctively put into it)


This, as a WebGL shader in shadertoy: https://www.shadertoy.com/view/ld23DW


Oh wow, this is a cool loop closure. I wrote a CHIP-8 emulator to learn stuff and my main test rom was this program. I walked through the bytecode very slowly as I learned and debugged, and repeatedly wondered how the heck the program was so tiny.


Wow! This is really cool. I just entered it quickly in a C64 emulator and it ran, creating a maze. Not sure whether it would be a solvable one, but it is interesting to note that the paths are really connected!


> the paths are really connected!

What do you mean by that?


The paths make sense. I.e. you got uninterrupted lines, surrounded by walls, so it is a real maze and not just a lookalike or partial maze. Oh, and it's a-maze-ing! :-)


I'm still not sure what you mean. It just prints / and \ randomly. What is a maze lookalike and partial maze? Would you mind sharing any examples?


If we define "elegant" as getting a lot done with a little, another BASIC (kinda..) one: https://linusakesson.net/scene/a-mind-is-born/ (The last part of the song is so great, too.)


The mysterious maze algorithm of Atari game Entombed: https://www.bbc.com/future/article/20190919-the-maze-puzzle-...


That was great, thanks


while [ 1 ]; do for i in `seq 1 80`; do printf "\xE2\x95\xB$(( ( RANDOM % 2 ) + 1 ))"; done; printf "\n"; done


That is very cool, but does it produce no branches?


ohh wow,

I remember being fascinated with a Pascal program that made snow fall.

Basically, it was white dots on a blue background in an infinite loop. :)


Somewhere I have a copy of a BASIC program I typed into my TRS-80 Color Computer as a kid, from an issue of K-Power magazine.

I don't recall if it was one of the full issues, or one of the "insert issues" found within Family Computing of the era.

It was a special kind of "snow fall" generator; IIRC, there were only a couple of versions, one for the Apple IIe and the other for the PCjr - I likely used the PCjr version for the conversion since the BASIC language was nearly identical between the PCjr and my CoCo 2.

What was special was that it managed to seemingly move hundreds of pixels relatively rapidly on the screen; now, this might not seem like such a feat, but it really was considering we're talking about a computer system running an interpreted BASIC program, with a max speed below 1 MHz in standard mode (0.89 MHz - there was a so-called "speedup poke" that "doubled" the speed - POKE 65495,0 if anyone cares).

I don't recall what trick it used - or if it really used anything special at all - but I do remember being impressed by that bit of code.


I love it!


The one that blew my mind when I was in college was a simplified version of quicksort in Haskell. It's just so elegant and clean.

    quicksort :: Ord a => [a] -> [a]
    quicksort [] = []
    quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
        where
            lesser = filter (< p) xs
            greater = filter (>= p) xs
Now surely someone may come along and point out how this isn't a true quicksort[0] because it doesn't partition the elements in place, but it's more of the simplicity of the logic and its readability that showed me how beautiful functional code can be.

[0] https://stackoverflow.com/questions/7717691/why-is-the-minim...


Quicksort comes with a steep penalty. Worst case is O(n^2). The reason quicksort is good is because it's in place. Once you throw away the in-place aspect of quicksort, it's straight up bad.

This implementation of quicksort is actually a great example of why functional programming sucks. It silently transforms an O(1) space algorithm into an O(n) space one, and adds an enormous constant time overhead.

Algorithms that are optimal under the mutable data assumption are different than algorithms that are optimal under the constant data assumption. So a normal programmer might sort via quicksort in Haskell because it's the optimal sort in imperative languages even though naive Haskell quicksort is objectively worse in every way than naive Haskell mergesort.

Performant programming in Haskell requires a much more intimate understanding of the underlying architecture than performant programming in, for instance, C++. And that's a very low bar.


Ok, I’ll bite.

If having to think a bit harder about your sorting algorithm is why functional programming sucks, can I give examples of every bullshit concurrency problem I’ve had in Java as an example of why imperative programming sucks?

Persistent data structures are a bit slower but largely become non issues if you deal with concurrency and avoid a lock that you would have otherwise required in C or Java.

It’s not like the people who wrote Haskell are idiots; most of these data structures end up sharing a lot of data, and a “new” structure is often only the diffs from the previous version. Not to mention that since you have a compile time guarantee that the data isn’t going to change, you effectively avoid any need for defensive copying.

I’m sure you can find some benchmark that proves C++ is faster in the most technical sense, but for network and multithreaded applications, it’s not even close; it’s so much easier to make sure a functional concurrent language is actually correct that the benchmarks become almost irrelevant.


I say this as a die hard FP fan: I agree with your parent comment. I think FP is generally good enough or sometimes even quite close. But I don't know how many times I ended up writing C in Haskell or scheme to make an algorithm really fast (once all other algorithms have been tried or discarded).

The quicksort example is a shitty quicksort, because it will be slow as molasses. A proper quicksort in Haskell will degrade you to writing C in Haskell (with the benefits of a large standard library) and then you have lost. C in C will always beat C in Haskell or any other language.

Is it a worthwhile tradeoff? I believe so. The times I am limited by speed in that way are few and far between, and most often it is because I thought something like O(n^2) would be good enough, which is easily fixable. Sometimes I just need to make optimal code faster. By then I always wish I were using an imperative language instead.


I'm genuinely curious; how often do you actually end up writing sorts in Haskell? I just use the built-in sort function, which is fast enough.

Maybe your use-case is different than mine, but I typically use Haskell (or another functional language) for network applications, which typically don't benefit from tight-loops since the network is almost always the bottleneck anyway. For these kinds of applications, having proper concurrency and IO handling matters a lot more than worrying about whether your loops result in a cache-miss.


I have had a couple of times where the overhead of idiomatic Haskell warranted rewriting it in a mutable imperative way. Sorting was never a problem, but things like functional heaps (leftist or pairing heaps) will never be as efficient as mutable, cache aware ones.

The lower level mucking around when you have to do those kinds of optimizations is, IMO, much more pleasant in imperative languages.


Sure, I won't argue with that. I've never used Haskell for that low-level of work, so I can't speak with any kind of expertise on that; I've never found Haskell's FFI to be terribly hard to use though, so you could conceivably get the best of both worlds.

Also, have you tried Liquid Haskell? It uses refinement types to let you use the "unsafe" and fast versions of functions to guarantee correctness while also increasing performance.


Is the level of my English lacking? The comment you're agreeing to clearly seems to argue that fp sucks as a whole because it's not as fast for tight looped algorithms, not that it sucks only for them.


> most of these data structures end up sharing a lot of data, and a “new” structure is often only the diffs from the previous version.

Like everything else in life, those data structures come with tradeoffs. Accessing an array element is an indexing operation into a contiguous block of memory. The indexing is built into the instruction set as an addressing mode, the array's memory is contiguous (so better locality of reference for caching), and if you're traversing it predictably, prefetch hardware can bring it into cache before you issue the instructions that do the read.

The same operation in a persistent vector is a function call, several pointer dereferences, and a more complex on-memory structure with more overhead. It works elegantly, but it's a completely different (and likely slower) level of abstraction from a simple in-memory vector.

A few years ago, I switched a toy search program away from ImmutableJS and over to naive copying of raw JS data structures. I don't remember the exact performance delta, but it was something like a 10-100x speedup, for code that was otherwise structurally identical:

https://github.com/mschaef/react-matchstick/commit/070802b69...

In this case, I had a goal to achieve, a timeline on which to achieve it, and it was the ability to switch to simpler, closer-to-the-metal data structures that made it happen. Had I needed more optimization, the most logical approach would have been to remove even more copying and rely even more heavily on mutation. This shouldn't come as a surprise, because at their core, these machines are fundamentally built on mutation, and there's a cost to pretending otherwise.
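
A crude way to see that gap in a sketch (Python rather than JS; note that real persistent structures share structure rather than copying wholesale, so this deliberately overstates their cost):

    import timeit

    N = 1000

    def mutate_in_place():
        xs = list(range(N))
        for i in range(N):
            xs[i] += 1                          # O(1) update
        return xs

    def copy_per_update():
        xs = tuple(range(N))
        for i in range(N):
            # build a fresh tuple for every update: O(N) per step
            xs = xs[:i] + (xs[i] + 1,) + xs[i + 1:]
        return xs

    print(timeit.timeit(mutate_in_place, number=100))
    print(timeit.timeit(copy_per_update, number=100))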

Now, in fairness to the persistent structures, this is something like a worst case scenario for their use - the algorithm is almost entirely dominated by manipulating relatively small data structures without much opportunity for sharing. It was also single threaded, small, and developed by a single developer in a small amount of time, so the organizational benefits of immutability were not apparent either.

If this was a huge redux state shared across a team of 100 developers and updated only a few times a second, I could probably tell a completely different story. It's easy to imagine (based on firsthand experience!) the kind of havoc that can be inflicted on a project with an erroneous update.

I get the enthusiasm for functional programming and persistent structures, etc., but at the end of the day it's just an engineering approach. One of many, and there's room to do the work to choose between them.


> I get the enthusiasm for functional programming and persistent structures, etc., but at the end of the day it's just an engineering approach. One of many, and there's room to do the work to choose between them.

I don't disagree with this but that's not what you said initially. You said "this is why FP sucks".

I definitely think that if your work is doing tight-loops where every micro-second matters, an imperative language will usually be the correct approach. I don't think anyone is (or would) argue against that point, including most Haskellers.

However, functional languages do simplify things with network/concurrent applications. There's a reason that something like MapReduce is popular. Something like Spark or Onyx is just simpler to do correctly and work with than trying to achieve something equivalent using C or C++; maybe this is just me.

EDIT: My bad, I was responding to the wrong person, this person never said FP sucks. I apologize!


> You said "this is why FP sucks".

No, I did not.


You are absolutely right; I misread some usernames. I apologize for the misattribution.


No worries... I figured it might be something like that.


> It silently transforms an O(1) space algorithm into an O(n) space one, and adds an enormous constant time overhead.

Please tell me what imperative quicksort algorithm has O(1) space. All versions I've seen and could recall use recursion; although each recursive call uses O(1) space, in the worst case of bad pivot element selection each recursive call would only really sort one element resulting in a worst-case O(n) space. Use of randomization would result in a high probability of choosing a good pivot element, but even then you can expect approximately O(log n) space.

Also would like to see why you think the Haskell version has enormous constant time overhead. Where do you think this overhead comes from? If you are comparing to an equivalent program in C++ then sure allocations and stuff, but compared with the typical Haskell list-processing programs I don't see any significantly larger overhead.


I don't know that I would ever do this, since O(lg n) space is normally trivial, but couldn't you use an RNG that's based on, like, the depth and start position of a given "stack frame" of a recursion-less quicksort?

Like to "recurse" (not actually recurse, but pretend) you would increment depth, the update the start position, and calculate the new partition index based on the (depth, start) tuple?

And run that in reverse for going back up the stack.

edit: Hah. This is fun. There's a variant where you do tricks with the elements of the array to get constant space.

https://link.springer.com/chapter/10.1007/BFb0016252

The idea is that you partition the elements, but then instead of storing the bounds of the left and right sides, you switch the element from the start of the right side with the partition element of the "stack frame" above you. This later serves as a flag indicating the end of the right side: since the partition of the parent "stack frame" is greater than all elements on the right, you know you've hit the end of the right side when you see a number larger than the parent "stack frame"'s pivot.


The path towards O(1) space instead of O(log n) space is fraught with technicalities. At the bottom, you realize that no matter what algorithm you use, you always store the array size in lg(n) bits! So it makes more sense to say “less memory” rather than O(1) memory for sorts.


For non recursive quicksorts see code at bottom of this link: https://www.geeksforgeeks.org/iterative-quick-sort/


That link basically just simulates recursion by defining a stack in the function. It recognizes that, among the recursive version's stack variables, only the two indexes need to be stored, so it stores those. It is even less space efficient than the recursive version because it always allocates O(n) space.
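
For what it's worth, the standard fix is to recurse into the smaller partition and loop on the larger one, which bounds the stack at O(log n). A sketch in Python (Lomuto partition, chosen only for brevity):

    def partition(xs, lo, hi):
        # Lomuto partition: last element as pivot
        pivot = xs[hi]
        i = lo
        for j in range(lo, hi):
            if xs[j] < pivot:
                xs[i], xs[j] = xs[j], xs[i]
                i += 1
        xs[i], xs[hi] = xs[hi], xs[i]
        return i

    def quicksort(xs, lo=0, hi=None):
        if hi is None:
            hi = len(xs) - 1
        while lo < hi:
            p = partition(xs, lo, hi)
            # recurse on the smaller side, loop on the larger;
            # the recursed range is at most half, so depth <= log2(n)
            if p - lo < hi - p:
                quicksort(xs, lo, p - 1)
                lo = p + 1
            else:
                quicksort(xs, p + 1, hi)
                hi = p - 1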


I don’t see how this is a great example of how functional programming sucks. It is easy in any programming language to build a slow sorting algorithm. The fact that you need to use different algorithms with immutable data is irrelevant. You need to use different algorithms for pretty much any type of data structure.

Functional programming makes a different set of tradeoffs than imperative programming. You have to work harder to get peak performance, but in many cases it is actually quite performant. Just look at how fast Elm is for web applications, and how that approach allows for a time-traveling debugger and virtually no crashes.


> It silently transforms an O(1) space algorithm into an O(n) space one, and adds an enormous constant time overhead.

It doesn't silently transform O(1) to O(n), the code is explicitly O(n^2) space worst-case. The only 'silent' transformation that could happen here is an optimization to improve performance. Also, I don't know where you got the 'enormous constant time overhead' part.

> Algorithms that are optimal under the mutable data assumption are different than algorithms that are optimal under the constant data assumption.

A normal programmer wouldn't be writing their own sort functions. A normal Haskell programmer would understand mutability in Haskell.

> Performant programming in Haskell requires a much more intimate understanding of the underlying architecture than performant programming in, for instance, C++.

It really depends on what you're writing and how much performance you actually need. Implementing a moderate-complexity parallel data processing algorithm in Haskell may result in a slightly slower but much simpler implementation than C++. An implementation of the same complexity in C++ may be slower than Haskell. Writing performant, parallel, safe code for a moderately complex algorithm in C++ is far from easy.


I remember from olympiads one trick that I thought was brilliant out-of-the-box thinking: shuffle the input before sorting to avoid worst-case-prepared inputs.


> Quick sort comes with a steep penalty. Worst case is O(n^(2)). The reason quicksort is good is because it's in place. Once you throw away the in place aspect of quick sort, it's straight up bad.

That's why you shuffle the list before you sort it :)
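
In the naive functional style from upthread, the equivalent dodge is a random pivot instead of always taking the head. A Python sketch (with a three-way split, so runs of equal elements don't recurse forever):

    import random

    def quicksort(xs):
        if len(xs) <= 1:
            return xs
        p = random.choice(xs)  # random pivot ~ shuffling the input first
        lesser  = [x for x in xs if x < p]
        equal   = [x for x in xs if x == p]
        greater = [x for x in xs if x > p]
        return quicksort(lesser) + equal + quicksort(greater)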


This is why Python uses Timsort: https://en.wikipedia.org/wiki/Timsort


I saw a similar version in erlang (in Joe Armstrong's Programming Erlang) and agree it beautifully illustrates the concept of quicksort. Apparently, Tony Hoare came up with it when he took a class teaching him recursion (i.e. he was a student).

But the flaw you note unfortunately undermines performance... the "quick" in "quicksort".

Thus, IMHO it's a compelling illustration of the profound strengths and weaknesses of fp.


Agreed that this is beautiful purely from the standpoint of how there is essentially no disconnect between the concept of the algorithm and the way it's expressed in code. It's almost the math/algo just translated directly into code (not commenting on the efficiency or other issues that may exist).


I won't comment on how quicksort-y this is, but the combination of pattern matching and recursion is bread-and-butter Haskell. A number of seemingly complex algorithms can be implemented in a similar way.

Having said that, it is elegant and clean and -- taken by itself -- does make Haskell very attractive.


When it comes to Haskell, this function that generates all the fibonacci numbers did it for me:

  fibs = 0 : scanl (+) 1 fibs
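
If it helps to see why that works, it unfolds roughly like this (each element of fibs feeds back into the running sums):

    fibs = 0 : scanl (+) 1 fibs
         = 0 : 1 : scanl (+) (1+0) (tail fibs)
         = 0 : 1 : 1 : scanl (+) (1+1) (drop 2 fibs)
         = 0 : 1 : 1 : 2 : 3 : 5 : 8 : ...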


Another beaut!


The problem with this piece of code is not that it doesn't do it in-place. The problem is, if the input list is (almost) constant, then this code will (almost) certainly take quadratic time to sort it, even if it is shuffled before being fed to this function.

However, I get what you mean. It really is beautiful!

EDIT: grammar.


Though it's not true quicksort, it's possible to explain the idea of the algorithm with code like this in just a few minutes, while the truest implementation in C, with Hoare partitioning, looks really confusing.


This may be an elegant piece of code at first glance except the niceties of quicksort such as cache locality and stability come entirely from the in-placeness of the partition procedure.


Unfortunately it's also incorrect! It discards NaNs, as they are neither < nor >=.


In the example there was no talk of floating-point numbers. The ordered type _has_ a total ordering, as shown by the Ord typeclass restriction.


I would argue that this is better than most alternatives.


Haskell blows your mind whatever ;)


I began coding in IBM/LCSI PC Logo. The first line of code I ever wrote was:

  FD 100
That's the "hello, world" of turtle graphics in Logo. While probably not as beautiful as the several splendid examples posted in this thread, that simple line of code changed my world. I could make stuff happen in an otherwise mostly blank monochrome CRT display. Until then I had seen CRTs in televisions where I had very little control on what I see on the screen. But now, I had control. The turtle became my toy and I could make it draw anything on a 320 x 250 canvas.

The next beautiful piece of code I came across in the same language was:

  REPEAT 360 [FD 1 RT 1]
The code above draws an approximation of a circle by combining 360 short line segments. It showed me how control flow can be used elegantly to express complex ideas in a simple expression. And then I came across this:

  REPEAT 20 [REPEAT 180 [FD 1 RT 2] RT 18]
The above code draws 20 overlapping circles. The output looks like this: https://susam.in/files/blog/dosbox-logo-1.png .

At the impressionable age of 9, reading and writing code like this, and using simple arithmetic, geometry, logic, and code to manipulate a two-dimensional world, had a lasting effect on me. I like to believe that my passion for software engineering, as well as my love for writing code, sharing code, and open source development, are a result of coming across these beautiful code examples early in my life.


The true beauty of Logo comes in the realization that you can add your own words to it, and that all these specialist words you devise are themselves first-class citizens of the language, of syntax and status equal to its generic builtins. Hence:

    TO CIRCLE
    REPEAT 360 [FD 1 RT 1]
    END

    TO FLOWER
    REPEAT 20 [CIRCLE RT 18]
    END
and so on ad infinitum, until you arrive at a complete custom vocabulary that concisely and precisely expresses the particular concepts and behaviors of interest and importance to you. In doing so, you move from thinking algorithmically (which is glorified spaghetti) to thinking compositionally, which is the key to scalability – managing complexity as your needs and ambitions grow.

Whereas Algol-y languages treat user-defined vocabulary as second-class citizens, beneath their own privileged built-ins. Which is a ridiculous inversion of status when you consider which is actually important to the user: precise, powerful, tailored words that describe their particular problem, or the crude, primitive, undifferentiated building blocks that the language dumps out of the box?

The beauty of bottom-up programming (as any Lisp fule kno :) is that it endlessly tests your own understanding of the problem domain: to define effective, productive words you must have some idea of what you’re talking about; you can’t help but learn the foundations of the problem space as you go. There’s a basic humility to this approach; there’s nowhere to hide laziness or ignorance.

Whereas in top-down programming it’s much too easy for highly-educated highly-paid absolute know-nothings to bullshit eternally, constructing great theatrical class architectures; grand mechanical castles in the sky that look all very difficult and impressive to observers while never saying anything relevant or useful.

That key switch from algorithmic to compositional thinking is not a natural conceptual leap for self-learners – it takes a carefully directed prod at a particular point in the learning curve to jump those rails – but it opens up worlds. #PlatosCave


Thanks for writing this. I think you've explained why I've always liked the ability to invoke functions without parentheses. I always thought it simply appealed to my sense of aesthetics, and I've had somewhat of a hard time defending it but now that you mention it, building my own first-class primitives is a big part of it.


Oh man, PC Logo takes me back down memory lane. We were introduced to PC Logo in 4th grade, about 9 years ago. I instantly fell in love with it. The amount of power I had over the computer - ordering it to draw what I want, how I want - was really exciting.

My father had installed Kubuntu on our home computer, which had an amazing suite of educational applications, including KTurtle, which was (almost) the same as PC Logo. I got so addicted to it that my father suggested I create a blog and regularly update it with the drawings and the code for them. And so I did [1]! I used Google's Blogspot (that was the thing back then). Good times :).

1: http://kturtlecommands.blogspot.com/

Edit: Just checked the blog after almost 6 years, it still gets 100+ weekly impressions, haha :D


I clicked on your blog and all the images are gray exclamation marks :(


Wayback Machine to the rescue: https://web.archive.org/web/20121023081714/https://kturtleco...

I recommend adding bookmarklets for Wayback Machine: https://en.wikipedia.org/wiki/Help:Using_the_Wayback_Machine...


Thanks for this! But none of the saved pages seem actually saved or clickable.


Yeah, the images got corrupted or lost in Google's servers somehow some time around 2013. Couldn't get them back :/


Logo was the best!

I still remember my father ‘explaining’ Pythagoras’ theorem to me when I was around 5, to show me how to draw the roof of a house on our hand-soldered Philips Apple II clone.

I’ve been hooked ever since.

The best thing was that 25 years later I opened a Logo emulator again, and when faced with having to clear the screen, from somewhere deep, deep in my muscle memory the right command sprang forward: CLEAR


For all the Logo fans in this thread, I have created a Slack workspace here for us to get together: https://bit.ly/fd100slackinvite

Please do join it even if you don't remember Logo anymore. The intention here is not to discuss Logo but to share the joy of computing that we discovered through Logo and has remained in our lives. I hope to see you all there. :-)

By the way, there is also a #fd100 channel on Freenode IRC, but I am not sure whether most HN users prefer IRC or Slack.


In the same vein:

CALL -151

That was the entry point to the Apple II's "monitor" (i.e. its bare-bones machine-language monitor/assembler). Not the most beautiful, but very evocative; it gave me a sense of power :-)


Oh my, so many memories. Especially the one where our teacher took us to the courtyard and had us play the turtle in real life, with him giving the instructions.


For all you Logo fans out there, you should really play "Duskers". It's Logo meets Aliens.


Cool, we got Logo, BASIC and now Prolog. All one needs to begin programming, eh.


John Carmack's Fast Inverse Square Root: https://en.wikipedia.org/wiki/Fast_inverse_square_root#Overv.... The first time I really and truly felt that people approach problems differently from how I, by default, go about them.
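
For the curious, the same bit trick transcribed into Python, with struct doing the type-punning that the C pointer cast does (a sketch for illustration, not the original code):

    import struct

    def fast_inv_sqrt(x):
        # reinterpret the float's bits as a 32-bit unsigned integer
        i = struct.unpack('<I', struct.pack('<f', x))[0]
        i = 0x5F3759DF - (i >> 1)            # the magic-number first guess
        y = struct.unpack('<f', struct.pack('<I', i))[0]
        return y * (1.5 - 0.5 * x * y * y)   # one Newton-Raphson step

    print(fast_inv_sqrt(4.0))  # ~0.499, vs. the exact 0.5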


IIRC Carmack didn't write that code, as brilliant as he is.


That Wikipedia page even says that he didn’t write it.


The most beautiful piece of code you've ever read contains a magic number with the comments "evil floating point bit level hacking" and "what the fuck?"... you're a madman.


CPUs have much faster (2-5 cycles latency) and much more precise (<0.04% relative error) version in hardware, since 1999: https://www.felixcloutier.com/x86/rsqrtps


I wonder which one used less energy in 1999?


This is still one of my favorites. In a list of magic numbers, sorted by magic-ness, 0x5F3759DF would sit pretty high.


It's the antithesis of maintainable code. But given that one is unlikely to want to change from calculating the inverse square root to the inverse cube root or any other variation....


I disagree strongly. This code is very maintainable: it does not have dependencies, it is trivial to test, it is wickedly short, and with the appropriate comment there is no confusion regarding its purpose. Also, its field of applicability is clear from the context: replace this code with a call to fsqrt if you happen to have a fast hardware implementation of it. It is the most easily maintainable code, ever!


By "the appropriate comment" you mean, and I quote:

"// what the fuck? "


The one maintainability metric, and I think the most important one, is whether it's easy to modify (to make it do something slightly different).

You'd be hard pressed to write an inverse square cube without basically rewriting the whole function. There's nothing that can be reused. The only saving grace is that it is side-effect free, so replacement is trivial, unlike a lot of other code that's not maintainable.


Your metric is only applicable within the context of the problem domain. In 3-d graphics, the inverse square root (1 divided by the square root of x) is an extremely common operation (such as for normalizing vectors), but the inverse cube root pretty much isn't used, so it's safe to assume you will not need to extend it to arbitrary powers.

As the designer of the code, you would understand that the inverse square root is a standalone problem.


The question is how likely are you to need it to do something different, and what are the different things it might need to do? This will depend on the domain, but making things too flexible hampers maintainability too.


>inverse square cube

You mean inverse cube root? I believe the geometry ain’t easily upgradeable this way.


it's a NUMBER - there's zero maintenance required for a number of constant value.


If you enjoy this sort of thing, this book is full of bit twiddling algorithms you never thought possible: https://www.amazon.com/Hackers-Delight-2nd-Henry-Warren/dp/0...


How do I get a job in /this/ area? I can't say I care much for JavaScript programming, but this is awesome.


I came here to post this. I remember back in the day when it was widely discussed. Still impressive to this day.


And it also made the guys a ton of money :) They found a way to take something useful in theory and other practical applications and use it to make something never before seen for the mass market. That one magic number alone is responsible for a good portion of his net worth and, of course, his Ferrari.


Pick a random line from a file/stream, without knowing how many lines there are to choose from, in one pass, without storing the lines that have been seen.

  perl -e 'while(<>){$x=$_ if rand()<=(1/$.)}print $x'
For each line, pick that line as your random line if a random number (0<=n<1) is less than the reciprocal of the number of lines read so far ($.).

It hits my elegant bone. Only one line... rand < 1/1, pick it. Two lines, same as one, but the second line has a 1/2 chance of replacing line one. Third line, same as before, but gets a 1/3 chance of taking the place of whichever line has survived the first two picks. At the end... you have your random line.
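
The same idea spelled out in Python, for anyone allergic to Perl (reading from stdin, like the one-liner):

    import random
    import sys

    pick = None
    for n, line in enumerate(sys.stdin, start=1):
        if random.random() < 1.0 / n:   # keep the n-th line with probability 1/n
            pick = line
    print(pick, end='')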


Same number of keystrokes, but IMHO more idiomatic & readable:

    perl -ne '$x=$_ if rand()<=(1/$.); END { print $x }'


Ha, that's how I originally wrote it, but I thought I should get rid of -n and END{} in an attempt to ward off the "eww, Perl" comments. Sigh.

But at least now I know that it's called Reservoir Sampling. I had wondered how to generalize it to wanting N lines.
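
For anyone else wondering, the k-line generalization (Algorithm R on that Wikipedia page) is only slightly longer; a Python sketch:

    import random
    import sys

    k = 5
    reservoir = []
    for n, line in enumerate(sys.stdin, start=1):
        if n <= k:
            reservoir.append(line)      # fill the reservoir with the first k lines
        else:
            j = random.randrange(n)     # uniform over 0..n-1
            if j < k:
                reservoir[j] = line     # replace an entry with probability k/n
    print(''.join(reservoir), end='')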


Does this algorithm guarantee uniform probability for all lines? Seems like the original ordering of the lines is a factor.


The OP should have mentioned that it's this algorithm:

https://en.wikipedia.org/wiki/Reservoir_sampling

IMO it's a lot clearer if it's not in Perl ...

The pseudocode in Wikipedia also avoids division.


IMO it’s a lot clearer if it’s not in Perl

What isn’t?


If you want to Monte Carlo it, https://gist.github.com/patio11/546c64c927c749c69964b18527fa... ; feel free to adjust the constants upwards and/or do a chi-squared test after doing so.

It didn't sound plausible to me (because I misunderstood what the algorithm was; yay perl), hence quickly testing it, but it seems to work, and after appreciating what the algorithm actually is (see sibling replies), it not only works but it _should_ work. (There is a CS versus engineering joke in here, somewhere.)


Perhaps surprisingly, yes, it does! This is called "reservoir sampling", and a blog post with some more details is available at https://blog.plover.com/prog/weighted-reservoir-sampling.htm....


It does.

It's a special case of reservoir sampling.

https://en.wikipedia.org/wiki/Reservoir_sampling


Something super simple, but that really entertained me when learning Lisp:

     (loop (print (eval (read))))
to have a REPL. (Just reverse the letters, easy enough to remember).

That to me is elegance. It's simple yet powerful, and just 4 words really.


Nice. I immediately had to try the same in PHP. To make a working repl with newlines in the output etc., this is what I came up with:

    while(1) {eval(fgets(STDIN));echo "\n";};
I then tried it on the command line like this:

    php -r 'while(1) {eval(fgets(STDIN));echo "\n";};';
Hurray, it prompted me for input! So I typed:

    for ($i=0;$i<10;$i++) echo $i;
Which got me:

    0123456789
So far so good.

I wondered: Can we now run the repl in the repl? So I typed:

    while(1) {eval(fgets(STDIN));echo "\n";};
It kept prompting me for input. Am I in a REPL in a REPL now? I typed:

    echo "Hello from a REPL in a REPL!";
And the reply was:

    Hello from a REPL in a REPL!
I'm not totally sure if I believe it though.


It is that simple! And it has practical benefits. For example, you can call that code from somewhere specific in your program, which is handy for debugging. If your `eval` function takes a lexical environment, you can also implement a debugger easily.

To test whether your logic is working properly, make your repl print a different prompt when it asks for input. For example, the toplevel repl can print out "> " whereas the inner repl prints out "repl> ".

For bonus points, your repl should exit if you type Ctrl-D. That way you can go from the inner repl to the outer repl, and from there it should exit your program.
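
A toy version of that experiment in Python (eval-only, so expressions rather than statements; the prompt marks the depth and Ctrl-D pops one level):

    import sys

    def repl(prompt='> '):
        while True:
            sys.stdout.write(prompt)
            sys.stdout.flush()
            line = sys.stdin.readline()
            if not line:               # Ctrl-D: fall back to the outer repl
                print()
                return
            try:
                print(eval(line))      # type repl('inner> ') here to nest
            except Exception as e:
                print(e)

    repl()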


Sounds like a Nine Inch Nails song.

"I am in a REPL in a REPL in a REPL,

Running all my code in nested loops..."

https://youtu.be/pVB_DI4ajKA


This is the first example I've seen in this thread that I would consider beautiful. The rest are neat tricks, but quite difficult to understand, so I wouldn't be happy to see them in production code.


I mean... You really shouldn't have an unchecked repl in your production code either.


True but at least you'd understand what's going on :D


Don’t you have the cart before the horse; isn’t the acronym derived from this definition?


Yeah, REPL comes from this definition of "reading, evaluating, printing, looping". But in Lisp you write it in reverse order: "loop, print, evaluate, read".

I'm not saying the acronym came before or after the LISP code :P


  (define eval-expr
    (lambda (expr env)
      (pmatch expr
        [`,x (guard (symbol? x))
          (env x)]
        [`(lambda (,x) ,body)
          (lambda (arg)
            (eval-expr body (lambda (y)
                              (if (eq? x y)
                                  arg
                                  (env y)))))]
        [`(,rator ,rand)
         ((eval-expr rator env)
          (eval-expr rand env))])))
There's an amazing talk about it: https://www.youtube.com/watch?v=OyfBQmvr2Hc


Great. In 90 minutes I may be able to know what you're talking about.


Is it a complete lisp interpreter, written in lisp?


Yes, it uses pmatch behind the scenes to match your expr against the evaluator.

https://github.com/webyrd/quines/blob/master/pmatch.scm

I have to agree, this ranks #1 in my book!


This is exactly what came to mind when seeing the question.

I was just recently (re-)reading an article that goes in depth:

Lisp as the Maxwell’s equations of software

http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equat...


Once you've seen this one, it's difficult not to rank it first :)


Y Combinator?


Any of Norvig's notebooks, http://www.norvig.com/ipython/README.html, especially his Concrete Introduction to Probability https://nbviewer.jupyter.org/url/norvig.com/ipython/Probabil....


Oh my God, his Lisp interpreter in Python: amazing. At the time I was just getting to grips with Python and starting in Lisp; it came at just the right moment in my self-education.

Really really good, although for a lot of people here it might be a little elementary (but then, the best code always feels elementary even when doing something advanced!)


IMHO, the most beautiful Lisp interpreter in a language that is not Lisp is this one, in Prolog.

https://www.metalevel.at/lisprolog/ https://www.metalevel.at/lisprolog/lisprolog.pl


