For me it's a BASIC one-liner which generates mazes
10 PRINT CHR$ (205.5 + RND (1)); : GOTO 10
I found this specific one via slashdot[1], but something similar, which I've never managed to find/replicate, was used to generate mazes on the Atari 800 XL at my school when I was a kid.
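For anyone without a C64 handy, the effect can be sketched in a few lines of Python ('/' and '\' standing in for the PETSCII diagonal characters 205/206; the function name and parameters are my own, and it prints a fixed grid instead of looping forever):

```python
import random

def maze(width=40, height=10, rng=random.random):
    """Rough equivalent of 10 PRINT CHR$(205.5+RND(1)); : GOTO 10"""
    rows = []
    for _ in range(height):
        # CHR$(205.5 + RND(1)) rounds to 205 or 206 with equal probability,
        # i.e. one of the two diagonal line characters.
        rows.append("".join("\\" if rng() < 0.5 else "/" for _ in range(width)))
    return "\n".join(rows)

print(maze())
```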
> One could argue that if the user perceives a X, then it produces a X.
This can be stretched to turn anything into anything else. For example, why can't this comment be a maze? In that case 10 REM; 20 END is my even shorter, more elegant maze program because I see a maze in it.
Some constraints that are typically implied when people say "computer generated mazes":
- They are solvable (have a start and end)
- OR they loop endlessly with no dead ends.
- Walls and spaces consume 1 element on the grid.
- No space on the grid is surrounded by all 4 NWSE walls.
>>One could argue that if the user perceives a X, then it produces a X.
>This can be stretched to turn anything into anything else. For example, why can't this comment be a maze?
Because unlike the one-liner nobody perceives it as a maze, so the "this can be stretched to turn anything into anything else" argument is tenuous...
> This can be stretched to turn anything into anything else. For example, why can't this comment be a maze?
Name one person who perceives your comment as a maze.
It must be perceived as a maze by people for it to apply.
As it stands, I’m not sure if people perceive your comment as a maze.
Maybe theoretically there’s a nonzero number of people who perceive the comment as a maze. But unless your audience is those few people, there wouldn’t be much utility in it. In the OP’s example this maze perception is not only highly widespread but it’s also leveraged for a particular intended effect.
No, a labyrinth is a single twisting route without branches. A maze has branches and thus can have dead ends. Both must have an entry and an exit point (technically mazes may have more than one entry and exit, but most don't).
“Although early Cretan coins occasionally exhibit branching (multicursal) patterns
[…]
both logic and literary descriptions make it clear that the Minotaur was trapped in a complex branching maze
[…]
In English, the term labyrinth is generally synonymous with maze. As a result of the long history of unicursal representation of the mythological Labyrinth, however, many contemporary scholars and enthusiasts observe a distinction between the two.”
Everything is an illusion. \s and /s are just liquid crystal pixels on the screen, filtering some light. However, it's easier to say that if something has a name and you can identify it, then it's probably that thing. If I see a maze, it's a maze :)
Nice to see someone confusing a joke reference to a regional fact (as to things that can seem X but be Y) with transphobia. Has anybody told you that merely referencing trans women in Thailand is opposing them? Projecting much?
First, you come blazing with accusations of transphobia, because of a joking mention of Thailand's ladyboy scene, to challenge the parent's notion that everything is WYSIWYG. Do you even know me? Or do you have a hobby of randomly assuming things about people you don't know?
Second, when I don't take that quietly, you add the above comment about having the gall to defend myself, going for the "This animal is extremely vicious. When attacked, it defends itself" angle...
Perhaps you can go even lower against some stranger on the internet, but you won't be getting any replies from me in this thread.
Good on you for standing for what you say. I personally didn't see any transphobia in your comment, what you said is merely a joke on a fact of that part of the world. The "phobia" was added on by Zhyl
The transphobia isn't related to the 'fact' about Thailand. If anything, the fact that it's a reference point is the reason why the sentiment is distasteful. The implication/joke is:
* A person goes into a bar in Thailand, sees what they think is a girl, and is surprised to find that things are not what they seem.
Broken down:
* Person sees a girl
* It is not a girl
* This is a bad thing
Now, it would be easy to claim (as indeed you and GP seem to be doing) that this isn't a slur on the second party in this encounter. It's a fact of the world. They exist. OP, the person who is saying that 'things are what they seem to be', would probably not want to find themselves in a situation where they were making an assumption that the person is female.
The fact that this is negative can be seen in the 'I hope you don't' words which set the tone and are the basis of most of the sarcasm. I have to say that even if GP had picked a different example, I don't like this kind of comment anyway. It's a bitter, jabby comment that I don't think furthers discussion and overall makes HN a more negative and uncomfortable place to be.
So an assumption I made, which of course is debatable, is that the setting of 'bar' implies a romantic or sexual intent on the first party. GP will probably deny or refute this, but really without this implication there isn't much of a joke to speak of. "I spoke to this girl and it turned out she was a dude! What larks we had!".
GP seems to be making this claim in this comment:
>Nice to see someone confusing a joke reference to a regional fact (as to things that can seem X but be Y) with transphobia. Has anybody told you that merely referencing trans women in Thailand is opposing them? Projecting much?
If it was 'merely referencing' then it wouldn't be a joke. 'Merely referencing' would be to reword the comment as:
"Things are not always what they seem, like the commonly accepted phenomenon of Thai ladyboys".
I mean, not much different, and still in poor taste for a thread that is about beautiful code, but the tone of the comment is completely changed. It's now no longer a jab at the PC, but using a counterexample to illustrate a point.
And why is suggesting that someone would be shocked to discover a ladyboy transphobic anyway? Basically, it boils down to the fact that trans people have to suffer against the stigma that they're somehow 'out to trick' people and that unassuming straight males need to 'watch out'. This comment is a prime example of this. It's literally a warning. Sure, it's mild transphobia, but it's perpetuating this stigma, and I feel like Hacker News generally, and this thread specifically, are not really places for this kind of comment.
I normally wouldn't write this much out about this topic, but GP seemed very upset about their comment being called transphobic. Note, I didn't call them transphobic at all. I didn't actually make any comment about them, their person, their intent or anything. Just that the comment was 'casual transphobia' which I firmly believe that it is.
Let's break down the rest of their comment:
>Well, I am amazed that one can be so rude.
In both cases they use my own words back to me. I'm not sure why. It seems to be a mix of defensiveness and sarcasm. I don't actually think I was being rude - certainly no ruder than they were being to who they were commenting to. I felt that their original comment was snide.
>First, you come blazing with accusations of transphobia
I called the comment transphobic. Addressed above.
>to challenge the parents' notion that everything is WYSIWYG
Not very well.
>Do you even know me? Or you have a hobby of randomly assuming things about people you don't know?
This is incredibly defensive. Especially the second part. It's not enough to call out transphobia, they have to then pile on unfounded accusations of 'randomly assuming things about people' I don't know. I think the irony of this is amazing. They're literally accusing me of doing exactly what they're doing. It's like a bad girlfriend stereotype. (If you want to do some whataboutism, why not pick apart the inherent sexism of that comment? I'll give you a starter for 10 by lampshading it).
>Second, when I don't take that quietly, you add the above comment about having the gall to defend myself, going for the "This animal is extremely vicious. When attacked, it defends itself" angle...
So GP edited this comment a couple of times. Originally this was basically all there was. 'Having the gall' to defend themselves is a laugh. There was no self-reflection, no admission that what they wrote could have been seen as transphobic. Just an immediate jump to a victim paradox. 'Poor old me being attacked by a nasty SJW just for merely mentioning ladyboys. Is it a crime? What did I do wrong????'
>Perhaps you can go even lower against some stranger on the internet, but you won't be getting any replies from me in this thread.
I mean I love this sign off. It's the reason I knew I had gotten to them and the reason I didn't reply. They didn't want to hear it. It was clear they weren't going to learn, and in fact they're telling me that they don't want a response anyway. I love the way that they're so deep in the role of the victim that they think I'm going to go off and harass more strangers. It's a lovely thought, but I don't do that. I just calls em as I sees em.
So anyway, I'm glad that you don't see the transphobia, but I don't think I was wrong to call them out on what I saw as implied transphobia.
1/2^n, I believe (where n is the number of lines after the first one; 1/2^(n-1) if n counts all lines). So halved every time: 100% with 1 line, 50% with 2 lines, 25% with 3 lines, and so on. Width is irrelevant (you can do this in your head comparing w=1 n=2 to w=2 n=2, for example).
Also, I don't believe it's independent of width. I think with constant height, the probability of a completable maze should increase with width, and become almost 1 for very large widths.
Hm, good points, looks like I need more sleep. Looking at the video again, a path is valid if two slashes follow each other, with the same pattern shifted by one on the line below.
But I wasn't treating /\ as an invalid on the first line. Eg thinking of:
/\/\/\/\
\\//////
/\\\\\\\ … and so on
(Using the /\ pattern, paths can go back up and come back down, so this requires a lot more thought than I instinctively put into it)
Oh wow this is a cool loop closure. I wrote a CHIP 8 emulator to learn stuff and my main test rom was this program. I walked through the bytecode very slowly as I learned and debugged and repeatedly wondered how the heck the program was so tiny.
Wow! This is really cool. I just entered it quickly in a C64 emulator and it ran, creating a maze. Not sure whether it would be a solvable one, but it is interesting to note that the paths are really connected!
The paths make sense. I.e., you get uninterrupted lines, surrounded by walls, so it is a real maze and not just a lookalike or partial maze. Oh, and it's a-maze-ing! :-)
Somewhere I have a copy of a BASIC program I typed into my TRS-80 Color Computer as a kid, from an issue of K-Power magazine.
I don't recall if it was one of the full issues, or one of the "insert issues" found within Family Computing of the era.
It was a special kind of "snow fall" generator; IIRC, there were only a couple of versions, one for the Apple IIe and the other for the PCjr - I likely used the PCjr version for the conversion since the BASIC language was nearly identical between the PCjr and my CoCo 2.
What was special was that it managed to seemingly move hundreds of pixels relatively rapidly on the screen; now, this might not seem like such a feat, but it really was considering we're talking about a computer system running an interpreted BASIC program, with a max speed below 1 MHz in standard mode (0.89 MHz - there was a so-called "speedup poke" that "doubled" the speed - POKE 65495,0 if anyone cares).
I don't recall what trick it used - or if it really used anything special at all - but I do remember being impressed by that bit of code.
Now surely someone may come along and point out how this isn't a true quicksort[0] because it doesn't partition the elements in place, but it's more of the simplicity of the logic and its readability that showed me how beautiful functional code can be.
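The snippet being praised is presumably the classic two-line Haskell quicksort; mirrored almost line for line in Python (a sketch, function name mine), it reads:

```python
def quicksort(xs):
    """Filter-style 'quicksort' in the spirit of the Haskell two-liner.

    Not in-place: each call allocates fresh lists for the two partitions,
    which is exactly the objection raised in this thread.
    """
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```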
Quick sort comes with a steep penalty. Worst case is O(n^2). The reason quicksort is good is because it's in place. Once you throw away the in-place aspect of quicksort, it's straight up bad.
This implementation of quicksort is actually a great example of why functional programming sucks. It silently transforms an O(1) space algorithm into an O(n) space one, and adds an enormous constant time overhead.
Algorithms that are optimal under the mutable data assumption are different than algorithms that are optimal under the constant data assumption. So a normal programmer might sort via quicksort in Haskell because it's the optimal sort in imperative languages even though naive Haskell quicksort is objectively worse in every way than naive Haskell mergesort.
Performant programming in Haskell requires a much more intimate understanding of the underlying architecture than performant programming in, for instance, C++. And that's a very low bar.
If having to think a bit harder about your sorting algorithm is why functional programming sucks, can I offer every bullshit concurrency problem I've had in Java as an example of why imperative programming sucks?
Persistent data structures are a bit slower, but that largely becomes a non-issue if you deal with concurrency and avoid a lock that you would have otherwise required in C or Java.
It’s not like the people who wrote Haskell are idiots; most of these data structures end up sharing a lot of data, and a “new” structure is often only the diffs from the previous version. Not to mention that since you have a compile time guarantee that the data isn’t going to change, you effectively avoid any need for defensive copying.
I’m sure you can find some benchmark that proves C++ is faster in the most technical sense, but for network and multithreaded applications, it’s not even close; it’s so much easier to make sure a functional concurrent language is actually correct that the benchmarks become almost irrelevant.
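The structural-sharing point is easiest to see with the simplest persistent structure, a cons list. This Python sketch is just to illustrate the idea (it is not how Haskell implements its structures):

```python
# A persistent singly linked list: "adding" an element shares the entire
# old list as the tail of the new one, rather than copying it.
def cons(head, tail):
    return (head, tail)

old = cons(2, cons(3, None))
new = cons(1, old)        # O(1): the new version is just a diff on the old

# The tail of `new` IS `old` — the same object, not a copy.
assert new[1] is old
```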
I say this as a die hard FP fan: I agree with your parent comment. I think FP is generally good enough or sometimes even quite close. But I don't know how many times I ended up writing C in Haskell or scheme to make an algorithm really fast (once all other algorithms have been tried or discarded).
The quicksort example is a shitty quicksort, because it will be slow as molasses. A proper quicksort in Haskell will degrade you to writing C in Haskell (with the benefits of a large standard library) and then you have lost. C in C will always beat C in Haskell or any other language.
Is it a worthwhile tradeoff? I believe so. The few times I am limited by speed in that way are few and far between, and most often it was because I thought something like O(n^2) would be good enough, which is easily fixable. Sometimes I just need to make optimal code faster. By then I always wish I were using an imperative language instead.
I'm genuinely curious; how often do you actually end up writing sorts in Haskell? I just use the built-in sort function, which is fast enough.
Maybe your use-case is different than mine, but I typically use Haskell (or another functional language) for network applications, which typically don't benefit from tight-loops since the network is almost always the bottleneck anyway. For these kinds of applications, having proper concurrency and IO handling matters a lot more than worrying about whether your loops result in a cache-miss.
I have had a couple of times where the overhead of idiomatic Haskell warranted rewriting it in a mutable imperative way. Sorting was never a problem, but things like functional heaps (leftist or pairing heaps) will never be as efficient as mutable, cache aware ones.
The lower level mucking around when you have to do those kinds of optimizations is, IMO, much more pleasant in imperative languages.
Sure, I won't argue with that. I've never used Haskell for that low-level of work, so I can't speak with any kind of expertise on that; I've never found Haskell's FFI to be terribly hard to use though, so you could conceivably get the best of both worlds.
Also, have you tried Liquid Haskell? It uses refinement types to let you use the "unsafe" and fast versions of functions to guarantee correctness while also increasing performance.
Is the level of my English lacking? The comment you're agreeing with clearly seems to argue that FP sucks as a whole because it's not as fast for tight-looped algorithms, not that it sucks only for them.
> most of these data structures end up sharing a lot of data, and a “new” structure is often only the diffs from the previous version.
Like everything else in life, those data structures come with tradeoffs. Accessing an array element is an indexing operation into a contiguous block of memory. The indexing is built into the instruction set as an addressing mode, the array's memory is contiguous (so better locality of reference for caching), and if you're traversing it predictably, prefetch hardware can bring it into cache before you issue the instructions that do the read.
The same operation in a persistent vector is a function call, several pointer dereferences, and a more complex in-memory structure with more overhead. It works elegantly, but it's a completely different (and likely slower) level of abstraction from a simple in-memory vector.
A few years ago, I switched a toy search program away from ImmutableJS and over to naive copying of raw JS data structures. I don't remember the exact performance delta, but it was something like a 10-100x speedup, for code that was otherwise structurally identical.
In this case, I had a goal to achieve, a timeline on which to achieve it, and it was the ability to switch to simpler, closer-to-the-metal data structures that made it happen. Had I needed more optimization, the most logical approach would have been to remove even more copying and rely even more heavily on mutation. This shouldn't come as a surprise, because at their core, these machines are fundamentally built on mutation, and there's a cost to pretending otherwise.
Now, in fairness to the persistent structures, this is something like a worst case scenario for their use - the algorithm is almost entirely dominated by manipulating relatively small data structures without much opportunity for sharing. It was also single threaded, small, and developed by a single developer in a small amount of time, so the organizational benefits of immutability were not apparent either.
If this was a huge redux state shared across a team of 100 developers and updated only a few times a second, I could probably tell a completely different story. It's easy to imagine (based on firsthand experience!) the kind of havoc that can be inflicted on a project with an erroneous update.
I get the enthusiasm for functional programming and persistent structures, etc., but at the end of the day it's just an engineering approach. One of many, and there's room to do the work to choose between them.
> I get the enthusiasm for functional programming and persistent structures, etc., but at the end of the day it's just an engineering approach. One of many, and there's room to do the work to choose between them.
I don't disagree with this but that's not what you said initially. You said "this is why FP sucks".
I definitely think that if your work is doing tight-loops where every micro-second matters, an imperative language will usually be the correct approach. I don't think anyone is (or would) argue against that point, including most Haskellers.
However, functional languages do simplify things with network/concurrent applications. There's a reason that something like MapReduce is popular. Something like Spark or Onyx is just simpler to do correctly and work with than trying to achieve something equivalent using C or C++; maybe this is just me.
EDIT: My bad, I was responding to the wrong person, this person never said FP sucks. I apologize!
> It silently transforms an O(1) space algorithm into an O(n) space one, and adds an enormous constant time overhead.
Please tell me what imperative quicksort algorithm has O(1) space. All versions I've seen or can recall use recursion; although each recursive call uses O(1) space, in the worst case of bad pivot selection each recursive call would only really sort one element, resulting in worst-case O(n) space. Use of randomization would result in a high probability of choosing a good pivot element, but even then you can expect approximately O(log n) space.
Also would like to see why you think the Haskell version has enormous constant time overhead. Where do you think this overhead comes from? If you are comparing to an equivalent program in C++ then sure allocations and stuff, but compared with the typical Haskell list-processing programs I don't see any significantly larger overhead.
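For what it's worth, the standard answer to the stack-space question is to recurse only into the smaller partition and loop on the larger one, which bounds the stack at O(log n) regardless of pivot luck. A sketch (names mine, using a Lomuto partition):

```python
def partition(a, lo, hi):
    # Lomuto partition: a[hi] is the pivot; returns its final index.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort_inplace(a, lo=0, hi=None):
    # Recurse into the smaller side, iterate on the larger side:
    # each recursive call halves the range at worst, so the stack
    # depth is O(log n) even with consistently bad pivots.
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort_inplace(a, lo, p - 1)
            lo = p + 1
        else:
            quicksort_inplace(a, p + 1, hi)
            hi = p - 1
```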
I don't know that I would ever do this, since O(log n) space is normally trivial, but couldn't you use an RNG that's based on, like, the depth and start position of a given "stack frame" of a recursion-less quicksort?
Like to "recurse" (not actually recurse, but pretend) you would increment depth, the update the start position, and calculate the new partition index based on the (depth, start) tuple?
And run that in reverse for going back up the stack.
edit:
Hah. This is fun. There's a variant where you do tricks with the elements of the array to get constant space.
The idea is that you partition the elements, but then instead of storing the bounds of the left and right sides, you switch the element from the start of the right side with the partition element of the "stack frame" above you. This later serves as a flag indicating the end of the right side: since the pivot of the parent "stack frame" is greater than all elements on the right, you know you've hit the end of the right side when you see a number larger than the parent "stack frame"'s pivot.
The path towards O(1) space instead of O(log n) space is fraught with technicalities. At the bottom, you realize that no matter what algorithm you use, you always store the array size in lg(n) bits! So it makes more sense to say “less memory” rather than O(1) memory for sorts.
That link basically just simulates recursion by defining a stack in the function. It recognizes that in the recursive version among the stack variables only the two indexes need to be stored so it stores them. It is even less space efficient than the recursive version because it always allocates O(n) space.
I don’t see how this is a great example of how functional programming sucks. It is easy in any programming language to build a slow sorting algorithm. The fact that you need to use different algorithms with immutable data is irrelevant. You need to use different algorithms for pretty much any type of data structure.
Functional programming makes a different set of tradeoffs than imperative programming. You have to work harder to get peak performance, but in many cases it is actually quite performant. Just look at how fast Elm is for web applications, and how that approach allows for a time-traveling debugger and virtually no crashes.
> It silently transforms an O(1) space algorithm into an O(n) space one, and adds an enormous constant time overhead.
It doesn't silently transform O(1) to O(n), the code is explicitly O(n^2) space worst-case. The only 'silent' transformation that could happen here is an optimization to improve performance. Also, I don't know where you got the 'enormous constant time overhead' part.
> Algorithms that are optimal under the mutable data assumption are different than algorithms that are optimal under the constant data assumption.
A normal programmer wouldn't be writing their own sort functions. A normal Haskell programmer would understand mutability in Haskell.
> Performant programming in Haskell requires a much more intimate understanding of the underlying architecture than performant programming in, for instance, C++.
It really depends on what you're writing and how much performance you actually need. Implementing a moderate-complexity parallel data processing algorithm in Haskell may result in a slightly slower but much simpler implementation than C++. An implementation of the same complexity in C++ may be slower than Haskell. Writing performant, parallel, safe code for a moderately complex algorithm in C++ is far from easy.
I remember one trick from olympiads that I thought was brilliant out-of-the-box thinking: shuffle the input before sorting to avoid worst-case-prepared inputs.
> Quick sort comes with a steep penalty. Worst case is O(n^2). The reason quicksort is good is because it's in place. Once you throw away the in-place aspect of quicksort, it's straight up bad.
That's why you shuffle the list before you sort it :)
I saw a similar version in erlang (in Joe Armstrong's Programming Erlang) and agree it beautifully illustrates the concept of quicksort. Apparently, Tony Hoare came up with it when he took a class teaching him recursion (i.e. he was a student).
But the flaw you note unfortunately undermines performance... the "quick" in "quicksort".
Thus, IMHO it's a compelling illustration of the profound strengths and weaknesses of fp.
Agreed that this is beautiful purely from the standpoint of how there is essentially no disconnect between the concept of the algorithm and the way it's expressed in code. It's almost the math/algo just translated directly into code (not commenting on the efficiency or other issues that may exist).
I won't comment on how quicksort-y this is, but the combination of pattern matching and recursion is bread-and-butter Haskell. A number of seemingly complex algorithms can be implemented in a similar way.
Having said that, it is elegant and clean and -- taken by itself -- does make Haskell very attractive.
The problem with this piece of code is not that it doesn't do it in-place. The problem is, if the input list is (almost) constant, then this code will (almost) certainly take quadratic time to sort it, even if it is shuffled before being fed to this function.
However, I get what you mean. It really is beautiful!
Though it's not a true quicksort, it's possible to explain the idea of this algorithm with code like this in just a few minutes, while the truest implementation in C with Hoare partitioning looks really confusing.
This may be an elegant piece of code at first glance, except the niceties of quicksort, such as cache locality, come entirely from the in-placeness of the partition procedure.
I began coding in IBM/LCSI PC Logo. The first line of code I ever wrote was:
FD 100
That's the "hello, world" of turtle graphics in Logo. While probably not as beautiful as the several splendid examples posted in this thread, that simple line of code changed my world. I could make stuff happen in an otherwise mostly blank monochrome CRT display. Until then I had seen CRTs in televisions where I had very little control on what I see on the screen. But now, I had control. The turtle became my toy and I could make it draw anything on a 320 x 250 canvas.
The next beautiful piece of code I came across in the same language was:
REPEAT 360 [FD 1 RT 1]
The code above draws an approximation of a circle by combining 360 short line segments. It showed me how control flow can be used elegantly to express complex ideas in a simple expression. And then I came across this:
At the impressionable age of 9, reading and writing code like this, and using simple arithmetic, geometry, logic, and code to manipulate a two-dimensional world had a lasting effect on me. I like to believe that my passion for software engineering as well as my love for writing code, sharing code, and open source development are a result of coming across these beautiful code examples early in my life.
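The 360-segment circle above can be traced without any graphics; a small Python sketch of the same walk, just to check that the path closes and that it approximates a circle of radius 1/(2 sin(π/360)) ≈ 57.3:

```python
import math

# Trace REPEAT 360 [FD 1 RT 1]: take 360 unit steps, turning one
# degree right after each step, and record the positions visited.
x, y, heading = 0.0, 0.0, 0.0
points = []
for _ in range(360):
    x += math.cos(math.radians(heading))
    y += math.sin(math.radians(heading))
    heading -= 1.0  # RT 1: turn right by one degree
    points.append((x, y))

# The path is a regular 360-gon with perimeter 360; after a full 360
# degrees of turning, the turtle is back where it started.
print(math.hypot(x, y))  # distance back to the start: ~0
```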
The true beauty of Logo comes in the realization that you can add your own words to it, and that all these specialist words you devise are themselves first-class citizens of the language, of syntax and status equal to its generic builtins. Hence:
TO CIRCLE
REPEAT 360 [FD 1 RT 1]
END
TO FLOWER
REPEAT 20 [CIRCLE RT 18]
END
and so on ad infinitum, until you arrive at a complete custom vocabulary that concisely and precisely expresses the particular concepts and behaviors of interest and importance to you. In doing so, you move from thinking algorithmically (which is glorified spaghetti) to thinking compositionally, which is the key to scalability – managing complexity as your needs and ambitions grow.
Whereas Algol-y languages treat user-defined vocabulary as second-class citizens, beneath their own privileged built-ins. Which is a ridiculous inversion of status when you consider which is actually important to the user: precise, powerful, tailored words that describe their particular problem, or the crude, primitive, undifferentiated building blocks that the language dumps out of the box?
The beauty of bottom-up programming (as any Lisp fule kno:) is that it endlessly tests your own understanding of the problem domain: to define effective, productive words you must have some idea of what you’re talking about; you can’t help but learn the foundations of the problem space as you go. There’s a basic humility to this approach; there’s nowhere to hide laziness or ignorance.
Whereas in top-down programming it’s much too easy for highly-educated highly-paid absolute know-nothings to bullshit eternally, constructing great theatrical class architectures; grand mechanical castles in the sky that look all very difficult and impressive to observers while never saying anything relevant or useful.
That key switch from algorithmic to compositional thinking is not a natural conceptual leap for self-learners – it takes a carefully directed prod at a particular point in the learning curve to jump those rails – but it opens up worlds. #PlatosCave
Thanks for writing this. I think you've explained why I've always liked the ability to invoke functions without parentheses. I always thought it simply appealed to my sense of aesthetics, and I've had somewhat of a hard time defending it but now that you mention it, building my own first-class primitives is a big part of it.
Oh man, PC Logo takes me back down memory lane. We were introduced to PC Logo in 4th grade, about 9 years ago. I instantly fell in love with it. The amount of power I had over the computer - ordering it to draw what I want, how I want - was really exciting.
My father had installed Kubuntu on our home computer, which had an amazing suite of educational applications, including KTurtle, which was (almost) the same as PC Logo. I got so addicted to it that my father suggested I create a blog and regularly update it with the drawings and the code for them. And so I did [1]! I used Google Blogpost (that was the thing back then). Good times :).
I still remember my father ‘explaining’ Pythagoras’ theorem to me when I was around 5, to show me how to draw the roof of a house on our hand-soldered Philips Apple II clone.
I’ve been hooked ever since.
The best thing was that 25 years later I opened a Logo emulator again and when faced with having to clean the screen somewhere deep, deep from my muscle memory the right command sprang forward: CLEAR
For all the Logo fans in this thread, I have created a Slack workspace here for us to get together: https://bit.ly/fd100slackinvite
Please do join it even if you don't remember Logo anymore. The intention here is not to discuss Logo but to share the joy of computing that we discovered through Logo and has remained in our lives. I hope to see you all there. :-)
By the way, there is also #fd100 channel on Freenode IRC but I am not sure whether most HN users prefer IRC or Slack.
Oh my, so many memories. Especially the one where our teacher took us to the courtyard and had us act out the turtle in real life, with him giving the instructions.
The most beautiful piece of code you've ever read contains a magic number with the comments "evil floating point bit level hacking" and "what the fuck?"... you're a madman.
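For readers who haven't seen it, the routine being discussed is the Quake III fast inverse square root. A Python transliteration of the bit trick (assuming 32-bit IEEE-754 floats; `struct` is used to reinterpret the bits, and the function name is mine):

```python
import struct

def q_rsqrt(x):
    """Approximate 1/sqrt(x) via the famous bit-level hack."""
    # Reinterpret the float's bits as a 32-bit integer
    # ("evil floating point bit level hacking").
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5F3759DF - (i >> 1)                      # what the fuck?
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)             # one Newton-Raphson step
```

After the single Newton step the result is within roughly 0.2% of the true value, which was good enough for normalizing vectors in a game engine.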
it's the antithesis of maintainable code. But given that one is unlikely to want to change from calculating the inverse square root to inverse cube root or any other variation....
I disagree strongly. This code is very maintainable: it does not have dependencies, it is trivial to test, it is wickedly short, and with the appropriate comment there is no confusion regarding its purpose. Also, its field of applicability is clear from the context: replace this code with a call to fsqrt if you happen to have a fast hardware implementation of it. It is the most easily maintainable code, ever!
The one maintainability metric, and I think the most important one, is whether it's easy to modify (to make it do something slightly different).
You'd be hard pressed to write an inverse cube root without basically rewriting the whole function. There's nothing that can be reused. The only saving grace is that it is side-effect free, so replacement is trivial, unlike a lot of other code that's not maintainable.
Your metric is only applicable within the context of the problem domain. In 3-d graphics, the inverse square root (1 divided by the square root of x) is an extremely common operation (such as for normalizing vectors), but the inverse cube root pretty much isn't used, so it's safe to assume you will not need to extend it to arbitrary powers.
As the designer of the code, you would understand that the inverse square root is a standalone problem.
The question is how likely are you to need it to do something different, and what are the different things it might need to do? This will depend on the domain, but making things too flexible hampers maintainability too.
And it also made the guys a ton of money :) They found a way to take something useful in theory and other practical applications and use it to make something never before seen for the mass market. That one magic number alone is responsible for a good portion of his net worth and, of course, his Ferrari.
Pick a random line from a file / stream without knowing how many lines there are to choose from in one pass without storing the lines that have been seen.
perl -e 'while(<>){$x=$_ if rand()<=(1/$.)}print $x'
For each line, pick that line as your random line if a random number (0<=n<1) is less than the reciprocal of the number of lines read so far ($.).
It hits my elegant bone. Only one line... rand < 1/1, pick it. Two lines, same as one, but the second line has a 1/2 chance of replacing line one. The third line, same as before, but gets a 1/3 chance of taking the place of whichever line has survived the first two picks. At the end... you have your random line.
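This is reservoir sampling with k=1. For anyone who'd rather not parse the Perl, here is a rough Python equivalent of the same idea (the function name is mine):

```python
import random

def random_line(lines):
    """Pick one line uniformly at random in a single pass (reservoir sampling, k = 1)."""
    choice = None
    for n, line in enumerate(lines, start=1):
        # Replace the current pick with probability 1/n; for n = 1 this always fires.
        if random.random() < 1.0 / n:
            choice = line
    return choice
```

Each line ends up chosen with probability 1/N overall, which is exactly the uniform distribution we want, without ever knowing N in advance.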
It didn't sound plausible to me (because I misunderstood what the algorithm was; yay perl), hence quickly testing it, but it seems to work, and after appreciating what the algorithm actually is (see sibling replies), it not only works but it _should_ work. (There is a CS versus engineering joke in here, somewhere.)
It is that simple! And it has practical benefits. For example, you can call that code from somewhere specific in your program, which is handy for debugging. If your `eval` function takes a lexical environment, you can also implement a debugger easily.
To test whether your logic is working properly, make your repl print out a different character when it prompts for input. For example, the toplevel repl can print out "> " whereas the inner repl prints out "repl> "
For bonus points, your repl should exit if you type Ctrl-D. That way you can go from the inner repl to the outer repl, and from there it should exit your program.
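A toy sketch of that nested-REPL idea in Python (the names and the line-list interface are mine; a real REPL reads from stdin, where exhausting the input plays the role of Ctrl-D):

```python
def repl(lines, prompt="> "):
    """Toy read-eval-print loop over a list of input lines.

    The line "repl" enters a nested REPL with a longer prompt; "exit"
    (standing in for Ctrl-D) leaves the current level. Returns the
    (prompt, result) pairs so the nesting can be inspected."""
    out = []
    it = iter(lines)

    def loop(p):
        for line in it:
            if line == "repl":
                loop("repl" + p)   # the inner REPL prints a different prompt
            elif line == "exit":
                return             # pop back to the outer REPL
            else:
                out.append((p, eval(line)))

    loop(prompt)
    return out
```

Running `repl(["1+1", "repl", "2*3", "exit", "4-1"])` evaluates under `"> "`, then `"repl> "`, then `"> "` again, matching the prompt trick described above.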
This is the first example I've seen in this thread that I would consider beautiful. The rest are neat tricks, but quite difficult to understand, so I wouldn't be happy to see them in production code.
Yeah, REPL comes from this definition: "read, evaluate, print, loop". But in Lisp you write it in reverse order: "loop, print, evaluate, read".
I'm not saying the acronym came before or after the LISP code :P
Oh my God, his Lisp interpreter in Python: amazing. At the time I was just getting to grips with Python and starting out in Lisp; it came at just the right moment in my self-directed learning.
Really really good, although for a lot of people here it might be a little elementary (but then, the best code always feels elementary even when doing something advanced!)
At the time the code was written, the security ramifications were not quite the same as they are now. Even now, I would suggest that there are times where such a construction would be just fine.
This kind of code is cute, but awful for readability. The author could easily (more easily!) have written the function to be very readable, but went for the cute ultra-compact style instead.
The only reason it's at all readable is that it's solving such a simple problem. Write more complex functionality in that style, and you quickly get a nightmare.
I believe MISRA C outright bans this kind of thing (three assignments with no sequence point), as it has no place in a codebase of real consequence.
I understand that some C programmers pride themselves on being able to read this kind of code (far more impressive than being able to write it), but I see no reason for it in serious software work.
I've got a better code snippet: your exact same code snippet but for a language which will short circuit based on the lvalue of the assignment expression.
Then t would never be able to overflow s (nor even eat its null terminator)
Exploit code is often the most beautiful code I read: it is usually small and takes some dazzling brilliance to push the computer and make it do what it wasn't meant to do.
I can remember the first code that showed how to exploit IFS, race conditions via symlink, the classic "smashing the stack", RTM's worm.
Beauty of code, to me, has nothing to do with the formatting, comments, or documentation, but everything to do with the mind that bent it into place. Most of the beautiful code I have ever seen would be classified as ugly, spaghetti, not production worthy.
I had absolutely no idea until now that RTM co-founded Y Combinator. I remember him being mentioned in Bruce Sterling's The Hacker Crackdown and Clifford Stoll's The Cuckoo's Egg, as well as the occasional Phrack article.
I'm partly impressed by the tooling used; most of the coolness (to me) is the author's self-confidence in his hunch that the SoC _didn't_ have its MMU set up properly, and the way he followed his nose in determining that he was probably right.
I still wonder exactly how much was sunk into the project, before it was possible to determine that the MMU was indeed broken. Heh.
The SQLite source tree, and DRH code in general, are truly pieces of art. Almost every line of code is carefully commented. Despite the complexity of the project, you'll learn a lot of practical concepts, including expression tree generation, bytecode execution, how to test your code, and so forth.
This reminds me of Chuck Moore (inventor of Forth). He designs his own CPUs, using his own CAD package, written in the programming language he invented!
That's my choice too. It's a C-subset, but more than a compiler; it includes a bytecode VM interpreter too. The best part about it is that it's tiny, yet very readable for its size and functionality.
Considering the magnitude of its impact on science, engineering and culture (as well as the tremendous force released), this line from the IGNITION subroutine in the BURN_BABY_BURN module of the Apollo AGC source code does it for me:
For me, the answer is - The code that never existed.
Not to sound cheeky, but eliminating code is a beautiful thing. Less code is easier to maintain, easier to understand, and faster to run. So the less code you need, the better the software will be overall.
This brings me to a very nice regular expression. It is the one being recommended in
RFC3986 "Uniform Resource Identifier (URI): Generic Syntax"
by T. Berners-Lee, R. Fielding and L. Masinter
https://tools.ietf.org/html/rfc3986
to parse a URI. Having seen so many regular expressions that try to match it all, this one tries to match as little as possible while, at the same time, matching any string, because every part of it is optional. It does not define what to match, but what not to match.
The relevant part from the spec:
^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?
 12            3  4          5       6  7        8 9
The numbers in the second line above are only to assist readability;
they indicate the reference points for each subexpression (i.e., each
paired parenthesis). We refer to the value matched for subexpression
<n> as $<n>. For example, matching the above expression to
http://www.ics.uci.edu/pub/ietf/uri/#Related
results in the following subexpression matches:
$1 = http:
$2 = http
$3 = //www.ics.uci.edu
$4 = www.ics.uci.edu
$5 = /pub/ietf/uri/
$6 = <undefined>
$7 = <undefined>
$8 = #Related
$9 = Related
where <undefined> indicates that the component is not present, as is
the case for the query component in the above example. Therefore, we
can determine the value of the five components as
scheme = $2
authority = $4
path = $5
query = $7
fragment = $9
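The same regex drops straight into Python's `re` module; this reproduces the RFC's example match:

```python
import re

# The regular expression from RFC 3986, Appendix B, verbatim.
URI_RE = re.compile(r'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')

m = URI_RE.match("http://www.ics.uci.edu/pub/ietf/uri/#Related")
scheme, authority, path, query, fragment = m.group(2, 4, 5, 7, 9)
```

Note that because every subexpression is optional, the pattern matches any string at all, including the empty string; "no match" is never an outcome.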
One of my favorite RFCs for sure, and it's a beautifully simple and useful regex. It's a shame RFCs seem to have fallen out of fashion, the WHATWG URL spec is an unreadable mess of a monster, not least the parsing section: https://url.spec.whatwg.org/#url-parsing
Deleting code is a beautiful feeling. Python's new walrus operator has gotten a lot of hate recently, but a few days ago I was writing a script and realized I could get rid of a few lines (AND have my meaning be clearer) using the walrus operator. So satisfying.
Reminds me of a saying I read from moderngpu library's wiki page, a highly optimized yet highly readable GPU basic primitive library: "Software is an asset, code a liability. On good days I'd add 200 or 300 lines to this repository. On great days I'd subtract 500."
source: https://github.com/moderngpu/moderngpu/wiki/Introduction
> For me, the answer is - The code that never existed.
> Not to sound cheeky but eliminating code, is a beautiful thing. Less code is easier to maintain, understand, and faster to run. So the less code you can achieve, the better overall the software will be.
Unless you think compressed/ minified code is beautiful, there must be additional factors involved other than minimizing LoC.
To me it's removing lines of code without affecting readability (or even improving it). Like when you don't know about a language feature, so you implement a hacky version, then find out there's already a very elegant standard way to do it, and you get to excise that hideous monstrosity from your code.
That depends on how large the misprediction penalty is and how many times the branch executes. Branch prediction isn't magic. It never makes branches beneficial. The best it can do, with perfect accuracy, is bring the amortized misprediction penalty down to zero. The key is that Duff's Device also relies on branching, and it's a less predictable branch that's more likely to be mispredicted. On the other hand, one BP miss is still cheaper than several hits plus one miss at the end. On the other other hand, there are issues to consider like code size and cache turnover (favors a loop), handling of special cases where larger loads/stores can be used (favors DD), processor extensions, etc. If you're really trying to write an optimal memcpy-like function, you'll probably end up using a hybrid of several approaches - including Duff's Device.
Another thing to note is that loop unrolling can be harmful to performance, as it requires more instruction cache. Beyond branch prediction, modern compilers can also convert loops into vector/SIMD instructions, and other magic.
Duff's device is still useful for creating coroutines though; it's a handy way of yielding, then returning to the yield point.
I don't know Windows scripting, but that looks like something you'd have to put in a file and call for it to work, instead of just running that on the command line. You can do the same thing on Unix systems. Writing
$0|$0
in a file and running it should also work as a fork bomb.
The conceptually same thing will also work in the Bourne shell, but as in the Windows case, only as a script stored in a file; the original, with its inline function definition, is a self-contained one-liner.
:( #sadface
:() #dopefish
{ : #smiling one-toothed vampire
{ :| #blank stare with hat
:| #blank stare without hat
|: #other blank stare without hat
|:& #blank stare with bow
}; #winking face with mustache
};: #winking four-eyed alien face with mustache
Only one of those was a smile, and you generally don't want a vampire smiling at you even if they're down a tooth! ;)
I completely forgot about this. This code is very beautiful too. Norvig doesn't take a brute-force approach; his code is worth studying, and his sudoku solver is just as good.
Because of Norvig's spelling corrector, my measure for good code is whether or not I can understand what it's supposed to be doing, and how it's supposed to work, from reading it in one pass.
This code passes that test. Before reading it I would have never thought that could be a goal— I couldn't imagine it was possible to know what a program does without comments and documentation.
When I heard the word "readable" from others, I understood it as "parsable". From an OO context, I thought readable meant neatly formatted lines that let you say "ah, yes, this is an if statement", "this is a constructor", and so on, but with no idea what the code actually does or means to do.
Norvig's spelling corrector is better read without comments. The names of the functions tell you what you need to know— their purpose, and their implementations tell you exactly what the author thinks they mean.
It's "declarative": "def correction(word):..." means exactly "the correction of a word is ___", "def candidates(word):..." means exactly "the candidate corrections of a word are ___". There's a strong functional/LISP influence here, which I only learned later.
Because this code demonstrated it was possible, my standard for all code is that it should tell you what it does from one reading. Most code (including my own!) doesn't come close, but aiming for that goal has tons of good effects on difficult code.
Haskell allows you to define the full list of prime numbers in just two lines:
module Primes where
primes = 2 : filter isPrime [3..]
isPrime x = all (\y -> mod x y /= 0) $ takeWhile (\y -> y * y <= x) primes
The beauty of this is that it's self-referential. The list `primes` is built by filtering all the natural numbers for prime numbers, using the predicate `isPrime` which itself uses the list `primes`. The only caveat is that we need to encode that 0 and 1 are not prime numbers, but 2 is a prime number, to provide the base cases for the recursion. Furthermore, `isPrime` uses that each non-prime number has at least one prime factor less than or equal to its square root, to ensure that it only needs to look at a finite number of possible prime factors.
If you have GHC in your repo, you can test this by putting it in a file and running `ghci` with the file as the only argument. It will give you a REPL where `primes` and `isPrime` are in scope:
*Primes> isPrime 200
False
*Primes> take 10 primes
[2,3,5,7,11,13,17,19,23,29]
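For comparison, the same self-referential trick can be written with Python generators, where the laziness has to be spelled out by hand (my translation, and quadratically wasteful, since each candidate spins up a fresh prime stream):

```python
from itertools import count, islice, takewhile

def primes():
    """Lazy, self-referential prime stream: each candidate is tested
    against primes up to (and including) its square root."""
    yield 2
    for x in count(3):
        if all(x % p != 0 for p in takewhile(lambda p: p * p <= x, primes())):
            yield x
```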
It is also dirt slow. This is just trial division, done in a lazy fashion.
At least it is better than the one some Haskellers call the sieve of Eratosthenes (it isn't the sieve of Eratosthenes), which is so god-awful that I almost vomit every time I see it.
The real lazy sieve of Eratosthenes is a thing of wonder! There is a paper describing it, called "The Genuine Sieve of Eratosthenes", and it can be found by googling.
sieve :: Integral a => a -> [a]
sieve l =
    sieve' [2..l] []
  where
    sieve' (p:ns) ps =
      sieve' (filter (\x -> rem x p /= 0) ns) (p : ps)
    sieve' [] ps =
      reverse ps
Waiting for a keypress in ZX81 BASIC (maybe it's the opposite of elegant, but I remember it as the first time seeing a kind of easter egg in code):
10 PAUSE 4E4
"PAUSE n
stops computing & displays the picture for n frames of the television (at 50 frames per second, or 60 in America). n can be up to 32767, which gives you just under 11 minutes; if n is any bigger then it means 'PAUSE for ever'.
A pause can always be cut short by pressing a key"[0]
That's way too loaded and bait-like a question. Why not ask for "the most elegant snippet" instead?
There are beautiful code bases like DOOM source or SAT solvers that are like super models beautiful. Complete and hard to improve. Marvel at from a distance.
There is beautiful code in the libraries everybody uses all the time. All the lib* code. Somebody to marry. Discover the good beauty over time.
And then there is the beautiful snippet of clever code at the bar: quick to love, but once you get to know it, much more trouble than it's worth.
This one is kind of contrived, but a work of mad genius. Mandelbrot code that resembles Mandelbrot, in Python 2:
_ = (
255,
lambda
V ,B,c
:c and Y(V*V+B,B, c
-1)if(abs(V)<6)else
( 2+c-4*abs(V)**-0.4)/i
) ;v, x=1500,1000;C=range(v*x
);import struct;P=struct.pack;M,\
j ='<QIIHHHH',open('M.bmp','wb').write
for X in j('BM'+P(M,v*x*3+26,26,12,v,x,1,24))or C:
i ,Y=_;j(P('BBB',*(lambda T:(T*80+T**9
*i-950*T **99,T*70-880*T**18+701*
T **9 ,T*i**(1-T**45*2)))(sum(
[ Y(0,(A%3/3.+X%v+(X/v+
A/3/3.-x/2)/1j)*2.5
/x -2.7,i)**2 for \
A in C
[:9]])
/9)
) )
This is very common in Elm but it blew my mind after years of programming in Python. In Python it is very easy to raise an IndexError by getting an element from a list by index that doesn't exist. eg.
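The snippet being described isn't quoted here, but the contrast is roughly this (the `safe_get` helper is a hypothetical stand-in for Elm's `Array.get`, which returns a `Maybe` instead of throwing):

```python
def safe_get(xs, i):
    """Elm-style indexing: return None instead of raising on a bad index."""
    return xs[i] if 0 <= i < len(xs) else None

xs = [1, 2, 3]
# xs[10]           -> raises IndexError
# safe_get(xs, 10) -> returns None, so the caller must handle the missing case
```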
Someone once posted a C program here on HN, where each line, even the comments, seemed to "line up" in 6- or 8- character blocks with a space in between. It made the whole program look sort of like a table.
I felt it was sort of like poetry. I unfortunately no longer have a link to it, and once looked very hard but couldn't find it.
I would be extremely appreciative of someone else saw it and had a link. If I recall correctly it was a type of interpreter, I remember it having code for parsing.
I appreciate your help, but I feel like its unlikely to be in there since it wasn't really obfuscated - it was meant to be clear in some sense of the word and had a bunch of comments explaining it.
The comments sounded natural, but lined up so each Nth column was a space (or punctuation) all of the way down the code. None of the things I am seeing in that list seem to have quite the same layout.
There was some comment about it like "some people feel code should be beautiful to enjoy debugging" or something along those lines but I don't really remember.
I have been looking for a long time though and really wish I had saved it properly when I first saw it.
public static int bitCount(int i) {
    i = i - ((i >>> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
    i = (i + (i >>> 4)) & 0x0f0f0f0f;
    i = i + (i >>> 8);
    i = i + (i >>> 16);
    return i & 0x3f;
}
It was like magic for me when I first encountered it.
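The trick is a divide-and-conquer bit count: first sum adjacent 1-bit fields into 2-bit fields, then 2-bit fields into 4-bit fields, and so on. A Python rendering with the 32-bit masking made explicit (Python ints are unbounded, so the mask stands in for Java's fixed width):

```python
def bit_count(i):
    """Count set bits in a 32-bit integer, mirroring Java's Integer.bitCount."""
    i &= 0xffffffff
    i = i - ((i >> 1) & 0x55555555)                  # pairs of bits -> 2-bit counts
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333)   # 2-bit counts  -> 4-bit counts
    i = (i + (i >> 4)) & 0x0f0f0f0f                  # 4-bit counts  -> 8-bit counts
    i = i + (i >> 8)                                 # fold byte counts together
    i = i + (i >> 16)
    return i & 0x3f                                  # the final count fits in 6 bits
```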
Came here to say this. I first saw this in the book Hacker's Delight, which is full of stuff that blew 200X-me's mind.
It really opened a door to writing (arb)GPU shaders, where at the time ifs weren't allowed, and to trying to cut branches from PS2/GameCube/Xbox code.
I remember typing this on an ASR-33 and being amazed. I made a computer do that. Then I hit Ctrl-C and learned how to make a paper tape with a "here is" leader, turning the paper punch off, then typing "LIST" and turning it back on before hitting RETURN.
A PDP-11/10 with 16K and 3 20mA TTYs connected, no disks, no storage except paper tape.
I've still got that paper tape somewhere, it's 42 years old now.
Had a KSR-33 wired up to a Synertek single-board 6502 kit... never did get the paper tape working, but man was it weirdly cool to be typing on a 50 pound behemoth to put numbers in the 2kB memory of a 6 ounce PC board.
A small twist on the UI can use a `panel`, which supports a divider option to make grid layout easier. e.g., for 4 items across, you can do this:
view [
    style b: button 50x50 bold
    panel 4 [
        b "1" b "2" b "3" b "+"
        b "4" b "5" b "6" b "-"
        b "7" b "8" b "9" b "*"
        b "0" b "." b "/" b "="
    ]
]
Red is fully open source, and can be compiled. Most code anyway. You can write code that is too dynamic to compile until we go JIT. Red is bootstrapped in Rebol, but will be self hosted next year. It is also its own, entire, toolchain. Red compiles to Red/System, a low (C) level dialect/EDSL, which compiles directly to machine code. You can mix and match the two in apps as well.
Rebol was closed source back when open source was really starting to take off with Perl and Linux. It is seriously cool (I agree), but the interpreter is a little slow.
Red still has a little ways to go, but could be a game changer some day. The project is insanely ambitious, but I'm optimistic.
The main (IMO sad) reason REBOL sits in a weird half-life position of "not quite dead, but..." is that, while REBOL Core is open, the GUI parts (which are crossplatform, and run on Linux) are still closed source.
It's a bit like .NET Core. Great to base a business on... not so great to tinker around with. Arguably the second (tinkering, learning) comes before the first (using what you've learned to build a business).
It's a bit of a tricky question because you can have a beautiful snippet entangled in a mess of a codebase.
I think a much better question would be most beautifully structured codebase.
Code, as a snippet, or line, is constantly struggling between poetic conciseness and verbose clarity... To which I will always pick clarity (for "the next guy"), hence not necessarily elegant.
This allocates a cons cell for Lisp on PDP-10, by using the first instruction in the routine as the head pointer of the free list. The first instruction puts the first word of the free list into A (and then clobbers that word with the original contents of A, initializing the cons cell). The second then swaps the first instruction with the first item in the free list.
The free list is simply a linked list of first instructions, which could be done because the PDP-10 was a 36-bit machine with an 18-bit address space, and the address field of an instruction was the entire right half: the same part of a word that was used as a pointer. So, when interpreted as a pointer, each of these instructions was just the pointer to the next cell.
The beautiful part to me is that, once you ran out of free list, the last word was a call to the garbage collector, which would build a free list of unreferenced cells and return a pointer to the second cell (with the correct opcode field) in A; the second instruction would then finish the cons operation, leaving the address of newly allocated cons cell in A.
I remember seeing Maxwell's equations written in Lisp (I believe it was in Structure and Interpretation of Computer Programs). This made me feel something very profound, although I am still unable to put this feeling into words.
The idea of lazy is to allow deferment of expensive operations until they are actually needed.
The above implementation allows for something like:
const service = lazy(makeService)
Which will ensure makeService is only called once, and the first caller will have to wait for the result of makeService(). All subsequent callers re-use the first result.
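The implementation being discussed isn't quoted above; here is a Python sketch of the same pattern (the names are mine):

```python
def lazy(factory):
    """Defer factory() until first use; all later calls reuse the first result."""
    sentinel = object()
    cached = sentinel

    def get():
        nonlocal cached
        if cached is sentinel:
            cached = factory()
        return cached

    return get
```

Usage mirrors the comment: `service = lazy(make_service)` builds nothing until `service()` is first called, and every subsequent `service()` returns the same object. (A thread-safe version would need a lock around the first call.)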
A good debuggable piece of code explains what it is trying to do by being clearly written and conforming to a consistent and logical model. I don’t think I would ever have invented this type of lexer pattern in Go by myself, but would be very grateful to come across it in a code base I had to fix. Like Duff’s Device mentioned in this thread, it has just the right amount of cleverness without becoming inscrutable.
Also, if you’ll excuse some avuncular pride, my niece and I wrote some code yesterday. She asked me how many times grandma’s clock chimes every day and we ended up with the following Ruby:
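The Ruby itself isn't shown here, but assuming a clock that strikes the hour count once per hour (no half-hour chimes), the arithmetic is, sketched in Python:

```python
# A 12-hour chiming clock strikes 1 + 2 + ... + 12 chimes, twice per day.
chimes_per_day = 2 * sum(range(1, 13))  # 156
```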
What’s quite elegant about Ruby (in and of itself, not in comparison to your examples) is how consistently it is implemented using its own simple set of core features, with very little magic or syntactic sugar. reduce, Enumerable, what the & operator is doing etc.
What’s so challenging about Ruby is how the language can be abused by library vendors to make all kinds of surprising magic and homegrown syntactic sugar.
> What’s quite elegant about Ruby is how consistently it is implemented using its own simple set of core features, with very little magic or syntactic sugar.
Honestly, I don't think this has much to do with Ruby. This a consequence of using reduce or any other higher-order functions, which come from functional programming, and are now available in almost all modern multi-paradigm programming languages (including Python and Haskell).
If you like this kind of construct, you should definitely learn a functional programming language; you'll love it.
When it comes to the weird and wonderful world of esoteric programming languages, the above Brainfuck program is pretty elegant. It's a basic implementation of the echo program: printing back what the user inputs.
The complete language consists of eight single-character commands. The four used in the echo program are:
, – accept one byte of input and store it at the current memory cell
. – output the byte at the current memory cell
[ – if the value of the current memory cell is zero, jump forward to the command after the matching ]
] – if the value of the current memory cell is nonzero, jump back to the command after the matching [
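Those four commands are enough to interpret the echo program. A minimal Python interpreter for just this subset (a sketch; a full interpreter would also need + - < >):

```python
def run_bf(program, stdin):
    """Interpret the , . [ ] subset of Brainfuck; reading past end of input stores 0."""
    tape, ptr = [0] * 30000, 0
    out, pos, pc = [], 0, 0
    # Precompute the matching bracket positions.
    stack, match = [], {}
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ',':
            tape[ptr] = ord(stdin[pos]) if pos < len(stdin) else 0
            pos += 1
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = match[pc]       # jump to the matching ]
        elif c == ']' and tape[ptr] != 0:
            pc = match[pc]       # jump back to the matching [
        pc += 1
    return ''.join(out)
```

The echo program `,[.,]` reads a byte, then loops "print, read" until the input runs dry.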
in vim will go through the whole file, joining multiple consecutive empty lines into one. Not only is it fork-bomb-level cryptic, it also showcases how you can use addresses to do advanced stuff.
Somewhat more readable version:
:vglobal /./ .,/./- join
which is: go to every line that doesn't match (vglobal) /./ (is empty) and join lines from that line (.) to the line before (-) the next line that is not empty (/./ again).
This is a 1811-digit illegal prime number, which when unpacked into binary, becomes a compressed ELF executable which will decrypt DVDs. This was one of many ways used to enable people who had legally purchased a DVD with copy protection to play it on Linux. Here's the story behind finding the prime: https://web.archive.org/web/20070223075434/http://asdf.org/~...
This grammar of the aⁿbⁿ language in Prolog's Definite Clause Grammar
syntactic sugar:
s --> a,b.
s --> a,s,b.
a --> [a].
b --> [b].
s is a non-terminal, a and b are preterminals, [a] and [b] are terminals and
"-->" can be read as "expands to". The syntax is the same as BNF and the
grammar is a Prolog program that is directly executable as both a recogniser
or a generator, depending on instantiation pattern at call time.
How does the grammar work? It must accept, or generate, a string of equal
numbers of a's and b's, but the grammar is not keeping track of the length.
There is nothing to count how many a's have been consumed or produced so far.
How does it know?
s is the start symbol of the grammar. The first production of s, which is also
the terminating condition for the recursion, accepts or produces one a
followed by one b. The second production of s accepts an a, followed by an
s-string, followed by a b.
Suppose we executed the grammar as a generator. In the first step, the output
would be the string S₁ = ab. In the second step, the output would be the
string S₂ = aS₁b. In the nth step, the output would be Sₙ = aSₙ₋₁b.
So the grammar would always add exactly one a at the start, and one b at the
end of its output, recursively.
And it would always generate the same number of a's as b's.
Similar for when it runs as an acceptor.
You can visualise the first couple of steps as follows:
            S
    ,-------|-------.
    |       S       |
    A      / \      B
    |     A   B     |
    a     |   |     b
          a   b
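For readers without a Prolog handy, the two productions translate directly into a recursive recognizer; this Python sketch accepts exactly the strings of n a's followed by n b's, n ≥ 1:

```python
def accepts(s):
    """Recognize a^n b^n, mirroring the rules  s --> a,b  and  s --> a,s,b."""
    if s == "ab":                 # first production: the base case
        return True
    return (len(s) > 2            # second production: strip one a and one b,
            and s[0] == "a"       # then recognize the inner s-string
            and s[-1] == "b"
            and accepts(s[1:-1]))
```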
The code for the "A Mind is born" 256 byte Demo. It produces quite a neat music sequence and visuals. The code doesn't have a code equivalent but the binary version is understandable. This post is 256 bytes. https://linusakesson.net/scene/a-mind-is-born/
One of the first recursive algorithms I learned was the solver for the Tower of Hanoi problem. I was stunned about its simplicity, solving a problem most people couldn't with just a few lines of code:
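The snippet itself isn't quoted above, but the classic solver really is only a few lines in any language; a Python version for reference (the peg names are arbitrary):

```python
def hanoi(n, src="A", dst="C", via="B", moves=None):
    """Return the list of moves that transfers n disks from src to dst."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, via, dst, moves)  # move the top n-1 disks out of the way
        moves.append((src, dst))            # move the largest disk
        hanoi(n - 1, via, dst, src, moves)  # put the n-1 disks back on top
    return moves
```

Solving n disks always takes exactly 2^n - 1 moves, which falls straight out of the recursion.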
I once had to find all lines which did not contain a specific word. As I was absolutely new to regex back then, Stack Overflow came to the rescue - and I still have that link saved to this day - https://stackoverflow.com/a/406408
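For the record, a negative lookahead handles this: assert at the start of the line that ".*word" cannot match, then consume the line. In Python (the word "error" here is just a placeholder):

```python
import re

# Match only lines that do NOT contain the word "error".
pattern = re.compile(r"^(?!.*error).*$")

lines = ["all good", "error: boom", "still fine"]
clean = [line for line in lines if pattern.match(line)]
```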
My first language was QBASIC and much of the first code I read (other than modifying PHP scripts I didn't really understand) was from Pete's QB Site. I'd hardly call it beautiful, but definitely nostalgic and cool.
This Python code was part of the imaging, analysis, and simulation software for radio interferometry that led to the historic first 'image' of a black hole.
The other advantage of the filter one is it can be stuck in the middle of a bunch of other list operations - map(), sort(), slice(), other filters(). Also, you can use findIndex() instead of indexOf() to match any arbitrary predicate, instead of exact equivalence.
All of JS's list processing functions are pretty inefficient, since at the bare minimum each one creates and copies to a new array (as opposed to, say, Rust iterators). You use them when elegance is more important than performance; N^2 is fine when N is eight.
That's always such a dangerous proposition though. Sure _you_ remember it's only fine when N is 8, but then the next guy comes along, or the input constraints change, or future you forgets because it _is_ fairly elegant looking.
I try not to leave grenades laying around too often, myself.
You are the second person to reference quicksort in Haskell in this thread. The other person even mentioned it not being true quicksort. Who knew quicksort in Haskell was such a beloved piece of code!
Back when HotOrNot was hot, there was a site like it for rating code snippets. The top-rated one, I think I remember, was a broken version of this sort function: it had left out the recursions or something (I don't remember exactly), which made it even more 'elegant'.
This wasn't the only case I noticed of highly-rated broken code there. It took me a long time to take this lesson to heart about especially slick-looking code of my own.
Along these lines, does anyone else have the experience of reading some code and thinking “Damn this is good. So clear, so well formatted,” only to realize moments later with a bit of embarrassment that it is your own code?
Curious why the comment about STL algorithms written by Alexander Stepanov was downvoted so hard it died? I’m not a C++ dev... is he hated or is the implementation that bad?
I was wondering too. His series of videos on YouTube from A9 on algorithms is excellent and has some very beautiful code but you have to work through the series to fully appreciate it.
// Dijkstra, Edgar. "Go To Statement Considered Harmful".
// Communications of the ACM. Vol. 11. No. 3 March 1968. pp. 147-148
if (neighboridx == target) {
    goto OUTSIDE;
}
Here’s one I’m really proud of; I crafted it when learning JS a while back. It's a one-liner that flattens an arbitrarily nested array. It's recursive though, so I guess it has its limits. I remember commenting something along the lines of “dark magic functional prog” (that's how I found it a second ago on GH, actually):
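The JS one-liner itself isn't pasted above; the same recursive shape in Python, for flavor:

```python
def flatten(xs):
    """Flatten an arbitrarily nested list in one expression (recursive, so
    depth is bounded by the interpreter's recursion limit)."""
    return [y for x in xs
              for y in (flatten(x) if isinstance(x, list) else [x])]
```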
I always enjoyed reading Charles Petzold code on Microsoft Systems Journal.
When I discovered the Magazine (back un the 90's) I wasn't ready to code for windows nor win32 because I was a broke student with a 1MB 80286; but I am pretty sure reading and rereading that code and articles made me learn more C/C++ than most books or courses I took later.
Mr. Petzold: If we meet someday, the beers are on me!
Sounds trivial compared to the rest listed here, but for me it was the first time I got a for loop to work in Java.
Like, conceptually I knew programming was about getting machines to doStuff, but this was probably the first time I actually had a machine do something I asked of it directly. Well, that and LogoWriter.
In Python, we don't need to supply the optional initializer 1, and reduce resides in the functools namespace (much as mul resides in operator). I assume we're not talking about that peculiar dialect of Python that will no longer be supported a few months from now.
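A minimal Python 3 sketch of the reduce-based factorial being discussed (the function name is mine):

```python
from functools import reduce
from operator import mul

def factorial(n):
    # Fold multiplication over 1..n; the initializer 1 handles n == 0,
    # but can indeed be dropped whenever n >= 1.
    return reduce(mul, range(1, n + 1), 1)

print(factorial(20))  # 2432902008176640000
```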
The first C version looks the best to me; it is extremely clear and concise. For a slight improvement, declare the variable c inside the loop.
Yet, since 20! is the last factorial representable in a u64, there is not much point to these functions, and you should definitely check for n < 21. It would be more elegant to store a lookup table with the 21 possible results.
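A quick sketch (in Python, though the point concerns u64) confirming the claim: 0! through 20! all fit in 64 unsigned bits, while 21! does not.

```python
# Precompute the 21 factorials (0! .. 20!) that fit in an unsigned 64-bit
# integer; this is the lookup table suggested above.
FACTORIALS = [1]
for i in range(1, 21):
    FACTORIALS.append(FACTORIALS[-1] * i)

assert all(f < 2**64 for f in FACTORIALS)       # every entry fits in a u64
assert FACTORIALS[20] * 21 >= 2**64             # 21! would overflow a u64
print(FACTORIALS[20])  # 2432902008176640000
```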
In practice you would want the logarithm of the factorial, which is computed by the lgamma function from the C standard math.h. Is such a thing available in Rust?
Edit: if you use doubles (which is more reasonable for that use case), you can also do that:
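As a sketch of that log-factorial idea in doubles (shown in Python, whose math.lgamma wraps the same C function; `log_factorial` is my name for it):

```python
import math

def log_factorial(n):
    # lgamma(n + 1) == ln(n!) in exact arithmetic; in doubles it stays
    # accurate even for n far beyond where n! itself would overflow.
    return math.lgamma(n + 1)

# For moderate n, exponentiating recovers n! approximately:
print(round(math.exp(log_factorial(10))))  # 3628800
# For large n, the log itself is the usable quantity:
print(log_factorial(1000))  # ~5912.13, though 1000! overflows any float
```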
I've written several pieces of code that I am quite fond of. I guess the most recent was last year. A client asked what was necessary for a totally secure yet extensible blogging system. I ended up putting something together in Vue.js and AirTable that was tight, static, and extensible. That had a sweet feel to it. It was extremely small for having so many features. Fun times.
FYI, there's an effort at the moment to translate XeTeX and dvipdfmx into Rust. Started with c2rust, now we have a test suite checking regressions against the entire arXiv archive. Contributors welcome. https://github.com/crlf0710/tectonic
I'll nominate this pseudocode from a paper of mine currently under review. Treat it like a koan to be meditated upon to gain enlightenment about AGI. The paper introduces an ordinal notation system in which ordinals are notated by computer programs. This pseudocode notates the ordinal omega^2. `, ⌜, and « are progressively higher-level opening quotation marks. ', ⌝, and » are the corresponding closing quotation marks.
LEFT = «X=⌜»;
RIGHT = «⌝; While(True) {Print(X); X=⌜Print(`⌝+X+⌜')⌝}»;
[1] https://developers.slashdot.org/story/12/12/01/1847244/how-d...