Now every other HTTP server implementation should have to explain every byte it is larger and every ms it is slower in terms of "why?", "what for?", and "who gains from this?" ;)
I don't know; I stopped buying these tales about "it needs to be this large because..." a long time ago. It's mostly legacy, abstractions, ease of code maintenance, etc.
This took us all into a world in which even the smallest phone app that reacts to a click with a "beep" takes how much memory? And the software development craft accepts unbelievable inefficiencies. Memory and CPU manufacturers added fuel to this fire for a long time, essentially mis-nurturing devs by optimizing in the background and by creating an environment of seemingly limitless virtual resources. I like how "battery life" enforces, and brings back, some old ideas about efficiency and software craftsmanship.
Humanity is starting to deal with the limits of its planet, and with its own limits; maybe this kind of thinking will bring back some sense of limits in the virtual realm as well? I think we would gain from it.
(Now, if you want to talk about the actual efficacy of HTTP vs. other stateless protocols, that's a different story... But I doubt we're going to go there in a thread about an assembly HTTP server where people are ooh'ing over the achievement. Not that it isn't cool, TBQF.)
Programmers at large really need to stop being pretend philosophers clutching at straws about "why things are so darn bad today!!!" (among other pseudo-philosophical positions). They're mostly terrible at it, and it's almost always just so damn ham-fisted and full of itself.
There is a long (albeit minority) tradition of thinking this way in computing, one that values small, intelligible systems as the best way for humans to work with computers. An example of this philosophy surfaced here recently (https://news.ycombinator.com/item?id=9689800) and there are countless others.
HN itself has a rich history with this model. It was created by a practitioner of it, is written in a language inspired by it, and the smallness and intelligibility of the code are always on our minds when we work on it.
We need more projects like this. They are deeply satisfying systems to build and work with, because they're human-scale in the way that behemoth software is not.
It seems close to Occam's razor.
I like this quote from Chuck Moore (of Forth):
"We need dedicated programmers who commit their careers to single applications. Rewriting them over and over until they’re perfect. Such people will never exist. The world is too full of more interesting things to do. The only hope is to abandon complex software. Embrace simple."
thoughtpolice: "because it shouldn't take very much thought to see why HTTP servers are fairly complex pieces of software"
Dijkstra: "He was the first to make the claim that programming is so inherently complex that (...)" (from Wikipedia)
Down with industry pessimists and "astronauts" that think everything must be complex.
You did overlook the ";)", right? I did not intend to pluck that string of yours -- sorry about that.
I wonder, though, who started this trend of bundling "better-commented code" with "literate programming"?
I appreciate the layout and hyperlinks etc - but this really is just a well-structured assembly program laid out in a way that it won't assemble until after it's been through a pre-processor (I'm talking about the program as presented with html/css etc). It's pretty far from "literate programming".
I suppose one could argue that if you manage to simplify the structure of your program to the point that it reads like prose, you have a "literate program". But it's a strange use of the term. The core idea is to have the code be incidental to the commentary, so that, among other things, one would update the commentary whenever one changes the program.
Laying out the comments in a funny way doesn't quite do that. I first saw this with the "literate" rewrite of CoffeeScript, but perhaps it's older?
Perhaps the difference between "old" literate programming, and this style ("prose programming"?) is similar to the difference between unit testing and TDD, or between TDD and BDD?
I disagree with that. I do think a large part of what makes literate programming powerful is that one can easily sculpt the order of the presentation. It elevates the code and comment to telling a story for humans.
I think the code is not incidental, but is part of the story, much like a fictional story has both descriptive passages and dialogue passages. If one updates the story, both may change, or perhaps just one of them.
My take on literate programming, using markdown:
* minimal client at https://www.npmjs.com/package/litpro
* core library and docs at https://github.com/jostylr/literate-programming-lib
If they were, we wouldn't need variables, loops, blocks etc. Now you could argue that we don't, we should just write code in lambda calculus -- but we don't do that.
Especially for assembler and C-like languages, it can be very nice to single out smaller sections of code. And while for, say, C -- or, I suppose, a macro assembler -- one might be able to inline a lot of such blocks, having to stay at the "block-semantic" level of the host language can make some things pretty hard to communicate well to the human reader.
Ruby might actually be a good candidate for "simple" literate programming, in the sense that one probably could, and perhaps sometimes should, program Ruby much like Smalltalk -- no method/function longer than five lines or so, except in the most exceptional circumstances.
Even then, I think one would find patterns that would make sense to abstract out of the "function level", or "language level".
At the other end of the spectrum is too much magic, just as with any meta-programming technique, such as proper macros.
I had a most peculiar experience trying to write some (mostly procedural) Java with noweb. It was typical intro-programming stuff, basically some very simple data structures/algorithms.
It was a very nice fit for literate programming, but not such a good fit for Java -- while the literate program read nicely and was well structured, the mangled Java code was unwieldy. It turned out I'd ended up generating a Java program that did what I wanted.
Now, that's not really a problem -- we generate unwieldy assembly programs all the time -- but the point is that it took some discipline and thought to write a literate program that was also readable in its tangled form.
Only an issue if people are expected to read/modify it in that state, obviously. And they are, as they'll have to debug it at some point ;-)
> I think the code is not incidental
That was a bit tongue-in-cheek -- I meant more in the sense that comments are incidental to code in most programming languages.
Very nice, thank you for sharing.
On the flip side of the tangled form, I do appreciate the "flattening" of potential functions. Imagine code that uses a lot of functions, scattered about. It can be hard to follow all that, rather a bit like the GOTO of old (better, to be sure, but still hard to follow). With literate programming one can get the conceptual separation of writing in different blocks, but then they get put back together and the flattened version can be read quite easily, one would hope. So I agree that a locally readable tangled form is very important.
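To make the "flattening" concrete, here is a minimal sketch of the tangling step in Python -- a toy expander for noweb-style `<<name>>` chunk references (the chunk contents and names here are made up for illustration, and real noweb handles far more):

```python
import re

def tangle(chunks, root):
    """Recursively expand <<name>> references into one flat program.

    chunks: dict mapping chunk name -> chunk body (a string).
    root:   name of the top-level chunk to start from.
    """
    def expand(name):
        out = []
        for line in chunks[name].splitlines():
            ref = re.match(r"\s*<<(.+)>>\s*$", line)
            if ref:
                # Inline the referenced chunk where it is mentioned.
                out.extend(expand(ref.group(1)))
            else:
                out.append(line)
        return out
    return "\n".join(expand(root))

# Chunks can be written in whatever order suits the exposition;
# tangling reassembles them into the order the machine needs.
chunks = {
    "main": "setup\n<<search loop>>\nreport result",
    "search loop": "while lo < hi:\n    probe midpoint",
}
print(tangle(chunks, "main"))
```

The point is that the conceptual separation lives only in the literate source; the reader of the tangled output sees one flattened, linear program.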
I'd be curious to see your example's code, if you are willing to share it.
To be honest, that was (part of) my initial reaction. But at the same time, playing with noweb, reading a bit of literate C, etc. -- I think some tooling on the literate side can be good. Sometimes one wishes for too much, and realizes too late that the beautiful, unique snowflake of a poem one has wrought is, while splendid in its simplicity, as brittle and hard as ice.
It's just not something one will be entirely comfortable handing over to someone else to modify -- because even if they could figure out what it did, they'd be hard pressed to modify it in any meaningful way. Too dense isn't very good either.
Sometimes I think that literate programming and APL stand at opposite corners of some kind of 2d graph of program complexity/simplicity (not necessarily diagonally opposed). I'm not entirely sure what would be in the other corners.
I came across your blog post, and I think we are in agreement on a lot of things:
But I also think a very real problem with literate programming (to paraphrase Knuth?) is that it demands that people be both good (technical) writers and good programmers. Some are both -- more are one or the other.
I think the best way to get a simple, yet featureful literate programming system, is to combine a simple markup language, like RST/Markdown/etc with a language that lends itself to be pulled apart and rearranged. I think something simple, like Smalltalk might be a good candidate.
So far the only real literate programming I do, is with doctests in python -- and to a lesser extent, ipython notebooks.
I remember I looked at Leo: http://leoeditor.com/ -- and have been toying with moving from vim to Emacs+evil partially for the benefit of org-mode -- but these still feel like very heavy solutions to something that I feel should be a rather simple problem. That feeling might be wrong, though.
Another tool I've come across (which might be abandoned, I'm not sure) is: http://pywebtool.sourceforge.net/
Regardless of the state of the tool itself, the page has some interesting points on literate programming.
I must admit, getting something like proper LaTeX typesetting of the code is nice, though. I'm still looking for a tool that doesn't botch the (La)TeX and HTML+CSS conversion of simple RST/md documents -- it seems everyone tries to be way too clever and bundles the weirdest little themes/templates, leaving one to pick apart everything just to get some straightforward HTML/TeX. Not to mention trying to output HTML from (La)TeX.
At least for html output we now have a few half-decent alternatives thanks to everyone writing a static blog engine. And pandoc. Pandoc is great.
Unfortunately, it looks like the code that was the best (worst) example of what I'm talking about isn't readily available (I have it in some backup archive or other...) -- and my other code is partially restructured based on what I learnt, and, most significantly, in Norwegian.
However, one example, while not breaking the function gap, is a small sub-section from the noweb document that reads (this is Java 1.4):
    public class AllTests extends TestCase
    public AllTests(String name)
    public static void main(String[] args)
    /** Testing the test harness */
    public void testSucceed() throws Exception
So one could say that this gives us injection without an injection framework. Or something. This is basically working around a naming issue in Java, where you need methods to be bound to a class (except for static stuff, etc.) -- and it is an "abstraction-free" (at the language level) way of moving things that go better together closer together.
Note that the blocks pretty much lack comments, as it's a bit gnarly to get Java, noweb, and javadoc to work together -- but that is more of a tooling issue than anything else.
In this case (imagine 10s of test cases) the tests might be a bit under-documented when seen in isolation. Note that each block actually holds a couple of test-cases applicable to whatever class/interface-pair it tests.
But it gets even more interesting when you're sharing a block of code that implements a binary search on an array-like structure... does make it a little too easy to trip over local variables though.
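For contrast, here's roughly what such a shared binary-search block might look like when written once against any indexable structure -- a sketch in Python rather than Java 1.4 (the function name mirrors the stdlib `bisect` idea, and the `key` parameter is my own addition for illustration):

```python
def bisect_left(seq, target, key=lambda x: x):
    """Binary search over anything supporting len() and indexing.

    Returns the first index i with key(seq[i]) >= target.
    """
    lo, hi = 0, len(seq)
    while lo < hi:
        mid = (lo + hi) // 2
        if key(seq[mid]) < target:
            lo = mid + 1
        else:
            hi = mid
    return lo
```

Note that if this body were instead pasted as a literate chunk into several host methods, names like `lo`, `hi`, and `mid` would be spliced into each host's scope -- exactly the "tripping over local variables" hazard mentioned above.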
The full code (in Norwegian) is at: http://folk.uib.no/st05861/inf101/oblig1/ -- it's not very good, but the noweb source and pdf (oblig1.nw, oblig1.pdf) might be of some very limited interest -- and could be contrasted with the java code under src.
This is all from an entry level programming course, the task was given as a pdf-document with some specs and a few stub interfaces -- the resulting pdf essentially interleaves the questions/specs as given, along with the interface stubs -- and builds up answers to each sub-section/point.
The other code I mentioned was more along the lines of implementing binary search for an array-like interface, etc. -- basically abstract data structures.
I like the simplicity of the system calls.
With DynASM (a subproject of LuaJIT), it is possible to target more platforms with this kind of beauty.
The spec isn't terse for a reason.
And I'm not sure about your example: a C pointer would be stored as a number that is then used with "lea" or "mov [pointer], foo". Elaborate.
It's simple, right?
(This is sarcasm, by the way.)
Although I will point out that C has its esoterics also.
Unfortunately, compilers indeed aren't very good at picking optimal instructions in cases where they could take good advantage of instructions such as these. No wonder, though.
Packing and unpacking are often needed in SIMD context. There are a lot of such instructions, including shuffles and permutes. Individually they may sound esoteric, but actually cover a nice number of real life data shuffling needs and are extremely fast.
Fused-multiply-add instructions can double effective FLOPS.
However, just because something is useful does not preclude it from being esoteric.
In particular, there are an astounding number of such miscellaneous and less-often-used instructions in x86 and extensions, and trying to remember which ones exist, and which ones have which limitations, is... a fair feat. Hence, esoteric. Understood only by a few with special knowledge or interest.
You might be missing an intermediate certificate. (Just a heads up.)
If a server does not concatenate the intermediate authority certificates to its site certificate, Firefox and mobile Chrome will warn; desktop Chrome and Safari won't.
It's easy to make something fast when you only implement a very small number of features.