The only person I know to regularly write even large programs by hand is Donald Knuth. This includes the early heroics of his college days [1]. It includes TeX and METAFONT, which he wrote in the 1980s over several months before typing them in. (Corroborated right here on HN by David Fuchs, who worked very closely with Knuth on TeX.) I'm pretty sure from looking at some of his recent programs [2] that he still writes by hand.
Of course as a programmer Knuth is sui generis and has developed his own idiosyncratic style of programming over the decades, different from the rest of the world (consider his idea of Literate Programming). He thinks differently, tends towards monolithic rather than modular, and is always in book-author mode even when writing programs. He has some abilities and disciplines that more than make up for what some others would consider defects. What works for him may not work for others. Still, it would be interesting to hear of more people's experiences writing by hand.
A couple of months ago I wrote a small program in an assembly-like language for fun, and I felt that writing out the first couple of drafts by hand was quite helpful, and prevented me from jumping to an implementation too quickly.
[1]: http://archive.computerhistory.org/resources/text/Knuth_Don_... (Note: "half-true" on the cover page is penciled in by Knuth!)
[2]: http://www.cs.stanford.edu/~knuth/programs.html (I've generated PDF versions: https://github.com/shreevatsa/knuth-literate-programs/tree/1...)
Writing code with pencil and paper was a normal undertaking well into the 80s. The alternative required either unreasonable hours at a console for days on end or an unreasonable number of unproductive hours each day.
I actually thought this was going in a different direction:
This is Iverson discussing APL (predecessor language of J) and its utility/advantages as a notational tool. In particular, something that I find interesting to see here since I just reread (most of) it, is a mention of ad hoc notations introduced as needed by the user. What today we would call domain specific languages.
I would argue that (a big) part of the issue is that proponents often overstate the readability of such code. My usual experience from the J mailing list is that people "read" code by looking at the prose description of what it does, puzzling through how they would do that themselves, and then looking for common structure between the original code and their own reimplementation. It's a good learning experience as far as solving that particular problem is concerned (as there's often more than one way to do it), though it definitely runs counter to the meme that comments and meaningful names are superfluous. The other usual excuse that "the alphabet just isn't the one you're familiar with" isn't really true of mailing list regulars.
I have not seen it mentioned on HN before.
Have I missed the discussion?
It is called ELI.
I find it very sad that bloggers and commenters who are presumably employed as software engineers are continuing to discount the value of these languages simply because someone finds them incomprehensible. Failure of someone to comprehend is not the fault of the language. Persons having no prior knowledge of programming do not make complaints about APL versus more verbose languages. It is all the same to them, all incomprehensible in the beginning. To learn, work is required. And generally, more volume means more to learn. More work.
Here's the hobby thing I'm doing with this. I have an initial investment in Vanguard and contribute an additional X per year. I'm at a 90/10 stocks/bonds ratio and plan to gradually shift to a more conservative ratio between now and retirement. I have a rough distribution of RoR for both stocks and bonds. With all of these constraints in mind, I want to run a few thousand simulations of expected savings at retirement. I want to easily compare these simulations against more pessimistic RoRs.
(No real reason, just seems fun.)
I'm pretty far along with the J version. I've also written a bit of this in Python as a comparison, and from what I can tell so far it's an order-of-magnitude difference. I don't know numpy, though, which probably makes a big difference.
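For anyone curious what the simulation described above looks like in numpy, here's a minimal sketch. Every concrete number (return distributions, contribution, horizon, glide path endpoints) is a placeholder assumption of mine, not something from the original comment:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sims, n_years = 5000, 30                   # placeholder horizon
initial, contribution = 10_000.0, 6_000.0    # placeholder amounts

# Glide path: 90/10 stocks/bonds now, drifting toward 50/50 at retirement.
stock_frac = np.linspace(0.90, 0.50, n_years)

# Rough annual RoR distributions (placeholder means/stdevs).
stock_r = rng.normal(0.07, 0.15, size=(n_sims, n_years))
bond_r = rng.normal(0.03, 0.05, size=(n_sims, n_years))

balance = np.full(n_sims, initial)
for y in range(n_years):
    # Blend that year's stock/bond returns per the glide path,
    # then grow the balance after adding the annual contribution.
    r = stock_frac[y] * stock_r[:, y] + (1 - stock_frac[y]) * bond_r[:, y]
    balance = (balance + contribution) * (1 + r)

print(np.percentile(balance, [10, 50, 90]))  # pessimistic / median / optimistic
```

Comparing against more pessimistic RoRs is then just a matter of lowering the means in the `rng.normal` calls and rerunning.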
In the end, though, the benchmarks don't matter as much to me as the sheer delight in snapping my brain in two.
and at the individual line level it's as easy as in any dynamic language to work your way through a head-scratcher by testing progressively larger chunks, or stepping through with a debugger, until you find where it starts breaking from your expectations.
Unorthodox creative approaches are particularly vulnerable to this kind of thing and deserve much better.
That said, having an APL IDE that had pen input would be a really cool idea.
I stopped reading right after the author explained that line. What benefits does this offer over left-to-right languages? I would consider debugging this a form of punishment.
2 + 2 * 4 / 1 + 1
Easily hand-written?
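For readers unfamiliar with the APL/J convention, the disagreement is that such languages evaluate strictly right-to-left with no operator precedence. A sketch of the difference, treating the expression as plain arithmetic (Python stands in for the conventional left-to-right-with-precedence reading):

```python
import operator

OPS = {'+': operator.add, '*': operator.mul, '/': operator.truediv}

def rtl(tokens):
    """Evaluate a flat operand/operator token list strictly
    right-to-left with no precedence, APL/J style."""
    tokens = list(tokens)
    result = tokens.pop()          # start from the rightmost operand
    while tokens:
        op = tokens.pop()
        left = tokens.pop()
        result = OPS[op](left, result)
    return result

# Conventional precedence: 2 + (2 * 4 / 1) + 1
print(eval("2 + 2 * 4 / 1 + 1"))                 # 11.0

# Right-to-left: 1+1=2, then 4/2=2, then 2*2=4, then 2+4=6
print(rtl([2, '+', 2, '*', 4, '/', 1, '+', 1]))  # 6.0
```

So the same hand-written line denotes two different values depending on which convention the reader brings to it, which is the crux of the debugging complaint above.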