Handwriting Programs in J (hillelwayne.com)
100 points by qznc 9 days ago | 29 comments

The article went in a different direction from what I was expecting, but leaving aside J, the original motivation, the topic of writing programs by hand (I mean with pen/pencil on paper), is interesting to me.

The only person I know to regularly write even large programs by hand is Donald Knuth. This includes the early heroics of his college days ([1][2][3][4]). It includes TeX and METAFONT, which he wrote in the 1980s over several months before typing them in. (Corroborated right here on HN [5] by David Fuchs [6], who worked very closely with Knuth on TeX.) I'm pretty sure from looking at some of his recent programs [7] that he still writes by hand.

Of course as a programmer Knuth is sui generis and has developed his own idiosyncratic style of programming over the decades, different from the rest of the world (consider his idea of Literate Programming). He thinks differently, tends towards monolithic rather than modular, and is always in book-author mode even when writing programs. He has some abilities and disciplines that more than make up for what some others would consider defects. What works for him may not work for others. Still, it would be interesting to hear of more people's experiences writing by hand.

A couple of months ago I wrote a small program in an assembly-like language for fun, and I felt that writing out the first couple of drafts by hand was quite helpful, and prevented me from jumping to an implementation too quickly.

[1]: http://ed-thelen.org/comp-hist/B5000-AlgolRWaychoff.html#7

[2]: http://archive.computerhistory.org/resources/text/Knuth_Don_... (Note "half-true" on the cover page is penciled by Knuth!)

[3]: http://www.metafilter.com/122701/STORIES-ABOUT-THE-B5000-AND...

[4]: http://archive.computerhistory.org/resources/text/Knuth_Don_...

[5]: https://news.ycombinator.com/item?id=10172924

[6]: https://www.tug.org/interviews/fuchs.html

[7]: http://www.cs.stanford.edu/~knuth/programs.html (I've generated PDF versions: https://github.com/shreevatsa/knuth-literate-programs/tree/1...)


Surely you meant "The only person I know to regularly write large programs in the present-day..."

Writing code with pencil and paper was a normal undertaking well into the 80s. The alternative required either unreasonable hours at a console for days on end or an unreasonable number of unproductive hours each day.


Knuth is from a time when professors had secretaries who would type their handwritten manuscripts (maybe he still has one). That's why he thinks it is OK to handwrite all his papers and even code. Normal people (us) may even think that handwriting is easier than typing, but very few of us are willing to spend the time it takes to both handwrite and later type the same document. So, in my opinion, the reason for the choice is pragmatic, given the way we need to prepare software.

Ideally, you should be able to scan in what you wrote using your phone, then have it OCR’ed. With the recent advances in deep neural nets, this should be close to possible.

Maybe this will be easier in the future, but nowadays there is still the issue of scanning, running OCR, and fixing the many errors that the OCR process generates.
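
For the curious, the pipeline today looks roughly like this, using the Tesseract engine via the pytesseract wrapper (the file name is a placeholder, and the error-fixing step is exactly the painful part):

    from PIL import Image
    import pytesseract  # thin wrapper around the Tesseract OCR engine

    # A phone photo or scan of the handwritten page (placeholder path)
    page = Image.open("handwritten_page.png")

    # Tesseract is trained mostly on printed text, so expect plenty
    # of recognition errors on handwriting that need fixing by hand
    print(pytesseract.image_to_string(page))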

Writing small programs with pen/pencil would be useful. I'd then have my phone scan in my work so I could continue writing it on a computer, ideally using some sort of shorthand on paper and then refactoring to longer variable names, etc., once it's typed in.

I spent some time about 10 years back learning J; it was (next to Forth and kin) a great way to learn about tacit programming. However, I found the programs being discussed in the J discussion lists completely impenetrable. I liked it for what I could express, but I hated it because I couldn't understand what others had done (making sharing work very challenging). That said, it is, due to its brevity, rather useful for hand-notated code.

I actually thought this was going in a different direction:

http://www.jsoftware.com/papers/tot.htm

This is Iverson discussing APL (the predecessor language of J) and its utility and advantages as a notational tool. In particular, something I find interesting to see here, since I just reread (most of) it, is a mention of ad hoc notations introduced as needed by the user: what today we would call domain-specific languages.


> However, I found the programs being discussed in the J discussion lists completely impenetrable.

I would argue that (a big) part of the issue is that proponents often overstate the readability of such code. My usual experience from the J mailing list is that people "read" code by looking at the prose description of what it does, puzzling through how they would do that themselves, and then looking for common structure between the original code and their own reimplementation. It's a good learning experience as far as solving that particular problem is concerned (as there's often more than one way to do it), though it definitely runs counter to the meme that comments and meaningful names are superfluous. The other usual excuse that "the alphabet just isn't the one you're familiar with" isn't really true of mailing list regulars.


That's one of the major reasons I think handwriting J is more interesting and understandable than typing it. You can more clearly mark which pieces are conjunctions and what they operate on, you can present the explicit expansion tree as a reference, etc.

I thought the punchline was that he was going to use image-recognition to read the hand-drawn J programs.

If you want that kind of overkill, how about using interprocedural data-flow analysis to try to parse APL?

https://dl.acm.org/citation.cfm?id=805380


There is a third freely available ASCII-character APL in addition to K and J: it is called ELI. I have not seen it mentioned on HN before. Have I missed the discussion?

I find it very sad that bloggers and commenters who are presumably employed as software engineers are continuing to discount the value of these languages simply because someone finds them incomprehensible. Failure of someone to comprehend is not the fault of the language. Persons having no prior knowledge of programming do not make complaints about APL versus more verbose languages. It is all the same to them, all incomprehensible in the beginning. To learn, work is required. And generally, more volume means more to learn. More work.


> The proper way to critique these languages IMO is to do benchmarks. For example, the author of the blog post could state specifically the problem he is trying to solve and then perform the solution in two or more languages, including an APL, comparing both the code size and the timings.

Here's the hobby thing I'm doing with this. I have an initial investment in Vanguard and contribute X additional per year. I'm at a 90/10 stocks/bonds ratio and plan to, between now and retirement, gradually shift to a more conservative ratio. I have a rough distribution of RoR for both stocks and bonds. With all of these constraints in mind, I want to run a few thousand simulations of expected savings at retirement. I want to easily compare these simulations to more pessimistic RoRs.

(No real reason, just seems fun.)

I'm pretty far along with the J. I've also written a bit of this in Python as a comparison, and from what I can tell so far it's an order-of-magnitude difference. I don't know numpy, though, which probably makes a big difference.
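
Roughly, the plain-Python version looks like this; note that all the rates and dollar amounts below are made-up placeholders, not my real numbers:

    import random

    years, n_sims = 30, 5000                 # horizon and simulation count
    initial, contrib = 10_000.0, 5_000.0     # starting balance, added per year

    results = []
    for _ in range(n_sims):
        balance = initial
        for y in range(years):
            # Glide from 90/10 stocks/bonds toward 50/50 over the horizon
            stock_frac = 0.9 - 0.4 * y / (years - 1)
            ror = (stock_frac * random.gauss(0.07, 0.15)
                   + (1 - stock_frac) * random.gauss(0.03, 0.05))
            balance = (balance + contrib) * (1 + ror)
        results.append(balance)

    results.sort()
    # Pessimistic / median / optimistic outcomes
    print(results[n_sims // 10], results[n_sims // 2], results[9 * n_sims // 10])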

In the end, though, the benchmarks don't matter as much to me as the sheer delight in snapping my brain in two.


I don't program in J, but I breathe numpy. I can tell you that numpy broadcasted operations are probably an order of magnitude faster than explicit for loops in python...simply because numpy is of course written in C.
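
For instance, the inner loops of the simulation upthread collapse into whole-array operations (same placeholder numbers as above; only the 30-iteration year loop survives, broadcasting across all 5000 simulations at once):

    import numpy as np

    rng = np.random.default_rng()
    years, n_sims = 30, 5000
    initial, contrib = 10_000.0, 5_000.0

    # One (n_sims, years) matrix of yearly returns per asset class
    stock_ror = rng.normal(0.07, 0.15, size=(n_sims, years))
    bond_ror = rng.normal(0.03, 0.05, size=(n_sims, years))

    # Glide path broadcasts across the simulation axis
    stock_frac = np.linspace(0.9, 0.5, years)
    ror = stock_frac * stock_ror + (1 - stock_frac) * bond_ror

    balance = np.full(n_sims, initial)
    for y in range(years):
        balance = (balance + contrib) * (1 + ror[:, y])

    print(np.percentile(balance, [10, 50, 90]))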

Comments don't look well planned. First they double down on brevity, then they go with 'NB.' for a comment heading instead of a quote or something.

It makes perfect sense. A symbol is way too valuable.

I've been a bit surprised that array-based languages like APL and J haven't taken off with GPU programming. It feels like it would be a natural fit for the domain, but I'm probably missing something.

The Futhark language is the closest thing to "APL for the GPU" I'm aware of.

https://futhark-lang.org/


A while ago, there was also this: https://news.ycombinator.com/item?id=13797797

I posted this on here yesterday... wonder why it didn't go through. Neat article and beautiful language. I just wish it were easier to train my brain to use it and easier to get data in.

“Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?”—Brian Kernighan.

The APL lineage is generally extremely clever in the micro but completely normal levels of clever in the macro.

And at the individual line level, it's as easy as any dynamic language to work your way through a head-scratcher by testing progressively larger chunks, or stepping through with a debugger, until you find where it starts breaking from your expectations.


Next, train a neural network to read your handwriting and spit out the code.

Please don't post drive-by dismissals to HN. We're hoping for better than that here.

Unorthodox creative approaches are particularly vulnerable to this kind of thing and deserve much better.


I thought about doing this, but 1) I have zero experience with neural networks, and 2) I think "convert handwriting to code" would work better with something like APL, which has more distinct symbols.

That said, having an APL IDE that had pen input would be a really cool idea.


> 6 - 1 = 5 > 6

Stopped right after the author explained that line. What benefits does this offer over LtR programming languages? I would consider debugging this as a form of punishment.


Most languages aren't LtR, either! For example, what's this in python?

    2 + 2 * 4 / 1 + 1
In J (where division is spelled %, since / is an adverb), the equivalent 2 + 2 * 4 % 1 + 1 is very obviously 6, as it's 2 + (2 * (4 % (1 + 1))). In Python, it's 11, as there's operator precedence to think about: 2 + (2 * 4 / 1) + 1. Strict right-to-left evaluation is not always preferable, but when you're pipelining a lot of tricky array operations it makes the code a lot easier to understand.
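
To see it concretely, here's a quick check in Python; the last line applies the same right-to-left reading to the article's 6 - 1 = 5 > 6 example:

    # Python's usual precedence: * and / bind tighter than +
    print(2 + 2 * 4 / 1 + 1)        # 11.0

    # J's strict right-to-left order, spelled out with parentheses
    print(2 + (2 * (4 / (1 + 1))))  # 6.0

    # The article's 6 - 1 = 5 > 6, read right-to-left:
    # (5 > 6) is False, (1 == False) is False, 6 - False is 6
    print(6 - (1 == (5 > 6)))       # 6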

For simple arithmetic I find that extremely confusing. I also would have guessed that the equals was an assertion for the addition.

> what benefits

Easily handwritten?



