Ken Iverson originally designed APL as exactly this: an algorithm notation. Only later was it noticed that, since the notation was well specified, it could actually be executed. I urge anyone who finds this interesting to read Ken's "The Description of Finite Sequential Processes", especially its description of the simplex algorithm, which is perhaps the most elegant I've ever seen.
I am actually surprised that kragen did not mention APL more, as I know he is familiar with it, and he is usually pretty good about giving credit where credit is due.
Also, the point about pseudocode being unnecessarily cumbersome is not true IMO, because we don't need to adhere to any language syntax that we find redundant. I've never seen the audience for a whiteboard script complain about a missing colon or a self/this keyword when the human context made it unnecessary.
I'd go so far as to say that any coding in something like Google Docs can omit things like semicolons without my batting an eye. Sure, they are typing. But if it is not their normal coding environment, the aesthetics and finer points of the language are... superfluous at best for the points I generally care about in that kind of correspondence.
I'll state the obvious, though: this isn't going to be useful for the #1 time I need to code on a whiteboard: interviews. Both parties need to be familiar with the syntax in order for it to be useful.
Tangentially related: check out these alternative musical notations: https://www.quora.com/What-are-some-alternatives-to-standard...
It was based on shorthand writing systems, which emphasize a minimal number of strokes and the ability to flow one stroke into the next.
Most visual programming environments have failed. The successful ones still use text for the logic. Scratch, made to teach kids, is a neat hybrid that makes text visual in a Lego-like way, which makes connecting pieces easier.
It is unfair to expect pseudocode notations to be static at this point in history, when new programming languages that look "weird" pop up all the time.
Though I also feel that this adds to the time it takes to create these papers. So by my own reasoning, it might be more expensive than it's worth.
In particular, it saddens me that I can't find any examples of papers written as literate programs. I started dabbling with the idea here, but I think I fundamentally misunderstood some of the ideas when I did. I keep meaning to take the idea up again. I can't deny that it is a slow process, though.
Also, there is no incentive because of the reasons I stated (pedagogy to someone new vs. succinct explanation to expert)
And I fully note that the book I linked to is exceptional. It won an Oscar. And it was basically required reading at Pixar, if I recall.
Papers are very condensed and probably scrutinized more than textbooks are, via the peer review process. Papers have to have proofs for any theorems, whereas textbooks might omit them (especially the less important ones).
Also, to be fair, a textbook might rely on a dozen different papers in the span of a single page (just taking key insights or arguments for the papers).
I would be interested in seeing the publication timelines for many textbooks. Empirically, we should see some relationship between a book's difficulty and the time it took to create. Right?
Edit, since I saw your point about referencing many papers in a single page: don't papers do the same?
I mean, a textbook might take years to write/edit (on and off), but it takes at least a month (on and off) to write/edit a paper.
As it is, the code is often neglected. And rarely shared. Both bad facets.
And a full textbook that is executable is what I was comparing against. That entire book is a program. I'd imagine that was much harder than the typical paper, many of which are surveys that gloss over deficiencies in the supporting code. (Again, I still concede the black-swan nature of that book.)
I'd even go so far as to say that as a notation for interviews, if the candidates were informed ahead of time, I can see it working out better than the current approach for both candidates and interviewers.
At the very least, you could see whether the candidate could learn a new notation, and that alone is a pretty valuable signal.
I could see that as a benefit. Much like how they would have to learn unusual requirements or API's on the job then solve problems with them. A notation like this and basic specs might be a useful filter.
But I'd rather see papers include code snippets.
If a widely used and highly readable language is used (Python, for example), then the meaning of what is going on would be much clearer, and in some cases it could be tested immediately with one's own data, or at least played with in an existing, widely installed, live system.
None of this garbage of arrows and triangles. I'm not Greek; I don't have that kind of stupid keyboard.
Also, it should follow programming practices for variable names.
Rather than A, B, C, D, X and the lot, use forward_momentum or ForwardMomentum. We're not using typewriters any more. The extra few keystrokes to type out words instead of symbols are well worth it.
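As a sketch of the kind of snippet I mean (the names forward_momentum and decay_rate, and the update rule itself, are my own illustration, not from any particular paper):

```python
# Illustrative only: a paper-style update rule written as plain, runnable
# Python with descriptive names instead of single letters.
# forward_momentum and decay_rate are hypothetical names for this sketch.

def update_momentum(forward_momentum, gradient, decay_rate=0.9):
    """Blend the previous momentum with a new gradient reading."""
    return decay_rate * forward_momentum + (1.0 - decay_rate) * gradient

# Anyone can paste this in and play with their own numbers.
momentum = 0.0
for gradient in [1.0, 1.0, 0.5]:
    momentum = update_momentum(momentum, gradient)
print(momentum)
```

Reading `decay_rate * forward_momentum` aloud tells you what the line does; `a*M + b*G` does not.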
The people downvoting these comments don't represent a majority opinion on pseudocode in any way. Quite the opposite.
It's a damn mess.
Much like programming and code review.
The point of this notation is not that ideas can be expressed better on paper, but rather that, in situations where you must use a pencil (such as whiteboarding), code is overly verbose and this notation is less so.
Then again, I do see some benefits of writing this on paper over writing code on a keyboard, like the flexibility of being able to write wherever you want, and write notes or pictures in the margins. But these are minor.
Also I may have misunderstood your comment.
Edit: actually, upon rereading: I guess you were referring to this line in the opening paragraph
>But our programming languages are very poorly suited for handwriting: they waste the spatial arrangement, ideographic symbols, text size variation, and long lines (e.g. horizontal and vertical rules, boxes, and arrows) that we can easily draw by hand, instead using textual identifiers and nested grammatical structure that can easily be rendered in ASCII (and, in the case of older languages like FORTRAN and COBOL, EBCDIC and FIELDATA too.)
which is an interesting thing to think about. So I guess your criticism of modern computers for representing ideas is valid. But I suppose the advantage of a computer's more restricted input (keyboard into a 1D string of ASCII text) is the structure it inherently has, which paper and pencil do not. So there are tradeoffs.
Personally, I find ML to be much of what concise and unambiguous hand notation should be.
Probably why I like it.
This misses both of those advantages.
> The purpose of using pseudocode is that it is easier for people to understand than conventional programming language code, and [...].
So in common usage, "pseudocode" is supposed to be easily understood. I imagine that isn't one of the goals of this new notation, though. It looks more useful for internal use (e.g., within your team) than external use (e.g., a whiteboard interview), since it's basically unreadable if you haven't seen it before.
All notation has a learning curve, PHBs won't just be able to understand them without putting in some effort.
As far as the learning investment goes, that's true for any precise notation, modeling tool, or formal language. Even English takes Americans years to learn. So pointing out that pseudocode has a learning curve, like everything else, doesn't invalidate my claim that it reduces detail to aid understanding by many people.
Note: I'm also not arguing that group-specific pseudocodes can't be developed. Only that the normal kind is meant to be accessible and usually is.
So, what is "the notation that everyone can read?" Since you've claimed that it exists and is well defined, it must be describable. Surely, it isn't because this notation is different and everyone doesn't like different things?
I guess you just don't like new things, it makes sense now.
Now, the OP designing a new method for perceived benefits is fine. I even encourage experimentation. That's a different topic, and one I don't recall responding to. I was countering misinformation about common pseudocode. Discussing the OP's notation would require me to actually use it on a number of problems, along with a diverse set of other people, and do a meta-analysis of the reported pros and cons. Obviously I haven't done that... ;)
Does that seem on the right track? I don't mean to put words in your mouth.
1. Private exploration as you said.
2. Publication of algorithms in a form most will understand and be able to duplicate.
There's an example of 2 on the front page right now:
In the PDF, they describe their algorithms in pseudocode that combines the common, BASIC/ALGOL-like text with some common notation from math (e.g., division). I immediately understood the algorithms well enough to implement them myself in about any language, without someone telling me what the notation meant. That's a common effect of published pseudocode, since it's intended to be widely understood.
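To be concrete, the common style I mean looks something like this (a made-up example for illustration, not one of the algorithms from that PDF):

```
function MEAN(X, n):
    s ← 0
    for i ← 1 to n:
        s ← s + X[i]
    return s / n
```

ALGOL-like keywords plus a little math notation; most programmers can translate it into their language of choice on sight.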
This new notation I'd have to think about and practice with. Just using it in a paper under the label "pseudocode" would cause confusion. It's less intuitive in a world of widely deployed, ALGOL-like notations. Maybe it has benefits worth sacrificing that wide usability, but anyone switching to it had better be OK with the tradeoff.
I get the impression that the work here is more about thinking and iterating on paper, not just communicating on paper, which, IMHO, is a very niche use case that most people aren't going to run into. We don't do anything with our C-style pseudocode but present.