
Was Knuth Really Framed? - DSpinellis
https://www.spinellis.gr/blog/20200225/
======
pdpi
Somebody, somewhere, wrote the Text::LevenshteinXS module. Somebody,
somewhere, had to write sed, awk, head, sort, tr...

It's all fine and dandy to say "look at these awesome tools that make tasks
like these trivially easy. See how powerful Unix is". But this fails to
consider that somebody, somewhere, has to be a tool _writer_ , not just a tool
user. Knuth's code was a tool writer's code, exemplifying a technique
(Literate Programming) that is aimed at long form code writers in general.

As with others before, the author fails to grasp that this is an apples to
oranges comparison.

~~~
crystaldev
Just like apples and oranges can be compared as foods (and fruits), bespoke
vs. reused solutions can be compared as software. The reused parts are of
higher quality than their corresponding bespoke implementations _and_ can be
composed to accomplish the same task as well as many others. It's a powerful
lesson.
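As a concrete illustration, here is a sketch in the spirit of McIlroy's famous review pipeline (not his exact command, and the sample text is made up): six standard tools composed into a word-frequency counter.

```shell
# Print the k most common words of the sample text.
k=2
printf 'The cat and the dog and the bird\n' |
tr -cs 'A-Za-z' '\n' |   # one word per line
tr 'A-Z' 'a-z' |         # normalize case
sort |                   # group identical words
uniq -c |                # count each group
sort -rn |               # most frequent first
head -n "$k"             # top two: "the" (3), "and" (2)
```

Each stage is independently testable and replaceable, which is the composability lesson in miniature.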

~~~
JadeNB
> The reused parts are of higher quality than their corresponding bespoke
> implementations ….

I think that stating this as an inevitable and uncontested fact might be
pushing the point.

~~~
crystaldev
Most of the components are multi-platform, partly or fully POSIX-standardized,
battle-tested, blazing fast, etc. It's some of the most widely-used and
arguably greatest software ever written.

~~~
nyberg
And some of it is absolutely terrible, with several tools overlapping in use
cases and unclear performance characteristics. Even worse, a combination of
tools can perform worse than a single program, which may not matter on small
inputs but has a huge impact when you crank up the size. Then there's the
issue of understanding why performance suffers, and in which cases it's better
to roll your own custom solution that better fits the problem.

Reuse saves work when your problem maps perfectly onto existing tools; when it
doesn't, what may seem like a minor difference will propagate down your program
and end in countless issues from a codebase you don't know.

------
svat
I'll avoid repeating my comments from the previous thread
([https://news.ycombinator.com/item?id=22406070](https://news.ycombinator.com/item?id=22406070))
and say just the ones that I think didn't get enough attention:

1. I can't speak for Hillel Wayne, who used the word "framed", but I didn't
understand his newsletter post as _Bentley_ having "framed" Knuth -- I
understood his post as pointing out that in the popular imagination/folklore,
the story had mutated over the course of years from the original setting (a
program that Knuth was asked to write in WEB specifically as that was the
point, and a review of that program by McIlroy evangelizing the then little-
known Unix philosophy) to a "framing" where two people were competing to solve
the same problem with the same available resources, and one of them did it in
a "worse" way. (Also left this comment on the blog post above.)

2. Here's a comment on the previous thread from someone who says they read
the column when it was posted and whose reaction was one of cringing -- so at
least at that time it probably wasn't perceived that way:
[https://news.ycombinator.com/item?id=22418721](https://news.ycombinator.com/item?id=22418721)

3. Much of the space taken by the literate program is for explaining a very
interesting data structure that we could call a hash-packed trie (AFAICT,
devised on that occasion for that problem -- a small twist on the packed tries
used in TeX for hyphenation, and described in the thesis of Knuth's student
Frank Liang). One cannot obtain this data structure by combining other
programs, only by combining other ideas. (I mentioned this in the previous
thread as well:
[https://news.ycombinator.com/item?id=22413391](https://news.ycombinator.com/item?id=22413391))

4. So as far as evaluating literate programming goes, the real question (and
the answer is not obvious to me!) is: if you're going to write a program that
uses a custom data structure (like this), _how_ should you organize that
program? Should you write it as Knuth does, or as a conventional program (like
I tried to do with my translation:
[https://codegolf.stackexchange.com/a/197870](https://codegolf.stackexchange.com/a/197870))?
And as for estimating the value of a new data structure in the first place: as
of now (at that question), solutions based on a trie are about 200 times
faster than the shell pipeline, on a large testcase. (The hash-packed trie,
which Knuth calls "slow" in his program, is not so bad either, and it does
economize on memory a bit.)

~~~
mikekchar
I have my own answer for #4 (which, to me, is the only interesting question
about this affair). I've actually done a fair amount of literate programming
on my own, although I only have a couple of examples that one can look at
these days. Here is a small library providing a fluent matcher system for
Jasmine and React:
[https://github.com/ygt-mikekchar/react-maybe-matchers/blob/master/src/ReactMaybeMatchers.litcoffee](https://github.com/ygt-mikekchar/react-maybe-matchers/blob/master/src/ReactMaybeMatchers.litcoffee)

You will see that I've included yet another monad tutorial :-) I don't link to
this as a way of saying that I think this is a good example of LP. It's not
really. I was experimenting quite a lot. However, I can tell you one thing
about it: it is practically impossible to refactor.

As a result, I decided that LP is not particularly good for working on living
programs. Or, at least, it is not conducive to my style of programming, which
encourages refactoring. Nothing I write is "frozen". It is _all_ in flux and
so the value of documentation is transient. Additionally, it is rare that a
programmer wishes to read code from the top to the bottom. If they ever do,
it's usually the first time they have read the code. After that, they will
want fast access to the parts that they want to modify. Sorting out the code
from the text becomes difficult. If you make a change, you also have to review
_all_ of the text to make sure that you haven't clobbered something that is
referenced elsewhere. It will work well for something short, but it's not
great for large projects.

I still do LP style things. Here is an unfinished blog post on ideas about OO:
[https://github.com/ygt-mikekchar/oojs/blob/master/oojs.org](https://github.com/ygt-mikekchar/oojs/blob/master/oojs.org) However, to contrast with this, I would
invite you to look at
[https://gitlab.com/mikekchar/testy](https://gitlab.com/mikekchar/testy) where
I put some of those ideas into action (especially see the design.md and
coding_standard.md documents to show what constraints I chose in this
experiment). Crucially, after this code had run its course, I'd changed a lot
of my ideas and never went back to my blog post. For me, the actual code is
far more instructive than the blog post ever was. Of course, I'm the author,
so I understand what I was trying to say and I only need a quick peek at the
code to remind me what I was thinking.

For me, that's the dilemma of LP: once you know what you want to know, the
text is in the way. New people will benefit from the Gentle Introduction
(sorry, couldn't resist the TeX reference...), but 99% of the time nobody will
benefit from it. Is the other 1% of the time worth it? It may be, actually,
but boy is it hard to convince yourself of that!

~~~
zimpenfish
> Here is a small library for fluent matcher system

It doesn't seem like Literate CoffeeScript lets you reorder the code blocks
which, as I understand things, is the fundamental part of Literate Programming
- code follows documentation, not the other way around.

(Although, to be fair, 99% of things I've seen labelled as LP aren't either.
There's only WEB, CWEB, and noweb I can think of that'd count just now.)

~~~
mikekchar
Yes, you are absolutely correct, and it's definitely a big problem from an LP
perspective. Babel gives me a bit more leeway, though. But from the
perspective of "is this worth it", not having it reordered actually makes it
_easier_ to work with, in my opinion. I think if you had tools that allowed
you to work with the generated code and be able to jump back and forth to the
sources it might be OK.

------
shaggyfrog
> perl -a -MText::LevenshteinXS -e 'print distance(@F), "\n"'

Step 2: draw the rest of the owl
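To be fair, the rest of this particular owl is small enough to sketch. Here is a hedged stand-in for systems without Text::LevenshteinXS -- the textbook dynamic-programming algorithm in plain awk, not the XS module's actual code:

```shell
# Levenshtein distance via the classic DP table, in POSIX awk.
# Input: one "word1 word2" pair per line; output: the edit distance.
printf 'kitten sitting\n' |
awk '{
  m = length($1); n = length($2)
  for (i = 0; i <= m; i++) d[i, 0] = i        # i deletions
  for (j = 0; j <= n; j++) d[0, j] = j        # j insertions
  for (i = 1; i <= m; i++)
    for (j = 1; j <= n; j++) {
      cost = substr($1, i, 1) == substr($2, j, 1) ? 0 : 1
      best = d[i-1, j] + 1                            # delete
      if (d[i, j-1] + 1 < best)    best = d[i, j-1] + 1       # insert
      if (d[i-1, j-1] + cost < best) best = d[i-1, j-1] + cost  # replace
      d[i, j] = best
    }
  print d[m, n]
}'
# prints 3 (kitten -> sitten -> sittin -> sitting)
```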

~~~
jiofih
The rebuttal itself is a lot more comical than any joke: literally countering
Knuth’s “code should read and flow like prose for full understanding” with “f
you, look at what I can do with my 20 years of hacking and thousands of lines
of code I’ve never seen, written by someone else”.

------
coolreader18
I really like the _idea_ of the Unix model as well, but you're not going to be
able to use it effectively to write an actual application. If you're writing a
word processor and you need to find the levenshtein distance between the most
frequent word pairs (maybe some measure of how alliterative/consonant/assonant
your document is?) then you're probably not going to be building the word
processor using the Unix model, and even if you are (the closest you can get
is probably using Tcl/Tk?) then it's still best to write out what you're doing
as clearly as possible. Note that it took me about 5 minutes to figure out what
the shell pipeline presented in the article actually does, and multiple times
my reasoning about it led me to think "wait, does this actually do what it's
supposed to do?"

~~~
iamapipebomb
acme could be seen as a word processor in the unix model. It's easy to pipe
text to composable commands.

I can put the Levenshtein perl one-liner in the blue bar and middle click it
to get the distance of two highlighted words.

~~~
pjmlp
ACME is a word processor in the Oberon System model, which has very little to
do with the UNIX model and plenty with Xerox GUI-based REPL environments.

------
samatman
Thank you for adding to the considerable weight of literature completely
missing the point of Dr. Knuth's literate software, which is:

tr, sort, uniq, and sed, should all be literate programs.

They would be easier to read, reason about, modify, and extend. At this point,
tooling for literate programming lags considerably compared to illiterate
programming, and that's entirely because of the determination to miss the
point exhibited here.

Too bad really.

~~~
shp0ngle
As I wrote in other thread - try to read the TeX source

[https://mirror-hk.koddos.net/CTAN/systems/knuth/dist/tex/tex.web](https://mirror-hk.koddos.net/CTAN/systems/knuth/dist/tex/tex.web)

and compare it with coreutils source code

[https://github.com/coreutils/coreutils/blob/master/src/tr.c](https://github.com/coreutils/coreutils/blob/master/src/tr.c)

Which is easier to read and understand?

~~~
samatman
There are several orders of magnitude in complexity between tr and TeX, making
such a comparison fruitless.

That said, have a gander here:
[http://brokestream.com/tex.pdf](http://brokestream.com/tex.pdf)

Not nearly as good as the hardcover, which has a proper table of contents and
index.

The key is to imagine 40 years of progress along these lines. I can't imagine
our default target would be paper.

~~~
kevin_thibedeau
The PDF still doesn't help much. The expository style of breaking out inner
code blocks from their call sites harms the ability to understand what's
happening. It's nearly impossible to follow in the raw source. Hyperlinks
don't improve matters much and the PDF rendering doesn't have rational layout
for details like numeric tables.

Try unraveling the numeric code in Metafont:

[http://tug.ctan.org/tex-archive/systems/knuth/dist/mf/mf.web](http://tug.ctan.org/tex-archive/systems/knuth/dist/mf/mf.web)

[http://www.tug.org/texlive//devsrc/Master/texmf-dist/doc/generic/knuth/mf/mf.pdf](http://www.tug.org/texlive//devsrc/Master/texmf-dist/doc/generic/knuth/mf/mf.pdf)

------
neves
BTW, Spinellis's MOOC about Unix tools must be great:
[https://www.edx.org/course/unix-tools-data-software-and-production-engineering](https://www.edx.org/course/unix-tools-data-software-and-production-engineering)

I'm really a fan of Spinellis, his books are excellent:
[https://www.spinellis.gr/pubs/index.html#book](https://www.spinellis.gr/pubs/index.html#book)

The Effective Debugging book is a must-read for any software developer. The
Elements of Computing Style is useful for any knowledge worker. Code Reading
is probably the only important book on the subject.

The only problem with his books is that they are rather expensive, especially
for developers who don't earn in dollars :-(

~~~
DSpinellis
That is indeed a pity. I try to compensate by making as much material as
possible openly available, such as through the MOOC you mentioned (I've been
working for five years on it), through my blog, and through open source
software and content.

------
virtue3
He, self-admittedly, slightly cheated by invoking Perl to do the "hard" bit.

TFA is not so much about Knuth but mostly about unix being highly capable.

------
benchaney
Yes, he was. The modification to the problem being discussed here is really
beside the point.

------
robertlagrant
If I can use Perl, I can do it in one line of bash.

~~~
jeremyjh
The author did it in one line of Perl, using an existing library. How is that
different from using awk? Yes awk is widely deployed but so is CPAN. In any
case deployment isn’t part of the argument for using the UNIX philosophy.

~~~
08-15
What exactly is the UNIX philosophy?

As far as I can see, it's roughly "Data structures are hard, so let's pretend
everything is ASCII text. Now we can use a really difficult systems
programming language (C) to build functions with weird calling conventions
("tools") and glue them together with an awful scripting language (sh)."

'awk' fits into this framework awkwardly. It implements a restricted pattern
(go line by line, match actions to lines) and doesn't want to be a full
programming language, even though it really is.
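For readers who haven't used it, that restricted shape looks like this (a minimal sketch with made-up sample text): a pattern guards a per-line action, and the END action runs once after the input is exhausted.

```shell
# awk's pattern-action model in miniature: count fields per non-empty
# line, then total them at the end.
printf 'one two\nthree four five\n' |
awk 'NF > 0 { print NF; total += NF }
     END    { print "total", total }'
# prints 2, then 3, then "total 5"
```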

But 'perl' is a programming language, and it wants to be one. Once you have
'perl', what is the point of using a reasonable scripting language (perl) to
build functions with weird calling conventions and gluing them together with
an awful scripting language? You're better off writing functions(!) with
normal calling conventions (a library) and gluing them together using the good
scripting language.

That logic, taken to its conclusion, replaces the shell with a clean
language, encourages libraries instead of "tools", and embeds the 'awk'
pattern into said language instead of relegating it to an incomplete secondary
scripting language. In one word: 'scsh'.

~~~
rifung
> What exactly is the UNIX philosophy?

I believe it is the idea of writing small tools focused on doing one thing
well with reusability in mind as opposed to writing larger complicated tools
that do multiple things.

[https://en.wikipedia.org/wiki/Unix_philosophy](https://en.wikipedia.org/wiki/Unix_philosophy)

~~~
pjmlp
A cargo-cult philosophy never adopted by commercial UNIX clones and adored by
UNIX FOSS, where the man page of each GNU tool, describing the available set of
command-line arguments, looks like a kitchen sink.

------
dang
The previous episode:
[https://news.ycombinator.com/item?id=22406070](https://news.ycombinator.com/item?id=22406070).

~~~
giancarlostoro
According to a sibling comment this is the follow up to this post. Sibling
comment here:

[https://news.ycombinator.com/item?id=22436816](https://news.ycombinator.com/item?id=22436816)

~~~
dang
Sorry, I'm afraid I don't understand.

------
pjmlp
> In my everyday work, I use Unix commands many times daily to perform diverse
> and very different tasks. I very rarely encounter tasks that cannot be
> solved by joining together a couple of commands.

Others just use a REPL instead, where tr, sort, uniq, and sed get to be
function calls with a threading macro.

~~~
chaps
Meh, a shell is just a REPL and a pipe is just a threading macro. What's your
point?

~~~
pjmlp
The UNIX shell is a primitive REPL, without the capabilities of the REPLs
developed at Xerox PARC, TI and Genera, regarding structured data, debugging
tools, function composition, inline graphics, ability to directly interact
with OS APIs.

A chariot in the age of cars.

~~~
chaps
A car without infrastructure is just a fancy box. A chariot without
infrastructure is a rideable horse.

A rideable horse in the age of broken, disparate, infrastructure.

But, at the end of the day, it all depends on what you're trying to
accomplish. I use repls, shells, notebooks, etc, on a regular basis. Unix
tools solve some problems. Repls solve other problems. Notebooks another.
What's important, to me, is to be able to make the most out of them
all, despite their flaws, because they're simply the tools that we have in our
toolchain. It would be a shame to not learn our own tools, when they can offer
us so, so much.

------
musicale
Everyone knows Donald Knuth would have finished TAOCP faster if he only
understood how to use UNIX pipes.

------
somesortofsystm
I think it's entirely fair to say that Knuth was in a frame to demonstrate one
thing, and to implement something second. That he was ideating about the subject
didn't - necessarily - prevent a successful implementation.

Certainly, as another set of eyes, the lower character count matters most,
though.

------
GedByrne
I want to read the follow-up article where the challenge is to create a
typeset document. Bentley's criticism includes a single-line shell script
invoking LaTeX.

------
jagged-chisel
“Through this demonstration I haven't proven that Bentley didn't frame Knuth;
it seems that at some point McIlroy admitted that the criticism was unfair.”

------
kencausey
I feel like 'by Jon Bentley' should be restored to the end of the title to
help differentiate it from the earlier posting to which this is a reply.

------
scythe
To me the question doesn't depend so much on whether Knuth was "framed".

The meaningful criticism leveled at Knuth's code was that it was monolithic.
It's true that it was long because he wrote it from scratch, but length alone
doesn't force the code to be tightly coupled.

Did Knuth try to make his code reusable? Was it reusable? I think those are
the key questions.

~~~
gmfawcett
That's not really a meaningful criticism. As others have pointed out, the
point of Knuth's exercise was not to optimally solve the technical problem,
but to demonstrate the effectiveness of Literate Programming (or the lack
thereof). The technical problem was just a strawman, so that Knuth had a non-
trivial program to demonstrate. With this in mind, McIlroy's pipe example
isn't a critique of LP at all -- if anything, it was just a distracting
advertisement for the Unix style of composing programs in the shell.

What McIlroy could have examined -- and chose not to at the time -- is whether
awk, sed, tr, and friends could _themselves_ be written in a literate style,
and whether such a rewrite would have achieved the goals that Knuth was
setting out for LP.

Knuth could have chosen to break his monolith into multiple, loosely-coupled
programs, and then written them all in an LP style. But would that have really
made the demonstration any more effective?

~~~
scythe
> Knuth could have chosen to break his monolith into multiple, loosely-coupled
> programs, and then written then all in an LP style. But would that have
> really made the demonstration any more effective?

I would say yes. Clearly loose-coupling isn't necessary for a program that
small. And no, it isn't always optimal.

But I have clear memories of being asked to re-do an intro CS assignment three
times because it wasn't in a properly object-oriented style. Modularity is not
a necessity all of the time, but it is important sometimes. Demonstrating the
potential to write reusable code seems just as important as demonstrating
anything else. (If the conclusion is "LP helps clarity, but you can't write
libraries", is that even positive?)

As far as I can tell, "this code has only one useful point of entry" was a key
part of the anti-LP argument leveled by McIlroy. After all, isn't the goal to
demonstrate that LP works for people who aren't Donald Knuth?

~~~
gmfawcett
I think McIlroy missed the mark here. Single points of entry, and tightly-
coupled code, might be reasonable criticisms of Knuth's personal style, but I
don't see them as inherent limitations of the LP approach. You could write
multiple useful, interesting narratives about a library's core elements --
algorithms, data structures, etc., -- and then write a simple appendix
documenting the entry points / API. The style itself doesn't have to get in
the way of good program structure.

My own critique of LP is really more about the act of writing itself. Many
people, programmers included, just aren't skilled at it! Knuth's literate
programs are interesting because he's got something interesting to say, and
his writing style is engaging. But I wouldn't enjoy having to read (or
maintain!) a literate program that was written by a poor writer in a dull,
meandering style.

Also, Knuth seems to think that the literate style ought to make us into
better programmers, simply _because_ we're writing prose along with the code
-- that the combination somehow unlocks a better understanding of the problem,
how to solve it, and how to explain it to others. That sounds inspiring, but
I'm not sure it's really true in the general case. Perhaps more research is
needed to find out. :)

------
jeffdavis
There's a careful balance when using tools and libraries. It's obvious that
they are a good choice sometimes, but I've been surprised at the number of
times where a tool/library that looks like a perfect fit is actually not,
the whole problem needs to be reconsidered, and I end up writing a lot of
original code.

------
lotwxyz
My Unix-philosophy way of showing people what a web-based 'ls' command looks
like is this:

    
    
      $ import fs && import util && comstr --nowrap ls | pretty | less
    

(Works here: [https://dev.lotw.xyz/shell.os](https://dev.lotw.xyz/shell.os))

~~~
DSpinellis
This looks intriguing. Can you please explain what I'm looking at? I feel like
Bowman waking up at the end of "2001: A Space Odyssey"

------
carlsborg
The premise was that piping together shell commands was “better engineering”
than a computer program that captured the author's thoughts? Which is more
robust for debugging, producing diagnostics, error handling and reporting,
extending, and code reuse?

Laughable and sad at the same time, because the ACM would publish that.

------
j88439h84
> In fact, one of the reasons I sometimes prefer using Perl over Python is
> that it's very easy to incorporate into modular Unix tool pipelines. In
> contrast, Python encourages the creation of monoliths of the type McIlroy
> criticized.

Python's Mario tool makes it easy to use Python code in pipelines.

[https://github.com/python-mario/mario](https://github.com/python-mario/mario)

------
nixpulvis
I find it comedic that someone called LaTeX error handling "phenomenally
good".

~~~
svat
It's my comment you're referring to (linked from the post). The full sentence
was “BTW, TeX's error handling is phenomenally good IMO; the opposite of the
situation with LaTeX.” You've reversed the meaning(!) but I stand by my
original comment: I invite you to try plain TeX (instead of LaTeX) for a few
weeks/months, and see how you feel about the way it handles errors.

Unlike LaTeX, where the (TeX) error messages usually appear arbitrary /
incomprehensible / unrelated to what you're doing, in TeX (IMO) all the error
messages are very informative and include a lot of information and give you
ways to recover from your problem and poke around, get more context, etc.
First you'll have to have read a manual (or I recommend _A Beginner's Book of
TeX_ by Seroul and Levy), but my claim is about the user experience in the
steady state.

Of course, part of the reason is that LaTeX is much more complicated than the
low-level things one may be doing with plain TeX. Another reason is that the
LaTeX authors were working with severe constraints, one of which was of their
own choosing: their choice of using TeX macros as a “programming language”
(which it was never intended to be, and at which it is horrible).
Nevertheless, a big part of the reason is that they were trying so hard to
make things "easy" for the user in the typical case that they didn't care as
much about ways in which things can go wrong and how surprising errors can be.

------
jimbokun
Did Knuth ever reply personally on whether or not he felt he was framed?

------
blackandblue
> In contrast, Python encourages the creation of monoliths of the type McIlroy
> criticized.

Unfair and gratuitous criticism of Python... I have seen and written many
small tools you can run using "python -m".

~~~
btilly
As someone who uses both languages extensively, I disagree.

You are right that Python is great for writing small tools that you can run,
just like Perl.

But Python does not lend itself to writing them inline in a command line like
was done here. Perl not only does, but has a number of useful features
specifically added to fit this common use case, three of which were used in
this example: -a for autosplit, -M to load a module, and -e to have the code
passed as an argument on the command line rather than having to save it to a
file.

Secondly, Perl lends itself to being used as a "better shell" while Python
does not.

What I mean is that anything that can be written in bash can be trivially
rewritten in Perl, and the program that you get tends to be substantially more
maintainable if the bash script is at all complex. In such a rewrite there
usually isn't a good reason to change the structure of the program and make it
into a single Perl program.

By contrast Python has focused on the "One True Way" to do things, and the
plumbing work for calling external commands is just verbose enough that a
Python rewrite of a bash script is not necessarily better than the bash
script. And furthermore it is much more likely that the Python rewrite of the
bash script is much better rewritten as a Python script.

The result is that for someone who lives on the Unix command line, Perl
integrates into their world better than Python does. If you have never lived
on the Unix command line, the objections may sound silly. But spend months
typing commands, and the extra steps that Python requires Every Single Time
will get old.

(This is historically not surprising. Perl 1 was focused on generating text
reports. Perl 2 moved into being a sysadmin tool. Perl wound up as a web
language because it is what all of the sysadmins recommended for text
manipulation to people writing early CGI scripts.)

~~~
codetrotter
I use a few different languages, one of which is Python, and I use the command
line a lot, and I agree that Python is too verbose for a lot of the things
that I do on the command line. Therefore, Python is not something that I reach
for when doing simple tasks involving pipelines and/or file operations.

I have not yet put time into learning Perl. In no small part because I was
intimidated by the weirdness of some of the Perl code that I've seen. The
terseness that Perl allows, and which I desire, is compelling and scary at the
same time. For this, Perl has also earned the reputation of being "Write Once,
Read Never".

But let's assume that I overcome my fear of Perl. Which version of Perl would
you recommend that I learn? Perl 5 or the language formerly known as Perl 6?

~~~
btilly
Learn Perl 5. Perl 6 is an interesting research project exploring future
directions that, it seems, the programming world will not take.

Most of the weirdness of Perl goes away when you read $ as "the", @ as "these"
and a hash lookup as "of".

~~~
perigrin
That's not exactly fair to Raku. A fairer critique (and one in keeping with
the theme of this thread) is that Raku is less focused on integrating with the
Unix command line than it is on tool building, putting it closer to Python
than Perl 5 in the spectrum of things. This was a specific design influence
dating back to the first days of Perl 6, so it makes some sense.

~~~
btilly
_That's not exactly fair to Raku._

I know that Raku supporters disagree with me, but that has been my considered
opinion for several years. And this has been something I put a lot of thought
into.

Let me lay out the case.

What are the key ideas invented or promoted in Perl 6 / Raku that people get
excited about?

- Object-oriented programming including generics, roles and multiple dispatch

- Functional programming primitives, lazy and eager list evaluation,
junctions, autothreading and hyperoperators (vector operators)

- Parallelism, concurrency, and asynchrony including multi-core support

- Definable grammars for pattern matching and generalized string processing

- Optional and gradual typing

I got this list from [https://www.raku.org/](https://www.raku.org/). It is
what Raku people think is interesting about their own language. (So I don't
get to bring up things I really don't like, like twigils.)

Some of these ideas are mainstream, some not. According to Tiobe (yes, not to
be taken seriously but it is accurate enough), the top languages today are
Java, C, Python, C++ and C#. Let's eliminate from the list of Raku features
anything that is supported by at least 2 of them to come up with things that
are novel in Raku while not being broadly adopted today. The list gets much
shorter.

- Roles (OO programming)

- Junctions, autothreading and hyperoperators (functional programming).

- Definable grammars for pattern matching and generalized string processing

- Optional and gradual typing

How many of these will be widely adopted by top languages in 25 years? My best
estimate is 1. Could be none, could be 2, I'd be shocked if there were 3.

I say opinion, but it is a fairly well educated opinion. Here is my argument
about each.

- Roles. They have been around for some years. The only language where I have
seen them used heavily is Perl 5. Nobody else seems excited.

- Junctions are mostly a bit of syntax around any/all, which is pretty
convenient already. Autothreading and hyperoperators are a cool-sounding way
to parallelize stuff, but getting good parallel performance is complex and
counterintuitive. I don't think that this is a good approach.

- Definable grammars are an interesting rethinking of regexes, but parsing is
a difficult and specialized problem. I don't see an interesting approach in an
unpopular language changing how the world tackles it.

- Optional and gradual typing sounded great when it made it into the Common
Lisp standard. But over 30 years later, only Python supports it of the top 5.
And it isn't widely used there. I see nothing about the next 25 years that
makes it more compelling than in the last 25. (Though Raku's implementation is
far, far better than Perl 5's broken prototype system. But that is damning
with faint praise.)

So use Raku if you find it fun. You'll get a view into an alternate universe
of might have beens. But I still believe that the ideas that are new to you
won't be particularly relevant to the future of programming.

-----

It is hard at this date to reconstruct what a similar list would have been for
Perl 5 at a similar stage. People were excited about CPAN. Perl people kind of took
TAP unit testing for granted and didn't appreciate exactly how important it
was. Perl people liked the improvements in regular expressions but probably
couldn't have guessed how influential "perl compatible regular expressions"
would become across languages. Ideas we were excited about like "taint mode"
went approximately nowhere. And some ideas that Perl helped popularize, like
closures, were ones that few Perl programmers realized were actually supported
by the language.

However it would be a true shocker if Raku was anywhere near as influential on
the programming landscape 25 years from now.

~~~
zimpenfish
> Junctions are mostly a bit of syntax around any/all

A quick look at Raku junctions makes me think they're basically a slightly
tarted-up version of Icon's generators and goal-directed execution (which is
no bad thing, of course, but hardly novel).

~~~
lizmat
Then you definitely did not grok it. What I gather from
[https://en.wikipedia.org/wiki/Icon_(programming_language)#Goal-directed_execution](https://en.wikipedia.org/wiki/Icon_\(programming_language\)#Goal-directed_execution)
is that Icon's goal-directed execution is more like `react whenever` in Raku
([https://docs.raku.org/language/concurrency#whenever](https://docs.raku.org/language/concurrency#whenever))

Junctions autothread. What does that mean? Using a Junction as an ordinary
value will cause execution for each of the eigenstates, resulting in another
Junction with the result of each of the executions. An example:

    
    
        # a simple sub showing its arg, returning twice the value
        sub foo($bar) { say $bar; 2 * $bar }
    
        # a simple "or" junction
        my $a = 1 | 2 | 3;
        say foo($a);  # executes for each of eigenstates
        # 1
        # 2
        # 3
        # any(2, 4, 6)
    

Documentation:
[https://docs.raku.org/type/Junction](https://docs.raku.org/type/Junction)

~~~
b2gills
Multi-threading and Junctions auto-threading are NOT the same thing.

Calling it auto-threading has led many people to the wrong conclusion.

(It is possible that auto-threading may be done multi-threaded in the future,
but it isn't currently.)

