
> then it got broken

Got broken? I think of it more as having failed to obtain/coordinate the resources needed to progress.

What it means to have a healthy language ecosystem has advanced. 1970's Prolog implementations couldn't standardize on a way to read files. 1980's CommonLisp did, but had no community repo. 1990's Perl did, but few languages then had a good test suite, and they were commercial and $$$$. Later languages did, but <insert-your-favorite-thing-that-we-still-suck-at>.

And it's not easy for a language to move on. Prolog was still struggling with creating a standard library decades later. CommonLisp and Python had decade-long struggles to create community repos. A story goes that node.js wasn't planning on a community repo, until someone said "don't be python".

The magnitude of software engineering resources has so massively ramped that old-time progress looks like sleep or death. Every phone now has a UI layout constraint system. We knew it was the right thing, craved it, for years... while the occasional person had it as an intermittent hobby project. That was just the scale of things. Open source barely existed. "Will open source survive?" was a completely unresolved question. Commercial was a much bigger piece of a much smaller pie, but that wasn't sufficient to drive the ecosystem.

The Haskell implementation of Perl 6 failed because the community couldn't manage to fund the one critical person. It was circa 2005, and the social infrastructure needed to fund someone simply wasn't the practiced thing it is now.

And we're still bad at all this. The javascript community, for all its massive size, never managed to pick up the prototype-based programming skills of self and smalltalk. The... never mind.

It's the usual civilization bootstrap sad tale. Society, government, markets, and our profession are miserably poor at allocating resources and coordinating effort. So societally-critical tech ends up on the multi-decade glacial-creep hobby-project-and-graduate-student installment plan. Add in pervasively dysfunctional incentives, and... it becomes amazing that we're making such wonderful progress... even if it is so very wretchedly slow and poor compared to what it might be.

So did CL get broken? Or mostly just got left behind? Or is that a kind of broken?




The critical person behind the Haskell implementation of Perl 6 got ill. It had nothing to do with funding.


You raise interesting history, and it's good to see the perspective you've given.

I don't know if Common Lisp got left behind or just took a completely different path. From my perspective, it got broken with its macro system decisions, its dynamic/static environment decisions and its namespace decisions. It created too many second class citizens within the language, which means that you have to know far more than you should to understand any part of the programs you are looking at.

Every choice a language designer makes affects what the language will do in terms of programmer productivity, not only for the original developers of programs using that language, but also for all those who come later when maintaining or extending those programs.

I have come to the conclusion that a language can be a help when writing the original program and become a hindrance when you need to change that program for any reason. It is here that the detailed documentation covering all the design criteria and coding decisions, algorithm choices, etc., becomes more important than the language you may choose.

Both together will enable future generations to build upon what has been done.

All the points that you have highlighted above are important, but the underlying disincentive to provide full and adequately detailed documentation will work against community growth. No less today than in centuries past, knowledge gets hidden away: individuals are unwilling to pass on the critical pieces unless you are part of the pack, or don't think it important enough to write down because it is obviously obvious.

To understand a piece of Lisp code, one has to know what the special forms are and how they interact, what the macros being used are and what code they generate, and what the various symbols might be hiding in terms of their SPECIALness. These things may help in writing the code, but they work against future programmers in modifying the code. Having had to maintain various code bases that I did not write, in quite a variety of different languages, I have found that "trickily" written code can become a nightmare when you need to make required changes. I have found that Lisp code writers seem to like writing "trickily" written code.
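
To make that concrete, here is a minimal sketch (the names *scale*, area and with-doubled are invented for illustration) of the kind of non-local knowledge reading Common Lisp can demand:

    (defvar *scale* 1)                 ; proclaims *SCALE* special, i.e. dynamically scoped

    (defun area (w h)
      (* *scale* w h))                 ; quietly depends on the current dynamic binding

    (defmacro with-doubled (&body body)
      `(let ((*scale* (* 2 *scale*)))  ; rebinds *SCALE* for everything called inside BODY
         ,@body))

    (with-doubled (area 3 4))          ; => 24, not 12

Nothing at the call site looks unusual; you have to know the macro's expansion and that *scale* is special to predict the result. That is the extra knowledge burden I am describing.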

Now, that is only one person's perspective and someone else may find something completely different. That is not a problem as there are many tens of .... programmers in the world. Each one having a perspective on how to write good code.


> namespace

Nod. I fuzzily recall being told years ago of ITA Software struggling to even build their own CL code. Reader-defined-symbol load-order conflict hell, as I recall. And that was just a core engine, embedded in a sea of Java.

> second class citizens

I too wish something like Kernel[1] had been pursued. Kernel languages continue to be explored, so perhaps someday. Someday capped by AI/VR/whatever meaning "it might have been nice to have back then, but old-style languages just aren't how we do 'software' anymore".

> detailed documentation covering all the design criteria and coding decisions

As in manufacturing, inadequate docs can have both short and long-term catastrophic and drag impacts... but our tooling is really bad, high-burden, so we've unhappy tradeoffs to make in practice.

Though, I just saw a pull request go by, adding a nice function to a popular public api. The review requested 'please add a sentence saying what it does.' :)

So, yeah. Capturing design motivation is a thing, and software doesn't seem a leader among industries there.

> enable future generations to build upon what has been done.

Early python had a largely-unused abstraction available, of objects carrying C pointers, so C programs/libraries could be pulled together at runtime. In an alternate timeline, with only slightly different choices, instead of monolithic C libraries, there might have been a rich ecology. :/ The failure to widely adopt multiple dispatch seems another one of these "and thus we doomed those who followed us to pain and toil, and society to the loss of all they might have contributed had they not been thus crippled".

> To understand a piece of Lisp code [...struggle]

This one I don't quite buy. Java's "better for industry to shackle developers to keep them hot swappable", yes, regrettably. But an inherent struggle to read? That's always seemed to me more an instance of the IDE/tooling-vs-language-mismatch argument. "Your community uses too many little files (because it's awkward in my favorite editor)." "Your language shouldn't have permitted unicode for identifiers (because I don't know how to type it, and my email program doesn't like it)." CL in vi, yuck. CL in Lisp Machine emacs... was like vscode or eclipse, for in many ways a nicer language, that ran everything down to the metal. Though one can perhaps push this argument too far, as with smalltalk image-based "we don't need no source files" culture. Or it becomes a "with a sufficiently smart AI-complete refactoring IDE, even this code base becomes maintainable".

But "trickily" written code, yes. Or more generally, just crufty. Perhaps that's another of those historical shifts. More elbow room now to prioritize maintenance: performance less of a dominating concern; more development not having the flavor of small-team hackathon/death-march/spike-into-production. And despite the "more eyeballs" open-source argument perhaps being over stated, I'd guess the ratio of readers to writers has increased by an order of magnitude or two or more, at least for popular open source. There are just so very many more programmers. The idea that 'programming languages are for communicating among humans as much as with computers' came from the lisp community. But there's also "enough rope to hang yourself; enough power to shoot yourself in the foot; some people just shouldn't be allowed firearms (or pottery); safety interlocks and guards help you keep your fingers attached".

One perspective on T(est)DD I like is that it allows you to shift around ease of change - to shape the 'change requires more overhead' vs 'change requires less thinking to do safely' tradeoff over your code space. Things nailed down by tests are harder to change (the tests need updating too), but make surrounded things easier to change, by reducing the need to maintain correctness of transformation, and simplifying debugging of the inevitable failure to do so. It's puzzled me that the TDD community hasn't talked more about test lifecycle - the dance of adding, expanding, updating, and pruning tests. Much CL code and culture predated testing culture. TDD (easy refactoring) plus insanely rich and concise languages (plus powerful tooling) seems a largely unexplored but intriguing area of language design space. Sort of haskell/idris T(ype)DD and T(est)DD, with an IDE able to make even dense APL transparent, for some language with richer type, runtime, and syntax systems.

Looking back at CL, and thinking "like <current language>, just a bit different", one can miss how much has changed since. Which hides how much change is available and incoming. 1950's programs each had their own languages, because using a "high-level" language was implausibly heavy. No one thinks of using assembly for web dev. Cloud has only started to impact language design. And mostly in a "ok, we'd really have to deal with that, but don't, because everyone has build farms". There's https://github.com/StanfordSNR/gg 'compile the linux kernel cold-cache in a trice for a nickel'. Golang may be the last major language where single-core cold-cache offline compilation performance was a language design priority. Nix would be silly without having internet, but we do, so we can have fun. What it means to have a language and its ecosystem has looked very different in the past, and can look very different in the future. Even before mixing in ML "please apply this behavior spec to this language-or-dsl substrate, validated with this more-conservatively-handled test suite, and keep it under a buck, and be done by the time I finish sneezing". There's so much potential fun. And potential to impact society. I just hope we don't piss away decades getting there.

[1] https://web.cs.wpi.edu/~jshutt/kernel.html


My point about "understanding the code" and the burden of additional information to retain is about the semantics applicable to the language itself, not about the tooling that we have built around it for development.

Lisp started with some core simple ideas to which were added many others. Some of them, like dynamic scoping, are simple ideas in themselves but have complex interactions with the rest of the language. These interactions increase the knowledge burden that must be retained at all times to be able to make sense of what you are reading. This burden is on top of any knowledge burden you need to carry in relation to the application you are modifying or maintaining.
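
A minimal sketch of one such interaction (the variable and function names here are hypothetical): once a symbol has been proclaimed special with DEFVAR, code that reads like an ordinary lexical closure quietly changes meaning:

    (defvar x 10)                ; X is now globally special (dynamically scoped)

    (defun make-adder ()
      (let ((x 5))               ; looks like a lexical binding...
        (lambda (y) (+ x y))))   ; ...but X is special, so the lambda reads whatever
                                 ; X is dynamically bound to when it is called

    (funcall (make-adder) 1)     ; => 11, not the 6 a lexical reading suggests

Nothing in MAKE-ADDER itself signals the difference; you have to carry the earlier DEFVAR in your head. That is the kind of interaction I mean.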

This is about what are the things you design as part of your language, not the things you do with your language. This was what I was trying to somewhat humorously write in my first comment. As I look back over it, I failed to make that clear.

Lisp had the beginnings of "wow", but then it took a wrong turn down into a semantic quagmire. Scheme started to fix that and later Kernel was another attempt.


> failed to make that clear [...] the burden of additional information to retain is about the semantics applicable to the language itself, not about the tooling that we have built around it for development. [...] knowledge burden that must be retained at all times to make sense of what you are reading

Not lack of clarity I think - it seems there's a real disagreement there. I agree about the burden, and the role of complex semantics in increasing it. But I think of bearing the burden as more multifaceted than being solely about the language. I think of it as a collaboration between language, and tooling, and tasteful coding. For maintenance, the last is unavailable. But there's still tooling. If the language design makes something unclear and burdensome, it seems to me sufficient that language tooling clarifies it and lifts the burden. That our tooling is often as poor as our languages perhaps makes this distinction less interesting. But a shared attribution seems worth keeping in mind - an extra point of leverage. Especially since folks so often choose seriously suboptimal tooling, and tasteless engineering, and then attribute their difficulties to the language. There's much truth to that attribution, but also much left out.

Though as you pointed out, cognitive styles could play a role. I was at a mathy talk with a math person, and we completely disagreed on the adequacy of the talk. My best trick is "surfing" incompletely-described systems. His best trick is precisely understanding systems. Faced with pairs of code and tooling, I could see us repeatedly having divergent happiness. Except where some future nonwretched language finally permits nice code.


> These interactions increase the knowledge burden that must be retained at all times to be able to make sense of what you are reading. This burden is on top of any knowledge burden you need to carry in relation to the application you are modifying or maintaining.

That's why you have a Lisp where you can interactively explore the running program.

> Lisp had the beginnings of "wow", but then it took a wrong turn down into a semantic quagmire. Scheme started to fix that and later Kernel was another attempt.

I think that's misguided. Lisp is not a semantic quagmire that Scheme or Kernel tried to fix. As a Lisp user, I find it great that someone is trying to revive Fexprs, but practically it has no meaning for me.

Lisp is actually a real programming language, and its users have determined, and are determining, what features it has.


node.js's API and require() was based on CommonJS [1], and server-side JS was a thing since the Netscape times around 1996.

Prolog's API (and syntax) became a formal standard only in 1995 [2], but Edinburgh Prolog was widely used way before (early 1980's or earlier) [3].

[1]: http://wiki.commonjs.org/wiki/CommonJS

[2]: https://www.iso.org/standard/21413.html

[3]: https://www.cs.cmu.edu/Groups/AI/util/lang/prolog/code/tools...


> node.js's API and require() was based on CommonJS [1], and server-side JS was a thing since the Netscape times around 1996.

I'm sorry, I'm missing the point. Perhaps I should have said community code repository/database? CPAN, PyPI, npm.

> Prolog's API (and syntax) became a formal standard only in 1995, but Edinburgh Prolog was widely used way before

As was Quintus Prolog, the other big "camp". A SICStus (Edinburgh camp) description: "The ISO Prolog standardization effort started late, too late. The Prolog dialects had already diverged: basically, there were as many dialects as there were implementations, although the Edinburgh tradition, which had grown out of David H.D. Warren’s work, was always the dominant one. Every vendor had already invested too much effort and acquired too large a customer base to be prepared to make radical changes to syntax and semantics. Instead, every vendor would defend his own dialect against such radical changes. Finally, after the most vehement opposition had been worn down in countless acrimonious committee meetings, a compromise document that most voting countries could live with was submitted for balloting and was approved. Although far from perfect," [...] "contains things that would better have been left out, and lacks other dearly needed items" (https://arxiv.org/abs/1011.5640). The latter took more years.

Similar 'incompatible dialects' balkanization afflicted other languages. CommonLisp and R?RS were the lisp equivalents of ISO Prolog. Acrimonious committees.

It's happily become increasingly hard to imagine. A general pattern across languages of nested continuums of incompatibility. No sharing infrastructure. Diverse tooling. Hand documentation of dependencies. Each company and school with their own environment, that had to be manually dealt with in any struggle to borrow code from them. Small islands of relative language compatibility around the many implementations, nested in less compatible camps, nested in a variously incompatible "language" ecology. More language family than language. To download is to port. With no way to share the NxN effort, so everyone else gets to do it again for themselves.

Perhaps an analogy might be python 2 and 3 being camps that aren't going away, with largely separate communities (like openbsd and linux), with no language test suites and variously incompatible implementations (cpython, jython, ironpython). No BDFL, and an acrimonious committee process struggling to negotiate and nail to the floor a CommonPython standard. And far far fewer people spread among it all, so each little niche is like its own resource-starved little language. Imagine each web framework community struggling to create/port its own python standard library and tooling.

The opportunity to complain about left-pad was... like complaining about wifi slowness for your kindle, at your seat at 30 thousand feet over the atlantic. :)


> I'm sorry, I'm missing the point. Perhaps I should have said community code repository/database? CPAN, PyPI, npm.

Maybe I've missed your point, I just wanted to point out that node.js' API and core module system was based on community consensus.


> Maybe I've missed your point,

Ah, ok. I was pointing out there that something we now take for granted, being able to "yarn add foo" (ask a build tool to download and install a foo package/module from the central javascript npm database which collects collective community effort), didn't used to be a thing. It was once "search (on mailing list archives, later on web) for where some author has stashed their own hand prepared foo.tar file (no semantic versioning originally) on some ftp server somewhere (hopefully with anonymous login; and reading the welcome message go by often describing how/when they did/didn't want the server used; and often groveling over directory listings, and reading directory FILES and READMEs, to figure out which might be the right file); download it; check the size against the ftp server file listing to see if it's likely intact (no sums, and truncation isn't uncommon); check where it's going to spray its files (multiple conventions); unpack it; read README or INSTALL to get some notes on how it was intended to be built, and perhaps on various workarounds for different systems; variously struggle to make that happen on your system; including repeating this exercise for any dependencies; and then hope it all just works, because there are no, or only very minimal, tests to check that".

Python was originally like this. Then there were years of "we're trying to have a Perl-like central repository... again... but we're again not yet quite pulling it off as a community..." There's a story that the python experience was the cautionary tale which motivated having a single officially-sanctioned npm code repository to serve node. Instead of not, and hoping it would all just work out. Using 1990's python was a very different experience than using 2010's python, far more than the difference of pythons 1 and 3. And the 2020's python experience may become something where you can't imagine ever going back... to how you're handling python now.


Sorry, I've only now come to read your post(s). I guess if one were to compare npm with anything, Java's Maven Central is named as a reference at multiple places in the npm source and on GH forums, and is also the point of reference for CommonJS modules, since many of the early CommonJS/node adopters were Java fall-outs.

I know very well what downloading packages and patches used to be like in the 90s ;) and I think it was Perl/CPAN that nailed the idea of a language-specific central repository and automatic dependency resolution, though it was practiced in Debian (and possibly pkgsrc) before that. Not that I had much success using CPAN to resolve old .pm's; these seem to be lost to bitrot.


This is a fantastic perspective, one that I hadn’t thought about as clearly as you’ve laid out.



