Ask HN: What Happened to Elm?
261 points by ak_111 on Feb 10, 2023 | 335 comments
I remember years ago I spent some time learning it, as I was curious what a purely functional front-end language feels like, and genuinely thought it had a good chance of being massively adopted in the near future, especially with what seemed like a vibrant community.

However, I got distracted by a data science career and toolchain. I'm in the process of starting my first SaaS as a solo founder, was looking at what languages to use, and was surprised to see that Elm... seems dead? As an example, I did a search on Hacker News and it has hardly been mentioned in the last year (in both comments and posts!)

I'm wondering if anyone can shed some more light on what happened to the language. Is it a safe bet? And what would be a suitable replacement for it?




In version 0.19, the core team hardcoded a whitelist into the compiler of which projects are allowed to use native code (which happened to be, essentially, the core team's own pet projects). This basically crippled the language beyond usability for everyone else. https://news.ycombinator.com/item?id=22821447 https://news.ycombinator.com/item?id=17842400 https://news.ycombinator.com/item?id=16510267

As for a replacement, I'd look towards Haskell. At the language level it looks really similar to Elm, and a lot of work has gone into making it more practical on the web lately, e.g. GHC 9.6 getting a WASM backend merged.


Yep. And the core team has that "happy cult" vibe where they think that if they're (passive-aggressively) polite enough, they don't have to worry about anyone's opinion outside their insular little group. I tried to use it for a little project about a year ago, ran into some problems, and, searching around, found that my problems were 1. not unique to me, and 2. definitely not going to be solved by the language designers, despite a lot of people having the same suggestions. Their attitude is "either you design your app how we'd prefer you to or you don't use Elm at all." This doesn't work in any other technical project, but they think it's going to work for them for some reason.


"Happy cult" vibe is exactly how describe that faux-niceness they throw around. Thanks for coining that phrase.


Honestly, the boldness of the opinions is also what makes elm well designed.

I never really used it, but I see plenty of other languages that are perhaps too much driven by consensus rather than vision.


Opinionation is often useful, but it's best implemented with an escape hatch of some sort for when you run into a situation sufficiently weird that there's not actually a sensible way to deal with it otherwise.

I don't particularly mind if a project holds their opinions sufficiently strongly that they mark the escape hatch in big red letters with "BY USING THIS YOU ARE VOIDING YOUR WARRANTY, IF IT BREAKS YOU GET TO KEEP BOTH HALVES" and then tap the sign any time somebody asks for support using it, but it should still -exist-.


I think it's also fine to just say no when people ask for an escape hatch. Say no and tell them to use another tool.

Not every tool has to solve all problems.

But I do think people should spend more time articulating what their tools don't do, rather than what they do :)


The escape hatch is Haskell. The only reason Elm exists is to remove escape hatches from Haskell.


The escape hatch in Elm is the ports system for interfacing with JavaScript.


Ports are explicitly async and therefore only viable for a significantly limited subset of purposes.

This has been discussed at length elsewhere in this thread, on elm related discussion boards, and in multiple articles.

Ports are the -preferred- way to interface with JavaScript, certainly, and when their inherent limitations aren't a deal breaker, preferring them is entirely sensible.

It is not, however, an escape hatch in any meaningful way, hence why people were extremely upset when the actual escape hatch was disabled by maintainer fiat.


I guess I am less strict with what I consider an escape hatch. It does not need to be optimal or follow my preferred method IMHO.

Other than performance I do not see any limitation of the async port system. You can call back from the JS side. It is a peculiar way, I agree, but one that keeps the Elm paradigm intact. It is quite clever. It is "different", but I do not consider that a limitation, but a matter of preference.
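
For anyone who hasn't used them, a minimal sketch of what that round trip looks like (module and port names here are made up, not from any real project):

    port module DateFormat exposing (formatDate, dateFormatted)

    -- Elm -> JS: fire-and-forget command asking the page to format a timestamp
    port formatDate : Int -> Cmd msg

    -- JS -> Elm: the answer arrives later, as a subscription
    port dateFormatted : (String -> msg) -> Sub msg

    -- On the JS side (written as a comment since this file is Elm):
    --   app.ports.formatDate.subscribe(function (ms) {
    --     app.ports.dateFormatted.send(new Intl.DateTimeFormat().format(new Date(ms)));
    --   });

Both directions go through the Elm runtime's message loop, which is why it's async by construction: there's no way to call into JS and get the value back in the same expression.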


What were those problems you ran into?


Also the fact that their package manager is strongly coupled to https://package.elm-lang.org/. There is no way to override it. This means that if that package site ever goes down, you will be left with an unbuildable project. Seems pretty risky to me.


I mean you should be proxying/caching your repo anyway, non? Even just to save on bandwidth.


It's a lot more fiddly to do that if the package manager doesn't support you doing it. (In fact it might be impossible if it's using HTTPS)


No.


You don't need native code to be productive in Elm. Elm is perfectly usable without it. Use of native code was not widespread before 0.19 anyway.

I would know. I was working on a large native code project that was killed by 0.19. It sucked for me that my project could not continue, but I can't say that it has made much of a practical difference when it comes to making software applications. My project was called "elm-canvas" and it was about providing canvas support. Since 0.19, other people have found implementations of the HTML canvas element that do not rely on native code (like Joakin's great 'elm-canvas' https://github.com/joakin/elm-canvas). For my own HTML canvas-based software projects, I have been able to find my own non-native code implementations that work pretty well.

So, I think I faced a disproportionate amount of this problem, and I can't even say it has made much of a difference to my own productivity in Elm (and I have written a lot of Elm).


I've never used Elm or interacted with the community, but reading some of those posts reminds me that the B in BDFL is quite important.


Evan's hella nice. Some folks just really really don't like being told "No."

(There are comments upthread calling the Elm community a "happy cult" because they said "no" so nicely!)

The complaints you hear about Elm are pretty much all of the "Evan said no" ilk.

What you don't hear are actual bug reports, eh? People aren't kvetching about how their Elm apps broke, eh? That silence is deafening, once you listen for it. Elm works. It may be the Easy-Bake Oven of web dev, but the things it cooks are perfectly edible!

(Following that metaphor, most JS frameworks, etc. produce muffins with bits of broken glass in them.)


"Nice" and "benevolent" are not the same thing.


Do you have an actual point to make, or are you just being pedantic for fun?


Some of the most manipulative and self-serving people you’ll ever meet are “nice”.


So, is that a claim you're seriously making about Evan?

- - - -

Not to be arch, but this whole line of discussion is uninteresting to me. I'm glad to discuss Elm, but I don't want to hear unfounded character assassination of a kid who's only guilty of being willing to say "no" to people who want to change his language in ways that he's not into.

From my POV Elm stands as a serious challenge to the entire JS ecosystem. I keep asking, "What's the business value argument for not using Elm?" and no one has a good answer.


The crucial point is exactly what you're referencing ("his language"). The way that contributions, feedback, and issues were treated was very much not in line with a tool that people depend on for their work. It was in line with a personal hobby project, but the marketing and promotion did not set those expectations. They strongly promoted the project as a tool for use in industry. So your defense is not convincing to me as someone who did try to use this tool in a professional setting where I was accountable to a larger organization for delivering on technical requirements.

People weren't asking to compromise or control the design of the language. They wanted to be able to contribute to the project (usually just first-party packages) in order to have their problems solved in a timely manner or have their issues addressed in some other way. This was usually after having invested hundreds of hours in the language, because the basic golden path was very polished in Elm.

I can only speculate on the reasons why there was this disparity between marketing and reality. Certainly the language gained a lot more interest and community investment than a Show HN hobby project, and some people did benefit from that. I'm not making any specific claims about Evan. The "nice" aspect is relevant because while following the Elm community closely, I witnessed many harsh interactions on the part of Evan and Richard Feldman that were simply wrapped in gentle verbiage.

Here's just one example (check the edit history): https://github.com/gdotdesign/elm-github-install/issues/62#i...


So what I'm not hearing is that Elm didn't work. It worked, right?

Evan and co. just wouldn't agree to let you contribute to solve your problems and issues in the way you would have preferred?

(In re: the "harsh" comment you linked to, I don't know what to tell you. It doesn't read as "harsh" to me, if anything it shows admirable restraint. In any event, I'm not interested in "tone policing" Richard Feldman. I know nothing about him.)

- - - -

I'm one of those people who are like, if you don't like what Evan's doing just write your own, eh?

Meaning no disrespect to Evan and co. and what they've achieved, it's not that hard. It just isn't. Elm-style languages are pretty easy to implement (that's kind of the whole point, yeah?)

It divides the complainers into two groups: those who can write their own but instead prefer to leverage Elm devs via what amounts to various forms of emotional and reputational blackmail; and those who can't but still want to leverage Elm devs via what amounts to various forms of emotional and reputational blackmail. Either way, I personally feel comfortable dismissing their complaints without further consideration.

- - - -

Do you have any technical complaints about Elm? Did it ever break in production? What was the most interesting bug you encountered using Elm? That's the discussion I'd like to have.


Why are you interested in having a discussion on any of this when you can just talk to yourself (i.e., write your own)? It’s really not that hard.

Yes, Elm and its core packages had bugs and critical omissions that were outstanding for years, many of which affected our app. It was a running joke on our team to try to guess the vintage of the extant GitHub issue whenever we ran into a problem. I consciously fled the ecosystem and community many years ago so I don’t remember specifics anymore. And the repos were scrubbed of all past issues when 0.19 was released so we can’t go check either.

If you appreciate implied threats for creating a package which gives people some (unauthorized!) control over their own destiny, then I can see why this comment wouldn’t trouble you. Have fun in the sandbox.


> Why are you interested in having a discussion on any of this...

I'm interested in systems which permit bug-free programming. Elm seems like a step in the right direction, bringing semi-obscure ideas closer to the mainstream. Lot of JS programmers got some exposure to Elm-like FP.

> you can just talk to yourself

I mostly do. (I'm a recluse, this account on HN is my primary channel of communication to the outside world.)

> (i.e., write your own)? It’s really not that hard.

I am writing my own system, and it's really not that hard.

> Yes, Elm and its core packages had bugs and critical omissions that were outstanding for years, many of which affected our app.

That sucks. But really, without any details your vague memories aren't much to go on.

> appreciate implied threats

I'm not sure what you mean, maybe I misread the message. It didn't sound to me like anyone was threatening anyone.

> Have fun in the sandbox.

Thanks! I will.


This is starting to sound unhinged. You're identifying strongly with Evan but you're also denying how deeply he matters to you. That seems mentally unhealthy.

Are you ok?


Are you? You're making drive-by psychological diagnosis on the Internet. Maybe rethink your rhetorical style?

It's uncool to troll like that. This is a two-day old thread. Were you just hanging around waiting for me to respond so you could troll?


I think anyone can see something is wrong upstairs, man. If that sounds like trolling, I'm sorry. It sucks that HN is your only contact with the real world and I hope you can find something to worship besides Elm/Evan.


> What's the business value argument for not using Elm?

The time and energy investment into using any JS code in Elm for things not currently sorted out is severely disproportional to the value provided by Elm. The ROI is just not there. If you compare it to the FFI layer of PureScript, there is no question that it's nice to have validation, etc., but it's just not worth it.

The very restrictive view of packages (where to get them, what you can upload and use, etc.) is also a very obvious deal breaker.

I would be surprised that you find these two things that everyone keeps bringing up not to be good answers to your question, but then I remember what being in the Elm community felt like: "Everything Evan says or thinks we all think". It's the same culty feel that Elixir has a lighter version of.


> "What's the business value argument for not using Elm?"

The JavaScript ecosystem is vast in comparison to Elm's, which can save time and money (especially now that Elm apparently doesn't allow JS code). JS devs are likely also cheaper than Elm devs due to the high supply. Just two reasons why I as a business owner would still write in JavaScript/TypeScript over Elm.


> From my POV Elm stands as a serious challenge to the entire JS ecosystem.

Lol, OK. Spoken like a cultist. Bad news for you: It's not going to happen.


FYI use double newlines between paragraphs/links, otherwise it shows up as one line.

I completely agree with you; the way the Elm team handled it was not conducive to collaborative use of their language, it was more like they only wanted themselves to be able to use it fully.


> This basically crippled the language beyond usability for everyone else.

This is not true. You can argue that it was perhaps the downfall in terms of contributions and popularity. But you don't need native code for writing apps in Elm.


The fact that the maintainers made a whitelist for their projects disproves your point.


"Their projects" were core libraries or experiments IIRC.


You don't need a hammer to hit a nail, but if you hand me a screwdriver I'm going back to Node.


[flagged]


I’ve written quite a bit of Elm professionally, and it was an issue for me and a contributing factor (though not the sole factor) in leaving Elm behind.

A viable solution to my problem in professional contexts would have been private packages, which I believe Gren supports, so perhaps Gren would be a solution today, but I’ve moved on. That said, what’s happening with Gren is interesting and I applaud it.


Yea, exactly. I have written a complete functional game in Elm, with multiplayer over websockets, push notifications and whatnot. It has daily users to this day. I never needed "native" code.

Maybe some decisions were wrong, as I said. Elm is not popular today. But they are also right in some respects: I don't need another TypeScript (but with weird syntax).


Would you mind sharing a link to your game :)


Does this still assume a separate TypeScript/React frontend or the like, or can you actually do the whole frontend in Haskell?


[flagged]


> This is FUD. For any normal web app you don't need that functionality, just use the built in "ports" concept to interface with JS stuff.

The change was handled extremely poorly. I only knew one team using Elm in production at the time. They went from being huge proponents of Elm to recommending against it after the 0.18/0.19 changes and drama upended a lot of their work.

There may have been a "right" way to do things under the new version, but breaking major functionality in a minor release without warning is a sign that you don't really care about your users. Changes like this require discussion, warning, and long transition periods.

It was bad enough that they got it wrong, but then Evan doubled down by dedicating the opening of his next conference talk to mocking the users who were unhappy about the changes: https://www.youtube.com/watch?v=uGlzRt-FYto

The part where he mocks a commenter for asking "Is Evan killing Elm's momentum?" is weirdly prescient, given that these events ultimately did mark the slowdown of the Elm community. Elm development came to a near halt about a year later.

> And it being open source means nothing stops you from forking it and adding your own native modules. Reason no one has done it is it's no need, and doing thinga that way would break the promises the compiler makes about no runtime bugs.

I disagree. Maintaining an in-house fork of a compiler just to accomplish what could be done with past versions isn't a reasonable suggestion. The reason people weren't maintaining in-house forks of the compiler just to keep their projects going was because that's a silly thing to do when you could just use any number of more reasonable mainstream projects that don't require such extreme lengths.

It's weird to see people casually suggest "Just maintain a fork of the compiler" as a reasonable solution.


I was using Elm full time back then, and I think some important context to note is that 0.18 had outstanding issues which had seen no movement for years (and PRs were generally ignored). The official guidance was “wait for 0.19.” While 0.19 addressed many if not all of these issues, it came with a number of breaking changes, some very significant.

0.19 also coincided with a reorganization of the GitHub repos which happened to scrub all past issues, PRs, and discussion. A more stringent moderation policy was also instituted on the official forums which forbade discussion of potential forks, unapproved workarounds of the new limitations, or any criticism which was seen as too harsh.


Well that last bit alone would put me permanently off a language. It's always wild to me when open-source leaders let the power go to their heads. If they want that kind of control over a community, they should go start a backwoods cult. Or at least start a company where they're signing the paychecks.


That's why I left Elm. There were major unfixed bugs for years. I liked the concept, but I couldn't deal with the slow progress to fix things.


> It was bad enough that they got it wrong, but then Evan doubled down by dedicating the opening of his next conference talk to mocking the users who were unhappy about the changes: https://www.youtube.com/watch?v=uGlzRt-FYto

> The part where he mocks a commenter for asking "Is Evan killing Elm's momentum?" is weirdly prescient, given that these events ultimately did mark the slowdown of the Elm community. Elm development came to a near halt about a year later.

Wow, that was hard to watch. He takes people being passionate about Elm and discussing potential issues and mocks them.

If I was a proponent of Elm that felt uneasy about its direction at the time, this would have made it easy to walk away from.


It would be weird to create such a fork casually, but it's a bit surprising that nobody in the community did it by now. Isn't forking the open source way of dealing with political disputes like this?


Gren is apparently such a fork: https://gren-lang.org/


Is it? https://gren-lang.org/book/faq.html#what-are-the-differences... doesn't mention anything about native code.


> While we could spend considerable time and effort in creating a language which would look very similar to Elm, we've instead decided to fork the compiler and core packages, and use that as the basis of Gren.

https://gren-lang.org/news/220530_first_release/


One of Evan’s colleagues from NoRedInk actually started his own language heavily inspired by Elm that aims to fix a lot of its ecosystem problems.

https://www.derw-lang.com

I’m not sure if it’ll go anywhere but I like the idea that it’s interoperable with existing JS/TS packages.


If it's really just a "hardcoded whitelist" that causes trouble, said fork would truly be easy to maintain.


Until they do some other halfbaked thing that's harder to follow along with. When people show you who they are, believe them the first time.

Compilers are supposed to be boring, especially when your language has ~exactly 1. If you can't trust the people maintaining it, it's too much risk to take. Having to do an entire rewrite in a different language can easily kill a company or a project.


It would be, if the code wasn't so horrifically bitrotted at this point. GHC has changed a lot since Elm last got any meaningful update, and they decided to use unsafe GHC primitives everywhere that have no support guarantee. You have to fix a bunch of issues with the compiler source before it will even compile on a relatively new version of GHC.


Until another change against the community interest comes along. You only trust the "benevolent dictator" for as long as they remain benevolent.


Would you need to maintain a fork at all? Couldn't you obliterate the rule when building the compiler?


What's the difference between that and a fork?


One doesn't require a "community of maintainers"; it's an ad hoc tool you use to get your own stuff done. Every project has them, so what's one more?


> And it being open source means nothing stops you from forking it and adding your own native modules. Reason no one has done it is it's no need, and doing thinga that way would break the promises the compiler makes about no runtime bugs.

No, it's because one of the core team members made threats against everyone who said they were thinking of forking it: https://lukeplant.me.uk/blog/posts/why-im-leaving-elm/#forka...


Wow, that is wild! And yes, that kind of thing can totally drive off contributors.

Long ago I went to a local Scala unconference. At the time there were at least two popular web frameworks, and one of the authors was in attendance. [1] Chatting during one of the breaks, I asked him some question about his framework, something about testability of apps. He quickly grew enraged and shouted at me. I slunk off, wondering what I had done wrong.

At the bar after the conference, a few different people apologized to me for his outburst. Not him, mind you, just other people. I was like, "Aha, I recognize this. He's a missing stair." [2] He was clearly an asshole to people on the regular, but the community just put up with it.

That was the last time I went to a Scala event. Life's too short to put up with bullies, or people who accommodate them.

[1] I no longer remember which framework or which author.

[2] https://en.wikipedia.org/wiki/Missing_stair


Presumably Lift / David Pollak, which haven't been relevant for about 10 years; the Scala community has gone through a lot of effort to remove bad actors since then, for the record (and while the drama lingers on, I don't think anyone would deny that standards are much higher now).


I'm glad to hear it, but for me that still leaves Scala one down on every other language. Once bitten, twice shy.


This is a poor line of reasoning - pretty much every community acquires an ass at some point and when it's not something they've dealt with previously it often takes them a while to figure out how to react.

The question is how they do react once they've figured it out, and Scala's come along in leaps and bounds in that regard.

If anything, you should be more wary of groups that haven't yet dealt with that problem, because they've had no chance to develop the relevant antibodies yet.

I mean, I get it, if you have a bad experience with X, it's going to colour your perceptions going forwards, being human be like that - but similarly to throwing away a dice because you rolled a nat 1 with it at an inopportune moment, it's far from the most rational response available to you.


A community isn't "a dice". It has state. Indeed, dice are precisely designed to be uncorrelated roll to roll. You're claiming it's still weighted, just in a different direction now.

Did they really clean things up? Or is it just talk about how things have changed? The way to find out for sure is for me to spend time in the community. It could be better, or the abuse could just be better hidden. And honestly, given the revolution in recent years in codes of conduct and DEI, I suspect my odds are better with newer communities, as there are now plenty of "antibodies" out there.

But all this tickled my memory. Wasn't there an FP advocate who was also an abusive jackass and far-right kook just a few years back? Yes, yes there was. And gosh, what community was he prominent in? Scala. So if I had waited 5 years and given Scala another chance, I would still have landed in a community where there was significant support for abusive behavior.

Maybe he was booted in 2019 or so? Hard to tell from the web. From his tweets, it looks like he has crept back in. Is that because he and Scala are worlds better? Or is it just the usual missing stair routine that I observed more than a decade ago? I don't know, can't tell, and won't be finding out, because there are plenty of other technologies for me to pick up.


> given the revolution in recent years in codes of conduct and DEI, I suspect my odds are better with newer communities, as there are now plenty of "antibodies" out there.

A lot of that stuff can easily just enable a different set of broken stairs though.

Crafting a code of conduct such that it makes it easy to deal with both reactionary -and- wokescold flavoured bullies is something of an art form, and I've seen a number of communities manage to completely fail at one of the two albeit usually in a very well-meaning way.

Antibodies are great up until they give you an auto-immune syndrome, basically.

(I definitely saw the far right kook get defenestrated, though the community may still be sufficiently quokka to've let him sneak back in through a different door, I'm mostly an observer here - and mostly only looking at all because as a veteran code of conduct advocate I tend to find the various failure modes worth examining)


Sure, and I never said otherwise. I'm just saying my personal heuristic is "once bitten twice shy". If you would like to apply a different one in your life, godspeed. But move along with your "poor line of reasoning" accusations. I've just demonstrated how it was not only a reasonable heuristic, it was also proven correct for the very community I was talking about.


An N=1 example of an instance where a line of reasoning produced an accurate answer proves that it -can- produce the right answer, but does not, in fact, tell you anything beyond that about how likely it is to produce the right answer in any given case.

So "I've just demonstrated how it was [...] a reasonable heuristic" is simply unsupported here. If it's working for you, great, maybe you're encountering a different subset of communities than I am, maybe you've been lucky, maybe I'm simply dead wrong, but getting Elm-maintainer-style passive aggressive over somebody pointing out logical flaws in the underlying claim as to why the heuristic is 'proven correct' is a rather unfortunate way to respond to what was written as constructive criticism of said logic.


I see. You'll only listen to me when I've been abused in a statistically significant number of communities. Definitely somebody I'm going to listen to on the topic of where the good communities are.


Not much has changed, no worries.


Hah, thanks! A nice counterpoint to the person who said otherwise. And especially to the person who seems to think that I must have some sort of brain damage to be concerned.


One way Scala experts deal with people disagreeing with them is to contact those peoples' employers/universities, maybe you should revise your profile accordingly.


Come on, this is FUD. You're talking about one or two guys who got into an enormous personal fight years back and were condemned by everyone else.


Who condemned Martin? Sources please.


I've never seen such an openly hostile OSS maintainer team as Elm's.


sveasoft


Sveasoft wasn't OSS.


Sure they were, they just didn't want to admit all their code was GPL.


Ports are not really a solution for rendering code. The obvious example is the internationalization API. It’s not feasible to communicate back and forth through a port every time you want to format a date.

Some folks propose web components, but that’s another can of worms: an increase in complexity and still no help from the compiler.


> It’s not feasible to communicate back and forth through a port every time you want to format a date.

Why not? Is it code complexity or runtime inefficiency you are referring to?


Actually both.

Let's say that on the desired page you have three components with dates somewhere - the date of the article, the dates of the comments below, and the date of last login in the header.

After some user action, you navigate to this page. In Elm terms that means your Model updates and then your rendering functions run.

Only then can you see that you need to format dates - your rendering function is responsible for formatting.

The rendering function has to be synchronous - so you need to return some string for the date - a "loading state" or a default format. So the user gets a "flash of unformatted date".

Now you need to somehow communicate with the port - that means sending an asynchronous message, "format this date for me". (And sending messages from a render function is not something really supported out of the box in Elm.)

Then the port comes back with a response, you need to put this formatted value in your global Model (some kind of Dict from unformatted to formatted dates?), and that triggers another render with the formatted value.

In summary:

- you have multiple render loops where one would be enough

- you pollute your global Model with junk that could be easily derived

- you introduce latency in rendering because Ports are asynchronous

Ports are great for interaction - the user clicks something, you can send a command, wait for the response, and update your model. But there are a lot of cases where you want to use a synchronous, pure function from JavaScript, like the Intl API.
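
To make that concrete, a minimal sketch of what the caching workaround tends to look like (all names hypothetical):

    module DateCache exposing (Model, Msg(..), update, viewDate)

    import Dict exposing (Dict)
    import Html exposing (Html, text)

    type alias Model =
        { formattedDates : Dict Int String }    -- cache: raw timestamp -> formatted string

    type Msg
        = GotFormattedDate Int String           -- delivered later by a port subscription

    viewDate : Model -> Int -> Html Msg
    viewDate model timestamp =
        -- first render: the cache is empty, so the user sees a placeholder
        text (Dict.get timestamp model.formattedDates |> Maybe.withDefault "…")

    update : Msg -> Model -> ( Model, Cmd Msg )
    update msg model =
        case msg of
            GotFormattedDate timestamp formatted ->
                -- only on this second pass does the real date show up
                ( { model | formattedDates = Dict.insert timestamp formatted model.formattedDates }
                , Cmd.none
                )

That's one port round trip per distinct date, a cache in the Model, and an extra render pass, versus a single synchronous Intl call on the JS side.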


So, the elegant way to solve this would be to use an Elm i18n library and just use ports to get the user's region instead of formatting every single string.

I understand that it is hard for a software developer to accept that these things need to be reimplemented, but it also seems to be a natural consequence of "up-typing" an entire ecosystem.

I studied formal techniques for PL. A large portion of the work was in reimplementing existing things in Coq, HOL, etc. Not particularly interesting, but necessary to reach a mature ecosystem.

I do understand that this makes it a no-go for a lot of people until these things have been implemented. This is also natural.


> but it also seems to be a natural consequence of "up-typing" an entire ecosystem.

Reimplementing the i18n API in Elm would make sense only if the benefits from types could outweigh the robustness of the browser implementation. Given Elm's not-very-expressive type system, I really doubt that. You can’t capture the intricacies of i18n in Elm's type system.

I have more confidence in a browser API that is already tested and used by millions than in a reimplementation in a niche language.

The elegant solution would be if I could write a well-typed layer that calls the browser API. A low bar of entry would drive adoption and, in effect, real-world testing. But that’s not possible anymore.


> The elegant solution would be if I could write a well-typed layer that calls the browser API

Not if your goal is correctness.

Again, it is completely OK that Elm is not your choice if it does not align with your values, which appears to be the case.


> Not if your goal is correctness.

Could you define correctness in this context? I'm asking because I don't believe that you can achieve correctness using the Elm type system. I don't see how you can encapsulate i18n rules inside Elm type definitions. Maybe something like Coq could do this ...

You can't express something like "if passed 3 as the group size, the integer part of the number should be represented as a string with separators every three digits". Even in Elm that's something that you need to handle not in the type system but in runtime code.

What's more - many core packages are just this - a typed layer over a browser API. elm/regex introduces barely any type safety. It assumes that the browser API is "correct".

> Again, it is completely OK that Elm is not your choice if it does not align with your values, which appears to be the case.

I often see this response from Elmers but really I don't understand the point of it. I responded to the claim that ports solve all problems. I provided a good, real example where ports are a bad solution. In return I was scolded that I don't understand software engineering.

I don't jump into elmconf and complain or demand changes. I just provide my take on the discussed issue. And to be frank, I think that's a very valuable take for someone deciding if Elm is good for them. That kind of information is not present on the Elm homepage and is not obvious to new users.


Well, as with all logic systems you can only express correctness up to its granularity. Elm's type system is, as you mention, very rudimentary and basically just HM + reactive programming.

First, I don't write any serious applications in Elm, and would never do it. I don't find that it is productive because of the reasons you list. Just as I would never write any serious applications in Coq.

But with Elm, as with Coq, I can appreciate that their goals are not immediate developer productivity. Maybe some day when the ecosystems needed are fully implemented. Until then I just applaud everybody working on systems that are strongly typed and ensure that all their dependencies are.

I believe this will provide tremendous value – when ripe.


The problem with I18n in particular is that reimplementing in Elm causes pretty unreasonable download size bloat.


Why do you even need to care about this as a user?

And mind you this is a front end tool.

They made a big change to stuff that people depend on, without consulting the community, and then doubled down, threatened people that wanted to fork, all with a weird passive-aggressive tone.


You don't need to care about this as a user. That's my point. People on HN complain about Elm things that won't affect you.


not sure if FUD or not, but i distinctly remember people saying that they feared to fork the compiler because they were threatened with being banned from all the official channels and fora if they did.


How hard could it be to spin up an anonymous fork, though?


if you're going through all the trouble of forking you want to build a community around your fork, so you're not the only one maintaining it.


C'mon, it's forking a whitelist.


Once the maintainer has ignored all community feedback on said whitelist and then threatened said members of community for considering removing the whitelist, why in the world would you assume it'll remain "just a whitelist"?


no, i was around in the elm community at the time, and what people wanted to fork for was a better way to support native js code in their own projects. there was a bit more to it than just adding filenames to a whitelist.


Forking a compiler is a gigantic pain. None but the most hardcore would bother. It's far easier to give up pushing the boulder and go use something else, which I hazard to say many have done. Or, they see the difficulty in using ports for stuff and never even start using the language.


TypeScript won. I was watching this video by Evan Czaplicki, the creator [0], and what struck me was that back then, Evan was absolutely right. He was right that, at the high level, we needed a typed JS, but he was wrong in the lower-level details, at least in terms of market share. TypeScript won precisely because it doesn't start off as intimidating for JS users, while others like Reason and Elm died off, but ironically TS is way more complex than either of the others in its type system.

Another factor for this is that Syntax Matters™. When Gleam (a language on the Erlang runtime, cousin of Elixir) started out, it had a Haskell-like syntax, but this was found to be a barrier to adoption, so when the creator changed it to be more C- and Rust-like, the adoption rate picked up significantly [1]. In the same vein, perhaps Elm was just too esoteric for those coming from JS. I strongly believe the reason Rust became so popular is that it's an ML in C clothing, rather than being a pure ML like OCaml. With curly braces and all, it's more accessible, and then people start learning about the more functional features like maps, folds, and so on.

Elm also had quite a rocky history with the BDFL not allowing other people to use the features the compiler maintainers could use, such as escape hatches, which are documented in other parts of HN. This article [2] is a particularly good overview of why some people left.

[0] https://www.youtube.com/watch?v=oYk8CKH7OhE

[1] https://news.ycombinator.com/item?id=22902462#22903210

[2] https://news.ycombinator.com/item?id=22821447


It's rather unbelievable to me that people find ML syntax unfamiliar and off-putting. It makes no sense, and I have always found it disappointing that both Rust and Gleam moved away from it.

People seem to love Python syntax. ML syntax is just more regular, consistent, and typed. How is that worse?


It's not worse. It's just not better in any significant way. So if you're new to this language, and the first thing it makes you learn is this new syntax which isn't better in any significant way, learning that language immediately feels like you're wasting your time learning something that isn't better in any significant way.

Of course, if you started with ML syntax, you'd feel the same way learning a language with C syntax.

But if you're introducing a new language, it makes sense to target the language the most people are already familiar with. Maybe your language does really bring some real syntax improvements, but it's a silly hill to die on, because syntax just isn't very important.

If syntax changes are all your language brings to the table, your language really isn't worth learning. And if your language brings more important ideas to the table, then it would be a shame for people to never make it to those ideas because they got bored learning unimportant syntax ideas.


"Not better in any significant way" is a hot take, and pretty hand-wavy way of dismissing a whole group of languages. With so many people preferring that syntax, it might just be that you're actually wrong..

For instance, I'd say the piping is significantly better than the backwards reading of nested function calls you have to do in Python.

Data |> map |> filter |> group |> sum

Vs sum(group(map(a) for a in data if filter(a)))


> "Not better in any significant way" is a hot take, and pretty hand-wavy way of dismissing a whole group of languages. With so many people preferring that syntax, it might just be that you're actually wrong..

I can see how it could be interpreted that way if you didn't read anything else in my post.

A lot more people use C syntax than use ML syntax, so your own argument, "[w]ith so many people preferring that syntax", would support C not ML.

> For instance, I'd say the piping is significantly better than the backwards reading of nested function calls you have to do in Python.

> Data |> map |> filter |> group |> sum

> Vs sum(group(map(a) for a in data if filter(a)))

That's not coherent Python syntax, but knowing both languages, I'll agree that the ML syntax is definitely better. But significantly better? Reading that sort of code in Python simply isn't a problem I run into. Neither syntax solves any problems I have, because understanding syntax isn't ever the problem I have (in languages that aren't intentionally opaque).

Incidentally, if you want that sort of thing in C-ish syntax, it's not hard to get it in (for example) C#:

  data.map(f).filter(p).groupBy(g).reduce(add);


I first learned of programming with C, C++, Java, and Python. I was unimpressed and remain so. It wasn't until I learned of visual dataflow languages and functional languages, in particular functional-first or pragmatic functional languages that I began to like programming.

But, I was also not indoctrinated by a computer science degree, which probably has a lot to do with it.


> if you started with ML syntax, you'd feel the same way learning a language with C syntax.

I always wonder this but I’ve always found Lisp syntax to be a lot more intimidating and less readable for whatever reason. I should give it a fair shot one of these days.


Lisp syntax is about the vocab, because the grouping (what expression is a child of what other expression, and at which position: third child, fifth child, ...) is clear. If you don't know what the word means, like what foobly is in (foobly ((x 3)) (blorch x)), and don't yet have the intuition to make an accurate guess, you are screwed if you don't look it up. The meaning of everything inside (foobly ...) could depend on the definition of foobly. foobly could be in the language itself, or some extension in the language implementation, or it could be defined in the code base.

Newcomers to some kind of Lisp get confronted with identifiers that they haven't seen anywhere.


Lisp syntax is unreadable because there is not much syntax. Functions, macros, and variables are all mixed into a gazillion parentheses.

ML syntax is really clean.


A few of the things I hate about ML syntax:

* Infix semicolons instead of paired delimiters to denote code blocks. These cause shift-reduce errors and make indentation unintuitive.

* Implicit currying turning what should be obvious argument count errors into type errors on unrelated lines (see the sketch after this list).

* Implicit currying making parameter order overly significant. This drives languages to introduce infix pipe operators with unintuitive precedence and to bikeshed forward vs backward piping. This could all be done away with by having a dedicated partial application syntax.

* Postfix, of all things, for type constructors. Except for the ones that are, sigh, infix again.

* Inscrutable type signatures on higher-order functions, largely as a result of the above and of the convention of single-letter type variables.
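
To illustrate the currying and piping points, a small Elm-flavoured sketch (names invented):

    module Sketch exposing (..)

    add : Int -> Int -> Int
    add x y = x + y

    -- Not an arity error: this compiles as a List (Int -> Int) of partially
    -- applied functions, and the type error only appears wherever `oops` is
    -- later used as a List Int.
    oops = List.map add [ 1, 2 ]

    -- Data-last argument order exists largely so that |> pipelines read
    -- left to right.
    piped = [ 1, 2, 3 ] |> List.map (add 1) |> List.filter (\n -> n > 2)

Neither line is wrong by itself; the point is where the feedback shows up.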


I don't hate ML or its syntax, but this feels like a reasonable set of irritations to me. I may need to introspect a bit more.


These are my really high-level and poorly explained thoughts on this. I have enormous trouble reading functional programming languages.

Python gets away with significant whitespace by reading like pseudocode and is fairly clean. It uses words rather than symbols.

C syntax provides visual structure using brackets.

It seems to keep things readable; you get the option of using symbols or significant whitespace for scope.

Functional languages have a tendency to do both. Lisp’s syntax is defined by replacing syntax with parentheses.

You basically get into a situation where you need to be able to read and comprehend math-like lines of code. Which doesn’t seem to mesh well with the structure of natural language.

It’s kind of like abstraction in programming. Some people kind of get it. Some people really get it. And other people need to consider a different career.

I think there are just too many good programmers who don’t work well with functional programming to say it's a problem of lack of familiarity. I think it could be a fundamental lack of talent.


Have you tried F# or OCaml? I would argue that programming in Python is much harder than it is in F#. It is for me.


I tried F# seriously for a few days and the tooling was so annoying to set up it made me crave going back to pip and virtualenvs

I guess if you're a .NET developer that may be a non-issue, but I literally couldn't get all but the simplest examples to work and just gave up even though I loved the language and its design overall


How recent was this?

If not recent, steps would be:

* Download and install .NET 7 SDK (very easy on basically any platform): https://dotnet.microsoft.com/en-us/download

* Already, you have F#. You can run `dotnet fsi` to enter F# Interactive (FSI), the F# REPL. Or you can create a new solution and project with the dotnet CLI: https://learn.microsoft.com/en-us/dotnet/fsharp/get-started/.... Once you have tests, you can just use `dotnet test` and it will find and run all tests.

* You can also just download VS Code and install the Polyglot Notebooks extension to write F# code in a notebook. All you need is the .NET SDK, VS Code, and the Polyglot Notebooks extension. https://marketplace.visualstudio.com/items?itemName=ms-dotne...

* Either in an F# script (.fsx file), FSI (ran by `dotnet fsi` again), or in a notebook, you can install NuGet dependencies by just writing

    #r "nuget: <package name>"
https://learn.microsoft.com/en-us/dotnet/fsharp/tools/fsharp...

I'm confused about the tooling being bad. Python doesn't have anything like the solution and project management that F# and .NET have. Visual Studio on Windows and macOS are good, there's JetBrains Rider, and also VS Code with Ionide for IDEs. I've never had issues with the tooling with F#. That cannot be said for pip, in my experience.


> the tooling was so annoying to set up it made me crave going back to pip and virtualenvs

Damn, if you wanted to go back to pip and virtual envs, then that's really saying something about how bad the tooling is, because pip and virtualenv are the bane of my existence when writing Python. Give me cargo or even npm and I'd be happy.


F# tooling is not bad, actually. In fact it is much easier to have project-level dependencies, unlike Python where dependencies are system-wide or virtual-environment-wide. I just think OP is unfamiliar with the kind of tooling that .NET has and got deterred by that. I'm a Python developer who has tried F# casually, and adding a dependency is as easy as copy-pasting a command from the NuGet repository online.


Something I like about boring Java: I hardly ever think about the dependencies management tool.


Java is even worse, to be honest; you basically need an IDE for it, so much so that VS Code literally ships a build with Java preconfigured.

https://code.visualstudio.com/docs/languages/java#_install-v...


I was specifically thinking of dependencies management. VSCode/IDE is kinda out of scope here.

I find the IntelliJ products are good IDEs. I use VSCode for node stuff and it's fine.

As a daily driver, I personally use vim and build with Maven. It's boring; nothing ever happens. I need 2 binaries and one edit to my $PATH. Once in a while I do use IntelliJ, to explore a new codebase.


Gradle, Maven etc are also a pain, especially when compared to cargo or other all-in-one package management and build tools.


How so? I hardly have to interact with it. If I need a new dependency, it’s a new line.

Node or pip will find innovative ways to break in that same scenario.


It couldn't possibly be that VSCode ships with a build for a language that is enormously popular preconfigured instead?


So? Lots of languages work well with VSCode that aren't TypeScript.


My point is that the REASON VSCode ships with java support is NOT that Java "needs" it.

But I think you knew that.


You need an IDE for any language honestly. And I don't see that as a negative.


I have never used an IDE for any language. Some languages make it very easy to set up. Python, Haskell, Rust, and Node are extraordinarily straightforward with basic CLI tooling. Others like C++ are more complicated, but the compilers and tooling are extensively documented.


Every company I've worked at has extensive setups regardless of the language. Even supposedly "simple" languages like Python will benefit from IDEs, and it shows. It's especially the case when programs become larger and more complex.


Static types mean some languages benefit from IDEs more than others. Code completion is not very meaningful without types unless you use some machine learning magic.

The nice thing about TypeScript is that it was basically designed as an IDE language. It has just enough typing to make the IDE useful and lets you ignore the typing well enough when you need to. It isn't a safe language, just toolable.


F# was one of the languages I was thinking about when I wrote my comment. :)


Very strange. When I write F# code, it almost directly mimics the problem statement. For example, I have implemented a fair amount of The Ray Tracer Challenge in F#, and it feels like I'm literally just writing down the description and tests from the book and have working code.

Do you have a particular example of Python that you feel shows off its pseudocode abilities? Because one can write F# almost identically to Python, I just can't imagine Python being superior given that it lacks several domain modeling tools that F# has (records and discriminated unions).


Speaking as a Python dev with some experience in F#, one maybe-minor point is the unfamiliar function call syntax, which could throw off programmers not familiar with curried functions. Most other languages have parentheses for function calls, so it can be a bit jarring to have a sequence of strings and not know whether the result is a function or a value. Furthermore some experienced F# devs love making custom symbols, which can also obfuscate the code


> so it can be a bit jarring to have a sequence of strings and not know whether the result is a function or a value

I think that the nice thing about F# is it is always a value. Function evaluation and currying can be taught fairly readily, especially given that once you get it, there's no gotchas.

> Furthermore some experienced F# devs love making custom symbols, which can also obfuscate the code

I agree that is painful when it occurs, but that style should absolutely be de-recommended and is so by Don Syme. I haven't really encountered it much in my personal development though (or I have avoided it).


> You basically get into a situation where you need to be able to read and comprehend math-like lines of code. Which doesn’t seem to mesh well with the structure of natural language.

"The Art of Doing Science and Engineering" has a few paragraphs about this. He points out that language has some intentional redundancy, because humans are unreliable and require it. But programming language designers are often skilled in logic, and prefer that to verbose "human" communication.

The language referenced is APL, but it made me think of functional languages which have a lot of custom infix operators, etc.


> But programming language designers are often skilled in logic, and prefer that to verbose "human" communication.

Reminds me of a thing my dad quoted about the problem with Newtonian notation, AKA dot notation or flyspeck notation: with it, a fly can do differential math. As a result, people usually use different notation.

That said redundancy allows for both readability by humans and better resync by parsers when there are errors. And it's easier to code gen and auto refactor code.


My way of explaining this is that we have a part of our brains which is intended to handle and recognize grammar. If we offload some of the work to it, it feels easier to deal with code. Even though what we are actually doing is certainly harder.

The tradeoff of this kind of grammar is that metaprogramming becomes harder because by the act of making some of the structure into special grammar, it becomes harder to metaprogram.


Familiarity.

The preferred syntax is the one that incurs the least cognitive load. Which in turn depends on what they are already familiar with.


It makes perfect sense. Most programmers learn to program on a C-like, Basic-like or Python-like language. Not ML-like. What's familiar is what's easiest to adopt.


But those languages weren't familiar to begin with. It's like people are saying programmers can only learn to program one time.


They can only learn to program the first time one time. After that, everything is relative to their first language. So learning cost becomes a function of the distance between old languages and the new one.

Moreover, the gain in going from zero programming languages to one is huge. You can make things! Going from one to two is small in comparison. You can make things, but hopefully somewhat better.

So the ROI on learning a very different language isn't great for most. And that's when the technology adoption curve comes in handy: https://en.wikipedia.org/wiki/Technology_adoption_life_cycle

For any technology, some people will like the novelty and will be risk tolerant, so they'll pick it up for fun. Others will be seeking some strong benefit; if the technology provides that they'll adopt it too. If that happens, another chunk of people will pick it up as the coming thing. But circa half of people adopt the new thing only when it becomes dominant or when they're forced to by circumstance.

Max Planck said, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” And that's often shortened as "science proceeds one funeral at a time." Software's not quite that bad, but it's certainly heading in that direction.


Given how much effort has been spent in shoving Javascript into places it doesn't belong solely so that programmers don't have to learn something else, that seems to be more true than not.


No, but that's what most people learned, so most people will continue to favor more C-like languages than languages that are not C-like. If you want people to favor ML-like languages, you'd have to start off teaching them ML first (which some colleges do).

Industry-wise as well, C-like langs have had network effects over the past 30 years, especially in JS land, which accounts for a lot of the professional software engineering work done these days.


This is incorrect. People are born with an innate biological bias towards C like syntax OVER ML syntax or even FP syntax.

It's easy to prove this fact with some natural example outside of programming.

Let's say I have several tasks I need you to do. If I wanted to present those tasks to you in written form, what would those tasks look like?

They would look like a series of procedures. Any human would present imperative instructions to you WITHOUT being taught anything beforehand. Nobody is going to put those tasks into a single run on sentence because that's less intuitive to us.

Additionally if I want you to repeat a task I'm not going to write something self referential. I just say repeat this task four times or something. We communicate this way through language and through writing, so OF COURSE c like syntax is more natural.

(for those who didn't pick up on it, I'm referring to function composition and recursion respectively in the examples I just gave.)

You got it backwards. The industry teaches C-like syntax because that syntax is naturally more intuitive. This is despite the fact that in the long run FP-syntax is overall better.


> People are born with an innate biological bias towards C like syntax OVER ML syntax or even FP syntax.

Are there some studies that back that up? My wife took the How to Code course on edX, and the use of Racket was a non-issue. The use of it was quite literally transparent.

The issue is that humans react to something foreign with huge amounts of bias. So if a person has any introduction at all to some form of a C-type language, it seems to shut down all forms of curiosity, generally speaking. I honestly wonder if that has been studied.

I was first taught C, Python, and Java, and was totally uninterested in programming. Once I found functional-first and visual dataflow languages, I was hooked.

I'm generally curious what C and Python people think when they learn something like Prolog? Do they exclaim "this isn't hard enough?".


>Are there some studies that back that up? My wife took the How to Code course on edX, and the use of Racket was a non-issue. The use of it was quite literally transparent.

There are no studies. It's like asking for a study about how people tend to smile when they're happy. Do you really need a study for something so obvious? Do you go around the world thinking nothing is true or real unless someone spent money on some sort of quantitative analysis?

If you read the rest of my post you'll see that what I stated follows from logic. When we have a series of tasks we want written down, we automatically write it down via a list of imperative procedures. This is done automatically. We haven't taken any TODO-list-writing lessons or anything beforehand; we just do it procedurally, meaning we're born to do it this way. It's innate.

This is the obvious default. And thus as a result any programming language similar to that default will be more intuitive.

As for your wife, who's to say procedural programming is not easier? She may have found Racket easy but that doesn't mean she wouldn't find imperative languages far more intuitive.


> People are born with an innate biological bias towards C like syntax OVER ML syntax or even FP syntax.

I disagree with this, as most people do not use any of these languages. Excel is by far the most popular programming language, and it's a mostly-functional, reactive, lisp-style language. That's very far from C, and the closest we have to end-user programming. So if anything, the evidence says that people have an innate biological bias toward Excel-style programming; whereas we, the small fraction of a fraction of people who can program in C, are the weird ones.


Wrong. Excel is biased towards familiarity with mathematics. It just builds on our pre-existing familiarity with mathematical formulas. The intended usage is ALSO just for mathematical formula calculations. Excel is not typically used to execute step by step procedures across IO.

To analyze our innate biological biases you must remove all context of mathematics and programming and just frame it from the context of a typical activity:

What do you get when you ask someone to write down a series of tasks?

You get a list of imperative instructions. This happens without ANY additional learning or education.

For Excel you need to be taught Excel, and there's a prerequisite of already being familiar with basic mathematical language.

I mean honestly this is all obvious to me. I can come at it from another angle to help you see why it's so obvious.

What type of universe are humans designed to live in? We live in a universe that moves forward in time; with each incremental step forward in time come changes in the properties of things around us. State changes and mutations. Our brains are tuned to live in this type of state-changing universe, ALONG with being state-changing entities that are an intrinsic part of the universe itself.

Logically a programming style that imitates how the universe, and how our minds work will be MUCH more intuitive.

What sort of universe is functional programming? It's a place where time doesn't exist and state changes don't occur. Such a universe can't actually exist.

Think about it. You know about von Neumann machines, right? Can you build a lambda calculus machine where no imperative instruction ever occurs? No, it's impossible. In order to even simulate this universe we need access to the time domain to execute an evaluation step.

Theoretically, if you build some sort of adder that can add 50 values at the same time, you can have a 50-operation FP program evaled in one step. Practically speaking, the evaluation must happen in several steps because adders only add a couple of values at a time and then save the result to memory. We only know time doesn't exist in the FP world because these equations can be evaluated in different orders. That's why Haskell can do lazy evaluation.

Basically functional programming is a sort of clever fantasy universe we made up. We aren't biologically tuned to operate or even analyze such a universe. We have to take extra steps in imagination to make sense of it. It is NOT intuitive.


> What do you get when you ask someone to write down a series of tasks? you get a list of imperative instructions.

I think this is where we differ, and why everything is clear to you and not to me. You're taking this as axiomatic, but I think there's more to it than that.

Of course you get back an imperative list of instructions when you ask someone to write down a series of tasks. It's almost tautological. What you're missing is that not all programming is writing a list of instructions, because indeed not all programming is imperative. I'm sure you know this, but you've been conflating programming paradigms and syntaxes, so I want to make sure we're clear with the terminology. I can agree that the imperative paradigm is indeed very natural for many people to understand, but I don't think that means people are biologically predisposed to C-style syntax.

We can look at how young children learn how to program in order to get a feel for this. Seymour Papert famously studied this with his language Logo, which is a lisp-like language. Its turtle graphics module allows kids to give imperative instructions to a computer which cause it to draw shapes on the screen. But it's a far cry from c-style syntax, so I think the link between a paradigm being "natural" and a predisposition for any particular syntax is tenuous. Indeed many new programmers reach for Python over C, which while imperative, eschews much of the syntax that really gives C its style.

Moreover, it seems like neither the syntax nor the paradigm is what causes young learners to really grok Logo. In his book "Mindstorms", Papert demonstrates how he can use his system to get young students in elementary school to simulate systems of differential equations. Alan Kay does something similar in his Etoys environment, which itself isn't imperative. As an educator, I've never seen anyone get young students reliably do the same with C.

> This happens without ANY additional learning or education.

It does happen after some brain development. Much easier than coming up with a list of steps to execute is just to ask something else to give you the result you want, and allow it to fill in the details a.k.a. declarative programming. Children will do this far before they gain the ability to formulate a correct step-by-step program.

To test this, ask a child what they want for lunch? They will not give you a list of steps to complete to make the lunch:

  - go to the kitchen.
  - put bread on plate
  - put cheese on bread
  - put turkey on cheese
  - put bread on top
  - return to me with plate 
No. They will say "I want a sandwich with cheese and turkey", and then will expect a sandwich to be delivered that meets the specifications. This is the declarative paradigm, so if we're using the developmental timeline of children to determine to which programming paradigm they have a biological predisposition, it would then have to be declarative, not imperative.

> What type of universe are humans designed to live in? We live in a universe that moves forward in time, with each change in incremental step forward in time comes changes in the properties of things around us.

If our world were a programming language, it would be continuous, distributed, parallel, and asynchronous. It's like that movie: everything is happening everywhere all at once, which is pretty much the exact opposite of imperative "one thing at a time, then move to the next thing" programming. The reactive paradigm exhibited in Excel actually is closer to this reality than the imperative paradigm.

Note I'm not really arguing for the functional paradigm, I'm arguing for Excel as the most natural for people. I still think that's the case after our exchange, especially considering the points you raised.

> That's why haskell can do lazy evaluation.

Haskell can do lazy evaluation because it was explicitly designed by scholars to specifically study lazy evaluation. You can have a functional strictly evaluated language. It's not key to the paradigm.
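(For the curious, here's a minimal GHC sketch of that orthogonality: evaluation is lazy by default, but you can force strict evaluation whenever you want, so laziness isn't baked into the paradigm itself.)

  -- sketch: laziness by default, strictness on demand (GHC)
  import Control.Exception (evaluate)

  expensive :: Int
  expensive = sum [1 .. 50000000]    -- stand-in for any big computation

  main :: IO ()
  main = do
    let z = expensive                -- just builds a thunk; nothing runs yet
    putStrLn "got here immediately"  -- prints without paying for z
    _ <- evaluate z                  -- force z right now (strict evaluation)
    print z                          -- the result is already computed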

> Basically functional programming is a sort of clever fantasy universe we made up. We aren't biologically tuned to operate or even analyze such a universe.

Given what I wrote above about being continuous, distributed, and asynchronous, it would seem that imperative programming on discrete logic is also a clever fantasy universe.


>Haskell can do lazy evaluation because it was explicitly designed by scholars to specifically study lazy evaluation. You can have a functional strictly evaluated language. It's not key to the paradigm.

The point I made here is haskell CAN be made to do lazy evaluation. An imperative program CANNOT be made to do lazy evaluation.

    example:

    imperative:
    let mut x = 1
    let mut y = x + 2
    x = 3
    y = x + 2

    functional:
    x1 = 1
    y1 = x1 + 2
    x2 = 3
    y2 = x2 + 2
You will note that in the second example the order of the expressions does NOT matter, while in the former it absolutely matters. Changing the order of the first program can change the meaning of the program itself. It is not a "design decision"; these are distinct and intrinsic properties of the imperative and functional programming paradigms respectively.
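To make that order-independence concrete, here's a small Haskell sketch (the bindings in the where clause are deliberately scrambled, and the compiler doesn't care):

  -- data dependencies, not line order, determine evaluation
  result :: Int
  result = y1 + y2     -- result == 8
    where
      y2 = x2 + 2
      x2 = 3
      y1 = x1 + 2
      x1 = 1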

>If our world were a programming language, it would be continuous, distributed, parallel, and asynchronous. It's like that movie: everything is happening everywhere all at once, which is pretty much the exact opposite of imperative "one thing at a time, then move to the next thing" programming. The reactive paradigm exhibited in Excel actually is closer to this reality than the imperative paradigm.

So? It's still imperative. Everything is executed procedurally through temporal space. One thing happens after another. This is ORTHOGONAL to parallelism and async because each thread must be executed in sync. The concept of continuous space has been shown by physics to NOT be the case, but this is beside the point as it's just another orthogonal concept.

>But it's a far cry from c-style syntax, so I think the link between a paradigm being "natural" and a predisposition for any particular syntax is tenuous.

This is a bad example. All this proves is that people are capable of learning functional syntax. It does not say whether functional syntax is more natural than c-like syntax. That is the heart of the question: which one are we more biologically predisposed to? You put people in a situation where they have to learn functional syntax; of course most people can learn it, in the same way people can learn reading, writing and math even though we aren't biologically predisposed to it.

You need an experiment given to a person with no concept of either paradigm and see what choice they make. Like you said, it's "tautological," they make the imperative choice without any prior education or influence.

>To test this, ask a child what they want for lunch? They will not give you a list of steps to complete to make the lunch:

If you ask a child what they want for lunch they will give you the black box library definition of "lunch". if f(x) = x + 1, they do not tell you x + 1, instead they tell you f. You asked for f, they gave it to you.

If you asked the child "how do you make lunch" and they told you, "I want a cheese and turkey sandwich", well that would be the wrong answer. You asked for the definition of f, and they didn't give you the definition they gave you f.

In short let me put it this way. Your question was demanding a declarative answer. The child gave it to you because you didn't ask for anything else.

If you reword the question to ask for a temporal answer that flows across the time dimension, where each step is dependent on the previous step, the child will give you the answer in imperative form rather than functional. "How do you make lunch?"

Bending your series of instructions on making a sandwich into the do notation of monadic closures of haskell is really trippy, no adult thinks this way, let alone children.
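To spell out what I mean, here's a rough sketch (getBread, getCheese and putOnPlate are made-up IO actions, purely for illustration):

  -- hypothetical sandwich actions, just to show the desugaring
  getBread, getCheese :: IO String
  getBread  = pure "bread"
  getCheese = pure "cheese"

  putOnPlate :: (String, String) -> IO ()
  putOnPlate = print

  -- do notation reads like a list of steps...
  makeLunch :: IO ()
  makeLunch = do
    bread  <- getBread
    cheese <- getCheese
    putOnPlate (bread, cheese)

  -- ...but it desugars into nested closures chained with >>=
  makeLunch' :: IO ()
  makeLunch' =
    getBread >>= \bread ->
      getCheese >>= \cheese ->
        putOnPlate (bread, cheese)

Nobody describing how to make lunch thinks in terms of that second form.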

This is important to know. Because functional programming is essentially asking the question "How do you make the sandwich in a way that doesn't involve time" This is a question that does not have an intuitive answer. This is the real question that needs to be asked.

That is why FP is much less intuitive than imperative programming.


> An imperative program CANNOT be made to do lazy evaluation.

That's not really true, as the execution strategy is not necessarily connected to the programming paradigm. You can create a strict functional language or a lazy imperative language. These are orthogonal dimensions of the design space.

For example:

  z = expensive_calculation_take_ten_minutes();  // defer evaluation of z
  x = get_user_input();                          // strictly evaluate x
  y = some_other_thing(x + 3);                   // strictly evaluate y
  if y > 10 {
    print(z);                                    // lazily evaluate z, print(z) here
  } 
There's no reason at all this imperative program must be strictly evaluated. Evaluating z can be deferred until it's clear that it's needed in the print call.

> This is ORTHOGONAL to parallelism and async because each thread must be executed in sync.

It's not really orthogonal, it's a dual. Imperative code can be made to run in parallel by running it on multiple machines simultaneously. But because the imperative model clashes with asynchrony and parallelism, it's much harder to express these concepts in imperative code. It's much easier to express asynchrony in an asynchronous-first language, but in those languages it's harder to express an ordered sequence. Many of the problems I encounter with distributed async programming in imperative languages exist at the boundary between asynchrony and synchrony. The best solution we have today to write async code is to enter a special async runtime, which needs to exist in order to explicitly support the distributed/async/parallel semantics that the imperative paradigm cannot model well. This results in a phenomenon called "function coloring", where "async-colored" functions can only be used from other async code, essentially bifurcating programs into async and synchronous parts.

The notion of "callback hell" should make plain why imperative programming is not the most natural paradigm to model asynchronous communication. Callback hell exists because shoehorning the imperative paradigm into an asynchronous runtime is clunky at best.

> The concept of continuous space has been shown by physics to NOT be the case

But is that understood at an innate biological level? I think not. Human beings, and in particular students of programming, experience their world continuously in time and space. This is why they are surprised that 0.1 + 0.2 != 0.3 in most languages.

> It does not say whether functional syntax is more natural then c-like syntax.

To be clear, Logo is a multi-paradigm Lisp-like language. You can express imperative code cleanly in Logo, and that's how children use turtle graphics. The example is to show you that the c-style syntax is not at all connected to an ability to program at a young age without any concept of either paradigm. Insofar as you are claiming that the c-style syntax is what makes imperative programming biologically innate, the existence proof of Logo must cause you to be wrong at that point, as it has a Lisp-like syntax. If anything, my experience teaching students C shows me that the syntax hinders the learning process (e.g. = vs == and imperative c-style for loops are the source of very many bugs for new programmers).

> You asked for f, they gave it to you.

Right, and you asked for a list and they gave it to you.

> If you asked the child "how do you make lunch"

Throughout your posts, you are treating "here is how I make a sandwich" as programming, while you don't seem to consider "I want a sandwich" as programming. These are both programming. The former is imperative, the latter is declarative. If we want to understand whether a young child is capable of imperative programming, we ask them your question. If we want to understand if they are capable of declarative programming, we ask them my question. The question is: which question can they answer first? Child development science tells us it's the declarative program they will be able to express first.

> Your question was demanding a declarative answer. The child gave it to you because you didn't ask for anything else.

Right, indeed. But your question demanded an imperative answer, and that's what you got. My point is that developmentally, children will have the capacity to give you the declarative program long before they can formulate the imperative one. So does that mean we have a biological predisposition to declarative programming? Or maybe there's no such thing as a biological predisposition to any particular paradigm?

> If you reword the question to ask for a temporal answer that flows across the time dimension where each step is dependent on the previous step the child will give you the answer in imperative form rather then functional.

I'll note here that the only temporal notion offered by the imperative paradigm is the notion of an ordered sequence. There are far richer treatments of time in other paradigms, which include temporal operators specifying "as soon as", "before", "after", "until", "whenever", etc. These concepts cannot be expressed directly in imperative paradigms.

The notion of time in most imperative languages I know of is CPU-time, or instruction time. It has little to do with wall time, so appeals to how closely imperative programming adheres to a concept of time that's intuitive due to human experience fall short. Indeed, I often experience students frustrated by the inability to cleanly model these concepts in imperative languages.

> Bending your series of instructions on making a sandwich into the do notation of monadic closures of haskell is really trippy, no adult thinks this way, let alone children.

It's interesting that this is how you phrase it, because in my experience children do indeed naturally ask for sandwiches often before they can even describe how they're made. Have you ever seen a kid get mad that the jelly is on the bottom of the PB&J, only to placate them by turning it over? Such a child has no idea how the sandwich is made.

If that means they are using the notation of monadic closures of Haskell naturally, it would seem to cut against your argument that this notion is not natural.

> That is why FP is much less intuitive then imperative programming.

To be clear, I'm not saying it is more intuitive. I'm not even mounting a defense of functional programming here. I'm pushing back against the notion that c-style syntax is innately understood at a biological level. In your various posts here you seem to conflate syntax and semantics, which I don't understand because you seem experienced enough to know the difference. Or maybe I'm misunderstanding you, but I feel like you've offered a pretty full-throated argument, so perhaps I'm just not following. But I think we're talking past one another at this point.


>Right, and you asked for a list and they gave it to you.

No I didn't. I asked how do you make a sandwich. The answer doesn't have to be in list form. The child chooses to give me the answer in list form.

For your question, however, the only possible answer is in declarative form. There is no other choice. You structured the question in such a way that it demanded a singular answer: "What do you want for lunch?"

>I'll note here that the only temporal notion offered by the imperative paradigm is the notion of an ordered sequence. There are far richer treatments of time in other paradigms, which include temporal operators specifying "as soon as", "before", "after", "until", "whenever", etc. These concepts cannot be expressed directly in imperative paradigms.

Incorrect. The functional paradigm is a subset of the imperative paradigm. Imperative is simply a series of functional programs with mutation. All "richer" treatments are therefore part of the imperative style.

>The notion of time in most imperative languages I know of is CPU-time, or instruction time. It has little to do with wall time, so appeals to how close imperative programming adheres to an concept of time that's intuitive due to human experience falls short. Indeed, I often experience students frustrated by the inability to cleanly model these concepts in imperative languages.

There is a notion of time that doesn't have to be so technical. The notion is that one thing happened after another thing, or another thing happened before something else. This concept is expressed through change. Mutation. When you remove mutation then this concept of time disappears.

>It's interesting that this is how you phrase it, because in my experience children do indeed naturally ask for sandwiches often before they can even describe how they're made. Have you ever seen a kid get mad that the jelly is on the bottom of the PB&J, only to placate them by turning it over? Such a child has no idea how the sandwich is made.

No, this simply means they know about the sandwich as a blackbox abstraction. This has nothing to do with monadic IO. If a child knew how to make a sandwich, he for sure wouldn't know how to express the construction process as monadic do notation.

>To be clear, I'm not saying it is more intuitive. I'm not even mounting a defense of functional programming here. I'm pushing back against the notion that c-style syntax is innately understood at a biological level. In your various posts here you seem to conflate syntax and semantics, which I don't understand because you seem experienced enough to know the difference. Or maybe I'm misunderstanding you, but I feel like you've offered a pretty full-throated argument, so perhaps I'm just not following. But I think we're talking past one another at this point.

I'm saying that c-style syntax is innately understood at the biological level. C-syntax is equivalent to procedural syntax or semantics or whatever you want to call it.

> There's no reason at all this imperative program must be strictly evaluated. Evaluating z can be deferred until it's clear that it's needed in the print call.

The imperative part of the program must be strictly evaluated. You only lazily evaluated the declarative part of your program where the values are immutable. x must be evaluated before y and before print because those values mutate. They cannot be done in different order.

All the immutable parts of your program can be evaluated out of order. But the mutable parts cannot be.

>To be clear, Logo is an multi-paradigm Lisp-like language.

I will have to see this language to know what you're talking about. But lisp is usually taught functional first. Children searching for documentation have to dig deeper to find the c like syntax. By nature of documentation and usage, children are more likely to bias towards functional over imperative simply because the bias of the manual and users are also heading in the same direction.

>But is that understood at an innate biological level? I think not. Human beings, and in particular students of programming, experience their world continuously in time and space. This is why they are surprised that 0.1 + 0.2 != 0.3 in most languages.

Yes. A student readily understands 1, before understanding 1.0000111111. There's a dual interpretation arising from two separate modules in the brain. There's a part of the brain that's visual and there's another part that's symbolic and handles language.

The language aspect of our brain is by default discrete, the visual part is continuous. Our interpretation of symbols and language flowing through time is not continuous at all. But our interpretation of movement through space is.

>Throughout your posts, you are treating "here is how I make a sandwich" as programming, while you don't seem to consider "I want a sandwich" as programming. These are both programming. The former is imperative, the later is declarative.

No, I'm saying "here is how I make a sandwich" can be EITHER declarative OR imperative. I am saying "I want a sandwich" is declarative programming with no way to convert to imperative.


> No I didn't. I asked how do you make a sandwich. The answer doesn't have to be in list form.

This is what you argued originally is proof of your claim:

  Let's say I have several tasks I need you to do. If I wanted to present those tasks to you in written form, what would those tasks look like? They would look like a series of procedures. Any human would present imperative instructions to you WITHOUT being taught anything beforehand. Nobody 
This is a very strong claim. "Any human" includes even the smallest children. "Nobody" precludes any one. All we have to do to prove your claim false is to find a single human who won't do as you said, which is trivial. I've already identified one: small humans whose brains haven't developed enough to comprehend ordered instructions.

Moreover, it's leading. You start with "I have several tasks" (which sounds like a list), and you want them written down (okay, so you want someone to write the list you just told them).

> Incorrect. the functional paradigm is a subset of imperative paradigm.

I must state again that I'm not defending or referencing functional programming in any of my replies. I started with a reference to Excel, and have also mentioned Etoys and Logo. None of these are considered strictly functional. When I mentioned temporal programming, that's another paradigm entirely separate from functional and imperative.

> The notion is that one thing happened after another thing, or another thing happened before something else. This concept is expressed through change.

This is a very indirect way of expressing time. The only thing that needs to mutate for time to pass is the timer. In the imperative model, the world is stopped until the next statement is executed. This is to match the paradigm to the computing hardware, not to the real world as you keep insisting.

> I'm saying that c-style syntax is innately understood at the biological level.

Okay, if you say so. You literally don't have a single study backing this up. The only argument you've offered is "it's obvious", which I assure you, it's not. If it's only obvious to you, then maybe all you're arguing is that you innately understand c-style syntax. Which, sure. But that isn't generalizable to "every human", as you are literally claiming.

Can you answer me this: have you ever taught C to anyone, like a 6-year-old? Can you honestly tell me it was the case that they were presented with C, and they immediately understood the syntax? I remember learning C at 6 and it wasn't a smooth process. Then again, maybe that just indicates I'm not a human.

> All the immutable parts of your program can be evaluated out of order.

That doesn't make it not imperative. If you think your imperative code is being executed in the order it's written, you should seriously investigate your compiled code. Optimizing compilers make all kind of decisions to reorder statements when they can.

> I will have to see this language to know what you're talking about. But lisp is usually taught functional first. Children searching for documentation have to dig deeper to find the c like syntax.

The first program all students write in Logo is to draw a square:

  pd
  repeat 4[forward 100 right 90]
  pu
This doesn't look anything like C. It's completely imperative. It's understood and used by 6 year olds. These same 6 year olds are lost when it comes to C. There is a mountain of research to support this, which I've already referenced for you. That you don't know about these languages really cuts against your argument that c-style syntax is biologically and innately understood by all humans, as it seems that you're arguing from ignorance.

> Yes. A student readily understands 1, before understanding 1.0000111111.

I'm not disagreeing with that. I'm saying that children understand their world to be continuous even before they understand the number 1. Students then learn about integers, and then about the continuous nature of the number line. That nature is intuitive precisely because they understand the concept from physically interacting with the world. Seymour Papert discusses this idea in Mindstorms.

> No I'm saying here is "how I make a sandwich" can be EITHER declarative OR imperative.

In what way? Declarative programming is all about eliding the how and leaving it up to the compiler. If you answer this question declaratively, it won't contain instructions to make a sandwich. Such details are left to the implementation.


>This is a very strong claim. "Any human" includes even the smallest children. "Nobody" precludes any one. All we have to do to prove your claim false is to find a single human who won't do as you said, which is trivial. I've already identified one: small humans whose brains haven't developed enough to comprehend ordered instructions.

It is a strong claim and I stand by it. In your mind you feel as if you identified a counter example. I plainly already told you that it is NOT the case.

First off lisp like languages are by default geared toward an FP style. Documentation and syntax makes it easier to do the FP style over imperative. The language you chose has clear bias towards the FP style.

Additionally, nothing stops people from doing Python in the FP style. Python is very amenable to that. One could make the argument, according to your flawed logic, that with list comprehensions, reduce and recursion one should, if your theory is correct, automatically do FP when using the Python language. (I didn't choose this example earlier because it suffers from the same problem as Logo: there is a clear bias towards imperative that's intrinsic to the language.)

But this doesn't occur. People choose the paradigm that fits them better. Imperative.

Those kids may understand FP programming. They may be able to learn Logo. But that doesn't mean that the imperative style wouldn't have been easier and more natural.

>Moreover, it's leading. You start with "I have several tasks" (which sounds like a list), and you want them written down (okay, so you want someone to write the list you just told them).

This is you illustrating my point. I have 5 things, I have 2 tasks, I have several things, I have several tasks. None of these things explicitly point to procedures that have to be in order. It's simply your natural bias. If I say write something down but I don't say write it down in a single sentence or write it down as a numbered list, your bias automatically inserts imperatives in there. The language itself is neutral, but your natural tendency to insert additional meaning into the sentence is proof of my point.

>I must state again that I'm not defending or referencing functional programming in any of my replies. I started with a reference to Excel, and have also mentioned Etoys and Logo. None of these are considered strictly functional. When I mentioned temporal programming, that's another paradigm entirely separate from functional and imperative.

Understood. My point here is just saying that the additional "features" you mentioned aren't exclusive to the declarative paradigm because the declarative paradigm is actually a restrictive style. Imperative programming has more features and more freedom than functional programming, but that doesn't necessarily make it better.

>This is a very indirect way of expressing time. The only thing that needs to mutate for time to pass is the timer. In the imperative model, the world is stopped until the next statement is executed. This is to match the paradigm to the computing hardware, not to the real world as you keep insisting.

No. This is the most natural way. For most of human civilization mathematics did not exist. The concept of time as a symbolic measurement did not exist. People measured time through change. When you take symbols and language and strip it down to our base perceptions, time is mutation, time is change. It is the most primitive and fundamental measurement of time.

Again your bias with what you know (timers) influences your view here.

>Okay, if you say so. You literally don't have a single study backing this up. The only argument you've offered is "it's obvious", which I assure you, it's not. If it's only obvious to you, then maybe all you're arguing is that you innately understand c-style syntax. Which, sure. But that isn't generalizable to "every human", as you are literally claiming.

Why do we need studies for everything? It's like I can't have thoughts or strong opinions on anything if there's not some scientific study backing it up? I strongly believe that you are human. Do I need a study to prove such an obvious point? Science is just the definitive, data-driven way of arriving at an accurate conclusion. The downfall is that it's slow and expensive. You can arrive at similar conclusions using less accurate pathways like logic and common sense.

> There is a mountain of research to support this, I've already referenced for you.

Links? From what you described though, the "research" doesn't confirm or deny anything. Like I said, it only shows that children can learn functional. Doesn't show that functional is more natural than imperative.

>This doesn't look anything like C. It's completely imperative.

Do you mean functional?

>In what way? Declarative programming is all about eliding the how and leaving it up to the compiler. If you answer this question declaratively, it won't contain instructions to make a sandwich. Such details are left to the implementation.

No it can. Simply treat the time dimension as a spatial dimension. That's how FP languages do it. The "how" is simply a series of temporal tasks done in a certain order. If you treat time as physical geometry then you simply "declare" the whole procedure in one shot.

>I'm not disagreeing with that. I'm saying that children understand their world to be continuous even before they understand the number 1.

No, this is not necessarily true. The brain is made up of multiple modules that understand things in different ways at the same time. The language module in your brain understands things discretely by default, the spatial part of your brain understands things continuously. There is no "singular" form of understanding these concepts where one concept is understood before the other.


> It is a strong claim and I stand by it. In your mind you feel as if you identified a counter example. I plainly already told you that it is NOT the case.

Strong claims require strong proof, and can be summarily dismissed if said proof isn't provided after a discussion spanning thousands of words. Sorry, but I don't think there's much more to say here until you've familiarized yourself with the literature. You've only just heard of Logo from me, which is decades old and is exactly the kind of programming language that touches on what you're trying to say here. The Logo team backs up their work with a system they implemented to explore how children learn to program, and then studied this system extensively with actual children. Their findings don't support what you're trying to say here. Your entire argument is based on "it just makes sense to me", which is fine if it makes sense to you logically, but that's not what the literature shows. Sometimes reality is counterintuitive. As proof of your assertion, you state that literally any single human being would give you an answer that comports with your point of view. I find this unconvincing when held up against the body of literature I've referenced here, which studies exactly the kind of person you're looking for to prove your point: people without any preconceived notions about programming. The literature emphatically shows that they are not innately and biologically predisposed to prefer c-styles syntax. It's just flat not true according to the literature.

I'm surprised you're so sure of yourself seeing as that you haven't read anything from this body of work. Maybe I'm not the one going off my own preconceived notions? Because I'm here citing sources, and all you can say is "it just makes sense to me, you're wrong because you're biased". When asked for any sources to back up your assertion, you declined to provide one, and stated you didn’t even need anything to back you up. This argument is unconvincing in any context, but especially when there's actual research contradicting you.

> First off lisp like languages are by default geared toward an FP style.

Let me stop you right there. Logo is not a functional language; it supports programming in multiple paradigms including imperative. You just learned about it yesterday, so I don't know how you can keep asserting this. The syntax is lisp-like, but it's a multiparadigm language. Earlier I claimed that I thought perhaps you knew the difference between syntax and semantics, but now I'm not so sure, as you keep insisting that `lisp syntax == functional language`.

> links? From what you described though, the "research" doesn't confirm or deny anything.

It shows young children can write very sophisticated programs when handed Logo, but are at a complete loss when given C. It's pretty hard to square that research against your assertion that humans are biologically predisposed to c-style syntax. If you were right, they would take to C just as naturally as they take to Logo. Research has found that's not the case.

I've given you all the information you need to find the sources I referenced. You can find "Mindstorms" at your local bookstore or library. The author is Seymour Papert. You can find out about Etoys from Alan Kay's body of work. You can find the Logo language online and code in it yourself to see it's multiparadigm and not strictly functional. It's an example of imperative programming used by kids that is devoid of the c-style syntax. Logo is taught to kids because they take to it better than C. C existed at the time Logo was created, and the designers of the language specifically made it to be optimally applicable to the learning style of children. The key insight of Logo (particularly the turtle graphics module) is to frame the program as kinematic actions from the point of view of the child. In this sense you are right that children can understand imperative programming as early as 1st grade. But it severely cuts against your argument that c-style syntax is what is innately familiar to humans at a biological level. It's this notion against which I push in these comments.

> Like I said it only shows that children can learn functional. Doesn't show that functional is more natural then imperative.

That you still call Logo "functional" tells me you haven't read any of the literature, nor used the language, so therefore you don't actually know what it shows. Insofar as Logo offers functional capabilities, they aren't presented to children in the highly imperative turtle graphics module.

And again, I'm not arguing functional is more natural than imperative.

> Do you mean functional?

No, the code I wrote is completely imperative. In C it would look like this:

  pd();
  for (int i = 0; i < 4; i++) {
   forward(100);
   right(90);
  }
  pu();
Children have been shown to regularly take to the Logo program, while the equivalent C program is inscrutable to them. Again, see the body of work produced by Seymour Papert and the Logo team.

Anyway, that's all from me for this thread. Feel free to reply but I'm done here. Cheers!


Sure, I'll agree with that. Imperative code is simply more natural for people.


If you change the order of language introduction, people's preferred languages and paradigms change, based on my experience TA'ing in undergrad. Our program started off with python, but immediately introduced lisp and prolog right after. The output was CS grads who were more varied in their language choices.


I honestly think that ML is too consistent and regular.

Natural languages inevitably build in a lot of redundancy that can seem irrational but that serves a very important purpose in decreasing confusion and increasing the rate of comprehension. The redundancy makes the language harder to describe, but it reduces cognitive load once you've learned it because it's easier to distinguish between different elements, even in less-than-ideal conditions.

I think this is where both Lisp and ML fall short and why they consistently lose to the more convoluted syntaxes: what seems like a flaw is actually an important feature that makes comprehension easier. Sure, you can go overboard (looking at you, Perl), but the irregularities present in C-style languages are valuable extra information channels that increase cognitive bandwidth.


> People seem to love Python syntax.

I sure don't. I'll take explicitly-delimited blocks over having to slap a ruler to my screen to know what scope I'm in any day. It's a prime example of why form over function is a horrible tradeoff.

I use Python in spite of how much I hate its syntax (and myriad other aspects of it) because it's often the path of least resistance.

> ML syntax is just more regular, consistent, and typed.

My sample size is "Elm and Haskell", and of that sample size, my impression is the precise opposite. I encountered enough cases of whitespace that shouldn't have mattered but did anyway that it put me entirely off both languages.

Lisp? Tcl? Forth? Now those are regular and consistent (though admittedly rarely typed).


Neither Elm nor Haskell is really an ML dialect, though. They're more ML-adjacent. I honestly rarely have any issue with indentation in F# that isn't immediately fixable by some editor feedback. Even thinking about it, I can't think of any issues with consistency that I have had at all.

I think Lisp in the flavor of Common Lisp isn't so regular or consistent, but Scheme and Forth certainly are. I basically consider ML as a typed, indentation-sensitive Scheme, at least the way I think and program in them both.


Editor indentation guides have been a thing for a decade or two.


That's cold comfort when I need to make some quick fix in nano or vi on some remote server somewhere.


A practice to avoid. Cattle, not pets.

Not to mention, well factored functions/methods shouldn't be so large they are hard to manage. Never had trouble with terminal editors either, but I keep things simple.


If only all of us were so fortunate to live with you in such a utopia where servers are perfectly fungible and automation scripts are written by competent software engineers :)


Penny wise and pound foolish, your org’s decision.

Currently on a project that had 12 years of tech debt. Have taken out about 60% of the garbage in three years. Indentation never a significant issue. Everything else has been.


> People seem to love Python syntax.

Python's syntax is just yet another C-like syntax that goes back to ALGOL. People love it because it's familiar, it's not doing anything special.


Yes, I find this point unconvincing too. I do not think Elm is intimidating; lots of FE devs jumped on its bandwagon and it's considered easy to learn. If it's true that it is intimidating, the syntax seems like the last thing people would find intimidating about Elm, or Haskell. The intimidating part would be the programming model.

Reason basically started with the premise that syntax is a problem to solve, and we saw how successful that turned out.


It's very easy to see why. People are naturally drawn to procedures rather than formulas.

When given a task of several things to do, that task is naturally given and expected in a list of procedures. Nothing needs to be explained here. People intuit this naturally. A list of things to do is grammar we all universally understand.

But ML syntax and FP?

Who composes all the procedures into a single run-on sentence? And to top it all off it's some sort of functional grammar that's highly divergent from a list of procedures? That's essentially FP and ML.

This initial barrier is what prevents people from adopting FP.

People who say it's because C like syntax is the default syntax taught in schools are missing the point. C like syntax is MORE intuitive and THAT is why it is taught in schools.

That is not to say C like syntax is better than FP. I prefer FP, but I'd be lying to myself if I said it was naturally intuitive.


Luckily, the vast majority of FP languages operate on lists of procedures combined together with the `.` operator. Similar to how C composes things with the `;` and many pythonic languages compose with the newline operator.

Realistically, most FP programmers program ML-like languages (Haskell and Elm included) in an imperative manner. It's an extremely straightforward translation, and in Haskell it's basically syntactically identical due to do notation.
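For instance, here's a sketch of a typical Haskell main; it reads as a plain list of statements even though it all desugars to function application underneath:

  -- reads top to bottom like any imperative script
  main :: IO ()
  main = do
    putStrLn "What's your name?"
    name <- getLine
    putStrLn ("Hello, " ++ name)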


I'm not talking about tricky stuff like do-notation monads.

I'm talking about FP at its core.

The '.' compose operator is not procedures. In haskell it does something completely different.

>Realistically, most FP programmers program ML-like languages (Haskell and Elm included) in an imperative manner. It's an extremely straightforward translation, and in Haskell it's basically syntactically identical due to do notation.

No, Elm doesn't do any tricky stuff, so Elm is not imperative. Haskell can look imperative with do notation. That's about it.

I mean you can put elements of your equation on different lines but it's still not imperative.

   1 +
   2 +
   3
I mean ..yeah... if you want to call that (1 + 2 + 3) imperative, be my guest. But obviously that's not what I'm talking about.


Function composition is the most basic thing in FP languages and is equivalent in spirit and nature to the semicolon or newline operators of c and python like languages. Not sure why you chose addition as an example. Addition is order independent in most languages.

In elm (not 100% familiar with the operators), ordered compute can be expressed as follows:

a >> b >> c

Which says do a first, then b then c. Due to how data deps work in an FP language, a is always done first here, b next and c last.

Very imperative. No do notation.

In fact all the major Haskell monad instances are variations of the (.) operator, including IO.

So... If ordered compute extending all the way to IO is not what you're talking about... What are you?


Composing things with the dot doesn't execute anything.

It creates a new function. In Haskell it's actually composed backwards: when you apply the composed function, the first function to execute is the one on the right.

Additionally each composition must have inputs and outputs that match the neighboring functions.
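To illustrate the direction issue, a minimal Haskell sketch ((>>>) from Control.Arrow is the left-to-right variant):

  import Control.Arrow ((>>>))

  addOne, double :: Int -> Int
  addOne = (+ 1)
  double = (* 2)

  -- (.) composes right to left: double runs first, then addOne
  rightToLeft :: Int -> Int
  rightToLeft = addOne . double    -- rightToLeft 3 == 7

  -- (>>>) composes left to right: addOne runs first, then double
  leftToRight :: Int -> Int
  leftToRight = addOne >>> double  -- leftToRight 3 == 8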

Nobody thinks this way when writing procedures. Each instruction is independent of the other. Composition is about building a pipeline, very very not imperative.

Do notation in Haskell is the closest it gets to imperative. It is not composition. It's just a series of endlessly nesting closures, which technically is even harder to reason about. Do notation unravels this in ways that make it hard to understand what's truly going on.

The point is this: Haskell and Elm are not imperative. At times they can imitate imperativeness, but dealing with these languages fully involves thinking differently, in ways that are unnatural. No amount of bending and breaking is going to turn these languages into something you can call imperative.


Yeah, and composing things with a semicolon creates a new C program.

Haskell is an expression language and a runtime environment. The expression language is more powerful than C's, but C et al. are also expression languages plus environments, just not typically talked about that way, because the expression language is quite simple.

Imperative "do this, then that" is exactly function composition where the pipeline is the state. In other words, imperative languages provide an implicit function. This is most obviously seen with stack-based concatenative languages.
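A small sketch of that idea in Haskell (push and addTop are made-up helpers; the "program" is just composition over an explicit stack):

  type Stack = [Int]

  push :: Int -> Stack -> Stack
  push x s = x : s

  addTop :: Stack -> Stack
  addTop (a:b:rest) = (a + b) : rest
  addTop s          = s

  -- the imperative-looking "push 1; push 2; add", as a composed pipeline
  program :: Stack -> Stack
  program = addTop . push 2 . push 1   -- program [] == [3]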


ML Syntax seems to be made for machines, not for the average human mind. Parsing language that is 'too unnatural' is a bit of a strain when compared to something that looks like average grammar if you squint at it, so it doesn't surprise me at all.

It's also a bit in the realm of scientific/academic programming vs. real-world bulk programming where it's mostly just CRUD, mapping and business rules. It's all still important if that's how money is made, but as far as I can tell, most programming isn't really all that involved with 'quality languages' and more about feature output from a backlog. If you can do that for cheap with a fresh can of Python developers and some extra EC2 instances on AWS, that's an easy choice vs. someone who might want to do it in Lisp instead for example.


I learned OCaml in college and I would not liken it to Python at all. Python is very easy to grasp since it's basically pseudocode, but OCaml, for example, is based entirely on recursive constructs (yes, imperative versions exist but only as a last resort). It's an entirely different way of thinking. The syntax therefore also looks unfamiliar, coming from `for` loops in Python.


Whose pseudocode? I have not once written pseudocode that looks like Python, though apparently this is odd. In high school pseudocode was BASIC with structured control flow.

I tend towards pseudo-SML for scribbling things; once got in trouble for using "language specific constructs" like map() for university work. They weren't a fan of APL for adjacency matrix munging either, though that's how I went through that part of the course. In either case pseudocode is just ignoring the unpretty parts of any language, and it can hardly be said any language is more like pseudocode.


> I have not once written pseudocode that looks like Python, though apparently this is odd.

Well, looks like you're an outlier. Most people I know who write pseudocode don't use filters, folds, maps or APL like constructs either, even though I totally would when writing the actual code. They write it in a Python-like format with for-loops and if-statements.


F# can be written nearly identically to Python if one so desires.


It's a matter of personal preference, and there's a lot more people who are familiar with C-style syntax. A good amount of those people also dislike Python, in fact.


> Reason and Elm died off

I'm still quite sad that Reason died. The syntax was much more comfortable and familiar than OCaml's while still delivering a ton of power. Have an optimizing compiler run on your JS was actually quite helpful. As a bonus you could have your backend running either Node or OCaml depending on your needs (ecosystem vs perf).

Was really unfortunate when the drama around Reason the language vs the Bucklescript compiler blew up and resulted in ReScript (a language that appealed to approximately no one in the existing user base).


Reason didn't exactly 'die', the toolchain that most people used to call 'Reason' just rebranded as ReScript. And no, you could not have your backend running either Node or native OCaml (I assume you mean binary executable), no one ever made a viable library or framework to do that. Sure, it was theoretically possible if you did a ton of work, but no one did that, because it would have been a huge amount of work (seriously, a lot).


Hi Yawaramin, I remember you fondly from the discord and hope you're well.

> Sure, it was theoretically possible if you did a ton of work, but no one did that, because it would have been a huge amount of work (seriously, a lot).

I think I communicated unclearly. It sounds like you read my statement as "run the exact same code in node or OCaml" which I agree would have been very hard. I had a much simpler multi-service setup where I could share some business logic.

Basically I had two services with the code unique to them stored in different directories, /ocaml-server and /node-server respectively. Both called into a common /lib directory. /ocaml-server had some very hot loops in it, /node-server had more standard business-y code that hooked into various npm packages.

> 'Reason' just rebranded as ReScript

Between the breaking changes and the general change in development philosophy (not really caring about OCaml server side) I felt it was a completely different language. Certainly switching to the ReScript compiler for my project would have required nearly a complete rewrite.


> It sounds like you read my statement as "run the exact same code in node or OCaml" which I agree would have been very hard.

Hello! Indeed, I did misunderstand you. I agree that it was possible to share some parts of the code between Reason's JS target with BuckleScript, and native target with the stock OCaml compiler. I think a pretty reasonable number of people did that. Actually, it's still possible to this day even with ReScript e.g. https://github.com/aantron/dream/tree/master/example/w-fulls...

> Between the breaking changes and the general change in development philosophy...switching to the ReScript compiler for my project would have required nearly a complete rewrite.

There were perhaps a couple of minor breaking changes but can you explain why it would have required a near complete rewrite? I wasn't aware of anything major like that. ReScript even supported and as far as I know, to this day continues to support the old Reason syntax.


I thought ReScript was mainly the result of renaming things for clarity (between Reason and BuckleScript there seemed to be a lot of confusion for new people)?

It seems to be still alive (10.1 just released last week... I keep meaning to try it, but haven't gotten around to it yet...)

https://rescript-lang.org/blog/release-10-1


Yes it really was nice. I thought Reason could give Rust a run for its money as something similarly ML-like but without the manual memory management via the borrow checker. But sadly the Reason and OCaml ecosystem is still very much fragmented, there is nothing like what cargo does (entirely, not piece-wise) in the OCaml space.


If syntax matters, JSX/TSX will forever reign supreme over DSLs like Angular's templating syntax and Svelte. Angular didn't even start shipping intellisense until years after they invented their markup, and it wasn't all that good: it would crash and use an appreciable amount of system resources. Language and library authors need to learn to meet people where they're at if they want actual market share. Some amount of change is good, but too much is a non-starter, unless your concepts can gracefully interop with other tooling developers already use.


Agreed, that's why I don't like using templating languages, TSX is still too good to stop using.


Typescript won because it doesn't ask developers to write better code, only that they write more code. As you pointed out, its syntax is familiar enough to be non-threatening to existing devs, and its type system is completely optional, insofar as I've worked with clients whose devs proudly proclaim their project is typescript yet they have _no_ types defined. Not that the type system is good anyway, and I'm not even talking about `any`. That's why it is so frustrating to me. Yes, we need a strongly typed language, but typescript's type system is just disappointing, so of course it's popular.

Typescript does make the devs _feel_ like it’s secure and that’s more important. Marketing works.


Why is it disappointing? TypeScript's type system, via pseudo-dependent-types, is much more powerful than Elm's, Rust's, Haskell's etc. The only problem is that it's not sound, but JS itself is not sound, so TS can never be.


> JS itself is not sound, so TS can never be.

What?


The type system is not sound… that is there are things which the type checker cannot prove.

Not that the languages themselves are unsound.


1. That's a horrible description of unsoundness; you're making it sound like incompleteness!

2. What I meant to ask was: what does satvikpendem mean by "JS is unsound"? It's a dynamically typed language, so they can't be talking about soundness in the type system...


I did a bit of a writeup on the gaps in soundness for TypeScript in the docs, https://www.typescriptlang.org/play?strictFunctionTypes=fals...


I should clarify, the JS type system is unsound, and because JS is valid TS, TS is by extension also unsound. Not sure why you'd think a dynamically typed language cannot be unsound, it still has types, they're just not static.


Define soundness. Here's something close to the definitions that I've seen:

> The central result we wish to have for a given type-system is called soundness. It says this. Suppose we are given an expression (or program) e. We type-check it and conclude that its type is t. When we run e, let us say we obtain the value v. Then v will also have type t. - https://papl.cs.brown.edu/2014/safety-soundness.html

This only makes sense in the context of static types afaict, because you do not "typecheck an expression" in a dynamically typed language.


Types can be checked at runtime too, not just compile time. A language is sound if all types, even at runtime, are what they say they are. Zod for TypeScript is one example of checking this at runtime, although it's not fully safe either.
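
For anyone who hasn't used it, the usual Zod pattern looks something like this (a sketch; the schema and names are made up):

    import { z } from "zod";

    // Describe the expected shape once...
    const User = z.object({ id: z.number(), name: z.string() });
    type User = z.infer<typeof User>;

    // ...then check unknown runtime values against it at the boundary.
    function handleResponse(json: unknown): User {
      return User.parse(json); // throws a ZodError if the data doesn't match
    }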


Soundness is not decidability. The reason TS is unsound is not directly because it's undecidable, as any Turing complete system is undecidable, and the TS type system is Turing complete.


Typescript exists so you can press ‘.’ and find out what the npm library you just imported can do.

Types are more important/useful for library code than business/presentation logic. Typescript lets you hire devs who can't read documentation or understand type systems and have them write javascript that stitches together enough imports to make something work.


Even if you aren't using types in first-party code, you're getting autocomplete and inline docs for all the third-party code, including the standard DOM APIs, or NodeJS, or whatever types come with your NPM modules. There's value in having that even if you live in a mostly `any`-typed codebase (which is likely more common than anyone's letting on).


And just to add, the most durable thing that Elm has contributed to the frontend world is TEA, The Elm Architecture, upon which most reactive libraries and frameworks now sit, such as React, Flutter, Vue, Angular etc; UI is a function of State.
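
For anyone who hasn't seen it, the core TEA shape fits in a few lines; here's a rough sketch in TypeScript (illustrative names only, not real Elm):

    // The whole app state lives in one immutable Model.
    type Model = { count: number };

    // Everything that can happen is described as a Msg...
    type Msg = { kind: "Increment" } | { kind: "Decrement" };

    // ...and a pure update function produces the next Model.
    function update(msg: Msg, model: Model): Model {
      switch (msg.kind) {
        case "Increment": return { count: model.count + 1 };
        case "Decrement": return { count: model.count - 1 };
      }
    }

    // "UI is a function of State": the view only renders the Model and
    // emits Msgs through dispatch; the runtime loops the three together.
    function view(model: Model, dispatch: (msg: Msg) => void): string {
      return `count = ${model.count}`; // a real view would wire dispatch to buttons
    }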


Didn't both of them start around the same time? React was released close to a year later, but it had reportedly been running internally at Facebook for a couple of years by then.

It’s also a relatively modest extension of decades of prior art so I think it might be more accurate to treat it as a formalization of that practice rather than implying it was foundational.


Perhaps it's a case of convergent evolution, since React similarly has a strong functional influence (immutable tree nodes in the VDOM).


Definitely - it’s always tricky tracing the history of these things since these ideas are coming out of the same larger pool of experience and convention, and people are understandably prone to identifying an idea with the first place they encountered it, lacking visibility into other pockets of the primordial soup.

Looking at the language used to describe it on https://guide.elm-lang.org/architecture/ I’m struck by the resemblance to the message-passing systems which were common in the 90s and 80s due to the influence Smalltalk had in the 70s.


Seems like both of them are inspired by Functional Reactive Programming, so yeah, hard to say Elm was foundational


Is that the case? I thought Bonér's Reactive Manifesto came out around the same time and was based on early Scala work.


Typescript won over the people who mainly wanted static types, I think, but not the functional programmers.

I think people preferring FP on the frontend are using ClojureScript, PureScript and ReasonML or ReScript now. Eg here's Dark's interesting story going to ReasonML via automatic translation: https://blog.darklang.com/philip2-an-elm-to-reasonml-compile...

(ClojureScript ecosystem was much bigger than Elm to start with in any case I think)


Typescript didn't win anything, and its type system doesn't deliver anywhere close to Elm's value.

Elm was simply too risky given how quickly the maintainers fenced off native access.


It won user market-share which is what OP's post is about. TS is not "dead" like Elm is, quite the opposite.


They were never competing with each other and serve completely different use cases.


Are both not used for frontend engineering by compiling to Javascript?


Like what?


One is an imperative gradual opt-in superset of JavaScript, the other is a strictly typed functional programming language with a fairly opinionated runtime.


This defines what they are, but not what use cases they serve. From what I see of the comments and understanding of both TS and Elm, they have use case overlaps.


A hammer and an axe have use case overlaps too. Just because you can build front end apps with both doesn't mean that they serve similar use cases. Elm is more of a framework.


Typescript definitely won something. Just look at the current usage of both languages.


And elm delivers what? Some people writing some apps that they claimed to have less bugs and are easier to maintain? Who uses these apps? Who can tell? It’s all hearsay.


I maintain a 50,000 line elm app. It's not the most used app in the world - has between 50-100 users. But it has had a sum total of 2 bugs since launch - both logic bugs in some complex financial projections.

Contrast with an elixir app of similar size that I also maintain. I fix on average 1-2 bugs per week, almost all of them are things that the elm compiler would have prevented like calling a function that was removed or renamed last month or not handling a case where a database request returned 0 records instead of 1.

Honestly though the biggest draw for me is the refactoring experience. For various reasons I had to do a major refactor of both code-bases. Not adding new features or anything, just restructuring the code and module layout to line up with the way the code actually functioned. Paying the tech debt essentially. I'd say both refactors were of roughly similar complexity, touching probably 75% of the files in each project.

The elm refactor took about 5 hours and worked as soon as I fixed all the compiler errors. I never had to fix a bug from that refactor - it just worked. And the code was way easier to understand after paying all that tech debt back.

The elixir refactor took 4 days, full time. 25ish hours. It also required me to write about 150 tests that weren't there before because the compiler wasn't able to point out code that would always fail at runtime. After deployment I had to fix a handful of bugs that still made it into production, despite how careful I was to not break anything.

Idk, being able to completely restructure the code without worrying about breaking anything really appeals to me for some reason.


Your argument is more about static versus dynamic typing rather than anything Elm specific, as you can get all of that with TypeScript too, but without opting into Elm's entire framework. One reason why, in my opinion, TypeScript won, as I mentioned above, is because it lets you do what you need to do without arbitrary restrictions from the compiler authors.


My experience has been due to the type system, I agree. (Also the compiler-enforced semver.) I've never had the opportunity to use TypeScript, so maybe it does offer similar benefits to Elm in that area.

That's the extent to which my personal experience can push things as far as active production applications go, but it seems to me there's something a bit more to Elm beyond just the type system. Haskell and PureScript both have static typing, but I've had runtime errors in both of those languages. Not as often as in JS or C or even Rust, but they do happen. In Haskell because the 'error' function exists and so people use it. In PureScript because it has its version of Elm's native modules, permitting arbitrary functions to be implemented in JS. That JS can (and does) produce runtime exceptions.

It seems to me that the promise from TS, Haskell, PureScript, etc. is something like "with this language/type system runtime errors will rarely happen" whereas the promise from Elm is "with this language/type system runtime errors can never happen."

I think that transition from 'rarely' to 'never' has been a big part of the Elm project for a long time. It's been on their homepage for as long as I can remember. At least prior to the big 0.18 native modules controversy.

If the idea is "no runtime exceptions can ever happen" with the implicit addendum "no matter what you do" native modules have to go. They were a vector by which user code could crash the application, breaking the promise. In that sense the restrictions are not arbitrary, but instrumental in upholding a major design goal of the Elm project.

That said, I don't know how much value there is in the "can never crash" promise. It feels like a step in the right direction, but at the same time it seems all the benefits in production code that I've personally experienced and cited in my earlier comment don't fall under the edge cases Elm prevents with its stricter promise. The two or three exceptions I've encountered in PureScript as a result of its version of native modules existing were all in toy programs, so maybe it's just not a big deal in real production apps.


That makes sense but it also strikes me as what I feel Haskell is like; it is so wrapped up in the ivory tower and preventing runtime bugs that it's hard to get actual work done. In TS, I also have not had runtime crashes but I'm able to use the vast array of packages for JS (which usually have TS types too nowadays) and get done what I wanted to. If I had to do the same in Elm but with much fewer packages, I mean sure I could reinvent the wheel but it doesn't bring us business value.


The guy just said earlier that he managed to do a full refactor in very little time and had super few bugs in production. I think that's a prime example of how you gain very big productivity boosts using elm.

I've used typescript and elm, and the ridiculous amount of time that we spend fixing bugs in typescript dwarfs the benefits of having a bigger ecosystem.

Elm was the first time I could release something and say that it was done. I had built an app in elm for 4 months behind closed doors and then went live, and everything worked, with the only bug being in an interop to JS (2 lines of code, I fixed it in less than an hour). It meant that I could continue developing new features as requests came in without stopping to fix things. In turn, it also meant I could easily predict how long things take to build. The benefits you get are massive. The cost you pay is that you do things properly.

After that project, it had been a while since I had used Typescript, and so I wanted to give that a try again, thinking that it's basically elm if you are strict about the typing. I fired up a new react project, added some tests using react-testing-library which passed, typing passed and then I booted the app and it crashed. Later on I needed to plot a graph and imported a library for it. I followed the instructions and no graph showed up. This would never happen in elm, if it compiles it pretty much always works. Typescript offers nowhere near the same experience and I lost so much productivity trying to debug the earlier issues.


> imported a library for it. I followed the instructions and no graph showed up. This would never happen in elm, if it compiles it pretty much always works

except that in elm that library would not even exist, and you have to write the binding yourself, which is why AFTER you write the binding, everything should work.


Actually there was a graph library for elm which I did use and it worked perfectly.

The elm ecosystem is surprisingly rich for having such a small community. The libraries are also much better designed and much safer to use due to the pure nature of elm.


Elm is perfectly stable and usable. It's a shame there's not a little more communication / a wider core team working on the parts of Elm that could reduce the friction for the average web development task.

There's Gren https://gren-lang.org/ which is a fork of Elm. Gren has NodeJS and web storage support, along with a package manager which can install from GitHub (iirc). I'm eagerly watching the development of this project; hopefully it can provide value and grow into a strong alternative.

I loved working with Elm. It forced me to grow as a developer and appreciate all the advantages that come with a pure functional programming language. I truly hope Evan can start to win back the lost mindshare.


There is also Roc-lang: https://www.roc-lang.org/


Personally, I wouldn't trust anything that Richard Feldman was involved in. He was instrumental in making the Elm community a hostile and unwelcoming place[0]. To my recollection he has never come out and admitted that the Elm core team was wrong in how they handled any of those things, so why should anyone assume any better from Roc or anything else he's involved in?

0 - https://github.com/gdotdesign/elm-github-install/issues/62#i... (see edit history for full impact)


I was upset and said things I regret and which the poster didn't deserve. I apologized but I still feel bad about it.

Not that it excuses my behavior, but I have mentioned this elsewhere - e.g. https://old.reddit.com/r/haskell/comments/qc4bxd/outperformi... - though I also can't blame anyone for not knowing that.

Edit: I decided to edit the original comment to make my feelings about it clear for anyone else who comes across it in the future.


It's commendable to reflect so publicly on this and to make it clear that you made a mistake. Not that it means much, but this comment and the one you linked actually softened my view on this debacle considerably.


Made by the same people who maintain elm. So there is always a chance of similar mismanagement


I'm excited for the progress on gren as well. It's very clear the lead maintainer wants to build regular updates into the DNA of the language, as evidenced by the very frequent updates on Zulip

https://gren.zulipchat.com/


They made HUGE mistakes not listening to feedback from their userbase. It's still a cool piece of tech, but they wouldn't budge on a lot of things that were dealbreakers to a lot of people. That's their right to do so of course. In a very competitive area of frontend, people chose to spend their effort and time elsewhere.


My professional Elm experience is as follows:

* 2017 - 2019

* 2021 - 2023

* 2023 - ?? (I'm starting a new job next month with Elm)

In my experience Elm isn't dead. But it is good enough. The lack of rapid changes, updates, etc are a feature, not a bug. I hated working in React TypeScript and what felt like constant updates to major libraries, including React and TS themselves.


> The lack of rapid changes, updates, etc are a feature, not a bug

This position in the Elm community is epitomized by this site:

https://iselmdead.info/


Are you, by chance, a "washed-up local"?


Indeed I am


My opinion: I wouldn't use Elm as a solo founder, as you want to put your energy into shipping, and Elm, while very good, will probably slow you down when you hit the thing it cannot do, and then you need to fork it and change the core code.

React on the other hand, has a vastly larger ecosystem already and is designed to work well with 3rd party code.

Middle ground is using an Elm-like architecture within another framework. The keyword here is “TEA architecture” where TEA means The Elm Architecture, or also MVU (Model View Update)
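
If the other framework is React, for example, useReducer gets you most of the TEA/MVU shape; a rough sketch (component and type names are just illustrative):

    import { useReducer } from "react";

    type Model = { count: number };
    type Msg = { kind: "Increment" } | { kind: "Reset" };

    // The pure Elm-style update function, just called a reducer here.
    function update(model: Model, msg: Msg): Model {
      switch (msg.kind) {
        case "Increment": return { count: model.count + 1 };
        case "Reset": return { count: 0 };
      }
    }

    export function Counter() {
      const [model, dispatch] = useReducer(update, { count: 0 });
      return (
        <button onClick={() => dispatch({ kind: "Increment" })}>
          clicked {model.count} times
        </button>
      );
    }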

Elm is excellent and has done a lot of good for the front end world through knock-on effects. It is good fun to use. I have a few projects based on it open source. So I am not against Elm; I just think it will be annoying if the goal is to ship fast.


> I wouldn't use Elm as a solo founder, as you want to put your energy into shipping, and Elm, while very good, will probably slow you down when you hit the thing it cannot do

This was me a couple of years ago and I even gave it a fairly serious shot. It felt like the ecosystem buy-in was huge, and once I ran into something like a missing CSS rule it was a mess. And iirc all of that code would have been basically useless for portability in case I needed to switch.

Fortunately my tingly senses told me to stay away, and now I really feel like I dodged a bullet.

The main thing I consistently dislike with JS/TS is that there aren't any easy ways to work with immutable data. I'm not the kind of person who needs a full-on paradigm, and I don't mind getting dirty with an imperfect language; I just want the error-prone and common use cases to be decently supported.


You may have to write a few bindings, but Fable can give you an Elm like language (F#) with full JS ecosystem interop including React components.


> “TEA architecture”

The Elm Architecture Architecture ;)

https://en.wikipedia.org/wiki/RAS_syndrome


I like tea (and coffee) though!


Elm is still really good for web applications, and the community is still active.

I used Elm in production for many years and it is my choice for most production projects.

Exceptions are

- If you need to do a lot of number crunching and can't use Elm ports

- If you need to use WebGL

- If you depend on very specific JS libraries and can't use web components

Many of the controversial decisions do make sense if you dig deep enough, but... You need to dig enough.

I agree that there is an utter lack of transparency and communication from Evan and the core team.

Worse, there is no acknowledgment of the problem nor any attempt to try and truly involve the community in the future of the language.

Many enthusiastic people have left the community, leaving only those who can tolerate being told what is best for them.

Personally, I stopped contributing to the community in any way.

I started to write my own replacement language, http://squarepants.io/ which also tries to address some of Elm's weaknesses when scaling up (I was fortunate enough to use Elm on some large projects).


A while ago I picked up PureScript again, and it has a new life. The tooling (both CLI and editor) has matured, and the number of code examples has grown a lot. It's quite pleasant to use!

All the tools and other goodies are listed here https://discourse.purescript.org/t/recommended-tooling-for-p...

What I found particularly fresh for a Haskell-like language was the cookbook (https://github.com/jordanmartinez/purescript-cookbook), which contains lots of small-to-medium-size realistic examples. They get you started and as a result do wonders for the learning experience.


Check out Roc[0][1] by Richard Feldman; it's early-stages (perhaps earlier stages than Elm?) but from everything I've seen it looks a bit like a spiritual successor to Elm, though focused more on native applications (but still seems to have its sights set on webassembly support too)

[0] https://www.roc-lang.org

[1] https://github.com/roc-lang/roc


Ironic, given what Feldman said about other Elm forks.

https://lukeplant.me.uk/blog/posts/why-im-leaving-elm/#forka...


Roc is not a fork. And Evan himself doesn't care about forks, unless they use a related name or logo.


Elm is and always has been (AFAIK) a one-man project. It's not suitable for anything that you depend your livelihood on. Look at Svelte, Vue, React, Angular.


I have to politely disagree. I have personally staked a lot of my career and livelihood in Elm and it has paid off quite well. Clearly I love the language, but being a part of a community of nice people has been amazing too. I have worked at 4 companies that use Elm, some of which turned into big successes imo. Programming in Elm is always how I have supported my family.

It is going really well! Whenever I hear Elm can't be done, I am always a bit baffled. I am right here! Look at me! I am doing it!


Another happy Elm programmer. Elm has provided my livelihood for the past 4 years and still going.

All these people complaining about the lack of native modules: yes, it becomes a problem sometimes, but it's not a show stopper; there are ports and web components.
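
For anyone unfamiliar, ports are just async message passing between Elm and JS; the JS/TS side looks roughly like this (the port names and the type declaration are made up for illustration):

    // The JS/TS side of a hypothetical pair of Elm ports named toJs / fromJs.
    declare const Elm: {
      Main: {
        init(opts: { node: HTMLElement | null }): {
          ports: {
            toJs: { subscribe(cb: (data: unknown) => void): void };
            fromJs: { send(data: unknown): void };
          };
        };
      };
    };

    const app = Elm.Main.init({ node: document.getElementById("app") });

    // Messages from Elm arrive asynchronously here...
    app.ports.toJs.subscribe((data) => {
      console.log("Elm said:", data);
      // ...and results go back the same way, never as a return value.
      app.ports.fromJs.send({ ok: true });
    });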

I recommend Elm to any developer wanting to get into Functional Programming. Elm official docs itself will take you far.


I really don't like such comments that lack any nuance.

I know multiple companies that use Elm and many people depend on them for their livelihood.

I would not recommend it because it has never been a really open source project and only the core maintainers were allowed to write libraries that interfaced with JS APIs without the need for ports.

Still, it baffles me how many people would chime in and feel compelled to comment regardless of the fact that hundreds of projects out there make money with Elm.


And if the sole developer of Elm forgot to pay for their domain that hosts packages for Elm all those projects would be broken forever.


When your parent said "one-man project", he is referring to Elm, not the project using Elm.

Elm was/is highly dependent on one person.


> hundreds of projects out there make money with Elm

Hundreds? I think that's overshooting it.


I know half a dozen in Italy alone.


There were (are?) multiple core contributors. A person at my previous company even got to spend one day a week or something contributing.

But even with multiple people, I did get the impression it's not exactly a committee when it comes to what changes get merged.


There's an active fork of Elm in https://gren-lang.org/.


Elm is like American politics. The most vocal people are divided as strongly as if they'd stabbed each other in the back. The majority of people aren't vocal and just go about their day, sometimes using it and sometimes not.

My 2 cents. Don't use this place as a metric for whether or not something is successful or worth trying. 20 years ago this community would tell you not to use Ruby or Javascript, but today they'd call you an idiot for not using them.


the elm ecosystem is thriving.

what's interested me most recently are the various efforts to bring elm to the backend -

* choon keat's https://package.elm-lang.org/packages/choonkeat/elm-webapp/

* mario's https://lamdera.app/

* dillon's https://package.elm-lang.org/packages/dillonkearns/elm-pages...

What initially got my attention was that release which removed signals and abandoned FRP. I remember being struck by the idea of an emerging language that was stripping out major features. To me that suggested a strong vision. So I looked closer, and the suggestion was confirmed. My joy of Elm is as much about what it does not allow, as what it does.


I loved working with Elm. For me, it was the best of the frontend frameworks due to its functional, Haskell-like nature (Edited to add this comment: totally my subjective opinion, of course). I've found Phoenix's LiveView to be the best alternative for me so far.


After working 3 years using elm exclusively for the frontend, it felt dirty going back to react when changing jobs.

The previous huge project had basically zero bugs. The code was nice to work with, could trust things were working after a refactor without even clicking around in all interfaces. Could trust the typing and forced handling of all state to notify if you broke something. Of course, no escape hatches meant you had to program things the one correct way, no shortcuts possible.

While the react codebase I work in now is so brittle in comparison, and scary to make changes in.


I couldn't have said it better myself. The peace of mind of being able to refactor often, even in large projects, and not be worried about bugs or runtime errors once the project compiles again is incredible.

I couldn't imagine having such a large project/codebase in react and needing to do a huge refactor...


This was my experience as well. Elm is/was damn solid.


As someone who has done commercial and personal work in Elm for many years, I know that lots of people are actively and productively using it with fewer issues than most other mainstream languages. Of course there are issues in any language or ecosystem.

So what is HAPPENING with Elm, is people are productively using it.


Can you give examples of any companies that are still using Elm in 2023 (as in writing new code, not just maintaining old Elm code)? StackShare doesn't list any recent ones



Also, brilliant.org and exosphere.app, both with large and active Elm code bases. I worked for brilliant for a time and now work for exosphere.

Also, my own project, https://scripta.io. About 45kloc of code split between the app (lamdera/elm) and the MicroLaTex-to-HTML compiler (elm).

Have been working with Elm for five years and have been happy with both the experience and the results. E.g. radical refactors of core data structures are simply no big deal.


Some Elm companies I know of are (writing new code) Permutive, Scrive, Vendr, DX, and (not sure what the current status is) GWI, Holmusk, NoRedInk, Dividat, Brilliant.


F# fable Elmish is really nice and similar.

I've heard good things about purescript too.

I love using the SAFE stack in F#. You get the Elm architecture, and the same language front and back. You can even have shared code between the two. Fable Remoting makes API calls as easy as an async function call. The back end has access to most of the .NET ecosystem.

F# isn't the purist choice but it's a nice pragmatic functional first language.


Sadly people will keep overlooking F# in favor of literally anything.

It is an amazing language within a solid and productive ecosystem that in my opinion just feels good to use.

It's a shame not many companies are betting on it despite it being rock solid for almost two decades.


I tried using Elm for a solo side project. Lots of things to like about it and I learned quite a bit. Played with a lot of toy projects.

But I hit a wall when I wanted to use some templating and communication modules written in Javascript. Working with things like protocol buffers and externally generated html through ports was very frustrating. As I kept working around Elm's limitations I began feeling this wasn't a language I could use part-time. I either needed to go "all in" or find something that made it easy for me use external Javascript. Looked at giving dillonkearns' Typescript api product a try but his e-commerce site crashed...

So now I'm trying out Yew, Rust and WebAssembly. I miss Elm's compiler but wasm_bindgen is way easier to use than Elm ports.


I just applied for an Elm job a month ago. It's not dead, it's just super niche.


the core team took a "my way or the highway" attitude, and people picked the highway. the language is still around and being actively developed but the community has largely moved on.


For anyone who saw this headline and wondered what happened to Elm the email client, the short answer is it was overtaken by Pine and Mutt and is essentially dead. RIP elm, you were my first client and a dear friend.

https://en.wikipedia.org/wiki/Elm_%28email_client%29?wprov=s...


I started with elm, moved on to Emacs, pine, then mutt, then taught mutt to my spouse and moved on to a range of odd and unpredictable clients for work including outlook, web based gmail, etc. My wife still uses mutt today as her main client. I have largely given up on email as a main means of communication, but I am happy that there is a subset of the world that gets by with the plain text based clients.


fwiw, Elm is still my daily driver for my one man projects. Though I understand why people wouldn’t want to begin learning an ecosystem that seems dead. Elm is still my favorite solution to front end development.

But compared to an ecosystem that is truly dead, Elm has lively Slack and forum communities, and every StackOverflow question is answered such that I don't even bother trying to procrastinate on SO by helping beginners anymore; someone will have beaten me to offering help!

Right now, Elm’s Slack is one of my favorite places to hang out and chat on the internet. While it is a bit confusing why compiler work has slowed down so much, it’s not the best way to judge an ecosystem.


Maintainers were too tenacious imo. Inflexible to a frightening degree. I liked the architecture and used it extensively with F# + Fable + Elmish, but these days I'm using SvelteKit + TypeScript for everything frontend, for its unmatched velocity.


When you look at Elm with similar projects, like Melange or Rescript, they’re all “dead” in the sense that they aren’t the moving targets that most people expect of the Javascript ecosystem.

That said, they are still under active development with their own small communities achieving what they set out to do, and no more. No expansive rewrites or fancy new features.

Compare with Typescript, which always is “improving” because it’s trying to provide type inference around a loose Javascript model, so there’s always more complexity to be added.


> they’re all “dead” in the sense that they aren’t the moving targets that most people expect of the Javascript ecosystem.

Are we talking about the same Elm? The 2019/2020 uproar in the Elm community was due to the Elm team making significant, restrictive changes against the will of the userbase: https://news.ycombinator.com/item?id=22821447

> That said, they are still under active development

I don't know who would consider Elm to be under active development, given that the last release was in 2019 and most of the repo hasn't been touched for 2 years or more: https://github.com/elm/compiler


Evan works privately and syncs with the public repo every release.


I don't think ReScript is dead[0]; in fact, they are pushing active updates. I think they are trying to make a push into supporting the OCaml ecosystem as well, and not just be a compile-to-JS language

[0]: https://rescript-lang.org/


Ironic, when they could've just stuck with Reason and gotten all of that for free instead of reinventing the wheel.


Any typed language should always be improving at least because there will always be meaningful propositions which the type system cannot prove. As the language team develops ways to infer or preserve type information they may wish to modify semantics to express these things.


I think the alternatives to consider would be:

- https://www.purescript.org/

- https://reasonml.github.io/en/

Or like others have suggested typescript and the usual libs and frameworks


Reason is mostly not about JS compilation any more, the alternative is really ReScript.


AppRun is a JS/TypeScript library inspired by Elm: https://apprun.js.org/

Does very well on comparisons of performance and code size: https://medium.com/dailyjs/a-realworld-comparison-of-front-e...


It truly was the best frontend language

Unfortunately or fortunately, beauty and utility emerge out of decentralised chaos not out of dictatorship, no matter how enlightened


This was just discussed in the Elm Discourse as well: https://discourse.elm-lang.org/t/request-elm-0-19-2-any-upda...

This thread, (mostly) by current Elm users, largely tracks the emotions here on HN


> Am in the process of starting my first SaaS as a solo founder and was looking at what languages to use

Use the language you know best. Your tech stack is not the product:

https://hoho.com/posts/your-stack-is-not-the-product/

As for what happened to Elm? It reached a local maximum. Evan didn't want its purity to be compromised by external JS interop, which cut it off from most of the open source ecosystem. If you never need to leave its confines, it's a wonderful place to be, but an engineer's job is to ship.


I'd never heard of Elm until I ran across Iced [1] ("Iced is a cross-platform GUI library focused on simplicity and type-safety. Inspired by Elm.").

Has anyone who liked Elm tried Iced? Any thoughts?

I've been checking out various GUI libraries for Rust, but I don't have time to investigate them in depth--so I thought maybe someone might have already done the investigation for me :)

[1] https://docs.rs/iced/latest/iced/


I've developed a few Elm apps, and tried Iced. I liked Iced, but I'm not a proficient rust developer, which really limited what I could do with it.


system76 is using it to develop their new cosmic desktop environment for linux. The code is here https://github.com/pop-os/cosmic-epoch

We'll see how it starts panning out this year or next.


To add to the discourse here, it's worth considering that since Elm came about WASM has come along in huge strides, and at this point you're faced with a "Why learn Elm when I can use <lang I already know> and compile to WASM".

See projects like Yew and Blazor.


The issue I have with WASM is that its threading model is basically the web-worker model: each thread has to have its own module and can only communicate through pure data via shared memory.

TS/JS can do a lot these days, but threading is its achilles heel. Mechanisms like green threads and fast n-way dispatch for parallelization are basically still out of reach.

You can emulate some of this with WASM and worker pools, but it seems like you'd need a fair amount of boilerplate to actually make that work properly. And if you want to interface with native web APIs, you're stuck with the same limitations.

e.g. You can share memory with a web worker, but if you want to pass handles to resources around, you are extremely limited and it requires a custom approach for each particular API.
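
Concretely, the ceremony looks something like this (a sketch; the file names are made up, and SharedArrayBuffer additionally requires cross-origin isolation headers):

    // main.ts - spawn a worker and hand it a chunk of shared memory.
    const shared = new SharedArrayBuffer(4);
    const counter = new Int32Array(shared);
    const worker = new Worker(new URL("./worker.ts", import.meta.url), { type: "module" });

    worker.postMessage(shared);  // only plain data / transferables cross the boundary
    worker.onmessage = () => {
      console.log("worker bumped the counter to", Atomics.load(counter, 0));
    };

    // worker.ts - no DOM, no shared JS objects, just bytes and messages.
    onmessage = (e) => {
      const view = new Int32Array(e.data as SharedArrayBuffer);
      Atomics.add(view, 0, 1);
      postMessage("done");
    };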


WASM is still not realistic for most apps especially if you care even a little about bundle size.


AssemblyScript produces the smallest binary, and then Zig. Rust produces bloated binaries by default, but they can be made small by following https://github.com/johnthagen/min-sized-rust for a hello-world type of app. I have no idea if the GC proposal could make those langs produce smaller binary sizes. That said, .wasm is generally smaller on the wire and faster to execute on the host.


Smallest binary compared to what? What will a typical AssemblyScript bundle size be like if it has similar functionality to a typical React app, let's say a todo app implementation?


Smallest compared to any of the currently capable compile-to-wasm langs. How dare you bring up React, when it already loses to Solid, which is an apples-to-apples (JS) comparison. React is freaking bloated. Much of its functionality exists to solve problems that React itself created.

If you ask me to compare to React for a todo app, wasm with GC + DOM interfacing is going to blow React out of the water in both size and performance (logic-wise). DOM-wise, things like "signals" have already won anyway.

I honestly don't understand why you bring up React.


Any actual numbers from an actual comparison? Also, 'how dare you', lol, calm down dude.


Sorry about that, I was saying it in a kidding tone. We're all good.

Here is the wasm binary size number (2019): https://stackoverflow.com/questions/55135927/how-do-webassem...

I didn't bookmark the Zig and other numbers; they're scattered here and there across HN threads and GitHub issues.

---

I think you conflated "wasm binary size from AssemblyScript" with the size of AssemblyScript itself; that's why you brought up React to compare with AssemblyScript. AssemblyScript doesn't compete with React. And comparing React with the others is also tricky because of the paradigm difference: React's view and logic are very coupled. Compile-to-wasm langs only compete with the js/ts part, excluding the DOM part... at least for now, because wasm can't even share string refs right now (the "stringref" proposal is currently phase 1), let alone DOM access.

The wasm-gc proposal is already in phase 3. I think languages that used to also compile their runtime in with the app code could get smaller, because the wasm host manages garbage collection for you in terms of "struct" and "array" (you can check out https://github.com/WebAssembly/gc/blob/main/proposals/gc/Ove...)


> "Why learn Elm when I can use <lang I already know> and compile to WASM".

Especially now that Haskell is getting an official WASM backend.


Agreed. Also think Elm could benefit a lot from good interop with Wasm.


> Is it a safe bet?

Yeah, for web apps it's a good choice if you want simple robust maintainable code.

I mean, for quick throwaway scripts I'd go with JS (as a Bash replacement), but for web apps, Elm strikes a really good balance of features.

The language is not dead, the community mostly lives on the Elm Slack (https://elm-lang.org/community/slack) as opposed to Reddit or StackOverflow.


this question comes up often enough there is a site dedicated to it: https://iselmdead.info/.

for the open minded: building in elm will change the way you see software development. like the matrix red pill blue pill scene - you won’t be the same afterwards


isn't that a bit of an exaggeration, especially if you have experience with other functional languages? maybe if you've only ever written JavaScript prior to trying Elm…


I programmed in functional languages prior to writing Elm, but the reactive part was new to me. I found the signals primitive to be pretty mind-blowing. I haven't programmed in Elm since then, and some time ago they simplified that feature into other language primitives, so I'm not sure what it's like now.


ReScript - super good, actively developed: https://rescript-lang.org/


Yes.

I never understood why ReScript didn't get more love.


The split between Reason and BuckleScript/ReScript dealt, I believe, a fatal blow to both of the resulting communities. It's a shame; I could have seen Reason in the same space as Rust, an ML-like language (well, literally an ML via OCaml) except without a borrow checker. It would have been great to use it in place of TypeScript on both the server and client, but now ReScript is incompatible with Reason, by design.


I never understood the reason for the split.

Was it because native had different requirements than JS?

Why couldn't they both play along and target WebAssembly?


It seems like they simply wanted to add stuff that might not have been possible without changing ReasonML, and so they built their own framework and rebranded it to ReScript: https://ersin-akinci.medium.com/confused-about-rescript-resc...


If you're curious for a successor language that has better interop - along with fixing some of the problems that I saw a lot with Elm (former core team member), check out Derw: https://www.derw-lang.com/. The blog is updated frequently with whatever I'm working on, check out the latest post talking about what features are coming to Derw this year: https://derw.substack.com/p/things-coming-to-derw-feburary-2...

And to kick things off, here's a blog post on "Why Derw?": https://derw.substack.com/p/why-derw-an-elm-like-language-th...


I love elm! It changed the way I program in every other language. Its philosophy resonates with me.

What happened with it? Not much. It just continues working and being useful for the people that use it.

Some folks have started working on a fork of the language, called gren.

Also, last year a package was published that added a different way of interoping with javascript (elm-taskport).

Apart from that, the language is pretty stable and solid in its niche. I don't see it going away in the next 10 years at least.


They essentially stopped development at 0.19 saying that it was good enough, development went quiet, and then the lead developer went off to work on other things.


>went off to work on other things

I heard he had a baby in recent years, is that what you meant? ;)


No, I don’t know anything about his personal life, so I wasn’t alluding to that.

I think at some point in the last year he posted about his explorations with a new server-side language, maybe something to do with database drivers. Unfortunately I can’t find it.


This isn't actually an elm post or a functional post although I know "a little" about that specific topic.

All new industries or genres or fads, or anything really, go through rapid growth of small players which get gobbled up into a couple of giant monoliths as time goes on, which makes them really slow, which eventually allows disruptive newcomers to arrive (although not yet in this case).

So about five years ago I remember a study of the then new-ish functional front end project repos on Github, and there were only 25x as many scala repos as elm and about 15x as many haskell repos as elm. So without knowing anything about the techs, or even talking about the techs, you know that as rapid growth ends and consolidation begins, elm is going to disappear because it's simply too small. Every 2018 elm project converting to scala before 2023 would just be a decimal place in the scala or haskell numbers.

For another example see the auto industry. A century ago, before "the big three", there were hundreds of small car companies. All gone!


Pardon the excessive length of this response but I think about Elm all day every day and love the chance to talk about it on HN! Currently and for a couple months I've spent several hours per day working on my side project in Elm. Over the two years before that my usage was lighter, probably averaging out to a dozen hours per month. I consider myself experienced in Elm and my history with it goes back to basically the launch of 0.19, the last major version (which is four years old now, I believe).

Elm the language:

Generally rock solid for my purposes as a single dev making "home cooked" apps and games. There are very occasionally things where I'm held back by its inflexibility, but these are more than made up for by the positives. What elm has going for it is that it's totally type safe (not just mostly type safe like typescript) and that Evan and the community did a really good job designing primitive libraries for getting stuff done that have all the virtues you could wish for in other languages. Like great error messages, trying to make sure there's only one way to do something, etc.

Elm the community:

Extremely welcoming and helpful if like me you stick to Slack help and discussion channels. In Github comments threads and on forums like HN things can get spicy though. Elm is generally not as welcoming to folks looking to blaze their own path and build empires of their own as most of the JS ecosystem is. It's not that it never happens or can't happen; people do build their own large empires within Elm (I'm using Lamdera at the moment which is a fork of Elm making it work on the backend as well as frontend and it's definitely built by someone who had a vision related to Elm but which required drastic changes). It's just that that's less common and harder to do here than in other languages. I think there are a lot of folks like me though who are content to paint their masterpieces "within the lines" so to speak and even find comfort in the limitations. It's way easier if you are like me and find yourself doing like I said "home cooked" apps that you can cut scope on if you have to because of the language.

One final note which I've said before and I'll say again: I don't think the language is dead but rather "done". It works great for making a frontend app, and can continue to exist in this state through the year 2100 as far as I care. The creator has promised security updates and perhaps some non-foundational improvements down the line, and that's good enough for me. I'm much happier with that arrangement than with, say, React land where for all you know next year they're gonna release Hooks 2: Electric Boogaloo and you have to learn a bunch of new stuff and all the old libraries will have to update.


One of my previous employers had an Elm frontend and a React frontend. Basically all development on the Elm side was slower. New hires generally had used React before in at least some capacity, but everyone had to learn Elm from scratch. When I left Elm was slowly being phased out but there were still a lot of Elm components left over.

For me, it felt like a heavier cognitive load to do anything with Elm since it was another abstraction level away from the DOM. The quote "I know this is possible in Javascript, but we chose Elm and it makes it very hard" from the "Why I'm leaving Elm" article (https://lukeplant.me.uk/blog/posts/why-im-leaving-elm/#techn...) is something I never said to my boss but still resonates strongly as how I often felt internally frustrated when trying to work with Elm. Some of this might have been because I was very familiar with JavaScript and less familiar with something like Haskell, but I think it's fair to say that most people working on the front end web are likely the same.

For example look at the "simple" counter example... there's a ton going on that doesn't seem intuitive, and even having used Elm professionally I double take looking at lines like "update : Msg -> Model -> Model" or "main = Browser.sandbox { init = init, update = update, view = view }": https://guide.elm-lang.org/architecture/buttons.html

The equivalent React simple counter example (https://reactjs.org/docs/hooks-intro.html) is much more understandable IMO. Even if you haven't ever used React before, it looks much more like typical HTML and JavaScript. React has plenty of gotchas and is far from perfect, but in terms of getting junior web developers making their first pull requests in our codebases, React took way less effort and learning.
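
For reference, the linked React counter is roughly the following (reconstructed from memory as TSX, so treat it as a sketch rather than a verbatim copy of the docs):

    import { useState } from "react";

    function Example() {
      // One piece of local state plus a setter; no Model/Msg/update ceremony.
      const [count, setCount] = useState(0);

      return (
        <div>
          <p>You clicked {count} times</p>
          <button onClick={() => setCount(count + 1)}>Click me</button>
        </div>
      );
    }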

Elm isn't dead, it has some dedicated FP fans as users and works fine. It is niche though, and I don't expect that to change.

--

As a side note, it always irked me to see the word "I" in either an Elm error message or the official documentation. In errors it felt overly friendly or personified rather than giving me straight technical info, and in documentation it felt concerning that a single person's opinion might have an overly strong influence and that person might not listen to the broader community on some key issue and prevent progress.


> The equivalent React simple counter example [...] is much more understandable IMO.

Eh, I'm a backend dev, and it isn't more understandable for me. setCount is a constant yet we're calling it as a function? And what the hell is useState?

I've written a fair bit of Elm on some toy projects and enjoyed it.


In JS functions are first class citizens that can be saved to variables, passed as arguments or returned by other functions, etc.


More understandable if you already know Javascript of course. If you don't, of course you'd be confused why const is used (it's technically as if the constant were a pointer to a function, but in practice a function and a pointer to a function are one and the same in JS).

useState is one of those things that's a React specific concept, yes, but the rest of the code is very familiar to non-React frontend developers in general.


Might check this discussion on Elm's forum.

https://discourse.elm-lang.org/t/request-elm-0-19-2-any-upda...


I think the creator's burnout happened because his personal style of working was misaligned with the demands of users (developers). On some points I think the creator made the right choice, such as disallowing native code, no fancy types like typeclasses, etc. But the bell curve of developers can't stand the pure <-> impure interop. They'd rather have messy types like Typescript's, where JS things can float around freely with insane effects everywhere, plus hackable and ugly structural typing... unsound. They thought people don't love types the way they do, but people do love elegant types! ...it's off-topic now, yeah.


I still program in it for my day job, as I have at various companies for the last 8 years. I still love it!

It has been fairly quiet as far as development goes. But I do know there are things in the works for another big update.


Based on this thread and many others, one thing’s for sure, Elm was polarizing!


- Elm is still great but it seems to have lost momentum

- The compiler allowlist in Elm did not go down well

- Fable (F#) is a great alternative to Elm

- F# is a better server language than Elm on Node.js IMO so is a great full-stack pick


Elm was fun and I learned a lot from it, but a purely functional language doesn’t fit cleanly on top of a real-world web browser environment, which is rife with states and side effects.

It was possible to build some elegant projects with Elm if you accepted the limitations, but the messy reality of web apps doesn’t fit within its pure model of the world.


I found myself able to build more impressive things with elm than without, quicker, with no runtime errors


We wrote some of the most complicated web pages ever in Elm. It manages complexity to a level that imperative languages (or, gods forbid, OOP) cannot even dream of.


Precisely because the web browser environment is rife with state and side effects, that is exactly the reason why a purely functional language is a good idea for it.



There was this blog post which highlights the issues https://lukeplant.me.uk/blog/posts/why-im-leaving-elm/


Looks like the last update was in 2005:

http://www.instinct.org/elm/#download

Oh, you weren't talking about the email client. :)


I write it every day and it whips ass.


Most users switched over to pine, but the final nail in its coffin came when mutt was released; mutt still sucks, but it sucks less.


\(^▽^)/


For some reason I thought they changed the license, thus people leaving it.


It wasn't the license. It was adding pseudo-DRM to the compiler. (Not quite real DRM, since people were technically capable and legally allowed to create a fork without it.)


My experience with Elm was very short and rather funny: a programmer I vaguely know approached me (on Twitter?) and with the usual zealotry of functional programmers started convincing me "You just have to try this language out, it's the best, and they've made it IMPOSSIBLE to get a runtime error!"

Now I don't believe in miracles, the Easter bunny or code that has no way of failing, but I said fine, JS can be terrible (it was even more so in those days), so I googled the project webpage, opened it and WHAM, blank page, with only "Error: database connection not available" or something like that written on it. Let me tell you, I laughed so hard as I was closing the tab! And while I'm sure it was just extremely bad luck, I never looked at the project again. They had their chance, and maybe don't make bold promises you can't keep?


I'm sorry for being blunt, but if you are mistaking a database server being down (or CDN issues, or faulty backend code, or...) for the frontend code misbehaving, then either you are very ignorant of how "computers" work or you are being (very) intellectually dishonest. Don't do that.

For the record: yes, as much as it may sound bold or marketing-y, the statement is mostly true. You can have runtime errors if you write a bad regex though. After almost 3 years working full-time with Elm on a daily basis, I have not faced a single runtime error.

And by the way, this is coming from someone who moved away from Elm for the same reasons mentioned throughout this thread, so I'm definitely not an "Elm fan boy", but I do vouch for this programmer's statement.


I'm not ignorant of the difference, but at the end of the day, what's the difference? It was definitely Elm throwing that error and they certainly weren't adding "well, for the frontend part anyway, you're out of luck for the rest, lol". And even if we stay on the frontend, I've been doing web development for a long long time and I've seen some weird things. Are you telling me that Elm will catch the user's local storage becoming full? An AJAX URL being blacklisted? Chromium playing tricks with your DOM because it found a field entitled "username"? You have to catch all these things, like in every other language. Sure, the mechanism may differ; you may have Java style mandatory exceptions, or Go style error status that you can check. It may be done reasonably better than with competing languages. But there are no magic bullets and I don't appreciate being mislead that there are, and _certainly_ not with a holier than thou attitude.

I'm not saying Elm is bad, I was merely recounting a personal anecdote that for better or for worse prevented me from experiencing Elm.


You realize that Elm is for Frontend, right? It doesn't prevent backend runtime errors.

Edit: When I see errors like this, I think it's gaining some interest/popularity and try to remember to come back later.



