I've only recently started to pay attention to Elm after hearing some good things about it so this article was timely for me. I've also dealt repeatedly with the example the author uses, a wizard or multistep form, so I was definitely interested to see how it played out.
And I was nodding my head in rhythm with the author until he got to the section on lenses. He writes:
> That’s 14 lines of code to update one single field. Not only is this single example somewhat confusing to follow, you also need to imagine how this update function will look when taking into account the five or so other fields just in the address record! This is — quite frankly — pretty terrible. The trick here is not to stare out of the window and contemplate rewriting everything in ClojureScript. Instead, the thing to do is write a whole bunch of lenses.
To my eye, the lenses code he ended up writing didn't look all that much prettier or shorter. I guess it's more neatly structured and could even be organized into its own file?
Is the advantage that you're providing access at each level of the nested record as you code it out? Are there other approaches to this nested field problem?
The lenses code is not much better to access one field, but creating equivalents for other fields then becomes a composition of bits you mostly already have.
I did Elm for about a year professionally, and loved it. Reading the Discourse thread he links, I'm guessing one of Richard Feldman's recommendations could be to flatten the structure. I'm sure there are times when that's no better than 14 lines to update 1 field, but it should probably be used more than it is.
As for why his lenses code can be better than the 14 lines, you're exactly right: it can be extracted into its own file, making it an easy-to-use API made of stable code. That lenses code is simple, though unusual at first glance. Once it's an established pattern in your codebase it'll take 5 seconds to grok an entire file of it.
> This [flat Msg design] does work, but at some point it becomes cumbersome to support a large number of constructors.
As guessed, I'd personally stick with the simpler flat approach the article acknowledges and then tries to improve on.
Nesting that data structure is (according to the article) an ergonomics improvement, but then the rest of the article is about how to solve ergonomics problems caused by the very nesting that was supposed to make things nicer!
Given that, surely it's reasonable to raise the question of whether this nesting was actually an improvement after all. My conclusion at the end of the article is that in retrospect nesting did more harm than good, and knowing that, I would have happily left it flat.
Incidentally, "leave it flat" is not my stance in all cases. For example, I gave a whole talk that could have been titled "when to nest" - https://youtu.be/DoA4Txr4GUs - and my most popular Elm project uses nesting in multiple places - https://youtu.be/RN2_NchjrJQ
But indeed, in this case, I'd happily leave Msg flat and not bother with all the lens stuff!
> To my eye, the lenses code he ended up writing didn't look all that much prettier or shorter. I guess it's more neatly structured and could even be organized into its own file?
I would typically shuffle all these lenses into their own file, yes.
It doesn't look shorter in the small example, but applied to a real-world example — given that lenses compose — the savings quickly begin to add up. The result (at least, in my experience) is that `update` functions are much shorter and easier to follow.
Notice that these functions are almost identical to the ones he wrote in order to build the lenses.
These update functions can be composed easily:
    updatePersonDetailsAddressLine1 : (String -> String) -> Model -> Model
    updatePersonDetailsAddressLine1 fn model =
        updatePersonDetails (updateAddress (updateLine1 fn)) model
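For readers without the article open, here's a sketch of what those per-level helpers look like (record fields assumed from the article's wizard example):

    -- Hypothetical record types, assumed from the article's example.
    type alias Address =
        { line1 : String }

    type alias PersonDetails =
        { address : Address }

    type alias Model =
        { personDetails : PersonDetails }

    -- Each helper lifts an update on an inner value to an update on its container.
    updateLine1 : (String -> String) -> Address -> Address
    updateLine1 fn address =
        { address | line1 = fn address.line1 }

    updateAddress : (Address -> Address) -> PersonDetails -> PersonDetails
    updateAddress fn details =
        { details | address = fn details.address }

    updatePersonDetails : (PersonDetails -> PersonDetails) -> Model -> Model
    updatePersonDetails fn model =
        { model | personDetails = fn model.personDetails }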
On the other hand, nesting records is discouraged because it brings no benefit at all and it creates the problem of having to write extra boilerplate. He wouldn't even have the problem that lenses solve if he had just stuck to the "flat" record.
Btw: I think it was a great article! I didn't mean to sound too harsh criticizing; just wanted to clarify that point.
1. What you've written looks quite similar to the lenses approach.
2. I disagree that having nested records brings no benefit at all. What about when the types are generated on the backend so the shape of the model never goes out of sync across that boundary? For a non-trivial size application, I believe it's cheaper to use code generation to derive the type definitions, JSON encoders, and JSON decoders than it is to manually write all of it and try to make sure everyone on your team is disciplined enough to always make those changes.
It's not perhaps a surprise that they're similar to lenses; what you've written is part of the lens hierarchy (they're setters). This is why people often say you end up recreating the lens hierarchy anyway if you work with nested data structures a lot. Lenses just combine getters and setters (so they're just a pair of functions; libraries like Monocle simply give them a name instead of making them a literal pair).
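To make that pairing concrete, a minimal sketch (this is the shape elm-monocle uses for its Lens type), plus the composition that makes it pay off:

    -- A lens is literally a getter paired with a setter.
    type alias Lens a b =
        { get : a -> b
        , set : b -> a -> a
        }

    -- Composing two lenses yields a lens into the deeper structure.
    compose : Lens a b -> Lens b c -> Lens a c
    compose outer inner =
        { get = outer.get >> inner.get
        , set = \c a -> outer.set (inner.set c (outer.get a)) a
        }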
They're nice if you have nested data structures that aren't records (e.g. naked product types a la "MyType Int Int Int").
But yes in general flat records are the way to go (but what about nested message types? Turns out that the lens hierarchy helps there too in the form of prisms. Or it would be nice if Elm had extensible union types so we could have flat message types too...)
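For illustration, a sketch of a prism over a nested Msg (message types are hypothetical; the record is the shape elm-monocle uses for its Prism type):

    -- A prism is to a sum type what a lens is to a record:
    -- a constructor paired with a partial getter.
    type alias Prism a b =
        { getOption : a -> Maybe b
        , reverseGet : b -> a
        }

    type Msg
        = GotAddressMsg AddressMsg
        | Submitted

    type AddressMsg
        = Line1Changed String

    gotAddressMsg : Prism Msg AddressMsg
    gotAddressMsg =
        { getOption =
            \msg ->
                case msg of
                    GotAddressMsg inner ->
                        Just inner

                    _ ->
                        Nothing
        , reverseGet = GotAddressMsg
        }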
> complex React/Angular applications, where invented complexity is the status quo and this kind of misdirection is simply what you have become desensitised to
This. Redux is the worst for this.
"Compose your reducers, which magic a state object out of defaults, and split your switch statement across a zillion files, and isn't it amazing how it's just Javascript?" - feels like a direct quote from every Redux tutorial and blog post ever.
I yearn for "make your state an object, and have a big switch statement. Oh, and if that ever becomes unmanageable, only then think about reducers."
It’s pretty odd right now that most languages are trying to add more ML features to classic OO and imperative languages. We design frameworks that are much easier to use in functional languages, then try to make them work with an imperative or classic OO language, and wonder why they seem to add unnecessary complexity as the app gets bigger.
Seems like we are at the start of a transition period where we will move to more functional languages over the next 20 to 30 years. I wish a large company would design a modern ML and not try to make it backwards compatible with an existing runtime.
If you haven't looked at Redux recently, you should look again. "Modern Redux" code is drastically simpler than the Redux code you've probably seen before. In particular, our official Redux Toolkit package [0] and the React-Redux hooks API [1] make it much simpler to write Redux code, by auto-generating action creators, writing reducers as object lookup tables instead of switch statements, letting you write "mutating" immutable updates in reducers, and providing good defaults out of the box and utilities that handle common use cases.
FWIW, we've just completely rewritten the official tutorials in our docs to show how much simpler "modern Redux" code is, as well as better explain the fundamentals and why many of these patterns exist:
- The "Redux Essentials" tutorial [2] teaches "how to use Redux, the right way", by building a real-world app using Redux Toolkit
- The "Redux Fundamentals" tutorial [3] teaches "how Redux works, from the bottom up", by showing how to write Redux code by hand and why standard usage patterns exist, and how Redux Toolkit simplifies those patterns
In addition, we've added a "Style Guide" docs page [4] that lists our recommended patterns and best practices, which help you write Redux logic that is simpler, easier to understand, and easier to maintain. That includes things like "write Redux logic as single file 'slices'", "put as much logic as possible in reducers", "model actions as 'events', not 'setters'", and so on.
We've gotten a lot of great feedback telling us that RTK and the docs improvements have really made it a lot easier for people to learn and use Redux effectively.
Why? Why should I look at "Modern Redux"? Why wasn't enough thought put into the original Redux? Like preommr mentioned, the basic flux model isn't that complicated.
Why do you guys complicate things? Why do you think new concepts are required? And new terminology? Why do you write so much code? Why don't you see how these things have been done in the past? This has been done in Mithril for ages.
My suggestion is - next time you guys decide to create something new, don't just keep thinking of new concepts and write code. It's all been done before. Just make it easier to use.
So, no, I'll not be looking at Redux, now or in the future. Because I've got applications to deliver and the end user doesn't give a shit about how fantastic your architecture is, if it takes ages to understand, implement, maintain & debug.
> Why wasn't enough thought put into the original Redux?
Because times change, ecosystems change, patterns change, and user bases change.
Why isn't React still using `createClass()` and mixins? Why were ES6 classes and hooks invented? Why did JS add arrow functions and `async/await`? Why did C++ add an `auto` keyword? Why is there ever a new version of a language or library? Wasn't enough thought put into them in the first place?
> Just make it easier to use
And that's exactly what we did :)
"Modern Redux" code is different because:
- We've seen how people have used Redux in practice, what they wanted to do, and what problems they've run into
- We've seen what solutions and other Redux-related libraries the community has invented to solve their problems
- And there's new tools and APIs that have come out since Redux was created, like Immer for immutable updates and React Hooks for writing reusable logic in React components
Like I said, this isn't about "new Redux concepts". It's still state, actions, reducers, dispatching, and subscribing in the UI. It's just much less code to do it, and it prevents common mistakes while using Redux. That means smaller bundles, faster development, and fewer bugs.
This tutorial page shows how Redux Toolkit simplifies all the older Redux logic patterns:
I can't force you or anyone else to use RTK, or to use Redux. If you've got other options that are working for you, please use them.
I'm just pointing out that we _have_ specifically addressed the very concerns you're raising, and if you are genuinely concerned about those issues, you _might_ want to take a look at what we've put together... and I've got a whole lot of enthusiastic RTK users who can vouch for how it's made their development lives and Redux usage experience a lot better.
> Because times change, ecosystems change, patterns change, and user bases change. Why isn't React still using `createClass()` and mixins? Why were ES6 classes and hooks invented? Why did JS add arrow functions and `async/await`? Why did C++ add an `auto` keyword? Why is there ever a new version of a language or library? Wasn't enough thought put into them in the first place?
I think the comment you are responding to is a little harsh, but I find this part of your argument disingenuous. JavaScript's syntactic expansion is not necessarily a good thing; it isn't all progress.
The rhetorical questions you have presented seem to imply that JavaScript gaining classes is obviously a good thing, whereas I strongly doubt everyone agrees. Did prototypical inheritance work? Yes. Does JavaScript still have prototypical inheritance? Also yes. Is it better to have more choice? Not necessarily. The result of this has been more fragmentation, more documentation, more for beginners to learn, more framework churn, etc.
JavaScript the language was originally invented in just 10 days, and the author himself has explained that aside from the deadline pressure, there are regrettable reasons for the language's inconsistent API.
I'm afraid that if you're focusing on the technical merits of prototypal inheritance vs classes, you've missed my point completely.
The parent comment seems to be arguing that a library or tool should be perfect in its very first release - that the designers should have anticipated every possible use case, target audience, and piece of API design.
My point was that _everything_ in technology evolves over time, and Redux is no exception.
The Redux core _is_ exceptionally well designed, especially given that Dan and Andrew really only worked on it for a two-month stretch. During that time, they iterated on multiple ideas, particularly the transition from "stores" to "stateless stores" to "reducers", and coming up with the middleware API as a way to allow pluggable side effects logic. Similarly, the design discussion for what became `connect` [0] shows from the very first comment exactly what design constraints were relevant, and while it took some time to nail down the specifics, Dan clearly had the key criteria in mind.
At the same time, many of their early ideas about how Redux would be used turned out to be wrong. Dan assumed that people would likely only connect top-level components, and it turns out that connecting many components across the tree is better for performance. Andrew came up with the "Flux Standard Actions" standard, but his idea of having `{error: true, payload: new Error()}` and reusing the same action type for errors and successes has been completely ignored by the ecosystem - it's a lot easier to have separate action types for success and failure. They both thought that thunks were a stopgap that would quickly be replaced by something else, and neither of them could have anticipated the creation of Immer.
So, in the real world, no tool is ever designed perfectly from day 1. Requirements change and usage patterns change. That's why libraries and tools publish new versions - to respond to those changing situations.
That's why we created and designed Redux Toolkit - to respond to the changes in how people want to use Redux, and the pain points they've experienced using it.
> I'm afraid that if you're focusing on the technical merits of prototypal inheritance vs classes, you've missed my point completely.
That's not what I'm focusing on, and I don't know how you've managed to interpret my comment that way.
In your comment, you said this:
> Why were ES6 classes and hooks invented?
That's why I used classes as a concrete example of the larger point that I was trying to make. My point is that change is fine, but just continually adding stuff to appease people is not necessarily a good thing. All that's happened to JavaScript — as I already pointed out — is the foot-gun now has a wider blast radius and less travel on the trigger.
A perspective from a backend developer who is spoiled from working with very good programming languages:
> Why isn't React still using `createClass()` and mixins? Why were ES6 classes and hooks invented? Why did JS add arrow functions and `async/await`?
Sorry, but all of these are mistakes. I predict that hooks will not survive the test of time, and neither will React as a whole. Arrow functions are mere syntactic sugar, but async/await was a mistake.
No it's not. It is saying Javascript has async/await, which is just a specialization of the more general monad-o-syntax (or however you want to call it). And that is precisely what I meant.
I can second that, I’ve just recently started migrating my open source application [0] to use Redux Toolkit and the hooks API (+Typescript) and it’s been a pleasure to use and MUCH less boilerplate than the previous implementation. Thanks for your efforts working on this!
Yeah, RTK is written in TS and specifically designed for a good TS usage experience with a lot of type inference, and the React-Redux hooks API is _much_ simpler to use with TS since it's really just a matter of declaring the type of `state` in your selectors. (Well, okay, to use thunks in components you gotta supply a customized type for `dispatch` that knows it can accept thunks too.)
FYI, we've got some new RTK APIs in the works that are close to alpha status that I think are really gonna surprise people and make some use cases a _lot_ nicer too. Hoping we can formally announce them here in the next couple weeks.
Lemme glance at that repo you listed there...
Oh wow, that's a really impressive project there, and some neat larger-scale use of RTK! Thanks for linking that.
A few quick bits of feedback on your code since I've got it open:
- A couple of those extra middleware look like they could maybe just be hand-written thunks instead, such as `projectMiddleware`
- for `projectMiddleware` in particular, you probably ought to be doing `storeAPI.dispatch(loadProject)` instead of trying to unwrap the thunk yourself
- strictly speaking, a reference to `window.width` shouldn't be in the reducer itself - the value should be in the action, preferably via the `prepare` callback. (In practice it won't really make a difference, just pointing it out as an FYI.)
- always nice to see actual uses of `undoable()` in the wild :)
I really appreciate you taking the time to code review the Redux store! This is some really great advice that I'll definitely be implementing. The project has definitely been a big learning experience :)
Very much looking forward to these new RTK APIs. Thanks again for your help.
I hate that it got chosen as the de facto solution, and that there was such a strong push for it as the main state management solution for react. Even though redux devs themselves have come out and said that it's not for every situation. Where does that leave people?
I really think there should be a greater focus on being ergonomic rather than theoretically good. There should be one package that someone can just jump into. Separating react-redux, redux, and now redux-toolkit is just frustrating.
I just get this gut feeling with Redux that people just wrote and wrote stuff, without really stopping and squashing it down. The fact that part of the messaging around using a toolkit also involves reading up on the essentials is a problem.
Even if the problem space is very complex there shouldn't be this much information - I think it might be easier to learn some programming languages than the entirety of the redux ecosystem. And frankly, I don't even think the problem is that complicated, the basic flux model isn't that complicated. And yet it's been made to be so complicated. I recently did a refresher, and on the react-redux example, just in the intro of a basic todolist, there's ~50 things that are highlighted - implying you should remember them - in just the section of setting up a store. Not the entire section on stores, just the intro outlining the brief overview of the rest of the tutorial.
There is just so much stuff.
I don't even think the toolkit makes things that much simpler either. It's a step in the right direction, but it feels really leaky when I get into the intermediate and advanced sections.
I get some of what you're saying here, but I also feel a couple of these complaints are misguided.
For one thing, the Redux team has never intentionally tried to make Redux "the main state management tool for React". We've never deliberately sold it that way, and goodness knows the React team has said "React is the only official state management tool for React". Redux got popular because it solved the problems people were dealing with when it came out, better than the existing tools in the ecosystem, at a time when React was taking off. We've done our best to clarify in the docs when it does actually make sense to use Redux.
Redux Toolkit is still Redux. You're still writing reducers, creating a store, dispatching actions, and updating the UI based on state. RTK just allows you to do so with less code. So, the "Essentials" tutorial starts by introducing those terms, concepts, and data flow, because you need to understand those regardless of how you're writing Redux logic.
While Redux _is_ normally used with React, it's also UI-agnostic, hence the separation from React-Redux, and using it with other UI frameworks is a valid use case that we support. We had some long debates about whether to add these additional APIs into the Redux core [0] [1], and concluded that there was too much inertia around the existing Redux ecosystem to build those into the core `redux` package. As it is, even though RTK isn't required, we've had folks say they didn't like some of the opinions we've built into RTK (like using Immer for writing reducers). So, combining all three of those packages into a single super-package is not feasible at this point.
I've tossed around the idea of trying to merge the Redux core, React-Redux, and RTK docs into one site, but that would likely require merging all three libs into a monorepo. Not impossible, but not something that's likely to happen any time soon, if only because my own time is very limited and I have a lot of other docs work I need to focus on [2].
Which specific docs page are you referring to? You said "intermediate" and "advanced" - sounds like you're referring to the RTK docs site tutorials [3]? FWIW, I wrote those a year ago, and at the time the main Redux docs were frankly very out of date. I've spent much of this year rewriting the Redux core tutorials, as listed above, and those Redux tutorials are where I would direct anyone who's new to Redux or really wants to know about RTK. We do also have an open issue to discuss what tutorial content should be on the RTK docs site [4], and tbh I'm really not sure _what_ tutorial content should actually be on the RTK site now that the "Essentials" and "Fundamentals" tutorials exist in the core docs.
You're right that the actual Redux core itself isn't very complicated, but the biggest issue is trying to teach people to "think in Redux". In fact, it's kind of _because_ the core is so small that there's a lot of documentation needed - so much of this is actually in the patterns around how to use Redux effectively.
Anyway, if you can point to some more concrete concerns or have specific suggestions on how we can improve things, I'm very interested - please feel free to comment over in the issues.
If Elm had extensible unions, I would advocate staying away from nested hierarchies as much as possible. They lock you into a single access pattern. Unfortunately Elm does not, so nested hierarchies it is.
>I will at this point sympathise with people who have criticised Elm in the past for some of its most vocal proponents being frustratingly unhelpful.
As always, Elm's biggest flaw is the sanctimonious nature of its design committee. No native modules, no custom operators, no lenses (would they really ban them?), etc.
That second one even comes back to bite the code here, because compared to e.g. Haskell lenses, the Elm lens code is ugly and verbose. Having to manually write lenses instead of being able to auto-generate them with macros is yet another Elm failing, though to be fair, I don't know that macros are opposed by the committee.
As someone who entered the functional world through Elm, trying to learn Haskell made me appreciate all the things it doesn't have. Operators are a particular offender, because every time I look up how to do something I have to memorize a new sequence of characters that doesn't mean anything. Incredibly frustrating.
Elm for me is like a refuge from JavaScript where anything goes (good and bad).
It’s not a hammer for every nail, but it’s simple and predictable for the most part.
There are other compile-to-JS languages like BuckleScript / Reason, PureScript, ClojureScript, etc. with all the bells and whistles.
Ultimately, I think Elm is in a good place. That being said you have to pretty quickly jump into the deep end once you reach a fairly moderate sized code base.
> compared to e.g. Haskell lenses, the Elm lens code is ugly and verbose.
Monocle isn't the only game in town for lenses.
Arguably, you're still going to need to write the code that isn't generated, but the composition of lenses can be done using the generic composition operators (`<<` / `>>`) too, which makes them much more elegant.
The only feature I miss in Elm is do-notation. Idris does a great job of introducing do-notation without requiring a Monad instance, simply treating it as sugar for the bind operator.
> These functions almost inevitably end up needing something from the top-level application-wide state, so you’ll often see some type signature like this:
> This is way too complex already, and this approach doesn’t even actually buy you anything.
Is this accurate? I feel like this may not be what the author intended to write, since the update function isn't given any kind of message.
The author's approach has data for every page of your app stored in the model at once, and available to every other page. Also, any page can edit the data for any other page and send messages to any other page. That makes me nervous.
We went with having a small amount of global state that was visible to every page, and that every page could update (returning (Model, GlobalState, Cmd Msg)). That seems... basically fine? It's still a bunch of boilerplate, but not that much, and I don't think we've had significant problems with it. We need Cmd.map and Html.map but I don't know why the author thinks this is bad.
(The boilerplate is mostly turning `(model, cmd)` into `(model, globalState, cmd)`. On the other hand, we don't need to wrap each of our messages with `ThisPageMsg`. Most pages don't need global state, and we let them use a regular Msg -> Model -> (Model, Cmd Msg) update function.)
(Oh, I think very occasionally we do need to send a message up to the top-level as well. I don't remember exactly how we do that. It might be something like... those pages' update functions take a `PageMsg` and return a type `PageMsg PageMsg | GlobalMsg GlobalMsg`, and the top-level update function interprets `ThisPageMsg (GlobalMsg ...)` and passes through `ThisPageMsg (PageMsg ...)`. Again, boilerplate, but not significantly problematic, and only on pages that need it.)
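Concretely, the shape we use is roughly this (a minimal sketch, names hypothetical):

    -- Hypothetical global state shared across pages.
    type alias GlobalState =
        { loggedInUser : Maybe String }

    type PageMsg
        = NameChanged String

    type alias PageModel =
        { name : String }

    -- A page that needs global state takes it as an argument and
    -- returns a (possibly updated) copy alongside its own model.
    updatePage : PageMsg -> GlobalState -> PageModel -> ( PageModel, GlobalState, Cmd PageMsg )
    updatePage msg global model =
        case msg of
            NameChanged name ->
                ( { model | name = name }, global, Cmd.none )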
Author here. You're right, there was a line missing. That's now fixed. Thanks for spotting that.
To address your other points: in just about every Elm application I've ever worked on, there was an initial ambition to separate and encapsulate things nicely, for reasons of safety or Clean Code or some other philosophical ideal.
Then requirements change (which is normal and good — businesses change and evolve, and the product should mirror that). Then you discover that oh, you need a bit of state from one page in another. Oh, and modifying the state of something on one page should affect this other page. Hmm… Why does this view function now take seven arguments?
Your final paragraph hints at the kind of engineering that inevitably begins to appear as a workaround to solving the wrong problem in the first place. All of it is extra complexity that doesn't need to exist at all.
> The author's approach has data for every page of your app stored in the model at once, and available to every other page. Also, any page can edit the data for any other page and send messages to any other page. That makes me nervous.
Why this emotional reaction, specifically? Does prior experience with bad surprises in Elm give you this intuition? Or does it come from experience with a language like JavaScript, where global mutable state is indeed something to be wary of?
> Your final paragraph hints at the kind of engineering that inevitably begins to appear as a workaround to solving the wrong problem in the first place. All of it is extra complexity that doesn't need to exist at all.
I agree it's extra complexity, and it's not strictly necessary. But it's not that much extra, and it doesn't affect pages that don't need it, which in the app I work on, most don't. (Whereas if all your pages take a top-level model, and return a top-level message, they all need to deal with that even if they don't need it.)
> Does prior experience with bad surprises in Elm give you this intuition? Or is it from experience from a language like JavaScript where global mutable state is indeed something to be wary of?
Not specifically Elm, I've only written it in one app.
But I'm imagining bugs where a page mutates its state in ways that satisfy some invariant, and then that invariant is violated and no one can work out how it happens. Turns out some other page is editing its state, in ways that probably made sense at the time. (Granted the debugger will probably help, but... I dunno, I'd rather just not have to.)
I'm not convinced large Elm apps don't have problems like this. And saying "no, if you want to communicate with another page, you have to do it through this specific mechanism" seems like a fine way of avoiding it.
The solution to this particular problem is to stop thinking in terms of page.
Which page you're on is only a matter of view, and maybe a minimal amount of view-related model. Most of your model data should be related to your application logic, not to a specific page or component. Separating your model logic and your view logic helps a lot to avoid having this problem, because your model logic becomes agnostic to what page modifies it.
I've never tried it your way, so maybe I'm missing something here. But thinking in terms of pages seems like a very useful abstraction to me. I can't keep the whole application state in my head at once, so I want to be able to look at a small part of the app and know that I don't need to think about anything else right now. (It maybe won't be "literally just this page", but "this page plus a small number of well defined mechanisms for bringing data in and out of it".) I wouldn't want to give that up, and I don't know how I'd split the state if not by page (or at least something similar, like "groups of closely related pages").
If my model logic is happy to be modified by any page, and if there's a bug in my model logic, then I have no idea where to look.
You can think of it as an API. If your API is called, should it work differently depending on whether the request came from a browser or from a terminal curl? Probably not.
Similarly, if you have a message that calls for your model to be changed (for instance, adding a product to the cart), that message describes cart logic. The product could be added directly from the product page, or the frontpage, or from a suggestion on another product page. It is probably not relevant to the cart how exactly the product came to be added.
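As a minimal sketch (hypothetical names): any page's view can produce the same message, and the cart logic lives in one place.

    type alias ProductId =
        Int

    type Msg
        = AddedToCart ProductId

    type alias Model =
        { cart : List ProductId }

    -- The update neither knows nor cares which page dispatched the message.
    update : Msg -> Model -> Model
    update msg model =
        case msg of
            AddedToCart productId ->
                { model | cart = productId :: model.cart }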
Not all the model is like that. There is a subset of the model which exists only to describe the state of the view. But, usually, a significant part of the model is dedicated to page-independent data.
> I want to be able to look at a small part of the app
In my app, the small part of the app I'm going to look at is a single feature which may be used in multiple pages. It is easier for me to keep a consistent behaviour if that feature is independent from the view.
Fair enough. I get the sense we're working on fairly different apps.
It's also possible that if the app I'm working on were structured differently, we'd have done things more like the way you're doing things. I'm getting vague thoughts in my head along those lines, which I'll have to let ruminate and see if they go anywhere.
> If my model logic is happy to be modified by any page, and if there's a bug in my model logic, then I have no idea where to look.
I don't mean this sarcastically (hard to convey tone in written form), but is this genuinely something you have encountered with Elm in practice? Even with the time-travelling debugger?
No. Like I said, I've only worked on one Elm app, and that one doesn't let model logic be modified by any page (with some well-defined exceptions). But it's what I'd expect to sneak in if we hadn't architected it to forbid that kind of thing. I agree that the time-travel debugger would help, but... I guess, even if that makes it less costly, I'm not seeing the benefits.
I know the point was "this is a bad type signature", but I guess I'm still confused. I can't think what problems someone is trying to solve by writing a function like this.
I can maybe see why it might have both `Model` and `PersonalInformationModel`: the author doesn't want to prefix everything with `model.personalInformation` so they pass both `model` and `model.personalInformation` to it. (Maybe they forget that they can start the function with `let personalInformation = model.personalInformation`?) So I can maybe guess why `PersonalInformationModel` might be there, and if my guess is right I agree it's silly.
But I have no idea what the `Cmd PersonalInformationMsg` is doing, and it makes me think I don't understand the `PersonalInformationModel` either. Where does that input come from, and what does the function do with it?
If it seems confusing, it's because it is. That's the point I'm trying to make. I have seen — several times across a number of projects — people writing code with a philosophical ideal along the lines of "well, this page should only know about the state (a subset of the model) that it is responsible for, so I won't give it the whole model. Also it will only send the messages related to just this page. Oh, requirements changed. No problem, I'll just add another part of the global state as an argument. Oh, requirements changed again, and I need to send a global message. Hmm… I'll just… etc"
Furthermore, I've observed over a few years the behaviour in which developers with significant JavaScript ecosystem experience will impose these philosophical ideals far too early in an Elm application's design because they aren't familiar with Elm's refactoring story and are justifiably wary of a codebase growing to the size and level of complexity that is beyond saving.
I understand and agree with the idea of generally narrowing types to simplify APIs, but in my experience, the typical "page" in a multi-step webform actually contains too many cross-cutting concerns for this approach to remain frictionless as business needs change (and if your business/product needs don't change, then I would like to buy whichever crystal ball you're in possession of).
> If it seems confusing, it's because it is. That's the point I'm trying to make. I have seen — several times across a number of projects — people writing code with a philosophical ideal along the lines of "well, this page should only know about the state (a subset of the model) that it is responsible for, so I won't give it the whole model. Also it will only send the messages related to just this page. Oh, requirements changed. No problem, I'll just add another part of the global state as an argument. Oh, requirements changed again, and I need to send a global message. Hmm… I'll just… etc"
But... none of that, as far as I can tell, gives you a signature that looks like this? Even making the kind of changes where you think "okay, this is an ugly hack, if I'd known about this in advance I'd never have written it this way, but I don't have time to go back and do it properly so..."
If you've genuinely seen signatures that look like this, then my sense is that the people who wrote those functions have different problems than the one you describe.
Maybe I'm pulling on something that turns out not to matter here. Like, if you've experienced the problem you describe, and you just came up with an example silly type signature that turns out not to really work as an example, then fair enough, that's not a big deal. But if this post was inspired by seeing type signatures looking like that, and trying to explain those type signatures, and trying to come up with advice that will prevent them, then I would guess the advice is mistargeted. Or maybe I'm missing something and the process you describe really does lead to a signature like that, by people who are generally more-or-less competent; I don't say that it couldn't happen, only that I don't see how it could.
(I disagree with the sentiment "If it seems confusing, it's because it is." If people who lack a certain mental model repeatedly make the same sort of mistake, and generate something that seems confusing to people who have that model, it can be extremely helpful for the people with the model to figure out what's going on so they can try to share the model.)
Did you, perhaps, mean something like
    updatePersonalInformation
        : PersonalInformationMsg
        -> Model
        -> Result (PersonalInformationModel, Cmd PersonalInformationMsg)
                  (Model, Cmd Msg)
? That would let a person say "this branch gets to update the global model, but all the others can stick with local changes and I just need to wrap their return value in `Err`".
(Which is still silly, just start the function with
    let
        local : (PersonalInformationModel, Cmd PersonalInformationMsg) -> (Model, Cmd Msg)
        local = ...
and wrap the return values in that. But I can at least see what someone would be doing with a type signature like the `Result` version.)
Right. So we need to pass the model for the entire app to our sub-update functions. This is the problem with the Elm architecture: it requires a massive amount of boilerplate for anything more complex than a wizard.
One nice trick I use to avoid boilerplate is SetModel msg which just sets new model state (passed in message itself). Good for simple and stupid messages like text input that just updates the model.
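Roughly like this (a minimal sketch, field name hypothetical):

    import Html exposing (Html, input)
    import Html.Events exposing (onInput)

    type alias Model =
        { name : String }

    type Msg
        = SetModel Model

    update : Msg -> Model -> ( Model, Cmd Msg )
    update msg _ =
        case msg of
            SetModel newModel ->
                ( newModel, Cmd.none )

    view : Model -> Html Msg
    view model =
        -- The view bakes the next model directly into the message.
        input [ onInput (\text -> SetModel { model | name = text }) ] []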
This seems like it would introduce bugs, where the model changes before that message is handled, and your old model you put in your message overwrites those changes.
If the model changes, a new view is generated, so your message should take that into account.
I can't remember any concurrency issue arising from handling multiple messages from the view, to be honest. I don't know if there's a specific mechanism blocking them, but it's simply never happened in 3 years of building Elm apps for a living.
> If the model changes, a new view is generated, so your message should take that into account.
You don't have any guarantees about when that will occur though. Updates and view rendering are run independently.
I have an application that tracks mouse movement using pointer events for the user to draw, and I've definitely had a message from that come through after my model has updated and I'm no longer on that screen (and so nothing in the view generated from the model should produce that message), so it definitely can happen, even if you haven't encountered it.
Even if you don't encounter it that way, there are a bunch of footguns using this approach (e.g. doing this with tasks, HTTP requests, anything that has a defined delay before the message comes back).
But it's not just that: you may e.g. be racing against messages from a subscription (this one is really dangerous if your subscriptions are things like listening for resizing, where that can mean your resizing message is effectively dropped and users see a really messed-up UI).
Correct, you shouldn't do that. On the other hand, simultaneously resizing and typing input is a kind of rare event, and it will be fixed by the next resize/input event, so I wouldn't be too bothered.
Well the tricky thing is you can't "not do that" because your Elm code doesn't have control over when subscriptions fire.
Note that this applies to any subscription and that it is effectively dropped forever; the next input event won't recover it (so in the case of resizing only the next resize event will change things, not the next input event). So e.g. you could have a long running operation in a Web Worker over ports and then if its result comes in right when your user happens to be inputting something then it is dropped and never returns.
No they still can. You can still have concurrency bugs in the absence of parallelism (the classic example of why concurrency and parallelism are not the same thing) as long as you have a concurrent API (which the Elm architecture is an example of).
The basic problem is that there is an undefined delay between when a message is issued and when `update` is called on it, which can be caused by other messages being in the queue.
Here's an example with subscriptions and views.
Time 0: A subscription arrives with message `A` which is added to the message queue in Elm's runtime. Note that one "tick" in Elm's runtime has not yet passed so we don't process the queue just yet (to see why this can happen we can either look at Elm's source code or note that the fact that Elm doesn't lose a subscription even if updates are busy processing forces there to be a message queue of length > 1).
Time 1: Your view sends a message `B` which contains a snapshot of the model `M0` at Time 1. `B` is added to the message queue.
Time 2: Your update function pops off the message queue and processes `A`, resulting in a new model `M1`.
Time 3: The Elm runtime processes another tick and sees no new messages from either views or subscriptions.
Time 4: Since the runtime is now done examining subscriptions and view messages, your update function pops off the message queue and processes `B`, resetting the model back to `M0`.
At the end of this sequence you effectively haven't handled the subscription at all, and all this is on a single thread.
To add to that, a profusion of fine-grained actions like SetFirstName and SetLastName is endemic in Redux-inspired codebases. They are a lot of noise, but more importantly, they force us to keep all our domain logic in a single massive reducer, with no encapsulation between different sections.
This can be improved by using coarse-grained actions like, say, `UpdatePerson(person)`. This new `person` value can be obtained by the event handler itself when the user changes the name. However, the actual logic can sit inside a Person module, allowing us to do something like: `onChange=dispatch(UpdatePerson(Person.setFirstName(newName, currentPerson)))`
This approach lets us keep the behaviour close to the view while encapsulating the actual implementation inside a rich domain model.
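A sketch of the same idea in Elm terms, since that's the article's language (names hypothetical):

    import Html exposing (Html, input)
    import Html.Events exposing (onInput)

    -- In a real codebase, Person and its setters live in their own module.
    type alias Person =
        { firstName : String }

    setFirstName : String -> Person -> Person
    setFirstName name person =
        { person | firstName = name }

    -- One coarse-grained message instead of SetFirstName / SetLastName / ...
    type Msg
        = UpdatedPerson Person

    view : Person -> Html Msg
    view person =
        input [ onInput (\name -> UpdatedPerson (setFirstName name person)) ] []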
As I note in my other comment, this approach is fundamentally unsound in Elm.
The view is generated in its own thread synced to the browser's rendering system. It is entirely possible for the model to have changed without the view having been updated yet to match.
If your model changes under your feet while you use this approach, it results in you silently overwriting that change with the old data.
@Latty points out a very valid point that this is a concurrency bug. However, you can recover it in the same way that other concurrency system use optimistic concurrency (e.g. most STM implementations).
You have a `numberOfOperations` field in your model that is incremented every time a message is processed and then check that your incoming new model has a `numberOfOperations` value that is exactly one larger than the current value. If not you then fire off a `Cmd` asking for a recomputation of the original message.
However, that can get pretty hairy pretty quickly (since you may not have a nice way of asking for recomputation) so I'm not sure if it ends up working out in the end.
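A sketch of that guard (names hypothetical; the view would stamp its snapshot with `numberOfOperations + 1` when building the message):

    type alias Model =
        { numberOfOperations : Int
        , name : String
        }

    type Msg
        = SetModel Model -- a snapshot built in the view, counter already incremented

    update : Msg -> Model -> ( Msg, Cmd Msg ) -> ( Model, Cmd Msg )
    update msg model =
        case msg of
            SetModel snapshot ->
                if snapshot.numberOfOperations == model.numberOfOperations + 1 then
                    -- Nothing slipped in between: accept the snapshot.
                    ( snapshot, Cmd.none )

                else
                    -- Stale snapshot: keep the current model. A real version
                    -- would fire a Cmd here to recompute the original message
                    -- against the fresh model, when that's possible.
                    ( model, Cmd.none )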
Since 0.19 banned operators, I would avoid using monocle.
If you don't need prisms or isos, I would suggest looking at https://github.com/bChiquet/elm-accessors, which uses basic composition to make composable lenses/traversals.
Nice article. One counterpoint, though: OP uses elm-bridge to generate Elm types from Haskell. I'd advise not to.
elm-bridge is not maintained anymore, lacks documentation and some useful features (no phantom types, for example). https://github.com/Holmusk/elm-street is probably a more viable solution, though I've never tried it!
Whoops! I misremembered. I think we originally considered elm-bridge, but we're actually using haskell-to-elm[0]. I am familiar with Veronika's work and it is consistently of high quality, so I don't doubt elm-street would be a solid choice. In fact, elm-street looks like it supports generating exhaustive sum type lists, which might be the killer feature over the alternatives that leads me to switch.
It's astonishing how much category theory jargon is required to build a frickin wizard. Wizards could be made almost entirely in a GUI - double click a field or button to associate code - way back when I started programming in the mid 90s. I really struggle to see how this is progress.
I used to think that the jargon is stupid. Nowadays I think it is just impractical and prone to abuse. Haskell (or Elm) is not going to become an industry language because the barrier is too high and people love building their tiny islands of expertise. Lowest common denominators are actually a feature of programming languages used by industry. It's why we have languages like Go or Javascript.
> Haskell (or Elm) is not going to become an industry language
I'm not really sure what this means. At Riskbook we have about 50,000 lines of Haskell and about 20,000 lines of Elm running in production. It's working great for us.
That may be true (I'm not actually sure), but it's not at all clear to me why that matters.
Elm won't be used in industry because the language designer used to work at a company and now he doesn't work there anymore? Perhaps I'm being obtuse, but this is a total non sequitur to me.
Prezi was the first major investor in Elm (because they hired Evan) but definitely not the first major adopter.
Source: back in 2015 I was personally in a meeting at Prezi's San Francisco office where Prezi employees were asking me (and two of my coworkers) about our experiences having adopted Elm at work, because they were considering doing so for the first time!
It seems kind of... hubristic? for a company like Prezi to go out on a limb for an obscure language. Did they read PG's old essays about lisp and plan on repeating a fluke?
Lenses are not a concept from category theory and to understand them no knowledge of category theory is required. Category theory just gives you generalizations (other 'optics') and better encodings, but none of that is mentioned in this post.
I hadn't seen it until you linked to it from one of your earlier comments. I have no particular attachment to elm-monocle, and I appreciate you sharing :)
Lazy takes beget lazy responses: Evan's iron grip on Elm is why Elm is good and simple and not a kitchen sink. Other solutions like Purescript already exist if that's what you want.
But when you don't like how something is done that other people do like, there comes a time when you just need to pinch it off and leave the pot so other people can enjoy it.
For some reason it's small niche languages that really bring out the hubris in some HNers. Maybe because they have a sense of ownership over it by virtue of merely using it, something they know is silly when it comes to larger languages.
You are right that I was too lazy to even bother explain my statement.
But you are wrong about the sense of ownership. The bitterness comes from Evan deciding to make backwards-incompatible changes in the new version of the language, forcing you to spend days updating your code base.
It’s been so long that I don’t even remember what the changes were and can’t bother going back and look.
One example that stuck in my mind is when he decided that users should not be allowed to use the prime (') character in variable names.
It was not because the character was now used in some other way. It was just him deciding he didn't like it in variable names, so he forced the change not only on the language codebase but on all the users as well.
Of course this one was easy to change, but it gives you an idea. The rest of the changes were in the same spirit but much more difficult to make.
So it has nothing to do with small vs popular languages. Imagine if Stroustrup had come out and said that since C++11, variable names would be case insensitive and operator overloading was forbidden.
I'd like to agree with you, but I don't think you're helping your argument by picking this specific example.
Using prime characters in bindings is a convention that Haskell programmers (myself included) occasionally use, but it is by no means clearer nor more Googleable. Furthermore, a single project-wide find and replace hardly takes "days".
> Furthermore, a single project-wide find and replace hardly takes "days".
That’s what I said too, no? Unfortunately I can’t remember the changes that required more work. Sorry about that.
> Using prime characters in bindings is a convention that Haskell programmers (myself included) occasionally use, but it is by no means clearer nor more Googleable.
Maybe you are right, but it’s not his place, as a language designer, to dictate that. Especially when he has been promoting this style up until the last release.
It’s no different than the example I gave: forcing everyone in C++ to have case-insensitive names because it’s more Googleable.
For some, maybe including you, it may not be a big deal. But I was trying to justify my “iron grip” characterization and that it had nothing to do with sense of entitlement or sense of ownership. It was purely because he forced me to change my application code for no good reason.
> Maybe you are right, but it’s not his place, as a language designer, to dictate that.
I disagree with Evan on this question and many others, but I just don't understand your criticism here. That seems like exactly a language designer's place? Someone needs to decide how the language is tokenized, what counts as a valid variable name, and so on. Very few languages have the answer "everything not explicitly used for something else is okay in variable names". (And Elm does use the ' character for other things; a rule that would allow ' would allow many, many other things.)
Maybe you mean he should have made a decision and stuck with it? But that would severely limit exploration.
Is your hypothetical that Bjarne Stroustrup tries to do that now, when C++ is 35 years old and ISO certified? That doesn't seem a close match with Elm, which is less than nine years old and not yet on version 1.
> Maybe you mean he should have made a decision and stuck with it? But that would severely limit exploration
Yes, that’s what I meant. That’s what one should do if they want to be taken seriously. I assume you are a software engineer? Do you build APIs for other teams? How well would an argument like this ("I changed the whole API so I don't limit exploration") fly with the rest of the teams in your company?
> That doesn't seem a close match with Elm, which is less than nine years old and not yet on version 1
Yeah, when everything else fails, that’s the only argument left. But this is like saying not to take it seriously and not to use it for anything other than experiments. Which is not how they try to promote Elm.
I believe, based on my experience, this is just an excuse. And I am afraid that after v1.0, when he wants to make similar changes, he will just name it v2.0 and have another excuse.
Maybe for you it’s good enough, but for me it’s not. He lost all good faith with me, and that’s the reason I don’t use it and recommend others do the same.
PS: sorry for being negative and apologize if I offended you with my criticism
> Maybe because they have a sense of ownership over it by virtue of merely using it
As a remark: I've been accused of feeling ownership over Elm. What I actually felt was investment in Elm, which is different. And I think it's an entirely reasonable thing for me to feel.
I know you used the word maybe. Still, I'd generally be wary of attributing motives like this towards people.