I've seen something similar when using Pydantic to get a list from structured output. The input included (among many other things) a chunk of CSV-formatted data. I found that if the CSV had an empty column - a literal ",," - then there was a high likelihood that the structured output would also contain a literal ",,", which is of course invalid Python syntax for a list, so Pydantic would throw an error when constructing the output. Simply changing all instances of ",," to ",NA," in the input CSV chunk reduced the failure rate to effectively zero. A simple hack, but one that fit within the known constraints.
Imagine you have users that want to view an XML document as a report of some kind. You can easily do this right now by having them upload a document and attaching a stylesheet to it. I do this to let people view after-game reports for a video game (Nebulous: Fleet Command). They come in as XML and I transform them to HTML. Right now I do this all client-side, using the browser's XSLT support and about 10 lines of JavaScript, because I don't want to pay for and run a server for file uploads. But if I did, the XSLT support in the browser would make it truly trivial to do.
Now this obviously isn't critical infrastructure, but it sucks getting stepped on and I'm getting stepped on by the removal of XSLT.
I've seen this idea a lot from members of my team: once you use a state management library, absolutely everything has to go through that state management library. This results in some of the pain you describe with state that's specific to the UI. If you're still working with React, try keeping that menu or input state in the component with setState instead of putting it in the global state store. I've found that simple change clarifies our projects immensely by cutting out lots of unnecessary code and highlighting what state is actually important to the functioning of the application.
That seems like a good approach. Still, it doesn't seem like the rendering code or the Store code could ever really be worked with in isolation. As I said above, picking a coupling boundary is like picking a poison. They all seem to suck for different reasons; I just personally tend to choose component boundaries.
I'd recommend starting with the application as an abstract entity: start with the state and don't think about the UI. Once you build a data model and action scheme from there, wiring up a decoupled UI is usually simpler.
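As a minimal sketch of that approach (in Haskell only because it's compact; all the names here are invented for illustration, and the shape is the same in any language), the data model and action scheme can be defined and exercised before any UI exists:

```haskell
-- A sketch of "model the state and actions first, render later".
-- The application logic is a pure function over these types;
-- a UI only needs to display AppState and dispatch Actions.

data Visibility = All | ActiveOnly | DoneOnly
  deriving (Show, Eq)

data AppState = AppState
  { todos      :: [(String, Bool)]   -- (label, done)
  , visibility :: Visibility
  } deriving (Show)

data Action
  = AddTodo String
  | ToggleTodo Int
  | SetVisibility Visibility
  deriving (Show)

update :: Action -> AppState -> AppState
update (AddTodo label)   s = s { todos = todos s ++ [(label, False)] }
update (ToggleTodo i)    s =
  s { todos = [ (l, if ix == i then not d else d)
              | (ix, (l, d)) <- zip [0 ..] (todos s) ] }
update (SetVisibility v) s = s { visibility = v }

-- Exercising the model with no UI at all:
main :: IO ()
main = print (foldl (flip update) initial actions)
  where
    initial = AppState [] All
    actions = [AddTodo "write report", ToggleTodo 0, SetVisibility DoneOnly]
```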
Part of the reason why I personally was so upset by the talk was that it felt as if there was no room for discussion or debate on the points raised. In addition, there was a lot of sniping at features of statically typed languages that seemed designed just to get a reaction from the crowd. The fact that there was an entire slide dedicated to tearing down a series of videos by SPJ felt not only irrelevant, but also disrespectful. There seemed to be a lack of willingness to meet halfway and concede there was anything useful on the other side.
Perhaps most frustratingly, I know that RH is capable of much better, much more informative presentations. There might have been something worthwhile in here, but the tone, style, and majority of the content didn't make it worth digging out in my opinion.
Regarding Haskellers' attitudes, I'll add that I haven't seen anything like what you describe, at least on the Haskell subreddit. It could be happening in other forums, but by and large it's been a welcoming community, even to those who come in skeptical.
It's a keynote talk, not a panel discussion. Most keynotes are expressions of strong opinions.
> The fact that there was an entire slide dedicated to tearing down a series of videos by SPJ felt not only irrelevant, but also disrespectful.
From the transcript:
"Simon Peyton Jones, in an excellent series of talks, listed these advantages of types."
...
"And I really disagree just a lot of this. It's not been my experience."
How is that tearing down or disrespectful? I get that there were a lot of glib bits in the talk, but as you point out, he's talked about these issues with more nuance at other times. It's a shame that hyper-focus on a couple of offhand jabs at the costs associated with types is distracting folks from the very useful larger point he's discussing: levels of problems in programming, the contexts programs run in, and how languages that impose strong opinions about how to aggregate information can be counterproductive.
I think the Haskell community is, overall, very good and welcoming, but smugness does creep in a lot, IME. But if you want to talk about meeting halfway, I find that it's much less common to see static FP folks concede any benefits of dynamic languages (besides that they're "easier" in a kind of condescending way).
And that "one single place" is for this trivial toy program. Now imagine the impedance across the delivery of a real business app.
That seems very incongruous with my experience. I think the problem is that, in my experience, the number of explicit type annotations necessary scales sub-linearly with the amount of code. This is because Haskell (and other languages with what one might consider pleasant type systems) generally have full type inference. So while you might need the explicit type for the toy example, scaling that example up means only a handful of additional explicit types are necessary, and possibly a net reduction.
This comes up in a concrete way in your example. The add function is almost too small to be typed appropriately: there isn't enough information for GHC to constrain the choice of numeric representation, so it picks a default. If you were to embed it in a larger program with more specific operators, no annotations would be necessary.
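As a minimal sketch of that (assuming GHC's standard defaulting rules; nothing here is annotated):

```haskell
-- No signatures are written anywhere below; every type is inferred.
add x y = x + y                        -- inferred: Num a => a -> a -> a

-- On its own there is nothing to constrain the representation,
-- so GHC's defaulting rules pick Integer here.
standalone = print (add 2 2)           -- prints 4

-- Embedded next to a floating-point operation, the same 'add'
-- is used at Double, still without any annotation.
embedded = print (sqrt (add 2.0 2.0))  -- prints 2.0

main = standalone >> embedded
```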
What you did here is the opposite of what real-world development looks like. Here, you picked a `foo` so that the type checker could find consistency.
Typically in the real world, the customer or product manager asks for the thing-of-value. The job is not to find a `foo` that reduces the type annotations. The job is to add business value to your system, to your code.
The probability that the world asks for something that fits so nicely into your type taxonomy is low. The real world and the future are often more creative than our invented taxonomies support. In those cases, what you need to do is go fix up your types/taxonomy so that consistency ensues again. Whether or not that results in fewer type annotations doesn't ultimately matter. It's the work you had to do (the game you had to play) to make the type checker happy.
Note: The implicit assumption behind what I'm saying is that we want to create less work for ourselves, not more. If we don't care about work/time/efficiency, then my arguments in this thread are all moot/baseless.
I did no such thing. I picked a `foo` to prove a point: That the type inference engine is more than capable of inferring every type in a reasonably sized program, and the toy examples we are dealing with are below a minimum threshold. It had nothing to do with satisfying the type checker or finding some "consistency". Furthermore, the larger point was simply that the "type burden" is often not a burden at all and can in fact be completely invisible to you the user. Look at https://hackage.haskell.org/package/turtle-1.4.4/docs/Turtle... and see how very few of the examples have explicit type annotations, and when they do they exist for explanatory and documentation purposes only.
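In the same spirit (not the turtle API itself, just an ordinary annotation-free program using only the Prelude; the file name is made up), a small utility can carry zero signatures and still be fully statically typed:

```haskell
-- No type signatures at all; GHC infers IO (), [String], Integer, etc.
main = do
  contents <- readFile "input.txt"     -- a made-up input file
  let numbered = zipWith label [1 ..] (lines contents)
      label n l = show n ++ ": " ++ l
  mapM_ putStrLn numbered
```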
Please don't assume the motivations or intentions behind my actions and then attempt to use that to prove a point.
To address that point explicitly: in general, if requirements change, then the code needs to change. Neither Clojure nor Haskell will make that fundamental truth go away. You say you want to create less work for yourself when this happens. I'd like to propose that there are different types of work, with differing qualities.

When you receive a requirement change, the first step is to determine what that means for the system as it stands. This is by far the hardest step and is where the majority of the "hard" work lies. The next step is to change the system to fit the new requirement. This is primarily a mechanical task of following the directions you laid out in the first step. I think it's preferable at this point to have some tool or process, even multiple, to make sure you do this part correctly.

Tests, types, and code review are the three major tools I've seen aimed at this problem. Tests help ensure you've specified the changes correctly by forcing you to write them down precisely. Types help ensure you've implemented the changes correctly by forcing consistency throughout the system. Code review helps ensure you've implemented the changes in a manner that is understandable (maintainable) by the rest of your team.

Note also that tests require an investment in writing and maintaining them, types (with global type inference) require no upfront investment but can require some maintenance, and code review requires the participation of team members of similar skill level to you. They also all require some level of expertise to be effective. It seems shortsighted to throw away static types when they are, in my opinion, the cheapest method of helping ensure correct implementation.
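As a hypothetical sketch of that "mechanical" second step (the types and names here are invented for illustration), adding a field to a type makes the compiler point out every site that has to change:

```haskell
-- An invented requirement change: every order gains a currency.

data Currency = USD | EUR
  deriving (Show)

-- Before the change this was: data Order = Order Int Double
-- Adding the field makes every existing construction and pattern match
-- a compile error until it is updated, which is the "mechanical" step.
data Order = Order Int Double Currency

describe :: Order -> String
describe (Order oid amt cur) =   -- the compiler forced this site to change
  "Order " ++ show oid ++ ": " ++ show amt ++ " " ++ show cur

main :: IO ()
main = putStrLn (describe (Order 1 9.99 USD))
```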
> Tests, types, and code review are the three major tools I've seen aimed at this problem.
I did a job for a defense contractor once where the entire team of 10 was brought into a room to code review, page by page, a printed-out copy of the latest code. Whether this was rational or not, I'm not sure, but I'll give them this: in mission-critical systems you don't want to take any chances, and you're willing to be far less efficient to make sure you don't deploy any bugs.
I've been at a few companies where one of the engineers discovers code coverage tools and proposes that the team maintain 100% coverage of their code. These were mainstream industry software businesses that could afford some minor bugs here and there, so no one thought this was a good idea. Most engineers I know doing industry programming think that trying to sustain some specific percentage is a bad idea. Most say that you have to write your tests with specific justification and reason.
So no reasonable person I know would advocate an overall mandate to do code reviews like the above, or to hit some specific metric on automated tests. And yet the general tendency of statically typed programming languages is to enforce the usage of types through and through. This should be really interesting, considering that Code Review and Automated Tests are far more likely to root out important bugs than a type checker.
I'm not arguing against type verification. I agree with you that it is one tool among many. It's the weakest of the three tools you mentioned, yet we are still somehow arguing whether it should be (effectively) mandated by the programming language or not.
Why mandate type verification and not unit test coverage? Why not mandate code review coverage? Because, when mandated, they cost us more time for the benefit they bring.
My main argument is that type verification is a tool that should be used judiciously not blindly across the board.
> This should be really interesting, considering that Code Review and Automated Tests are far more likely to root out important bugs than a type checker.
Not to bring up the tired argument again, but as far as I know there isn't proof for this. Nor is there proof that types are better at catching bugs than tests or code review. The best we can hope for is anecdata. I've got my fair share that shows types in a good light and I'm sure you've got your fair share that shows tests in a good light.
That being said, global type inference is in my opinion a fundamental game changer here. No longer are you required to put effort into your types up front. It is the only one of those three tools with that property. This makes it trivial to get "100% coverage" with types. That is why people argue for static type coverage.
Additionally, Haskell specifically has a switch to defer type errors to runtime, which means you can compile and test without 100% coverage.
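A minimal sketch of that switch (the flag is `-fdefer-type-errors`; the tiny program below is just an invented illustration):

```haskell
{-# OPTIONS_GHC -fdefer-type-errors #-}
-- With -fdefer-type-errors, GHC reports the type error as a warning at
-- compile time and inserts a runtime error at the offending expression,
-- so everything else still compiles and runs.

broken :: Int
broken = "not a number"      -- ill-typed, but only fails if evaluated

main :: IO ()
main = do
  putStrLn "This part of the program runs fine."
  print broken               -- the deferred type error fires here
```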
I tried this, because my first thought was that it was much the same as the first example. Certainly Haskell doesn't have a variadic addition function by default, but I don't think that's the biggest concern here (variadic addition is easily just summing a list). The surprising thing was entering both an "integer" and a float to be read. You certainly can parse floats from input, but this: `(+) <$> readLn <*> readLn` doesn't do it, even though at first glance it seems like it should. If you wanted to handle fractional values, you would have to hint to the type checker that you wanted to do so. I do think that this example is perhaps too specific, and highlights only the narrow topic of dynamic number conversion, which is, depending on the situation, both a blessing and a curse. I will concede, though, that if you want dynamic number conversion, Haskell isn't going to give you that easily.
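For what it's worth, a minimal sketch of the hint I mean (assuming GHC's standard defaulting rules):

```haskell
main :: IO ()
main = do
  -- Without the annotation, GHC's defaulting picks Integer for the shared
  -- numeric type, so an input like "2.5" fails at runtime with "no parse".
  -- Pinning the result to Double accepts both "3" and "2.5".
  total <- (+) <$> readLn <*> readLn :: IO Double
  print total
```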