as a new haskell learner who started this month, I've really learned empathy from banging my head against the language
the docs have a specific tone or pattern that's everywhere (and by docs I mean everything -- library readmes, examples, language intros, and stackoverflow posts)
they all read like 'well, you probably haven't been introduced to monads yet. I can't answer your question directly in a short post (these posts are all like 6 paragraphs, nothing short about them), but I'll tell you why you can't do what you're trying to do and I'll also explain why I can't tell you what a monad is'
also my laptop battery keeps dying because the builds are so expensive
The lecture videos are short, about 10mins each, and Professor Grossman explains the concepts carefully and succinctly, within a fixed teaching framework so you have a predictable learning experience with each concept. It's by far the best way to get into the mindset of typed FP.
Type classes, functors, and monads are assumed knowledge in most documentation, as they are foundational concepts in Haskell. Just as in math, you must know addition, subtraction, and multiplication before learning algebra.
A bunch of languages have special syntax to solve hairy problems: async/await, yield for generators, ? for dealing with errors in Rust, etc.
If one of those handles composition and variable scoping in an intuitive way, it's a monad. This "intuitive way" amounts to respecting the same refactorings as block scoping in imperative languages.
{
let x = foo()
let y = bar(x)
let z = baz(x,y)
...
}
should stay equivalent to
{
let x = foo()
let z
{
let y = bar(x)
z = baz(x,y)
}
...
}
and you should be able to add empty scopes without changing anything so
{
foo()
}
should still equal
{
{}
foo()
{}
}
And that's the entire definition of monads. The abstract monad interface is useful because it lets users define new interpretations for the syntax sugar, but it's not necessary when getting started with Haskell - you will pick up the intuitions as you use it.
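For what it's worth, those two scope-refactoring rules are exactly the monad associativity and identity laws, which you can see directly in do-notation. A small sketch (the Maybe values here are just arbitrary stand-ins):

```haskell
-- The nested-scope equivalence is the associativity law:
--   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
flatChain :: Maybe Int
flatChain = do
  x <- Just 1
  y <- Just (x + 1)
  Just (x + y)

nestedChain :: Maybe Int
nestedChain = do
  x <- Just 1
  z <- do y <- Just (x + 1)
          Just (x + y)
  return z

-- The "empty scope" rule is the identity law:
-- inserting a return/pure changes nothing.
withEmpty :: Maybe Int
withEmpty = do
  return ()
  x <- Just 1
  return x
```

Both `flatChain` and `nestedChain` evaluate to `Just 3`, and `withEmpty` to `Just 1`, so the refactorings really are no-ops.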
I’d just recommend skimming over something like Haskell book to gain some vague familiarity with what concepts exist, then just write code and learn applicative/monad/monoid etc as you run into walls.
Generalizations are notoriously hard to describe as they become more general. Better to understand where and when to use those patterns over being able to write a blog post about what a monad is.
I don't know if they're good for others, but I read Real World Haskell[1] and Learn You a Haskell[2]. There's also Graham Hutton's book which people like and which is free to read this month or something[3].
In principle, you wouldn't do any logging, even debug logging, in a pure function. It always returns the same output for the same input anyway. You'd just check its output.
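A small sketch of that point: a pure function is fully characterized by its return value, so you test the output rather than log internals. (The `area` example is made up; `Debug.Trace.trace` is the real, impure-under-the-hood escape hatch if you must peek while debugging.)

```haskell
import Debug.Trace (trace)

-- Same input, same output: just check the result, no logging needed.
area :: Double -> Double
area r = pi * r * r

-- If you really must peek during debugging, trace prints its
-- message as a side effect and returns the second argument.
areaTraced :: Double -> Double
areaTraced r = trace ("area called with r = " ++ show r) (area r)
```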
and they all have easy, one-line, correct examples of how to do it, within the first 20 seconds of reading.
(This is not to criticise -- sometimes one misses the forest for the trees -- as a professional Haskell consultant who runs trainings I am genuinely interested in what problems people face and why.)
If you're serious about picking up the language, I urge you to simply start building something. Build something you can relate to, something similar to what you've already built with a different toolset. Like a web server.
A lot of the fancy abstractions don't make sense until you have a problem where you actually need them. Artificial problems, like some of the algorithmic assignments in books, just don't click nearly as well for many of us.
Monads are very simple. If you know how futures / promises work (the chain of .then), you already have enough intuition about them.
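To make the promise analogy concrete, here is a rough sketch (the fetchUser/fetchPosts/render names are hypothetical stand-ins, not a real API): a JavaScript chain like `fetchUser().then(u => fetchPosts(u)).then(ps => render(ps))` corresponds directly to chaining with `>>=`:

```haskell
-- hypothetical stand-ins for async steps
fetchUser :: IO String
fetchUser = pure "alice"

fetchPosts :: String -> IO [String]
fetchPosts u = pure [u ++ "'s first post"]

render :: [String] -> IO ()
render = mapM_ putStrLn

-- the .then chain, spelled with >>=
pipeline :: IO ()
pipeline = fetchUser >>= fetchPosts >>= render

-- the same chain in do-notation
pipeline' :: IO ()
pipeline' = do
  u  <- fetchUser
  ps <- fetchPosts u
  render ps
```

do-notation is just sugar for the `>>=` chain, the same way async/await is sugar for the `.then` chain.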
Also, (Glasgow) Haskell has ghci, a rather nice REPL. You only need to learn to use :{ ... :} for multiline stuff. It's very good for quick experimentation with zero build time.
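A quick sketch of what a multiline ghci session looks like (the safeDiv definition is just an example):

```
$ ghci
ghci> :{
ghci| safeDiv :: Int -> Int -> Maybe Int
ghci| safeDiv _ 0 = Nothing
ghci| safeDiv x y = Just (x `div` y)
ghci| :}
ghci> safeDiv 10 2
Just 5
ghci> safeDiv 1 0
Nothing
```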
I worked as a Haskell developer for a year and a half. The best productivity boost I had was switching to compiling on an EC2 instance. I saved battery, sped up compiles with greater parallelism than my XPS had cores for, and didn’t have to worry about cleaning up ~/.stack
Could you give a short explanation or link to a write up? Which size of EC2 instance works well for compiling Haskell? Does the EC2 live only for the duration of the compile, or permanently? If only for the duration, how do you deal with persisting the installed libraries to disk? Thanks!
I can’t recall exactly, I want to say 16 vCPUs and 64GB. How well a project can make use of the cores depends on how wide your dependency graph is vs how tall, as the unit of parallelism is a package.
The EC2 instance was on continuously, and I would ssh+tmux in. You could probably replace this with a server plugged in under your desk, depending on how much you trade off operational costs and capital costs.
More often than not I wanted passing tests to send something for review. If I needed an artifact, I would publish a Docker image on an internal registry.
There is this writing style that seems prevalent around certain programming circles, particularly the JavaScript, Python, and Ruby communities. It's informal and somewhat hammy, perhaps to seem friendly and more approachable. I imagine it's pretty fun to write this way. Do people find it easier to read?
I think it's fair to say it was pioneered by why's (poignant) Guide to Ruby [0].
Spolsky is a fan of the style. He's made the case for deliberately writing documentation that's fun to read, rather than using a dry style. As he puts it, nobody read the spec, because it was so dang mind-numbing. [1]
Please don't follow Joel's advice. He's taking the wrong lesson. You can be conversational, you can load your document with copious real-world examples and justifications for rules, and illustrations, and explanations of how different parts of the spec interact, and warnings about common mistakes, and FAQs. But don't fill it with distracting unfunny garbage like Miss Piggy poking at a keyboard with eyeliner because her fingers are too fat.
The documentation can be enjoyable... the spec, I think, should be dry as a cracker. But if you want to write enjoyable docs, write like John Carlos Baez writes math/physics papers. The enjoyability should serve the clarity. Anything else is superfluous.
Depending on the circumstances, I often do. For a big topic like this, part of the reason I like it is that the pacing is better. It tends to be less densely packed, and lets me read through something carefully but not feel too strained. It also has a story, if you will, that helps me get back on track later on if I'm sticking with it. It's not to say a more formal style of writing couldn't achieve similar goals, but I just find they don't tend to.
There are plenty of scenarios where it wouldn't fit (references, specifications, etc) of course. So I don't think it's great everywhere. But yeah, overall I find it easier to read when starting out on a big topic.
Personally I found that style funny for a while but now I feel it's becoming tedious and won't age well. In the long run I prefer a style like Kernighan & Ritchie which is concise, easy to read and timeless.
we did this book for our work book club a year or so ago. most people in the club enjoyed reading it. there were issues getting some of the examples to work at the time, but that's another matter.
I learned FP through this book when it was first shared here on HN in 2018. It's very nicely written and clearly explains the concepts. I strongly recommend this book to anyone who wants to learn more about FP!
About 'this': "We have to bind all over the place lest 'this' change out from under us".
I wonder if there are other languages with similarly peculiar behaviour of 'this'.
It's dynamic scoping, as opposed to lexical scoping. With dynamic scoping, a function's free variables are resolved in the caller's environment at invocation time, while with lexical scoping the environment is tied to the scope of the original definition.
JavaScript is kinda weird because it uses lexical scoping for the most part, but uses dynamic scoping for 'this'.
Lexical scoping looks sane, and it's adopted by most programming languages nowadays, because it's much easier to reason about - the code works the way it reads, in the same file. Dynamic scoping, by contrast, leaks variables everywhere: when you need one it's not there, and when you don't want it, it sneakily replaces variables you expected to be something else.
But there are still quite a few mostly dynamically scoped languages in use, like Emacs Lisp, or shell languages like Bash and PowerShell. Besides being (debatably) faster and easier to implement, dynamic scoping is also handy when you want to override environment variables in short shell scripts. But for large codebases it's a nightmare, so traditionally Emacs Lisp scripts have super-long-variable-names-in-order-to-avoid-collisions, just like writing CSS. People have finally had enough, so newer versions of Emacs Lisp have a syntax that can enable lexical scoping.
Common Lisp is interesting because it has both lexical and dynamic scoping. This is a quite useful feature, as functions can override config variables. The only other language that provides this kind of configuration that I know of is Scala. Scala accomplishes this through implicits instead of dynamic variables. Implicits are more flexible, but I think there are some good arguments to be made for careful use of dynamic scoping.
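Incidentally, Haskell has something in this family too, via GHC's ImplicitParams extension: a `?name` constraint behaves like a dynamically scoped config variable that callers can rebind for a dynamic extent, much like a Common Lisp special variable or a Scala implicit. A sketch (the `?verbosity` parameter and `logMsg` function are made-up examples):

```haskell
{-# LANGUAGE ImplicitParams #-}

-- ?verbosity acts like a dynamically scoped config variable:
-- whoever calls logMsg supplies the binding in scope at the call site.
logMsg :: (?verbosity :: Int) => String -> String
logMsg s
  | ?verbosity > 0 = "[debug] " ++ s
  | otherwise      = s

-- callers rebind the "dynamic variable" with let
quiet, loud :: String
quiet = let ?verbosity = 0 in logMsg "starting"
loud  = let ?verbosity = 1 in logMsg "starting"
```

Here `quiet` is `"starting"` and `loud` is `"[debug] starting"`, even though `logMsg` itself never received the verbosity as an ordinary argument.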
Dynamic scoping doesn't require long variable names. Having one namespace is what requires global variables to have reasonably long names.
Under dynamic scope you have the additional problem that a global name can be overridden by accident by local names that clash with it.
Rather than trusting mere identifier length to address that problem, there is a namespacing convention to deal with it: putting "earmuff" asterisks on the names of global variables.
Namespaces and lexical scoping can both solve the long-variable-name issue. If a language has neither, like Emacs Lisp or CSS, the issue appears.
For example, when JavaScript had no namespaces, people could easily simulate modules/namespaces with lexical closures. For a lot (not all, of course) of bundling systems, modules/namespaces are just syntactic sugar on top of lexical closures.
> so traditionally Emacs Lisp scripts have super-long-variable-names-in-order-to-avoid-collisions
I don't think dynamic scoping is the only reason for this. Scheme has lexical scoping yet long-identifier-names are popular with scheme as well.
Personally I prefer lexical scoping with long-identifier-names. I find it more pleasant to read. And with modern auto-completion systems, it's no longer inconvenient to write code like that.
Yes, but in dynamically scoped languages the names have to be even longer.
Because in Scheme variables are safe inside closures, so a name can just be <already-very-long-function-name>, while in Emacs Lisp bindings literally leak everywhere, so it needs to be <plugin-name>-<feature-name>-<sub-feature-name>-<repeat-the-feature-name-game-for-a-while>-<and-eventually-the-already-very-long-function-name> instead.