I think it's neat they exist, but they don't really live up to the "curated" promise in my mind. Mostly they seem to be a grab bag of everything under a topic, and it's left to the reader to sort through. As such they've never come to mind when I'm actually researching anything.
This particular one covers a very broad topic, and mixes material on functional programming languages with material on functional programming style in languages that at least halfway permit it. Just look at the Books section. It's not alphabetized or subcategorized by language, topic, or difficulty; it's just a list. There are no descriptions of the texts, and no apparent running theme. A learner should not start at the top of that list and work through it; that would be counterproductive for most readers. It would benefit from being broken down into more sections, like:
- What it is and more universally applicable tutorials/texts
-- Books (ordered by difficulty and topic)
-- Tutorials (ordered by difficulty and topic)
-- Libraries (arranged by category)
-- Further sections (structured like the above)
Look at this part of a Common Lisp one: https://github.com/CodyReichert/awesome-cl#offline. The Beginner/Intermediate/Advanced breakdown is useful, and each title gets a short description. You could, in theory, follow that reading list in order and gain something from each new text as you work down the list, and they'd all be relevant to learning Common Lisp. It provides a more coherent set of information and resources (at least for that section).
So you end up with lists that really amount to "who asked to be on the list".
Being able to scan https://github.com/avelino/awesome-go, https://github.com/bkrem/awesome-solidity, and https://github.com/gianni-dalerta/awesome-nft for general resources, projects, guides, or just general information was a nice resource to have in my back pocket.
I think your point is valid, but it's also a personal expectation of what you get out of the resource. I think the fact that they are open ended helps both developers who are trying to reach an audience and people who are browsing for new tools, ideas, etc.
I usually bookmark them and then forget about them entirely. It gives me a little dopamine for a short while, though, so it isn't completely in vain.
They have their use as a jumping-off point, but what I _really_ want is a truly curated list. LessWrong has a list of "best textbooks" where you have to provide a "best" plus one (or two?) other textbooks you've read that you didn't like as much, and an explanation of why the former is better. I think it's a great way to curate a list. I wish there were more like that.
No link to the 'Purescript by Example' book
No mention of Halogen
Some links 404
And it's like that for almost all of these awesome lists, most of which have been abandoned some time ago.
This repository contains a community fork of PureScript by Example by Phil
Freeman, also known as "the PureScript book". This version differs from the
original in that it has been updated so that the code and exercises work with
up-to-date versions of the compiler, libraries, and tools.
I don't think a blog post that identifies a problem someone had to solve, and walks through the resources and steps they used to solve it (even if it adds no real innovation in tool usage beyond its sources), is bad. But even in that format you're getting a lot more context around why some sources are awesome.
This particular list isn't particularly organized, but I can still C-f for "lens" and find a few articles. I'll open all of them, read all of them, and by the end have a good shot at understanding what a lens is and how to use one.
Certainly, reading multiple sources gives you a better chance at understanding than any one particular source.
The human brain is extremely good at pattern-matching, and so awesome lists are a great way to "train" it with "data".
It is because of this design-space divide that some people claim that (statically typed) object-oriented programming is a dead end.
It definitely requires control of the data: e.g., a partially evaluated linked list that is directly accessible by pointer, without going through the generating function, will break a lazy implementation. That said, I absolutely love lazy evaluation and have built a lot of tools in less functional languages like PHP and C++ for handling very large data sets.
When dealing with large report generation (e.g. all transactions from a quarter compiled for the accounting department), it can be necessary to avoid ever actually realizing full data sets in memory, for efficiency and capacity reasons. That's an area where abstractly defining various transformations on the result set and then executing them in an iterative or batched manner can be exceedingly useful. I have killed many OOMs in my day.
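That batched approach can be sketched with generators (in TypeScript for brevity; the transaction shape and every name here are made up for illustration):

```typescript
// Hypothetical transaction record; in a real report this would come
// from a database cursor, not an in-memory loop.
interface Txn { amount: number; quarter: number }

// Lazily yield transactions one at a time (stand-in for a DB cursor).
function* transactions(): Generator<Txn> {
  for (let i = 0; i < 1_000_000; i++) {
    yield { amount: i % 100, quarter: (i % 4) + 1 };
  }
}

// Lazy transformation: nothing runs until the result is consumed.
function* filterIter<T>(src: Iterable<T>, pred: (t: T) => boolean): Generator<T> {
  for (const t of src) if (pred(t)) yield t;
}

// Consume in fixed-size batches so the full data set is never in memory.
function* batches<T>(src: Iterable<T>, size: number): Generator<T[]> {
  let batch: T[] = [];
  for (const t of src) {
    batch.push(t);
    if (batch.length === size) { yield batch; batch = []; }
  }
  if (batch.length > 0) yield batch;
}

// Sum Q1 amounts batch by batch; peak memory is one batch, not the whole set.
let total = 0;
for (const b of batches(filterIter(transactions(), t => t.quarter === 1), 10_000)) {
  total += b.reduce((acc, t) => acc + t.amount, 0);
}
```

The "core logic" (the reduce) never knows the data arrived lazily; only the driver loop decides the batch size.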
Probably not very efficient. Are there commercially successful examples of such a language?
> An imperative language could use lazy evaluation as well
Certainly! I believe Microsoft's Linq is one such example.
Global lazy evaluation by default is the exception, not the norm. Standard ML, F#, OCaml, Scala, ... I actually would have more trouble finding a classic "functional" language that is lazy like Haskell but isn't Haskell.
Not to mention, almost all the multiparadigm languages that can utilize FP, like C++, Rust, etc., are eager.
> This is a function but it is not functional because it mutates state and uses iteration.
The state and iteration were only internal; JS doesn't support TCO, and the algorithm is very simple. Sure, it's only an example, but it's not a great one. Then it continues:
> To be functional it is not sufficient to simply remove classes and write top-level functions. You have to completely change your way of thinking. The functional way has no side effects or state mutations. It has no assignments, no iteration, and no conditional statements.
That's not "being functional", that's Haskell. I can write a program with side effects, state mutations, assignments, iteration, and conditional statements in OCaml or Scala without issues.
> Assignment and iteration are typical of imperative programming. In functional programming changing the value of a variable using an assignment statement is not allowed.
It is allowed though:
let x = ref 0;;
x := 5;; (* !x is now 5 *)
JS has lazy evaluation with iterators. What it doesn't have is chaining/transformation of those iterators: map, filter, and reduce are not defined for iterators. You can make your own prime iterator (with internal state!) and iterate over it, though.
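A sketch of that idea: a stateful prime generator plus a hand-rolled lazy map, since the array methods aren't available on iterators (all names here are made up):

```typescript
// Infinite prime iterator with internal state: the primes found so far.
function* primes(): Generator<number> {
  const found: number[] = [];
  for (let n = 2; ; n++) {
    if (found.every(p => n % p !== 0)) {
      found.push(n);
      yield n;
    }
  }
}

// Hand-rolled lazy map, since Array.prototype.map doesn't work on iterators.
function* mapIter<A, B>(src: Iterable<A>, f: (a: A) => B): Generator<B> {
  for (const a of src) yield f(a);
}

// Pull the first n values out of an otherwise infinite iterator.
function take<T>(src: Iterator<T>, n: number): T[] {
  const out: T[] = [];
  for (let i = 0; i < n; i++) {
    const r = src.next();
    if (r.done) break;
    out.push(r.value);
  }
  return out;
}

// First five primes are 2, 3, 5, 7, 11.
const squares = take(mapIter(primes(), p => p * p), 5); // [4, 9, 25, 49, 121]
```

Nothing is computed until `take` pulls values, so mapping over an infinite iterator is fine.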
No true Scotsman fallacy. By the way, Safari implements TCO, so you should be able to do real functional programming with iterators on Safari!
> Functional programming and side effects
Again, that's only for Haskell. And you can write scary imperative code in Haskell too:
toto x = if x == 0 then error "Ooops" else x + 3
False dichotomy, you can write plain imperative code in JS, and it works well.
Again, most existing JS code is imperative in style, not OO. And no mention of how functional code is easier to test? That's a bit weird.
All in all, it's a bad article based on limited views.
That's only because those languages are not purely functional. If you have state mutations and side effects you're not doing FP at all. Just because a language is known as a "functional language" doesn't mean anything and everything you are able to do in that language qualifies as FP.
> That's only because those languages are not purely functional. If you have state mutations and side effects you're not doing FP at all.
You can be doing functional programming with state and mutations. Functional programming is about having first-class functions and using them, and often takes the form of function composition and higher-order functions.
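A tiny sketch of what I mean (TypeScript, all names invented): higher-order functions and composition coexisting with plain mutation.

```typescript
// Higher-order function: compose builds a new function from two others.
const compose = <A, B, C>(f: (b: B) => C, g: (a: A) => B) =>
  (a: A): C => f(g(a));

const trim = (s: string) => s.trim();
const shout = (s: string) => s.toUpperCase();

// FP style: passing and combining functions as values...
const clean = compose(shout, trim);

// ...even though the surrounding code freely mutates state.
const results: string[] = [];
for (const line of ["  hello ", " world  "]) {
  results.push(clean(line)); // mutation, yet still programming with functions
}
// results is ["HELLO", "WORLD"]
```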
> Just because a language is known as a "functional language" doesn't mean anything and everything you are able to do in that language qualifies as FP.
That is true. But you can have an impure codebase and still be doing FP without issues.
I am glad you said "function programming", as in, programming with functions. That's what you're doing. Don't confuse it with Functional Programming!
I have been maintaining for the last 5 years a slightly richer collection at: https://github.com/sfermigier/awesome-functional-python/
Again, please correct me if I'm wrong, since I've been in industry for barely 2 years: back in school functional programming looked cool, but once I entered industry (and even in my exploration through personal/OSS projects), I have found it to be, for the most part, nothing more than a cool toy to play with. I can somewhat see the benefit when it comes to parallelizing Big Data, but that seems to be a very special case.
Again if I'm just being ignorant, please educate me :)
Promises and the async/await syntax sugar on top of them are another JS construct that borrow quite a bit from functional literature.
Purity helps quite a bit with narrowing down bugs; you simply check that your inputs and outputs are correct without having to ever worry about state. We use effects to narrow the types of functions that can be legally called inside specific domain logic, which also narrows down a large class of bugs that we've seen creep into other code bases.
However, it's not a magic bullet. The time to get someone unfamiliar with Haskell up to speed can be costly. Discovering time-saving idioms in a sea of bad documentation is frustrating. A smaller ecosystem than other popular languages means fewer blogs, and in Haskell's particular case the information seems either too low-level or too high-level, with no in-between.
Still, with 250k lines of production code running as stably as can be, we have no complaints.
Unfortunately, it's very tempting to introduce side effects 'just this once', which can make those techniques less useful; avoiding that takes some discipline. For example, in Scala it's easier to just throw an exception instead of wrapping results in a 'Try' type, and likewise for null vs. 'Option', etc., mostly since those results then require map/mapN/flatMap/traverse/etc. to handle, rather than giving us values directly.
However, I think it's usually worth the effort. For example, those map/mapN/flatMap/traverse functions are essentially 'setting policies' for how effects should interact; whilst the 'core logic' can remain completely agnostic.
As a very simple example, if we have 'l: List[A]' and 'f: A => Option[B]', we can combine these in multiple ways, e.g.
l.map(f)      : List[Option[B]] // Run f on all elements, keeping all results
l.flatMap(f)  : List[B]         // Run f on all elements, discarding empty results
l.traverse(f) : Option[List[B]] // Run f on elements in sequence; abort if any are empty
More esoterically, we can replace 'Option[T]' with 'Reader[X, T]' for dependency injection of an 'X'; etc.
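The traverse behavior above can be sketched outside Scala too. Here's a minimal TypeScript rendition with a hand-rolled Option type (all names are made up; Scala's version comes from cats and is far more general):

```typescript
// A minimal Option type: either a value or nothing.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };

// traverse: run f over the list in sequence; if any result is empty,
// abort and the whole thing becomes none.
function traverse<A, B>(xs: A[], f: (a: A) => Option<B>): Option<B[]> {
  const out: B[] = [];
  for (const x of xs) {
    const r = f(x);
    if (r.kind === "none") return { kind: "none" };
    out.push(r.value);
  }
  return { kind: "some", value: out };
}

// Example effectful function: parsing can fail.
const parse = (s: string): Option<number> => {
  const n = Number(s);
  return Number.isNaN(n) ? { kind: "none" } : { kind: "some", value: n };
};

const ok  = traverse(["1", "2"], parse); // { kind: "some", value: [1, 2] }
const bad = traverse(["1", "x"], parse); // { kind: "none" }: the "x" aborts everything
```

Note that `traverse` knows nothing about parsing; the "abort on failure" policy lives entirely in the combinator.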
Hell, Java may actually be more referentially transparent than Haskell, due to it not having macros - the one thing that can break this functionality.
And Github search should display the rendered markdown so we can click the links.
If you ordered all concepts in programming by the product of usefulness and confusion, monads would be somewhere at the very top.
I'm not sure why it's different from other programming concepts in this way... maybe just because it's such an abstract concept where most people have no frame of reference. You don't appreciate it until you've used it in a bunch of different contexts and finally see why this one abstraction works for all of them.
If you really want to, you can use the language of category theory to talk about monads in Haskell. Some people seem to get a great deal of satisfaction out of doing so (and I suppose that it's harmless fun). But it does build up a certain undeserved mystique around monads in Haskell.
It's just that in other paradigms interpreters are a very advanced and completely optional topic, while in pure FP they are fundamental.
If you could share some of these examples, or perhaps a link to those examples, I'd greatly appreciate it.
Though I’m not the best at explaining things, hopefully it makes sense.
Sorry if I misunderstood. I've been reading many explanations of monads, but insight seems to elude me.
For example, mapping some function over a Result type that may or may not hold a value would, in an imperative style, make me check for the existence of the value, unpack it, and apply the function. With a Monad abstraction, I can just map my function: it will be applied if there is a result, or the whole thing will remain None; it is safe either way. One other useful part is the uniformity of functions working on monads: I can use the same ones for a List, a Result, or any similar container, and for the IO and State monads; even Async is a monad.
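In TypeScript terms (a hand-rolled minimal Option; every name here is invented for illustration), that looks like:

```typescript
// A minimal Option type: either a value or nothing.
type Option<T> = { kind: "some"; value: T } | { kind: "none" };

const some = <T>(value: T): Option<T> => ({ kind: "some", value });
const none = <T>(): Option<T> => ({ kind: "none" });

// map applies f only when a value is present; none stays none.
const mapOption = <A, B>(o: Option<A>, f: (a: A) => B): Option<B> =>
  o.kind === "some" ? some(f(o.value)) : none<B>();

// No manual "check, unpack, apply" at the call site:
const a = mapOption(some(20), n => n * 2);        // { kind: "some", value: 40 }
const b = mapOption(none<number>(), n => n * 2);  // stays none; f is never called
```

The caller's code is identical whether or not a value is present; the "is it there?" check lives once, inside `mapOption`.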