But I never see contrived real-world examples, the kind that really show how things differ from imperative or procedural programming. I want to see how you use stateless functional programming to control a garage door actuator. Or how you model a Car whose doors can be opened or closed either by the class itself or by external actors. How do you receive UDP packets, including a buffer implementation that can sort out-of-order packets and send retransmission requests for missing ones? That kind of stuff.
Are there any resources that show this kind of "here is a list of problems, how they are typically solved, and how you could solve them with FP" style of examples?
EDIT: This message was actually to say thank you and that TFA is an awesome resource to help understand how the Iterator methods work! And then I got lost in my semi-rant and forgot to actually say thanks :)
It shows how a functional approach can embed much of the domain logic into the types used. They are so simply expressed in F# that non-programmers can understand what they say about the problem domain.
The book is selling me on the idea that functional programming could do a better job in CRUD-style business apps.
The book doesn't cover a lot of topics in depth, but it might whet your appetite.
This book and Domain Driven Design made Functional (mentioned elsewhere) are both great resources for getting introduced to “useful” functional programming.
I'm not sure why you'd want to see contrived examples, but the reason to focus on map, filter, etc. is that this is where it starts, and this is what functional programming _is_: transforming sequences of values with pure functions, rather than keeping a lot of state and changing it imperatively.
How to write a real-world program with functional programming is a later question, and pointless until you understand the basics. All your examples also involve a bunch of concepts and data that A. can't be purely functional and B. will vary widely between programming languages, so they are very poorly suited for an introduction.
Although I must say, my examples are actual instances where I tried to apply the shallow learning I had done about FP and failed spectacularly. It was no doubt due to my ignorance and lack of experience with FP, but maybe it's also partly that the problems I work with don't lend themselves easily to an FP approach...
That is way too much for me to put into a single comment, but the most common approach is to split your program into "pure" and "impure" parts and then model all of your interactions with the real world in a way where the "pure" part of your program becomes an "interpreter" that deals with the pure representations of real-world events.
A common pattern for doing this (but by no means the only one) is using Free Monads.
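A minimal sketch of that pure/impure split, using the garage-door example from upthread (all type and function names here are invented for illustration; a Free monad would let you build up this command/event vocabulary compositionally, but the basic shape is the same):

```haskell
-- Pure representations of real-world events and actions.
data DoorState = Open | Closed deriving (Eq, Show)
data Event     = ButtonPressed | ObstacleDetected deriving (Eq, Show)
data Command   = RaiseDoor | LowerDoor deriving (Eq, Show)

-- Pure core: given the current state and an event, decide what to do.
-- No I/O here, so this part is trivially testable.
decide :: DoorState -> Event -> (DoorState, Command)
decide Closed ButtonPressed    = (Open,   RaiseDoor)
decide Open   ButtonPressed    = (Closed, LowerDoor)
decide _      ObstacleDetected = (Open,   RaiseDoor)

-- Impure shell: the "interpreter" carries commands out in the real world.
interpret :: Command -> IO ()
interpret cmd = putStrLn ("actuator <- " ++ show cmd)  -- stand-in for hardware I/O

main :: IO ()
main = do
  let (s1, c1) = decide Closed ButtonPressed
  interpret c1
  let (_, c2) = decide s1 ObstacleDetected
  interpret c2
```

All of the domain logic lives in `decide`; only `interpret` touches the outside world.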
I guess it's a little easier with OOP, but it isn't very clear to me how "polymorphism" helps me open a garage door, or even achieves its stated goals. I guess I can at least understand that now I can open any door by calling door.open() or something. Of course it gets much worse when you graduate and people tell you OOP is overhyped, that there's too much abstraction. Then you think "I thought the abstraction was good". Don't just create a garage door, create a door factory! Then you realize you don't have a good defense, and that you don't understand as much as you thought. You were never taught to design programs pragmatically. You just have to figure that part out on your own.
I can understand how mutable state causes problems and would be a good thing to avoid where possible. There's really not much needed to convince even a moderately experienced programmer of the power of pure functions. It doesn't feel that far off from the OOP example.
Have something nullable? Stick it inside `Maybe` and map it.
Have a side effect in `IO` and want to manipulate the result? Map it.
Following that, look at monadic binding. It's basically flat mapping.
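A small sketch of those three moves in Haskell (the function names are invented for illustration; `readMaybe` is from base's Text.Read):

```haskell
import Text.Read (readMaybe)

-- fmap over Maybe: transform the value if present; Nothing stays Nothing.
halve :: Maybe Int -> Maybe Int
halve m = fmap (`div` 2) m

-- (>>=) is the monadic bind, i.e. the "flat map": chain steps that
-- can each fail, without nested case analysis.
recipOf :: String -> Maybe Double
recipOf s = readMaybe s >>= \x ->
            if x == 0 then Nothing else Just (1 / x)

main :: IO ()
main = print (halve (Just 10), recipOf "4", recipOf "zero")
```

The same `fmap`/`>>=` vocabulary works unchanged over `IO`, lists, `Either`, and so on — that uniformity is the point.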
From here you can start to see how programs can be composed in languages like Haskell. You have these foundational typeclasses and you sort of just connect all your functions together with them until you're left with one big expression in `main` which is your application entrypoint.
But I’d say in general that FP is mostly just an alternative to OOP, maybe even less than that, not a complete paradigm for writing complex programs. At least if you use a mainstream language, FP is probably something you do pretty locally.
The only caveat is in I/O mechanisms. An IO monad is one way to safely manage IO, but it is not the only way.
One thing I’ve found enlightening is participating in Advent of Code exercises and watching how people solve problems using functional languages. While I may not always grok the solutions in their entirety, I’m often impressed by the compactness and elegance.
My only complaint about FP, and Linq in particular, is that it can often result in suboptimal execution when you have preconditions that Linq isn’t aware of, such as a known input size, or known uniqueness of some key property - and there isn’t a way to supply hints. And C#’s weak support for contravariance, aieee. Still lightyears ahead of Java though :)
A function using the IO monad looks like regular imperative code except for its type signature, which indicates that the function has side effects and thus can only be called by other functions that have side effects. Pure functions cannot call a side-effecting function. This helps isolate side effects and capture them in the type signatures of functions.
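For instance (names invented for illustration):

```haskell
-- Pure: no IO in the type, so the compiler guarantees no effects.
greetingFor :: String -> String
greetingFor name = "hello, " ++ name

-- Impure: the IO in the type marks this as side-effecting; only
-- other IO actions can run it.
greet :: String -> IO ()
greet = putStrLn . greetingFor

main :: IO ()
main = greet "world"
```

`greetingFor` cannot call `greet`: its result type is `String`, not `IO String`, so any attempt to sneak the effect in is a type error.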
Plenty of examples here
Functional programming is not stateless programming. Rather it captures any state changes in the type signature. An explicit example of it would be the state monad.
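A hand-rolled sketch of that idea (the real State monad lives in Control.Monad.State from the transformers/mtl packages; this only shows what it captures): "stateful" code is just a pure function from an input state to a result paired with the output state.

```haskell
-- A stateful computation over an Int counter, as a pure function type.
type Counter a = Int -> (a, Int)

-- Read the current count and increment it.
tick :: Counter Int
tick n = (n, n + 1)

-- Sequencing threads the state through explicitly; the State monad's
-- (>>=) automates exactly this plumbing.
twoTicks :: Counter (Int, Int)
twoTicks s0 =
  let (a, s1) = tick s0
      (b, s2) = tick s1
  in  ((a, b), s2)

main :: IO ()
main = print (twoTicks 0)
```

No mutation happens anywhere; the "state change" is entirely visible in the `Int -> (a, Int)` shape of the type.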
At which point you are happy with "functional" is up to you. It's not all or none.
Here is an educational bulletin board website built with Haskell:
So a functional component in React using hooks is not pure. You can translate it into FP with a bit of magic, but it requires a context. The line is not that distinct at all.
Of course you then have a mixture of both, which many if not most modern languages have. Pure FP can be expressed in a language that supports mutation anyway; it's just a matter of how much first-class support those features have. Languages like Haskell, I think, go so far into abstracting time that they reach around and achieve something like mutation-based syntax, except there is a tower of abstractions working under the hood, ensuring that it's still timeless equations which result in side effects.
Functions like ‘map’, ‘fold’, and ‘zip’ are; but ‘Iterator’ is not the term to describe such functions.
Headline here might be a little grandiose. The author has "A puzzle game inspired by functional programming" on the project's Github. I find that description more apt.
Functional programming really started to resonate with me at that time. Properties like confluence were very useful to understand distributed systems algorithms later on, and e.g. how eventual consistency plays out.
 The teacher was awesome too. If you'd like to learn more about term rewriting systems, go take a peek at his slides http://joerg.endrullis.de/teaching/#trs (Note, just noticed that they are behind a password. A gentle email will probably get you the slides though :)
I'll send you a PM :)
After all, this shows some very convoluted ways of getting a simple end result ;-)
Also, I've only played the first few easy levels, so I guess there will be more than "map" later on? Otherwise I would just leave the "map" word out of it.
Perhaps some of that stupidity is bound to a developer’s understanding of their language at hand. Some languages are more expressive than others. This is where the sad, confused developer mandates that functional programming must be declarative... until you point out that the languages Red and Rebol are functional imperative languages. It’s not some shallow wishful opinion. That is how those languages describe themselves, and it’s what they look like.
When you take all the stupidity, assumptions, and restrictions away, functions are most universally a bag of instructions, as are classes and various other things. Functions are unique among instruction bags in that they execute. That creates potential for instruction reuse and thus portability.
Like what? You've thrown out a ton of generalisations, but nothing here is concrete.
> When you take all the stupidity, assumptions, and restrictions away functions are most universally a bag of instructions, as are classes and various other things.
If you group all things together as a 'bag of instructions' then there's no point in teaching functional concepts or any paradigm's concepts for that matter.
I am reading between the lines here, but I am assuming by 'restrictions' you're referring to functions taking immutable values and returning immutable values. There are sensible reasons for this (along with referential transparency in general) that enable function composition, which is the 'super power' of functional programming. It isn't a requirement, it's just good practice, and so I can understand why FP advocates would promote it (I certainly do).
A class can have those powers too if it is immutable, because ultimately any class is just a set of functions with an implicit 'this' argument, but it's often understood differently. It doesn't seem unreasonable to promote and talk about the differences in approach, especially in languages that allow you to 'cheat'.
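As a sketch of that point (types invented for illustration), the car-with-doors example from upthread becomes a record plus pure functions, where the implicit 'this' is an explicit argument and "mutation" returns a new value:

```haskell
-- An immutable "class": just a record of data.
data Car = Car { doorsOpen :: Bool } deriving (Eq, Show)

-- The OO method car.openDoors() becomes a pure function with the
-- implicit 'this' made explicit; it returns a new Car rather than
-- mutating the old one.
openDoors :: Car -> Car
openDoors car = car { doorsOpen = True }

closeDoors :: Car -> Car
closeDoors car = car { doorsOpen = False }

main :: IO ()
main = print (closeDoors (openDoors (Car False)))
```

Because both "methods" are `Car -> Car`, they compose like any other functions.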
Yes, the narrow definition of a functional language is that it has first-class functions. Clearly, though, there are techniques for success within the paradigm.
Again, it's quite hard to latch onto exactly what you're saying here, but I don't think "functions as structure" is the argument for FP (as such); it's functions that work like mathematical functions, leveraging centuries of knowledge of mathematical expressions, proofs, etc. Personally, I find my functional code much easier to understand, compose, test, and generally have confidence in. When I compose two functions I don't have to 'go and look inside' to see what they do first, because they're pure and declarative.
The declarative nature of the functions is an artefact of following the good practice of immutable values, referential transparency, and total functions, which leads to 'honest' type signatures; it just happens on its own.
I spent a reasonable part of my career in imperative-land, and honestly I wish I'd found functional programming much earlier. I write fewer bugs and I have more confidence in my code. And yes, it adds constraints that are annoying sometimes, but those constraints ultimately stop the code becoming a cognitively difficult mass of mutation or hidden complexity.
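A tiny illustration of composing without looking inside (function names invented):

```haskell
import Data.Char (toUpper)

sanitize :: String -> String
sanitize = filter (/= ' ')

shout :: String -> String
shout = map toUpper

-- Composition with (.): neither piece needs to be inspected before use;
-- the types alone guarantee the parts fit together.
slugify :: String -> String
slugify = shout . sanitize

main :: IO ()
main = putStrLn (slugify "hello functional world")
```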
That was intentional. Specifics aren't necessary for making the point. Had I stated the specifics you were looking for I suspect you would have focused only on that, missing the primary point.
Like GP, I would have to guess what your point actually is.
In your other reply you come off as haughty or patronizing.
One does not need high EQ or SQ to be kind.
Neither Red nor Rebol actually does that; the Rebol family of languages tends to describe itself as paradigm-neutral or multi-paradigm, but historically stays close to imperative constructs and lacks mainstream FP support (e.g. Red doesn't have closures, currying, or tail-call optimization). If you really look at generic Rebol code, it will look like an Algol 60 ocean with occasional DSL islands floating around.
Each time I ask what the "functional imperative" buzzword actually means, no one can give me a straight answer. If it means "building imperative constructs on top of recursive functions via macros", then it describes something totally remote from Rebol. Care to elaborate?
The mathematical problems break the "visual" model. They are harder to understand when you have to spend most of your energy doing 8 simultaneous mental conversions to base 8 while solving.
It is pretty evident that during this you end up in code-puzzle / code-golf mode rather than understanding the problem at hand. This is fine - until some of your "low-level" stuff leaks, or your understanding of it leaks. There is also a third possibility for failure: trusting the system too much. Despite types being expressive, they hide away a lot of "internal" logic; sometimes something looks the same, but it isn't, and the type system doesn't capture it.
I would be interested in seeing this expanded to include passing functions to functions (that is, make the user build a tree of functions, not a list). And yes, that might put you into hot water regarding how to communicate signatures and such.
I agree this game is a bit hard because the “type” of each element passed through the functions isn’t documented (is it a cube? A column? How are a 2D shape’s cubes iterated? Does order matter?)
I get the map and filter operations but I can’t remember ever coming across a function like that before.
The game's "stack equal columns" corresponds to Haskell's Data.List.group function:
> import Data.List
> group [1,2,2,3,3,3,2,1,1]
[[1],[2,2],[3,3,3],[2],[1,1]]