Ask HN: How to be fluent in functional language speak?
103 points by _448 4 months ago | 107 comments
When I read academic articles or blog posts written by functional programmers it feels alien to me because of the terms used in their writing. I am not just talking about monads, monoids etc, but the general programming terminology itself is different to what I regularly hear and read as an imperative OO programmer. For example, take this sentence: "abstraction over type constructors"; whenever programmers from the C++, Java, etc. world read that, they are sure to say "eh?!?" and fall off their chair.

So how does a programmer from the non-functional world become fluent in understanding a sentence such as "abstraction over type constructors"?




"abstraction over type constructors" isn't very meaningful without more context anyway, but they're probably referring to something like the following:

non-abstracted:

    map :: (a -> b) -> List a -> List b
abstracted:

    map :: Functor f => (a -> b) -> f a -> f b
"List" is a type constructor (it takes a type and returns a type), but we can write a "map" function that works over more types than just List - for example, Maps and Options and so on (aka "Functors" in Haskell parlance). We've "abstracted (the function map) over the type constructor (of the data structure being mapped over)".

To answer your more general question, if you just learn functional programming on a language with a powerful type system (Haskell is probably the most germane example) you will learn at least some of these terms. Most of them turn out not to be very deep, just hard to explain (for reasons unclear to those who understand them), but still very useful.


> We've "abstracted (the function map) over the type constructor (of the data structure being mapped over)".

Then why didn't they just call those types "Mappables"?


I think this is a great question because it tends to come up a lot.

What do you call the following common operations?

  - 1 + 0
  - 1 * 1
  - "string" ++ ""
  - [1, 2, 3] ++ []
As far as I know there is no term for this grouping of operations outside of functional programming languages which use category theory.

We call it a "monoid" based on the following precise definition:

> In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single associative binary operation and an identity element[1].

I used the "identity element" specifically in my examples above since they tend to be a good way to grasp what is going on.
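A minimal Haskell sketch of those identities, using the standard Monoid class (Sum and Product are the newtype wrappers from Data.Monoid that choose which monoid the numbers form):

    import Data.Monoid (Sum(..), Product(..))

    getSum     (Sum 1 <> mempty)      -- 1         (mempty is Sum 0)
    getProduct (Product 1 <> mempty)  -- 1         (mempty is Product 1)
    "string" <> mempty                -- "string"  (mempty is "")
    [1, 2, 3] <> mempty               -- [1,2,3]   (mempty is [])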

It turns out a "functor" has more properties than just being "mappable", which is why we like using the more precise term in our literature. However, it's still a decent intuition if you aren't familiar with the concept.

For example, in the parent comment the 'f' in the second line of code shows that it is some kind of structure that adheres to Functor, which implies that this structure can be mapped over. This is a nice property because without caring about the implementation, we can still do functor-specific operations over abstract structures.

[1] https://en.wikipedia.org/wiki/Monoid


"binary operators with an identity value"


the binary operation should also be associative (e.g. subtraction is no good)


concat?


I can't think of any way for 1 * 1 to be concatenation.

I think the point GP was making is that a monoid has something to do with a function f that takes two arguments, where there is a single special value such that f(a, special_value) == a. I think.

As someone who doesn't really understand FP concepts I would probably call those functions "functions that can sensibly be used in a call to array.reduce"


It doesn't need to have a perfect metaphor for all possible cases, just something you can picture in an instant: 'behaves like concatenating variable-length sequences of boxes'. It's a mnemonic / learning aid, not a formal spec. But I can stretch it :)

+: concat marbles sequence

* : concat prime factors sequence

++: concat box sequence

For * , it can also be reduced to + via logarithms. I just don't have a good metaphor for '+ over reals' this morning.


“append”* is the name chosen for the binary operator in the Haskell standard library’s monoid abstraction and “concat” seems like the same genre, so I would expect people would be fine with the intuition for “concat” as well.

Given that this is a thread on functional jargon, it might be interesting to note that list metaphors being useful for thinking about monoids might derive from lists being free monoids (in many contexts). This means that any function from a collection of elements to a monoid can be thought of as a function from that collection into a list with the appropriate type of elements, followed by a fold (also called a reduction) over that list. This knowledge can be super useful for thinking about program structure, e.g. conceptually mapreduce is a general tool for large-scale distributed computation over monoids.
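A small Haskell sketch of that "map into a monoid, then fold" shape, using the standard foldMap (totalLength is a made-up name for illustration):

    import Data.Monoid (Sum(..))

    -- map each element into the Sum monoid, then fold the monoidal values together
    totalLength :: [String] -> Int
    totalLength = getSum . foldMap (Sum . length)

    -- totalLength ["ab", "c"] == 3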

*it’s actually called “mappend”[0], short for monoid append, but I think the point still stands (and there is, in fact, a function called “mconcat”[1] that folds a list of monoidal values, using “mappend” to combine them.)

[0] http://hackage.haskell.org/package/base-4.14.0.0/docs/Data-M...

[1] http://hackage.haskell.org/package/base-4.14.0.0/docs/Data-M...


> I just don't have a good metaphor for '+ over reals' this morning.

Skimming through my thesaurus, I found a word for it: "Totalize".


> "functions that can sensibly be used in a call to array.reduce"

You can use any function you want in a fold in Haskell. You're conflating two typeclasses, Monoids and Foldables, and restricting yourself to the function known in Haskell as foldMap.
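A quick sketch of the distinction in Haskell (Sum is from Data.Monoid):

    import Data.Monoid (Sum(..))

    -- a plain fold takes any binary function; no Monoid required:
    foldr (\x acc -> x - acc) 0 [1, 2, 3]   -- 2 (subtraction isn't even associative)

    -- foldMap is the one that needs a Monoid:
    foldMap Sum [1, 2, 3]                   -- Sum {getSum = 6}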


You've definitely got the right intuition.

"associative": (a + b) + c == a + (b + c)

"binary": f a b == a `f` b == a + b where f is addition

"identity element": the "empty" value which when paired with a second value, returns the second value as is


> As someone who doesn't really understand FP concepts I would probably call those functions "functions that can sensibly be used in a call to array.reduce"

So reduceables?


That implies that they can be reduced. Maybe "reductors"?

That said, I am on the "name abstractions after the math" side of things. It makes it much clearer what guarantees I have when working with the interface, on either side of it.


That would probably have been clearer to newbies, but they already had a complete terminology ready to go from category theory and it's not hard to remember "Functor ~ Mappable". Plus, "obvious" names break down once you start building up hierarchies of these properties. One of the reasons people have a hard time communicating the (simple) ideas behind other algebraic objects like Applicatives, Monads, etc. is because it's hard to come up with short, self-descriptive names for them. The search for such names has resulted in several famously bad analogies, such as "a Monad is like a Burrito".


Because names don’t carry meaning, they’re pointers: https://www.parsonsmatt.org/2019/08/30/why_functor_doesnt_ma...

“Functor” points to a specific concept in Haskell and a slightly different one in Category Theory, but otherwise it is fairly unambiguous given enough context. The concept not only refers to the type and the associated “map” operation but also the axioms of identity and composition and their properties.

“Mappable” is a very common name people pick in these discussions, but it doesn’t clarify the meaning any more than calling it “Rose.” It may in fact cause more harm than good if a name like Mappable exists in more contexts. There are plenty of objects one can apply a map operation to that are not valid Functors.


Well, Functor is a bit of a misnomer; it really should be called TypeEndofunctor. The short form was really adopted as a mischievous pun on the use of "functor" in the OOP community.


I don't think TypeEndofunctor conveys any additional information when discussing a Haskell typeclass.

> The short form was really adopted as a mischievous pun on the use of "functor" in the OOP community.

Citation needed.


I should note that naming things with mischievous puns certainly wasn't beyond early Haskellers - that's the origin of `return` as a name for part of the Monad interface (since quasi-deprecated in favor of `pure`).

I just have never heard that allegation about `functor`, which seems to have been taken directly from mathematics that predate any notion of function object. It's possible that OO programmers and the mathematicians both borrowed it from the same source (it seems to have originated in linguistics).


> Because names don’t carry meaning, they’re pointers:

That's been debated in philosophy for over 150 years, and there are many alternative theories.

I think a better point is that technical names don't matter because their formal meaning may not correspond with any word, and if there is an everyday word, it's probably slightly but importantly different.


To be fair, the term "map" comes from math [0], and so at the time these functions were created, there weren't other good naming choices. Mappable and Functor are just about as opaque as each other if you don't know what mapping is.

Though now that FP concepts are mainstream, I think there is a good case to offer some friendlier, more familiar names:

Functor -> Mappable

Applicative -> Pairable

Monad -> Thenable

[0]: https://softwareengineering.stackexchange.com/questions/2033...


Giving descriptive names just harms intuition building in the long run.

I've been writing Haskell for over 5 years and to this day when I see <$>, <*>, >>= in my code I don't substitute English words for them.


Was about to complain you can't google them, but I tried it and it worked. Huh. I guess you can google symbols now, even traditionally escapey ones. News to me.

mentalModel.update().

The query `python @` fails pretty hard though.


For Haskell stuff, you can go one better and use Hoogle: https://hoogle.haskell.org/?hoogle=%28%3C%24%3E%29&scope=set...


I tend to just query the repl when I don't know a symbol.

    :info (>=>)


> Giving descriptive names just harms intuition building in the long run.

Citation needed


(In my experience)


Like seriously isn't it obvious that I'm stating my opinion? And that the citation is my personal professional Haskell experience? We're talking about intuition building here it's not like I'm gonna be able to Debate You with Hard Facts -__-


Yes it's clear that you're stating your opinion. I asked for supporting evidence because I'm not persuaded by your opinion since I have the opposite opinion.


Well then I'll add:

Calling Functor "Mappable" and Monad "Thenable" doesn't really add much to understanding.

The best advice I ever got was that a Functor is `f` with a function `fmap :: (a -> b) -> f a -> f b` that follows its laws. Same goes for Monad. If it has the type signature & follows the laws, it's a Monad. That's the definition. No need for a cute analogy-oriented name like FlatMap-able or Then-able.
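For reference, the laws in question for Functor are just two equations; a sketch of them as testable predicates:

    -- identity law: mapping id changes nothing
    functorIdentity :: (Functor f, Eq (f a)) => f a -> Bool
    functorIdentity x = fmap id x == x

    -- composition law: mapping a composition == composing two maps
    functorComposition :: (Functor f, Eq (f c)) => (b -> c) -> (a -> b) -> f a -> Bool
    functorComposition g h x = fmap (g . h) x == (fmap g . fmap h) x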

Once you do that, you realize that the way to understand these things is to play type tetris with highly-generic type signatures. You stop needing English/real-world analogies to understand them and instead use the type system to understand the world (the reverse of using "nice" names)

It works really well but it is a steep hill to climb. It helps to talk to others who have climbed it, and it helps to just grind on a project and use Monads without being an expert. You can go a long way just knowing how `do`, `traverse`/`mapM`, etc work and not having big Monad intuition.


> Citation needed

Citation needed.


All I see here is weakening the descriptive power of those terms by replacing them with more common ones. It may look easier but I think it’s probably better not to carry in baggage from other languages that doesn’t quite fit.


Because the concept is category-theoretic in nature and they wanted to foreground that.


Practice programming in a functional language. They enforce a lot of restrictions that you won't be used to, and in overcoming these restrictions, you'll begin to learn and understand the reason for the jargon.

It's really hard to explain "abstraction over type constructors" in a pithy manner despite being a relatively simple statement because the obvious follow-up is "but why?" And the answer to that question isn't going to satisfy you without an understanding of the functional programming environment.

It's exactly like when novice programmers get thrown into Java land in CS101 and you have to tell them to just ignore the whole class, public, private, static, etc. jargon. They need to gain experience programming in that environment to really appreciate those concepts.



Scott Wlaschin (the author of that site) does an amazing job of explaining functional programming concepts, and I highly recommend both his site and his book, "Domain Modeling Made Functional". But he does usually try to avoid words like "monad" and "functor", so you might not pick up fluency in the mathematically-inspired vocabulary from this particular resource.


I think the first step here is to put your expectations into perspective. Functional programming also spans a range of complexity. If you're not used to functional programming and try to read articles targeted at the high end, you'll naturally struggle to understand them.

I still remember "not getting" OO at university in my first lecture. I had programmed BASIC, C, Pascal, and PHP before. And if someone had tried to explain the idea to me by means of the visitor pattern, it would surely have made me say "eh?!?" too. So allow yourself to start slowly.

I'm also still dabbling in functional programming - at least that's what it feels like reading about monoids etc - but by actually using functional programming languages in my more serious spare time projects I feel I get a better and better understanding. And "those" articles start to speak more to me.


You're not alone. Even functional programmers struggle to read those papers. I found that in the functional programming world there's a lot of concepts being thrown around with different names.

I think your best option is to directly contact authors or very good educators and ask them exactly what they mean.


First, you probably don't need to read academic articles to learn about functional programming. There's really nothing magical or complicated with functional programming. For instance, Scheme or OCaml are routinely used as beginner programming languages in schools around the world.

You can start with the basic concepts and eventually build your way up to more abstract constructs when you realize you need them. But even there, you don't need to know the academic terminology (or if you do, it'll make more sense when you get familiar with the language).

I frequently use monads and functors, yet I don't know much about category theory (and the little I know doesn't make me a better programmer).


Not sure about Scheme, but OCaml's type system is not powerful/expressive enough that (to use the given example) abstracting over type constructors is likely to ever come up. (No HKTs in OCaml.) Also, OCaml uses a naming scheme for certain things that I haven't seen used elsewhere - e.g. what OCaml calls a "Functor" is a parameterized module, not closely related to Functors in other languages that use that abstraction. I suspect this is a result of the French/English language divide in the initial development of OCaml vs all other major functional languages.


OCaml functors come from Standard ML, which predates Haskell. And you can definitely use them to implement HKTs — they are comparable in expressiveness to single-parameter typeclasses.


I use OCaml at work ~every day, and HKTs (to the extent they can be expressed at all, using HOMs) are so unwieldy that I have never run across them in production code. In Haskell, they are both easy and routine. So I’m approximating this with “no HKTs in OCaml”.


HKTs indeed aren't a common pattern in OCaml, but first-class modules and similar patterns are common enough in Core. Your statement was that the type system was not powerful enough to express them at all. Whether or not it's practical is a bit different :)

You also said that their use of "functor" was probably the result of being French vs English, when I pointed out that in fact it came from Standard ML, which is a British design. Moreover, the creators of Haskell originally built typeclasses as an extension to Standard ML (1980s), and Functor was added in the same version of Haskell that Monad was (1.3, 1996).

The connection between ML functors (specifically the interface/signature) and typeclasses in Haskell is relatively clear, if you understand how they both work. I don't disagree that typeclasses are more general, but modules have their appeal as well. I'm pretty excited to see where Backpack ends up in Haskell, personally.


I definitely sufficiently hedged my original claim:

> OCaml's type system is not powerful/expressive enough that (to use the given example) abstracting over type constructors is likely to ever come up.

And I stand by that. It’s not likely to ever come up, even if it’s possible to kludge it.

You’re right about the name though - it seems like the anomalous use of “functor” predates francophone intervention.


It's weird to call it anomalous; ML came first in the programming sphere.

What you said is that it's not powerful or expressive enough, but my impression was that they were both expressible in terms of System Fω, an understanding I would like to correct if it's wrong.


> It's weird to call it anomalous; ML came first in the programming sphere.

Eilenberg and Mac Lane introduced functors (in the sense more or less used by Haskell) in the 40s.

> my impression was that they were both expressible in terms of System Fω

This is not correct anymore - Fω is insufficient to describe GADTs. You need coercions and equality constraints as well.

It's also approximately as useful a claim as "the languages are equally powerful because they're both Turing complete".

I'm referring to practicability, not theoretical possibility (and I've clarified as much several times now).


>Eilenberg and Mac Lane introduced functors (in the sense more or less used by Haskell) in the 40s.

Yes.

>This is not correct anymore - Fω is insufficient to describe GADTs. You need coercions and equality constraints as well.

Of course, OCaml has GADTs these days. I suppose neither of them are really Fω anymore, then.

>It's also approximately as useful a claim as "the languages are equally powerful because they're both Turing complete".

Agree to disagree. I think when discussing how powerful a language is, a rigorous notion like Turing completeness or position on the lambda cube is still significant, even if it's not the most immediately visible.

>I'm referring to practicability, not theoretical possibility (and I've clarified as much several times now).

Fair enough, I'll chalk this up to a misunderstanding.


I'd go a step further and say trying to learn all the convoluted language and terminology actually does very little to understand the benefits of functional programming in the real world and it should best be avoided.

It's, in my opinion, better to just grab a functional language, build some software with it, and learn about the benefits and disadvantages of functional languages by interacting with them.


It's not the jargon of functional programming, it's the jargon of type theory and PL design. So maybe start there?


As with so many things in life, a lot of it boils down to practice.

Are you just trying to read academic articles and blog posts? Have you tried working in one or more of the languages? Beyond "toy problems" to trying to solve larger problems?

The more you work in a language, the more you might start to understand the needs that underlie things like category theory (the mathematics of monoids, semigroups, monads, etc) and where other higher level abstractions start to play in how you want/need to write software.

There's also places to practice even in C++, Java, etc. A lot of functional programming is making its way "stealth" into even the most imperative OO languages. Are you handy with your language's map/filter/reduce libraries around iterables? Have you tried pushing that deeper towards other monads such as query transformation (Linq) or time (async/await, ReactiveX)? Have you thought about how those work under the hood? (Explored what an iterable generator or an async/await decompiles to?) Have you started yet to think in terms of high-level "lambdas", places where functions themselves can be reused by other objects/functions? (Beyond "callbacks", towards thinking of functions themselves as lego bricks to build with.)
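If it helps to see that pipeline idea concretely, a one-line Haskell sketch (sumOfEvenSquares is a made-up example):

    -- filter, then map, then reduce — the same shape as Linq or Streams
    sumOfEvenSquares :: [Int] -> Int
    sumOfEvenSquares = foldr (+) 0 . map (^ 2) . filter even

    -- sumOfEvenSquares [1..4] == 20   (4 + 16)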

There's also middle-ground languages such as F# or Clojure that you can use side by side (in some circumstances) with C# or Java (respectively) code, where maybe you can't work 100% in a functional language on a project, but you can get a compromise mix.

So much of understanding the "lingo" is recognizing the patterns and/or problems to which they refer, and a lot of the easiest way you will pick up patterns and see recurring patterns is practice.


I got comfortable with Lisp using http://www.4clojure.com/

Learning through games like this is a big help because they provide: 1. clear and approachable goals 2. clear feedback on whether you've achieved the goals

Maybe Clojure isn't what you're looking for but maybe there's a similar service for a language you'd like to learn.



I don't think this is different from OOP literature, where you may read something like "abstract factory pattern"; you just need to get familiar with the terminology. I would start reading Pierce's Types and Programming Languages and then any book on Haskell (you may also want to reinforce your abstract algebra :)


I never use those terms and my main programming go-to is Clojure.

"abstraction over type constructors" might have to do with typing rather than functional programming per se.


You run the risk of being lynched by the Haskell mob, with talk like that. But I agree, in principle.

The hardcore “sound types” crowd believes that anybody using dynamic types is doomed.

I must not have the same problems they do.

First class functions; closures; higher order functions; partial function application, seasoned with the occasional variadic vs fixed arity function, and I’m good.

One can learn the other 95% of the jargon to deal with 5% of the problems later.


> The hardcore “sound types” crowd believes that anybody using dynamic types is doomed.

What they believe is that "dynamic types" is a misnomer, and if you think your program is checking dynamic types, it is in fact doing runtime pattern matching over a variant record. There's nothing wrong with pattern matching and variant records; but conflating them with types is just nutty, and using them pervasively as part of the ordinary flow of code is no different than programming in old-style VB.


I suppose that you mean that it should be "checking types dynamically", rather than "checking dynamic types"?


I don’t think many people programming in dynamic languages would die on that hill. Avoiding types is kind of the allure, I think?


At the expense of having to write more tests for what a type system could have solved at compile time; the type system would likely catch more runtime logic bugs as well.

For the record I am a convert. Types have made my developer life much happier, especially working with unfamiliar code across multiple projects/teams. When you treat your builds as long term proofs your confidence level increases dramatically.


I don’t disagree :)


Doomed is a bit of a strong word. I personally prefer static types because they reduce the number of unit tests I have to write, and at a higher level (following from this) they can inform the design of a system in a way that encourages robustness. Algebraic data types are not really inherent in functional programming at all; as you mention you can do it just fine in a dynamic language and a good chunk of ADTs and type stuff translates to say Rust as well.


I found it equally tough because while there are a lot of resources to cover the basics and get you coding in Haskell, there isn't much to bridge the middle ground from there to the level the deep enthusiasts and experts are at. All I can say is keep trying and reach out for help on IRC channels or at a local functional programming meetup (and by local, these days you could remote in to any meetup you like!). There'll be people happy to help.

It’s been a while since I’ve dabbled with Haskell but I believe type constructors take a type and return a type (like List takes Int and returns List Int) so an abstraction over them might be an m where say m is a monad and it takes an a and gives you m a. m is an abstraction because it could be anything. A List, or Either, or State etc.


> type constructors take a type and return a type

Ordinary constructors, which everyone here is familiar with, construct values. They may take some arguments, which are themselves values.

A type constructor constructs types. It may take some arguments, which are themselves types. List is a great example of a type constructor with one argument. `Bool` is a type, and arguably a type constructor with no arguments. `Either` is a type constructor with two arguments.

In C++, `Pair` is a type constructor with two arguments; although there are more constraints on how you can use it (or at least were last time I delved deep in C++).

An example of "abstraction over type constructors" would be something that lets you pass in a type constructor as an argument, giving different behavior (whether parametric or ad-hoc - sorry, more jargon...) depending on what you pick.
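A minimal Haskell sketch of that last idea (TwoOf is a hypothetical name; its parameter f ranges over type constructors, not types):

    -- f has kind * -> *, i.e. it is a type constructor
    data TwoOf f a = TwoOf (f a) (f a)

    twoLists :: TwoOf [] Int
    twoLists = TwoOf [1, 2] [3]

    twoMaybes :: TwoOf Maybe String
    twoMaybes = TwoOf (Just "a") Nothing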


Collect the definitions that you are learning over time. Put them in Anki as multi-directional cloze deletions. Then, just do your Anki deck each day.

You will find that your fluency in almost any subject is improved with this practice.


Question to all the functional programming experts: When OOP got introduced people started building all these elaborate object hierarchies and design patterns only to find out after a while that simpler is better and most features should be used rarely or never.

Is there also a risk with FP too that people fall a little too much in love with “elegant” code only to find out later that maybe simpler is better? Reading some people writing about FP I feel there is some risk of people doing this.


> Is there also a risk with FP too that people fall a little too much in love with “elegant” code only to find out later that maybe simpler is better

I know some functional programmers who write complex code because they want to use the latest features of the type system. They get additional type safety, or better code factorization, but it results in code that is very hard to read and maintain, especially for people not as skilled as the initial programmer.


There's been a "simpler is better" conversation bouncing around the FP community as well lately.

Jumping on all the shiny-new-feature bandwagons that make code drift towards unnecessary complexity, even if it technically makes things "better" in some objective sense, is something a team working with these languages really needs to consider.


The two that I can personally think of are using higher-order functions unnecessarily (i.e. sometimes it is better just to use a boolean flag rather than handing in a callback) and too much indirection (though honestly I see this more in languages like Java than in functional languages).


I've outsmarted my future-self on several occasions. So yeah, there's a risk.


Generally in FP, “elegant” == simpler.


On that, I have seen a lot of TypeFunctorMapMutatorKeywordsBeingImported to shorten the code and make it elegant... of course from the standard library or modules.

When you could have written it yourself with one or two extra lines. Why? Is it simpler? Or is it unnecessarily abstracted?


I mean, that’s on a case by case basis. I personally tend to do most of my FP in OCaml and F#, which have less of the tendency towards infinite abstraction that people sometimes accuse Haskell programmers of. I don’t think the approach you describe is elegant, but this is highly subjective.


1. It's code reuse. Don't rewrite code that's already written.

2. It's harder to understand. "Why didn't this use the standard library function? Is it doing something different? It doesn't look like it, am I missing something?"


It’s definitely not _always_ just about code reuse, particularly when you start really stacking GHC pragmas and such.


Try a textbook? e.g. Types and Programming Languages by Benjamin C. Pierce.


I agree that TaPL is an excellent book, but I did not find it useful in explaining functional programming idioms. (It does do a great job of explaining typed lambda calculus, however.)


Excellent book that one. I'm working through it right now. It's dense, but good and approachable.


Most of the jargon denotes straightforward definitions. Pick the three most frequent terms, e.g. monad, monoid, applicative; write down the definition plus a small example on a card; lather, rinse, repeat a few times and you're golden.

Edit. For example, let's take 'catamorphism'. Read the Wikipedia entry [0] and it's a seemingly never ending jungle of more and more abstract terms. Homomorphism, initial algebra, F-algebra, endomorphism. This is easy: let's see what an F-algebra is [1] and we'll be on our way soon. "If C is a category, and F: C → C is an endofunctor of C, then an F-algebra is a tuple (A, α), where A is an object of C and α is a C-morphism F(A) → A.". Oh my God, this is hopeless.

Or, find a good description, with examples [2]. A 'catamorphism' card, abbreviated from [2]:

Catamorphism

A function that “collapses” a recursive type into a new value based on its structure. Visitor pattern.

Define a sum type Gift:

    type Book = {title: string; price: decimal}
    
    type Gift =
        | Book of Book
        | Boxed of Gift
Define a 'description' function over Gift:

    let rec description gift =
        match gift with 
        | Book book -> 
            sprintf "'%s'" book.title 
        | Boxed innerGift -> 
            sprintf "%s in a box" (description innerGift) 
    
Parametrize the function, creating a 'catamorphism':

    let rec cataGift fBook fBox gift =
        match gift with 
        | Book book -> 
            fBook book
        | Boxed innerGift -> 
            let innerGiftResult = cataGift fBook fBox innerGift
            fBox innerGiftResult

Refactor your 'description' function to use a 'catamorphism':

    let descriptionUsingCata gift =
        let fBook (book:Book) = 
            sprintf "'%s'" book.title 
        let fBox innerText = 
            sprintf "%s in a box" innerText
        // call the catamorphism
        cataGift fBook fBox gift

[0] https://en.wikipedia.org/wiki/Catamorphism

[1] https://en.wikipedia.org/wiki/F-algebra

[2] https://fsharpforfunandprofit.com/posts/recursive-types-and-...


> Read the Wikipedia entry [0] and it's a seemingly never ending jungle of more and more abstract terms.

Yup, category theory is often called "general abstract nonsense" precisely because it manages to talk about many disparate parts of math (or programming!) in a way that purposely avoids any reference to the specific. To usefully interpret the above, you specifically need to know that your domain often uses the category (C) of types (as objects) and functions (as morphisms), and that this category features certain generic types (such as Option, List, Array etc.) as endofunctors - and then literally plug all of those into the definition and work out the result.

It's exactly the difference between (in an elementary context) a word problem, vs. the general theory of equations of type foo which could be used to solve any number of word problems - except replicated at a higher level.


I like the way you frame CT. Never got too deep down this rabbit hole; the terminology is just too atrocious. I wonder though how useful it is in practice. It is easier for me to take a formula and generalize it along one dimension at a time as the needs arise, see the catamorphism example above. As opposed to correctly divining 5-10-15 very specific concretizations of an uber-abstract scheme to arrive at the same result. Echoes of premature abstraction.

Perhaps the world lacks CT for dummies. Teach it in primary school. New Math was not aggressive enough ;)


I'd argue that 99.9% of OCaml programmers have never heard of the term "catamorphism"!

Still cool to know there's a name for it, but I wouldn't write in a code review "why don't you refactor your description function using a catamorphism".


In Java terms, a "type constructor" would be a generic class like, say, `ArrayList<T>`.

A constructor can be viewed as a function that takes values for the fields of a datatype and gives a value of the datatype in return. Not all functions are constructors, but some are.

At the type level, if we squint a little, "Maybe" or "List" behave a bit like constructors. They take types like "Int" or "Bool" as parameters, and give types like "Maybe Int" and "List Bool" in return. Not all type-level "functions" are type constructors (for example, in Haskell type families are not type constructors) but some are.
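In GHCi you can ask for a type constructor's "kind", which makes the analogy explicit (output from a typical GHC session; newer versions may print Type instead of *):

    ghci> :kind Bool
    Bool :: *
    ghci> :kind Maybe
    Maybe :: * -> *
    ghci> :kind Either
    Either :: * -> * -> *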


I also dislike unnecessary use of programming jargon. A type constructor is all the possible ways to make a value of that type.

For example:

  type answer = | Yes | No | Maybe 
The type is 'answer'. The constructors are Yes, No, and Maybe (these are actually called type variants but now we're getting into jargon). I presume an 'abstraction over type constructors' means you want to generalize that type, so you can use the constructors to produce an abstract type that works with any type but I've never heard that specific term.


This is wrong, those variants are value constructors. The type constructor there is “answer,” there’s only one of them.


So you're saying type constructors take types as their parameters and return new types, whereas in the variant type in my example, the variants are value constructors that make up the values of type 'answer'?


That's right. Type constructors are related to generics. For example if you have a generic type `List<A>` you could think of `List` as a type constructor.

In most languages, however, that's not a construct you can do much of anything with.
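In Haskell syntax the distinction looks like this (type constructor on the left of the =, value constructors on the right; names are illustrative):

    -- 'Answer' is a (nullary) type constructor;
    -- Yes, No, Perhaps are its value constructors.
    data Answer = Yes | No | Perhaps

    -- 'List' is a type constructor taking one type parameter;
    -- Nil and Cons are its value constructors.
    data List a = Nil | Cons a (List a)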


Keep trying to understand, asking questions, trying stuff first hand and eventually it'll come! The "Hask anything" sticky in /r/haskell is a friendly & pseudonymous way to ask questions too.

One thing I've noticed working in Haskell: everyone is an expert in different things. Very rarely do I meet a Haskeller whose knowledge or interests subsume another's. That's part of the fun. Everybody doesn't know some terms!


I got comfortable with Lisp/Scheme using DrRacket. It is actually very helpful for beginners. I think you will benefit by learning functional programming this way, scheme is very simple and effective.

https://racket-lang.org/


Here's a reference you may find useful

John A De Goes - A Glossary of Functional Programming

http://degoes.net/articles/fp-glossary


I had the same problem and decided to begin with fundamentals. I read "types and programming languages" by Benjamin C. Pierce and then things started to make sense.


I guess the answer is simply: just study functional programming. 30 minutes into any basic Haskell course (e.g. the wikibook, or Real World Haskell) and you would know what a type constructor is. I don't know, you can't expect to magically know things without studying them first? :) What you mean when you say "functional programming speak" is just common technical terms with a specific technical meaning. You wouldn't complain you don't understand the meaning of "Minkowski metric" before learning relativity :p


Learn Map, Filter, Reduce. Build a toy project with a functional language.


I took a Haskell class in college for that

It did not help much.


> So how does a programmer from the non-functional world become fluent in understanding a sentence such as "abstraction over type constructors"?

We already have very primitive (and restrictive) imperative OO languages which are good at teaching the basics to new programmers. My observation is, FP lacks such gateway languages. Even the simplest FP-first language is intimidating.

So my answer to your question is, a good strategy to learn these can be carving out a relatively small portion of a language (i.e. Haskell with only algebraic types and pattern matching) and working only with it for a period of time. Doing that doesn't teach terminology but exposes what Haskell is really obsessed about. For the curious, it is expressing type relationships as flexibly as the numeric arithmetic we know from high school.

Just like high-school arithmetic, several recurring patterns (often called formulas in Math classes) have their special names. Almost all programming languages are good at closely matching that syntax in their numeric expressions. It is unfortunately not the case for types though. In programming,

this functional type (pseudo code):

  A = (X + Y) | Z
tend to be expressed as:

  interface A {
  }

  class A1 implements A {
    X x;
    Y y;
  }

  class A2 implements A {
    Z z;
  }
or as:

  union A {
    struct A1 {
      X x;
      Y y;
    } a1;

    struct A2 {
      Z z;
    } a2;
  };
In both examples, it is longer to write the same "meaning" in OOP than FP, especially when there are abstract methods involved.
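For comparison, a one-line Haskell version of the pseudo code above (assuming X, Y, and Z are existing types):

    data A = A1 X Y | A2 Z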

This difference in verbosity (aesthetically subjective) bugs an FP programmer more and more as they get used to their language. When that happens, people start naming things. Having an abstract type of `Collection` is no longer enough because we realize `Collection`, `Mappable`, `Traversable`, `Generator`, `Optional`, `Atomic`, `Pair` and `Tuple` all share the similarity of:

* wrapping a type,

* having a way to access the wrapped value(s),

* and the wish to support all operations the wrapped value supports without an unwrap, composable out of the box.

We give this pattern a name, the infamous "Functor" (incorrectly, but who cares), and axiomatize those assumptions formally in the target language, so that it is no longer reimplemented again and again. The word "Functor" itself otherwise has no meaning; it is letter salad.

To learn those names effectively, you must be comfortable enough with the basics so that you get annoyed by the repetition and start researching new names as you stumble upon new patterns.

In Haskell, "abstraction over type constructors" usually means (in OO terms) "declare a new generic interface and implement that in the abstract type so that you don't have to reimplement it non-genericly in the concrete type".


> So how does a programmer from the non-functional world become fluent in understanding a sentence such as "abstraction over type constructors"?

Incoming essay...

Pure functional programming is fundamentally about software components called "pure functions", or just "functions" for short. (I'm quoting the term because they're not the same as what are called functions in other languages. I'd call those other things "procedures" instead, but alas.) Other programming communities rally around components like "objects", and that's fine. The important thing is to pick something to break your system down into, and here, we're talking about "functions".

"Functions" are simpler kinds of components than objects or procedures: the ways in which they behave are much more limited, so they're easier to reason about. When a "function" is interacted with, it can't perform any side effects, so interacting with the "function" multiple times gives the same behavior every time. Procedures can manipulate global state, and objects can manipulate internal state, so reasoning about them takes more effort. You have to keep track of time: what happened before this interaction?

"Functions" have two interfaces: an input side and an output side. When a "function" receives a value on the input side, it will always emit a value on the output side. We can wire "functions" together by connecting the input of one to the output of another. The result is a system with a single free input side and a single free output side -- that is, it's also a "function". This makes it very easy to wire up lots of "functions".

Usually, a "function" has certain expectations of the input values that it receives, as well as some guarantees about the output values it emits. In a dynamically typed language, when those expectations are not met, a runtime error is emitted. This is fine -- it's just one way to handle failed expectations. But sometimes we can formalize those expectations statically, and let the compiler check up-front whether those expectations are met. This does add some complexity: you might not be able to wire up two "functions" if one can't satisfy the other's expectations. Statically-typed programmers have decided that they're willing to put up with that.

In a statically-typed world, the input and output interfaces of our "function" software components can be tagged with specifications. Functional languages are often judged by how expressive these specifications can be; a specification language like this is called a "type system". You can have type systems for components that are not "functions", but the rules of composition become more complex. (For "objects", class systems are quite popular.)

Most type systems allow you to break down an interface specification into smaller pieces, which themselves are valid specifications. That means we now have two kinds of components in our system: "functions", which exist at runtime, and types, which exist at compilation time. The rules for how types compose can be more complicated than those for "functions", but we're usually okay with that, as long as "functions" are kept simple.

Some type systems allow a value to satisfy multiple types. For example, class-based systems allow this via subtyping relations. Others might describe types as predicates (truth functions) on a pre-existing universe of values (e.g. TypeScript). Pure functional programming typically requires mutual exclusion: a value cannot be part of multiple types. We often say that the type defines its values because of this.

The low bar for a static type system is "algebraic types". This means that we have two ways to compose types, conventionally called "product" and "sum". Most type systems have products, but sums are historically rarer. Values of a product type `X * Y` look like `(x, y)`, and we can pull out either element of the pair. Values of a sum type `X + Y` look like either `Left x` or `Right y`, and we can ask which one it is and do something different depending on the result.

Java and C++ do not have sum types. You can approximate their specifications to some extent using the Visitor pattern or discriminated unions, but the implementations in these languages are verbose, and unpleasant to read and write. Rust and Haskell do have sum types, so specifications of that nature are used much more readily.

We can generalize further. We can think of the product and sum of types as "type functions", or "type constructors", that take a pair of types as input and produce a type as output. Many type systems allow you to define your own "type functions". In Java, these are generic classes: "List<T>" is a type function from some type T to the type of lists containing that type. In C++, these are templates. But in most pure functional programming, static type functions look just like runtime value functions, since they behave just like them too. It's just a matter of when they're evaluated.
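In Haskell, for example, even a humble type synonym is a compile-time "type function":

    -- applied to Int, Pair evaluates to (Int, Int)
    type Pair a = (a, a)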

Once again, when you have functions, you often want to place demands on their inputs, and guarantees on their outputs. Type functions are no different. There are many possible ways to "type types", as it were, but a popular approach is by using "traits" or "typeclasses". Just as with types and values, a typeclass places requirements upon the types that inhabit it. (But a type can also satisfy multiple typeclasses.)

For instance, "Monoid" is a typeclass on types with two associated items: a "zero" value of that type, and a "concat" function that takes two values of the type and gives another value of that type. If you know that a type is a Monoid, then you know that these associated items are also available to you, regardless of what the actual type is. (For instance, you can write a "concatAll" function that iterates over a list of elements of a monoid and concatenates them together regardless of what the monoid type actually is.)

With typeclasses in mind, instead of taking a "list of X" or a "set of X", you might abstract over all such things as "collections of X". But you don't want to say what a collection is for every possible X. We need to abstract over type constructors. A "collection" is any type such that, given a type X, you get a type with certain associated items expected of all collections. Java approximates this with subtyping.

"Functor" is, in some ways, a generalization of "Collection". A type function that is a "Functor" is a type that, if you have a value of that type over Xs, and you have a function transforming Xs into Ys, then there must be a mechanism to get a value of that type over Ys instead. We call this mechanism "map", and it can be instantiated over any collection. Got a list of X and a function from X to Y? We've got a way to get a list of Y. Got a set of X and a function from X to Y? We've got a way to get a set of Y.

Because "set" and "list" are type constructors, we say that we are abstracting over type constructors to get a single statement about all such things. "For every Functor F, if I have an F containing Xs and a function transforming Xs to Ys, then I can get an F containing Ys."


Give up while you can. Immutable data classes and functional pipelines are great; everything else isn't worth it. Lots of people say FP is amazing, but very few people use it in the real world because, like you say, it's unintelligible.


This comment is a textbook example of the Blub Paradox.


I had a very visceral experience of Blub in my formative years. I don't recall how old I was, certainly under 11. I was explaining to my father the many things I could do in LOGO, and he asked about arrays. LOGO supports arrays, of course, but it was beyond anything I'd dealt with. I remember that feeling of entirely not understanding why you'd want to bother with arrays instead of just making more variables.


I had a similar experience also at age 11 - I was learning C++ to solve my math homework problems. I heard about Python, tried it out, and went on IRC to ask how I could write "goto" statements in Python. I couldn't believe it when people told me they weren't necessary, and linked "Go To Considered Harmful"!


Oh man, some of my LOGO programs had so many gotos.


For the record, the Blub Paradox is wrong. It hinges on being able to rank computer languages on a one-dimensional axis labeled "power". It turns out that you can't actually do that.

For example, the Lisp people are sure they're looking down when they look at Haskell - no macros. But the Haskell people are also sure they're looking down when they look at Lisp - no decent type system. But if they're both looking down at the other, that's kind of a problem for the Blub Paradox.

Here's the other problem with it: I don't care about "power" in languages in any platonic sense; I care about power to write the specific program I'm trying to write. That includes the language's expressiveness, sure, but it also includes available libraries and the rest of the ecosystem, tutorials if there's parts I have to learn, the ability to find others who are proficient if the program is big enough to need a team, and so on. The power to write my specific program depends on the problem, not just on the language.


I think both Lispers and Haskellers would be willing to agree that they're both more powerful on most axes than, say, Java.


You're welcome. I like to think it's more than that though. I recognize Spark as an FP-built app, but it's a rarity among thousands of large mainstream applications. Do you have many non-trivial examples of applications written in a functional style?


Thanks for introducing me to the Blub Paradox!

Though on first glance, doesn't it suggest that all programming features are good, and only their absence is bad? Saint-Exupéry's idea of perfection would disagree.


No, because it is possible to add power without adding features. This is typically called generality. In fact, adding features really can’t do much to change how fundamentally expressive a language is.


I am not sure. Approximately 5-10% of the tools I interact with are written in Haskell. So I think you are wrong.

A lot of webdev stuff is moving to functional paradigms: lazy evaluation, streams/iterator patterns combined with map/reduce/filter (FRP). You can pick out any modern UI framework and you would be drowning in FP soon enough.

Many attempts at building a functional language for web - purescript, elm, reasonml etc.


I use FP everyday, but I'm not an academic and I couldn't tell you what a monad is. But I do know about closures, lambdas, filter, map, reduce, arity and other basic concepts. I prefer to write functions instead of classes (JS/TS). So I'm not a wizard, but FP is helpful even at an elementary level



