
Ask HN: How to be fluent in functional language speak? - _448
When I read academic articles or blog posts written by functional programmers, it feels alien to me because of the terms used in their writing. I am not just talking about monads, monoids etc, but the general programming terminology itself is different to what I regularly hear and read as an imperative OO programmer. Take, e.g., this sentence: "abstraction over type constructors"; whenever programmers from the C++, Java etc world read that, they are sure to say "eh?!?" and fall off their chair.

So how does a programmer from the non-functional world become fluent in understanding a sentence such as "abstraction over type constructors"?
======
centimeter
"abstraction over type constructors" isn't very meaningful without more
context anyway, but they're probably referring to something like the
following:

non-abstracted:

    
    
        map :: (a -> b) -> List a -> List b
    

abstracted:

    
    
        map :: Functor f => (a -> b) -> f a -> f b
    

"List" is a type constructor (it takes a type and returns a type), but we can
write a "map" function that works over more types than just List - for
example, Maps and Options and so on (aka "Functors" in Haskell parlance).
We've "abstracted (the function map) over the type constructor (of the data
structure being mapped over)".
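
To make that concrete, here's a minimal sketch (using only standard Functor instances) of the same `fmap` working across several type constructors:

```haskell
import Data.Char (toUpper)

-- One abstracted "map" (fmap) works over any Functor:
main :: IO ()
main = do
  print (fmap (+1) [1, 2, 3])                             -- f = List
  print (fmap (+1) (Just 41))                             -- f = Maybe
  print (fmap toUpper (Right 'a' :: Either String Char))  -- f = Either String
```

Each call picks a different `f`, but the function being mapped and the call site look identical.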

To answer your more general question, if you just learn functional programming
on a language with a powerful type system (Haskell is probably the most
germane example) you will learn at least some of these terms. Most of them
turn out not to be very deep, just hard to explain (for reasons unclear to
those who understand them), but still very useful.

~~~
jimbokun
> We've "abstracted (the function map) over the type constructor (of the data
> structure being mapped over)".

Then why didn't they just call those types "Mappables"?

~~~
bojo
I think this is a great question because it tends to come up a lot.

What do you call the following common operations?

    
    
      - 1 + 0
      - 1 * 1
      - "string" ++ ""
      - [1, 2, 3] ++ []
    

As far as I know there is no term for this grouping of operations outside of
functional programming languages which use category theory.

We call it a "monoid" based on the following precise definition:

> In abstract algebra, a branch of mathematics, a monoid is an algebraic
> structure with a single associative binary operation and an identity
> element[1].

I used identity elements specifically in my examples above since they tend to
be a good way to grasp what is going on.
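
In Haskell those four operations really are one operation, `<>`, each paired with its identity `mempty` (a sketch; numbers need a newtype like `Sum` or `Product` to say which monoid you mean):

```haskell
import Data.Monoid (Product (..), Sum (..))

main :: IO ()
main = do
  print (getSum (Sum 1 <> mempty))          -- 1 + 0
  print (getProduct (Product 1 <> mempty))  -- 1 * 1
  print ("string" <> mempty)                -- "string" ++ ""
  print ([1, 2, 3] <> mempty)               -- [1, 2, 3] ++ []
```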

It turns out a "functor" has more properties than just being "mappable", which
is why we like using the more precise term in our literature. However, it's
still a decent intuition if you aren't familiar with the concept.

For example, in the parent comment the 'f' in the second line of code shows
that it is _some kind of structure_ with a Functor instance, which implies
that this structure can be mapped over. This is a nice property because
without caring about the _implementation_, we can still do functor-specific
operations over abstract structures.

[1]
[https://en.wikipedia.org/wiki/Monoid](https://en.wikipedia.org/wiki/Monoid)

~~~
twic
"binary operators with an identity value"

~~~
wLMeOv2qjsFOjZG
the binary operation should also be associative (e.g. subtraction is no good)

------
mywittyname
Practice programming in a functional language. They enforce a lot of
restrictions that you won't be used to, and in overcoming these restrictions,
you'll begin to learn and understand the reason for the jargon.

It's really hard to explain "abstraction over type constructors" in a pithy
manner despite being a relatively simple statement because the obvious follow-
up is "but why?" And the answer to that question isn't going to satisfy you
without an understanding of the functional programming environment.

It's exactly like when novice programmers get thrown into Java land in CS101
and you have to tell them to just ignore the whole class, public, private,
static, etc. jargon. They need to gain experience programming in that
environment to really appreciate those concepts.

------
mamcx
Simply, read this:

[https://fsharpforfunandprofit.com/fppatterns/](https://fsharpforfunandprofit.com/fppatterns/)

Then:

[https://fsharpforfunandprofit.com/rop/](https://fsharpforfunandprofit.com/rop/)

Then if you want to go further:

[https://fsharpforfunandprofit.com/series/thinking-
functional...](https://fsharpforfunandprofit.com/series/thinking-
functionally.html)

Ok, almost all of that site is truly good.

~~~
Twisol
Scott Wlaschin (the author of that site) does an amazing job of explaining
functional programming concepts, and I highly recommend both his site and his
book, "Domain Modeling Made Functional". But he does usually try to _avoid_
words like "monad" and "functor", so you might not pick up fluency in the
mathematically-inspired vocabulary from this particular resource.

------
axelerator
I think the first step here is to put your expectations into perspective.
Functional programming also is a range of complexity. If you're not used to
functional programming and try to read articles targeted at the high-end
you'll naturally struggle to understand them.

I still remember "not getting" OO at university in my first lecture. I had
programmed BASIC, C, Pascal, and PHP before. And if someone had tried to
explain the idea to me by means of the visitor pattern, it would surely have
made me say "eh?!?" too. So allow yourself to start slowly.

I'm also still dabbling in functional programming - at least that's what it
feels like reading about monoids etc - but by actually using functional
programming languages in my more serious spare time projects I feel I get a
better and better understanding. And "those" articles start to speak more to
me.

------
marceloabsousa
You're not alone. Even functional programmers struggle to read those papers. I
found that in the functional programming world there are a lot of concepts
being thrown around under different names.

I think your best option is to directly contact authors or very good educators
and ask them exactly what they mean.

------
yodsanklai
First, you probably don't need to read academic articles to learn about
functional programming. There's really nothing magical or complicated with
functional programming. For instance, Scheme or OCaml are routinely used as
beginner programming languages in schools around the world.

You can start with the basic concepts and eventually build your way up to more
abstract constructs when you realize you need them. But even there, you don't
need to know the academic terminology (or if you do, it'll make more sense
when you get familiar with the language).

I frequently use monads and functors, yet I don't know much about category
theory (and the little I know doesn't make me a better programmer).

~~~
centimeter
Not sure about Scheme, but OCaml's type system is not powerful/expressive
enough that (to use the given example) abstracting over type constructors is
likely to ever come up. (No HKTs in OCaml.) Also, OCaml uses a naming scheme
for certain things that I haven't seen used elsewhere - e.g. what OCaml calls
a "Functor" is a parameterized module, not closely related to Functors in
other languages that use that abstraction. I suspect this is a result of the
French/English language divide in the initial development of OCaml vs all
other major functional languages.

~~~
jolux
OCaml functors come from Standard ML, which predates Haskell. And you can
definitely use them to implement HKTs — they are comparable in expressiveness
to single-parameter typeclasses.

~~~
centimeter
I use OCaml at work ~every day, and HKTs (to the extent they can be expressed
at all, using HOMs) are so unwieldy that I have never run across them in
production code. In Haskell, they are both easy and routine. So I’m
approximating this with “no HKTs in OCaml”.

~~~
jolux
HKTs indeed aren't a common pattern in OCaml, but first-class modules and
similar patterns are common enough in Core. Your statement was that the type
system was not powerful enough to express them at all. Whether or not it's
practical is a bit different :)

You also said that their use of "functor" was probably the result of being
French vs English, when I pointed out that in fact it came from Standard ML,
which is a British design. Moreover, the creators of Haskell originally built
typeclasses as an extension to Standard ML (1980s), and Functor was added in
the same version of Haskell that Monad was (1.3, roughly 1996).

The connection between ML functors (specifically the interface/signature) and
typeclasses in Haskell is relatively clear, if you understand how they both
work. I don't disagree that typeclasses are more general, but modules have
their appeal as well. I'm pretty excited to see where Backpack ends up in
Haskell, personally.

~~~
centimeter
I definitely sufficiently hedged my original claim:

> OCaml's type system is not powerful/expressive enough that (to use the given
> example) abstracting over type constructors is likely to ever come up.

And I stand by that. It’s not likely to ever come up, even if it’s possible to
kludge it.

You’re right about the name though - it seems like the anomalous use of
“functor” predates francophone intervention.

~~~
jolux
It's weird to call it anomalous; ML came first in the programming sphere.

What you said is that it's not powerful or expressive enough, but my
impression was that they were both expressible in terms of System Fω, an
understanding I would like to correct if it's wrong.

~~~
centimeter
> It's weird to call it anomalous; ML came first in the programming sphere.

Eilenberg and Mac Lane introduced functors (in the sense more or less used by
Haskell) in the 40s.

> my impression was that they were both expressible in terms of System Fω

This is not correct anymore - Fω is insufficient to describe GADTs. You need
coercions and equality constraints as well.

It's also approximately as useful a claim as "the languages are equally
powerful because they're both Turing complete".

I'm referring to practicability, not theoretical possibility (and I've
clarified as much several times now).

~~~
jolux
>Eilenberg and Mac Lane introduced functors (in the sense more or less used by
Haskell) in the 40s.

Yes.

>This is not correct anymore - Fω is insufficient to describe GADTs. You need
coercions and equality constraints as well.

Of course, OCaml has GADTs these days. I suppose neither of them are really Fω
anymore, then.

>It's also approximately as useful a claim as "the languages are equally
powerful because they're both Turing complete".

Agree to disagree. I think when discussing how powerful a language is, a
rigorous notion like Turing completeness or position on the lambda cube is
still significant, even if it's not the most immediately visible.

>I'm referring to practicability, not theoretical possibility (and I've
clarified as much several times now).

Fair enough, I'll chalk this up to a misunderstanding.

------
qppo
It's not the jargon of functional programming, it's the jargon of type theory
and PL design. So maybe start there?

------
WorldMaker
As with so many things in life, a lot of it boils down to practice.

Are you just trying to read academic articles and blog posts? Have you tried
working in one or more of the languages? Beyond "toy problems" to trying to
solve larger problems?

The more you work in a language, the more you might start to understand the
needs that underlie things like category theory (the mathematics of monoids,
semigroups, monads, etc) and where other higher level abstractions start to
play in how you want/need to write software.

There are also places to practice even in C++, Java, etc. A lot of functional
programming is making its way "stealth" into even the most imperative OO
languages. Are you handy with your language's map/filter/reduce libraries
around iterables? Have you tried pushing that deeper towards other monads such
as query transformation (LINQ) or time (async/await, ReactiveX)? Have you
thought about how those work under the hood? (Explored what an iterable
generator or an async/await decompiles to?) Have you started yet to think in
terms of high-level "lambdas", places where functions themselves can be reused
by other objects/functions? (Beyond "callbacks", towards thinking of functions
themselves as Lego bricks to build with.)

There are also middle-ground languages such as F# or Clojure that you can use
side by side with C# or Java (respectively) in circumstances where maybe you
can't work 100% in a functional language on a project, but you can get a
compromise mix.

So much of understanding the "lingo" is recognizing the patterns and/or
problems to which the terms refer, and the easiest way to pick up and
recognize recurring patterns is practice.

------
k00b
I got comfortable with Lisp using
[http://www.4clojure.com/](http://www.4clojure.com/)

Learning through games like this is a big help because they provide: 1. clear
and approachable goals, and 2. clear feedback on whether you've achieved the goals.

Maybe Clojure isn't what you're looking for but maybe there's a similar
service for a language you'd like to learn.

------
phekunde
If you are coming from imperative land (more specifically C++) then nothing
beats articles and videos by Dr. Bartosz Milewski:

[https://bartoszmilewski.com/2014/10/28/category-theory-
for-p...](https://bartoszmilewski.com/2014/10/28/category-theory-for-
programmers-the-preface)

[https://www.youtube.com/watch?v=I8LbkfSSR58&list=PLbgaMIhjbm...](https://www.youtube.com/watch?v=I8LbkfSSR58&list=PLbgaMIhjbmEnaH_LTkxLI7FMa2HsnawM_)

[https://www.youtube.com/playlist?list=PLbgaMIhjbmElia1eCEZNv...](https://www.youtube.com/playlist?list=PLbgaMIhjbmElia1eCEZNvsVscFef9m0dm)

[https://www.youtube.com/playlist?list=PLbgaMIhjbmEn64WVX4B08...](https://www.youtube.com/playlist?list=PLbgaMIhjbmEn64WVX4B08B4h2rOtueWIL)

------
pera
I don't think this is any different in OOP literature, where you may read
something like "abstract factory pattern"; you just need to get familiar with
the terminology. I would start by reading Pierce's _Types and Programming
Languages_ and then any book on Haskell (you may also want to brush up on your
abstract algebra :)

------
dominotw
I never use those terms, and my main programming go-to is Clojure.

"abstraction over type constructors" might have to do with typing and not be
related to functional programming.

~~~
Roboprog
You run the risk of being lynched by the Haskell mob, with talk like that. But
I agree, in principle.

The hardcore “sound types” crowd believes that anybody using dynamic types is
doomed.

I must not have the same problems they do.

First class functions; closures; higher order functions; partial function
application, seasoned with the occasional variadic vs fixed arity function,
and I’m good.

One can learn the other 95% of the jargon to deal with 5% of the problems
later.

~~~
zozbot234
> The hardcore “sound types” crowd believes that anybody using dynamic types
> is doomed.

What they believe is that "dynamic types" is a misnomer, and if you think your
program is checking dynamic types, it is in fact doing runtime pattern
matching over a variant record. There's nothing wrong with pattern matching
and variant records; but conflating them with types is just nutty, and using
them pervasively as part of the ordinary flow of code is no different than
programming in old-style VB.

~~~
jolux
I don’t think many people programming in dynamic languages would die on that
hill. Avoiding types is kind of the allure, I think?

~~~
bojo
At the expense of having to write more tests for things a type system could
have caught at compile time; a type system will likely catch more runtime
logic bugs as well.

For the record I am a convert. Types have made my developer life much happier,
especially working with unfamiliar code across multiple projects/teams. When
you treat your builds as long term proofs your confidence level increases
dramatically.

~~~
jolux
I don’t disagree :)

------
quickthrower2
I found it equally tough because while there are a lot of resources to cover
the basics and get you coding in Haskell, there isn't much to bridge the middle
ground from there to the level the deep enthusiasts and experts are at. All I
can say is keep trying and reach out for help on IRC channels or at a local
functional programming meetup (and by local, these days you could remote in to
any meetup you like!). There'll be people happy to help.

It’s been a while since I’ve dabbled with Haskell but I believe type
constructors take a type and return a type (like List takes Int and returns
List Int) so an abstraction over them might be an m where say m is a monad and
it takes an a and gives you m a. m is an abstraction because it could be
anything. A List, or Either, or State etc.
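
To make that concrete, a tiny sketch (the function name is made up for illustration): `wrapTwice` doesn't care which type constructor `m` is, only that it's a Monad.

```haskell
-- 'wrapTwice' is abstracted over the type constructor m:
wrapTwice :: Monad m => a -> m (m a)
wrapTwice x = pure (pure x)

main :: IO ()
main = do
  print (wrapTwice 'x' :: [[Char]])            -- pick m = List
  print (wrapTwice 'x' :: Maybe (Maybe Char))  -- pick m = Maybe
```

The caller picks the concrete `m` at the use site; the definition stays abstract.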

~~~
dllthomas
> type constructors take a type and return a type

Ordinary constructors, which everyone here is familiar with, construct values.
They may take some arguments, which are themselves values.

A type constructor constructs types. It may take some arguments, which are
themselves types. List is a great example of a type constructor with one
argument. `Bool` is a type, and arguably a type constructor with no arguments.
`Either` is a type constructor with two arguments.

In C++, `std::pair` (a class template) is a type constructor with two
arguments; although there are more constraints on how you can use it (or at
least there were last time I delved deep into C++).

An example of "abstraction over type constructors" would be something that
lets you pass in a type constructor as an argument, giving different behavior
(whether parametric or ad-hoc - sorry, more jargon...) depending on what you
pick.
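
To make "pass in a type constructor as an argument" concrete, here's a small sketch (the names `Wrapped` and `describe` are made up for illustration):

```haskell
-- 'Wrapped' takes a type constructor f (not a plain type) as its parameter:
newtype Wrapped f = Wrapped (f Int)

-- 'describe' works for any f we can fold over, ad-hoc via the constraint:
describe :: Foldable f => Wrapped f -> String
describe (Wrapped xs) = "contains " ++ show (length xs) ++ " element(s)"

main :: IO ()
main = do
  putStrLn (describe (Wrapped [1, 2, 3]))  -- pick f = List
  putStrLn (describe (Wrapped (Just 7)))   -- pick f = Maybe
```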

------
moosey
Collect the definitions that you are learning over time. Put them in Anki as
multi-directional cloze deletions. Then just do your Anki deck each day.

You will find that your fluency in almost any subject is improved with this
practice.

------
Ididntdothis
Question to all the functional programming experts: When OOP got introduced
people started building all these elaborate object hierarchies and design
patterns only to find out after a while that simpler is better and most
features should be used rarely or never.

Is there also a risk with FP too that people fall a little too much in love
with “elegant” code only to find out later that maybe simpler is better?
Reading some people writing about FP I feel there is some risk of people doing
this.

~~~
jolux
Generally in FP, “elegant” == simpler.

~~~
searchableguy
On that note, I have seen a lot of TypeFunctorMapMutator keywords being
imported to shorten the code and make it "elegant"... of course from the
standard library or modules.

When you could have written it yourself with one or two extra lines. Why? Is
it simpler? Or is it unnecessarily abstracted?

~~~
chowells
1. It's code reuse. Don't rewrite code that's already written.

2. It's harder to understand. "Why didn't this use the standard library
function? Is it doing something different? It doesn't look like it, am I
missing something?"

~~~
jolux
It’s definitely not _always_ just about code reuse, particularly when you
start really stacking GHC pragmas and such.

------
cjbprime
Try a textbook? e.g. Types and Programming Languages by Benjamin C. Pierce.

~~~
WkndTriathlete
I agree that TaPL is an excellent book, but I did not find it useful in
explaining functional programming idioms. (It does do a great job of
explaining typed lambda calculus, however.)

------
pacala
Most of the jargon denotes straightforward definitions. Pick the three most
frequent terms, e.g. monad, monoid, applicative; write down the definition
plus a small example on a card; lather, rinse, repeat a few times and you're
golden.

Edit: For example, let's take 'catamorphism'. Read the Wikipedia entry [0] and
it's a seemingly never-ending jungle of more and more abstract terms:
homomorphism, initial algebra, F-algebra, endomorphism. This is easy, let's
see what an F-algebra is [1] and we'll be on our way soon. "If C is a
category, and F: C → C is an endofunctor of C, then an F-algebra is a tuple
(A, α), where A is an object of C and α is a C-morphism F(A) → A." Oh my God,
this is hopeless.

Or, find a good description, with examples [2]. A 'catamorphism' card,
abbreviated from [2]:

Catamorphism

A function that “collapses” a recursive type into a new value based on its
structure. Visitor pattern.

Define a sum type Gift:

    
    
        type Book = {title: string; price: decimal}
        
        type Gift =
            | Book of Book
            | Boxed of Gift
    

Define a 'description' function over Gift:

    
    
        let rec description gift =
            match gift with 
            | Book book -> 
                sprintf "'%s'" book.title 
            | Boxed innerGift -> 
                sprintf "%s in a box" (description innerGift) 
        

Parametrize the function, creating a 'catamorphism':

    
    
        let rec cataGift fBook fBox gift =
            match gift with 
            | Book book -> 
                fBook book
            | Boxed innerGift -> 
                let innerGiftResult = cataGift fBook fBox innerGift
                fBox innerGiftResult
    
    

Refactor your 'description' function to use a 'catamorphism':

    
    
        let descriptionUsingCata gift =
            let fBook (book:Book) = 
                sprintf "'%s'" book.title 
            let fBox innerText = 
                sprintf "%s in a box" innerText
            // call the catamorphism
            cataGift fBook fBox gift
    
    

[0]
[https://en.wikipedia.org/wiki/Catamorphism](https://en.wikipedia.org/wiki/Catamorphism)

[1]
[https://en.wikipedia.org/wiki/F-algebra](https://en.wikipedia.org/wiki/F-algebra)

[2] [https://fsharpforfunandprofit.com/posts/recursive-types-
and-...](https://fsharpforfunandprofit.com/posts/recursive-types-and-
folds/#catamorphisms)

~~~
zozbot234
> Read the Wikipedia entry [0] and it's a seemingly never ending jungle of
> more and more abstract terms.

Yup, category theory is often called "general abstract nonsense" precisely
because it manages to talk about many disparate parts of math (or
programming!) in a way that purposely avoids any reference to the specific. To
usefully interpret the above, you _specifically_ need to know that your domain
often uses the category (_C_) of _types_ (as _objects_) and _functions_ (as
_morphisms_), and that this category features certain generic types (such as
Option, List, Array etc.) as _endofunctors_ - and then literally plug all of
those into the definition and work out the result.

It's exactly the difference between (in an elementary context) a word problem,
vs. the general theory of equations of type foo which could be used to solve
any number of word problems - except replicated at a higher level.

~~~
pacala
I like the way you frame CT. I never got too deep down this rabbit hole; the
terminology is just too atrocious. I wonder, though, how useful it is in
practice. It is easier for me to take a formula and generalize it along one
dimension at a time as the needs arise (see the catamorphism example above)
than to correctly divine 5-10-15 very specific concretizations of an
uber-abstract scheme to arrive at the same result. Echoes of premature
abstraction.

Perhaps the world lacks CT for dummies. Teach it in primary school. New Math
was not aggressive enough ;)

------
danidiaz
In Java terms, a "type constructor" would be a generic class like, say,
`ArrayList<T>`.

A constructor can be viewed as a function that takes values for the fields of
a datatype and gives a value of the datatype in return. Not all functions are
constructors, but some are.

At the type level, if we squint a little, "Maybe" or "List" behave a bit like
constructors. They take types like "Int" or "Bool" as parameters, and give
types like "Maybe Int" and "List Bool" in return. Not all type-level
"functions" are type constructors (for example, in Haskell type families are
not type constructors) but some are.
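
To put the `ArrayList<T>` analogy in Haskell terms (a sketch; `MyList` is a hand-rolled stand-in for the built-in list type):

```haskell
-- Like Java's ArrayList<T>: 'MyList' alone is not a type but a type
-- constructor of kind Type -> Type; 'MyList Char' is a type.
data MyList a = Nil | Cons a (MyList a)

len :: MyList a -> Int
len Nil         = 0
len (Cons _ xs) = 1 + len xs

main :: IO ()
main = print (len (Cons 'x' (Cons 'y' Nil)))  -- prints 2
```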

------
Durgasoft
I also dislike unnecessary use of programming jargon. A type constructor is
all the possible ways to make a value of that type.

For example:

    
    
      type answer = | Yes | No | Maybe 
    

The type is 'answer'. The constructors are Yes, No, and Maybe (these are
actually called type variants, but now we're getting into jargon). I presume
an 'abstraction over type constructors' means you want to generalize that
type, so you can use the constructors to produce an abstract type that works
with any type, but I've never heard that specific term.

~~~
jolux
This is wrong, those variants are value constructors. The type constructor
there is “answer,” there’s only one of them.

~~~
Durgasoft
So type constructors take types as their parameters and return new types,
you're saying, whereas in the variant type in my example, the values are the
constructors that make up a value of type Answer.

~~~
bweitzman
That's right. Type constructors are related to generics. For example if you
have a generic type `List<A>` you could think of `List` as a type constructor.

In most languages, however, that's not a construct you can do much of anything
with.

------
whateveracct
Keep trying to understand, asking questions, trying stuff first hand and
eventually it'll come! The "Hask anything" sticky in /r/haskell is a friendly
& pseudonymous way to ask questions too.

One thing I've noticed working in Haskell: everyone is an expert in different
things. Very rarely do I meet a Haskeller whose knowledge or interests
subsume another's. That's part of the fun. Everybody has some terms they
don't know!

------
onemoresoop
I got comfortable with Lisp/Scheme using DrRacket. It is actually very helpful
for beginners. I think you will benefit from learning functional programming
this way; Scheme is very simple and effective.

[https://racket-lang.org/](https://racket-lang.org/)

------
fpglossary
Here's a reference you may find useful

John A De Goes - A Glossary of Functional Programming

[http://degoes.net/articles/fp-glossary](http://degoes.net/articles/fp-
glossary)

------
milani
I had the same problem and decided to begin with fundamentals. I read "types
and programming languages" by Benjamin C. Pierce and then things started to
make sense.

------
andrepd
I guess the answer is simply: just study functional programming. 30 minutes
into any basic Haskell course (e.g. the wikibook, or Real World Haskell) and
you would know what a type constructor is. I don't know, you can't expect to
magically know things without studying them first? :) What you mean when you
say "functional programming speak" is just common technical terms with a
specific technical meaning. You wouldn't complain that you don't understand
the meaning of "Minkowski metric" before learning relativity :p

------
sova
Learn Map, Filter, Reduce. Build a toy project with a functional language.

------
benibela
I took a Haskell class in college for that

It did not help much.

------
diegoperini
> So how does a programmer from non-functional world become fluent in
> understanding sentence such as "abstraction over type constructors"?

We already have very primitive (and restrictive) imperative OO languages which
are good at teaching the basics to new programmers. My observation is, FP
lacks such gateway languages. Even the simplest FP-first language is
intimidating.

So my answer to your question is: a good strategy to learn these can be
carving out a relatively small portion of a language (i.e. Haskell with only
algebraic types and pattern matching) and working only with it for a period of
time. Doing that doesn't teach terminology, but it exposes what Haskell is
really obsessed with. For the curious, it is expressing type relationships as
flexibly as the numeric arithmetic we know from high school.

Just like in high-school arithmetic, several recurring patterns (often called
formulas in math classes) have their own special names. Almost all programming
languages are good at closely matching that syntax in their numeric
expressions. That is unfortunately not the case for types, though. In
programming,

this functional type (pseudo code):

    
    
      A = (X + Y) | Z
    

tends to be expressed as:

    
    
      interface A {
      }
      
      class A1 implements A {
        X x;
        Y y;
      }
    
      class A2 implements A {
        Z z;
      }
    

or as:

    
    
      union A {
        struct A1 {
          X x;
          Y y;
        } a1;
    
        struct A2 {
          Z z;
        } a2;
      };
    

In both examples, it is longer to write the same "meaning" in OOP than in FP,
especially when there are abstract methods involved.

This difference in verbosity (aesthetically subjective) bugs an FP programmer
more and more as they get used to their language. When that happens, people
start naming things. Having an abstract type like `Collection` is no longer
enough, because we realize that `Collection`, `Mappable`, `Traversable`,
`Generator`, `Optional`, `Atomic`, `Pair` and `Tuple` all share the similarity
of:
of:

* wrapping a type,

* having a way to access the wrapped value(s),

* and the wish to support all operations the wrapped value supports without an unwrap, composable out of the box.

We give this pattern a name, the infamous "Functor" (incorrectly, but who
cares), and axiomatize those assumptions formally in the target language, so
that the pattern is no longer reimplemented again and again. The word
"Functor" itself otherwise has no meaning; it is letter salad.

To learn those names effectively, you must be comfortable enough with the
basics that you get annoyed by the repetition and start researching new names
as you stumble upon new patterns.

In Haskell, "abstraction over type constructors" usually means (in OO terms)
"declare a new generic interface and implement it in the abstract type so
that you don't have to reimplement it non-generically in the concrete type".

------
Twisol
> So how does a programmer from non-functional world become fluent in
> understanding sentence such as "abstraction over type constructors"?

Incoming essay...

Pure functional programming is fundamentally about software components called
"pure functions", or just "functions" for short. (I'm quoting the term because
they're not the same as what are called functions in other languages. I'd call
those other things "procedures" instead, but alas.) Other programming
communities rally around components like "objects", and that's fine. The
important thing is to pick something to break your system down into, and here,
we're talking about "functions".

"Functions" are simpler kinds of components than objects or procedures: the
ways in which they behave are much more limited, so they're easier to reason
about. When a "function" is interacted with, it can't perform any side
effects, so interacting with the "function" multiple times gives the same
behavior every time. Procedures can manipulate global state, and objects can
manipulate internal state, so reasoning about them takes more effort. You have
to keep track of time: what happened before this interaction?

"Functions" have two interfaces: an input side and an output side. When a
"function" receives a value on the input side, it will always emit a value on
the output side. We can wire "functions" together by connecting the input of
one to the output of another. The result is a system with a single free input
side and a single free output side -- that is, it's also a "function". This
makes it very easy to wire up lots of "functions".

Usually, a "function" has certain expectations of the input values that it
receives, as well as some guarantees about the output values it emits. In a
dynamically typed language, when those expectations are not met, a runtime
error is emitted. This is fine -- it's just one way to handle failed
expectations. But sometimes we can formalize those expectations statically,
and let the compiler check up-front whether those expectations are met. This
does add some complexity: you might not be able to wire up two "functions" if
one can't satisfy the other's expectations. Statically-typed programmers have
decided that they're willing to put up with that.

In a statically-typed world, the input and output interfaces of our "function"
software components can be tagged with specifications. Functional languages
are often judged by how expressive these specifications can be; a
specification language like this is called a "type system". You can have type
systems for components that are not "functions", but the rules of composition
become more complex. (For "objects", class systems are quite popular.)

Most type systems allow you to break down an interface specification into
smaller pieces, which themselves are valid specifications. That means we now
have two kinds of components in our system: "functions", which exist at
runtime, and types, which exist at compilation time. The rules for how types
compose can be more complicated than those for "functions", but we're usually
okay with that, as long as "functions" are kept simple.

Some type systems allow a value to satisfy multiple types. For example, class-
based systems allow this via subtyping relations. Others might describe types
as predicates (truth functions) on a pre-existing universe of values (e.g.
TypeScript). Pure functional programming typically requires mutual exclusion:
a value cannot be part of multiple types. We often say that the type _defines_
its values because of this.

The low bar for a static type system is "algebraic data types". This means
that we have two ways to compose types, conventionally called "product" and "sum".
Most type systems have products, but sums are historically rarer. Values of a
product type `X * Y` look like `(x, y)`, and we can pull out either element of
the pair. Values of a sum type `X + Y` look like either `Left x` or `Right y`,
and we can ask which one it is and do something different depending on the
result.

Java and C++ lack first-class sum types. You can approximate their
specifications to some extent using the Visitor pattern or tagged unions, but
the encodings in these languages are verbose, and unpleasant to read and
write. Rust and Haskell have sum types built in, so specifications of that
nature are used much more readily.
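To make that concrete in Haskell, using the built-in pair and Either types:

```haskell
-- A product type: a value of (Int, String) holds both pieces,
-- and we can project out either one.
swapPair :: (Int, String) -> (String, Int)
swapPair (n, s) = (s, n)

-- A sum type: a value of Either Int String is one of the two
-- alternatives, and we branch on which one it is.
describe :: Either Int String -> String
describe (Left n)  = "number: " ++ show n
describe (Right s) = "text: " ++ s

main :: IO ()
main = do
  putStrLn (describe (Left 42))     -- prints "number: 42"
  putStrLn (describe (Right "hi"))  -- prints "text: hi"
```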

We can generalize further. We can think of the product and sum of types as
"type functions", or "type constructors", that take a pair of types as input
and produce a type as output. Many type systems allow you to define your own
"type functions". In Java, these are generic classes: "List<T>" is a type
function from some type T to the type of lists containing that type. In C++,
these are templates. But in most pure functional programming, static type
functions look just like runtime value functions, since they behave just like
them too. It's just a matter of when they're evaluated.
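A quick Haskell sketch (`Box` is a made-up name): a user-defined type constructor really is just a function from types to types.

```haskell
-- 'Box' takes a type a and yields the type 'Box a'.
-- At the type level it behaves like a one-argument function.
data Box a = Box a deriving Show

-- The same type function applied at two different argument types:
intBox :: Box Int
intBox = Box 3

strBox :: Box String
strBox = Box "hello"
```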

Once again, when you have functions, you often want to place demands on their
inputs, and guarantees on their outputs. Type functions are no different.
There are many possible ways to "type types", as it were, but a popular
approach is by using "traits" or "typeclasses". Just as with types and values,
a typeclass places requirements upon the types that inhabit it. (But a type
can also satisfy multiple typeclasses.)

For instance, "Monoid" is a typeclass on types with two associated items: a
"zero" value of that type, and a "concat" function that takes two values of
the type and gives another value of that type. If you know that a type is a
Monoid, then you know that these associated items are also available to you,
regardless of what the actual type is. (For instance, you can write a
"concatAll" function that iterates over a list of elements of a monoid and
concatenates them together _regardless of what the monoid type actually is_.)
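Sketching that in Haskell, where `mempty` and `<>` are the standard names for the "zero" value and the "concat" function:

```haskell
-- Works for ANY Monoid: it only uses the zero (mempty) and
-- concat (<>) that the typeclass guarantees.
concatAll :: Monoid m => [m] -> m
concatAll = foldr (<>) mempty

main :: IO ()
main = do
  print (concatAll ["ab", "cd", "ef"])  -- prints "abcdef"
  print (concatAll [[1, 2], [3], [4]])  -- prints [1,2,3,4]
```

(This is essentially `mconcat` from the Prelude, written out by hand.)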

With typeclasses in mind, instead of taking a "list of X" or a "set of X", you
might abstract over all such things as "collections of X". But you don't want
to say what a collection is for every possible X. We need to _abstract over
type constructors_. A "collection" is any type constructor such that, given a
type X, it yields a type with certain associated items expected of all
collections. Java approximates this with subtyping.
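In Haskell you can say it directly with a typeclass over type constructors. (This `Collection` class is invented for illustration; it's not a standard one.)

```haskell
-- The variable f ranges over type constructors (kind * -> *),
-- not over plain types: [] and Stack are both valid instances.
class Collection f where
  empty  :: f a
  insert :: a -> f a -> f a
  toList :: f a -> [a]

instance Collection [] where
  empty  = []
  insert = (:)
  toList = id

newtype Stack a = Stack [a]

instance Collection Stack where
  empty               = Stack []
  insert x (Stack xs) = Stack (x : xs)
  toList (Stack xs)   = xs

-- Written once, works for every Collection, whatever the
-- underlying type constructor turns out to be.
fromPair :: Collection f => a -> a -> f a
fromPair x y = insert x (insert y empty)
```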

"Functor" is, in some ways, a generalization of "Collection". A type
constructor is a "Functor" if, whenever you have a value of that type over Xs
and a function transforming Xs into Ys, there is a mechanism to get a value of
that type over Ys instead. We call this mechanism
"map", and it can be instantiated over any collection. Got a list of X and a
function from X to Y? We've got a way to get a list of Y. Got a set of X and a
function from X to Y? We've got a way to get a set of Y.

Because "set" and "list" are type constructors, we say that we are
_abstracting over type constructors_ to get a single statement about all such
things. "For every Functor F, if I have an F containing Xs and a function
transforming Xs to Ys, then I can get an F containing Ys."
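That's exactly the type of Haskell's `fmap`, and you can watch it get instantiated at different type constructors:

```haskell
-- fmap :: Functor f => (a -> b) -> f a -> f b
main :: IO ()
main = do
  print (fmap (+ 1) [1, 2, 3])  -- the list Functor:  [2,3,4]
  print (fmap show (Just 42))   -- the Maybe Functor: Just "42"
  print (fmap (* 2) (Right 10 :: Either String Int))  -- Right 20
```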

------
rb808
Give up while you can. Immutable data classes and functional pipelines are
great, everything else isn't worth it. Lots of people say FP is amazing but
very few people use it in the real world because, like you say, it's
unintelligible.

~~~
centimeter
This comment is a textbook example of the Blub Paradox.

~~~
dllthomas
I had a very visceral experience of Blub in my formative years. I don't recall
how old I was, certainly under 11. I was explaining to my father the many
things I could do in LOGO, and he asked about arrays. LOGO supports arrays, of
course, but it was beyond anything I'd dealt with. I remember that feeling of
entirely not understanding why you'd want to bother with arrays instead of
just making more variables.

~~~
centimeter
I had a similar experience also at age 11 - I was learning C++ to solve my
math homework problems. I heard about Python, tried it out, and went on IRC to
ask how I could write "goto" statements in Python. I couldn't believe it when
people told me they weren't necessary, and linked "Go To Considered Harmful"!

~~~
dllthomas
Oh man, some of my LOGO programs had so many gotos.

