
Functional Programming, Abstraction, and Naming Things - tel
http://www.stephendiehl.com/posts/abstraction.html
======
pash
_What is the black king in chess? This is a strange question, and the most
satisfactory way to deal with it seems to be to sidestep it slightly. What
more can one do than point to a chessboard and explain the rules of the game,
perhaps paying particular attention to the black king as one does so? What
matters about the black king is not its existence, or its intrinsic nature,
but the role that it plays in the game._

 _The abstract method in mathematics, as it is sometimes called, is what
results when one takes a similar attitude to mathematical objects. This
attitude can be encapsulated in the following slogan: a mathematical object is
what it does. ..._

—Tim Gowers, _Mathematics: A Very Short Introduction_

~~~
tel
I really like that metaphor!

------
davexunit
I really enjoyed this article. It's great to learn more about groups, having
long been aware of nice properties like closure and identity when designing
new data types and the procedures that operate on them. While it doesn't
discuss group theory explicitly, SICP's section 2.2 does a great job showing
the power of closure.

[https://sarabander.github.io/sicp/html/2_002e2.xhtml#g_t2_00...](https://sarabander.github.io/sicp/html/2_002e2.xhtml#g_t2_002e2)

~~~
js8
I think invertible functions and groups are sadly missing from the current
functional programming paradigm. Two ideas that I would love to see fleshed
out:

1\. If the compiler knew which functions are invertible, it could
automatically build inverses of composed functions. And for instance, you
could pattern match on a function argument, and it would automatically do the
inversion for you!

2\. Knowing that some operator is a group operation could enable efficient
optimizations. Consider a collection over which you compute a folded value,
where the fold uses a group operation. Knowing that, you could use the group
inverse to update the value incrementally when the collection changes,
instead of recalculating it over the whole collection. Example: you need to
keep a sum of a list of numbers. Since addition is a group operation,
removing an element could update the sum via subtraction of that single
element.
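Idea 2 can be made concrete. Here is a minimal Python sketch (the `GroupFold` name and its interface are made up for illustration) of a running fold that uses the group inverse to handle removals without refolding the whole collection:

```python
class GroupFold:
    """Maintains a fold over a collection under a group operation.

    op must be associative, identity its unit, and inv the group inverse.
    """
    def __init__(self, op, identity, inv):
        self.op = op
        self.inv = inv
        self.value = identity

    def add(self, x):
        # O(1): fold the new element in.
        self.value = self.op(self.value, x)

    def remove(self, x):
        # O(1): "unfold" x via its group inverse; no full recomputation.
        self.value = self.op(self.value, self.inv(x))

# Running sum: the integers form a group under + with unit 0 and inverse negation.
s = GroupFold(lambda a, b: a + b, 0, lambda x: -x)
for n in [3, 5, 7]:
    s.add(n)
s.remove(5)   # updated via subtraction, not by re-summing
# s.value is now 10
```

The same trick fails for a mere monoid like `max`, which has no inverse: removing the current maximum genuinely requires rescanning.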

~~~
dllthomas
This is explored some in Haskell in a couple places. One I remember off the
top of my head:

[https://hackage.haskell.org/package/lens-4.13/docs/Control-L...](https://hackage.haskell.org/package/lens-4.13/docs/Control-Lens-Iso.html#g:2)
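The core of the `Iso` idea that link points at can be sketched in a few lines of Python (the `Iso` class and method names here are illustrative, not the lens API): bundle each function with its inverse, and the inverse of a composition falls out mechanically.

```python
class Iso:
    """A function bundled with its inverse (illustrative, not the lens API)."""
    def __init__(self, fwd, bwd):
        self.fwd, self.bwd = fwd, bwd

    def __call__(self, x):
        return self.fwd(x)

    def inverse(self):
        # Inverting just swaps the two directions.
        return Iso(self.bwd, self.fwd)

    def compose(self, other):
        # (f . g) has inverse (g^-1 . f^-1): reverse order, invert each.
        return Iso(lambda x: self.fwd(other.fwd(x)),
                   lambda y: other.bwd(self.bwd(y)))

double = Iso(lambda x: 2 * x, lambda y: y // 2)
inc    = Iso(lambda x: x + 1, lambda y: y - 1)

f = inc.compose(double)       # f(x) = 2x + 1
assert f(10) == 21
assert f.inverse()(21) == 10  # inverse of the composition, built automatically
```

This is essentially js8's idea 1 above: once invertibility is tracked, inverses of composed functions come for free.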

------
elcapitan
"Dijkstra’s quote is quite apt: “The purpose of abstraction is not to be
vague, but to create a new semantic level in which one can be absolutely
precise.” "

Read more Dijkstra!

------
proc0
Well, that diagram at the end is something I've been looking for, for the
longest time. I haven't been able to find a concise relationship of all the
functional terms/interfaces.

I think the part that eludes many tutorials and articles is explaining what
exactly we're calling a monoid or a monad. Is it the actual set of laws? Is it
an abstract "object"? And although there are many attempts at simplifying
their explanations, I found that it is necessary to grasp the concept of a
category first. Really having an intuition for a category opens the way to
understanding functors, etc., because that's what their definition is based
on. Attempting to explain functors, etc. just with functions (of any
language) will leave the learner hanging as to what exactly a functor is,
since languages just implement them, while category theory provides the
underlying abstraction that gives them context.

~~~
js8
The diagram comes from
[https://wiki.haskell.org/Typeclassopedia](https://wiki.haskell.org/Typeclassopedia)
which also explains most of these things.

------
dllthomas
I think the use of mathematical names for sufficiently generic concepts in
Haskell improves a certain kind of discussion.

If we had an interface called "Appendable", that leaves room for arguing over
the boundaries of what "really" counts as "appending". This is contentious,
because interfaces define what we should be able to rely on.

In Haskell, it's entirely clear what is and is not a Semigroup. Does it follow
the Semigroup laws? Okay, it's a Semigroup! What can I rely on if I ask for a
Semigroup? The Semigroup laws.
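The point can be made mechanical. A Python spot-check (the helper name is made up for illustration) of the single Semigroup law shows how the question "is this a Semigroup?" reduces to "does the law hold?", with no room for arguing about what "really" counts as appending:

```python
from itertools import product

def is_associative(op, samples):
    """Spot-check the Semigroup law: (a . b) . c == a . (b . c)."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(samples, repeat=3))

nums = [0, 1, 2, 5]
assert is_associative(lambda a, b: a + b, nums)      # (+) satisfies the law
assert not is_associative(lambda a, b: a - b, nums)  # (-) does not
```

A finite spot-check is of course weaker than the universally quantified law, but it captures how the law, not intuition about "appending", is the entire membership criterion.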

Discussion of, say, whether "container" is a good way to think of a "functor"
is more quickly recognized as a purely pedagogical question - which doesn't
mean it can't be contentious, but doesn't as much get in the way of getting
work done.

------
senthil_rajasek
Here's the Wikipedia disambiguation page for "Closure", which is especially
worth a look if you are a programmer,

[https://en.wikipedia.org/wiki/Closure](https://en.wikipedia.org/wiki/Closure)

~~~
davexunit
This ambiguity is why "Structure and Interpretation of Computer Programs"
avoids the term "closure" when talking about free variables in a procedure.

[https://sarabander.github.io/sicp/html/2_002e2.xhtml#FOOT72](https://sarabander.github.io/sicp/html/2_002e2.xhtml#FOOT72)

------
js8
I still don't understand how one can express a concept such as a monoid or a
group in, say, Haskell. One option is typeclasses, but they don't actually
let you express the laws. Furthermore, sometimes the laws involve more than
one object, or objects which must satisfy other laws, and so on recursively.

I can see how one can start from types, but I don't see how one can start from
abstract algebra (or categories or something). That is, possibly not to have
fully specified objects that you work with.
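The gap can be shown concretely. In this Python stand-in (the `Monoid` interface here is a hypothetical illustration, not a real library), an interface can declare a monoid's *operations*, but its *laws* survive only as comments and ad-hoc tests:

```python
from abc import ABC, abstractmethod

class Monoid(ABC):
    @abstractmethod
    def op(self, other): ...      # law (unenforced): op must be associative

    @classmethod
    @abstractmethod
    def unit(cls): ...            # law (unenforced): unit must be an identity

class ListM(Monoid):
    """Lists form a monoid under concatenation with [] as the unit."""
    def __init__(self, xs):
        self.xs = xs
    def op(self, other):
        return ListM(self.xs + other.xs)
    @classmethod
    def unit(cls):
        return ListM([])

# Nothing in the type system rejects a lawless instance; the best we can
# do here is test the identity laws on sample values.
a = ListM([1, 2])
assert a.op(ListM.unit()).xs == a.xs   # right identity holds on this sample
assert ListM.unit().op(a).xs == a.xs   # left identity holds on this sample
```

Haskell's typeclasses have exactly the same limitation: the laws live in documentation and QuickCheck properties, not in the type, which is the gap dependent types close.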

~~~
theseoafs
You can't, not with Haskell. (Maybe you can, I don't know, with some really
insane extensions and type-level programming. It wouldn't be nice.) Haskell's
approach is: write out the typeclass and the implementations, and you can
probably safely assume that the "natural" implementation of the typeclass will
satisfy the laws.

Dependently typed languages are what you use for this kind of thing.

~~~
js8
I agree that you can probably do that with dependent types; I am just
wondering if there is a way to do stuff like that in a simpler formalism,
similar to untyped lambda calculus.

I mean, in untyped lambda calculus you start with simple objects that are
fully specified and you compose them to get more complex objects. I think one
should be able to do the "opposite" thing, i.e. start with potentially
complicated objects and restrict those down by adding relational conditions on
them.

So I am looking for some (minimalistic) formalism to do that, something like
"lambda calculus in reverse". In classic Lisp, there were operators CAR, CDR
and EQ which let you introspect any lambda expression. So maybe there should
exist a formalism having these decompositional operators as primitives.

~~~
theseoafs
You already know it: the way you do this is with a homoiconic language. I
don't know of any "formalisms" of homoiconic languages, but there's always
Scheme. I don't know why this idea is relevant though. If where you're going
with this is "you can write code to take in a monad instance as an input
syntax tree and verify it satisfies the laws", then I assure you that's
impossible in the general case.

------
marcosdumay
Well, those students never really used math in their lives, except for
variants of arithmetic. This is not their fault, and the fact that they
survived until an advanced undergrad course without needing it is troubling.

Anyway, I've tried to explain such stuff, and my best success comes from
first throwing people out of their comfort zone by asking them to explain
what a number is, or something equally fundamental.

