The category design pattern (haskellforall.com)
246 points by _zhqs on Nov 8, 2015 | 156 comments



An abstraction is only useful if you can use it.

What sold me on applicatives was being able to write "traverse", and similar operations, that would take m * n implementations if you had to write them for each (traversable, applicative) pair but can be m + n instead ( http://m50d.github.io/2013/01/16/generic-contexts.html ). That's a very concrete code saving. Having three examples of things that form categories is great - but can we show a real-world function that we can write that takes a generic category and does something useful with each of these three examples? That would demonstrate that we're getting real value (i.e. a code saving) out of the abstraction.
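
For a tiny illustration of the m + n point (my own sketch, using Maybe as the applicative and lists as the traversable):

    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    -- The single generic traverse covers this pair and every other one:
    --   traverse halve [2, 4, 6]  ==  Just [1, 2, 3]
    --   traverse halve [2, 3, 6]  ==  Nothing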


The generic benefit of a category is the proof that it satisfies the identity and associativity laws. This lets you write "generic proofs" that work for any category.

For example, take the following example type:

    import Prelude hiding (id, (.))
    import Control.Applicative (liftA2)
    import Control.Category

    -- wrap each arrow of `c` in a layer of the Applicative `f`
    newtype Example f c a b = Example (f (c a b))

    instance (Applicative f, Category c) => Category (Example f c) where
        id = Example (pure id)

        Example f . Example g = Example (liftA2 (.) f g)
You can prove that if `c` satisfies the `Category` laws and `f` satisfies the `Applicative` laws, then `Example f c` also satisfies the `Category` laws.
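
For instance, here is an outline of the left-identity law for `Example f c`, using only the assumed laws (an equational-reasoning sketch, not a mechanized proof):

    --   id . Example g
    -- = Example (pure id) . Example g      -- definition of id
    -- = Example (liftA2 (.) (pure id) g)   -- definition of (.)
    -- = Example (fmap (id .) g)            -- Applicative laws: liftA2 h (pure x) y = fmap (h x) y
    -- = Example (fmap id g)                -- Category law in c: id . h = h
    -- = Example g                          -- Functor law: fmap id = id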

This in turn lets you take any `Category` and keep extending it with as many `Applicative`s as you want in any order, confident in the knowledge that your extended `Category` still satisfies the `Category` laws.

This is a common pattern in code that is organized around category theory and/or abstract algebra and I wrote a post showing a simpler example of this phenomenon (using `Monoid` instead of `Category`): http://www.haskellforall.com/2014/07/equational-reasoning-at...


You're just telling me how the abstraction gives rise to more abstractions. That seems to be getting even further away from where the rubber hits the road. I want to see a program that does something directly useful, something there's an actual business case for, that would take more code to implement without the Category abstraction.


Categories are just an utterly pared-down notion of composability, so most of the time you don't even recognize them in the wild (like the associativity of monads, which on an abstract level says that only the sequence of instructions matters, but not their grouping - it's something that you expect to be true in any imperative language!), or only recognize the absence of categories as horrible design.
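
Spelled out in Haskell notation, that associativity is just (a sketch):

    -- Both groupings of a monadic pipeline denote the same computation:
    --   (m >>= f) >>= g   ==   m >>= (\x -> f x >>= g)
    -- which is why nested do-blocks can be flattened freely.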

If you haven't already, you should play around with the pipes library, because it makes very heavy use of categories and it's an extreme positive example of composability.

As another practical example, Haskell's class constraints form a thin category, with constraint entailment as arrows. This means that instance resolution never depends on ordering of imports, local bindings, or basically anything other than the raw set of available instances, and also any two instances at the same type must be equal. This buys us great scalability and refactorability, most notably in comparison to Scala, where alphabetizing module imports sometimes breaks programs.

That said, in category theory one rarely works with vanilla categories but rather other abstractions formulated in categorical language. Similarly, in Haskell vanilla categories aren't as useful as the many well-known categorical abstractions (basically, the core Functor - Applicative - Monad hierarchy is directly category-theory inspired, and they've proven to be insanely useful).


I don't think the class example works, because you wouldn't operate on a class hierarchy in code. Pipes is all well and good but unless I'm implementing it myself I don't need to care about the examples.

If you want to sell this to ordinary programmers you need to take another step down to the concrete level. Show two generic functions that take a category and show how they are useful for two different categories (heck, I think I can think of one myself: it should be easy to write a function that takes a type-aligned list [c a b, c b d, ... c y z] and combines them as a single c a z). If it's a useful abstraction then that really shouldn't be hard.
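
For what it's worth, here is a sketch of exactly that function (my own construction, not from the article; the type-aligned list needs GADTs):

    {-# LANGUAGE GADTs #-}
    import Prelude hiding (id, (.))
    import Control.Category

    -- A type-aligned list of arrows: [c a b, c b d, ..., c y z]
    data Path c a z where
        Done :: Path c a a
        Step :: c a b -> Path c b z -> Path c a z

    -- Collapse the whole path into a single arrow using nothing but
    -- the Category operations, so it works uniformly for functions,
    -- Kleisli arrows, pipes, and so on.
    composeAll :: Category c => Path c a z -> c a z
    composeAll Done        = id
    composeAll (Step f fs) = composeAll fs . f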


> We want to call them one after another, and return a failure if any of them fails, success if they're all successful

I didn't find that example particularly motivating since I can easily think of a simpler solution: exceptions. (Alternatively, if I was using C, it'd be setjmp()/longjmp(), which perform a very similar style of control flow.)


Exceptions are a more complex solution. They're much harder to reason about, and you can't abstract over them because they're special-cased in the language. I've literally seen Java libraries offer an interface something like this:

    <T> T doSomething(Callback<T> callback)
    <T, E extends Exception> T doSomethingThrow(CallbackThrow<T, E> callback) throws E
and even that doesn't properly cover the cases, because ideally you'd like to be able to pass a callback that throws two or more unrelated exceptions as well. Setjmp/longjmp is "simple" like goto is simple; if you want to refactor a function that uses them you have to think very carefully.

If we have \/, the possibly-error is just a value. We can use it with generic functions in the normal way. We can refactor them in the obvious way and it will just work correctly, and the type system can assure us we've done it right.
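
In Haskell terms (where \/ is Either), a minimal sketch of what "just a value" buys, with hypothetical validators:

    import Control.Monad ((>=>))

    parseAge :: String -> Either String Int
    parseAge s = case reads s of
        [(n, "")] -> Right n
        _         -> Left ("not a number: " ++ s)

    checkAge :: Int -> Either String Int
    checkAge n
        | n >= 0    = Right n
        | otherwise = Left "age must be non-negative"

    -- Kleisli composition chains the possibly-failing steps;
    -- the first Left short-circuits the rest.
    validate :: String -> Either String Int
    validate = parseAge >=> checkAge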


> Exceptions are a more complex solution.

Are they really? I'd like to see the actual code which gets generated from that ultra-abstract solution, since I already know what creating an exception frame or setjmp/longjmp turns into.

> If we have \/, the possibly-error is just a value.

If you use pointers, then an error can also be signified with a value (0), and there's also this very simple but possibly slightly less efficient way (in the success case):

    ret = (v1 = func1(v0)) && (v2 = func2(v1)) && ...


It's only "simpler" if you approach programming from a low-level, non-functional perspective, which seems to be where you come from. The parent is talking about complexity in terms of solution complexity (avoiding undesireable states and writing fail-proof software), not in number of machine-code instructions generated, or not having to learn a new abstraction.

The people interested in functional programming and compile-time guarantees want to work at higher-levels of abstraction, where Exceptions are just hard to typecheck crutches that effectively implement the Option/Either types but outside of control of you compiler (at the call stack level).


> Are they really? I'd like to see the actual code which gets generated from that ultra-abstract solution, since I already know what creating an exception frame or setjmp/longjmp turns into.

Do you know the micro-ops that that "actual code" will be executed as? The voltages that will pass through each logic gate? Or do you trust that the processor will behave the way that it's specified to and not worry about how it actually implements it?

Have you seen how Smalltalk does control flow? if or while are just ordinary (virtual) functions that take a function as an argument. If you understand the concept of passing a function to a function, and the concept of calling a virtual function, then you don't need to have a separate concept for if or while, they're just ordinary functions written in the language that you can read the code of. That to me is real simplicity, and either gives you the same thing.
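
The same trick is easy to demonstrate in Haskell (a toy sketch, not how the Prelude actually defines control flow):

    -- if as an ordinary function:
    if' :: Bool -> a -> a -> a
    if' True  t _ = t
    if' False _ e = e

    -- while as an ordinary recursive function over a state value;
    -- laziness means only the branch that's taken gets evaluated.
    while :: (s -> Bool) -> (s -> s) -> s -> s
    while p step s = if' (p s) (while p step (step s)) s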

> If you use pointers, then an error can also be signified with a value (0), and there's also this very simple but possibly slightly less efficient way (in the success case)

I like that. Genuinely. And at an implementation level that's pretty much what Option is. There are a couple of awkwardnesses: the vN variables would have to be declared beforehand, making it possible to use them uninitialized (though I guess that's always a danger in C?), and you can't extend it to something like Either where your error can contain data. But as a first cut it's pretty cool.

The advantage of using an applicative or monad over that is that you get access to an existing library of functions. E.g. traverse (apply a possibly-error function to each element of a list, returning either a list of successful values or a single error), or the various control flow operators like ifM (execute one or other of two possibly-error functions, where the boolean we use to decide might already have been an error). The standard library could define all these functions just for possibly-zero-pointer, but it's nicer to define them generically so that they can be reused for possibly-error, async-operation, operation-with-programmatically-accessible-log, operation-depending-on-value-I'll-supply-later, operation-that-needs-to-be-inside-a-database-transaction and so on, in the same way that e.g. sort is defined generically rather than just on integers.
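
For instance, ifM is a one-liner once you have the monad interface (a sketch; various utility libraries provide something like this):

    ifM :: Monad m => m Bool -> m a -> m a -> m a
    ifM mb t e = mb >>= \b -> if b then t else e

    -- The same definition serves Maybe, Either e, IO, State s, ...:
    --   ifM (Just True) (Just 1) (Just 2)  ==  Just 1
    --   ifM Nothing     (Just 1) (Just 2)  ==  Nothing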

At a more advanced level you can also define your own generic operations. E.g. I have a generic Report class; one of the subclasses uses async I/O (because it needs to call a web API) and also needs to record some statistics about the rows, another does the per-row piece of the report in a way that might fail, another needs to write to the database on each row. By writing my (abstract) Report class in terms of a generic monad (with abstract methods that return that monad), I can reuse the same logic for all these implementations.
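
A stripped-down sketch of that Report design (all names hypothetical, reconstructed from the description above):

    -- hypothetical row and result types
    data Row    = Row String
    data Result = Result String

    class Monad m => MonadReport m where
        fetchRows  :: m [Row]
        processRow :: Row -> m Result  -- async, failing, DB-writing, ...

    -- The shared logic is written once against the abstract monad;
    -- each concrete instance supplies its own effects.
    runReport :: MonadReport m => m [Result]
    runReport = fetchRows >>= mapM processRow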


The ironic thing about this post is that no actual category theory is used. Defining functions and functional composition laws is done (and is a perfectly natural thing to do) at the beginning of any undergrad math class. The definition of functions and composition laws came more than a full century before category theory.

Category theory was invented for a deeper reason - to have a rigorous definition of a natural transformation, to enable a precise language to translate back and forth between algebra and topology.

Note that this is not a comment about the usefulness of category theory in theory of programming languages, it is about the uselessness of the linked article in talking about it.

As another commenter pointed out, in practice Haskell functions as values don't even form a category. This actually matters as illustrating the practical difficulties involved.

As a mathematician, I'd say most practicing computer scientists have a very skewed/warped understanding of category theory, and in their ignorance of other theoretical concepts, tend to misunderstand what they do think they know about. In general defining what you're trying to solve, tethered to a real, meaty problem, and then judging theoretical formulations for succeeding at these use cases is a good way to not get stuck in 'empty abstractions'. New mathematics gets invented (and judged) as a way of solving problems. The chain of reasoning might be long but it's there. People don't invent new terminology for no reason (and even if they did, and it didn't help for understanding a problem, it'd get no traction)


Did you read the entire post? Functions were only the first example to introduce the reader to the basic idea of associativity and identity. The post then discusses the Kleisli category and the category of coroutine pipelines.


I did read the entire post. Isomorphisms and homomorphisms between groups also obey associativity and identity, can be composed, etc. - but there is no point to talking about the 'category of groups', as opposed to, simply, group theory, unless I somewhere use the fact that it's a category, in, say, talking about topological spaces, the fundamental group, etc.

Did you read my comment? I claim that the fact that something is associative and has an identity (concepts clarified and used in the 19th century) is not sufficient motivation to bring in category theory. It's not used for anything in this article.


You are correct, but it seems a bit much to take the author to task about this. Isn't all that you ask for to replace the words "category theory" with "the notion of category" throughout the document?


A couple of comments:

Correct, it is not category "theory". The terminology "category theory" here is used analogously to how database people say relational algebra is based on "set theory". It sounds a bit silly to mathematical ears.

On the other hand, the article is about much more than composing functions. In fact the section headings are "The function category", "The Kleisli category", and "The Pipe category". The rudimentary structure of a category is useful for putting disparate concepts on common ground.


This is wonderful, and correct, theory. As the Futurama joke goes, "you are technically correct; the best kind of correct."

What the Haskell community keeps running into is that most people don't think in theories. Trying to prove the theory to be practical has a poor success rate. They need to implement useful things that people love, then show how the theory got them there naturally.

Sentences like this:

> Category theory codifies this compositional style into a design pattern, the category.

To the average person, a theory codifying a style into a pattern just isn't useful. What does that do? Monads are a common joke to this effect. They are wonderfully useful but poorly understood by most developers. It isn't being right that matters if few understand you.

Why was Rails successful?

Sure Ruby is nice, but Python is very similar (and it wasn't the `do...end` blocks.)

It's because Rails could be used to easily build functional CRUD apps ("Whoops! It works") that it made the difficult easy. Sandi Metz doesn't get heard if DHH isn't building Basecamp.

As a philosopher, it pains me to know people don't think that way, but if you aren't changing people's lives then all the theory is just mental masturbation.


Haskell isn't designed for "average people" or "most people" and why should it be?

Real human beings use Haskell every day and love it.

Real human beings are capable of learning about teh scary maths and abstract theories.

You studied philosophy? Are you aware that most people don't understand anything about philosophical theories other than The Secret? Have you thought about how maybe those theories can change people's lives even though they're not aimed at a lowest common denominator? E.g. by being influential in the long run, and also by being intellectually stimulating to people who do bother to learn about it?

Rails-style success is not the goal for Haskell. It is an explicit anti-goal: the motto is "avoid success at all costs."

Terse and rude counterpoint to your thesis: Average people are lazy idiots and designing your language to appeal to them will result in a very popular incoherent mess, like Java and Ruby, and Dijkstra will make pointed remarks at you from his grave.

http://www.paulgraham.com/avg.html


>Haskell isn't designed for "average people" or "most people" and why should it be?

Although it isn't stated as such, I think the point is that the Haskell community is, on the whole, truly awful at pedagogy.

I'm perfectly fine with the notion that Haskell isn't for everyone, but being a poor pedagogue is always a bad thing.


If you're a good pedagogue, please help out.

Teaching isn't easy, and the abundance of poor pedagogy is surely a big reason why so many people develop mathematical anxiety.

https://en.wikipedia.org/wiki/Mathematical_anxiety

> Students often develop mathematical anxiety in schools, often as a result of learning from teachers who are themselves anxious about their mathematical abilities in certain areas. Typical examples of areas where mathematics teachers are often incompetent or semi-competent include fractions, (long) division, algebra, geometry "with proofs", calculus, and topology.

The programming world lately is inundated with tutorials that take their pedagogical style from the Teletubbies or at best Bob Ross. Maybe that is not a viable way of teaching Haskell—maybe there will need to be actual study involved, with textbooks and exercises and trained teachers. I don't really know.

https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD103... ("On the Cruelty of Really Teaching Computer Science")

> So, if I look into my foggy crystal ball at the future of computing science education, I overwhelmingly see the depressing picture of "Business as usual". The universities will continue to lack the courage to teach hard science, they will continue to misguide the students, and each next stage of infantilization of the curriculum will be hailed as educational progress.


>If you're a good pedagogue, please help out.

I truly think I am, but I'm still struggling to learn Haskell...

>the abundance of poor pedagogy is surely a big reason why so many people develop mathematical anxiety.

I'm one of them, but 28 years later, I'm finally realizing that mathematics actually isn't that hard.

>The programming world lately is inundated with tutorials that take their pedagogical style from the Teletubbies or at best Bob Ross. Maybe that is not a viable way of teaching Haskell.

I think you hit the nail on the head, here, and I would like to hazard the following opinion: this is a deeply cultural problem in the United States, where we operate under the implicit belief that learning should be fun.

I don't mean to suggest that learning isn't rewarding, deeply satisfying, and occasionally exhilarating, but beyond learning your ABCs, it's rarely fun in the traditional sense. The problem, as I see it, is that we're failing to tell the truth: learning non-trivial things is tedious, grueling, difficult, and tooth-grindingly frustrating, while still being one of the most worthwhile things in life. Instead, we try to teach difficult or non-obvious concepts with the pedagogical equivalent of coloring books.

Stated differently, we simultaneously tell our youth that one has to "work hard" and "make sacrifices" for things like sports, while telling them that intellectual pursuits should be nothing less than blissfully entertaining.

As a general rule, I avoid anything labeled as being "for beginners", unless it's from a foreign publisher. I'm lucky to be a fluent speaker of French, and I've noticed that French textbooks systematically approach their subject matter in a rigorous, Cartesian, matter-of-fact way ... and this from grade 1!

You want to know how I finally grokked monads? I read this paper: http://repository.cmu.edu/cgi/viewcontent.cgi?article=2846&c...

It sucked. I mean it was awful. I spent weeks poring over this thing, fighting the kind of frustration I described above.

And I got a D+ in undergraduate pre-calculus. I'm not a "math guy", but like everybody else, coloring books don't cut it after you start growing armpit hair. Big-boy concepts require big-boy methods.


Agreed, and thanks, that looks like an interesting paper.

In Gothenburg, Sweden, where I studied, Haskell is the first introductory language for students of CS and (I believe) CE. I know people who had barely even used a computer when they started there who are quite proficient with Haskell.

A side benefit of that is that the dorks like me who had been coding Perl and C and whatever for years before were thrown back to "oh shit, I have to actually study" mode and thus equalized with the people who were starting from scratch.

In terms of gender, when I started studying women were way less likely to have been coding since the age of 6. In the exam for a later course on Advanced Functional Programming that included monad transformers, functional embedded languages, the basics of dependently typed programming, and a semi-large Haskell programming project, I was happy to see a female classmate who hadn't programmed at all before university get the best score of anyone. But I digress... (Shoutout to Elin if you're reading!)

I'm somewhat anxious that my way of talking about this stuff might make people think that Haskell is way more difficult to learn than other languages, which I don't think is the case. I think people probably forget how much trouble they had grokking concepts like "objects" and "superclasses" and "generators" and so on. Haskell might be difficult in some other ways. Maybe in some ways it forces you to really understand things more than other languages. But it's still very possible to learn to program in Haskell without being exceptionally good at math.

Also, some of what people find tricky about Haskell is also present in other ML-style languages. I learned about OCaml and some SML, and also Scheme, before I got into Haskell. So I knew about functional, recursive styles of programming, and algebraic data types; probably the biggest new things were lazy evaluation and the complete lack of side effects leading to the need for monadic sequencing. Those are not insurmountable.

Edit: By the way, I also appreciate how Haskell is becoming a sort of grass roots study group for coders to learn more about math, algebra, category theory, and what have you. It is not at all constrained to the universities anymore. This article is at the top of HN right now! Interesting times.


I think you're on to something here. It may be that programming languages that require you to 'reboot' are better suited to people totally new to programming than to people that already know how to program using some other methodology.

This is because compared to your old, well used and worn tools that new tool starts off as a set-back rather than an advantage and we more or less expect an instant productivity boost from new technology. That set-back then becomes a huge hurdle to overcome.

The question for a programmer faced with a problem to solve is always one of investment: should I invest my time and effort into this new technology that will make me x times more productive, or simply a better programmer (or someone capable of tackling more complex problems), a year or more down the line? Or should I break out my trusty tools and turn a buck today (or turn a buck for my employer)?

In most cases that equation works out in favor of the old tools so the energy required would need to be offset by concrete short term gains if a person is to switch from one tech to another.

That lack of concrete short-term gains is the big stumbling block for old hands, and if you manage to hack that then I think a lot of these better technologies would find traction.

People new to programming don't have that particular hang-up and I'd expect them to do much better with such new or different tools. (But they'd have the same problem if they needed to switch to other tools; it's just that in this particular instance they have a blank-slate advantage, which evaporates the moment they become proficient in that technology.)


I went to the same school (I presume) in Gothenburg. I came into the CS classes with a decent amount of math background, and a small amount of programming classes (FORTRAN, C, Ada and other kinds of evil you use in chemistry and physics). Our CS intro classes were Java and Haskell. Haskell didn't feel like a reboot to me; it felt like using math to write programs, so at least for me, Haskell felt a lot more accessible than learning OO concepts in Java did. I am sure lecturers caring more about Haskell also mattered, but at least for my background, getting the Haskell concepts was easier than the Java concepts.


This comment and the parent comment were really interesting. They've made me think about my current path and some things I have done/learned that have incrementally pushed me towards functional programming over a long window of time.

I have played with Scheme, along with a half-completed SICP read through. I have also played with Haskell, along with partial read throughs of LYAH and real-world Haskell. Those experiences, although incomplete, have been important to me as a programmer. I don't think those actually 'sold' me on FP though.

I took a detour for some time and decided to become a better programmer using OO methodologies. I used TDD, I memorised SOLID and all that, read up on design patterns and then used them practically. All well and good. But after my toying with Scheme, Haskell and the associated literature, I wasn't entirely satisfied. The code I produced was always a bit, I'm not sure, not entirely satisfying. It looked and worked better than what I get given to review when considering freelancers for an extra pair of hands - part of that is because of PHP, I think. I carried on this way for a while in any case.

In my own self-learning, I'd started looking at language design and compilers. I read some books and decided I didn't have enough mathematical skills to consume the texts. I turned to learning discrete mathematics. That was a big eye opener. As I was reading about formal logic, set theory, proofs, etc, my mind kept turning to the problem of applying this in my work. Haskell and FP came to mind too. I believe doing this helped it finally click that a functional approach can be taken successfully in most of the work I do.

Whilst I was doing this, I had a fairly simple project that, if designed correctly, could make my life easier in future projects. I took an OO approach, designed the UML, started writing tests, etc. I decided that it was over-engineered, or at least that it was a lot of work for what I was trying to do. So I decided to take an FP approach instead. I had the project completed with far less code, code that I was happier with. In the same language. That was almost an 'epiphany moment'. I started getting seriously interested in functional programming then. I am now starting to see the flaws and bloat in the approaches I was taking before.

I recently read Out of the Tar Pit again. I had read it before but I didn't really get it. This time it excited me and made my brain whir with ideas, especially about how I can make these approaches compatible with the constraints of my day job.

I actually now digest as much as I can about FP. Haskell, as an important player, frequently comes up. After laying the ground work, monads make a lot more sense. Haskell as a language excites me.

In my opinion, and I am self-taught when it comes to computers so it may not be 100% applicable, here is what I did to start my transition from imperative to declarative:

a) experiment with functional programming. Not actually get it and be ok with that. Carry on down the path.

b) Read HN and note articles posted about FP

c) Continue learning about imperative programming and consume the advice from OO thought leaders. Apply it to your work.

d) Learn a bit about maths and what it actually is, especially realising it's not just rote computation and application of algorithms from school.

e) Take a functional approach instead of an OO approach within the language you are working with. I did this in PHP. Not only will it show you how FP compares, it will also show you why Haskell/OCaml/etc. exist, by way of the deficiencies for FP in your chosen language. In PHP, the type system (arrays as your everyman...), inconsistent type hints, explicit closing over of variables in outer scopes, and piss-poor support for modules are some of them. The biggest is that the only way to pass around a function in the function namespace is to use a fully-qualified string (the function name along with its namespace; you get an undefined-constant error otherwise). With Java, it might be that you cannot have standalone functions, which would suck.

This helped a lot for me but I'm sure spending a long time learning about software down to the hardware in my spare time contributed a lot to this too.


Nice to hear.

I had similar moments of epiphany while taking a course on abstract algebra and writing down some theorems in the form of Agda definitions.

As you may know, Agda is a Haskell-inspired language that has an even more powerful type system that makes it possible to define arbitrarily strong types, so that for instance you can define the type of a function that sorts a list—and then know confidently that any implementation of that type does indeed sort a list.

Anyway, when learning Agda, one comes across the notion that programs and proofs are in some sense the same thing. In other words that the code for a function can be read as a proof that the type of that function is inhabited. This seemed to me kind of bizarre and magical at first...

The more I thought about it, the more it made real sense, and not only that, it helped me understand proofs in a much more intuitive way. Before, if you had asked me what a proof is, I might have said "it's a very strict way to argue that a statement really is true." I didn't know that such proofs can be given formal structure... I couldn't really explain how to relate it to programming or functions...

But now that I've to some degree internalized the notion, if you ask me what a proof is, I might say it's an unambiguous description of how to transform an input into an output.

If you call the input P and the output Q, then P is a premise and Q is a conclusion, and the proof is the "arrow" P -> Q that through some well-defined sequence of "truth-preserving transformations" gives you knowledge of Q assuming nothing but P (and some fundamental axioms).

And in the world of programming, P is an input argument and Q is a resulting value, while the "arrow" P -> Q is a well-defined sequence of correct transformations that produces a Q-value given only a P-value (and some fundamental primitive functions).

So in this sense you can say that the implementation of a function is actually a kind of logical argument.
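
A tiny Haskell rendering of that reading (my own example): the only total way to inhabit this type is composition, and the type itself reads as "from P -> Q and Q -> R, conclude P -> R":

    syllogism :: (p -> q) -> (q -> r) -> (p -> r)
    syllogism f g = g . f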

If you want to make use of this insightful connection (the "Curry-Howard equivalence"), however, your programming language needs to be pretty strict. If your P -> Q argument suddenly throws a null pointer exception, it's not much of an argument—it means your claim of being able to produce a Q given P is false.

This connection really helped me to understand both proofs and programs in a deeper way.


Thank you.

1. I'd heard of Agda but never read any code. What a brilliant language. I've added it to my list of my languages to spend time with.

2. I think your experience with Agda, especially the realisation that programs = proofs, leading to the Curry-Howard correspondence is exciting and rightfully bumps Agda to the top of my list.

Functional programming has also opened my eyes to type systems: an inconsistent and weak type system is a bigger liability than I once thought. I'm realising that daily, especially with null values and "trying to call method on non-object" errors.


If you could share some of the resources you used to explore discrete mathematics, that would be really helpful.


I found the book 'Discrete Mathematics and Its Applications' helpful. People often recommend 'Concrete Mathematics' too.


Excellent quotation in the intro to that paper:

> ... Our intention is not to use any deep theorems of category theory, but merely to employ the basic concepts of this field as organizing principles. This might appear as a desire to be concise at the expense of being esoteric. But in designing a programming language, the central problem is to organize a variety of concepts in a way which exhibits uniformity and generality. Substantial leverage can be gained in attacking this problem if these concepts are defined concisely within a framework which has already proven its ability to impose uniformity and generality upon a wide variety of mathematics.

(John Reynolds, "Using category theory to design implicit conversions and generic operators", 1980.)


I'm truly glad that you're enjoying that paper :)

This is completely unrelated, but I found this paper to be similarly brilliant in its simplicity, clarity, and rigorous explanation: https://ramcloud.stanford.edu/raft.pdf


That paper took 25 years to refine. Additionally, for the right audience, more math could make it more succinct.


I don't doubt it, but I'm not sure what you're trying to say. Would you elaborate?


I have a hard time already convincing myself that correct sequential imperative algorithms are indeed correct - which probably means that I'm a bad programmer, but I can't change who I am. So convincing myself that Raft, a concurrent algorithm, is correct (meets its specification) from an informal description is completely out of the question - where is the formal proof of correctness?


Take a look here for pointers to formal proofs:

https://news.ycombinator.com/item?id=10017549

Specifically Verdi, a Coq framework for distributed system verification, which now includes a formally proven Raft implementation.

https://github.com/uwplse/verdi

This appears to be the main theorem, "raft_linearizable":

https://github.com/uwplse/verdi/blob/master/raft-proofs/EndT...


Thank you very much. :-)


Ah, sorry. I think you're reading this with a much deeper understanding of mathematics than I.

I'm really curious, though. Could you give me some insight into how algorithms are proven correct?


The most dreaded words in math education: 'from which it is obvious that...'.


What are some examples of good pedagogy in programming?

I would point to pretty much anything by Norvig, Knuth, Sedgewick, and maybe to Zed Shaw's aggressive tutorials.


Pedagogy is related to teaching children, andragogy is the education of adults. Each has unique challenges in that the young lack a base of knowledge and adults may have their own conceptual frameworks that need to be addressed.


Semantics! (Though I'll be sure to remember that distinction for future conversations).


Fair enough. Teaching non-developers functional programming may prove an entirely different challenge than teaching procedural programmers functional programming.


This seems to disagree with standard usage: https://en.wikipedia.org/wiki/Pedagogy


If you ever see anything online that has more than, say, half a dozen users, then it is NEVER the case that it suffers from "poor pedagogy". Why? Because it only takes one person to explain something; even if the explanation is brief, people will start linking to it and eventually that method of explanation spreads. Like pointers in C, or classes - who would say those "suffer from poor pedagogy"? Anything is easy to teach; it just takes a single person who understands it.

But.

If you ever see a technology truly suffer from "poor pedagogy" then it has a secret: in some fundamental, underlying way, it sucks.

Now the rules of the game are different. Suddenly, those who understand it will keep it under their breath and muddy the issue. No one will be explicit about these design choices and simply explain it: to explain it is to criticize it.

Then you have this case where people are holding their tongue, and those who eventually figure something out join that small cabal and simply don't mention it in normal, sane terms. They just don't - it would be rude to do so.

So remember. If you ever perceive poor pedagogy online in any tech subject, then it is camouflage, you are being lied to. In some fundamental way nobody will mention, it simply sucks.

At that point it's up to you to read between the lines and decide whether it still has enough benefits for you to learn it.


If you can explain clearly why something "sucks," then that explanation will spread and soon enough everyone will agree that you are right and the thing is useless and bad.

But if you notice yourself spreading irrational FUD under throwaway nicknames, and people don't seem to quickly catch onto your point, it may be that secretly and fundamentally, you are trolling.


It's inappropriate to make ad hominem attacks here; please refrain from doing so.

I didn't name any shortcomings in Haskell, I gave a general rule of thumb.

However, as you yourself demonstrate dramatically, by using an ad hominem and saying "spreading irrational FUD": if I did have a clear and rational understanding of why your preferred technology has underlying shortcomings as a trade-off, which learners need to deal with, then I certainly would keep that information to myself. Who wants to be skewered by a community?

For this reason, you will never see clear explanations in that format. Instead, people who have attained that information for themselves will simply keep it to themselves - they won't teach it at all. They will teach it without reference to alternatives or to the strength of trade-offs that have been made.

That's why it will seem like a case of "poor pedagogy". But it never is that. It's a case of people in the know politely avoiding teaching it.

Please note that this is just my personal opinion, you can disagree with it. I didn't mean to start a debate.


You were claiming, with extremely vague reasoning from a throwaway account, that any technology with a reputation of bad pedagogy—obviously referring to Haskell—"fundamentally sucks."

The ad hominem fallacy is when you use a character attack but pretend it is a logical argument. I don't claim to make a logical argument—there's nothing logical in your comment to respond to.

If you don't want to start debates on the internet, try not telling people that the technology of their choice "fundamentally sucks."


Haskell doesn't have a reputation of bad pedagogy per se. My only point was what anyone's reaction should be when their personal impression is "bad pedagogy", in any area of software. It means that something is missing.

The idea that I'm "obviously referring to Haskell" and attacking me for it is beneath a response.

By the way, this thread is an excellent example of why, if I did have clear arguments for limitations baked into technology xyz, I certainly wouldn't share them: learners are left in the difficult position of being presented difficult material/approaches, without context, by those who understand it but keep mum about any limitations or difficulties.

(An example would be the syntax for standard library container operations in C++ (stl things), like with vectors. Nobody is going to explain that this syntax is so much uglier than higher-level vector/array syntax due to history, because the minute they did it would become obvious that there should be a CoffeeScript-like intermediate syntax parser that turns easy syntax into C++. Instead, it's presented without any such context, and nobody fixed the fact that this is harder to learn, harder to parse or debug, and more error-prone, than higher-level interpreted languages. It's just silently accepted without reference to this being a historical design trade-off.

I'm not objecting to it, but if something seems to be described circuitously or not being taught very well, then you're probably not being told the whole story.

There is a good and specific reason why.)


I apologize for misinterpreting.

Note that just as your original comment was murky and seemingly accusatory, so was my original reply to you—because I was parodying you.

You talked about unspecified things fundamentally and secretly sucking... so I said that you might be fundamentally and secretly trolling... who knows if any of these claims are true? They're both murky, vague, and somewhat paranoid.

The more straightforward argument you're producing now seems interesting, and I agree to some extent. I'll happily admit that Haskell—just like every single language—has historical warts and problems. Some of them are being addressed by the community, with some of them the community is stuck in arguing, etc.

C++ has always been firmly committed to backwards compatibility, which is one reason it is such a successful industrial language. Haskell has some of that too, and incompatible changes made to the core language are rare and always met with suspicion.

However, I don't think the Haskell community is especially dogmatic about the perfection of their own language. If anything, people are constantly discussing ways to make it better.


> The idea that I'm "obviously referring to Haskell" and attacking me for it is beneath a response.

You were replying directly to a post that claimed "the haskell community is, on the whole, truly awful at pedagogy" so it's easy to see how mbrock mistakenly concluded that, when you said "in some fundamental, underlying way, it sucks", you were referring to Haskell (when in fact you meant something entirely different).


His point is that your argument is so difficult to follow that it looks ad hoc. That, in combination with your throwaway account, raises red flags for trolling.

Ad hominem attacks are informal fallacies, granted, but they're still sometimes relevant.


So difficult concepts are useless... got it.

>Anything is easy to teach, it just takes a single person who understands it.

Spoken like someone who's never taught a difficult subject.

More to the point: this is what anti-intellectualism looks like, and it's not something to be proud of. If you made the effort to understand the occasional cryptic idea, you'd be amazed at the number of useful things you would learn.


I'm sorry, can you give a single example in the arts and sciences of something that has a reputation for "poor pedagogy"? (But is in fact just inherently difficult.)

Here are some inherently difficult subjects: quantum field theory, advanced calculus, orbital mechanics. Do any of these subjects suffer from a reputation of "poor pedagogy"?

I was referring instead to specific technological solutions, and, specifically, to encountering the idea that a subject "merely" suffers from poor pedagogy.

My comment had zero to do with the difficulty of any subject.


> Average people are lazy idiots and designing your language to appeal to them will result in a very popular incoherent mess, like Java and Ruby, and Dijkstra will make pointed remarks at you from his grave.

This is so funny

Yeah, Ruby and Java are not great. But they get the job done and produce value, and a lot of people use them.

I couldn't care less what Dijkstra thinks, he's not the one paying my bills

If you can see the value in Haskell, great. But I find it hard to name the big technology companies using Haskell (or tools built with it). Maybe because if you force 'mathematical consistency' on an inconsistent world, everything else sucks.


Sure. I'm a huge fan of bash scripts but I'm certain that attempting to enforce mathematical consistency on that language would be a comical disaster. (Although Gabriel Gonzalez's "pipes" library for Haskell does make reference to a categorical interpretation of Unix pipes—hey, it's like this "math" stuff is quite widely applicable.)

Look up Facebook's use of Haskell if you're curious about industry use.

https://code.facebook.com/posts/302060973291128/open-sourcin...

I worked for a startup that used Haskell for its entire backend, including a custom kind of graph database. I joined my coworkers at a Haskell hackathon in Zürich and met hundreds of people using the language commercially (and happily).

It's not that Java and Ruby suck—it's just that they're incoherent and messy. That may sound like slander but I have used both languages professionally, I have contributed to both Rails and the Ruby interpreter, I don't think they are bad tools. Everything is a tradeoff. If you CAN'T see the value in Haskell, I humbly recommend you look into it, but I can't force you to become interested.


So the goal is for Haskell to be the Ithkuil of the programming world?

That's a noble idea. Knowing this also makes it easier for me to choose to avoid reading articles about it.


No. The goal is to be useful in both research and industry by means of principled design based on mathematics and engineering. If that scares you, fine, you don't have to be interested in everything. If you think everything that's not designed to appeal to some notion of an "average programmer" is necessarily obfuscated and impossibly difficult, that's on you. Go ahead and avoid it.


That sure does sound like a yes, not a no.


No it is not. Haskell appeals to a certain audience, for the same reason that assembly and C/C++ appeal to a certain audience (fun fact: this category pattern will also be available in C++17 to be known as Concepts).

Sure, it is a niche language, and might have too steep a learning curve for the average programmer, but it is used in production, by real world businesses, and people write real world code with it.


The theory is for getting consistent/composable APIs, and trust in the code - trust that you can optimize it, parallelize it, or just that it behaves the way it's supposed to, without having to read every line of it. You don't need to think in theory to benefit from that.

While it's certainly cute that I can `gem install` a CMS, the Ruby ecosystem is infamous for its magical, non-composable, monolithic frameworks. The functional ecosystems are working toward fixing the all-too-familiar problem where you have to either start over, or hack apart some giant stateful mess anytime your exact usecase hasn't already been built as a library. It's easy to throw a nice looking house together with Lincoln Logs, but I'd take Lego instead any day.

Edit: Also, just because someone hasn't built the tallest building in your neighborhood yet doesn't mean they're not building anything. JP Morgan, Kaspersky, Facebook, and IMVU are hiring Haskell devs, just off the top of my head.


Agreed on popularity of language/framework not being an indicator of success (wrt to software evolution), quite the opposite seems to happen in the mainstream.

Worth noting that finance industry heavyweights such as JP Morgan and Standard Chartered (largest Haskell shop in the world) are laying off more than they're hiring these days[0][1], though unlikely this trend would directly affect tech hires (current batch of layoffs seem to be a trimming of management tier).

On a somewhat related note: were Standard Chartered to open source their fork of GHC many Haskellers not sold on lazy-by-default would rejoice; from Scala side of the fence my interest in Haskell would increase 10X were that to happen, line numbered stack traces alone, who woulda "thunk" it ;-)

Moving forward, Haskell's probably going to continue to be pilfered of its theoretical gems by other languages that leapfrog it in terms of adoption. Not a bad thing if we take SPJ's "avoid success at all costs" at face value.

[0] http://www.bloomberg.com/news/articles/2015-05-28/jpmorgan-s...

[1] http://www.bloomberg.com/news/articles/2015-07-20/standard-c...


Sure, we can have a perfectly reasonable debate as to whether or not lazy-by-default is the best choice for a language. I personally like the laziness, but if Scala, or Clojure, or ML, or Elm, or maybe by some act of god COBOL "wins" by taking the good parts, that's fine.

But here, we have an article trying to make some concept approachable - some math, not tied to a specific language. Maybe it's hard for most to follow, since it assumes the reader knows what a monad is. That's fair, but it'd be the same if it were Scala.

The top comment here - and a significant number of the upvoted comments here, and on every similar article - are just baseless attacks against the idea that people should learn this. Not questions or technical discussion, just attempts to ostracize the part of the community that finds it valuable. It's anti-intellectualism, plain and simple. Maybe it's the terminology, or the syntax, or because articles didn't show the lock before giving the key, but it is what it is.

- Re: Hiring - JP Morgan may be doing layoffs somewhere, but I mentioned them because they made a "Hiring Haskell Devs" post within the last week.

Edit: Seriously, go re-read that top comment, from the condescending reference down to the assertion that it's all mental masturbation. In response to a post by a knowledgeable person trying to teach something useful to people who are interested. That's the top comment. Embarrassing.


>That's the top comment. Embarrassing.

My experience on HN has been that the top comment in most threads is usually the most incendiary one, not the "best" one.


> Worth noting that finance industry heavyweights such as JP Morgan and Standard Chartered (largest Haskell shop in the world) are laying off more than they're hiring these days[0][1], though unlikely this trend would directly affect tech hires (current batch of layoffs seem to be a trimming of management tier).

Indeed. Hiring of Haskellers continues apace at SC: https://twitter.com/donsbot/status/654630194519646208

> were Standard Chartered to open source their fork of GHC many Haskellers not sold on lazy-by-default would rejoice

It's not a fork of GHC. It's an entirely new compiler, called Mu.


Yes, the tech is eating the rest of the business. You need fewer people after you've turned a lot of the business into an API.

I've got four open positions at the moment.


JP Morgan may be laying people off but they had a hiring ad out on the Haskell subreddit <1 week ago [1] looking for help.

I don't think general layoffs at all correlate to language usage in this case (they didn't cut a Haskell team, specifically).

[1] https://www.reddit.com/r/haskell/comments/3re0kp/jpmorgan_ha...


Some time ago there was a post about this (I can't find it).

People don't understand what monads are because they don't know what problem they solve.

This article has the same problem. It doesn't explain what problem categories solve. Well, it does, but the average reader doesn't understand that the problem exists.


I think they also don't understand monads because they have confused notions about the kind of concept "monad" is on the meta level. Their criteria for understanding and their methods of learning are unsuitable for becoming competent with the use of algebraic concepts. They are trying to think their way to a type of "intuition" that simply is not possible, kind of like trying to learn to ride a bike by "understanding" bikes, comparing bikes to various other things, or spamming the internet with complaints about how bikes are strange and difficult. (What's the problem with just taking the bus?!)

If Haskell's "Monad" type class were instead called "AbstractComputationFactory" or some other such intimidating technobabble, then when someone looks up what is the superclass of IO they might just think "oh, that's some weird detail, whatever, I might learn about that later." Since instead the name "Monad" is vaguely funny and appealing, people want to master that concept in principle before they even get "Hello, world!" to run in GHCi.

Of course people are free to focus on whatever they want, and we can talk about the possibility of improving the pedagogical aspects of Haskell, and probably university teachers have interesting things to say about that.

But there is no magic way to explain an algebraic concept that will suddenly make it intuitive and easy. If people want to understand them, they'll have to study. If people want to use them without understanding them on the conceptual level, that is eminently possible.

I myself don't understand monads and I've been using Haskell since 2005. I'm just not good at abstract algebra (flunked that class) and PDFs terrify me. However, even though if a mathematician asked me to explain monads I would be embarrassed, I can still use them and even define my own. That's because I've tinkered with them and acquired a know-how feel for the structure and what they are good for and how the patterns work—which is exactly how I've learned every other concept in programming.

Since most concepts in programming are ad-hoc, informal, and specified by hand waving, there is not even a possibility of understanding them at the level of mathematical rigor. What is an object? What is a class? Nobody knows. They just use them.


More than people trying to learn Monads, I see people trying to teach them. Few people write introductory guides to the "AbstractComputationFactories" of other languages, but it seems like everyone who gets (or thinks they get) monads feels the need to write a guide on them. It's inevitable that newcomers to the language will stumble upon them early on.


True enough. That has to do with the allure of jargon, I think. It's become a meme in itself. Nowadays on the Haskell subreddit, what I see most upvoted are tutorials that involve creating software that does something concrete. If that happens to involve a clever but not overwrought use of cool type system features, all the better. If anyone reading is yearning for some internet points, go write a Haskell tutorial that's useful, concise, and mostly shies away from the rabbit hole of abstract explanations. Even if it's merely about how to run a regexp or do a POST request.


It's like explaining the Iterable interface out of the blue.


> People don't understand what monads are because they don't know what problem they solve.

Or for that matter, the fact that it even is a problem, because it's easy to see that there are plenty of other programming languages with which people can do lots of productive things without ever thinking of the term "monad".


Replace the word "theory" with "abstraction" or "interface" then. All the article is saying is that there's a bunch of common things (Unix pipes, function composition, procedure sequencing) that all follow a few simple rules (a "contract" if you like). This means you can reason about them together without having to think about the individual "implementation details".
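
Concretely, code written once against the contract works for every implementation (a small sketch):

    import Prelude hiding (id, (.))
    import Control.Category

    -- Usable with plain functions, Kleisli arrows, pipes, ...
    twice :: Category c => c a a -> c a a
    twice f = f . f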


Reading this article I had the same feeling I had when I was a physics student in math classes: why the Haskell do I need this? It was like every time some abstract math concept was introduced with no connection to reality whatsoever.


For me, learning mathematics is a way of priming my imagination with patterns and tools that may or may not be relevant, but with the qualification that almost everything can find use if you look hard enough. I'm curious and like to think so having a library of abstractions is always satisfying. Just sitting down to figure out how math can apply to some simple object of meditation is usually fruitful.

You can get away speaking English with a relatively compact vocabulary. But reading broadly to enrich your vocabulary with new words and idioms allows you to describe events more fluidly, and perhaps in a way which inspires you into better ways of describing the same thing you could have described with less poetry in the compact vocabulary. In that way words let you change your perspective by affecting how you frame your sense-data.

Mathematics is the same way. If you want to work in a variety of domains, you're going to have to be agnostic ahead of time as to what tools might be useful, which is to say, have many tools.


You don't learn an English vocabulary by reading the dictionary, you read literature and see how those words are used in context.

Learning in a vacuum is hard for all but the most abstract thinkers.


I don't think I've ever read a math book that just went "Here is a list of definitions and the statement of some theorems. Have fun!" The problem is that math is usually built on top of more math. So the motivating examples given are often examples in mathematics, just in a different area.

For example, most category theory books talk a lot about the categories of abstract algebraic structures to get you an idea of what the things they're describing look like in "the wild." If you've never done any abstract algebra, it's like trying to use the Rosetta stone when you don't know Egyptian or Greek.


> Just sitting down to figure out how math can apply to some simple object of meditation is usually fruitful.

I personally find that bordering on impossible (and I still obtained a physics degree). I need a few examples of how a concept can be applied before it starts making sense.

At the moment I'm dabbling in image processing, and it wasn't until I had some visual explanations of morphology that it made a bit of sense. Now the maths has to follow on...


As a physics student in math classes, you should be more aware of the applicability of abstract theories, not less.


In terms of usability, this seems to me up there with the set-theoretic formulation of the relational model. I.e. not something everybody needs to do, but an efficient descriptive model for complex systems.

When designing programs or datastructures I prefer to use pen and paper and some simple set theoretic ad hoc notation system.

I write code for production daily, and without this higher-level design model my code would be in terrible shape and very slow to modify.

"As a philosopher, it pains me to know people don't think that way, but if you aren't changing people's lives then all the theory is just mental masturbation."

Basically, really good programmers seem to see the optimal structure for a specific problem through sheer practice. I need to have some sort of theoretical framework to use when composing my program logic.

I've not had the time to look into category theory but this post succeeded in convincing me it's a practical mental tool to help in program composition.

So, based on my personal preferences, I disagree that this has no practical benefits.


> To the average person, a theory codifying a style into a pattern just isn't useful

I think the average OO programmer recognizes the value of design patterns, eg the Gang of Four book.


Only after realising what 'problem' a pattern solves, perhaps by experiencing the problem over and over. I think this is why design patterns in OO are commonly abused (overused) - people learn about them without understanding the problem.


Sure, I agree. Novice programmers are not going to instantly understand what a category is, or how to utilize compositional programming without a bunch of experience and examples. However, I think for programming to progress we need to develop theory.


The proof is in the pudding.

You know why we have JavaScript? Because someone went and built it. Once it was built, it existed and people used it. All the theories about what a scripting language for the web should look like fell out the window the moment someone went and put one in their browser.

De facto standards beat de jure standards. Every time.

Theory is great. It makes for amazing and stimulating conversation. But in the real world the only thing that counts is building shit.

PS: this is also why English is the global language of trade and not Esperanto, even though English is harder to learn, has a terrible "API", and has inconsistent semantics, grammar, and syntax. Esperanto is clean, well designed, learnable in 150 hours - and not used by anyone.


> De facto standards beat de jure standards. Every time.

Only for a very base definition of "beat". People are falling over themselves to write X-to-Javascript compilers because Javascript simply isn't good enough.


Well, presumably an advance will eventually provide a competitive advantage.

For example, before the Copernican revolution, we had a model for the cosmos that made decent predictions. The heliocentric model actually made worse predictions for a long time, until it was evolutionarily improved. However, to the early scientists who adopted it, the heliocentric model "felt right."

Maybe right now programming in a shitty language like javascript with no regard to program structure is better at "getting shit done" than programming in a functional language using universal abstractions like composition. To people who study and think about it, though, it "feels right." Perhaps if they stick with it and perfect the art, it will eventually provide such a competitive advantage that it will be universally adopted.


There are already JS libraries for things like functional composition. If current trends persist, it will eventually get language-level support for that.

JavaScript is a tricky language like that. It lurks in dark alleys, beats up other languages and rifles through their pockets for spare syntax and semantics. Much like English.


Yeah, but the GoF book is mostly gobbledygook when it comes to designing program composition - it mainly shows how to patch Java and C++ to be more like Scheme. It does not actually help that much with program composition.

To quote Feynman incorrectly, Design patterns ala GoF are about as usable to programmers as ornithology is to birds.


> Philosophy of science is about as useful to scientists as ornithology is to birds

> Design patterns ala GoF are about as usable to programmers as ornithology is to birds.

So here design patterns is compared to philosophy of science. I think of design patterns more as a toolbox than an inapplicable study of the art. Much like how math is essential for science, I feel that solid programming abstractions and patterns are essential for software development.


"I feel that solid programming abstractions and patterns are essential for software development. "

Yeah, I feel exactly the same. To me object oriented programming as served by C++/Java/C# is good and fine for building abstractions over existing systems.

System design, however, is served much better by thinking about the system in terms of fundamental data structures, graphs, and relational models rather than as an obtuse object hierarchy of some sort. This is my main gripe with GoF - it's used as some sort of toolbox for software design when actually it just explains how to convert familiar, non-ceremonial idioms from other languages into C++'s verbosity. If a student is not exposed to software construction in those other languages and only to GoF, he/she is missing out on so much (I know I was, at least).

But this is just my personal preference, maybe someone else writes much better software when thinking in objects but for me my initial years as a programmer were basically wasted in trying to build beautiful object hierarchies out of everything.

I write mostly C++ as my day job. Software modules which are built by embracing the C++ object model to its full extent are the most horrible stuff I have to deal with :)


I think the complaint is basically that the majority of GoF patterns are better solved simply by having first class functions, and many students won't realize this because they only know languages without them, so they never develop an intuition for seeing such simple ways of solving problems, and instead build complicated machinery to do trivial things. And GoF tends to legitimize this approach in their minds.
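
For a concrete taste (toy code, hypothetical names): the Strategy pattern, which needs an interface plus one class per strategy in classic GoF style, collapses to "pass a function":

    -- A "Strategy" is just the type of the function you pass in.
    type Strategy = Int -> Int -> Int

    combineWith :: Strategy -> [Int] -> Int
    combineWith strat = foldr1 strat

    main :: IO ()
    main = do
        print (combineWith (+) [1, 2, 3])   -- the "AddStrategy"
        print (combineWith max [1, 2, 3])   -- the "MaxStrategy"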

It's a good book, but it's not a book for beginners.


> Why was Rails successful?

I don't think Haskell aims for this kind of success.

Abstractions can be useful and they can be beautiful. If your goal is to "easily build functional CRUD apps" you need the former. If you do research in computing science or would like to further your understanding or simply seek to appreciate the elegance and beauty in computation, you need the latter.

Admittedly, the latter goals (except research) are self-directed (they may make you a better or at least a happier programmer) and they don't directly contribute value to your customers, so I understand why you just called them "mental masturbation". However, I strongly disagree with that label. There is a long-term value in learning and general self-improvement.


Unfortunately I don't know of much research in formalizing CRUD apps, but maybe it's out there.

The long-term wish as I see it would be for us as an engineering discipline to develop a really clear understanding of the domain, enough to be able to build these CRUD apps in composable ways that can be reasoned about with clear language.

Guiding vague questions might include: What is the essence of a CRUD app? How could you define a CRUD app in terms of sets, functions, and categories? What kinds of laws apply in the general CRUD domain? How can we statically verify correctness properties for real-world CRUD systems?

That hope has nothing to do with mental masturbation and everything to do with wanting to concretely improve the state of civilization. And of course to reduce the amount of time spent making and debugging the same kind of thing over and over again.


I am by no means a researcher, but, as a programmer who writes said CRUD apps for a living, I can guarantee that I derive most of my understanding of a complex business domain from the structure of the relational database backing it. And what is a relational database? A (time-evolving) subcategory of the category of finite sets! Primitive types (as sets of values) and tables (as sets of primary key values) are objects. Fields (as functions of primary keys) are morphisms. Chained foreign key traversal is morphism composition.
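
A toy rendering of that view in Haskell, with made-up tables, where each foreign key is an accessor and chained traversal is literally (.):

    -- Hypothetical schema: orders reference customers, customers
    -- reference countries. Tables play the role of objects; foreign
    -- keys play the role of morphisms.
    newtype Country = Country String
    data Customer   = Customer { customerCountry :: Country }
    data Order      = Order    { orderCustomer   :: Customer }

    -- Chained foreign key traversal = morphism composition:
    orderCountry :: Order -> Country
    orderCountry = customerCountry . orderCustomer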

I find this way of reasoning incredibly powerful, because it tells me quickly what kind of transactions make sense in a business domain. Of course, this is contingent on the database being well designed. It is exactly the same as with Haskell types - if you can leverage the (type system) schema to encode interesting logical properties of your (problem) business domain, then (type checking) checking a query against the schema in itself becomes a lightweight form of formal verification.

One thing that must be remarked, however, is that, sadly, SQL lacks analogues of sum and product types, that is, tables whose cardinality (number of rows) is the sum or product of the cardinalities of other tables. This addition would make SQL even more powerful, but it would require a move to a more category-theory-based foundation for databases. (The relational model is based on first-order logic.)


Isn't a join of two tables essentially the product of the two tables?

That said, it does appear that sums are missing. In fact, it seems like every mainstream language manages to fail to add sum types for some strange reason, and you have to recreate them from scratch.


An unconstrained `join` is indeed the product of two queries, and a `union all` can be used to get their sum as well. This is in the query language. But what I need is sums and products in the table language.

There is a functor from the category of tables to the category of queries, which maps every table `foo` to the query `select * from foo`, however, a table isn't the same thing as a query: table fields may contain user-entered data, whereas query fields are always computed from table fields.

To give a concrete example, consider a database with two primitive tables, `male` and `female`. A primitive table is “independent” in the sense that you can add or remove rows from it at will. In other words, a primitive table behaves just like a normal SQL table.

Now define the derived table `person` as the sum of `male` and `female`. Because `person` is derived, you don't explicitly add or remove rows from it. Instead, every time you add or remove a `male` or `female`, a corresponding `person` also gets added or removed.

What I want is the ability to add the field `name` to the `person` table directly, without it existing in either `male` or `female`. You can't do this in SQL. The situation is similar for products.
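
The shape being asked for is easy to write down in a language with sum types (hypothetical Haskell types below); the complaint is precisely that SQL's table language has no analogue:

    data Male   = Male   -- male-only fields would go here
    data Female = Female -- female-only fields would go here

    -- `person` as a true sum of `male` and `female`, with `name`
    -- attached to the sum itself rather than to either summand:
    data Person = Person
        { name   :: String
        , gender :: Either Male Female
        }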


> An unconstrained `join` is indeed the product of two queries, and a `union all` can be used to get their sum as well. This is in the query language. But what I need is sums and products in the table language.

The "table" language of SQL (DDL) includes pretty much the the entirety of the "query" language (DQL) through view definitions.

Of course, to do what you would really want - use relations to implement product and sum types - you'd need materialized views with appropriate unique indexes for the candidate keys (and, ideally, auto-deriving the keys for product/sum tables from those of the base tables -- for product tables you can just concatenate the keys of the base tables; for sum tables you need the keys from the base tables to be equivalent, plus an additional column that uniquely maps to the source table).


A materialized view cannot contain user-editable fields unless these fields actually come from some actual table. Hence, a materialized view isn't a table in its own right, let alone a sum or product table.


> A materialized view cannot contain user-editable fields unless these fields actually come from some actual table.

"User-editable" is actually superfluous; this is true (or false) of read-write attributes in exactly the same way as it is of read-only attributes.

> Hence, a materialized view isn't a table in its own right, let alone a sum or product table.

You seem to be using "sum or product type" and "sum or product table" in somewhat unusual ways. Upthread, you suggested that sum and product tables were simply realizations of sum and product types by way of tables; but sum and product types have domains that are, respectively, the sum or product of the domains of the set of types each is based on - they don't include additional data.

The kind of augmented sum or product relation you seem to be referring to can be achieved in a relational database through views (including, to the extent useful to the application at hand, materialized/indexed views), where the "base" sum or product type is a materialized view as described in the grandparent comment (including the described indexes), and the additional fields are supplied through a related table with a foreign key constraint (the augmented sum/product being represented with a view that joins that table to the base sum/product view.)


In general, to say that a category has sums and products, these sums and products must be objects of the same category. The axioms for a category are totally agnostic to the concrete nature of its objects. Just because Hask objects are types, it doesn't mean objects in other categories are Haskell types or behave like Haskell types. In the context of databases, it makes sense to treat a schema as a category whose objects are its tables, and whose morphisms are chained foreign key traversals. And tables contain fields that carry data of their own.


I still don't understand. What you call derived tables seem like views to me. They always need to be composed of primitive tables. So there will be a primitive `name` table and the person view will be `name * (male + female)`... Admittedly everything will be in 6NF but it still seems doable. Is there a limitation that I'm missing?


Sorry for the delay. I wasn't really thinking of making a primitive table just for names - what would its primary key be anyway?

The limitation you're missing is that SQL doesn't let you readily associate a user-entered `name` with each `person`. The best you can do is put a `name` field in the `male` table, then another `name` field in the `female` table, and use both `name`s when defining a `person` view. In my opinion, this is inelegant.

What I'm thinking about can be illustrated with the following gist: https://gist.github.com/eduardoleon/1e8ad9174ec5ae0386dd


Created a fork with a solution.


It isn't apparent from my identifiers, but, in my `existing.sql`, `person`'s real primary key isn't just `person_id`, but rather `(gender, person_id)`.

The way you've handled it, now you have an invariant to maintain that `male_id`s and `female_id`s don't collide. If you want to define arbitrarily many sum tables in your database, this can be really hard to enforce. My `proposed.sql` doesn't have such a problem.


Right, which is why I initially said that sum types (disjoint unions) are missing :)


Product tables are missing too. Let's say you have tables `foo` and `bar`. For every `foo` and every `bar`, you want the user to specify a `qux` value. Presently, what you need to do is:

(0) Create a table `foo_bar`, with fields `foo_id`, `bar_id` and `qux`. In particular, `qux` must be nullable. [Yuck!]

(1) Add triggers to `foo` and `bar` that automatically insert or delete rows from `foo_bar`.

(2) Hope [I'm not joking] the user remembers to set all the `qux` values in `foo_bar` whenever he inserts a row into either `foo` or `bar`.
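
In type terms (a sketch with hypothetical FooId/BarId), the invariant a real product table would enforce is totality - exactly one `qux` for every (foo, bar) pair - which step (0) can only approximate with a nullable column:

    import qualified Data.Map as Map

    newtype FooId = FooId Int deriving (Eq, Ord)
    newtype BarId = BarId Int deriving (Eq, Ord)

    -- What the trigger-maintained `foo_bar` table gives you: pairs
    -- may be missing (here, absent from the Map):
    type FooBarTable qux = Map.Map (FooId, BarId) qux

    -- What a true product table would mean: a total assignment.
    type ProductTable qux = FooId -> BarId -> qux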


How does SQL lack product tables? The product of the column spec (foo int, bar bool) with the column spec (baz text, quux date) is the column spec (foo int, bar bool, baz text, quux date).


A column specification isn't a table. A table contains user-editable fields. If you have two SQL tables and compute their Cartesian product, the result is a query, not a table.


Could you describe precisely the category that you are talking about and explain why it doesn't have sums and products?


The category I'm talking about has tables as objects, and chained foreign key traversals as morphisms.

In SQL, this category is freely generated ( https://en.wikipedia.org/wiki/Free_category ) from a quiver ( https://en.wikipedia.org/wiki/Quiver_%28mathematics%29 ) whose nodes and edges are SQL tables and foreign keys, respectively.

In my proposal ( https://gist.github.com/eduardoleon/1e8ad9174ec5ae0386dd ), the category is still freely generated, but from something that has more structure than a quiver - some nodes may be designated as coproducts or products of other nodes.

In either case, the category can't possibly have all coproducts and products, and that's okay with me - a database can only have finitely many tables, after all. My beef with SQL is that it doesn't allow this category to have any coproducts and products at all.
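
For concreteness, "freely generated from a quiver" has a direct (if toy) Haskell rendering: a morphism is a path of edges, identity is the empty path, and composition is concatenation, so the category laws hold by construction:

    {-# LANGUAGE GADTs #-}

    -- `g a b` is an edge of the quiver (think: a foreign key from
    -- table `a` to table `b`). A morphism is a chain of edges.
    data Path g a b where
        Done :: Path g a a                       -- identity: the empty path
        Step :: g a b -> Path g b c -> Path g a c

    -- Composition is path concatenation:
    compose :: Path g b c -> Path g a b -> Path g a c
    compose p Done       = p
    compose p (Step e q) = Step e (compose p q)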


What do you mean by "SQL tables"? Presumably not the type of a table (you rejected that notion when I proposed it under the name "column specification") but rather a table with all its data?


Yes, by “SQL table”, I mean the table itself, with all its data. I don't think it's useful to think about “the type of a table”. Rather, I view tables as types in their own right - a table is the type of its rows.

I can see why this would seem weird from a Haskell perspective. Haskell encourages the programmer to view types as static collections of values. That is, normally, Haskell types don't get new inhabitants as your program runs. There are exceptions to this rule, like `IORef`s and `STRef`s, but idiomatic Haskell doesn't use these much AFAICT.

However, in Standard ML, some (static!) types are dynamically evolving collections of values. For example, if the flow of control reaches the line `exception Foo of string`, a new constructor `Foo` is added to the existing `exn` type. (As for why this is useful, see: https://existentialtype.wordpress.com/2012/12/03/exceptions-... ) Or, if the expression `ref x` is evaluated, where `x` has type `foo`, then the type `foo ref` gets a new inhabitant - a freshly created mutable cell initially containing `x`.

“Tables as types” is just reusing the idea of dynamically evolving (static!) types in a database context. Inserting or deleting rows from your `customer` table changes its collection of inhabitants, but it doesn't change the fact that there is a single type of customers.


Eh...

    -- the product, in the query language:
    SELECT a.*, b.* FROM a CROSS JOIN b;

    -- the sum (tagged union); UNION ALL so duplicate rows survive:
    (SELECT 'a' AS tag, a.* FROM a) UNION ALL (SELECT 'b' AS tag, b.* FROM b);


As I mentioned in my comment below: https://news.ycombinator.com/item?id=10529838 , the category of SQL queries has coproducts and products, but the category of SQL tables does not. Coproduct and product tables are very useful in their own right.


Ah OK, fair enough.

Not sure what definition you're using for the category of tables, but I don't think the distinction between table and query is really that significant, at least from a theory point of view. You can declare views and materialized views. In some SQL dialects you can even define triggers which allow you to 'update' them, not that I would have thought mutability would be particularly nice to reason about in a category-theoretic framework.

If you really want to do theory on this stuff, just use the relational algebra, or better yet just plain first-order logic. Much nicer, you have all the products and coproducts you want, and the results can probably be re-applied to SQL with a bit of kludge-work :)


Triggers are an imperative hack. This is like using a C struct with an enum and a union, and claiming that C has "sum types".


Agreed that SQL is ugly as hell, but if you want to talk about its theoretical properties that's a separate debate. Theory doesn't care whether something's aesthetically pleasing, just whether it's possible.


Monads are a common joke to this effect.

So much so that I wonder if the popular monad tutorial involving spacesuits and burritos wasn't actually made as a pisstake (but I am 99% sure the author was serious).


You might be interested in the inverse burrito-monad tutorial: http://www.cs.cmu.edu/~edmo/silliness/burrito_monads.pdf


> Sandy Metz doesn't get heard if DHH isn't building Basecamp.

Eh? Are you saying Sandy Metz isn't good, and needs DHH to provide a platform?

Disclosure: I have read POODR and am not sure what to make of it, as it's pragmatic but says the opposite of other OOP books I've read


>> Category theory codifies this compositional style into a design pattern, the category.

>To the average person, a theory codifying a style into a pattern just isn't useful

But you don't mean "the average person", you mean "the average competent programmer."

For example I'm not an "average person" and I don't understand the sentence you linked (the one I quote at the top of this comment.) It's just gobbledegook to me, the way a bit mask is to someone who doesn't know what bits are exactly, they're just, like, 1's and 0's in the computer right? You have to explain a lot before they would even understand what you're talking about. They might not know what a power of 2 has to do with anything.

This is an analogy. It's the same - average competent programmers don't have this mental infrastructure. I and most competent programmers have no idea what a category is.


>I and most competent programmers have no idea what a category is.

By your own admission, you have no idea whether or not category theory is useful to you.

Take pride in your ignorance if you like, but at least don't claim to know what you're talking about in the same breath.


Or in other words: mathematicians have been thinking of ways to solve problems much longer than anyone else; they have formalized it, and they have developed amazingly general methods.

There's a lot to learn from them.


But just using the definitions is cargo cult and using abstractions for the sake of using abstractions is not necessarily good mathematics. Currently it looks like "use this design pattern because mathematics".


+1 for "just using the definitions is cargo cult", that seems like a common problem!

I would add "just using a word from math is cargo cult". I just wish someone had told that to the 7 people who decided to use the word "static" for 7 different things ...


> Currently it looks like "use this design pattern because mathematics".

I've only recently gotten into FP, but to be honest, I prefer that to "use this design pattern, because design patterns", which is how things usually go, particularly in OOP. A lot of the "theory" in imperative programming seems four steps removed from the underlying theory (to the point where you can't really trace patterns to their underlying theoretical reasoning).

Sure, that often makes it easier to understand and utilize. To me, however, after 8 years of that, it's pretty sobering to start realizing just how much of your repertoire is cargo culting.

I would also argue that there is a huge difference in how the two worlds use, or even understand, design patterns. FP often puts you through the trouble of going back to basics (yes, sometimes even mathematical basics), and that can take enormous amounts of time and effort. That's a level of discomfort and slowness I have come to appreciate.


category theory is excellent mathematics useful in various areas of physics (if you're looking for proof of application)

why wouldn't it be useful in programming?


It's useful because it's a generic design pattern not because of the mathematics behind it (of which virtually nothing is actually used, at least in everyday programming).


Yeah, generic design pattern which is minimal, compact and simple.

Yeah, why would I use mathematics in programming.


> Yeah, generic design pattern which is minimal, compact and simple.

A good argument for monads. Much better than the dishonest appeal to mathematics.


I think you're reading more into spooningtamarin's posts than was actually there.


Really?

Yes, I definitely mentioned mathematicians, because they were the ones who showed us the problem solving methods, the modeling/design method.

I'm definitely agreeing that one uses practically none of the category theory in programming. But let's say that we can be influenced by the design of category theory, and get some theorems for free because of it.


I have been studying category theory recently; on the surface, it's merely a theory about dots that are connected by arrows.

For some reason, category theorists often claim that category theory is "more fundamental" than set theory, whatever that means. It's especially bizarre since many category theory books begin by saying they will define categories without using sets, and then proceed to say "let A be a collection of dots, and B a collection of arrows" but only God knows what the difference between a collection and a set is, from what I can tell...


Russell's Paradox (https://en.wikipedia.org/wiki/Russell%27s_paradox) means you cannot have a set-of-all-sets. For the category of sets and functions between sets, that means you have to use the weaker idea of a 'collection'.

A positive side-effect of this avoidance of set axioms in the definition of a category is that we can more easily define categories with very different ideas of membership. For example, a category where membership is a real number (like in fuzzy logic). Categories with this idea of 'generalised membership' (plus a few extra bits and bobs) are called 'toposes', and can be used in much the same way as sets are. Importantly, we can relate different categories like this together using the same terminology and concepts and so more easily define ways to join classical code with fuzzy logic, or with neural networks (or anything else which satisfies the topos axioms) and know they're not insane.

Incidentally, you can define the axioms of a topos (and sets are a topos) in purely category theory terms. That is why some (and I'm one of them) view categories as a more useful foundation than set theory (where attempts to define the converse are much more messy).


Basically it's more fundamental because if you try to bootstrap mathematics from set theory you need a set of 10 axioms which seem unmotivated and disconnected. Bootstrapping math from category theory involves stepping through a rich series of theories, starting from the merest notion of "combining things" and moving upward. It helps us see the foundations of mathematics as living within a large universe of alternative, slightly differing foundations, and therefore recognize the somewhat "arbitrary" nature of standard math foundations. In many ways this is a more appealing sort of foundations. Category theory requires sets to "bootstrap" itself, but you can get by with a naive construction and work up from there.


Sets are more than a collection. They have axioms that are not needed to define categories.


Mathematicians used the "naive" concept of a set as "a collection of objects" for thousands of years, and it led to a major crisis in the foundations of mathematics that was only resolved by establishing a rigorous system of axioms that defines what a set is.

So I disagree that a "set is more than a collection", with the qualification that the concept of a "collection" not associated with any axioms whatsoever is meaningless. Why do I say "meaningless"? Because this "naive" concept of a collection leads to a number of logical paradoxes that can only be resolved by using a system of axioms to define these collections.

So it's true that you can define sets by various axiom systems that are not the same, but these systems all exist under the umbrella of "set theory".

Using the "naive" concept of a collection is fair enough, since we all "know what you mean", but only as long as your point isn't that category theory is more fundamental than set theory.


Actually, it didn't resolve the crisis. In fact, categories, computability theory, and most mathematical findings of the last 150 years are a direct result of stepping around the crisis, namely Russell's paradox.

My point is not that category theory is more fundamental than set theory (though set theory can be described in terms of categories), but rather that it isn't derivative. You don't need the formalisms of set theory to have category theory.


If you continue your studies that statement will be clarified.


I presume a collection can have multiple of the same thing, whereas a set cannot?


i once had a conversation that ended with "i don't know enough category theory to be good at haskell."

after learning a (little) more category theory and a (very little) more haskell, i think that assessment was inaccurate, and i wouldn't want anyone else to think the same thing.

haskell is strictly-typed, sugared lambda calculus. if you want to learnyouahaskell, think about types and type classes. learn how to desugar wheres, lets, guards, etc. don't worry about functors, much less monads.
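
e.g. a toy desugaring, roughly what the compiler does:

    -- sugared:
    classify :: Int -> String
    classify n
        | n < small = "small"
        | otherwise = "big"
      where
        small = 10

    -- desugared: guards become if/case, where becomes let:
    classify' :: Int -> String
    classify' = \n ->
        let small = 10
        in  if n < small then "small" else "big"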


> i once had a conversation that ended with "i don't know enough category theory to be good at haskell."

> after learning a (little) more category theory and a (very little) more haskell, i think that assessment was inaccurate, and i wouldn't want anyone else to think the same thing.

Indeed. I work with some of the most highly-regarded Haskellers that exist. Few of them would claim they know much (or any?) category theory, nor do they care about category theory.


I don't know who the Haskellers you refer to are, or who they're regarded highly by, but all of the core members of the Haskell community that I know do care about category theory, and do know some amount of it. It's too useful to ignore.

You absolutely do not need to know category theory to use Haskell effectively and productively. Once you've been using it for a while, though, you pretty much inevitably start learning (and caring about) the mathematics that underlies it.


I'm pretty sure most of the following would say they don't know much (or any) category theory:

Simon PJ, Simon Marlow, BoS, Lennart Augustsson, Don Stewart, Duncan Coutts, Neil Mitchell, Roman Leshchinsky

That hasn't stopped them producing masses of great Haskell.

I would welcome correction or clarification by any of the above luminaries!


> Don Stewart

he wrote my window manager!

going out on a limb, i guess these people would say that they don't know much category theory because they understand that category theory contains results, not just definitions. moreover, it bleeds into algebra, geometry, k-theory, etc. pretty fast. even some /very/ respectable mathematicians (e.g. matsumura) are hesitant to say they know much about all of these things.

i think it's really cool that haskell makes programmers interested in math. i'm still pretty sure that everything about it that has anything to do with category theory is contained in the Prelude or on hackage, tho. in other words, all the category theory stuff that's "built-in" to haskell is itself expressed in haskell. at the risk of raising some hairs: at one point Crockford mentioned that making monads is possible in javascript, too.


> > Don Stewart

> he wrote my window manager!

Mine too!

To be clear, I would guess (and I'm by no means sure) that these very highly regarded Haskellers barely know how to define "category" and "functor" (the mathematical versions, not the Haskell versions) and certainly don't know how to define "natural transformation". I admit the following as evidence: https://twitter.com/bos31337/status/656319244263518208

I think it's very important that we, as the Haskell community, reaffirm at every possible opportunity that you do not need to know category theory, or even much mathematics, to succeed at Haskell.


If people are interested in reading more about how category theory can be used to help define the structure of software, there are two excellent books I can recommend.

The first, 'Computational Category Theory' by Rydeheard and Burstall, builds a library (in Standard ML) that allows one to construct objects that satisfy properties you care about (defined in category theory terms). For example, you could define a rules system in a fuzzy logic and have the right code be automatically generated using the library.

The second is less concrete, but probably more eye-opening. 'Category Theory for Computing Science' by Barr and Wells starts from the basic axioms of a category and ends with defining all sorts of interesting patterns for software modularity based on diagrammatic reasoning (a common tool in category theory for proving properties through simple pictures). The ideas here have helped me simplify and think more clearly about code I've been writing (in languages like Java, Python, and Ruby), even if the code isn't (ostensibly) categorical.


Note that Haskell functions, when considered as values, do not actually form a category: https://wiki.haskell.org/Hask


I believe that any such problems are due to non-totality and/or non-strict semantics, and generally aren't important in practice. For example, from the abstract for "Fast and Loose Reasoning is Morally Correct" (http://www.cs.ox.ac.uk/jeremy.gibbons/publications/fast+loos...):

  Two languages are defined, one total and one partial, with identical
  syntax. The semantics of the partial language includes partial and infinite
  values, and all types are lifted, including the function spaces. A partial
  equivalence relation (PER) is then defined, the domain of which is the total
  subset of the partial language. For types not containing function spaces the
  PER relates equal values, and functions are related if they map related values
  to related values.  It is proved that if two closed terms have the same
  semantics in the total language, then they have related semantics in the
  partial language. It is also shown that the PER gives rise to a bicartesian
  closed category which can be used to reason about values in the domain of the
  relation.


As I expected, I can follow it until it gets to monads. The thing about explaining monads isn't even a joke anymore; it's just become mundane reality.

It says that (.) is the same thing as (>=>). Fine, but then why are we talking about (<=<) and (=<<) instead of (id) and (.) now?


The point of the article is that even though function composition and monadic composition are made out of different things ((.) and id versus (<=<) and return respectively) they have the same underlying structure: associative composition with identity.
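
A toy side-by-side (nothing here beyond the article, just Maybe for concreteness) - the laws for (<=<) and return are the (.) and id laws with the arrows swapped out:

    import Control.Monad ((<=<))

    half :: Int -> Maybe Int
    half n = if even n then Just (n `div` 2) else Nothing

    -- return <=< f    = f        (compare: id . f = f)
    -- f <=< return    = f        (compare: f . id = f)
    -- (f <=< g) <=< h = f <=< (g <=< h)
    quarter :: Int -> Maybe Int
    quarter = half <=< half   -- quarter 12 == Just 3, quarter 6 == Nothing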


For me this has been really useful.

While I somewhat have a noob understanding of Haskell, and some experience with monads, the parallels between composing functions (i.e. "id" & (.)) and composing monadic functions (i.e. "return" & (<=<)) never really occurred to me. Now that I've seen this, I'll probably start looking for similar patterns in other places.

This has been what made Haskell click for me: the minute you start doing something that is slightly repetitive, like a for loop including a continue/break, parallelizing code or using if error/null all the time, there's a pattern there. Haskellers recognize these patterns and implement them once, so you can reuse them and avoid making mistakes by implementing something similar yourself every single time. Having this opportunity makes your code shorter and more concise, and instead of implementing something yourself, you can use these battle-tested implementations to model your code. (And thus avoid those annoying not-null/if error bugs, for example.)
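
A tiny example of that (with a throwaway `parseAge`; `mapMaybe` is the real library function): the "loop over items, skipping the invalid ones" pattern, written once and reused:

    import Data.Maybe (mapMaybe)

    parseAge :: String -> Maybe Int
    parseAge s = case reads s of
        [(n, "")] -> Just n
        _         -> Nothing

    -- map + "continue on failure", captured by one library function:
    validAges :: [String] -> [Int]
    validAges = mapMaybe parseAge   -- validAges ["12", "x", "30"] == [12,30]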

Even though Haskell might not be suitable for every problem domain, I found out it really shines by its explicitness. You model your domain using types for intermediate states, and the exercise of converting one type to another is rather trivial (e.g. a sudoku solver [3]).

The biggest problem for me personally has been my inability to speak the category theory language, but I'm learning... I also believe that the basic Haskell libs need examples next to their descriptions, because most of us learn by example. [1] If I can see what a maybe monad is for, I can start using it without properly understanding monads. After a while you'll have seen enough different types of monads to understand their use case. IMO one language implementer that understood this extremely well is José Valim, just take a look at the Elixir docs for data.list. [2]

Because Haskell has been mostly implemented by mathematicians and computer scientists, they assume basic upfront knowledge. This is what makes it hard for beginners. What we really need in the Haskell world is some cognitive bias mitigation. YMMV.

* [1]: https://www.reddit.com/r/haskell/comments/3rhpyq/as_a_haskel...
* [2]: http://elixir-lang.org/docs/stable/elixir/List.html#function...
* [3]: https://github.com/ToJans/learninghaskell/blob/master/0003%2...


Function composition and its generalizations are cool, but let's not blind ourselves and claim that they're the key ingredient for good programming.

We should be looking for more ideas that make programming better with minimal learning cost. Examples of such ideas that were wildly successful in the past: garbage collection, modules and imports, Unicode strings. Each of those was justified by practice rather than theory.


So this is a bit ranty and I hope it doesn't come off as too rude, but here goes:

Let's talk about functions. Why is id . f = f . id = f? id . f is a function which takes some argument, pushes it on the stack, calls f, pushes the result from f on the stack and calls id. This is two function calls and you go two stack levels deep. Just f is one function call and one stack level deep. If f and id are real functions executing on real hardware, then id . f is not strictly equal to f due to unavoidable side effects even if id and f are both pure functions, e.g. you may not have enough stack space to execute id . f, but you could execute just f. Real functions executing on real hardware aren't exactly the same as the pure mathematical notion of a function. If Haskell is a language for writing computer programs executing on actual computers, then id . f ≠ f; and if it isn't, then it's not very useful.

Abstractions are well and good, but I must be able to reason about how the machine will work while executing my program. Take the official Python implementation, CPython, for example. It's nowhere near as fast as C if you write your code naively, but there is very little surprising about it. Or take C. If you write e.g.

    int id(int x) { return x; }
    int f(int x) { return 2*x; }
    int id_f(int x) { return id(f(x)); }
Then the compiler is free to compile id to a noop and id_f to be exactly the same as f, but it is not required to. Most people won't write a program which requires that behaviour from the compiler in order to work correctly.

If the Haskell compiler is required to perform such optimisations, then I would concede id . f = f; just like tail recursion optimisation is required in Scheme, so tail recursion is actually equivalent to loops, rather than being just mathematically equivalent.

The broader point I want to make is that a computer programming language is inescapably tied to the reality of how hardware executes instructions and one must have some guarantees on how what is written will translate to machine execution.

Another thing is that once you reach a certain level of abstraction, you lose your ability to reason about the concrete things in that class. I'll illustrate what I mean.

Let's construct the set of Handleds. A handled is a thing that has a handle, and a handle is a cylindrical protrusion affixed to the object, that can be grasped by an average adult human hand and pushed or pulled along an axis perpendicular to the cylinder's height. So, a TNT detonation device is a handled, since you have a handle on the plunger you press to detonate the TNT. A door also has a handle. Your laptop bag has a handle. A table leg also technically counts as a handle and you can use it to move the table. The thing is, while you would have a theory on how to make handles, such that they would be easily operated by people, the fact that an object has a handle tells you nothing of the object, other than it can be manipulated with a hand.

Pretty much the same goes for a category. Yes, you can compose two elements of the category into a third. So... what does it do? You don't know. The problem is, the two lumps of fat that sit in your skull need to affix some concrete properties on the object in order to think about it. Functions are series of instructions that transform values. Or maybe they're relations, if you're a mathematician. But the essence of a function isn't in its composability (at least for programmers), it's in its usage as a unit of execution. Maybe also in its usage as a way to organise your source code.

There's no point in explaining something as "it's a category" any more than in a car salesman starting out by pointing out all the handles on the car and how the bumpers can be grabbed as ersatz handles in case you need to get a handle on the situation. Saying something is a category is a deferred meaning that you can only understand once you know what that something is, so it's more of a mental burden than an explanation of anything.

Category properties may be simple, but their relation to the actual execution of a program is anything but. If you want to explain Haskell, maybe start off with the simple parts. Like how what you write executes on the actual machine and work your way up.


Your point about equating id . f and f is related to what the equality sign means.

Presumably you don't object to the notion that sin(a + b) = sin(a) cos(b) + cos(a) sin(b), even though the left and right hand sides are totally different in terms of computation.

That's because you know that such identities, while indeed abstracting over computational details, have real meaning and are extremely useful in mathematical thought (and algorithm design; you can use that equation to implement an efficient numeric oscillator).
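
For instance (a rough sketch, not tuned for numerical drift): iterating the angle-addition identities produces sin(nω) for successive n with no trig calls inside the loop:

    -- Stream of sin(0), sin(w), sin(2w), ... via
    --   sin(x+w) = sin x cos w + cos x sin w
    --   cos(x+w) = cos x cos w - sin x sin w
    oscillator :: Double -> [Double]
    oscillator w = map fst (iterate step (0, 1))
      where
        (sw, cw) = (sin w, cos w)
        step (s, c) = (s*cw + c*sw, c*cw - s*sw)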

Given that you probably accept that trigonometric equality, you can't really dismiss equational reasoning in software as useless just because it abstracts over stack frames and machine instructions. That kind of abstraction is exactly the point.


I gave an example above of categories being useful for a very mundane purpose: https://news.ycombinator.com/item?id=10529150

Most categories that arise in practice aren't “just” categories - they have interesting extra structure. For instance, platonic Hask, the category of Haskell types and total functions, is bi-Cartesian-closed. The categories that arise when modeling databases are typically not closed (from a programmer's POV, this means that functions aren't first-class objects - you can't store them in tables), but they have finite coproducts (UNION ALL) and products (SELECT ... FROM multiple, tables). SQL-style aggregate functions are nothing but commutative monoid objects in these categories, etc.
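
The aggregate remark in miniature (a sketch): SUM is a fold of a commutative monoid, and commutativity is exactly what lets the database combine partial results in any order:

    import Data.Monoid (Sum(..))

    -- SELECT SUM(x) FROM t, as a monoid fold:
    sqlSum :: [Int] -> Int
    sqlSum = getSum . foldMap Sum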


Correct, and when the article says "any composition operator that observes these laws will be ... free from edge cases" this is not strictly true. Nonetheless, any such object will be freer from edge cases, and the concept is still extremely useful.



