
Scalable Program Architectures - mightybyte
http://www.haskellforall.com/2014/04/scalable-program-architectures.html
======
quchen
Gabriel is also the author of the popular _Pipes_ Haskell library, and a nice
person in general. Both of these make me heartily recommend his other blog
posts on various Haskell topics, which often explain advanced features
remarkably well.

Examples:

Scrap your type classes: [http://www.haskellforall.com/2012/05/scrap-your-
type-classes...](http://www.haskellforall.com/2012/05/scrap-your-type-
classes.html)

Hello Core: [http://www.haskellforall.com/2012/10/hello-
core.html](http://www.haskellforall.com/2012/10/hello-core.html)

Coding a simple concurrent scheduler yourself:
[http://www.haskellforall.com/2013/06/from-zero-to-
cooperativ...](http://www.haskellforall.com/2013/06/from-zero-to-cooperative-
threads-in-33.html)

Tutorial as part of library doc (!):
[http://hackage.haskell.org/package/pipes-4.1.1/docs/Pipes-
Tu...](http://hackage.haskell.org/package/pipes-4.1.1/docs/Pipes-
Tutorial.html)

~~~
the1
and quchen is a trusted authority. you can like whatever quchen likes, too,
until there's a heartbleed.

~~~
dllthomas
Pardon?

------
twic
The idea that this composability is unique to Haskell is false. Java's streams
are an example of exactly the same thing in another, non-functional, language.
The Gang of Four "decorator", "composite", and "chain of responsibility"
patterns are further general examples.

It is true, perhaps, that in Haskell, programmers reach for homogeneously-
typed composition more readily than programmers in other languages. Good for
them! But it's either arrogance or ignorance to assert that this is a special
Haskell thing.

Furthermore, I am dubious that this really is a good strategy for building
large programs. The idea that you can combine lots of parts of some type to
build a bigger part of the same type is extremely appealing. But in my
experience, the bigger part often has slightly different properties,
behaviours, or uses which warrant a different type with more features. Unless
you want to impose those features on the smaller types too.

For example, consider a batch application which processes files through a
number of stages (I realise it's the 21st century, but apparently we still
need to do this). There is clearly a type for a stage, with values for things
like uncompressing, validating, renaming, parsing, etc. There is probably
going to be a type for a chain of stages, with values for various uses of the
application. A chain looks like it should have the same type as a stage -
ultimately, both take a file in, and spit a file out.

But then, it turns out that we want to move the file through a sequence of
directories, one for each stage, as we process it (the operations guys are
really keen on this). Furthermore, we need to be able to report on what files
are currently at which stage. So, a stage knows which directory it owns -
presumably, it has a property of type _directory_ for that. But a chain owns
all the directories of its component steps, so it owns several directories -
it's going to need a property of type _collection of directory_. So what do
you do? Report a single directory for the chain, and somehow expose the rest
through a backdoor? No, that's a kludge. Have every step report a collection
of directories, which will mostly be single-element collections? No, that's
weak, because the type of a step no longer fully describes the constraints
on it. Use a higher-kinded type parameter, so the chain can have a collection
of directories, while the steps have a single one? Mad wicked, but racks up
the reader's cognitive load. Use different types for steps and chains? Well,
actually, since that's simple and doesn't have any practical drawbacks,
probably yes.
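That last option can be sketched in Haskell. The types and names below are invented purely to illustrate the trade-off (they don't come from the article or any library): once steps and chains are distinct types, building a chain is still just a fold, so you lose very little composition by separating them.

```haskell
-- Hypothetical sketch: a Step owns one directory, a Chain owns many.
data Step = Step
  { stepDir :: FilePath              -- the single directory this step owns
  , runStep :: FilePath -> FilePath  -- file in, file out (pure, for the sketch)
  }

data Chain = Chain
  { chainDirs :: [FilePath]          -- all directories of the component steps
  , runChain  :: FilePath -> FilePath
  }

-- Composition still works; the types just stop pretending a chain is a step.
chain :: [Step] -> Chain
chain steps = Chain
  { chainDirs = map stepDir steps
  , runChain  = foldr (flip (.)) id (map runStep steps)  -- apply steps in order
  }
```

Here `chainDirs` gets its _collection of directory_ honestly, while each `Step` keeps its single `FilePath`, with no sentinel collections or backdoors.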

~~~
loup-vaillant
> _The idea that this composability is unique to Haskell is false._

Haskell is one of the rare languages/communities where composability is the
default. Even purely technically, few languages do composability as well as
Haskell.

The Gang of Four patterns you mention most likely work around weaknesses of
Java. As such, they demonstrate the _shortcomings_ of Java, not its
strengths. If you need a pattern that badly, make it a language feature,
dammit. (Even without Lisp-style macros, source-to-source transformation is
quite viable.)

> _But in my experience, the bigger part often has slightly different
> properties, behaviours, or uses which warrant a different type with more
> features._

I'd say most of the time, you just failed to capture the commonalities. More
specifically, you seek generality through exhaustiveness, when you should use
genericness instead.

If you stumble upon something that looks like a monoid, _except_ for such and
such detail, then the details probably need to be either removed (they could
be a design bug) or integrated into a generic bucket or something.
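A standard instance of that "generic bucket" move, sketched in Haskell: `min` associates fine but has no identity element, so on its own it is only almost a monoid. Rather than hacking a sentinel value into the number type, wrap it generically in `Maybe`, which supplies the missing identity.

```haskell
-- "Minimum of two Ints" is associative but has no identity element:
-- on its own it is a semigroup, not a monoid.
minOf :: Int -> Int -> Int
minOf = min

-- Wrapping in Maybe supplies the missing identity (Nothing) generically,
-- instead of bolting a magic sentinel onto the original type.
minMaybe :: Maybe Int -> Maybe Int -> Maybe Int
minMaybe Nothing  y        = y
minMaybe x        Nothing  = x
minMaybe (Just a) (Just b) = Just (minOf a b)

-- Folding with the recovered monoid handles the empty list for free.
minimumAll :: [Int] -> Maybe Int
minimumAll = foldr (minMaybe . Just) Nothing
```

(This is exactly what `Option`/`Min` wrappers do in Haskell's standard libraries; the names above are just for the sketch.)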

> _For example,_ […]

Those specs suck. Whoever asked you this doesn't understand their own needs, or
is struggling with a legacy behemoth that should eventually be replaced.
_(EDIT: or, as mightybyte suggested, you just approached the problem the wrong
way. Which strengthens my point about failing to see the commonalities.)_

~~~
tel
> _I'd say most of the time, you just failed to capture the commonalities.
> More specifically, you seek generality through exhaustiveness, when you
> should use genericness instead._

THIS is one of the most important things I've ever learned from programming
Haskell[0]. The way to make things generic is to whittle away all of the sharp
edges and implement them as compositions atop base patterns. Those base
patterns arise out of polymorphism, not exhaustiveness.

You don't make lists generic by realizing that you need a list type to have
all the properties of a string, a tree, a vector... You make it generic by
realizing that much of its structure arises by forgetting what's inside.
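That "forgetting what's inside" point can be made concrete with a small sketch: the type variable `a` guarantees these functions can never inspect an element, only the list's shape, which is exactly why they work on lists of anything.

```haskell
-- Parametric list functions: generic because they forget the element type.
-- The type variable a means they can only ever touch the list's structure.
len :: [a] -> Int
len = foldr (\_ n -> n + 1) 0

rev :: [a] -> [a]
rev = foldl (flip (:)) []
```

The same two definitions serve strings, trees of numbers in a list, lists of functions — no case per element type ever appears.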

[0] Which is not to say you can't learn it elsewhere, or that you cannot _do_
it elsewhere. Instead, Haskell has the facility, the philosophy, and the
community to do it really effectively all the time and so if you read Haskell
code you'll be floored by some of the great examples possible there.

~~~
loup-vaillant
Fun fact: I have known this intuitively since forever, but only recently
realized that when I say "generic", most programmers I know tend to hear
"exhaustive", and recoil in horror; while I really meant "generic".

Now, I also know why I like parametric polymorphism, and dislike subclass
polymorphism: the first is generic, while the second is exhaustive.

~~~
mushly
What's the difference between "generic" and "exhaustive," and where can I read
more about all this stuff?

~~~
loup-vaillant
Imagine you want to process a list of foobarbaz, where elements can be foos,
bars, or bazes.

Exhaustiveness is when you write foo-specific code, bar-specific code, and
baz-specific code. Then your list-processing facility is general because it
handles all the cases. A typical way to do that in practice is to use subtype
polymorphism, with the class inheritance mechanism: have a foobarbaz
interface, a foo class that implements it, a bar class that implements it, and
a baz class that implements it. Each with their own code.

Genericness is when you ignore the foobarbaz specifics altogether: your code
doesn't even mention the types. A typical way to do that in practice is to use
parametric polymorphism (generics in Java, templates in C++). See C++'s
Algorithm library for an example, or the standard Prelude from Haskell, or the
OCaml standard library.
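The contrast can be sketched in Haskell, reusing the hypothetical foo/bar/baz names from the comment:

```haskell
-- Exhaustive: a closed sum type, and code that enumerates every case.
data FooBarBaz = Foo | Bar | Baz

describe :: FooBarBaz -> String
describe Foo = "a foo"
describe Bar = "a bar"
describe Baz = "a baz"

-- Generic: the element type is a parameter, so the code cannot depend
-- on foo/bar/baz specifics at all -- and never needs updating when a
-- fourth case appears.
count :: [a] -> Int
count = length
```

Adding a `Quux` constructor forces a change to `describe` but leaves `count` untouched; that asymmetry is the whole difference.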

------
dkarapetyan
I think the general principle here is combinator-based approaches to program
structure. Combinators are just as easy to use in object-oriented languages as
they are in functional languages, especially languages that allow some form of
operator overloading. The following is all valid Ruby code:

    
    
      f > g
      f | g
      (f | g) >> lambda { ... }
      (f > g) >> lambda { ... }
    

Taking these ideas a little bit further, you end up with mini DSLs purpose-
built for expressing things very concisely. In fact, if you squint a little
bit, you could imagine the above code expressing some form of BNF grammar as
Ruby code, and there are several parser combinator libraries out there that do
exactly that.
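A minimal Haskell sketch of that parser-combinator idea (invented names, not any particular library): each parser is an ordinary value, and two combinators build bigger parsers out of smaller ones, much like the BNF-ish Ruby operators above.

```haskell
-- A parser consumes input and either fails or yields a result plus the rest.
type Parser a = String -> Maybe (a, String)

-- Primitive parser: match one specific character.
char :: Char -> Parser Char
char c (x : xs) | x == c = Just (c, xs)
char _ _                 = Nothing

-- Sequencing: run p, then q on the leftover input (like BNF concatenation).
andThen :: Parser a -> Parser b -> Parser (a, b)
andThen p q s = case p s of
  Nothing        -> Nothing
  Just (a, rest) -> case q rest of
    Nothing          -> Nothing
    Just (b, rest')  -> Just ((a, b), rest')

-- Alternation: try p, fall back to q (like BNF's "|").
orElse :: Parser a -> Parser a -> Parser a
orElse p q s = case p s of
  Nothing -> q s
  result  -> result
```

With infix backticks, `char 'a' `andThen` char 'b'` and `char 'x' `orElse` char 'a'` read much like grammar rules, which is the combinator payoff being described.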

I find it a little annoying that the general principles get lost behind smoke
and mirrors like monoids and monads when things are in fact much more
accessible and do not require anything other than some basic understanding of
abstract algebra. It's the algebraic approach, not the fancy static types,
that has paid the most dividends when it comes to how I structure my code for
readability and maintainability.

~~~
prutschman
The important part of what I guess you could call algebraically structured
combinators is that they must follow certain laws. A type system can certainly
help enforce at least a subset of those laws, but you're right that you don't
have to have one in order to make use of the structures.

Where you lose me is in saying both that a basic understanding of abstract
algebra is helpful and that monoids are part of the "smoke and mirrors"; I'd
consider monoids a part of a basic understanding of abstract algebra.

~~~
dkarapetyan
Poor choice of words on my part. Yes, monoids are indeed a very basic kind of
algebraic structure, but the tutorials for these structures are always in a
language like Haskell, where the actual simplicity of the concepts gets lost
behind the type system. Type systems in my mind are logical structures and
even though there is a very close relationship between algebra and logic one
doesn't always have to tie the algebraic structures to some kind of type
system to make sense of them.

~~~
prutschman
Nods. The nice part about the type system is that it can automatically reject
(some) misuses of the structures. I've found that helpful while learning some
of these things as it can let me know that I don't actually understand
something that I thought I had. I agree that emphasizing the typed aspect can
do an injustice to the other algebraic properties.

------
gregw134
If anyone wants to contribute to a similar project, I've started a concurrency
framework for Java: [https://github.com/Gregw135/Simple-Java-Concurrency-
Framewor...](https://github.com/Gregw135/Simple-Java-Concurrency-Framework)

------
briantakita
> This is one reason why you should learn Haskell: you learn how to build
> flat architectures.

I suspect that most people learn the advantages of flat architectures with
experience using any programming language. For me, I started to "get it" with
Javascript & Ruby. So Haskell hardly has a monopoly on this.

I understand that Haskell has a different take on this. It seems the author
has found a pattern in Haskell that he frequently uses.

Ideally, I can use patterns in multiple languages, so my experience can
seamlessly transfer.

This often means there's a lowest common denominator to a particular pattern.
I'm afraid that if I learn a pattern unique to Haskell, that it is not
applicable to any other language. I suspect that there are underlying & common
principles that are expressed differently with Haskell.

~~~
T-R
These are not patterns unique to Haskell. In fact, it's just the opposite:
these are mathematical abstractions, and so they can be used anywhere you have
function application, in stark contrast to the alternatives. By using these
patterns instead of Ruby- or Javascript-specific, application-specific, or
object-oriented-specific patterns, you can be more confident that your
abstractions will be consistent with each other, and as a result, more
composable.

Case in point: I'm currently developing in Javascript, but I use these
patterns (Monad, Applicative Functor, etc) to manage things like asynchronous
code and error handling, because they're far more flexible/composable and
reliable (in terms of not having side effects) than the other offerings on
hand, and I know that will always be the case, because these abstractions were
designed with the constraint of composability in mind: in fact, they are, by
definition, the minimum set of constraints required to get these emergent
properties. The benefit that Haskell itself offers is that it can check at
compile time that these constraints hold true.
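For a flavor of what that error-handling pattern looks like with the constraints compiler-checked, here is a Haskell sketch (hypothetical functions, not from any library; the JavaScript versions of the same pattern work identically minus the type checker):

```haskell
-- Either-based error handling: each step may fail with a message, and the
-- Monad instance for Either threads the first failure through automatically.
parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] | n >= 0 -> Right n
  _                  -> Left ("bad age: " ++ s)

classify :: Int -> Either String String
classify n
  | n < 18    = Right "minor"
  | otherwise = Right "adult"

-- (>>=) composes the steps; no manual "if error, bail out" plumbing anywhere.
pipeline :: String -> Either String String
pipeline s = parseAge s >>= classify
</imports-placeholder>
```

The composability claim is visible in `pipeline`: adding a third validation step is one more `>>=`, with no changes to the existing steps.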

~~~
briantakita
> I'm currently developing in Javascript, but I use these patterns (Monad,
> Applicative Functor, etc) to manage things like asynchronous code and error
> handling

Thank you for this comment. I'm interested in knowing which fundamental
patterns Haskell emphasizes.

Unfortunately, time is limited & currently, I'm not able to allocate enough to
learn & use Haskell in a meaningful project. I recognize that there are some
important abstractions & I'd love to learn all of the useful abstractions that
are applicable to other languages & environments.

