
A Modern Architecture for FP - buffyoda
http://degoes.net/articles/modern-fp/
======
skrebbel
The people in the horrible, horrible world of enterprisey OO programming call
this Aspect-Oriented Programming. It's often accomplished with postcompilers
or source transformations and whatnot, because the languages used aren't
Haskell, but it boils down to roughly the same thing.

AOP never really caught on because the advantages (keep logging and error
handling separate from the business logic) were too small for the extra mess
it added. Notably: difficult debugging, plus really weird COMEFROM-style
behavior[0] if some well-meaning colleague added a little too much magic to
the "aspects" (e.g. the automatically mixed-in logging code). The function
suddenly does stuff that you never programmed. Good luck hunting that one
down!

I strongly suspect this style of programming has similar downsides. Anyone
tried it in large projects?

[0]
[https://en.wikipedia.org/wiki/COMEFROM](https://en.wikipedia.org/wiki/COMEFROM)
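For comparison, the free-monad-adjacent version of this separation keeps the "aspect" in the interpreter rather than in a postcompiler. Here is a minimal, hypothetical Haskell sketch (the `Cmd` type and all names are made up for illustration): the program is plain data, and logging is woven in by wrapping the interpreter.

```haskell
-- A tiny instruction set; the "program" is just data.
data Cmd = Deposit Int | Withdraw Int
  deriving Show

-- The base interpreter: pure business behavior, no logging.
run :: Cmd -> IO ()
run (Deposit n)  = putStrLn ("balance += " ++ show n)
run (Withdraw n) = putStrLn ("balance -= " ++ show n)

-- The "aspect": wraps any interpreter with logging, without
-- touching the program or the base interpreter.
withLogging :: (Cmd -> IO ()) -> Cmd -> IO ()
withLogging interp cmd = do
  putStrLn ("LOG: " ++ show cmd)
  interp cmd

main :: IO ()
main = mapM_ (withLogging run) [Deposit 10, Withdraw 3]
```

Note that the COMEFROM risk described above shows up here too: nothing in the program's own text says that logging happens.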

~~~
marcosdumay
I have some monads from when I was less experienced that are screaming to be
turned into that. I'm keeping them on my to-do list for now, but it would be a
gain. (And I have a logger to write, which I will probably adapt to something
like this.)

The main downside is complexity. It gets hard to reason about fast. As types
depend on more things and grow bigger, error messages become hard to read,
parentheses abound, and people really start switching off. Type synonyms help
here, but are not a panacea. (A similar problem applies to your "hard to
debug" complaint.)

The COMEFROM style is mainstream in FP, so people are used to and expect this
kind of thing. That means your functions had better support doing stuff you
never told them to do, because they will, you know it, and you'll adapt
accordingly. It's normally not a problem; notice the "normally" there.

------
eru
Free monads are very much `initial'. Compare the final approach:
[http://okmij.org/ftp/tagless-final/course/](http://okmij.org/ftp/tagless-final/course/)

~~~
tel
Note that there's a bit of a pun here. "Finally" tagless isn't really any less
initial than free monads are. A truly final encoding would give us something
codata-like, not just something described via a typeclass. That said, both are
still "initial" as in "initial algebra".

~~~
eru
Interesting. Could you expand on that, please?

~~~
tel
"Initial" usually means "the initial algebra over a functor". This means that
for some functor f the algebra `i : f (Initial f) -> Initial f` is initial in
the category of f algebras. This means that for any X and `g : f X -> X`
there's a universal function `universal g : Initial f -> X` such that `g .
fmap (universal g) = universal g . i` at type `f (Initial f) -> X`. Or, we
have a function

    
    
        universal : (f x -> x) -> (Initial f -> x)
    

and we can actually define Initial in this way

    
    
        universal' : Initial f -> (f x -> x) -> x
    
        newtype Initial f = Initial { universal' :: forall x . (f x -> x) -> x }
    

In "strict Haskell" where we can act only finitely, we construct values of
`Initial f` only by slapping finitely many layers of `f` on top of one
another. For instance, when `f` is `data ConsF x = NilF | ConsF Int x` we can
make a list [1, 2, 3] like

    
    
        Initial $ \join ->
          let nil      = join NilF
              cons a x = join (ConsF a x)
          in cons 1 (cons 2 (cons 3 nil))
    

In other words, Initial things are described by their construction.

Clearly, this relates directly to initial objects constructed via, e.g., Free,
since it does roughly the same thing. Free emphasizes the notion of layering
things atop one another. In Lazy Haskell we can still use this layering to
construct non-initial objects (more to come below) but if Free were
transported to Strict Haskell it would clearly only construct initial things.
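To make the construction concrete, here is a small runnable sketch (the names `sumAlg` and `oneTwoThree` are mine, not from the thread) that folds a value of `Initial ConsF` with an algebra:

```haskell
{-# LANGUAGE RankNTypes #-}

data ConsF x = NilF | ConsF Int x

newtype Initial f = Initial { universal' :: forall x. (f x -> x) -> x }

-- An algebra ConsF Int -> Int that sums the list
sumAlg :: ConsF Int -> Int
sumAlg NilF        = 0
sumAlg (ConsF n s) = n + s

-- The list [1, 2, 3], built by finitely many layers of ConsF
oneTwoThree :: Initial ConsF
oneTwoThree = Initial $ \join ->
  let nil      = join NilF
      cons a x = join (ConsF a x)
  in cons 1 (cons 2 (cons 3 nil))

main :: IO ()
main = print (universal' oneTwoThree sumAlg)  -- prints 6
```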

---

So what about Finally Tagless?

    
    
        class List l where
          nil :: l
          cons :: Int -> l -> l
    

We're still going to be constructing values of `List l => l` by application of
finite layers! If anything, we're more stuck to this process now.

    
    
        cons 1 (cons 2 (cons 3 nil)) :: List l => l
    

If we swap this out for explicit dictionary passing we can see that we're
missing an argument like

    
    
        data ListD l = ListD { nil :: l, cons :: Int -> l -> l }
    
        \d -> cons d 1 (cons d 2 (cons d 3 (nil d))) :: ListD l -> l
    

and also that `ListD l` is equivalent to `ConsF l -> l`. It's really the same
thing as the free method and is again operating initially.
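The payoff of the typeclass encoding is that one term can be run through many interpreters just by choosing a type. A runnable sketch (the `Int` and `[Int]` instances are my own illustration):

```haskell
{-# LANGUAGE FlexibleInstances #-}

class List l where
  nil  :: l
  cons :: Int -> l -> l

-- Interpretation 1: sum the elements
instance List Int where
  nil  = 0
  cons = (+)

-- Interpretation 2: build a concrete list
instance List [Int] where
  nil  = []
  cons = (:)

-- One term, polymorphic in its interpretation
term :: List l => l
term = cons 1 (cons 2 (cons 3 nil))

main :: IO ()
main = do
  print (term :: Int)    -- prints 6
  print (term :: [Int])  -- prints [1,2,3]
```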

---

So what does it take to make something "final"? A final coalgebra of `f` would
be an object `i : Final f -> f (Final f)` such that for any X and coalgebra `g
: X -> f X` we have `universal g : X -> Final f` such that `i . universal g =
fmap (universal g) . g` at type `X -> f (Final f)`. Or,

    
    
         universal : (x -> f x) -> x -> Final f
    

which again can be used to define `Final f`

    
    
        data Final f where
          Final :: (x, x -> f x) -> Final f
    

No longer can we define things by how they are constructed; now we must define
them by how they are _viewed_. This opens the door to new kinds of
structures, even in "Strict Haskell"

    
    
        natsFrom :: Int -> Final ConsF
        natsFrom n0 = Final (n0, \n -> ConsF n (n + 1))
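A `Final` value can only be consumed by unfolding it step by step. Here is a runnable sketch (the `takeF` observer is mine) that views the first few naturals:

```haskell
{-# LANGUAGE GADTs #-}

data ConsF x = NilF | ConsF Int x

data Final f where
  Final :: (x, x -> f x) -> Final f

-- An infinite stream of naturals, defined by how it is viewed
natsFrom :: Int -> Final ConsF
natsFrom n0 = Final (n0, \n -> ConsF n (n + 1))

-- Observe the first k elements by repeatedly unfolding the seed
takeF :: Int -> Final ConsF -> [Int]
takeF k (Final (s0, step)) = go k s0
  where
    go 0 _ = []
    go m s = case step s of
      NilF       -> []
      ConsF n s' -> n : go (m - 1) s'

main :: IO ()
main = print (takeF 5 (natsFrom 0))  -- prints [0,1,2,3,4]
```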

~~~
eru
Thanks!

------
michaelbjames
I like the ideas in the article. An abstraction over IO, and being able to
distinguish between inherently serial operations and parallel ones at a
different level, are good ideas. But I do not think this article exemplifies
those benefits. Instead, it spends most of its time implementing a toy DSL. I
want to see what Free can do that IO cannot (functionally or stylistically).

------
ch
What is really cool about this approach is that, unlike the counterpart design
patterns you might find in an imperative OO language (such as the Interpreter
and Command patterns), in strongly, statically typed functional languages we
can rely on the guarantees the type system gives us for abstractions such as
the Free Applicative, allowing optimizations that would be merely convention
in a less type-enforcing type system.

~~~
eru
Some `design patterns' are expressible even in very weak type systems.

For example, array of tuples vs tuple of arrays. In pseudo-Haskell notation:

    :: ([a], [b], [c])

    :: [(a, b, c)]

The second option enforces that you have the same number of a's, b's, and c's.
It can also be faster to access in, say, C.

(I wonder whether you can do a similar, but more complicated, analysis for
internal vs external linked lists and other containers.)
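In Haskell the two layouts are interconvertible with `zip3`/`unzip3`, and the conversion makes the length-enforcement point visible: going from tuple-of-lists to list-of-tuples, `zip3` silently truncates to the shortest list. A small sketch:

```haskell
-- list of tuples -> tuple of lists (equal lengths guaranteed by input)
soa :: ([Int], [Char], [Bool])
soa = unzip3 [(1, 'a', True), (2, 'b', False)]

-- tuple of lists -> list of tuples; zip3 truncates to the shortest
aos :: [(Int, Char, Bool)]
aos = zip3 [1, 2, 3] "ab" [True, False]

main :: IO ()
main = do
  print soa  -- prints ([1,2],"ab",[True,False])
  print aos  -- prints [(1,'a',True),(2,'b',False)]
```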

------
twic
_While ideal from a theoretical perspective, as a practical matter, can you
imagine renaming a 10 GB file by first creating a copy of it, and then
deleting the old version?!?_

 _That kind of inefficiency is not practical for most real world programs!_

 _To solve this problem, we can write an optimizing interpreter, which is a
special kind of interpreter that can detect patterns and substitute them with
semantically equivalent but faster alternatives._

But why?

------
anentropic
"There are problems here I haven’t talked about, and most languages don’t make
this particular style of programming very easy or performant."

...hmm, that's a rather big caveat arriving at the end of the article

------
vinceguidry
HtDP changed the way I look at programming. But today I'm a diehard OOP
fanatic. I want code that reflects the way I think, not code I have to work to
understand.

I did not know but am not surprised to find that people write procedural
Haskell. Procedural is how we all start out thinking about things. You do A,
then B, to reach goal C. It takes time and experience and deliberate practice
to improve upon that way of solving problems.

Functional moves the code in the direction of math. OOP moves it in the
direction of the domain. Math is harder to understand than domain logic, so
you will inevitably hack together an object system on top of your FP in order
to implement domain logic.

Both styles have properties that work better for certain domain concepts.
What's nice about modern programming languages is that they build in
primitives so you can use whichever style fits the concept you're fleshing
out.

But trying to do everything functionally is ultimately counter-productive, in
my opinion. It's the wrong format to declare high-level domain logic in,
because high-level domain logic is that which is closest to human thought, not
the underlying math.

Any FP 'architecture' will pretty much be object oriented. Math may treat
state as ugly cruft, but humans need to group information close to where it's
needed and in ways that make sense, and that means state. Don't let the quest
for mathematical beauty get in the way of solving your problem.

~~~
khgvljhkb
Most bugs I make are about state being out of sync - in fact every bug that is
not because I misunderstood a problem is because state is out of sync.

In OOP programs, every object usually has its own state, and it is updated
all the time. If I can describe the world as one big hash-map, and my whole
program can be while(true){writeToScreen(render(appState))}, then I'm very
happy.

Haskell, which you seem to dislike, has a lot of syntax and some new concepts
you need to pick up before you can get going. I recommend you try out
something extremely simple, like Clojure.

In Clojure there are not many things you need to learn before being dangerous.
The data structures are lists '(), vectors [], hash-maps {} and sets #{}.
There are functions (fn [] ) and macros, but you don't need them.

When coding functionally you can be sure you don't fuck something up by
changing some state somewhere. And you don't need to mock data. And you don't
need to do ORM. It's freaking great.

~~~
vinceguidry
But I don't need Haskell or Clojure or anything like that to solve my
problems. Ruby works just fine for me. I have lists, vectors, hash-maps and
sets in Ruby. I can data pipeline simply and idiomatically. I don't work with
math-heavy domains, so I don't need heavy math (type system) to write my
programs.

When I have bugs in my code it's because I misunderstood the problem domain.
State is missing somewhere, state that was unaccounted for and so is running
loose in the code. The process of fixing the bug also illuminates what I was
missing about the domain. Working on a program is the process of tightening
the code around the domain.

> If I can describe the world as one big hash-map, and my whole program can be
> while(true){writeToScreen(render(appState))}, then I'm very happy.

I want my code to be flexible, and I only want to represent concepts once. I
want to be able to interact with it on the command line, on the web, as an
API. To do this I need to manage all the different ways other systems can get
at the domain logic because otherwise I'm reimplementing parts of the system
inside other systems. My program needs to be self-contained.

The best way I've seen to write and manage this kind of flexible code is with
dynamic typing and OOP. Dynamic typing isn't a necessity, but it is a big
help. OOP just makes everything sane.

~~~
dllthomas
_" I don't work with math-heavy domains, so I don't need heavy math (type
system) to write my programs."_

I don't think you know what you're missing.

Type systems can be used to enforce pretty arbitrary things. Representation of
data can be handled fairly well automatically, so types describing
representation are only marginally useful (mostly where we care about
interchange or care a _lot_ about performance). However, if you make your
types domain relevant, they can help you with domain relevant things. This can
be simple - "I don't want to pass a bid price where I expected an ask price" -
or it can be surprisingly sophisticated; I was able to ensure that certain
actions were only taken on the correct threads, checked at compile time, in
_C_ - this helped me _tremendously_ in refactoring when I discovered some
piece of logic needed to live somewhere else. I really can't imagine doing
that work without type checks, and it wasn't the slightest bit "math heavy".
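The bid/ask example can be made concrete with zero-cost newtypes; a minimal sketch (all names are mine):

```haskell
-- Newtypes make "passed a bid where an ask was expected" a type error
newtype Bid = Bid Double
newtype Ask = Ask Double

spread :: Bid -> Ask -> Double
spread (Bid b) (Ask a) = a - b

main :: IO ()
main = print (spread (Bid 99.5) (Ask 100.0))  -- prints 0.5
-- spread (Ask 100.0) (Bid 99.5) is rejected at compile time
```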

------
darawk
This is what I'm doing in JavaScript over in redux-effects:
[https://www.github.com/redux-effects/redux-effects](https://www.github.com/redux-effects/redux-effects)

------
pierrebeaucamp
I'm still new to FP, but I like how PureScript solved this problem by having
an Effect Monad:
[http://www.purescript.org/learn/eff/](http://www.purescript.org/learn/eff/)

------
tel
I program like this all the time in Haskell. Much of the latter bits can be
replaced by so-called "mtl-style" typeclasses (used similarly to Oleg's
Finally Tagless idea). I gave a brief talk on this at last year's LambdaConf
but didn't really give it sufficient space, I suppose. I think the whole
Prism/Inject machinery can be completely eliminated in this way.
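What "mtl-style" means here, roughly: instead of a Free functor plus Prism/Inject machinery, each effect becomes a typeclass, and interpreters become instances. A hedged sketch (the class and all names are illustrative, not from the talk):

```haskell
-- An effect expressed as a capability typeclass, mtl-style
class Monad m => MonadLog m where
  logMsg :: String -> m ()

-- One interpretation: run logs in IO
instance MonadLog IO where
  logMsg = putStrLn

-- Programs stay polymorphic in their interpreter
program :: MonadLog m => m ()
program = do
  logMsg "starting"
  logMsg "done"

main :: IO ()
main = program  -- choosing IO picks the IO interpretation
```

A test interpretation (e.g. collecting messages purely) would just be another instance, with no change to `program`.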

------
js8
I am not sure you can get rid of the IO monad. It's how Haskell programs
interact with the outside world, and I think IO is a primitive that you can't
build from other things.

But in metaphorical (architectural) sense, yeah, you can probably get rid of
it and use different monadic datatypes for different parts of the program that
access different pieces in the world.

However, I am not clear on how you then combine/serialize those different
monads. It seems you will eventually get something like the IO monad again. I
am not sure, but I think there needs to be some master monad in all Haskell
programs that (potentially) serializes all the actions to the outside world.

~~~
steveklabnik
Haskell had a different system for IO before the IO monad existed, so it's not
_strictly_ necessary.

~~~
groovy2shoes
Yeah, but that system sucked. You essentially sent commands to an external
interpreter to do I/O. Simon Peyton Jones calls it "embarrassing" in _Coders
At Work_.

------
neogodless
Is the audience of this article only people that think about Functional
Programming (as a philosophy) often enough to refer to it as FP?

~~~
neogodless
Downvoted, and yet - is it not wise to start with a headline that is clear
enough to attract a broad audience? If the article really only wants to
attract a smaller slice, that's fine, but I asked a valid question (in my
opinion).

------
dvh
A modern architecture for functional programmers

I don't understand. This was posted yesterday
([https://news.ycombinator.com/item?id=10806075](https://news.ycombinator.com/item?id=10806075))
and you didn't even change the abbreviation that nobody outside the domain
could possibly know.

~~~
detaro
From the FAQ:

 _Are reposts ok?

If a story has had significant attention in the last year or so, we kill
reposts as duplicates. If not, a small number of reposts is ok._

~~~
dvh
I have no problem with reposts; I have a problem with using abbreviations that
nobody outside the domain can possibly know, and then using the same
unexplained abbreviation in the repost.

The normal thing to do would be to understand that not everybody knows what
"FP" is (Free Pascal? floating point?) and use the full term in a "corrected"
repost. Perhaps the original post didn't get any attention because people
don't care about floating point.

Abbreviations are useful when you've mentioned a term 100 times and it gets
annoying, but you don't mention FP in the title 100 times, do you? There
should be a ban on abbreviations in HN titles until they are well known (CPU,
RAM, MB, ...).

A title should have enough information that if you stopped a random stranger
on the street, they could understand or guess what it is about. The first
thing a rubber duck
([https://en.wikipedia.org/wiki/Rubber_duck_debugging](https://en.wikipedia.org/wiki/Rubber_duck_debugging))
would ask is: "what is FP?", and that means the title is wrong and deserves
zero comments.

Edit: another thing - a title is written only once but read hundreds of
thousands of times; you are literally wasting man-hours here.

