
The Essence of FRP [video] - begriffs
http://begriffs.com/posts/2015-07-22-essence-of-frp.html?hn=2
======
kazagistar
I was hoping for more insight into why his method is better, but it seemed
to focus heavily on just two points... (1) because that's how I defined it,
and any other way is wrong, and (2) because I find the abstraction more
elegant. Neither one was very convincing, though there was a hint of
something in the second.

~~~
tikhonj
Well, there are two parts to it. Here's my take on it, influenced heavily by
talking to Conal about it.

The importance of nice denotational semantics is that they _fully define_ the
abstraction and make it easy to think about. In the end, all it means is that
you have a simple but complete mental model for what the abstraction _is
supposed to be_ that you can manipulate in your head. Since the model is
formal, there's no fuzziness and no room for implementation-defined behavior.

The description is _crisp_ and _complete_. There's no room for uncertainty:
any question about how the abstraction is supposed to behave can be answered
by manipulating well-understood objects—plain functions or pairs in this case.
It's easier to reason—even informally—about well-defined mathematical objects
than about vague ideas written out in prose.
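As a minimal sketch (assumed names, not code from the talk), the classic semantic model really is just plain functions and pairs:

```python
# Sketch of the classic FRP semantic model: a Behavior is a function
# of (continuous) time, and an Event is a time-ordered list of
# (time, value) occurrence pairs. Names here are illustrative.
from typing import Callable, TypeVar

A = TypeVar("A")
Time = float
Behavior = Callable[[Time], A]   # Behavior a  =  Time -> a

def constant(x: A) -> Behavior:
    """The behavior whose value is x at every time."""
    return lambda t: x

def time() -> Behavior:
    """The identity behavior: its value at time t is t itself."""
    return lambda t: t

# Any question about how a behavior acts is answered by applying it:
position = lambda t: 3.0 * t + 1.0   # a Behavior of floats
print(position(2.0))                 # -> 7.0
```

Because the model is just function application, there is nowhere for implementation-defined behavior to hide.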

Personally, I wish all my abstractions were like that. A tall order!

The importance of continuous time in the semantics is a bit more subtle. It's
one of the things that has made implementing general-purpose FRP efficiently
tricky.

Why is continuous time useful? Composability.

Think about it as the difference between vector graphics and raster graphics.
We can combine vector graphics from different sources at different scales
without losing any information—with raster graphics, things would get blurry.
We can transform vector graphics in arbitrary ways without losing information.
We can manipulate parts in all sorts of different ways without losing any
precision at the seams.

We only have to approximate at the very end when we render to a physical
screen and we have complete control over _how_ we approximate: things like
anti-aliasing only get applied at the very end and are never baked into the
actual data.

The goal with continuous time for reactive programs is to make them compose
and transform just like vector graphics components. You should be able to
define behaviors in different places and combine them without worrying about
how their sampling rates interact. (Which is _really difficult_ without
continuous time!)
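As a rough illustration (hypothetical helper names), with behaviors as functions of continuous time, combining two independently defined behaviors requires no agreement on sampling rates; approximation happens only when we finally sample:

```python
# Sketch: pointwise combination of two behaviors defined in different
# places, with no notion of a sampling rate anywhere. Names are
# illustrative, not from the talk.
from typing import Callable

Time = float

def lift2(f: Callable[[float, float], float],
          b1: Callable[[Time], float],
          b2: Callable[[Time], float]) -> Callable[[Time], float]:
    """Combine two behaviors pointwise in time."""
    return lambda t: f(b1(t), b2(t))

# Two behaviors from different "sources", defined without any rate:
sawtooth = lambda t: t % 1.0   # repeats every "second"
ramp     = lambda t: 0.5 * t   # a linear ramp

combined = lift2(lambda x, y: x + y, sawtooth, ramp)

# Sampling only happens at the very end, e.g. when rendering frames:
frame_values = [combined(frame / 60.0) for frame in range(3)]
```

The combined behavior is itself exact; only `frame_values` involves a choice of rate, and that choice stays at the edge of the system.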

You should also be able to take an arbitrary component of your reactive
program and transform it, like changing the tweening function on animations.
For example, it would be great to add quadratic easing to animations in your
UI _without_ having to modify the animation code directly. With continuous
time, you could just apply the easing function to the existing behavior
without worrying about jerky movements or sampling.
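One way to picture this (a sketch with assumed names, not the talk's code): since a behavior is a function of continuous time, easing is just composition with a time-warping function, with no resampling and no frame rate baked in:

```python
# Sketch: applying quadratic easing to an existing animation behavior
# by warping its time argument. All names here are illustrative.
from typing import Callable

Time = float

def ease_quadratic(t: Time) -> Time:
    """Quadratic ease-in on normalized time in [0, 1]."""
    return t * t

def with_easing(behavior: Callable[[Time], float],
                easing: Callable[[Time], Time]) -> Callable[[Time], float]:
    """Warp a behavior's time axis without touching the behavior itself."""
    return lambda t: behavior(easing(t))

# An existing animation, defined elsewhere and left unmodified:
slide = lambda t: 100.0 * t   # position goes 0 -> 100 over t in [0, 1]

eased_slide = with_easing(slide, ease_quadratic)
```

The original `slide` code never changes; the eased version is a new first-class behavior, which is the composability the comment is describing.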

To me, this whole idea is incredible. You'd be able to manipulate interactive
components of your UI directly: the behavior of your code over time would be a
completely first-class citizen. And you wouldn't have to worry about sampling
issues _from your system_, which I've found difficult in the past. (I had a
really hard time implementing drag and drop in JavaScript in part because the
way mousemove events fire as the mouse is moving is inherently confusing.)

Of course, as I mentioned, it's unclear how to implement this efficiently
right now. But is it fundamentally impossible? I don't think so. We've managed
to make a whole bunch of seemingly absurd abstractions efficient in the past:
virtual memory, garbage collection, even procedure calls. Why not
arbitrary-precision numbers, followed by continuous-time FRP?

------
michaelochurch
This was a great talk. Thanks for posting it.

------
vvanders
I still need to watch the video, but it seems like poor phrasing to say Elm,
etc. "misapplied" FRP.

Elm even makes a point in their docs that they don't adhere to a strict
definition of FRP:

[http://elm-lang.org/docs](http://elm-lang.org/docs)

~~~
rubiquity
Agreed. I bet the author thinks this way solely because Elm is strict rather
than lazy. So what? Strict is much, much simpler.

~~~
ericssmith
No. The author is the originator of FRP and has had a clear definition of it
for almost two decades that included a written denotational semantics and
modeling of continuous time. FRP has been confusingly redefined by a string of
people in the last few years. They could've chosen other labels.

~~~
vvanders
The thing is, Elm doesn't say it is FRP, just that it borrows ideas from
it.

~~~
vvanders
Since I can't reply directly: that paper references classical FRP and
explains how Elm differs.

Anyway, my meta-point still stands: that tone is a poor way to foster
community and collaboration.

~~~
rubiquity
Not only that, but purist definitions help nothing at all. I don't care
whether FRP is well defined. If the definition of FRP doesn't make my
programming experience better but FRP with some adaptation does, I'll take
the latter every time.

~~~
dang
You guys are being too dismissive. If the creator of a seminal concept wants
to explain precisely what he meant by it, it behooves us to listen.

This thread should be about the substance of what he says in the talk, not
perceived slights to other projects which we're all still free to use.

~~~
rubiquity
I don't take issue with Conal Elliott taking the time to explain what he meant
by it. I very much appreciate that! My issue is with the poster above that
puts down languages with FRP concepts because they aren't "The FRP, The Whole
FRP and Nothing But The FRP."

I love the FP community, but if there's one thing everyone as a whole could
learn, it's that purist beliefs do no good for FP adoption and only
reinforce the "functional programmers are condescending" sentiment.

