The term was, to my understanding, coined by Conal Elliott shortly after his work on Fran. To his mind, FRP means describing synchronous, continuous signals in a language whose denotational semantics clearly capture those signals. "Events" are included in the denotational model in order to handle signals that change "infinitely quickly".
It turns out that these programs are usually executed using synchronous (or even asynchronous) event-passing networks, but that should be considered purely an implementation detail. Further, implementation-level events are not the same thing as FRP "Events".
Typically, to my understanding, things in the "reactive extensions" family are not FRP, nor do they intend to be. They are influenced by Elliott's work, I'm sure, but they tend to describe effectful, asynchronous event-passing networks directly. Erik Meijer has a talk about this.
So it is certainly reasonable to describe "reactive programming" using this mental model, although it turns out that "reactive programming" is sometimes avoided as a term. I think the reason is two-fold: (1) "functional" is kind of sexy today, and (2) "reactive programming" has become so diluted as a term that it's difficult to tell what anyone is talking about anymore.
I think the first reason is compelling if a bit cheap, but the second should be a cautionary tale against misusing the term "FRP". If it goes the way of "reactive", we will have lost even more of our ability to speak to one another precisely.
This is why I appreciated Evan's talk in attaching labels to different portions of the space.
I watched it just last night and found it very thought-provoking (so much so that I stole some of the concepts to create transduce-async this morning).
Maybe, if you drop the F. For clarification on FRP, there was a great talk at Strange Loop this year:
Glitches can appear when, for example, two inputs of some "functional box" change at almost the same time. When input 1 changes, the output changes, and then when input 2 changes, the output changes again, resulting in lots of superfluous recomputation in the chain downstream of that output.
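To make this concrete, here is a minimal sketch in Python (all names invented) of a naive push-based network where `c = a + b`. A logically simultaneous update of both inputs fires the downstream computation twice, and the transient intermediate value is the glitch:

```python
# Minimal sketch of a glitchy push-based network (hypothetical names).
# c depends on both a and b; every input change eagerly pushes downstream.

class Cell:
    def __init__(self, value):
        self.value = value
        self.observers = []

    def set(self, value):
        self.value = value
        for fn in self.observers:
            fn()  # eager propagation: no batching, no scheduling

a = Cell(0)
b = Cell(0)
history = []  # every value c takes on, including transient ones

def recompute_c():
    history.append(a.value + b.value)

a.observers.append(recompute_c)
b.observers.append(recompute_c)

# One logically simultaneous update, (a, b) := (1, 1), arrives as two pushes:
a.set(1)
b.set(1)

print(history)  # [1, 2] -- the transient 1 is the glitch, and everything
                # downstream of c was recomputed twice
```

A glitch-free implementation would instead batch the two pushes into one propagation step (e.g. by topologically sorting the dependency graph), so `c` would only ever be observed as 2.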
The nice part about reactive programming is supposed to be that things get recomputed incrementally, but in practice this often doesn't happen!
There is nothing in reactive programming that implies being incremental, but Glitch is incremental anyway (in the sense that tasks whose dependencies haven't changed aren't replayed). On the other hand, Glitch might replay tasks more often than is optimal.
Yes, there is: if it weren't incremental, you could replace any "reactive" program with one that simply recomputes its outputs from scratch every time something changes. Reactive programming solves this by recomputing only the parts that change.
PS: Thanks for the link, I'll look into it!
I mean, it seems like reactive computations should be incremental, yeah, but you won't find many (any?) models that deliver on that. On the other hand, you have plenty of incremental models that aren't reactive (e.g. self-adjusting computation). Glitch is designed to be reactive and incremental, so it does "replay when something changes" and is able to suppress change propagation if/when a change fizzles out.
I find that to be completely desirable. That behavior is exactly what I want... simply done more efficiently.
Also modulo a whole bunch of local state. AFRP handles this well by having a good notion of what it means to "switch an arrow in", but it's been a challenge for applicative/monadic FRP. Rx programming (i.e. not FRP) tends to solve this problem by ignoring its existence and just littering local state everywhere.
It isn't clear to me that arrowized FRP is incremental. I think some of the less pure FRPs are (like Flapjax) given that they do a topological sort of the signal graph to repair it (if they were going to redo everything, the sort wouldn't be necessary).
AFRP is a good example here. AFRP semantics are easily stated in terms of wholesale recomputation. That makes it necessary to talk about causal AFRP and bounded-history AFRP, which are nice terms for thinking about a computation (if somewhat obvious ones). The efficient implementations (of causal AFRP) are then themselves incremental.
-- a causal stream function: consumes one input, yields one output
-- plus the continuation to use for the next input
data (i ~> o) = A (i -> (o, i ~> o))

-- an equivalent formulation with explicit (existential) local state;
-- note the constructor must also carry the initial state
data (i ~> o) = forall s. A s (i -> s -> (o, s))
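As a rough illustration of the first encoding (a causal stream function that returns its output together with its own continuation), here is a hypothetical Python transliteration; the state carried forward from step to step is exactly where the incrementality lives:

```python
# Sketch of `data (i ~> o) = A (i -> (o, i ~> o))` in Python: a step
# function maps one input to (output, step function for the next input).

def running_sum(total=0):
    """A tiny signal function: outputs the sum of all inputs seen so far."""
    def step(i):
        new_total = total + i
        return new_total, running_sum(new_total)
    return step

def run(sf, inputs):
    """Drive a stream function over a list of inputs, one step at a time."""
    outputs = []
    for i in inputs:
        o, sf = sf(i)  # only this step's local state is touched
        outputs.append(o)
    return outputs

print(run(running_sum(), [1, 2, 3]))  # [1, 3, 6]
```

Causality is baked into the shape: each output can depend only on the inputs seen so far, and nothing is recomputed from the beginning of time.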
If you're referring to the ability for a new event to update the signal network only partially (in Elliott's terminology, "push" semantics) then there's Amsden's TimeFlies library.
2) If you need a push model, couldn't you just use .throttle / .debounce? I feel like FRP has plenty of tools to tackle this problem.
If your worry about recomputation is about efficiency, then admittedly FRP is probably not your best choice of paradigm. FRP consistently chooses reduced code complexity at the expense of efficiency (with the assertion that mutable state / imperative code is inherently more complex in a complex system than .flatMap.throttle.etc, which is obviously debatable).
;; streams are like lists, but with a lazily-forced tail
(define-syntax scons
  (syntax-rules ()
    ((_ x expr) (cons x (delay expr)))))

(define scar car)
(define (scdr xs) (force (cdr xs)))
(define snull? null?)

(define (sfilter f xs)
  (cond ((snull? xs) '())
        ((f (scar xs)) (scons (scar xs)
                              (sfilter f (scdr xs))))
        (else (sfilter f (scdr xs)))))

(define (smap f xs)
  (if (snull? xs)
      '()
      (scons (f (scar xs))
             (smap f (scdr xs)))))
I think you are absolutely right. One of the popular examples of where RP is useful is in implementing a spreadsheet program.
> "FRP is programming with asynchronous data streams."
So, is Akka reactive or not?
I would try to separate dataflow from the FRP style that reifies the time step into first-class values, e.g. with "signals" or "behaviors", and from the "arrowized" forms of FRP.
One of the interesting aspects of FBP and many dataflow systems is that they can also be defined as a schema for actor-model messaging. That is, you still have the general actor model as the low-level abstraction, but you assume actors to be archetypal and immutable between runs of the network, existing only to implement a finite, typed, addressable set of "in" and "out" queues on each actor instance; those queues are then wired together into the final network. No explicit model of time is used: "all messages delivered" is the standard termination state. But if one breaks the Morrison model by adding local actor state back, it becomes trivial to defer message delivery across runs of the network.
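As a toy sketch of that schema (in Python, with all names invented): stateless, archetypal processes expose named ports and are wired into a network that runs until all messages have been delivered:

```python
# Hypothetical FBP-as-actor-schema sketch: processes are just step
# functions over (port, message); all wiring lives in the network.

from collections import deque

class Network:
    def __init__(self):
        self.queues = {}     # (process, port) -> deque of messages
        self.wires = {}      # (src, out_port) -> (dst, in_port)
        self.processes = {}  # name -> step function

    def add(self, name, step):
        self.processes[name] = step

    def connect(self, src, out_port, dst, in_port):
        self.wires[(src, out_port)] = (dst, in_port)

    def send(self, proc, port, msg):
        self.queues.setdefault((proc, port), deque()).append(msg)

    def run(self):
        # "all messages delivered" is the termination state
        progressed = True
        while progressed:
            progressed = False
            for (proc, port), q in list(self.queues.items()):
                while q:
                    msg = q.popleft()
                    # a step returns zero or more (out_port, message) pairs
                    for out_port, out_msg in self.processes[proc](port, msg):
                        progressed = True
                        dst = self.wires.get((proc, out_port))
                        if dst:
                            self.send(dst[0], dst[1], out_msg)

results = []
net = Network()
net.add("double", lambda port, x: [("out", x * 2)])
net.add("sink", lambda port, x: results.append(x) or [])
net.connect("double", "out", "sink", "in")
net.send("double", "in", 21)
net.run()
print(results)  # [42]
```

Adding per-process state that survives `run` calls is the "break" mentioned above: it would let a process hold a message back and re-emit it on a later run of the network.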