
A General Theory of Reactivity - yuchi
https://github.com/kriskowal/gtor
======
pron
Aside from hijacking the term reactive programming[1] (though it does give it
a mention), this text repeats another mistake the _Reactive Manifesto_ makes,
namely confusing intent (or abstraction) with implementation.

Even using the (quite unclear) hijacked definition of "reactive", there is
_absolutely nothing_ tying the goal of responsive programs to concepts such as
asynchronous APIs, promises and observables. As an example, Erlang (and later
Go) has been used to write "reactive" software with neither a hint nor a
mention of callbacks, promises or observables (at least for the purposes
outlined in this text).

Those concepts are mere implementation tricks designed to circumvent blocking
in environments that either don't have support for multi-threading at all
(which makes blocking impossible if concurrency is to be maintained), or those
where threads are necessarily kernel threads (where blocking entails a
significant overhead). In fact, the whole notion of non-blocking is entirely
accidental to the discussion of responsive/concurrent applications. Where
lightweight threads are available (and, therefore, blocking is free), those
design patterns are completely unnecessary for writing responsive software,
and, in fact, make it quite cumbersome.

Callbacks, promises and observables obliterate stack traces, make concurrency
opaque, require explicit back-pressure as well as a complete "shadow"
implementation of error handling and control structures beyond those provided
by the language. They would all be considered anti-patterns[2] if they weren't
a necessary last-resort workaround to the problem of expensive blocking (which
happens to be not too hard to fix by other means).
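To make the contrast concrete, here is a minimal sketch in Python (all names are invented for the example) of the two styles the paragraph above compares. The callback version must re-encode sequencing and error handling by hand; the blocking version gets both from the language, which is cheap wherever threads are lightweight.

```python
import threading

# Stand-in for a blocking I/O call; in real code this would hit the network.
def read_page(url):
    return f"<contents of {url}>"

# Callback style: success and failure paths must be threaded through by
# hand, and a failure's stack trace no longer points back to the caller.
def fetch_then(url, on_done, on_error):
    def work():
        try:
            on_done(read_page(url))
        except Exception as exc:
            on_error(exc)
    t = threading.Thread(target=work)
    t.start()
    return t

# Blocking style: composition is plain sequencing, and errors are plain
# exceptions that propagate with their stack traces intact.
def fetch_both(a, b):
    return read_page(a) + read_page(b)
```
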

[1]:
[http://en.wikipedia.org/wiki/Reactive_programming](http://en.wikipedia.org/wiki/Reactive_programming)

[2]: At least in non-pure languages. They do serve a purpose in referentially-
transparent languages that don't have the concept of a stack or a thread, at
least not in their common definition.

~~~
rektide
GTOR to me seems to be about finding intents, looking at different signallings
of things happening across time. You state in the first paragraph that there's
a clash with how GTOR views things, and you have three paragraphs
proselytizing for green threading, assuming that I understood the initial
clash you saw, but you've provided very little context to explain the claim
that "[GTOR] confuse[s] intent (or abstraction) with implementation".

Are you getting hung up on the fact that GTOR talks through some specific APIs
for reactivity? You seem very much opposed to letting someone mention
callbacks, promises or observables. But GTOR is itself about mentioning these
things so we can get past them and think of reactivity more generally. Yet you
seem mired in your hatred for these concepts, in a way that prevents you from
seeing that the article is about building effectful processing (having code or
data whose effect is the running of other code) in a general fashion. And your
inability to see that generality, your forcing some weird diatribe against
non-green-threaded systems as all missing the point, is itself rooted in
confusing an implementation with intent, for surely, blocking or not,
concurrent or not, code and data will create reactions amid others.

I suggest getting more into the categories above. Sometimes binding happens
before the event, sometimes the event before the binding. Sometimes an event
is singular, sometimes it's over a duration, sometimes it's ongoing: edge
triggering, level triggering and continuous. Sometimes an event has
concurrency, sometimes it's spread over time.

All of these make sense in Erlang or any other runtime, even if you might not
see them expressed as first-class constructs with APIs, even if the reactivity
were more baked in, as Erlang's send is.

Most of all, I have a very hard time understanding what your clash really is.
I feel very much I'm being sold something, but the first paragraph doesn't
give me a clear idea of what it is I'm being asked to accept evidence for.

~~~
pron
My disdain for those techniques stems from their being anti-patterns in
imperative languages. Performing side effects inside monadic compositions is
particularly dangerous, and the _only reason_ anyone even considers importing
these techniques is that blocking is expensive, as they replicate features
already found in imperative languages (control structures, exceptions).

There is nothing about these that make them related to "general reactivity",
mostly because there is no such thing as reactivity. They are simply
techniques of performing actions in referentially transparent languages. In
imperative languages, the most general, theoretically pure, fundamental
construct of "reactivity" (whatever that means), is the thread (Erlang's send
is not special, and not required).

There is absolutely nothing more fundamental about the referentially-
transparent (lambda calculus) way of doing things than the imperative way of
doing things, and since the article discusses imperative languages, the LC way
of doing things is potentially harmful as there is nothing to enforce
referential transparency.

~~~
rektide
Your focus on GTOR explaining techniques is madness. It's clearly not about
that, yet you keep descending beyond the general in GTOR and into the specific
implementations. You have some priority, some thing that has to be fulfilled,
and you are fixated entirely upon this square-1 position and unable to see
anything beyond that. I think if you could find the general, you'd see that
your preferred expressivities- imperative systems- enact the same kinds of
GENERAL couplings. Whether it's implicit or explicit, it's still a general
conception and we have to think about it and have mental models that
understand coupling/reaction.

And those things GTOR does use in square 1 are fine. Your heavy-handed
rejection of them is colossally sad and twisted. If a mental construct suits a
problem and helps you think of it, that's what is important. The programmatic
platform follows our ability to think, it does not define it, yet you keep
insisting the cart must come first, that the platform has to be a certain way
or else or else or else. FUD. FUD FUD FUD.

It's using the references that everyone on the planet but those of your little
cultish view know to explain _general ways of thinking_ about things. Your
unwillingness to see parity between the general ideas put forward because it's
not your favored implementation reads grossly to me. Observers, contracts,
data-flow are real, graspable concepts and if they closely model how you want
to think about a problem then they are most definitely good concepts, and it's
up to the platform to have good ways to express natural thought. I still have
an enormously hard time deciphering the mad mess of shibboleths you toss out,
activating your elitist trope of true believers (and I'm not a complete
neophyte to your clique myself), but I am 100% sure, as I stated in my first
paragraph, that all systems enact a coupling/reactivity. Control, whether it's
explicit or implicit in the flow, is still the chief thing entailed in
mentally modeling and putting into practice: this is unrejectable.

In particular, your bringing up of the referential transparency doubles down
on my prime criticism in the previous: you bring up points that you claim
contend, but you provide no support, no structure where anyone but those
already attuned to your brand of thought can get onboard with you. Your claims
are incontestable, because they are indecipherable. They claim a split, but
don't deign to point out what that split is. That, culturally, is a big
indicator to me.

Your case rests around whether or not reactivity is a general thing or not,
and I think it's quite clear we can find these concepts embodied in any
system.

~~~
pron
I find it a little hard to understand what it is you're even talking about,
and I suggest you take a moment to understand what I'm saying (because it's
nothing as objectionable as you seem to think).

What is "reactivity"? This document defines it as "the process of receiving
external stimuli and propagating events". Ok, then -- I wouldn't call it a
very rigorous definition, but I can roughly grasp what the author means (more
or less; I think). Now, please show me why callbacks, promises or observables
are more "general" approaches to "reactivity" than, say, threads, blocking
queues and blocking futures? I chose those because they happen to be the duals
of the aforementioned constructs. If A is a dual of B, how can you say A is
the "general theory" while B is a "cultish view"? _This_ is what is very much
rejectable.

Now that we've firmly established that A (callbacks, promises, observables) is
no more general than B (threads, futures and queues) -- yet the document
focuses on one while completely ignoring the other -- I express my _opinion_
(that I can support[1]), which is that in imperative languages, constructs B
are far superior to their no-more-general constructs A. That, however, unlike
my previous statement (about the document confusing goals with
implementations, which was a statement of fact), is just a statement of
opinion.

> Observers, contracts, data-flow are real, graspable concepts and if they
> closely model how you want to think about a problem then they are most
> definitely good concepts, and it's up to the platform to have good ways to
> express natural thought.

Sure they are, but this document mentions some antipatterns to implementing
these abstractions (which is why I said it confuses implementation with
abstraction). Queues are definitely a dual of observables, and in my (and many
others') opinion, they are a far superior way of implementing dataflows in
imperative languages (they provide implicit backpressure, and they make
concurrency clear). Same goes for futures vs callbacks, or threads vs monadic
composition (threads preserve stack traces, control flow and exceptions, while
monads/promises don't -- they require a shadow implementation of all those
basic constructs).

I model many problems with a dataflow abstraction, but when I want to
_implement_ it, I reach for a blocking queue (aka, channel) rather than
observables. Don't confuse abstraction with implementation.
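For illustration, here is a minimal sketch (in Python, with invented names) of the queue-as-channel style described above: a bounded blocking queue gives implicit backpressure, because a full queue makes the producer wait, and the concurrency is visible in the code.

```python
import queue
import threading

# A two-stage dataflow: a producer pushes into a bounded channel, and a
# consumer thread pulls from it. The bound (maxsize) is the backpressure:
# put() blocks whenever the consumer falls behind.
def run_pipeline(items, maxsize=2):
    ch = queue.Queue(maxsize=maxsize)   # the "channel"
    out = []

    def consumer():
        while True:
            item = ch.get()             # pull: blocks until data arrives
            if item is None:            # sentinel marks end of stream
                break
            out.append(item * 2)        # the "dataflow" transformation

    t = threading.Thread(target=consumer)
    t.start()
    for x in items:
        ch.put(x)                       # push: blocks when the queue is full
    ch.put(None)
    t.join()
    return out
```
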

[1]:
[https://www.usenix.org/legacy/events/hotos03/tech/full_paper...](https://www.usenix.org/legacy/events/hotos03/tech/full_papers/vonbehren/vonbehren.pdf)

~~~
cowbertvonmoo
My use of the word "general" was more a joke than the scope of my intent. I
meant "general" in the sense of "general relativity", meaning that I’ve
proposed relationships among concepts that are often understood to be
unrelated. Specifically, I wanted to debunk the notions that one of these
tools is categorically better than the others or that any one of them should
subsume the others.

------
tel
I'm just happy someone finally wrote one of these things and properly
distinguished FRP as being different from "reactive programming" generally.

~~~
pron
FRP is no more than reactive programming (sometimes known as dataflow
programming) using a pure-functional programming style rather than an
imperative style. It's just that the term "reactive programming" itself has
been hijacked by some to mean something else entirely (roughly: concurrent,
low-latency programs in general).

~~~
seanmcdirmid
If we go back to the 80s and the original reactive programming languages
designed for embedded systems (which inspired FRP), we can see even more
differences.

In systems, there is a big push for (and against) "event-based programming"
models with respect to concurrent and distributed systems; e.g. see:

[https://www.usenix.org/legacy/events/hotos03/tech/full_paper...](https://www.usenix.org/legacy/events/hotos03/tech/full_papers/vonbehren/vonbehren.pdf)

where the definitions are even more different.

~~~
pron
Ah, yes. That paper (among other things) inspired me to implement true
lightweight threads on the JVM. I also use some of the arguments in that paper
to demonstrate that while pull and push are technically duals, in practice,
pull APIs are always superior (except, again, in referentially transparent
languages that lack the concept of a stack and a thread).

~~~
seanmcdirmid
I've been writing/designing reactive programming languages and frameworks for
more than 10 years now. I started out with Superglue in my dissertation:

[http://lampwww.epfl.ch/~mcdirmid/papers/mcdirmid06superglue....](http://lampwww.epfl.ch/~mcdirmid/papers/mcdirmid06superglue.pdf)

Early on, I realized that pull was the only safe/sane solution, so in all my
languages (barring early versions of Superglue), change propagation merely
dirties execution that could have read the changed value, which is queued to
be re-executed and will pull the new value if it still depends on it. That is
the style that I use for Glitch, and it has turned out to be very robust.
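As a rough sketch of that dirty-and-pull scheme (in Python, with invented names; this is not Glitch's actual implementation): setting a source value does no computation itself, it only marks dependents dirty, and a derived value recomputes, pulling its current inputs, the next time it is read.

```python
# A cell is either a source (holds a value) or derived (holds a compute
# function). Change propagation only flips dirty bits; work happens on read.
class Cell:
    def __init__(self, value=None, compute=None):
        self.value = value
        self.compute = compute          # None for source cells
        self.dependents = []            # cells that read this one
        self.dirty = compute is not None

    def set(self, value):               # source cells only
        self.value = value
        self._invalidate_dependents()

    def _invalidate_dependents(self):
        for dep in self.dependents:
            if not dep.dirty:           # stop at already-dirty nodes
                dep.dirty = True
                dep._invalidate_dependents()

    def get(self):
        if self.dirty:                  # pull: recompute on demand
            self.value = self.compute()
            self.dirty = False
        return self.value

def derived(compute, *sources):
    cell = Cell(compute=compute)
    for s in sources:
        s.dependents.append(cell)
    return cell
```
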

Push is much harder. Actually, I'm not sure how those systems even work
properly; there are too many things to coordinate.

------
yuchi
I found this one of the clearest reads on the topic. I just made it almost
mandatory reading for my team :)

------
liugiul
To me, this seems pretty similar to Clojure's lazy-seq. A seq is (afaik, I'm
no expert Clojure guy, although I enjoy it a lot) something with a 'next'
value. A lazy seq can basically wait before returning the next object, and
therefore be evaluated when it decides it's ready (like when new data comes
in, like a stream). Please let me know if I misunderstood this!!!

As someone who does JS for my day job, I'm really enjoying playing with
Clojure in my evenings and weekends (like now, it's Saturday night 00:30 here
in Berlin). A small part of me dreams about doing it full time ;).

~~~
astine
Close enough. A lazy sequence is a sequence whose elements are only generated
when they're asked for.

In Clojure, a sequence (seq) is a list datastructure. It can be a traditional
Lisp list, an array, or some other datastructure whose items are arranged in
an ordered sequence. You can generally treat a sequence like you would treat
an array or a list in other languages, accessing any element by its index
(except that it's immutable, of course). Clojure doesn't traverse the sequence
via 'next' unless the particular sequence type (i.e., a Lisp-style list) is
implemented to do it that way.

A lazy sequence is a special kind of sequence which is implemented with a sort
of 'cons' structure where the 'cdr' is a function to be called rather than a
pointer. To clarify, in a traditional Lisp, a list is implemented with a
series of pairs of pointers called 'cons' cells. The first pointer is called
the 'car' and it points to the content at that point in the list, and the
second pointer is called the 'cdr' and it points to the next pair. A lazy
sequence is similar except that the pointer to the next pair is instead a
function which returns the next pair. This means you can have a sequence whose
actual data is not yet fulfilled. When you access an element in a lazy
sequence, that element and every element before it will be generated (and
cached). Unless you access an element in that sequence, that element will
never be generated.

This lets you have things like infinite datastructures or do something like
wrap a stream in a sequence and loop over it like it was an array.
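A rough sketch of that cons-with-a-thunk idea (in Python rather than Clojure, with invented names): the tail is a zero-argument function that is forced at most once and then cached, which is what makes infinite sequences possible.

```python
# A cons cell whose 'cdr' is a thunk, forced lazily and cached.
class LazyCons:
    def __init__(self, car, cdr_thunk):
        self.car = car
        self._cdr_thunk = cdr_thunk
        self._cdr = None
        self._forced = False

    @property
    def cdr(self):
        if not self._forced:            # force the tail once, then cache it
            self._cdr = self._cdr_thunk()
            self._forced = True
        return self._cdr

# An "infinite" sequence of integers: only the cells you walk to exist.
def integers_from(n):
    return LazyCons(n, lambda: integers_from(n + 1))

def take(seq, n):
    out = []
    while n > 0 and seq is not None:
        out.append(seq.car)
        seq = seq.cdr                   # forces exactly one more cell
        n -= 1
    return out
```
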

BTW, if your day job is with JS, maybe you can squeeze in a ClojureScript app
on a small project? I've done this.

------
tunesmith
I immediately got hung up on the term "spatial" which has several concepts
bolted onto it, while spatial is never defined. What do they mean by spatial?

~~~
seanmcdirmid
Just a guess, but say you have an array of continuously changing values. What
value you get out depends on index (spatial) and time (temporal) of the
access. You could abstract over either: an index through iteration, and time
through observation.
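A toy sketch of that guess (in Python, with invented names): every read takes both an index (the spatial coordinate) and a time (the temporal coordinate), and you could abstract over either axis.

```python
# An array of time-varying slots: reading requires an index and a time.
class TimedArray:
    def __init__(self, size):
        # history[i] is a list of (time, value) pairs for slot i
        self.history = [[(0, None)] for _ in range(size)]

    def write(self, index, time, value):
        self.history[index].append((time, value))

    def read(self, index, time):
        # spatial access picks the slot; temporal access picks the
        # latest value written at or before `time`
        value = None
        for t, v in self.history[index]:
            if t <= time:
                value = v
        return value
```
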

------
visarga
This reminds me of LazyJs. It also unifies many concepts under the same
umbrella: arrays, dictionaries, strings, events - they could all be kinds of
lazy sequences. Unifying so many things under the concept of lazy sequence
gave me a wonderful feeling of insight.

