
Rx glitches aren't actually a problem - jessaustin
http://staltz.com/rx-glitches-arent-actually-a-problem.html
======
sdrothrock
The title initially intrigued me because I thought it was talking about errors
in prescriptions; it's actually about ReactiveX.

~~~
daniel-cussen
I thought he was talking about reception glitches in radio.

------
fmstephe
EDIT: The article is about JavaScript, so everything below does not apply to
it. You need more than one thread, at least, for any of these problems to
arise.

I strongly suspect that combining async streams will prove to be a problem in
practice, whether we resolve glitches or not. This article requires quite
subtle reasoning about async systems. In my experience this type of reasoning
is rarely done effectively on real life projects.

I am not criticising Rx, I am interested in Rx. But I am always concerned when
a project/community starts producing articles on very subtle topics and claims
'x isn't a problem if you just understand it properly'. Here properly probably
means deeply, and deep knowledge in software is scarce. There are too many
things to know, to know many things deeply.

Let me look at one example (and please comment if I'm really not understanding
something).

So we consume, asynchronously, two streams. Because they are asynchronous we
might hit some point where it's a bit uncertain how two outputs are related.

    
    
        errors            ---e1----e2------------------
        userActions       -u1------u2------------u3----
        analyticsMessages ---e1u1--(e1u2 e2u2)---e2u3--
    

That's a good example, and we fixed that problem with
'errors.withLatestFrom(...)'. But, since we are asynchronous, don't we also
have to worry about this scenario?

    
    
        errors            ---e1-------------------e2--
        userActions       -u1--u2--u3-u4-u5-u6-u7-u8--
        analyticsMessages ---e1u1---------------e2u8--
    

Now, let me clarify what happened here. The error occurred after u2, but the
errors thread got descheduled and the userActions thread kept on processing. I
am not certain exactly what to expect from
'errors.withLatestFrom(userActions,...' here but I suspect that 'e2u8' may be
the expected output.

So this error reporting system is fundamentally unreliable. If this is correct
then I would say that combining Rx streams is not appropriate for this use
case. This also suggests that although we managed to get rid of our glitches
we still produced a terrible piece of software.

Do I have the wrong end of the stick? (The answer was yes, but it was good
anyway.)

~~~
staltz
> The error occurred after u2, but the errors thread got descheduled and the
> userActions thread kept on processing.

Your suggested case is contrived and doesn't apply to JavaScript (the examples
were in JavaScript, where I assume all these streams run on the same thread).
I am not claiming withLatestFrom is the correct mechanism for determining the
order of events under multi-threaded assumptions. That's an orthogonal concern
to the issue at hand.

> I strongly suspect that combining async streams will prove to be a problem
> in practice

Combining async results is a real problem, and people often attempt to solve
it with callbacks or heavy usage of Futures/Promises. Rx solves these
problems easily, and I've witnessed a lot of developers confirm this when
learning Rx. It's not hard to teach the difference between withLatestFrom and
combineLatest and their intended use cases. Much easier than managing callback
hell, which is what RxJS intends to replace.
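To make the distinction concrete, here is a minimal single-threaded sketch of the two operators' semantics (hand-rolled stand-ins for illustration, not the actual RxJS implementations), replaying the error/userAction sequence from the marble diagram above:

```javascript
// Tiny subject: subscribers are called synchronously on each next().
function makeSubject() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    next(value) { subscribers.forEach(fn => fn(value)); },
  };
}

// combineLatest: emits a pair every time EITHER input emits
// (once both have emitted at least once) — this is what produces glitches.
function combineLatest(a, b, out) {
  let lastA, lastB, hasA = false, hasB = false;
  a.subscribe(v => { lastA = v; hasA = true; if (hasB) out.next([lastA, lastB]); });
  b.subscribe(v => { lastB = v; hasB = true; if (hasA) out.next([lastA, lastB]); });
}

// withLatestFrom: emits only when the PRIMARY input emits,
// sampling the latest value of the other input.
function withLatestFrom(primary, sampled, out) {
  let last, has = false;
  sampled.subscribe(v => { last = v; has = true; });
  primary.subscribe(v => { if (has) out.next([v, last]); });
}

const errors = makeSubject();
const userActions = makeSubject();
const combined = makeSubject();
const sampledOut = makeSubject();

const combinedLog = [];
const sampledLog = [];
combineLatest(errors, userActions, combined);
withLatestFrom(errors, userActions, sampledOut);
combined.subscribe(pair => combinedLog.push(pair.join('')));
sampledOut.subscribe(pair => sampledLog.push(pair.join('')));

userActions.next('u1');
errors.next('e1');
userActions.next('u2');
errors.next('e2');

console.log(combinedLog); // [ 'e1u1', 'e1u2', 'e2u2' ]
console.log(sampledLog);  // [ 'e1u1', 'e2u2' ]
```

The combineLatest log contains the extra 'e1u2' pairing, while withLatestFrom only fires per error, which is why the article reaches for it.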

~~~
fmstephe
(A great thing about HN is getting to talk with the authors directly)

I didn't recognise that you were talking chiefly about javascript. It is true
that my example doesn't apply here.

Ok, then I would make one comment (and edit my first comment).

I would avoid the discussion of simultaneity at the start:

    
    
        'Events in parentheses happen “simultaneously”.
         In practice they happen at slightly different times,     
         but separated by only a couple of nanoseconds, so
         people understand them to be simultaneous. Events (c1c2) 
         are called glitches and sometimes considered a problem
         because one would expect only c2 to happen.'
    

Because you are working in a single thread, simultaneous events just don't
exist. The glitches you describe occur completely independently of timing,
nanoseconds or otherwise. Drawing up diagrams where events occur at the same
time adds complexity which doesn't exist.

    
    
        errors            ---e1-------e2---------------
        userActions       -u1------u2------------u3----
        analyticsMessages ---e1u1--(e1u2 e2u2)---e2u3--
    
    
    
        errors            ---e1----e2------------------
        userActions       -u1----------u2--------u3----
        analyticsMessages ---e1u1--(e2u1 e2u2)---e2u3--
    

Both of these scenarios produce glitches (as I understand it). So we don't
need any notion of 'simultaneous events' to create them.

Anyway - that is nitpicking. I enjoyed your article. I agree that the scenario
described at the top does not apply in JavaScript.

I too strongly dislike callbacks :)

------
noelwelsh
Glitches as demonstrated via the diamond pattern can be avoided by traversing
the graph in topological order. This was demonstrated in Greg Cooper's PhD
thesis: [http://cs.brown.edu/~greg/](http://cs.brown.edu/~greg/)
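A minimal sketch of that idea, assuming a rank-ordered update pass over the diamond (the API and names here are made up for illustration, not Cooper's actual implementation): each node records its depth in the DAG, and dirty nodes are recomputed in rank order, so a node with two dirty inputs fires once with consistent values instead of twice.

```javascript
// A cell in a dataflow DAG: rank = depth from the sources.
class Cell {
  constructor(name, rank, inputs, compute) {
    this.name = name;
    this.rank = rank;
    this.inputs = inputs;
    this.compute = compute;
    this.dependents = [];
    this.value = undefined;
    inputs.forEach(i => i.dependents.push(this));
  }
}

// Set a source value, then recompute all downstream cells in rank
// order. Without the sort, 'd' below would fire twice: once with a
// stale input (the glitch) and once with consistent inputs.
function setSource(source, value, log) {
  source.value = value;
  const dirty = new Set();
  (function visit(n) {
    n.dependents.forEach(d => { dirty.add(d); visit(d); });
  })(source);
  [...dirty]
    .sort((x, y) => x.rank - y.rank)
    .forEach(n => {
      n.value = n.compute(...n.inputs.map(i => i.value));
      log.push(`${n.name}=${n.value}`);
    });
}

// The diamond: a feeds both b and c, which both feed d.
const a = new Cell('a', 0, [], () => undefined);
const b = new Cell('b', 1, [a], x => x * 2);
const c = new Cell('c', 1, [a], x => x + 1);
const d = new Cell('d', 2, [b, c], (x, y) => x + y);

const log = [];
setSource(a, 10, log);
console.log(log); // [ 'b=20', 'c=11', 'd=31' ] — d fires exactly once
```

Because d's rank is strictly greater than both of its inputs', it is always recomputed after them, so the intermediate 'b=20, c stale' state is never observed.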

Transactions, as I understand them, solve a different issue. This is the issue
of multiple inputs to the graph occurring at the same time.

~~~
ds300
> Glitches as demonstrated via the diamond pattern can be avoided by
> traversing the graph in topological order.

This is correct for DAGs which only propagate _value_ (as in Javelin[1] and my
own library DerivableJS[2]), but for graphs which propagate events (as in Rx),
topological sorting would only work for those parts of the graph which are
_effectively_ propagating value. Events don't have an inherent dedupe
operation, so it is very difficult to even imagine ways in which glitch
avoidance could be automatically enforced. It would certainly require semantic
program analysis.

Personally I think we should be avoiding the proliferation of events (as
encouraged by Rx enthusiasts) for exactly this reason. Their imperative nature
makes them very difficult to reason about.

[1]: [https://github.com/hoplon/javelin](https://github.com/hoplon/javelin)

[2]:
[https://github.com/ds300/derivablejs](https://github.com/ds300/derivablejs)
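A rough sketch of why value propagation admits glitch suppression where events don't: a value cell can compare old and new values and stop propagating when nothing changed, whereas an event has no "same event" to compare against. This is a hypothetical mini-API, not Javelin's or DerivableJS's actual interface:

```javascript
// A value-propagating cell that dedupes: setting an equal value
// does not re-notify downstream listeners.
function cell(initial) {
  let value = initial;
  const listeners = [];
  return {
    get: () => value,
    set(next) {
      if (Object.is(next, value)) return; // dedupe: no change, no propagation
      value = next;
      listeners.forEach(fn => fn(next));
    },
    onChange(fn) { listeners.push(fn); },
  };
}

const temperatureC = cell(20);
const isFreezing = cell(false);
temperatureC.onChange(t => isFreezing.set(t <= 0));

const log = [];
isFreezing.onChange(v => log.push(v));

temperatureC.set(15); // isFreezing recomputes to false — deduped, silent
temperatureC.set(-5); // flips to true — propagates
console.log(log); // [ true ]
```

An event stream in the same position would have to forward both occurrences, since dropping one requires knowing the events are semantically redundant.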

~~~
noelwelsh
The event/value distinction is an interesting viewpoint I hadn't considered.
Thanks for writing this.

------
wooby
They aren't a problem if the operations at the leaves of the graph are
idempotent, which is usually the case if these operations are UI repaints.

If the operations are for other effects - such as Ajax calls - then glitch
elimination is handy. Eliminating glitches at the dataflow level prevents
debounce logic from leaking out to the code performing the effects, preserving
the clarity and generality of that code.

I find glitch elimination most useful when the dataflow graph is
value-propagated, vs. event-propagated, as it is in the dataflow library I
helped implement, Javelin -
[https://github.com/hoplon/javelin/](https://github.com/hoplon/javelin/)

Javelin also supports transactional input, another feature that's helpful when
building dataflow graphs on which effects other than UI-repaint are hung.

------
krashnburn200
It's a feature.

