In the event model, Events that deal with model A also have to be dealt with in models B,C, and D. Wouldn't this lead to exponential explosion?
The fact that B has to explicitly wait for A in the code also seems prone to errors. If A changes to no longer deal with that event, will B block forever?
And this still builds an event chain, because B is blocking on A. I don't see how this is different from an event chain, except that the dependency is non-obvious (whereas a callback chain/future chain will at least show what data is being passed into B from A).
This is maybe OT as well, but this article could really do with some concrete examples in the explanations. "Model A" and "Model B" are not examples.
The dispatcher takes a potential mess of calls between various components, centralizes them and makes the sequence of events and dependencies much easier to reason about.
People knock Ember for its complexity but in reality it's simply a complete/coherent solution for the kind of problems people like this are iterating towards - and watching their code get more complex as they do.
Computed properties which depend upon other computed properties never seem to update quite right. The object system is tightly coupled to the template system. It's a monolithic framework that depends upon hidden magic to bind the pieces together.
How similar is Ember's dependency tracking to Knockout's?
With Knockout I've had exactly the problem described in the article: properties end up depending on each other in a chain, and it is hard to keep track of what actually happens when data changes across multiple view models. This is particularly true when the inevitable special cases appear, where I really want to do something slightly different halfway down the chain depending on what initiated the change, but all I have is a generic "property changed" event. How does Ember handle that?
Is there some subtlety I've missed here?
Let's assume the user clicks somewhere, so an event "user-clicked-x" is fired that model A listens to.
So you know that model B needs to change as well and you fire a new event type from model A, say "model-a-changed-because-of-user-clicked-x" or a more generic event, say "model-a-changed".
Both will cause headaches, because they impose quite a high cognitive load. With the generic one, you have the issue that sometimes you want A to change but not B, which leads to lots of conditionals (all of which you need to remember).
With the specific one, you have two events that mean almost the same, but are different. With every model C and D you need to carefully evaluate which event you listen to.
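To make the generic-event headache concrete, here is a minimal sketch. The tiny emitter and the "internal-recalc" reason are made up purely for illustration: with a generic "model-a-changed" event, B hears every change on A and has to filter out the cases it doesn't care about.

```javascript
// Toy emitter and event names, invented purely to illustrate the trade-off.
const listeners = {};
function on(type, fn) { (listeners[type] = listeners[type] || []).push(fn); }
function emit(type, payload) { (listeners[type] || []).forEach(fn => fn(payload)); }

const log = [];

// Generic event: B hears *every* change on A, so conditionals creep in.
on('model-a-changed', ({ reason }) => {
  if (reason !== 'internal-recalc') {   // B must remember to skip some cases
    log.push('B updated');
  }
});

emit('model-a-changed', { reason: 'user-clicked-x' });  // B should update
emit('model-a-changed', { reason: 'internal-recalc' }); // B must filter this out
```

Every new "except when" adds another conditional like the one above, which is exactly the cognitive load being described.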
In isolation, this is perfectly fine. But if you have 10+ of those things interacting with each other, it will be a big mess. You will have 30-40 different events that map out a hierarchical order you need to track down through multiple modules every time you change something.
If you do not abstract the chain away, the complexity is simply too high. You'd have to keep roughly 15-20 links in your head, which is too much. With Flux or similar patterns, you just have to keep one pattern in your head.
In the end it's a simple story of abstraction. Your mind can only deal with a couple of different things. If you reach the threshold, you need to abstract. Flux shows one way to do this (and it is only useful if you've reached the need for abstraction threshold).
Having one or two short event chains is perfectly fine. If you have 5 different user interactions that have a complex ripple effect through the state of your app, you need to do something, or development speed slows.
> you fire a new event type from model A, say "model-a-changed-because-of-user-clicked-x" or a more generic event, say "model-a-changed".
These two seem to be on opposite ends of a spectrum where I'd try to pick a middle point. I wouldn't want the semantics of "this is a reaction to a UI event" anywhere past the first event; it's way too detailed. I'd try to pick something like "model A's date field changed", or ideally something more meaningful like "The User updated their address".
To give you some context:
In the app I'm developing now, we've got roughly 25 global user interactions (meaning an event that will be triggered by a DOM event). Most of these events affect more than one model. Quite a lot of those also trigger complex event chains.
Multiple models have complex dependencies that would form branched dependency chains with conditionals.
It's just impossible to have everything in your mind at all times, which you need to if you want to extend a chain without bugs.
Now you might not have this issue in your projects, because they don't require that kind of interactivity. If that's the case, don't bother; keep doing what you are doing.
But if you ever realize that things get pretty complex in one of your projects, then you know where to start :).
So think about opening up a file in a web editor: that's an action. Many things might care about when a file is opened: the tab UI, maybe a log, other users connected to your session. Sometimes views correlate well to model updates, like a form, but many times they do not, in which case a different event type can be useful.
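A minimal sketch of that fan-out, assuming a toy dispatcher and invented interested parties (a tab list and a session log; neither comes from the article):

```javascript
// Toy dispatcher; the "stores" for tabs and logging are invented names.
const callbacks = [];
const dispatcher = {
  register(fn) { callbacks.push(fn); },
  dispatch(action) { callbacks.forEach(fn => fn(action)); },
};

const openTabs = [];     // what the tab UI cares about
const activityLog = [];  // what a session log cares about

dispatcher.register(action => {
  if (action.type === 'FILE_OPENED') openTabs.push(action.path);
});
dispatcher.register(action => {
  if (action.type === 'FILE_OPENED') activityLog.push(`opened ${action.path}`);
});

// One user action fans out to every interested party:
dispatcher.dispatch({ type: 'FILE_OPENED', path: 'src/app.js' });
```

The file-opening code never enumerates who cares; each party registers its own interest.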
That is, now he just has to declare it in the interested party -- whereas in your case, the programmer will have to fully manually manage who gets what.
With two separate events types, he would only have to listen for the events.
A: listen for X
B: listen for X
B: execute after A

versus:

A: listen for X
B: listen for Y
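The "execute after A" line is the interesting part. Here is a toy sketch of how a dispatcher's waitFor can enforce that ordering; this is a simplification for illustration, not Flux's actual implementation (which also guards against cycles):

```javascript
// Toy dispatcher with a waitFor; handler names A/B mirror the listing above.
const handlers = {};
const done = new Set();

function register(name, fn) { handlers[name] = fn; }
function waitFor(name, action) {
  if (!done.has(name)) {   // run the dependency now if it hasn't run yet
    handlers[name](action);
    done.add(name);
  }
}
function dispatch(action) {
  done.clear();
  Object.keys(handlers).forEach(name => waitFor(name, action));
}

const order = [];
// B is registered first, yet still runs after A thanks to waitFor:
register('B', action => { waitFor('A', action); order.push('B'); });
register('A', () => order.push('A'));

dispatch({ type: 'X' });
```

B's dependency on A is declared inside B's own handler, which is the "declare it in the interested party" point above.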
everyone thinks that adding a ton of extra complexity in the guise of a simpler api will make it more maintainable.
in this example you now have 4 events under the hood and two event dispatchers, but it seems simpler because you only "see" two and one dispatcher.
while it may make things a little bit simpler to maintain, you may enter debug hell when there are bugs in that added under-the-hood complexity... which will often be code you're unfamiliar with. that makes the article's main argument (helping with debugging edge-case bugs) kind of double-edged
I simply cannot follow the argument about "added complexity"
The use case defines the minimum amount of links you have to make, in this case 4.
Now you have two options: You just roll with it (we used to do that), or you try to abstract away a few steps (we do this now with the dispatcher).
The dispatcher here is roughly 100 lines of fairly simple, unittestable code, so I really cannot see where the elusive bugs come from. 100 lines that you will easily save if your app is complex enough by the way.
With the dispatcher, you can abstract away a couple of steps consistently over your app. Now I fully agree that you need a certain threshold of complexity to make it worthwhile.
But you cannot avoid the original complexity of your use case.
There is no alternative to decent abstraction if you reach a certain level of complexity, because it is given externally.
Other concepts like two-way databinding do exactly the same.
wait for C;
wait for B;
wait for A;
It's only a technical nuisance that one model needs to wait for the other to update.
I would certainly not view a sequence dependency that's critical to correct operation as a "technical nuisance" to be hidden away; it's an important fact.
You see, what you call a clear sequential path is what I call evil :). We had tons of those in our app, and it was so hard to keep track of them, and every time I didn't look at it for a while, I had to invest 10 minutes to track down the event chain.
There is one simple reason for this: if you have a sequential path, you lose the context after the first link. A updates because "user-clicked-x", and B updates because A updates. But if you're working on B, you ask yourself: why the heck did A update in the first place?
You might argue that you are always aware of this, but we found that when our app grew more complex, we actually didn't always know.
I'm sure you could find a solution where you keep the sequence and don't lose context, but then you have to state the dependency in the wrong place: In model A you have to say that model B should update. But that's all wrong because A shouldn't care about models that depend on it. The models that depend on A should care!
So by inverting the whole sequence dependency, you gain the following:
1. in every model you are aware of the context, i.e. the original event that triggered the state change in the first place
2. you are also aware of what other models this model depends on (it is explicitly stated and not hidden like you said).
This means that you can work on model B and extend it without ever looking at other models, while knowing exactly the origin of your state change. In my opinion, this is decoupling at its best. It also helps unit testing greatly.
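As a sketch of the unit-testing point, assuming a store-factory style where the dispatcher's waitFor is injected (createModelB, the 'modelA' name, and the action type are all hypothetical):

```javascript
// All names here (createModelB, 'modelA', 'user-clicked-x') are invented.
function createModelB(waitFor) {
  let count = 0;
  return {
    handle(action) {
      if (action.type === 'user-clicked-x') {
        waitFor('modelA');  // the dependency is stated in B, where it belongs
        count += 1;
      }
    },
    get count() { return count; },
  };
}

// In a test, waitFor can be a trivial stub, so model A is never touched:
const calls = [];
const b = createModelB(name => calls.push(name));
b.handle({ type: 'user-clicked-x' });
```

Because B receives the original action plus an injected waitFor, a test can exercise it in complete isolation and still verify the declared dependency.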
While I agree that inverting the dependency is a drawback, I've gained things that, at least in my opinion, heavily dominate that drawback.
Stack traces are a thing. Chrome even has asynchronous stack-traces.
A updates because "user-clicked-x", but it might also update because "user-clicked-y". Or because "user-clicked-z". So now I have 3 things to add to A and to B, because B needs to refresh every time A does. So if you forget about one of these, then suddenly B is out of sync.
A "solution" to this problem already exists in terms of event listeners. If B listens to any change on A, then A doesn't have to deal with B, but B updates properly and doesn't have to deal with all event types A has to deal with.
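A sketch of that listener approach (makeModel is invented; a real app would use any event-emitter library): B subscribes once to A's generic change event instead of enumerating every trigger.

```javascript
// makeModel is invented for illustration; any event emitter would do.
function makeModel() {
  const subscribers = [];
  return {
    data: null,
    onChange(fn) { subscribers.push(fn); },
    set(value) { this.data = value; subscribers.forEach(fn => fn(value)); },
  };
}

const a = makeModel();
const b = makeModel();

// B subscribes once; A never needs to know B exists.
a.onChange(value => b.set(`derived from ${value}`));

a.set('clicked-x'); // B stays in sync
a.set('clicked-y'); // ...no matter which event caused A to change
```

This fixes the out-of-sync problem, at the cost of the lost context the comments above complain about: B knows A changed, but not why.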
Now this argument line has dissolved into plain silliness.
He showed how he made the cognitive load less and the dependencies more explicit and locally evident IN THE CODE, and you tell him to use "stack traces" for that?
I'm trying to understand whether this is truly the case, or if I'm missing something.
> So now I have 3 things to add to A and to B, because B needs to refresh every time A does
We used to do this. A lot. It was a big mess. Because in reality (for us), it is never that clear cut. You almost never have full dependencies such that tight coupling is the right way to model it. There are always exceptions that you need to be aware of.
We found that decoupling reduces cognitive load greatly, and changing the relationships can be done a magnitude quicker because "listen to except if this and that happens" doesn't scale. There are just too many this and thats if your app is big.
I know that the status quo method always looks easier than something new, because the new thing is unfamiliar. But restriction of communication flow is, in my opinion, the ONLY way to keep order in a big, complex app.
Just consider this: I've tried both ways extensively, and I favor the Flux way. I might be an idiot, but there is a possibility that I'm not. This should encourage people to really try this once to avoid status quo bias.
Maybe the OP is just giving a bad example, or maybe I'm missing something - but one thing I really don't like in his example is that model B's 'waitFor' call needs to know about Model A, and that model A has an appDispatch. This kind of tightly coupled code isn't going to do anything useful for your codebase if you have a large application. It might make debugging easier right now because things are more explicit, but it will make you less flexible in the long term. It also doesn't solve the problem that you still need to understand that A has to finish before B can start - which I suppose most people would simply solve with an additional event. The OP does mention this in the article, but I'm not convinced that the trade off is worth it here. If I'm to explicitly state dependencies within the event handling code, then that means that any time I want to change how things handle events, I've got to remember exactly which of my dependents reference me. This isn't helpful - and in fact it may be more painful than the current situation.
I guess my main problem here is that the benefits this brings just aren't enough (in my view) to offset the potential pain. To me, it kinda just feels like cutting off your nose to spite your face. You might make one area a bit easier to debug, but you lose out in other areas too.
On the surface the code may look more sensible and easier to think about, but in practicality I'm not convinced that this won't introduce additional pain later on.
IMHO application events should simply bubble up through the DOM and get caught by appropriate controllers (Backbone.View in Backbone parlance) that then modify the appropriate models. If need be, it's the top-level controller for the app.
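For instance, a DOM-free sketch of the delegation idea (the handler keys and the todo model are invented for illustration; Backbone's events hash does the same type-plus-selector matching against real bubbled DOM events):

```javascript
// Handler keys and the todo model are invented; Backbone's events hash
// does the same type+selector matching against real bubbled DOM events.
const todoModel = {
  items: [],
  add(title) { this.items.push(title); },
};

const appController = {
  // one delegated table at the top instead of listeners sprinkled around
  handlers: {
    'click .add-todo': event => todoModel.add(event.detail),
  },
  handle(event) {
    const key = `${event.type} ${event.selector}`;
    if (this.handlers[key]) this.handlers[key](event);
  },
};

// A click bubbles up to the top-level controller and is matched there:
appController.handle({ type: 'click', selector: '.add-todo', detail: 'buy milk' });
```

Everything that can happen is listed in one handlers table, which is what makes this style easy for a new employee to parse.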
I'd take a simple approach like that over an event bus/dispatcher/coolest-new-pattern-since-sliced-bread - any day, and sleep at night knowing if something happens to me, any employee will be able to parse and debug my code.
PS And no, I don't use Backbone, but my own 500 LOC MVC framework that's got zero dependencies and is about 100x faster.
Could you explain what you do differently in your MVC framework to achieve those gains?
two-way databinding is perfectly fine and really helpful for a lot of apps. However, it will screw you for certain use cases.
In my opinion, it really boils down to how much global interaction you have in your app. If you simply render a lot of models that can only be changed through the view, then I don't really see a reason for a Flux-like architecture. Two-way databinding is perfectly fine there. (React vs whatever then boils down to performance vs toolset.)
If you do a lot of graphical stuff that allows for heavy interaction, filtering and other kinds of stuff, I believe that two-way databinding will be your downfall. You need to clearly and explicitly map out on how data flows through your app. That is the time where an architecture like Flux shines.
So ask yourself this question:
"How often can a user do something in your app, and a lot of Views need to rerender?"
If the answer is "often", then you should consider something like Flux. If not, Angular might be a better fit, especially if you already know it.
It isn't a 100% fit for what you're looking for -- there is no case study -- but it might be useful to you.
The main problem is that a TODO app is too simple. But if the app were more complicated it would be much harder to prepare and read the examples.
It's also clearer, easier to implement, already implemented in tons of frameworks and easier to unit test (since you no longer need to mock the appDispatcher).
You should probably try to avoid using a global eventDispatcher (as any other global), whether you can specify order of execution or not.
What I meant is it seems (I've never used or seen it before) to be closer to a message broker than a bus - http://www.udidahan.com/2011/03/24/bus-and-broker-pubsub-dif...
Rather than heaps of code, and weird concepts, you can just implement firing the event into the A, and B models by calling a method on each. Simple!
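A sketch of what that looks like, with made-up model names: the whole "event" is one plain function that calls each interested model in order.

```javascript
// Made-up models; the point is only that the sequence lives in one function.
const modelA = { value: null, update(x) { this.value = x; } };
const modelB = { value: null, refreshFrom(a) { this.value = `from ${a.value}`; } };

function onUserClickedX(payload) {
  modelA.update(payload);      // the order is plain to see...
  modelB.refreshFrom(modelA);  // ...and the stack trace shows who called whom
}

onUserClickedX('x');
```

No event objects, no dispatcher: the A-before-B ordering is explicit in the call sequence itself.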
In practice these are easier to debug, because the debugger supports function calls with stack traces, and easier to read, because people understand function calls. They are also faster.
This is why I think that long article is complex. Remember that OO uses messages and objects. Method calls are events. Also remember that there is an M in Document Object Model (DOM). So you see we already have a Model, and events?
Please also consider how much less code there is in the jQuery version of the todomvc app compared to the react one. http://todomvc.com/
The people arguing against you don't know what they're talking about. They're stuck in a paradigm where "de-coupled event-driven architecture" is a holy goal and can't see their feet on the ground anymore.
Faster: maybe, maybe not. Without profiling this is an empty statement. Besides, "premature optimization is the root of all evil".
Simpler: nope. You have to manually track all the interactions, and you strongly couple things together with your "OO and functions" idea. It might be simpler to just churn out code initially (instead of understanding an architecture), but it soon becomes an ad-hoc mess.
Let me put it this way: it's not like you have discovered a novel way of building stuff compared to these coders that overcomplicate things. What you propose is what these engineers have already tried, used for a decade or so, and found unscalable and wanting.
It can work for small and not complicated pages, but it's not a solution to modern single page web apps.
You might think that you are proposing a clear and simple way of coding as opposed to something like overengineered J2EE patterns mess.
But what you describe is more like the "Why use procedural code etc, GOTOs for flow are simpler and faster", or "why use functional programming, imperative is simpler and faster".
It is faster, because it is just a function call. It doesn't create any event objects, or have to go through six layers of other function calls.
It's simpler because there is less code, and there are fewer concepts.
No, it's not novel or new. Messages as function calls, and MVC comes from 70s smalltalk.
Yes, if there are lots of outputs it may be useful to reduce the coupling. In my experience, even with large apps (100 person teams), this is often not the case.
It is often useful to know what your app is doing by looking at where the events are going. Just using function calls makes this very easy, both in source code and in the debugger. So this also needs to be weighed up in the decision of how you are passing events around.
Yeah, so this is a first sign of misreading the dangers.
Of course the code you mentioned is "smaller than in the article". The code in the article is a toy example to illustrate a specific point.
It's when you start to build a full app, with all the (necessary for the requirements) complexity, that it's gonna get much more unwieldy than the code one would write on top of React.
>It is faster, because it is just a function call. It doesn't create any event objects, or have to go through six layers of other function calls.
Again: premature optimization. After you build your colossal temple of function calls in a NON TOY example, it will either be slow (because you'll have re-introduced abstractions and layers by yourself) or it would be a complex spaghetti of cross-calls.
>It's simpler because there is less code, and there are less concepts.
Fewer concepts != simpler. Assembly has fewer concepts too.
And it's only "less code" because you're comparing a "toy example + framework" with a "toy example without framework". It's after that level that it gets hairy.
>No, it's not novel or new. Messages as function calls, and MVC comes from 70s smalltalk.
Yeah, and it's what all these coders going to React etc have tried already for a decade and found out that it doesn't cut it with modern apps, because the environment they run in is nothing like a Smalltalk MVC application.
What stops you from unit testing function calls? Nothing.
Keeping DOM modifying code separate is good practice in either case. Then you don't need to involve the DOM in much of your code. Doing end to end testing is also completely ok to do.
It's worth repeating, so I'll say it again... It's better to keep DOM manipulation code separate. Then you don't need to mock, or simulate anything. Except where needed. You do need to test DOM manipulating code somehow.
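A sketch of that separation, with hypothetical names; the split itself, not the API, is the point. The state logic is a pure function needing no mocks, and only the thin render step touches the DOM.

```javascript
// Hypothetical names; the split, not the API, is the point.
// Pure state logic: no DOM, trivially unit-testable.
function toggleTodo(todos, id) {
  return todos.map(t => (t.id === id ? { ...t, done: !t.done } : t));
}

// Thin DOM layer, kept apart and tested separately (or end to end).
function render(todos, el) {
  el.innerHTML = todos
    .map(t => `<li>[${t.done ? 'x' : ' '}] ${t.title}</li>`)
    .join('');
}

const next = toggleTodo([{ id: 1, title: 'ship it', done: false }], 1);
```

Most of the test suite can then exercise functions like toggleTodo directly, and only the small render layer needs a real or simulated DOM.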
AngularJS also uses the DOM in their tests: https://github.com/angular/angular.js/blob/master/test/ngAni... Here's a jquery-ui test for comparison: https://github.com/jquery/jquery-ui/blob/master/tests/unit/s...