I've done something similar to this, though in a React Native world, where ReactiveX frameworks are common enough that this really seemed like the most obvious way to do it.
I think the terminology that made more sense to me at the time is CQRS (Command Query Responsibility Segregation) - instead of treating everything as a synchronous query/mutation, mutations are expressed as commands, and the expectation is that their effect will be reflected in the UI and on the backend asynchronously, but in the UI much sooner than on the backend.
The tricky part is that CQRS comes with some requirements for careful data modeling - you need to understand how your mutations compose across time, and like `swatcoder` mentions, you also need to use some unusual techniques for composing/coordinating operations across separate entities.
That said, it led to a fantastic user experience, and it had the added benefit of 'solving' a whole host of idempotency issues that turn out to be surprisingly common on mobile platforms because of missed ACKs from the server.
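Roughly, the shape of it in an illustrative TypeScript sketch (the names `CommandQueue`, `commandId`, etc. are made up for illustration, not from our actual app): a mutation becomes a command carrying a client-generated id, the UI applies it optimistically, and the same id doubles as an idempotency key so the backend can safely deduplicate retries after a missed ACK.

```typescript
// Illustrative CQRS-style command queue (not production code).
// The UI updates immediately; the backend catches up asynchronously,
// and commandId lets the server deduplicate re-sent commands.

interface RenameItemCommand {
  kind: "renameItem";
  commandId: string; // client-generated; doubles as an idempotency key
  itemId: string;
  newName: string;
}

type Command = RenameItemCommand;

class CommandQueue {
  private pending: Command[] = [];
  private flushing = false;

  constructor(
    private applyLocally: (cmd: Command) => void,
    private sendToBackend: (cmd: Command) => Promise<void>,
  ) {}

  dispatch(cmd: Command): void {
    this.applyLocally(cmd); // UI reflects the change right away
    this.pending.push(cmd);
    void this.flush(); // backend sync happens in the background
  }

  private async flush(): Promise<void> {
    if (this.flushing) return;
    this.flushing = true;
    try {
      while (this.pending.length > 0) {
        // If the ACK is lost, the command stays queued and is re-sent;
        // the server dedupes on commandId, so the effect applies once.
        await this.sendToBackend(this.pending[0]);
        this.pending.shift();
      }
    } catch {
      // Network failure: keep the command queued and retry later.
    } finally {
      this.flushing = false;
    }
  }
}
```

The key property is that `dispatch` returns before any network round-trip, which is what makes the UI feel instant.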
I read this as the author wanting to decouple as many things as possible, so that they don't needlessly block on each other and so that the logic for each layer stays contained within that layer as much as possible. It's a valiant goal, and I think they'd discover a lot of conceptually aligned tools in Reactive/Rx frameworks.
But of course there are tradeoffs to every architecture, and none are noted in this article. I'm sure the author is aware of them and just didn't want to distract from the article, but for others reading it, here are some examples:
1. When you fully decouple model changes from UI updates through an event queue, you gain efficiency on the model (which won't block on UI) but in trade, you increase the risk of the UI flooding from having more events than it can handle.
2. When you narrow the model language to GET/SET/DELETE {id} {data}, you gain reusability in your components but lose techniques for coordinating dependent changes (see the sketch below the list).
3. Highly decoupled architectures are harder to debug with traditional tooling because it's hard to follow the path from a bug's manifestation back to its root cause. The decoupling introduces a break in the trail, like a fugitive slipping into the river to shake off the bloodhounds.
Some projects don't need to worry about those problems or can solve them adequately within an architecture like this, but ultimately no architecture is universal.
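To make point 2 concrete, here is an illustrative TypeScript sketch of such a narrowed model language (the names are hypothetical, not from the article):

```typescript
// The narrow model language from point 2: three verbs, any entity.
// Every component can be written against this one interface, which is
// where the reusability comes from.
interface Store<T> {
  get(id: string): T | undefined;
  set(id: string, data: T): void;
  delete(id: string): void;
}

// What the narrow language cannot express: "apply these two changes
// together or not at all". The coordination has to live outside the
// protocol, e.g. in ad-hoc functions like this hypothetical one:
function moveFunds(
  accounts: Store<number>,
  from: string,
  to: string,
  amount: number,
): void {
  const a = accounts.get(from);
  const b = accounts.get(to);
  if (a === undefined || b === undefined || a < amount) {
    throw new Error("cannot move funds");
  }
  // Two separate set() calls: nothing in GET/SET/DELETE makes them
  // atomic or ordered across entities. That is the lost technique.
  accounts.set(from, a - amount);
  accounts.set(to, b + amount);
}
```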
> I read this as the author wanting to decouple as many things as possible
Nope. This was actually created incrementally, starting from YAGNI/DTSSTCPW (Do The Simplest Thing That Could Possibly Work), adding elements only when they became absolutely necessary, and in the end removing a lot more code than was added, while quality and performance got much better.
The one big thing was that the additions were not limited by the architectural preconceptions we currently have.
And then there was a lot of serendipity. This was not what I had set out to build ... it turned out way better.
> risk of the UI flooding
It turns out that Blackbird actually prevents that exact scenario, in fact that was the main reason for reifying the update events in the first place. And it worked pretty much perfectly at that.
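Roughly the idea, as an illustrative TypeScript sketch (not Blackbird's actual code; coalescing by id is one common way reified update events prevent flooding, and I'm assuming that mechanism here): because each update is a value carrying the id of what changed, queued events for the same id can be merged, so the UI is handed at most one event per entity per drain, no matter how fast the model produces them.

```typescript
// Illustrative sketch: reified update events, coalesced by id.
interface UpdateEvent {
  id: string; // which entity changed
}

class CoalescingQueue {
  private queued = new Map<string, UpdateEvent>();
  private scheduled = false;

  constructor(private deliver: (events: UpdateEvent[]) => void) {}

  push(event: UpdateEvent): void {
    // A newer event for the same id replaces the older one, so the
    // queue can never grow beyond the number of distinct entities.
    this.queued.set(event.id, event);
    if (!this.scheduled) {
      this.scheduled = true;
      // Drain at most once per microtask tick, however fast the
      // model side pushes events.
      queueMicrotask(() => this.drain());
    }
  }

  private drain(): void {
    const batch = [...this.queued.values()];
    this.queued.clear();
    this.scheduled = false;
    this.deliver(batch);
  }
}
```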
> ... lose techniques for coordinating dependent changes.
Not our experience.
> Highly decoupled architectures are harder to debug with traditional tooling
That might be the case generally, but we did not find it to be so with Blackbird. I think that was largely because the elements were both very simple and so incredibly easy to test individually that we got very few surprises in the composed results.
But you can and do, of course, still test the compositions as well. Hexagonal architecture for the win!
> The decoupling introduces a break in the trail, like a fugitive slipping into the river to shake off the bloodhounds.
Again, not really, as the model, where the functionality/complexity lives, is pretty much exclusively synchronous. The parts at the edges where asynchrony is introduced are generic and largely functionality-free; they tend not to change with new functionality and thus are rarely the source of new bugs.
And of course you can also test them independently.
As an example, the code path for performing all requests is completely generic. The only parts that change are the PI (polymorphic identifier) to URL translation and object serialisation/deserialisation, and those are local, functional, and easy to test without all the other machinery. Once that works, the confidence that it will also work with actual requests going over the wire is close to 100%.
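The shape of that generic path, as an illustrative TypeScript sketch (Blackbird is not written in TypeScript, these names are invented, and `https://example.com` is a placeholder): only the plugged-in pure functions vary per resource type, and all of them can be unit-tested with no network in sight.

```typescript
// Illustrative sketch of a fully generic request path. Only the codec
// changes per resource type; the transport code never does.
interface Codec<T> {
  toUrl(id: string): string;    // identifier-to-URL translation
  serialize(value: T): string;  // object -> wire format
  deserialize(body: string): T; // wire format -> object
}

async function fetchResource<T>(codec: Codec<T>, id: string): Promise<T> {
  const response = await fetch(codec.toUrl(id));
  if (!response.ok) {
    throw new Error(`request failed: ${response.status}`);
  }
  return codec.deserialize(await response.text());
}

// The varying parts are local, pure functions, testable in isolation:
const userCodec: Codec<{ name: string }> = {
  toUrl: (id) => `https://example.com/users/${id}`, // hypothetical endpoint
  serialize: (u) => JSON.stringify(u),
  deserialize: (body) => JSON.parse(body),
};
```

Once `toUrl`, `serialize`, and `deserialize` are verified on their own, the composed request path adds essentially nothing new that can go wrong.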
This sounds interesting. Unfortunately, the blog author didn't link to Blackbird, and there are many things with that name. Is it an open-source project somewhere?