I've also observed this similarity. The msg is like the actionType, the wParam and/or lParam are like the polymorphic objects that you pass with your action.
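Side by side, the analogy is easy to see. A minimal sketch in TypeScript (the action names and payload shapes here are made up for illustration, not from any real codebase):

```typescript
// Win32 dispatches on a numeric message code with two untyped
// parameters: WndProc(hwnd, msg, wParam, lParam). The Flux analogue
// is a string action type with a loosely typed payload.
type FluxMsg = { type: string; payload?: unknown };

function storeCallback(action: FluxMsg): string {
  // This switch plays the role of the switch on msg in a WndProc.
  switch (action.type) {
    case "TODO_ADD":
      // payload is the wParam/lParam analogue: the store has to know
      // (or guess) what shape it carries for this particular type.
      return `added: ${(action.payload as { text: string }).text}`;
    case "TODO_CLEAR":
      return "cleared";
    default:
      // Like DefWindowProc: anything we don't care about is ignored.
      return "ignored";
  }
}
```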
The dispatcher is also not the most efficient model, where every store is registered to listen to every event. This is a bit like multiple windows on an event loop. The difference is that in Windows, messages are almost always targeted to a particular window's handle (hwnd). This doesn't make sense in Flux, since it's more of an observer pattern. The logic of interpreting the meaning of an action is left to each store, which is really just a cache.
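A toy dispatcher makes the broadcast model concrete; this is a sketch of the idea, not Facebook's actual Dispatcher implementation:

```typescript
type FluxAction = { type: string };
type StoreCallback = (a: FluxAction) => void;

class Dispatcher {
  private callbacks: StoreCallback[] = [];

  register(cb: StoreCallback): void {
    this.callbacks.push(cb);
  }

  // Every registered store sees every action; there is no hwnd-style
  // targeting, so each callback filters for itself.
  dispatch(action: FluxAction): void {
    this.callbacks.forEach(cb => cb(action));
  }
}
```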
The biggest problem I have with Flux relates to this polymorphism. I use TypeScript where possible and this is the one place where it always breaks down. I understand the appeal of JS objects, but the only way to ensure your Flux-based system is stable is to have lots of unit tests around your actions and stores.
Redux is a more straightforward take on caching. I can also use type annotations on the reducers and associated store structure, so this helps ensure structural consistency. It also solves the isomorphism problem of server side rendering because each request can get its own state. There is no out of the box solution for this with Flux, since stores are singletons by default.
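A rough sketch of what the typed-reducer part looks like (the state and action shapes are invented for illustration):

```typescript
interface CounterState { count: number }

type CounterAction =
  | { type: "INCREMENT"; by: number }
  | { type: "RESET" };

// A reducer is a pure function; the annotations let the compiler
// check that every branch returns a well-formed CounterState.
function counter(state: CounterState, action: CounterAction): CounterState {
  switch (action.type) {
    case "INCREMENT":
      return { count: state.count + action.by };
    case "RESET":
      return { count: 0 };
  }
}

// No singleton stores: each server-side request can start from a
// fresh state, which is what makes isomorphic rendering workable.
const freshState = (): CounterState => ({ count: 0 });
```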
Minor nit: stores are just caches with observers. I'm not sure why they weren't just called caches.
I like how Redux is a pretty simple distillation of some Flux concepts... In the end, I think it comes down to application scale. The "new" way of mutating models via OO classes, each containing an arbitrary amount of logic, becomes much harder to reason about as you add features. More features means linear-to-exponential growth in complexity and risk of side effects.
With one-way workflows combined with immutable state, and idempotent components, it's much easier to log/replay/test any given scenario.
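With pure reducers, log-and-replay really is just a fold over the action log. A sketch, assuming a hypothetical reducer:

```typescript
type LogState = { clicks: number };
type LogAction = { type: "CLICK" } | { type: "UNDO_ALL" };

const step = (s: LogState, a: LogAction): LogState =>
  a.type === "CLICK" ? { clicks: s.clicks + 1 } : { clicks: 0 };

// Replaying a logged scenario is a one-liner: fold the log over the
// initial state. The same log always reproduces the same final state,
// which is what makes scenarios loggable and testable.
const replay = (log: LogAction[], initial: LogState): LogState =>
  log.reduce(step, initial);
```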
The big idea from old school windows that is shared with Flux is the idea of little views that render themselves and manage their own state. In Windows we called those Controls or Window Classes. It is a good idea, and one worthy of preserving.
But on Windows, each control had only its designated space. In React, if one widget messes with another's DOM, all hell breaks loose. ...Maybe similar to silly applications abusing the underdocumented Windows API to do things like replace the normal Windows shell with a special skin.
If you've ever created a user interface in HTML, you've used something that does not have the concept of user controls. Let's say that I want to make a numeric entry user control for entering numbers on a touch screen. This control will be made up of a collection of built-in UI controls that will work together to do what I need: let's say a text field, a couple of up and down buttons, an always-visible keypad, and a little slider that lets us move between min and max.
In most HTML-based systems, it's not a built-in or natural thing to have such a self-contained "User Control" that I can just plop into my user interface in 25 different places and have it manage itself, and the interaction between its own sub-components. Django, for example, completely lacks such a concept (although TurboGears does have it). This is a first-class concept in classic Windows programming, and also in Flux, although it's missing from most web-based systems.
Hehe, I suspected someone would bring up HTML. Plain HTML is a markup language designed for documents, not a GUI framework. It hardly even has the concept of any interaction in the first place, so how can it have the concept of reusable and encapsulated interaction? (jQuery is not a GUI framework either, by the way; it's a DOM manipulation tool.) I would say HTML is too low-level to be called a GUI framework. There are other low-level graphics frameworks that also lack this: OpenGL, for example, is designed for graphics only. If you want widgets, you have to build them yourself on top of the low-level framework, or find someone else who already did.
But yes, you are right that it's quite lacking in this area on its own, although it's easy to find frameworks on top of HTML that add this feature. Google for "HTML datepicker" and I bet you will find thousands of results. Other frameworks on top of HTML that do have this concept baked in are ASP.NET and Angular.
1)
Mac OS X does not store a bitmap for every widget, that's iOS's architecture. It stores a bitmap for every window. Having a layer (GPU-stored bitmap) was only introduced once CoreAnimation was ported to OS X. It was and is optional.
2)
OS X Views also have a -drawRect: method that works the same way.
And React and frameworks like it just duplicated this; see http://blog.metaobject.com/2015/04/reactnative-isn.html In fact, when I first read about React (non-native), my first thought was "hey, finally they came up with a good equivalent of NSView + drawRect:".
Isn't drawRect: working at a different level of abstraction than React? In React, the view function constructs a virtual DOM tree, which can contain links, buttons, form fields, tables, etc. The equivalent on OS X would construct NSControls or NSCells, or objects that ultimately got translated into those, rather than just drawing on a canvas.
I saw this as react choosing the obvious way to implement drawRect: inside a browser: painting with HTML.
You could easily have a "graphics context" that accepts high-level objects, or a variant of drawRect: that creates subviews and then tells those subviews to draw themselves.
I don't see this as being fundamentally/structurally different, though there is a slight difference in the implementation.
As someone who has written several things with Flux and Flux-esque architecture, I see it as a step in the middle, rather than where things are ending. It's not a large step from Flux (Stores update themselves in response to actions) to Redux (Model the entire application as reducers on a sequence of Actions) to RxJS Observables.
What's shared in there is the idea that unidirectional data flow is a whole lot easier to reason about, model, and simulate than 2-way data flow. Everything else is semantics.
I really appreciate that Facebook resisted the urge to build the Ultimate Framework, with one thousand bells and whistles.
Instead they kept things simple and low level, to the point that writing vanilla React and Flux is kind of verbose. However, I much prefer this approach to some of the other frameworks that try to do everything for me, but which mostly just end up confusing me with too many abstractions.
> What's shared in there is the idea that unidirectional data flow is a whole lot easier to reason about
It makes some things a whole lot "easier to reason about" (so sick of that phrase), but other things not so "easy to reason about", like, for instance, error handling. Getting my head wrapped around the fact that asynchronous errors had to live in their own stores and be handled in the same way as all other data passed to the view was certainly not "easy" to reason about and still doesn't sit right with me to this day. You make concessions with every pattern and there is no silver bullet.
I'm not saying easy to reason about in the sense of "easy to learn because it isn't a change from how we used to do things", but rather, in that it allows one to easily answer:
- What is the current state of things?
- How did we arrive at the current state of things?
- What should the UI look like given the current state of things?
It means that we can say: "Thing A happened, then Thing B happened, then Thing C happened", and then conceptualize what things should look like after that chain of events. I've found this to make errors a whole lot easier to reason about, because I don't need to piece together the state when an error happens -- just fire an action that says "an error happened", and the stores figure out how to act accordingly. It's just another action.
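The "error is just another action" idea, sketched with hypothetical action and store shapes:

```typescript
type FetchAction =
  | { type: "FETCH_OK"; data: string[] }
  | { type: "FETCH_FAILED"; message: string };

interface ListState { items: string[]; error: string | null }

// The store doesn't treat errors specially: a failure is just one
// more event in the "Thing A, then Thing B" chain, and the view
// renders whatever state results.
function listStore(state: ListState, action: FetchAction): ListState {
  switch (action.type) {
    case "FETCH_OK":
      return { items: action.data, error: null };
    case "FETCH_FAILED":
      return { ...state, error: action.message };
  }
}
```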
Unidirectional data flow has been mainstream at least as often as nests of observers and component state. Probably moreso, if you count the 20 years of computing where the only thing you could do was batch processing.
What React does do is bring unidirectional data flow into the modern SPA web era. That's an accomplishment now, but it's important to put it in its proper place in history. Most of the history of programming had unidirectional data flow, it has its share of flaws as interfaces get more complex and interactive, and it's likely that at some point in the future people will move back towards multidirectional data flow.
Win32 is not unidirectional data flow. There's the SetWindowLong API to modify/update a component anywhere you like, and it's used so widely everyone must've done it.
There are some great things going on in React / Flux, but the part that needs to be emphasized about Flux, that Facebook doesn't address explicitly anywhere, and that most people eager to always be on the cutting edge will never admit, is that this stuff was designed to solve problems for very complex applications. Complexity is relative, and the solutions that reduce complexity and friction in the development process for Facebook may increase it for another organization. That is to say, Flux / React et al is by no means simple. Not even a little bit. But it probably simplified a lot of things for the Facebook team. However, YMMV for your 6 person startup engineering team.
The complexity comes from stitching everything together. You choose your router, your flux library, your build tools, whether or not to use JSX, whether or not to write your CSS with JavaScript, which fancy new React-specific testing and mocking library you need to use, how to organize your project, what best practices you should follow, what gotchas you will encounter because of React's relatively young age as an OOP library, etc.
You will always have to deal with complexity; it just depends on what kind of complexity you are willing to stomach. Some people prefer to deal with the complexity of stitching things together; other people prefer to have things stitched together for them and deal with the complexity of many abstractions. Both are fine choices, and we will debate endlessly with each other over which approach is best. (Hint: neither is.)
Facebook's Flux is pretty verbose and can be difficult to setup but similar libraries like Redux are very simple if you have prior JavaScript experience.
React is also pretty simple when compared to libraries like Angular.
I agree, React is fairly simple to use on its own, but the Flux pattern / framework is really where the complexity comes in, IMO. I refer to them together b/c you rarely see one without the other. My general feeling is, Flux seems like an excellent solution for some of the problems Facebook has -- their inbox for example, where you have multiple components on a page that need to react when external events happen (a message is sent, a message is received, etc). My feeling is that many would-be cargo-cult-ers don't have problems at this level of complexity. Most of them are doing basic CRUD operations, and using a needlessly complex framework to do it.
Windows development has been increasingly moving to ReactiveX and ReactiveX-style observables as the "best practice" way of handling time and change. (I certainly encourage new Windows projects to take a good, long look at ReactiveUI.)
I certainly think that we're going to see more migrations to RxJS (or Bacon or other relatives) from Flux (or in addition to Flux).
Personally, I've been preferring Cycle (http://cycle.js.org), which is directly RxJS-based, over React and Flux, but there are an increasing number of options and maybe no "perfect" answers just yet.
Yes, when I first read about Flux (which was after I learned RxJS) I kept thinking that you are essentially writing by hand things that could be generated automatically. It's like going from nested callbacks in early Node.js to Go. :)
I prototyped a decent amount of UI in Cycle and thought that it was great. Using virtualdom with it was really enjoyable and a lot easier to reason about than what is seen in the typical jQuery-based RxJS/Bacon/Kefir examples. However, I found the interaction between exception handling in Cycle and RxJS to be strange at times... maybe I'm just not experienced enough with them yet, but I swear I had some swallowed exceptions that I could never track down. That being said, I feel that this is "the future" (standard disclaimer applies).
Currently I'm using Knockout.js for UI stuff at work and I've found that by maximizing the use of pure computed observables and minimizing mutation of observables (essentially, implementing unidirectional data flow) I can get pretty good results.
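That unidirectional use of observables can be approximated in a few lines; this is a toy sketch of the pattern, not Knockout's actual implementation (`ko.observable`/`ko.pureComputed` do much more, e.g. caching and automatic dependency tracking):

```typescript
// A toy observable: holds a value and notifies subscribers on change.
function observable<T>(initial: T) {
  let value = initial;
  const subscribers: Array<(v: T) => void> = [];
  return {
    get: () => value,
    set: (next: T) => {
      value = next;
      subscribers.forEach(fn => fn(next));
    },
    subscribe: (fn: (v: T) => void) => subscribers.push(fn),
  };
}

// "Pure computed" values are derived from source observables and
// never set directly, so data flows one way: sources -> derived -> view.
const price = observable(10);
const quantity = observable(3);
const total = () => price.get() * quantity.get();
```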
Yes, exception handling with Observables, just like with Promises, gets to be a bit more complicated, especially as browser Dev Tools haven't quite caught up to them yet. (With Promises native to ES2015 that's getting fixed and native Observables are tentative for ES2016 or ES2017 which would be great. Also, now that Promises are native, I'm really hoping for a big Node push towards more Promise-friendly APIs.)
The great thing about Observables, though, is that even though exception handling is more complicated as you first learn it, it's a lot more consistent as you get used to it than the callback/event-handler world. That said, I do think Cycle swallowed exceptions it shouldn't have early on (especially without good browser dev tools support) and seems to be getting better at console.logging them, if not propagating them through observable streams as sometimes it should.
My fallback is still Knockout as well. It has served its role, with its very basic observables, very well over the years.
In a nutshell, React encourages you to build applications with unidirectional flow and a mostly immutable style of data handling. React, however, mostly focuses this effort on the View. Cycle extends this encouragement to the entire application architecture and RxJS is its means of doing so in a consistent, reasonable way.
Pros to Cycle over React is that it is truly more "Reactive" in the best of ways, and in a way that is easier to work with (RxJS has a standard set of operators for building and composing reactive Observables that are shared with other platforms).
Cons to Cycle over React is that you lose "the Facebook effect". It's much more a project of love than of big enterprise and so it's a bit harder to learn and a bit more complicated at first glance. (That said, RxJS itself and all the Reactive Extensions libraries have gotten a lot of corporate love from companies as diverse as Microsoft and Netflix.)
http://reactivex.io is a great reference site to learn more about RX.
http://cycle.js.org of course also has a very strong explanation of what it is and how it's expected to work.
Have a look at the Windows design some more? It's pretty extensively studied and documented. A "this is like that" article is a hint that if you wanted to learn more about this, you could also learn more about that.
Specifically, look into how XAML apps work on the Universal Windows Platform [1]. Windows application development moved past Win32 about ten years ago with the initial version of XAML (Windows Presentation Foundation).
Nowadays, the event loop is still there under the hood, but it's abstracted away by the UI framework. You are left with independent views that can render themselves, but the raw event loop is exposed as higher level events raised by individual views (e.g., a single "Tapped" event instead of having to handle separate mouse/touch/pen/controller down+up events).
Apps often use some sort of message bus for communicating between non-UI components in a decoupled way, which may or may not re-use the UI thread's event loop. In general, you try to minimize what happens on the main loop, since it can lead to UI unresponsiveness. However, in XAML the main event loop is actually a priority queue, so you can put stuff into it at lower priority than regular input events[2].
> Disclaimer: I work at Microsoft, where we're using XAML to build the Windows Shell.
This is off-topic, but I have a question about the Windows 10 shell. You previously said that the new XAML-based part of the shell is written in C++/CX. Why not C#? Is the performance and/or memory footprint of the .NET Framework still not good enough, even with .NET Native?
Except that XAML itself is just the XML-friendly way to describe what was traditionally done by a resource fork.
Yeah, sure, it's got a few new tricks, but in that regard it's just another case of everything old being new again: it basically serializes the CLR, whereas the old resource fork serialized the Windows API.
Source? I always took XHTML to be an effort to make HTML into a stricter, more parse-friendly format. XHTML lets you use an XML parser to pull apart a page rather than having to deal with some kind of markup tag soup (which also makes it a great target for tooling within an IDE). XAML is concerned with layout and binding in ways that, I don't believe, were ever intended for XHTML. I could be wrong, but they've always seemed worlds apart.
Document developers and user agent designers are constantly discovering new ways to express their ideas through new markup. In XML, it is relatively easy to introduce new elements or additional element attributes. The XHTML family is designed to accommodate these extensions through XHTML modules and techniques for developing new XHTML-conforming modules (described in the XHTML Modularization specification). These modules will permit the combination of existing and new feature sets when developing content and when designing new user agents.
If your document is just pure XHTML 1.0 (not including other markup languages) then you will not yet notice much difference. However as more and more XML tools become available, such as XSLT for transforming documents, you will start noticing the advantages of using XHTML. XForms for instance will allow you to edit XHTML documents (or any other sort of XML document) in simple controllable ways. Semantic Web applications will be able to take advantage of XHTML documents.
The old HTML behaviors would be achieved by XML stylesheets, rendering into HTML4 or XML events.
There was then a plethora of standards planned to augment XHTML with application-level features.
I've seen comments followed by inappropriate disclaimers a bit recently, so I'm going to go all old man here and let you know that a disclaimer is a statement denying something, not a statement of credentials.
I'm not sure what single catchy word you want here, but that's not it.
I think the word most people are looking for in these cases is "disclosure", in that they want to "disclose" their affiliation so that readers are aware of potential biases in their views.
I am too, but maybe you're thinking of Windows Presentation Foundation? XAML files are still used to describe user interfaces in whatever they're calling Metro these days.
no, but they've introduced a bunch of versions of it under different names (WPF, Silverlight, Phone Silverlight, WinRT XAML, UWP), a few of which broke compatibility, so that's probably where you got that impression.
Anyone can bitch about the latest tech because there's very little new under the sun. It's far more constructive to suggest improvements. Like the adage "give me solutions not problems".
In Mac OS (1984), controls were different from windows. Advantage was that controls were more light-weight, disadvantage that one couldn't build a control out of smaller controls (at least, the system did not support that). Its event loop also was extremely rudimentary. If you got a mouse click event, you would have to make a system call to find out what window it was in, then a system call to find a control, etc. If the OS had a real kernel and memory protection, that would have killed performance.
Carbon moved the "giant switch statements" out of the views, inside the library (edit: maybe, even into the kernel. That way, application processes only would be woken up if they really had work to do). Controls had to tell the library what events they were interested in, and what function to call to handle them.
Advantage was that the system didn't need to fire zillions of events to controls that didn't do anything with them (for example, few controls respond to "mouse moved" events, but if the system doesn't know that, it has to send corresponding events to a top-level window, the control below it, the control in the control below it, etc.). That decreased power usage (at least in theory; code and data accessed could be closer to each other, decreasing cache misses, and there were fewer JSR-switch-do nothing-RTS cycles), and allowed for the system to introduce more high-level events (double-click, triple-click, "mouse entered", "mouse moved outside the area of the control", etc) without decreasing performance; disadvantage that registering the exact set of events one was interested in wasn't fun (especially because, in those days, one needed to pass universal procedure pointers to the Carbon library to register handlers)
If you have lambdas and reflection, the latter can be made free for programmers. So, I guess that is an area one could look at. I'm not sure the gains will be real in a system where the windows live in a browser window, though.
Well, first, I'd note that React and Flux are already learning from the past. For example, I absolutely think there are tons of similarities between WM_PAINT and React's DOM diffing, but there are also major, major differences. A really key one being that React effectively handles the rendering tree directly, and can therefore do high-level manipulation and performance work on it, whereas Windows paint messages forced the windows themselves to handle all of their state diffing and painting issues. This makes the React part of Flux a lot closer to things like WPF, or retained-mode 3D graphics. (In fact, it wouldn't shock me if that were the actual inspiration for React's DOM work, and that the rest of this is more convergent evolution.)
I'd also note that we know that this style of design scales amazingly well. You can build and maintain applications as complex as Word, Myst, Netscape, and so on indefinitely. So we definitely know that this design has some historical precedent of working really well, and we're probably not way off track.
That in turn means I think we can answer the "what learning can we apply" by looking at what worked well historically.
For example, one of the things you have to do is to hide the low-level event loop. That's what frameworks like OWL and MFC did very early on, and I think what frameworks like Reflux are trying to do now. You even have some of that in the form of observers and so on in Flux itself. But I think that getting those types of things standardized, and a bit higher-level, will help a lot. I suspect, although I don't know, that ES5 was a bit of a blocker on getting that in Flux earlier, and suspect that Babel's pervasiveness will let that situation start changing, but that's entirely a guess.
We also know that, for the overwhelming majority of apps that are actually written, having a GUI designer building on the underlying framework (e.g. Interface Builder, VisualAge's form designer, etc.) can both decrease development time and reduce bugs, and we know that such tools work best with certain patterns in how callbacks work in the underlying framework. Specifically, you want one-to-many observers, strongly typed events that can be exposed and described via reflection, etc. So I'd hope that implementing that kind of thing in Flux can be done in a forward-looking way from the beginning, rather than getting bolted on later. More generically, designing the framework with an eye towards making it tooling-friendly is probably a really good way to future-proof things from the beginning.
I guess we'll see as we move forward. Each situation is a little different, so while there are parallels, it's hardly a slam-dunk that things will go exactly the same. But I do think that keeping an eye towards tooling is a really logical way to look forward and learn from the past.
I think once we see react components converge across web + native we'll start to see more cross-platform tooling and designers springing up. I think things are looking pretty bright. Arguably the key insight the react team had was to treat the browser as a dumb output... and then realise that the DOM could just be one of many outputs.
I did just wonder whether in the GUI world there was a major pattern that people were using today that Flux should be using instead. But apparently not.
I am struggling with the GUI designer factor. Most of my career has been spent working on WinForms and WPF GUIs and a stack of different things under them. Visual designers are fantastic for tweaking layout, but lately the question I'm asking is whether it's really worth being forced into a particular way of retaining state in the UI and a particular edit/compile/test workflow.
When I step back and look at the broader life cycle of projects, very little time is spent in the visual designer. Is that because visual designers are just that powerful or does it speak more to the relative weight of other development factors dominating projects, at least the ones I work on, and admit a possibility that the designer may be worth giving up in trade for other gains in other areas like interactivity (e.g. F# REPL) or a different conceptual model (e.g. React-style functional components)?
I surely wouldn't want to write big WinForms/WPF GUIs in normal imperative style:
var f = new Form();
var p = new Panel();
var b = new Button();
b.Text = "Go";
b.Click += ...
p.Controls.Add(b);
f.Controls.Add(p);
But the way React/Om and Elm are integrating GUI construction into the flow of the program is looking awfully compelling.
I'm interested in hearing more perspectives on the tradeoffs between having visual designers and constructing the GUI in code, assuming an acceptably powerful syntax or model for constructing it.
If you want to know what comes next from fb as far as Flux goes, dig into Relay. It's actually really important to build flexibility into the data types. The model of a user changes but a year-old app should still be able to query and understand the results.
That also goes for old components. When you have thousands of components, how do you update the model? Well, you avoid strict typing and look to query languages instead. Rather than every component sharing a type, each component internally defines what it needs.
Moving to the dispatcher: the data fired off by actions needs to fit the stores that are listening, and a common type requires everything to be completely in sync. I think it's worth considering how a query language could fit here, such that a listener can query the payload for what it needs from a message. A flexible object/JSON blob is a reasonable way to pass messages with an eye toward that in the future.
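One way a payload "query" could look: each listener declares the key paths it needs and pulls them out of a loose JSON blob, so producers and consumers never have to share a type. A rough sketch (the dotted-path lookup is invented for illustration, not a real query language):

```typescript
type Payload = Record<string, unknown>;

// Each listener "queries" the payload for just the fields it cares
// about; extra fields added later by newer producers are ignored,
// and missing fields come back as undefined instead of breaking.
function pick(payload: Payload, paths: string[]): Payload {
  const result: Payload = {};
  for (const path of paths) {
    let value: unknown = payload;
    for (const key of path.split(".")) {
      value = (value as Record<string, unknown>)?.[key];
    }
    result[path] = value;
  }
  return result;
}
```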
I see the messages as the start of a distributed application architecture, so I can have messages that target remote services, e.g. "turn on living room lights", and similarly messages from the server, "living room lights are now on." It's a nice way of tying local APIs and remote services together: have them all communicate with the same messages.
> A really key one being that React effectively handles the rendering tree directly, and can therefore do high-level manipulation and performance work on it, whereas Windows paint messages forced the windows themselves to handle all of their state diffing and painting issues.
That is a big difference. I think we should pay more attention to things like who-owns-what when we're thinking about the structure of our applications. It's a bit glib to, as the author of the piece does, say "oh well there's diffing here, and Windows had diffing too, so they're the same". Differences like this, well, make all the difference.
This is just a high level thought : Will we eventually converge to the point where Flux will also provide an abstraction to the GPU to help speed up rendering? I know there are some tools in place now, but can we get to the point where the designer has no idea they are utilizing an underlying native GPU vs how the browser chooses to do the rendering?
> For example, one of the things you have to do is to hide the low-level event loop.
This is the very point at which you cross from a library into a framework.
Not to take away from anything else you said, I wonder if this evolution isn't just the latest step in a neverending oscillation around this pivotal point.
This was an awesome read. We use React and Flux daily at work so I'm going to share this with coworkers. I'm a little confused on what the author's concern is though.
> I’ve just felt…well, weird. Something seemed off
Is there anything substantively wrong with the flux pattern or drawbacks?
EDIT: I changed that paragraph; thanks for pointing out it no longer fit with the rest of the post.
Original comment:
The weakness of typing in the switch blocks. I'll alter that sentence a bit; it's a leftover from an earlier version of that article where I was going to focus very narrowly on how uMsg/wParam/lParam, like the way actions are usually done in Flux apps, are decoupled to the point where it's very easy to make typing errors. Then I described why I thought they were similar to begin with, and then axed most of the original post when the entire thing switched to showing how Flux is an old pattern we've done before. I'll see if I can tweak that sentence in a way that keeps the flow going.
Or better, use Elm (or something similar) and tagged unions. That way you get type-checking and exhaustive-checking of your message types. You still have a big match (~switch/case) but now the compiler yells at you if you've forgotten to handle a message tag or if you're not unpacking the right types from it.
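For what it's worth, TypeScript can get close with discriminated unions plus a `never` check, a common idiom that makes the compiler flag unhandled tags (the message types here are made up):

```typescript
type Msg =
  | { tag: "increment"; by: number }
  | { tag: "reset" };

function update(count: number, msg: Msg): number {
  switch (msg.tag) {
    case "increment":
      return count + msg.by;
    case "reset":
      return 0;
    default: {
      // If a new tag is added to Msg and not handled above, this
      // assignment stops compiling: the exhaustiveness check.
      const unreachable: never = msg;
      return unreachable;
    }
  }
}
```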
I agree. He does not say. That it's an old pattern is probably a testament to its strength.
I am using React/Flux (Redux) as well. I generally love it and find it easy to write dynamic UIs and without making very many mistakes. Each React and Flux component is small, focused, testable, and easy to reason about.
The only gripes I have are the small amounts of boilerplate I end up with. Redux cuts this down a lot over a homebrewed Flux implementation I wrote. Also, testing complex trees of components can be a bit of a drag.
So what is Angular then? Angular seems to me to be more like an ORM for the view where there's dirty checking of the model and then update events are dispatched to the external system which is the DOM instead of the DB. Is there something similar in the GUI toolkit world?
Angular is very similar to the MVVM style of GUI programming that's popular in Microsoft's XAML-based libraries (e.g. WPF, Silverlight, WinRT and now UWP). Which I think is interesting because that's where they ended up. Modern windows programming doesn't involve writing WndProc functions anymore.
See, I think Flux is too low-level, and too hard to reason about from the top level. Not that the architecture is inherently bad (people are using it a ton), but things get out of hand way too fast. Regardless, I can't wait to see web development in a year!
I share the author's feeling of déjà vu. I feel like I've seen this article already. It was a comment posted on HN earlier today. It's kind of fascinating how someone's comment can get promoted to someone else's blog post in a few hours.
Just to note, it is the original commenter's blog, not stolen. @gecko wrote "And the comment is now a blog post with more context for those who were lucky enough never to write raw Windows code"
I love the correlation between modern in-browser development and programming early personal computers. It's akin to how digital logic abstracts away the tyranny of E&M physics, but several layers higher, and this time just between instruction sets/runtimes.
ChromeOS is kind of making this leap, but I really wonder when web browser ASICs (or equivalent) will start popping up.
> React by itself doesn’t actually solve how to propagate changes
It does actually - you update the state, then React propagates the changes for you through its props mechanism. Flux is an extra layer of indirection over state changes if you need it: https://twitter.com/floydophone/status/649786438330945536 (edit: I regret my tone here, there is clearly ongoing work in this area and no widely accepted best practice yet)
Flux is not message passing; React components do not redraw themselves; React components do not pass messages to each other; Flux only superficially looks like winapi because of the switch statement in that particular example.
React provides the view as a function of state. winapi is nothing like that.
React is a giant step towards functional programming. winapi is definitely nothing like that.
The author understands React quite well, it's this comment that, while also understanding React, misses the author's point by insisting on the superficial differences.
>It does actually - you update the state, then React propagates the changes for you through its props mechanism.
That it achieves something (change propagation) doesn't mean it's the solution to that thing. There's a reason the author used the wording "doesn't solve" (instead of "doesn't handle", "doesn't allow", etc). Flux (and its variations) is what really solves propagation in React without it becoming a tangled mess.
>Flux is not message passing
Nobody said it is.
>React components do not redraw themselves
They do, that's the whole point. React components manage, compose, and return their DOM representation structure, and that HTML structure is the analogue of "drawing". return(<b>{msg}</b>) IS drawing.
The fact that they don't have low level rendering code embedded in them is just an implementation detail (because of course the DOM already exists).
>React provides the view as a function of state. Windows is nothing like that.
Only, the author doesn't mention Windows the abstract OS; he mentions a specific implementation of its UI engine that clearly provides the view as a function of state.
Similar miscomprehensions of the article drive the other objections.
I don't agree with your core thesis. "Redraw" and "draw" are imperative verbs, which are ubiquitous in winapi; `return <b>{msg}</b>` is not a verb but a value. This is not superficial or an implementation detail, but rather a massive change in programming model, to the point that IMO comparisons that overlook this difference are not useful at best and harmful at worst.
That's because HTML has an abstract declarative way to define the layout/drawing whereas Windows did not.
But that's not especially important when it comes to React in general. What's important is that the "how to draw myself" (at a high level) never leaves the view, not whether the "drawing instructions" happen inside the app (as low-level draw commands) or outside (by a mechanism that handles composable values representing markup).
In both cases the view is a black box, in that nobody else determines its layout/style/contents. Whether somebody else or the view itself draws them might reflect a functional vs. a more imperative style, but it's not the key benefit of React.
In fact there are React implementations like React-Canvas that don't use the DOM/diffing mechanism at all -- they just invalidate the whole top level "viewport".
Why? Where is the practical difference for the program design between "returning HTML that the browser then draws" and "drawing themselves" at the end of the flow?
Serious question, and I'm not a Windows programmer, I might be missing something.
There is no practical difference. You could argue that manipulating the DOM is the closest analogy to Draw(), but trying to make a distinction between that and generating HTML (that is going to be used to update the DOM) seems like splitting hairs.
And: it's true that DOM manipulation could be seen as "lower level" than HTML generation, but the main difference for programmers is the API. A framework that forced you to call createElement('b') and createTextNode() and appendChild() would not get the adoption of one that lets you say return '<b>Foo</b>', all other things being equal.
Your question resolves to when to choose imperative vs. OO vs. functional, and while I'm not going to try to discuss that here, they are profoundly different. Winapi is imperative; React is functional.
That's a red herring. That would be relevant if we were discussing the imperative vs functional methodology.
But what the article discusses is the similarity in the architecture of two view management systems.
And even more so, the imperative changes in the Windows scheme described are encapsulated inside the views, and the overall system is more functional that you give it credit for. But, again, I think that this is beside the point.
It's pretty essential for non-trivial applications with data structures more complex than can be handed over to React. setState is just sugar - it also calls forceUpdate.
"It does actually - you update the state, then React propagates the changes for you through it's props mechanism."
That's not all that different from what DispatchMessage does in the Win32 API.
Anyway, I think that what FRP, React, and Win32 all have in common is a reliance on explicit local state. Components are not supposed to maintain references to other components and send them messages directly; instead, components are supposed to pass messages back to the top-level event loop of the component hierarchy, which then dispatches them to the appropriate subcomponent, which is supposed to take the appropriate action based only on information in the message and its own local state.
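The pattern described above can be sketched in a few lines. This is a hedged, framework-free illustration: components hold only local state and never reference each other, and a top-level dispatch routes messages by name. All identifiers here are invented for illustration, not any real framework's API.

```javascript
// Explicit-local-state dispatch: the top level routes messages to
// subcomponents, which act only on the message and their own state.
const app = {
  components: {
    counter: {
      state: { value: 0 }, // local state only; no references to siblings
      handle(msg) {
        if (msg.type === 'INCREMENT') this.state.value += msg.by;
      },
    },
  },
  dispatch(target, msg) {
    // the top-level loop decides which subcomponent receives the message
    this.components[target].handle(msg);
  },
};

app.dispatch('counter', { type: 'INCREMENT', by: 2 });
// app.components.counter.state.value is now 2
```

Nothing outside `counter` can silently mutate its state; every change is traceable to a message that passed through `dispatch`.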
This is in contrast with later OOP frameworks like Swing, MFC, NextStep/Cocoa, or the DOM, where it's common to hold a direct reference to another component and then update it via observer. These hold implicit global mutable state; the state of distant components in the tree may be silently updated in response to an event firing in some other component.
The trade-off here is whether you want spaghetti code or ravioli code. Functional frameworks (FRP, React, Win32) tend to locate logic for the details of what should change outside of the components themselves, which focus only on how it should change. That keeps component logic simple and prevents unexpected side-effects, but means that the code for the controller itself can become ginormous. OOP frameworks tend to move logic for what should change into observers, which then reach across the component tree and update a number of different components directly. You break up the logic into a number of bite-sized pieces, but there's no way to see or predict what the effect on the system of a whole of an event will be.
Interestingly, Polymer/webcomponents goes back to OOP-style ravioli code. I suspect that like the rest of the computing world, we'll see a see-saw effect on a roughly 4-5 year cycle, so maybe around 2018 observers and mutable state will become popular again.
The author does not state that React components are literally redrawn like Windows views. He simply recognizes a pattern and makes a pretty apt comparison.
> React by itself doesn’t actually solve how to propagate changes
In my experience, it really doesn't solve the problem all that well for a sufficiently complex app. I'm willing to give the author the benefit of the doubt on that point.
> React is a giant step towards functional programming. Windows is definitely nothing like that.
Err, that's not entirely true for MVVM apps, where views are just declarative markup that are rendered (retained mode) based on logical state provided by data bindings. It's been like that ever since XAML was introduced in 2006.
MVVM makes ubiquitous use of mutable state which means model change listeners, callbacks calling callbacks etc, FP/React is about using immutable state to dodge all these problems by design.
The point is that in both React and XAML apps, you (usually) don't write code to mutate the View. In your code, the View is just a declarative construct.
There's nothing stopping you from making immutable ViewModels in MVVM, other than bad perf (which React would presumably also suffer from on larger apps).
The point is React is not just the View: each component is a view plus the part of the "controller"/"presenter" that would take care of populating the view. And the controller/presenter is where state is always encapsulated, which React (mostly) gets rid of.
Facebook doesn't seem to perform so badly in terms of a larger app. More varied components on a page than most applications in general. Most of the quirkiness I see in FB tends to come from how they deal with eventual consistency with their backend in order to scale to millions of simultaneous users.
Adding a late comment to appreciate the MVVM synopsis in your blog post. The pain of tracking lots of callbacks and cascading updates is a lot of what Pete Hunt was talking about in Be Predictable, Not Correct [0], but there's not a lot of discussion applying that analysis to MVVM as for other forms of MV*. I find it's a real concern in WPF apps with much of any complexity.
Not sure I fully follow this. I understand components don't pass messages to each other and don't redraw themselves. But isn't Flux basically doing somewhat of a uni-directional message passing? Maybe my terminology is incorrect and I'm mixing up concepts, but it feels like a modified pub-sub type architecture (the difference is that when something is dispatched it's sent to _every_ registered callback, as opposed to subscribing specifically). Still feels like a form of message passing. I really don't know enough about how old Windows development worked, but from this article it seems like a _similar_ type of pattern.
One nitpick I have though is the giant switch statement they mention. I typically put all this into an object so you don't need to traverse through some huge switch. Not sure why everyone just doesn't do something like -
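A hedged sketch of that lookup-table alternative: handlers keyed by action type instead of a switch. The action names and state shape here are invented for illustration.

```javascript
// Handlers keyed by action type replace the giant switch statement.
const handlers = {
  TODO_ADD: (state, action) => ({ ...state, todos: state.todos.concat(action.text) }),
  TODO_CLEAR: (state) => ({ ...state, todos: [] }),
};

function handleAction(state, action) {
  const handler = handlers[action.type];
  // unknown action types fall through unchanged, like a switch default
  return handler ? handler(state, action) : state;
}
```

Same dispatch semantics, but each handler is a small named function you can test in isolation, and adding an action type is a one-line addition rather than another case in a growing switch.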
I wonder if you are running into the same thing that I have seen with React. Because each node in the React tree is called a "component", people tend to think that it should be self contained. In the article, the author opines that the state should be kept in the same object as the view.
I think this is where people run into problems with React and it is what gets them running for something like Flux early on (I agree that it is not necessary in many cases).
If instead of putting the state for a node in the same node that renders it, you put it in the parent node, suddenly things get a whole lot simpler. To look at it the other way around, a node keeps the state of its children and passes that state as props to the children. That way the children are all idempotent with respect to rendering.
So what do you do when the child wants to update the state? The parent passes a callback to the child and the child calls that callback. This updates the state in the parent and passes new props to the child.
The effect of this is separation of rendering logic from update logic. Every React component knows how to render itself from props. It knows how to request an update to itself with callbacks that are passed as props, but it doesn't know how to update itself. The parent holds the state of the children, but doesn't know how to render them. It simply passes that state to the children. The parent knows how to update the state of the children.
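The mechanism above can be sketched without any framework at all. This is an illustrative model, not real React code: the parent owns the child's state and hands the child props plus a callback for requesting updates.

```javascript
// Parent holds the model; child renders purely from the props it receives
// and requests updates only through the callback it was given.
function makeParent() {
  let childState = { count: 0 }; // the child's model lives in the parent
  const onIncrement = () => { childState = { count: childState.count + 1 }; };
  // the child is idempotent with respect to rendering: props in, view out
  const renderChild = (props) => `count: ${props.count}`;
  return { render: () => renderChild(childState), onIncrement };
}

const parent = makeParent();
parent.render();      // "count: 0"
parent.onIncrement(); // child requests an update via its callback
parent.render();      // "count: 1" — new props, same pure render
```

Note the child never touches `childState` directly; it only signals intent, and the parent decides what the new state is.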
You may recognize this pattern. The parent is the controller for the child. The state that the parent holds is the model for the child. The parent node together with its children form an MVC structure. Of course each child can be a parent too, which leads to a hierarchical MVC structure.
With this, you will usually not need Flux. Of course, one could imagine that passing callbacks is tedious. We could create a special object that handles all the "actions" that the child might make. Then we could simply pass that object in the props. Sometimes, the state that you want to update is several layers up and you end up passing this object down the tree just so that the very bottom node can call "actions" on it. So, instead, we could make this a singleton object and call it a "dispatcher".
Also, you sometimes have situations where, for convenience sake, you might want to store the state in several places in the tree. Let's say every terminal node needs a certain piece of state, so you end up having to put it in the top node in the tree. Then when it changes, you end up rendering the whole tree. So instead, you can create an object (let's call it a "store") that holds the state. Individual nodes observe the "store" and when it changes, they pass the relevant values as props to their children (never updating their own state or rendering it directly!!!). We can even make this "store" get updated by pushes from outside events.
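A minimal "store" matching that description is just state plus observers. All names here are illustrative, not any particular Flux library's API.

```javascript
// A store: a piece of state that observing nodes can subscribe to.
function createStore(initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (fn) => listeners.push(fn),
    update(next) {
      state = next;
      listeners.forEach((fn) => fn(state)); // notify observing nodes
    },
  };
}

const store = createStore({ user: null });
let seen;
// a subscribed node would pass the new value down as props, not setState
store.subscribe((s) => { seen = s.user; });
store.update({ user: 'ada' }); // e.g. a push from an outside event
```

The observing node re-renders its subtree from the new value; nothing in the tree renders the store's state as its own local state.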
And there you have Flux. Useful in certain circumstances, but I would suggest that if you are finding that you absolutely need it, it's because you are trying to render local state, which you should not do.
With respect to the original article, this is what Windows got wrong (with everyone else, of course). The code that renders the state, holds that state, which makes you reach for WndProc().
I've watched a lot of talks and read a lot of articles about these technologies, and this is the clearest progression I've seen explaining the rationale for Flux.
And thank you for making the point about what Windows got wrong. It is a critical distinction between two things that otherwise look very, very similar.
Can you write another one on the progression to Relay? GraphQL is where they lost me.
Stated more simply, React is just like "rebooting" your computer after you have changed the config files. It is, in this respect, quite ancient technology, except that the framework hits the "reset" button for you.
I've never liked the comparison of Flux to functional reactive programming. It's really just good ol' object-oriented design. Actions are akin to the Command pattern and the Dispatcher feels like a Mediator. Passing callbacks instead of objects and making a mostly directed graph does not yield FRP.
The big dispatcher switch in Flux is eerily reminiscent of how we used to program AWT widgets back in Java 1.0 days. This architecture was improved greatly in Java 1.1 with a delegation model. If history is to repeat itself, as the OP so eloquently argues, then, if you want to see where Flux will be going in the next couple of years, start using knockout.js now and for once stay ahead of the curve.
I mean, both of those get things right. XAML/WPF tries to make the interface cleanly human editable in code, but is incredibly verbose, and the "can be human editable" constraint ends up being "is only human editable" pretty quickly. Interface Builder is a lot more intuitive and powerful, but it's more of an all-or-nothing affair, in my opinion.
Beyond that, though, they have way more in common than they do different. I'm not sure it makes sense to talk of one of them as a better model than the other, versus maybe a better implementation of the same workflow.
Well, as there's no Cocoa on Windows and no WPF on OS X it doesn't really matter.
Also, that SO post reads a lot like "I can't stand Interface Builder" (which is understandable if you aren't prepared to embrace it).
I find the Cocoa model of not having to do anything with XML far better for my sanity. But then again I guess there are people who prefer XML artistry to drawRect overriding. :)
And uses it incorrectly. Idempotence is when f(f(x)) == f(x). React views take some props and/or state and return a component, so applying the view to its output would just give an error.
I think the word he's looking for is "pure".
EDIT: I was wrong - apparently "idempotent" can also describe a consistent relationship between the input and some state. In that sense, it's actually a very good description of how the input to a React view affects the DOM.
Well, even in Wikipedia, the definition is quoted as being different for CS vs the unary operation you describe.
"In computer science, the term idempotent is used more comprehensively to describe an operation that will produce the same results if executed once or multiple times."
Which is more that f(x) == f(x) == f(x) for the state affected by f.
I'm not sure that they are. You just need to model the concept of state as the function's input/output, as functional programmers are eager to do :)
If we start in state x, then apply operation f, the resulting state is f(x). If we apply operation f again, we'll be in state f(f(x)). If f is idempotent, then state f(x) and state f(f(x)) are identical.
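Checked concretely: sorting an array is idempotent over the array's state, while incrementing a number is not.

```javascript
// f(f(x)) === f(x) for sorting; not for incrementing.
const sortAsc = (xs) => [...xs].sort((a, b) => a - b);
const inc = (x) => x + 1;

const once = sortAsc([3, 1, 2]);  // [1, 2, 3]
const twice = sortAsc(once);      // [1, 2, 3] — applying again changes nothing
// inc(inc(2)) is 4 but inc(2) is 3, so inc is not idempotent
```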
Don't they have to be pure in order to be idempotent? If you might get a different virtual DOM even when props/state are unchanged (not pure), then re-rendering the component might update the DOM even when props/state are unchanged (not idempotent).
It makes sense that idempotent can have both definitions.
Mathematical functions can't have side effects so they have the "normal" definition of f(f(x))=f(x).
In computer science, with side effects, since state can change, the passage of time could be thought of as similar to composition: f(x) ... f(x) ~= (f o f)(x) = f(f(x)). This is the line of thinking that gets to monads, where successive calls would in fact be f(f(x)) (or f(x).f()).
Am I understanding idempotence/side effects/monads right?
Rendering a React component satisfies the f(f(x)) definition, too! We just need to consider the component's props/state to be constant, which is the scenario that the article describes.
In general, the act of rendering a React component is a function from (component props, component state, DOM state) to DOM state. But, when the component's props/state are considered constant, the rendering process is simply a function from DOM state to DOM state.
This is an idempotent function over DOM state if and only if the React component's render method always returns the same virtual DOM tree — that is, iff the React component's render method is pure. In that scenario, the internal React renderer will perform the diff, discover that the physical DOM state matches the returned virtual DOM tree, and leave the physical DOM state unchanged.
Turns out, saying that the render process is idempotent is exactly equivalent to saying that the component's render method is pure! Cool!
> A unary operation (or function) is idempotent if, whenever it is applied twice to any value, it gives the same result as if it were applied once; i.e., ƒ(ƒ(x)) ≡ ƒ(x). For example, the absolute value function, where abs(abs(x)) ≡ abs(x).
So is f(x) = x + 1 idempotent?
Under this part: "whenever it is applied twice to any value, it gives the same result as if it were applied once" it would appear so.
f(2) = 3
f(2) still = 3.
When they say "applied twice", they're not talking about two independent applications to the same initial value. (That's a test for functional purity, not idempotence.)
Instead, they're referring to the following sequence of actions:
1. Update the value x according to the operation f.
2. Update the value x according to the operation f.
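The distinction drawn above, made concrete with the thread's own f(x) = x + 1: purity compares two independent applications to the same input, while idempotence applies f to its own output.

```javascript
const f = (x) => x + 1;

const isPure = f(2) === f(2);            // true: same input, same output
const isIdempotent = f(f(2)) === f(2);   // false: 4 !== 3

// Math.abs, by contrast, is both pure and idempotent:
const absIdempotent = Math.abs(Math.abs(-5)) === Math.abs(-5); // true
```

So f(2) === f(2) only shows determinism; the idempotence test is whether feeding f's result back into f leaves it unchanged.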
WndProc is how the windows manager communicates with Windows UI code. Flux is how the UI communicates actions back to the data layer of the application. They're completely different.