We've been thinking about this a lot lately for some of the projects we've been doing for Light Table and we've essentially been doing the same thing as what David's proposing here.
What React ultimately opens up is a way to do immediate mode UI [1] on top of the DOM _efficiently_, which changes things pretty dramatically. It means we can start to treat the browser as just a renderer and get the infectious design decisions of the DOM out of our programs. If nothing else, this gives us freedom, but as David is suggesting, I think this also gives us an opportunity to treat UI much more directly than we currently do. If you want to know what the state of your UI is, you just have to read linearly down through the code that produces your tree. No nest of dependencies, no state hidden in the UI components; you could even get rid of event hierarchies if you wanted.
More important than anything else, this gives us a chance to dramatically simplify our model for UI and magically be even faster than we were before. Sounds like a win to me.
If you want to do immediate mode, then why would you use the DOM at all? Canvas is supported in IE9+, and is much less complicated for such applications.
I'm not convinced that reimplementing all your high-level drawing routines by hand, figuring out how to handle responsive layout, and rebuilding all activation and behaviors on top of that is less complicated than just using what the DOM provides.
Even text wrapping must be done by hand when you're using canvas.
I've written text wrapping and responsiveness code for canvas, in javascript - I don't think the complexity is even of the same order of magnitude as React or Angular.
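For a sense of scale, the core of canvas text wrapping is a short greedy loop. Here is a rough sketch where the measuring function is injected; in a real canvas app it would be `function (s) { return ctx.measureText(s).width; }`. All names are illustrative, not from any library.

```javascript
// Greedy word wrapping for canvas text. The measure function returns the
// rendered width of a string; injecting it keeps the logic canvas-free.
function wrapText(text, maxWidth, measure) {
  var words = text.split(/\s+/);
  var lines = [];
  var current = "";
  for (var i = 0; i < words.length; i++) {
    var candidate = current === "" ? words[i] : current + " " + words[i];
    if (measure(candidate) <= maxWidth || current === "") {
      current = candidate;   // word fits, or it's alone on an empty line
    } else {
      lines.push(current);   // line is full: start a new one with this word
      current = words[i];
    }
  }
  if (current !== "") lines.push(current);
  return lines;
}
```

With `measure` as character count and a width of 9, `wrapText("the quick brown fox", 9, s => s.length)` yields `["the quick", "brown fox"]`; drawing is then one `fillText` per line, stepping down by the line height.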
What about line height? Judging by some of the responses I've seen on Stackoverflow, calculating this involves running a loop and querying the pixel colours from the canvas's buffer; a very ugly hack, if you ask me!
Line height is straightforward enough: if you draw text with a line height of N, then you move N pixels down for the next line.
Text baseline is the tricky part - if you want to e.g. put a taller piece of text next to a shorter one, then they should be put together so they share a baseline, but there's no decent way to get the text baseline.
The hack I use is to draw the text in a hidden span, and put a 1px inline-block span next to it, and check where the 1px span ends up. This works though it is ugly.
Why don't you precompute the space things use, including the drawn text, using a transform matrix? Then you could easily calculate the baseline of anything by just doing matrix multiplications.
However, I think you would still end up using a hack. The general idea of creating a replacement for the DOM has bugged me for a while too. Replacing the DOM completely with a state machine would be great, I think. We need a New Document Model that supports "Batch Operations", is "Stateful" and uses the latest best-performing published algorithms to do so.
Components are the CORRECT way, I fully agree with the facebook team here.
It seems that there is one guy who ALREADY DID all those things and replaced the DOM with something that's more akin to 2013's technology requirements. Link: http://www.nidium.com/
We have a tool that does Canvas drawing from React too. Currently some pieces are cached in retained mode but it's just an implementation detail for certain performance characteristics.
Canvas APIs are currently not as fast as the DOM renderer. There's also a lot of added complexity with regards to layout, text flows and text input. The code you'd have to ship down to solve all that with pure Canvas isn't worth it for a lot of applications.
There are many cases (e.g. accessibility for people using screen readers) where having your web site be a machine-parseable document is very valuable. There are cases where using Canvas for everything makes sense, but having it be the default feels like a step back in many respects.
"Immediate mode" is used as a metaphor here. He doesn't want to actually do graphics work. He wants to send things out to get drawn and have them get batched up appropriately and only rendered when needed.
The goal isn't to do immediate mode, it's to implement UI in the simplest manner possible. Throwing away the DOM and reimplementing significant parts of its functionality in Canvas doesn't make life simpler.
> Thus we don't need React operations like setState which exists to support both efficient subtree updating as well as good Object Oriented style. Subtree updating for Om starting from root is always lightning fast because we're just doing reference equality checks all the way down.
I don't think anyone actually likes using explicit setters/getters in frameworks like Backbone and Ember. Of course Angular avoids it, but that's via the crazy "dirty-checking". Obviously the new Object.observe will help this situation, but I love how simple Om/CLJS makes this.
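The quoted mechanism can be sketched in a few lines: walk the component tree and prune any subtree whose data is identical (===) to what it rendered last time. A toy illustration of the idea, not Om's actual internals (all names are made up):

```javascript
// Re-render a subtree only when its data is not reference-equal to the
// data it was rendered with last time. With persistent data structures,
// an unchanged reference guarantees an unchanged subtree.
function renderTree(node, prevNode, render, log) {
  if (node === prevNode) {
    log.push("skip");        // whole subtree unchanged: prune here
    return;
  }
  render(node);              // shallow render for this node only
  log.push("render");
  var children = node.children || [];
  var prevChildren = (prevNode && prevNode.children) || [];
  for (var i = 0; i < children.length; i++) {
    renderTree(children[i], prevChildren[i], render, log);
  }
}
```

If a new tree shares an untouched child with the old tree, that whole branch is skipped with a single pointer comparison, which is why updating from the root stays fast.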
> This also means that Om UIs get undo for free. You can simply snapshot any state in memory and reinstate it whenever you like. It's memory efficient as ClojureScript data structures work by sharing structure.
> VCR playback of UI state
I can't wait for details on this. This has gotten me really excited about client-side apps again.
Are there a lot of details to explain? An immutable data structure means that modifications to the structure tend to be memory cheap. So you save every instance you care about secure in the knowledge that it doesn't waste much space. Then to perform forward or backward operations, you just switch which version you're looking at.
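A minimal sketch of that idea in plain JavaScript (a hypothetical helper, not Om's API): keep every state reference around and just move a pointer for undo/redo. With persistent data structures the snapshots share structure, so holding them all is cheap.

```javascript
// Snapshot-style history: "undo" is switching which immutable state
// reference is current. States are never modified, only replaced.
function createHistory(initialState) {
  var states = [initialState];
  var index = 0;
  return {
    current: function () { return states[index]; },
    commit: function (newState) {
      states = states.slice(0, index + 1);  // drop any redo branch
      states.push(newState);
      index++;
    },
    undo: function () { if (index > 0) index--; },
    redo: function () { if (index < states.length - 1) index++; }
  };
}
```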
maybe "details" is the wrong way to phrase it. I'm excited to see /a real world implementation/, either proving that it really is that simple, or showing what other complications arise.
Going beyond the normal Undo to error/state correction. A lot of JS apps right now just ask a user to refresh the page when something goes wrong. This provides an easier way to get back into a previous working state.
It gets complicated; you have to store the inverse of every API call you use so you can back it out, which is what proper Undo support requires. IMO the UI is the easiest part of it, as you normally just reuse methods you already use in your app.
For example:
* Rename model: inverse is to rename back to stored name
* Delete model: inverse is to create with same data
* Create model: inverse is to delete the model
* Add model to collection: inverse is to remove from collection
.. and so on. It's pretty simple. I use something similar to JS-Undo-Manager[1] to manage it. Om isn't going to get you around needing to make those calls and your UI should (generally) be able to handle executing the inverse of an action at any time.
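The pattern above can be sketched in a few lines. This is a hypothetical manager in the spirit of, not the actual API of, libraries like JS-Undo-Manager:

```javascript
// Inverse-command undo: each executed action carries the function that
// backs it out, and undo pops and runs the most recent inverse.
function createUndoManager() {
  var stack = [];
  return {
    execute: function (action) {
      action.run();
      stack.push(action);
    },
    undo: function () {
      var action = stack.pop();
      if (action) action.inverse();
    }
  };
}
```

For "add model to collection" the action would push the model in `run` and remove it in `inverse`, and so on down the list above.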
In particular, if you have any triggers in your API they may need to be deferred until after undo is no longer possible, or have explicit commits.
Let's say you have the ability to add a user to a group, and they will get an email about it. You accidentally add your boss to the "I hate my boss" group. How do you support undo in this case? Some sort of timeout before sending is really the only valid approach, but where?
One way is to do the deferred part on the client side, but this means closing the browser after adding somebody will mean they won't get notified at all. You really need support for undo in the API for this to be meaningful (or the "add to group" API returns a notification ID that you can cancel).
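As a sketch of that last option: the server holds the notification for a grace period and hands back a cancellable id. Everything here is hypothetical; the scheduler is injected so the logic is easy to follow (in production it would be setTimeout or a job queue).

```javascript
// Deferred side effects with an undo window: enqueue returns an id, and
// cancelling that id before the grace period elapses suppresses the send.
function createNotifier(schedule) {
  var nextId = 0;
  var pending = {};   // id -> send function, removed once sent or cancelled
  return {
    enqueue: function (send) {
      var id = nextId++;
      pending[id] = send;
      schedule(function () {           // fires after the grace period
        if (pending[id]) { pending[id](); delete pending[id]; }
      });
      return id;                       // caller can cancel with this id
    },
    cancel: function (id) { delete pending[id]; }
  };
}
```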
As an aside, I'm pretty sure the inverse of deleting a model is typically going to be far more complex than just recreating it.
Deleting a social network profile, for example, involves deleting all photos, associations, posts, etc. Easier just to deactivate models so you get easy undo, and periodically flush deactivated objects.
Yes, you're right; I shouldn't have put the "it's pretty simple" statement just a few sentences after "It's complicated" - clearly, depending on the app, undo functionality can get pretty hard to manage, especially when there are emails or complicated relations involved.
Om looks very interesting and seems to handle exactly what I've been looking for. We have reactive widgets, which is great for making changes in the data automatically update the UI. But the hard part is closing the loop: how does the widget communicate back to the data about changes? It would be interesting if we had a zipper-like abstraction, so that the widget gets handed both its data and a function to call when it wants to change just its data. Then that function is smart enough to go find the right place in the big data structure to go do the replace.
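That zipper-like abstraction can be sketched directly: hand the widget its slice of the state plus an update function already scoped to a path into the big structure. This is a hypothetical helper in the spirit of what Om's cursors provide, not Om's actual API:

```javascript
// Non-destructive update at a path: copies only the nodes along the path,
// sharing every untouched branch by reference.
function updateIn(state, path, f) {
  if (path.length === 0) return f(state);
  var key = path[0];
  var copy = Array.isArray(state) ? state.slice() : Object.assign({}, state);
  copy[key] = updateIn(state[key], path.slice(1), f);
  return copy;   // new root; siblings are shared as-is
}

// A "cursor": the widget reads its data and writes back through the path,
// without knowing where it lives in the overall app state.
function cursorFor(getRoot, setRoot, path) {
  return {
    value: function () {
      return path.reduce(function (s, k) { return s[k]; }, getRoot());
    },
    update: function (f) { setRoot(updateIn(getRoot(), path, f)); }
  };
}
```

A todo widget would receive `cursorFor(getRoot, setRoot, ["todos", i])` and call `update` in its event handlers; the root swap then triggers a re-render from the top.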
Edit: Ok, I now see how the Om todo example is handling update, and it's really cool. It creates a set of channels that encapsulate the knowledge of how to handle each type of change to a todo [0]. That gets passed in to the todo widget as "chans" and the widget sends messages to it in its event handlers [1]. I wonder if this whole channel CRUD abstraction is general enough to make it part of Om or another layer so that it didn't have to be recreated each time.
I apologize that the Om TodoMVC version is a little bit buggy at the moment, I put it together mostly to demonstrate the benefits of the React/Om model and it appears I missed a couple of TodoMVC behavior issues as they weren't important for demonstrating the approach - I'll try to clean up these annoyances later this evening.
Thanks for posting this. Forgive me for not looking over your code before asking: how does Om compare with Pedestal? Am I correct in thinking there is a lot of architecture overlap, with Pedestal's application state map occupying a similar role to React's virtual DOM, and mutation through message tuples in channels?
Both frameworks seem to offer similar advantages: decoupled model and view, UI state playback (VCR style) and instrumentation. How significantly do these frameworks differ? Or are we seeing different paths towards a convergence around a new set of model/view practices?
Yes, you are right, there are overlaps. Pedestal is undergoing a major rewrite these days, as I heard from the team at the ClojureConj. Also, Brenton Ashworth, Pedestal's developer, has recently started investigating reactjs and found that it "perfectly complements the new version of Pedestal" (see his Twitter https://twitter.com/brentonashworth ), so exciting things to come!
I know it sounds crazy, but I think your post just outlined the next 5 years of web development innovation, swannodette. This ties together a lot of extremely important ideas, for the first time in one place. Thanks for doing this.
I already have my own React.js+ClojureScript bridge for personal projects, because I think it's an extremely powerful web dev combination. I'm glad I can finally abandon my own library for Om!
Great to hear, it needs more work :) It's more or less a conceptual foundation at this point, but I think it's going to rock with help from the community.
BTW, it was your React mailing lists posts that also really got me digging into this.
> Om never does any work it doesn't have to: data, views and control logic are not tied together. If data changes we never immediately trigger a re-render - we simply schedule a render of the data via requestAnimationFrame.
Ember.js has done this since day one with the Run Loop. Additionally, it allows you to coalesce operations yourself if you need control.
Angular also would not update the DOM as many times as the backbone example as it uses dirty checking to get around this problem.
While it's nice that Ember.js provides a run loop, that's not enough: batching all your updates into one place just puts all of the work together. You're still going to pay for the work, and in the worst case you have to hand-coalesce, yuck. In Om, if there is no work ... there is no work; it's implicit in the system! No need for hand-coalescing your state changes.
Angular.js still suffers as shown by the optimization article I linked to in my post. These kinds of typical optimizations can and should be pushed out of the user's hands.
You're misunderstanding. There's no hand coalescing in Ember, and there's no duplicate work. If you change your models a thousand times in a row, you still get one redraw, fully automatically.
The same holds for dependent computed properties. If you change a dependency many times in a row, the dependent property only reevaluates once.
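The coalescing both frameworks rely on boils down to a dirty flag plus a scheduled flush. A sketch with an injected scheduler (requestAnimationFrame in a browser; manual here so the batching is visible, and all names are illustrative):

```javascript
// Many invalidations, at most one scheduled flush, exactly one render per
// flush. This is the shape of both Ember's run loop and React's rAF batching.
function createRenderer(render, schedule) {
  var dirty = false;
  return function invalidate() {
    if (dirty) return;               // a flush is already scheduled
    dirty = true;
    schedule(function flush() {
      dirty = false;
      render();                      // one render, however many invalidations
    });
  };
}
```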
It's still slower than ReactJS but much less so than before. In fact it's now #3 in that benchmark suite behind ReactJS and Backbone (and much faster than ReactJS in the completing benchmark), although I wouldn't be surprised if the other frameworks were set up incorrectly too.
How does the performance of this compare to AngularJS? Also a common performance issue in AngularJS is when there are too many watchers (such as a large table with many rows and columns) which causes the $digest to become really slow. Would Om/React avoid these kind of performance issues?
Yeah, I think it's rather misleading to compare Om to the performance of the least optimized and most naive (on purpose, mind you) framework instead of a more robust one.
At least according to this data, React and Backbone are the fastest. (Om is faster than vanilla React)
One suspects that with hand-optimization the other frameworks could be a lot faster, but the point of this post is that it is fast without hand-optimizations.
Keep in mind that those benchmarks don't use the requestAnimationFrame() batching strategy swannodette mentions (out of the box it's not installed because it makes testing harder: you have to wait for the next frame).
I'd be interested to see these numbers after doing `npm install react-raf-batching` and doing `require('react-raf-batching').install()`.
In general "faster without optimization" is completely meaningless because it doesn't mean fast with optimization, which is what actually matters.
Example: Average code written in node.js is typically just as slow as, say, PHP (in fact node.js was 3x slower than PHP in the TechEmpower benchmarks). However, it has the potential to be much faster when optimized, whereas PHP hardly rewards such optimization at all.
The title of this post is strange. This seems more like the future of JavaScript views than the future of models or controllers.
I don't see a large movement to immutable data structures on the horizon in JS. I can appreciate the performance implications in Om, and would be interested in using React + Mori to the same end, but I'm not sure that it would keep me from having mutable data structures to represent most of my application state.
There are so many now-solved problems in JS MVCs that were a complete trainwreck several years ago - client-side routing, sanely managing data, and intelligently organizing your code base - that all assume mutable data structures and traditional object-oriented paradigms.
This might be the future of ClojureScript (in fact, it should be the future of CLJS, as it's much more elegant than any other view solution I've seen for it), and functional data structures may be a clever way to optimize the DOM, but this certainly doesn't seem like the future of JavaScript to me.
React is used by Facebook and Instagram and many others - I think these companies know a thing or two about scaling rich complex client side applications. React + immutable data has been used by Facebook to get order of magnitude performance enhancements.
So I dunno, seems like it might catch on eventually ;)
I don't think Instagram is a good example either, but could you please provide the name of a project as big as Facebook (or even 10% of it) using AngularJS?
I can imagine client-side routing, sane data management, and intelligent code base organization all without mutating the data of your state.
Granted, JS won't move to immutable data structures, but React encourages you to treat your states as immutable, which leads to lots of benefits.
This is like a dream coming true. I don't like JavaScript's quirks, nor programming the DOM with templates or functions. I'm looking to build somewhat complex UIs without having to think in JS and manage state. React absolves me from the DOM and ClojureScript keeps JS at bay.
If only starting ClojureScript development weren't so hard. I'd like to use a browser REPL and an IDE like LightTable. I am used to LiveReload's speed, which makes loading changes instantaneous. But ClojureScript compiling still seems slow(ish), and I have already found half a dozen different cljsbuild configuration examples. Compiling simple cljs files can take anything from under a second to 20 seconds, and I don't understand why.
Could you perhaps tell more about your development process, Swannodette? How do you develop ClojureScript apps? I don't see anything beyond base cljsbuild in Om's project.clj. I confess I haven't yet had time to play with Om and see how fast it can be compiled.
The first build of a ClojureScript app is somewhat slow because we need to boot the JVM, compile ClojureScript, and then analyze and compile your code. After the first compile it should be sub-second. I always use the auto build mode, and when I'm developing I don't use any optimizations. Works well enough for me.
Thank you, yours is what I tried first and it's definitely the fastest, sub-second after the first compile. However there is no REPL like cljs-start and a couple of others I tested have. Between cljsbuild, Austin and LightTable I haven't yet found a stable and fast solution. I'll try adding these various REPLs to your example and see what happens.
External browser with REPL inside LightTable seems easiest for a noob like me. Unfortunately LT doesn't support that use case yet, and you can't use the Chrome debugger inside it. I still need to see what the browser thinks is happening, not just the ClojureScript side.
I did manage to get React + JSX, CoffeeScript and ClojureScript + bREPL working with LiveReload, in a single page app. That felt truly powerful.
Some notes about what I did here: you start the server by running (go); you can quit the CLJS REPL easily by typing :cljs/quit as normal, return to the user namespace, and then safely (stop) or (reset); stop will stop the server and reset will refresh all of your Clojure namespaces (useful if you're writing macros in Clojure for use in ClojureScript).
Aside from this, just turn on cljsbuild auto with :optimizations :none, and code recompiles are sub-second.
Thank you. I will definitely take a look at these over the weekend. User.clj seems particularly useful. Reloaded workflow reminds me of IPython, where you need to reload modules and reset the object instances for the new code to take effect.
> External browser with REPL inside LightTable seems easiest for a noob like me. Unfortunately LT doesn't support that use case yet...
It certainly does :)
On the connections panel click 'Add connection' and then 'Browser (External)'
The first time you do that it will give you a script tag to add to your page. Once you've added that and loaded the page in the browser, try again and it will connect.
You can email me (jamie@kodowa.com) or the LT group (light-table-discussion@googlegroups.com) if you have any problems :)
Ah, I see. Thanks for the pointer. The docs on the connections pane say only JavaScript, CSS and HTML can be evaluated, so I did not try with ClojureScript. I did, however, try to connect from LT to nREPL, which had a running bREPL to an external browser. That did not work (no connection), though I have no idea if it even should... ;)
I'll see how it goes and will post to the LT group if I stumble on something.
> The docs on the connections pane say only JavaScript, CSS and HTML can be evaluated
Ah, that's because it's a two-stage process. You need something that can compile cljs to js (e.g. nREPL) and something that can eval js (e.g. browser, node, LightTable).
I think I was confused about that at first too. We should figure out how to make it clearer.
I don't think ClojureScript can become popular beyond a subset of Clojure users unless it becomes self-hosted, so it can be compiled without a JVM and evaled in a browser with an extension. The JVM is not universally loved or accepted.
I keep meaning to make a game with it, but always give up because it's so much more painful than pretty much any other to-JS language.
That might be, but it's young yet. Just seeing its ideas (or functional programming in general) getting more widespread, like React here, feels good to me. I have more pain points with JS. I'd rather deal with build difficulties than have to handle DOM state, evented programming and JS quirks.
This is really cool, and I am one of those people who roll their eyes whenever there is another article on HN about some newfangled way to build JavaScript apps.
One logical step from here, instead of having a one-to-one correspondence between the "virtual" DOM and the browser DOM, is to introduce a higher-level meta representation based upon the context. This seems like a logical path towards a generative, projectional approach to controlling UI and browser document rendering in general. It's been tried before in several contexts, and the hacks I've tried myself have always been too hard to get my head around since it's a complex problem, but this seems like it could be a really decent foothold for building a projectional, transform-based paradigm. For example, having a meta-DOM that encodes mathematical notation (probably inspired by LaTeX), which gets transformed into the current virtual DOM, which is used to update the real DOM. The user manipulates an integral on screen, and the downstream transformations are performed lazily and efficiently all the way to the screen. This type of lazy evaluation from document to screen is essentially the core challenge (from an engineering standpoint) in building a usable real-time projectional editor like the one demonstrated by Intentional Software back in 2010 [1].
That's exactly what we encourage :) composition of higher level components that render down to other high level components that eventually render to (virtual) DOM.
I wonder if this approach would work with visual XML (DocBook) and HTML editing, where the resulting DOM can be complex and large. A competent writer might generate loads of input (writing 100 letters per minute, copy-paste). If my writing tool has latency, that annoys me far more than game latency.
Other approach I have been testing is contenteditable, but this won't work with virtual DOM at all. Though how you parse an existing XML document to React's virtual DOM is beyond me. Perhaps you have to generate function calls SAX style.
> We could do that probably. Would need your own parser
Parser for HTML and XML to generate React components and generate virtual DOM from that? This got me thinking.
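As a sketch of that thought: once any parser (SAX or DOM) has produced a plain tree, generating a virtual DOM is just a recursive walk. Here `h()` stands in for React's element constructor, and the tree shape and names are assumptions for illustration:

```javascript
// Convert a parsed markup tree ({tag, attrs, children} with strings for
// text nodes) into virtual DOM by recursively calling the injected h().
function treeToVDom(node, h) {
  if (typeof node === "string") return node;          // text node
  var children = (node.children || []).map(function (child) {
    return treeToVDom(child, h);
  });
  return h(node.tag, node.attrs || {}, children);
}
```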
> With content editable we can just parse the html. We have an example in the repo.
It must be the weekend denseness setting in... I grepped through the React repo, but I just found bits in the core that had something to do with contenteditable. Could you perhaps point me in the right direction? If I understand this correctly, I could parse the entered HTML to virtual DOM.
I've been watching Clojure for a while, and I love the spirit of this work.
However, as a framework author, I feel obliged to point out that of course you can create a more expressive and performant UI framework in a language with S-expressions, macros, and value semantics. What's hard is doing it in JavaScript. :)
It also feels a bit like reverse logic to cite ever-faster JavaScript VMs as a reason to choose a new framework for performance reasons -- shouldn't it matter less exactly how you structure your application logic when you're running on a "fusion reactor"? -- but I realize there's some subtlety here about lower constants enabling better algorithms. (Still, if your framework includes a language compiler or gets to take advantage of an expressive macro system, it should be able to run on anything.)
Once it's possible to compile ClojureScript without booting up a JVM -- which could happen if it becomes self-hosting -- I'll make a Meteor package for it. I'd also like to see the compile times get a little shorter and the runtime library get a little smaller.
The main features David talks about in the post are available in vanilla JavaScript:
> If you're a JavaScript developer, I think taking a hard look at React is a really good idea. I think in the future, coupling React with a persistent data structure library like mori could bring JS applications all the way to the type of flexible yet highly-tuned architecture that Om delivers.
I think there are a couple reasons Mori and other libraries like it haven't caught on in the JS mainstream:
1. It's clunky to use them in vanilla JS compared to the default mutable objects and arrays. While you gain simpler semantics, the code becomes harder to read, and that tradeoff isn't always worth it.
2. Most JS developers are not experienced with building programs around immutable values.
Meteor could use Mori inside minimongo on the client, for example, and turn objects into vanilla JS before passing them to the app, if that implementation offered significant performance benefits (or equal or better performance and cleaner code, though the size of the Mori payload would also have to be weighed). Once the app has to deal with Mori objects, though, it starts to feel less like the JavaScript people know.
Would you mind simply explaining a little more what an immutable value is? I understand immutable as something that cannot be changed.
> all of our collections are immutable
How can a collection which is data that eventually ties into a db be unchangeable?
Think of strings in JavaScript. Those are already immutable:
var fooString = "foo";
var secondFooString = fooString;
secondFooString; // => "foo"
fooString = "bar";
secondFooString; // => "foo"
We set the variable fooString to point to a different string, but the original, underlying string hasn't changed. In JavaScript, we can think of a string as a value.
This is not the case with arrays in JavaScript:
var firstArray = [1, 2, 3];
var secondArray = firstArray;
firstArray[0] = 100;
firstArray; // => [100, 2, 3]
secondArray; // => also [100, 2, 3]
Because we can change the underlying contents of the array, an array in JavaScript isn't a value. It's a place: a reference to a location in memory. The underlying value could be changed at any time.
But, using Mori, collections are values, just like strings:
var firstVec = m.vector(1, 2, 3);
var secondVec = firstVec;
firstVec = m.assoc(firstVec, 0, 100);
firstVec; // => [100, 2, 3]
secondVec; // => still [1, 2, 3]
Instead of modifying firstVec in place, mori.assoc creates a new vector that is identical to firstVec except for the change we want. We then assign the result to firstVec. secondVec is unchanged. We are unable to go in and change the underlying values because a vector is a value, not a place.
The most obvious way to build this would be to deep-copy the entire collection when it's changed, but that would of course be way too slow and wasteful — imagine copying a one-million-element array just to change one element. Clojure, ClojureScript and Mori minimize unnecessary copying using a very thoughtfully designed data structure you can read about here: http://hypirion.com/musings/understanding-persistent-vector-... The short story is that, surprisingly, you get "effectively O(1)" copying when you use assoc.
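The linked article's core trick can be miniaturized: store the vector as a shallow trie and have assoc copy only the nodes on the path from root to the changed leaf, sharing every sibling. A toy version with branching factor 4 (real implementations use 32-way branching plus a tail optimization; names here are illustrative):

```javascript
// Path-copying assoc on a small trie: only the nodes between the root and
// the changed leaf are copied, so "copying" the whole vector is cheap.
var BITS = 2, WIDTH = 1 << BITS, MASK = WIDTH - 1;  // branching factor 4

function assocTrie(node, depth, index, value) {
  var copy = node.slice();                  // copy only this node
  if (depth === 0) {
    copy[index & MASK] = value;             // leaf level: set the slot
  } else {
    var shift = depth * BITS;
    var childIndex = (index >> shift) & MASK;
    copy[childIndex] = assocTrie(node[childIndex], depth - 1, index, value);
  }
  return copy;                              // siblings are shared as-is
}
```

For a 16-element trie (depth 1), changing one element copies two small arrays and shares the other three leaves untouched, which is why the reference-equality checks in Om's render path work.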
About what I'd expect for the first benchmark, but that's the kind of intervention from the user that I think frameworks should not require. The time still spent by Backbone.js on the second one more or less illustrates why I don't believe in event oriented MVCs.
I've been expecting this after seeing a few of your teaser tweets, and as expected I absolutely love it! I've been waiting for something like this ever since I read up on persistent data structures and functional reactive programming almost 10 years ago.
I'm wondering how this compares to the Javelin library, as that seems to offer the same functionality when combined with hlisp. Would I be correct in saying that Om achieves the same by using ClojureScript's data structures and core.async to offload the FRP part to React?
I haven't looked at Javelin enough to have a strong opinion about it. Om at the moment is completely focused on rendering EDN data and making that blazing fast. Personally I think representing UI components as generic data has a lot of legs.
I also don't really believe in templating languages, but I also think that functional boilerplate for rendering children is a bummer. I'd like to see a client side query language modeled after Datomic's Datalog syntax instead.
I'll be diving into the source code soon to figure it out, but are you then using core.async only to handle incoming user events or also to coordinate the rendering of the UI?
What about long GC pauses from the collector cleaning up all those wonderful immutable data structures? Immutability is great when it's at the forefront of how a language is designed. Plastering immutability all over hot code paths in a language that wasn't designed with immutability in mind isn't great.
In practice, unless you're animating at 60fps over hundreds of objects (and who would do that with DOM elements?), you shouldn't run into any GC issues. LightTable uses ClojureScript data structures for pretty much everything, and there were only a small handful of cases we had to optimize. Any "normal" application probably won't ever have to.
For a little more color: these objects usually live in the new generation which I've observed doesn't drop frames even on mobile.
You can pull this up on an iPhone 4S or newer (it uses the same technique with tons of allocations but doesn't really drop frames): http://petehunt.github.io/react-touch
The secret sauce is that you can animate CSS transforms every requestAnimationFrame without breaking out of React's natural data flow.
So we declaratively express the UI as a function of a single float which represents the scroll position (i.e. how open/shut the nav is or what position in the photo viewer you're at).
When we do that, we can use the excellent Zynga Scroller touch gesture physics engine (reverse engineered from iOS) to do the touch gesture stuff, then we can declaratively rotate, fade and translate everything as a function of this float.
Code is on github! And is actually in a reusable library!
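The "UI as a function of a single float" idea reduces to plain interpolation: each frame, map the gesture position into style values. A sketch with made-up ranges, not the actual react-touch code:

```javascript
// Linearly map a value from [inMin, inMax] to [outMin, outMax], clamped,
// then derive all styles from the one scroll/gesture float.
function interpolate(value, inMin, inMax, outMin, outMax) {
  var t = (value - inMin) / (inMax - inMin);
  t = Math.max(0, Math.min(1, t));           // clamp to the range
  return outMin + t * (outMax - outMin);
}

// e.g. as the nav opens from 0 to 200px, fade the content and slide it over
function contentStyleFor(navPosition) {
  return {
    opacity: interpolate(navPosition, 0, 200, 1, 0.4),
    translateX: interpolate(navPosition, 0, 200, 0, 200)
  };
}
```

On every requestAnimationFrame you feed the current position from the physics engine through functions like this, so the whole UI stays a pure function of that one number.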
I'm also super interested in this combination. However, on my new moto x, the photo scroller drops to ~45 fps and looks pretty chunky.
I also noted that you say the demos work best on iPhone 5 on iOS 7, but it's unclear if you meant in contrast to earlier iPhones, or to Android, or both. Have you looked at the perf issues on Android at all?
The FPS drop on the photo scroller is likely due to jpeg decoding and painting on the main thread (I thought this was fixed in Blink but maybe not). Try "warming it up" by scrolling through all of the images a few times so the decoded jpegs get cached. I think this is the biggest perf problem on the Web today.
The reason Android may "feel" sluggish while reporting good FPS is because Android simply has a huge problem with touch event latency in the browser that you can't get around. This is where iOS really kicks Android's ass, at least on the web.
So the reason I limited to iPhone 5 and iOS 7 was because I can guarantee the touch latency and that the images fit in texture memory. Maybe they don't on the moto x.
Ditto on Firefox for Android, will try IE on WP later today. Although a solution that works on exactly 1 phone is not a solution I would ever use. To be honest, I don't get the point of reimplementing scrolling in JavaScript, can you explain why you wanted to do that?
It works well on iPhone 4S and up. And it works, but not great, on all android phones I've tried (including Firefox os on zte open!)
The point of rebuilding scrolling is to get the scroll position every frame so you can update the UI (ie rotate a photo or fade the left nav). You don't get scroll events during momentum scrolls so you have to rebuild it.
Ran this on my Galaxy S3, impressive demo. As these techniques become more common, there may be a shift away from native mobile apps to web/hybrid(phonegap) apps for most applications.
Jordan, from the React core developer team here. Awesome post, swannodette! This is exactly how we intended React to be used. As swannodette said, at Facebook, we use persistent data structures, in order to prune the update search space for comment updates. We've seen as much as a 10x improvement in update speed for certain operations.
React is a really great fit for Om, persistent data structures, and functional programming in the following ways:
1. We want to allow developers to elegantly describe their user interface at any point in time, as a pure-as-possible function of data dependencies.
2. We allow hooks for your system to help guide the updating process along. These hooks are not necessary. Often, we'll add optimizations long after we ship. We strongly believe that perf optimizing shouldn't get in the way of writing code elegantly and shouldn't get in the way of the creative development process and actually shipping to your users. At the same time, performance matters - a lot. So we ensure that at any point in the update process, if you know better than the framework, you can help guide the system. The fact that this is optional and doesn't change the functionality or correctness of the system is critical. Persistent data structures are an excellent (likely the very best) way to hook into the update system without making the developer do anything special.
Some people here were wondering about the apparent OO influence in React. Here's how I personally think of React's OO support/influence:
1. It's there to help you bridge with other existing mutative, stateful libraries in your stack - you know you have them. The DOM falls into this category as well.
2. It's there when you want to treat state as an implementation detail of a subcomponent. This is only because we don't have a good way of externalizing state changes, while simultaneously keeping the nature of them private. We just need more people to think about it (I'm sure the ClojureScript community can help us chew on this). Our internal motto is to keep things as stateless as possible.
3. A lot of the OO support in React is there as a concession, more than being considered a virtue. It's really cool to have the FP community involved in the UI space. Those people are already sold on FP and statelessness and get the luxury of programming in tomorrow's paradigms today (how ironic that FP has been around for decades!) To accelerate this momentum, we also want to reach out to people who aren't yet sold and change how they think about building UIs and software in general. The most effective way to do this is to reach out to them where they stand today, on some middle ground. It's really great to see eyes light up when they see that they can use simple functional composition in order to build large, sophisticated apps.
We're really glad to have swannodette and the ClojureScript community checking out React (github.com/facebook/react). We should consider adding some level of support for persistent data structures in the React core. Let us know if there's anything we can do to help.
Hi Jordan, ReactJS looks really awesome. Curious, what kind of persistent data-structures do you use at Facebook? Are they similar to the ones on Clojure[Script]?
We built our own immutable object utilities that prevent mutating anything in the object graph. These immutable objects look and feel just like regular objects/arrays, so you can use functional map/reduce etc. The only thing you can do with them besides reading their properties, is to create a new version of the previous object with changes applied. We then use object identity to detect when things could not have possibly changed between render cycles. We prune off those paths that will not need to be updated, justifying the pruning based on dependent data's object identity remaining the same over time. Same object identity across two points in time necessarily implies that their deeply immutable data structures have not changed, therefore their generated output could not have possibly changed.
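The pruning logic described above can be sketched in a few lines of plain JavaScript (illustrative helper names, not Facebook's actual utilities): updates produce new objects only along the changed path, so untouched branches keep their identity and a renderer can skip them with a single `===` check.

```javascript
// Non-destructive update: copy the top level, replace one key.
// Every branch we don't touch keeps its original object identity.
function setIn(obj, key, value) {
  var copy = Object.assign({}, obj);
  copy[key] = value;
  return copy;
}

// With deep immutability, same reference across two points in time
// implies the subtree cannot have changed, so its output can't either.
function shouldUpdate(prevData, nextData) {
  return prevData !== nextData;
}

var prev = { header: { title: "Hi" }, feed: { items: [1, 2] } };
var next = setIn(prev, "header", { title: "Hello" });
```

The `feed` branch of `next` is the very same object as in `prev`, so the whole feed subtree is pruned from the update search space without inspecting it.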
> A lot of the OO support in React is there as a concession, more than being considered a virtue. It's really cool to have the FP community involved in the UI space. Those people are already sold on FP and statelessness and get the luxury of programming in tomorrow's paradigms today (how ironic that FP has been around for decades!)
As Sir William said, "There is blood in the old ways yet".
There was a really good talk from Charles Nutter about making JRuby fast and the mutable things that Ruby does that basically break caching and things that you do to make things faster at runtime.
I'm not surprised that persistent data structures can make things fast, in fact I've spent the last week speeding up a Rails app in some ugly spots by preloading the data structures to keep DB queries from happening, effectively turning a lot of just in time queries at the ORM level into a pre-loaded data graph. The speed is fantastic, but what is interesting is you could add a level of immutability to this and would be potentially even faster, especially on top of the JVM.
I've been playing with the idea of immutable entities in Obvious Architecture for a while and it really changes the way you look at your business logic and performance.
This kind of thing is exactly what makes me think CLJS is the current frontrunner to be the first compile-to-JS language to gain mass adoption without JS-like semantics (as CoffeeScript has).
It's the performance.
So many of us who want JS alternatives have made our peace with the idea that we'll have to sacrifice a bit of performance if we want to use a nice language.
But being able to improve performance while using a nicer one?!
Count me in! I already have a serious project in mind for this.
For me, it's the non-idiomatic compiled JavaScript and Google's library that stop me. Wisp seems like a good solution: a Clojure dialect that produces super clean JavaScript you can work with.
On another note: maybe if installing ClojureScript took, say, an hour instead of half a day of searching through outdated information, more people would try it (and this is from someone who already uses Clojure and Leiningen a bit).
Clojure and its libraries have the worst documentation, and this malpractice seems to be continued in ClojureScript.
https://github.com/magomimmo/modern-cljs is how I got started with Cljs. If you scroll down, you'll see some tutorials that get you set up and lead you to a workflow.
I had a few teething problems getting it configured properly, but I managed to get ClojureScript working from scratch on a Windows machine in about an hour, I think. What problems did you have? (Mine were Java related)
This is fantastic. I've been goofing around trying to figure out a way to approach the general case (representing an interactive DOM with just EDN), but I hadn't done jack on performance. Thank you thank you David.
Not to take away from the other points in your post, but on the benchmarks, the backbone example is writing to localStorage, while the om example isn't.
The overhead from localStorage appears to account for a significant chunk of the difference in performance. You can remove the localStorage calls with 'Backbone.sync = $.noop' or similar. After doing that and clearing localStorage, benchmark one drops to around 350ms, and benchmark 2 drops to around 2000ms.
Of course, benchmark 2 is where your library really shines, and backbone still takes its time with that one.
Not true, all Om timing information includes writing to localStorage. I just don't load the page from localStorage because that made benchmarking tedious for me.
I'm really excited about this. Until now I've hated working with Javascript because of a combination of the language itself, and its primary domain (the DOM).
Seeing React.js at JSConf.asia last month got me excited that I don't have to touch the DOM anymore, but I still had to deal with Javascript the language itself.
And now this comes along. Now I don't have to deal with DOM (or at least it offers better abstractions for working with it) and I get to use the most pleasant language I've tried so far.
I'm not great at reading ClojureScript - but I'd really like to port some of the optimizations from Om, such as the rendering on requestAnimationFrame and usage of shouldComponentUpdate to Backbone.LayoutManager[1]. Swannodette, if you're around, do you have a minute to give a more in-depth explanation of how that works?
If you're using Backbone.js I would just rely on React to deliver the requestAnimationFrame enhancement. As far as shouldComponentUpdate, just make your component implement a better one.
To be honest at this point there is little that Om does over React other than provide really good defaults ;)
Thanks for the input. I'll look into it. We've been talking a bit about doing a more general rewrite of LM and I've been looking for ideas that would make it worthwhile (not just cleaner code, but better perf or features). This might be one of them.
I'm sure doing a similar thing with LayoutManager wouldn't be too hard. I think LayoutManager uses _render internally too so you may want to call it something else.
That's great - benchmark 2 is essentially a bunch of useless work, and it appears to really speed things up by making sure we only actually touch the DOM every 16ms, instead of constantly.
I'd imagine a Backbone integration with React would get us even closer. I'm not sure if it actually makes sense to go too far with an LM conversion as React appears to do a better job. But LM could certainly benefit from waiting until RAF.
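The "only touch the DOM every 16ms" idea reduces to coalescing writes and flushing at most once per frame. A minimal sketch, with `createBatcher` as a hypothetical name and a counter standing in for the real DOM:

```javascript
// Coalesce many state updates into a single DOM write per frame.
// In the browser, flush() would be scheduled via requestAnimationFrame;
// here we call it directly to keep the sketch self-contained.
function createBatcher(applyToDom) {
  var pending = null;
  return {
    update: function (state) { pending = state; }, // record, don't apply
    flush: function () {
      if (pending !== null) {
        applyToDom(pending); // the only place the "DOM" is touched
        pending = null;
      }
    }
  };
}

var writes = 0;
var batcher = createBatcher(function () { writes++; });
batcher.update({ count: 1 });
batcher.update({ count: 2 });
batcher.update({ count: 3 });
batcher.flush(); // one "frame": three updates collapse into a single write
```

Benchmark 2's "bunch of useless work" is exactly the case this wins: intermediate states never reach the DOM at all.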
React does the diffing on the output (which is a known serializable format, DOM attributes). This means that the source data can be of any format. It can be immutable data structures and state inside of closures.
The Angular model doesn't preserve referential transparency and therefore is inherently mutable. You mutate the existing model to track changes. What if your data source is immutable data or a new data structure every time (such as a JSON response)?
Dirty checking and Object.observe do not work on closure-scope state.
These two things are obviously very limiting for functional patterns.
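A tiny example of the closure-scope point: the state below is unreachable by property watchers or Object.observe, because there is no object property to observe; only the rendered output ever exposes it (illustrative sketch).

```javascript
// Closure state: `count` lives only inside makeCounter's scope.
// No dirty checker can enumerate it; only the render output reveals it.
function makeCounter() {
  var count = 0;
  return function render() {
    count++;
    return { tag: "span", text: "clicked " + count + " times" };
  };
}

var render = makeCounter();
```

A framework that diffs the *output* of `render` handles this fine; a framework that watches the *model* has nothing to watch.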
Additionally, when your model complexity grows, it becomes increasingly expensive to do dirty tracking. However, if you only do diffing on the visual tree, like React, the cost doesn't grow as much, since the amount of data you can show on screen at any given point is limited by the UI. Pete's link above covers more of the perf benefits.
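A sketch of what diffing on the output looks like: the comparison runs over a serializable description of DOM attributes, not over the source model (the node shape here is illustrative, not React's internal format).

```javascript
// Diff two rendered attribute maps and emit only what changed.
// The model that produced them can be closures, immutable data,
// or a fresh JSON response every time; only the output is compared.
function diffAttrs(prev, next) {
  var patches = [];
  for (var key in next) {
    if (next[key] !== prev[key]) {
      patches.push({ attr: key, value: next[key] });
    }
  }
  return patches;
}

var before = { className: "btn", disabled: false };
var after  = { className: "btn", disabled: true };
var patches = diffAttrs(before, after);
```

The patch list is bounded by what's on screen, not by model size, which is why the cost stays flat as the model grows.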
So, maybe I'm misunderstanding, but the perf advantage is that React waits until the next rAF and then only updates the parts of the DOM that actually changed? So now I'm wondering: why can't the browser do that? Isn't this whole middle-man approach something to eventually be optimized out?
The difference is that React constrains the operations a user can do (i.e. we only give them the DOM node if they explicitly ask for it and only let them manipulate it at certain times) so we eliminate the operations that cause the DOM to be slow.
If the DOM were to implement this it'd have to break backwards compatibility.
That seemed more like a rant about Angular than an unbiased comparison. I had the impression that the writer didn't do things the "Angular way" when he said, "Since composing directives is so annoying, they end up being basically mini-jQuery apps that are pretty hard to maintain."
Been playing around with RxJS and wondering how difficult it would be to combine React with it. RxJS seems to work particularly well with Angular, but I know Backbone much better. Is there harmony between the two?
React exposes event handlers and setState methods which are seemingly mutable and object-oriented. This is because React is designed for large organizations and to be approachable by a broad developer base.
However, once your asynchronous flow reaches a certain complexity, RxJS is a great way to express it.
The interesting part of our experiments (which we designed together with Erik Meijer) can be found here:
RxJS makes it easy to create abstractions from complex pieces of your async flow. The getStreams method in that example could easily be broken apart into multiple pieces.
This fits very well into the React model and is definitely a great complement if your organization is already familiar with Rx.
I think it really depends on what you are trying to accomplish. For one thing, a quick peek into React revealed to me that there isn't two-way data binding. I think React and Angular have a lot of friction. They both seem to do well with data binding in their respective ecosystems.
Lol, why can't it be just another FW that's not MVC? We can't evolve?
Games don't use MVC; they use entity systems (E/S). There's more than that one pattern. Also, when working against an API, you need something better.
Those aren't orthogonal. There's a pretty wide design space for component systems. Lots of games use "components", where game entities are split into pieces for different game domains (rendering, AI, etc) without going all the way down the entities/components/systems path.
It is true that most games don't use MVC. I think that's because MVC isn't a good fit for games. It seems to work fine for business apps, though more experimentation is always good.
I think swannodette's post is less about MVC and more about declarative manipulation of the DOM. As long as you have a function somewhere that creates that representation and a way of handling events you can use any sort of architecture you want.
That wouldn't change the amount of DOM thrashing caused by jQuery, it would only improve rendering (and perhaps reflow) performance, but would not improve the cost of DOM manipulation.
Can you expand on that a bit? As I understand it, om/react don't improve the cost of DOM manipulation, they just reduce the amount of DOM manipulation that's done by only altering parts of the DOM that have changed. Couldn't you do the same type of change checking on the virtual DOM tree so that you only touch the actual DOM when it changes?
I see, yes, viewed that way you COULD do this with jQuery. But you'd be missing most of the benefits of React.js, namely the functional programming style that the React.js approach makes possible.
But that's the whole point: it's effectively the same operation, yet Om takes 1/5 the time by taking a much more efficient path. Backbone with some simple requestAnimationFrame deferral somewhat approaches that speed, but in some cases Om still beats it by an order of magnitude.
It may be because I'm not familiar with ClojureScript's syntax, but the whole sample application code seems like a real mess to me. It's full of boilerplate code and it is happily mixing application logic with DOM rendering. [1]
Compare this with an alternative JS MVC framework (like, say, Knockout.js) and another modern "javascript-compatible" language (like, say, TypeScript), and see for yourself. [2]
While I didn't run any benchmarks, it's safe to assume the Om demo is faster. However which sample do you think is easier to write, test and maintain? If "The Future of JavaScript MVC Frameworks" is supposed to look like the Om sample, sorry but I'll pass.
While the TypeScript + Knockout combo looks pretty good, it doesn't look that much more expressive to my eyes. Certainly hasn't been my experience that Clojure or ClojureScript are hard to test and maintain.
My point is that if Om claims to be superior to other JS MVC frameworks - as it is presented as "the future of MVC frameworks" - it needs to be good (if not superior) in many aspects. Performance is a big deal all right, but another aspect (which seems essential to me) is that using this framework should be easy, concise and intuitive. And I've been kinda disappointed to see that the sample code given to present Om in action is not. At all. If I wanted to fiddle a bit with this code right now, to add a few features for example (like adding a timestamp besides each item indicating when it was created), I wouldn't have any clue where to start.
What bothers me is that Om has clearly been written with ClojureScript in mind, so in a way this framework is supposed to be sublimated by the language. It should "feel" right. And I know it's possible with other frameworks (and my comparison with Knockout+TypeScript was meant to illustrate that), so I expected it to be the case with Om.
I dunno, I'm a ClojureScript guy, and it seems pretty "sublimated" to me. I think the problems React.js and Om are trying to solve are more critical when dealing with large and complex applications. I think one would see the benefits more clearly in those cases (though it already feels simpler to me as it is).
[1]: http://en.wikipedia.org/wiki/Immediate_mode