RE:DOM – Tiny turboboosted JavaScript library for creating user interfaces (redom.js.org)
127 points by edward on Jan 20, 2019 | 77 comments



Not having a VDOM is only a pro if your library maintains feature parity with libraries that have a VDOM. Otherwise, you just lack whatever it is that VDOMs enable. This library seems to forget why view libraries use a VDOM. In this library, you have to write imperative update logic, which the VDOM specifically allows you to avoid. The VDOM is a slight performance hit most people accept in order to be able to write declarative code.

In this case, I find it a bit misleading to pass off not having a VDOM as a pro. It's a bit like building a social media site with no users and claiming everyone should join because it doesn't have spam.


In my opinion the benefits of declarative GUI are vastly overstated for the average web app. Given that your application is appropriately componentized, it rarely makes things much clearer.


Agreed!


We're actually using this. Re-dom is a very thin layer on top of the DOM. Basically all it does is hide some of its weirdness.

The best way of using it is simply emulating what you would do in react (without jsx) in terms of design patterns. So you have component classes, with state, and a render method. Half of the success is just using good patterns like that. All redom does is allow you to create trees of elements and manage those.

It has a few simple primitives for that. The main thing is an el method that takes at most three parameters. The first is the element name, classes and id as a string: ".foo.bar" means a div with classes foo and bar, "span.foo" means a span with class foo, and so on. The second (optional) one is an object with attributes. So if you have an a tag, you might want to pass in an object with an href, title, target, etc. The last parameter is either a string for a text node, another element, or a list of elements. There are setAttr, setChildren, etc. methods you can call on elements. There are mount/unmount methods and there is some syntactic sugar for managing lists. That's about it.
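The selector convention described above can be sketched with a toy el() helper. This is not RE:DOM itself (the real el() returns live DOM nodes); it renders an HTML string so it runs outside a browser, but the "tag.class1.class2" parsing and the optional-attributes argument follow the description:

```javascript
// Toy sketch of an el()-style helper (not RE:DOM's implementation): parses a
// "tag.class1.class2" selector and renders an HTML string. A bare ".foo"
// selector defaults to a div, matching the convention described above.
function el(selector, attrs = {}, children = []) {
  if (typeof attrs === 'string' || Array.isArray(attrs)) {
    children = attrs;                    // the attributes argument is optional
    attrs = {};
  }
  const [tag, ...classes] = selector.split('.');
  const name = tag || 'div';             // ".foo.bar" means a div
  const allAttrs = classes.length
    ? { ...attrs, class: classes.join(' ') }
    : attrs;
  const attrStr = Object.entries(allAttrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join('');
  const body = Array.isArray(children) ? children.join('') : children;
  return `<${name}${attrStr}>${body}</${name}>`;
}

console.log(el('span.foo', 'hi'));       // → <span class="foo">hi</span>
console.log(el('.foo.bar'));             // → <div class="foo bar"></div>
console.log(el('a', { href: '/about', title: 'About' }, 'About us'));
// → <a href="/about" title="About">About us</a>
```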

You can shoot yourself in the foot (just like with other frameworks) but otherwise this works as advertised. Mostly there is not a lot of magic happening in terms of expensive stuff.


Try my new lib... here it is right here!

    document.createElement("div");


Most people get tired of the verbosity of createElement + appendChild + createTextNode, so they whip up a utility function, something like

  element('h1', {class: "foo"}, "Chapter 1")
Then they add method chaining, support for event listeners, and various other bells and whistles.

And that’s how new Javascript frameworks are born.
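The "method chaining" step can be sketched with a tiny hypothetical builder (not any real library's API). It serializes to a string here so the sketch runs outside a browser; in a browser the same shape would wrap document.createElement and addEventListener:

```javascript
// Hypothetical sketch of the "method chaining" step: a tiny builder that
// accumulates attributes and children, then serializes them to markup.
class ElBuilder {
  constructor(tag) { this.tag = tag; this.attrs = {}; this.kids = []; }
  attr(k, v) { this.attrs[k] = v; return this; }   // chainable
  on(event, handler) { return this; }              // would call addEventListener
  text(s) { this.kids.push(s); return this; }
  toString() {
    const a = Object.entries(this.attrs)
      .map(([k, v]) => ` ${k}="${v}"`)
      .join('');
    return `<${this.tag}${a}>${this.kids.join('')}</${this.tag}>`;
  }
}
const element = tag => new ElBuilder(tag);

console.log(String(element('h1').attr('class', 'foo').text('Chapter 1')));
// → <h1 class="foo">Chapter 1</h1>
```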


I've been using plain JavaScript more and more lately. Luckily Element.prototype has this nice property where if you attempt to read some setter functions it will throw, making it cake to implement a chainable createElement.

Fun stuff, https://github.com/mini-eggs/ogle-tr-122b



What's your library licensed as?


Surely you'd have to pick SSPL, to avoid the risk of Amazon getting rich off this innovative technology without contributing back to the community???


Huh, one day I forked a part of RE:DOM called nodom[1], to use it in my library for rendering D3 charts in a background worker. I'd been extending nodom, and finally got D3 working in a worker[2], sorry for the pun. As I recall, nodom was pretty straightforwardly coded w/o magic and easy to extend, a nice library if you need to play with a lightweight vdom.

[1] https://github.com/ptytb/nodom [2] https://github.com/ptytb/d3-worker


This idea of creating DOM with an API is not new, and I never really understood it. Just look at the login class: it's way more complex than plain HTML, hard to read, and messes with this.

If you want DOM in your code, JSX is the way to go.


With more complex projects there are benefits to having purely updatable components, support for just native JS without quirks, and knowing exactly what's happening and when, without black magic.

But of course it always depends on the project which way is better.


With more complex projects, anything that isn't declarative becomes painful to manage. In complex web apps, the benefits of using a VDOM over imperative libraries like this far outweigh the extra O(N) memory consumption.


> this far outweigh the extra O(N) memory consumption.

Calculating memory overhead for vdom is actually pretty hard. With imperative code there will be several different code paths for initial rendering, updates and removal, and depending on the ratio of component instances per component type in the application, vdom can consume less memory because there will be less code and less code means that there will be less internal data for JIT.

Also, if it is a highly dynamic application with a high ratio of dynamic bindings per DOM element, and an imperative library wires all dependencies with direct DOM bindings in a huge dependency graph, it is much more likely that this graph will consume more memory, especially in use cases like filtering large lists.


You can make pure updates in RE:DOM. It's developed for our digital signage system, and it's really performant in real world situations.

It’s different than React though, that’s by design, since we needed more precise DOM handling.


Two days ago I started an HTML application. A day later I realized I'd want to add localization to it. Today I realized that I may be better off having the whole document created dynamically, since then I can easily implement localization. Note, this is an "app", not a "doc".

Too bad I am forced to use ES5 (IE9).


If manipulating the DOM directly is so much faster than using a virtual DOM, then why do libraries like React or Ember use a virtual DOM in the first place? I honestly thought it was for speed.


It depends what you mean by "faster". In some cases you'll be better off with something lighter and accessing the DOM directly.

If you need to do something at scale, then you may choose React. The benefit here isn't from just something as simple as changing the items in a list or the text in a header, but when you need to have large amounts of state in an app that is represented by a complex DOM tree.

The benefits of React here are that the virtual DOM can prevent unnecessary updates to the rest of the tree when your state change should only affect a nested element, for example.


That makes sense. Thanks for explaining :-)


The problem is it's very easy to write non-performant JS code if you're adding/removing/modifying lots of DOM elements, and attempting to do it performantly by hand tends to lead to a lot of bugs. VDOM isn't performant in the sense that it modifies the DOM faster, but in the sense that it allows you to write code that wouldn't otherwise be performant and have it execute efficiently. In React you worry about your data flow and component structure without needing to worry about DOM performance.


Full disclosure: I wrote a vdom library ivi[1].

Being fast is not just about micro updates; I think most vdom authors don't care too much about micro update performance because it isn't a bottleneck.

For example, Virtual DOM with a powerful composition model can significantly reduce code size because composition gives you code reusability, and virtual dom is one of the most compact output formats to render/update DOM.

1. https://github.com/localvoid/ivi


Seems similar to Backbone.js which was popular a looong time ago. Well, in fact, it's probably 4 to 6 years but it's a really long time for the front-end world. :)


Not much, really..


For those in the know on the internals of the library in question: is this like hyperHTML or lit-html, in that it remembers the "holes" in the html so every update is just an update of the live DOM at the right place?


Check out my talk if you're interested in internals: https://www.youtube.com/watch?v=0nh2EK1xveg&t=410s ;)


Hi and thanks for the link!

I will definitely watch it, but may I ask if you know how lit-html or hyperHTML work? The reason I was asking is that all the vdom libraries maintain a virtual dom (behind the live dom) that they always recreate when the data changes (but this is cheap, since it's only a data structure), then they compare the reconstructed virtual dom to the previous one and swap out only the new parts of the live dom. And this way the browser only needs to re-render the new parts -- since the old ones didn't change.

But the two other libraries I mentioned remember the "holes" in the html templates directly. So when the component gets new data, then the updated content is simply set as the value of these direct references into the live dom. This way (1) only the new parts are updated and thus re-rendered, but also (2) no virtual dom needs to be reconstructed. And this is why they are faster than the vdom libraries.
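The "remembered holes" idea can be sketched like this (a toy model, not lit-html's or hyperHTML's actual API): the template is split once around its holes, and each hole keeps a direct setter, so an update writes straight into place with no tree diffing:

```javascript
// Toy sketch of the "remembered holes" idea: the static strings of a
// template are kept as-is, and each hole between them gets a direct setter.
// Updating a hole never walks or diffs a tree.
function makeTemplate(strings) {
  const values = new Array(strings.length - 1).fill('');
  return {
    set(i, v) { values[i] = v; },          // direct write into hole i
    render() {
      return strings.reduce((out, s, i) =>
        out + s + (i < values.length ? values[i] : ''), '');
    },
  };
}

const t = makeTemplate(['<p>Hello, ', '!</p>']);
t.set(0, 'world');
console.log(t.render());                   // → <p>Hello, world!</p>
```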

So my question basically was, do you reconstruct the entire component tree too or not. But I guess I need to watch the video and read some code :)


This sounds kinda like my little DOM component library: https://github.com/guscost/protozoa

It’s a cool approach, and this one looks more fleshed out. But I’d still think in 2018 it’s almost always a premature/inappropriate optimization to forgo a virtual DOM, unless you’re doing something pretty far out there.


That just sounds so... backwards to me.


It looks like the Element.update function is more or less the equivalent of setState in React. So the UI does seem to be declarative (you may want to emphasize this, at first it looked like a jQuery-like library). How does the declarative aspect of this work without a virtual dom?


Answering my own question here: It's not the equivalent of setState as it doesn't set in memory state, but edits the dom directly when called.


Can anyone explain why you’d use this over just writing the HTML? This reminds me of the days when there were similar libraries in PHP and I could never find a use for them. It was easier and faster to just write the HTML.


Loving it, built something similar a long time ago. Can you use it the other way around tho? Like, take an HTML string and return its RE:DOM representation.


> Because RE:DOM is so close to the metal and doesn't use virtual dom, it's actually faster and uses less memory than almost all virtual dom based libraries, including React (benchmark).

Very well said. The author of "one very popular library" is plainly lying when claiming that the "Virtual DOM" was somehow faster than the real DOM.

That was never the case, even back in IE6 era, aside from very few and well known edge cases.

Switch to Virtual DOM led to one of the biggest slowdowns in website performance.


IIRC the performance gain is when doing things like updating the state of a deep component tree, where react and others diff on the vdom to see what elements need to be re-rendered, and then only do actual dom manipulation on those, rather than clearing and re-rendering the entire tree.

I’ll be honest, though, it feels to me like a solution to a hard problem you inflict on yourself by first committing to have the giant component tree rather than questioning if that was a great idea.


I think he's comparing VDOM to the equivalent imperative instructions, which !== re-rendering the entire tree.

For example we want to turn the prev tree into the next tree:

Prev:

  <ul>
    <li>1</li>
  </ul>
Next:

  <ul>
    <li>1</li>
    <li>2</li>
  </ul>
A VDOM would have a representation of prev and next in objects, diff them, and then decide to do something like

  parent.appendChild(makeLiWithText("2"))
OP is saying that this is less efficient than just manually executing the above line. Obviously, it's less efficient because the latter solution skips the entire diffing process.

I do think OP is missing the point of VDOMs. "Figuring out" the resulting imperative instructions for a given prev -> next is not easy (the above example, however, is trivial), and is very error prone, which is why the VDOM diffing solution is motivated in the first place.
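The diffing step in the prev/next example above can be sketched as follows (a minimal toy, not any real library's reconciler): compare two child lists represented as plain objects and emit the imperative patch operations a VDOM would then apply:

```javascript
// Minimal toy sketch of child-list diffing: walk prev and next in parallel
// and emit patch operations instead of touching a real DOM. Real reconcilers
// also handle keys, moves, and attribute changes.
function diffChildren(prev, next) {
  const ops = [];
  const len = Math.max(prev.length, next.length);
  for (let i = 0; i < len; i++) {
    if (i >= prev.length) ops.push({ op: 'append', node: next[i] });
    else if (i >= next.length) ops.push({ op: 'remove', index: i });
    else if (prev[i].text !== next[i].text)
      ops.push({ op: 'replaceText', index: i, text: next[i].text });
  }
  return ops;
}

const prev = [{ tag: 'li', text: '1' }];
const next = [{ tag: 'li', text: '1' }, { tag: 'li', text: '2' }];
console.log(diffChildren(prev, next));
// → [ { op: 'append', node: { tag: 'li', text: '2' } } ]
```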


> I think he's comparing VDOM to the equivalent imperative instructions, which !== re-rendering the entire tree.

You don't have to have tons of handcoded input handling or tree merging code. As I said in another post, the approach with some structured getters and setters was known since time immemorial. S.js, I think, is the best modern iteration of the approach. There are a lot of different ways of handling that, just google.

> "Figuring out" the resulting imperative instructions for a given prev -> next is not easy (the above example, however, it trivial), and is very error prone, which is why the VDOM diffing solution is motivated in the first place.

Yes, you very much get that. For that reason, when someone is faced with handling frequent and complex page structure manipulation, they would do better to apply some actual computer science knowledge and algorithmics to the task than to use half-baked solutions. This is exactly because the gain from doing things properly in such cases is great.

From a utility standpoint, VDOM has its use, but its advertisement as a somehow superior and more performant approach is disingenuous.


Yes, you do not need to re-render the whole element tree, but you do not need that "Virtual DOM" to do that. It truly is an "anti-solution" to the problem.

Even back in "medieval" era when YUI was the king, the practice of making getter-setter pairs with direct control over DOM elements was recognised as a preferred practice over any kind of state machine controlled page re-rendering system.
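The getter/setter pattern described above can be sketched like this (a hypothetical model, not YUI's actual API): each bound property writes straight through to its target on assignment, so there is no re-rendering step at all. The `target` here is a plain object standing in for a DOM node:

```javascript
// Hypothetical sketch of the getter/setter "direct control" pattern: a bound
// property writes straight through to its target on assignment, with no
// diffing or re-rendering in between. In a browser, target would be a DOM
// node and key would be something like 'textContent'.
function bind(target, key) {
  let value;
  return {
    get value() { return value; },
    set value(v) { value = v; target[key] = v; },  // direct write-through
  };
}

const fakeNode = { textContent: '' };  // stand-in for a DOM element
const title = bind(fakeNode, 'textContent');
title.value = 'Hello';
console.log(fakeNode.textContent);     // → Hello
```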

There of course was Angular.js - the prime target for React authors, against which all their claims held true.


You have to have some ideas of what the state was before the change, and what changes need to be done to transfer states. In complex UIs, writing custom update logic for every change results in an unmaintainable mess and a large payload to the client.


Yes, but proper, technically superior ways of doing "data binding" and templating have been around since the IE6 era; just back then people were not much concerned with doing things properly with javascript, as the universal presumption was "if it uses JS it will be slow regardless of the amount of effort put into it."


That’s not the case with RE:DOM, since you have a single update function to make pure updates.

I usually create a requestAnimationFrame debounce to have just a single render per animation frame.
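The requestAnimationFrame debounce described above can be sketched like this (my own minimal version, not RE:DOM's code): many schedule calls within one frame collapse into a single render. The raf callback source is injectable so the demo runs synchronously outside a browser:

```javascript
// Sketch of a requestAnimationFrame render debounce: repeated schedule()
// calls within one frame queue only a single render. The raf parameter is
// injectable; in a browser you would pass requestAnimationFrame itself.
function makeScheduler(render, raf) {
  let pending = false;
  return function schedule() {
    if (pending) return;                 // already queued for this frame
    pending = true;
    raf(() => { pending = false; render(); });
  };
}

// Synchronous demo: collect queued callbacks, then flush them as one "frame".
const queue = [];
let renders = 0;
const schedule = makeScheduler(() => renders++, cb => queue.push(cb));
schedule(); schedule(); schedule();      // three requests in one frame
queue.splice(0).forEach(cb => cb());     // the frame fires
console.log(renders);                    // → 1
```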


His point is that writing the single update function becomes complex when you have lots of changes to apply.


The beauty is that you know exactly what’s happening and when.

Debugging is also way easier without long stack traces.

There’s pros and cons in everything.


There was no notion of a virtual dom in the ie6 era that I can remember. Unless Prototype (the precursor to jquery) used one and I just wasn’t aware of it. I believe the libraries of that time either directly extended the dom or wrapped individual elements, but did not maintain anything resembling virtual doms as we know them now.


I wanted to say that manipulating DOM objects directly, without a tricky state machine that stores the element tree structure as data in JS and subsequently recreates the DOM from it, was always faster than with one.

Even in the often brought forward case "when a lot of changes to DOM don't translate to changes in markup," like demonically complex template merges, never seen in real life, that VDOM proponents like to throw into benchmarks, VDOM often lost because they were trying to do browser's job when they really shouldn't.

Browsers already used a lot of very similar logic on the inside when DOM 3.0 was being popularised, and it's only natural that VDOM was losing out by trying to redo what browsers were already doing.


> manipulating DOM objects without tricky state machine ... was always faster than with it.

You miss the point of VDOMs. VDOMs aren't so much about performance as they are about ease of development and automatically maintaining tree consistency. No one disputes the fact that creating templates and diffing them against the DOM adds overhead compared to writing out the equivalent instructions manually. The point is that nobody wants to write out imperative spaghetti for complex applications.


> The point is that nobody wants to write out imperative spaghetti for complex applications.

You really don't have to, you just have to walk out of that "wood of popular notions." There are libs to handle the functionality of any part of a modern MVC app: reactivity, input handling, declarative components, templating... that don't even make you think about manually manipulating the DOM tree.


Personally I was quite surprised by the good performance of React 16 + Preact in the benchmarks. It's not much slower and does all the "update" logic for you.

Keep in mind, too, React handles MANY edge cases for scenarios like keeping scroll position on update, or animations.


Yeah, it's become way faster, that's true! The thing I still have with React (and similar) is that there's so much black magic happening. I like to be more in control of what's happening and when.

But usually the bottleneck with React apps is the state managing libraries, like Redux etc., plus all the other bells and whistles.


Yes, after many years, they polished it to the limit, and browsers themselves got faster. That made the slowdown much less noticeable.


I have only one upvote to give you, but thank you for your comment. I am not very familiar with the JS world, and it's riddled with a vocal minority of not-so-knowledgeable people; I had no clue that the vdom fad wasn't actually reasonable.


I can't comment on the quality of the library, but I did get a quick chuckle from the term "close to the metal" being used to describe a Javascript browser library.


I think very soon, "low-level programming" will mean writing JavaScript directly without a transpiler.


It has been referred to as the "Assembly Language of the Web"

https://www.hanselman.com/blog/JavaScriptIsAssemblyLanguageF...


Why not? They appropriated "real time" already.


Soon? I’ve already seen that described as low level JS


Obligatory XKCD (albeit about editors)

https://www.xkcd.com/378


> I did get a quick chuckle from the term "close to the metal" being used to describe a Javascript browser library.

This and "blazing fast" make me cringe every time.


It’s a bit of a joke, really ;)

As well as ”turboboosted”. Don’t take them too seriously :D


No, it isn't. It's completely intentional marketing. That diction was specifically used to get people to think the library is better than other libraries.


Believe me, I wrote the library ;)


[flagged]


In case you are unaware. You sound like a dick. Relax...


Fair enough, of course it's marketing as well. But it's so common to say "blazing fast". Even FB developers argued React was fast even though it used to be really slow.


It was fast compared to the primary framework in vogue back then (Angular.js v1).


It actually wasn't; they didn't use track by with ng-repeat in the benchmarks. Even Angular 1 was actually faster back then when used correctly:

https://500tech.com/blog/all/is-reactjs-fast


That post only invalidates* a specific measurement, but doesn't provide counter-measurements, especially not ones that say Angular.js was faster than React. Although I don't have alternative numbers, and will agree that VDOM comes with a performance penalty, Angular.js's method of change detection was pretty inefficient and required a lot of hand-holding.

Still, I think either were sufficient for most use cases, and performance definitely isn't the main selling point for React (nor Angular.js).

*Well, somewhat - it was still easier to shoot yourself in the foot, performance-wise, in Angular.js, but that's not really the point here, and React has its own issues there (e.g. `key`).


But when I wrote that text, React used to be way slower than RE:DOM. It's gotten faster since then (still slower though).

I could probably update that ”marketing” text.


For better or worse, we are not far from "bare metal javascript".

There is wasmjit [1], a kernel module that enables execution of WebAssembly via the Linux kernel.

Furthermore there is AssemblyScript [2], which compiles a strict variant of TypeScript to WebAssembly.

[1] https://github.com/rianhunter/wasmjit

[2] https://github.com/AssemblyScript/assemblyscript


Good grief we've reached the point of a JS interpreter running in ring 0? Gary Bernhardt was a prophet.

https://www.destroyallsoftware.com/talks/the-birth-and-death...


Well, strictly speaking, WASM and JS are two different things, but basically yeah.


All you need is a full operating system and a >100mb fully compliant browser. Doesn't get much closer than that. /s


Yeah - I think the term "close to the metal Javascript" belongs to things like this:

https://esp32.com/viewtopic.php?t=497


>closer to the metal

Lol. Sure! Just like how I get closer to the Earth’s core if I’m sitting on the floor instead of my chair.


It’s a bit of a joke.. ;)


I don't want to convert HTML into any other format... it's like CoffeeScript. Somebody is gonna pick up your code and see all these el('h1','OMFG HN SUX'); calls instead of normal, sane HTML.

Don't reinvent the wheel, man.


This..._isn't_ converting HTML into any other format. It's providing an ergonomic way to generate elements in javascript. The alternative is directly using `myEl = document.createElement(...)` and `myEl.appendChild(...)`. and `myEl.src = 'whatever'`. You're comparing apples and dumptrucks here.


That's pretty much converting JavaScript into HTML and vice versa.



