The Glimmer VM: Boots Fast and Stays Fast (yehudakatz.com)
292 points by lowonkarma 11 months ago | 89 comments

I love technical articles about how libraries are built. This is really great!

I don't follow the last part of the article though. Specifically this part:

> We accomplish this by (under the hood) allowing every bit of data that goes into a template to separate the process of computing its current value from the process of determining whether the value might have changed.

When you say "bit of data", what exactly do you mean? I assume you mean some field that is bound to a template, like a property on a Glimmer component. Using KVO this would be a computed property of some kind that triggers an event when its value changes.

Perhaps you're doing something totally different, but I don't see how? Breaking it down, we have an object like:

     "foo": "bar"
I think you're saying that you don't observe this object (using some kind of obj.set('foo', 'baz') convention) but rather determine that the value changed by some other means. If that is so, what is the other means?

Thinking about it a bit more, I think I might know what is going on here. In a traditional KVO view layer (I've worked on a couple of these myself, so my thoughts here are biased), when a property changes it triggers an event. The view layer is listening to this event, so it knows to update the DOM to reflect the changed value.

What this means is that the view layer only works with evented KVO objects. What Glimmer has achieved is the ability to work with other types of data management systems. Knowing that is key to understanding (for me, at least).

So given that, what must be going on is that the view layer gets the value of a property with some interface. That interface implementation is provided by the view model so it differs. For plain objects it might just be `return obj.foo`. This will make getting a value really fast. No notification systems are required.

For updating the view layer there must be some generic way of telling it "hey, the obj.foo property might have changed (or maybe not, shrug), figure it out for yourself". Then using the update VM it is able to figure out whether it really did change, and only update the subtrees if it did.

Or something like that; this is my interpretation. Sounds smart!

Yep, this is pretty spot on!

At a high level, the Glimmer VM is built on top of two primitives:

1. References (as you describe, an interface implementation to provide the underlying value)

2. Tags, an interface that communicates "is it possible this value has changed?"

Every reference has an associated tag, so once you have a reference, at any time you can ask: is it possible this thing has changed? If so, what is its value?
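In code, a minimal sketch of those two primitives might look roughly like this (simplified, with illustrative names, not the actual Glimmer source):

    // Simplified sketch, not the real Glimmer interfaces.
    interface Tag {
      // Returns a revision number; if it equals the value seen last time,
      // the underlying data cannot have changed.
      value(): number;
    }

    interface Reference<T> {
      tag: Tag;
      // Computes the current value (possibly expensive).
      value(): T;
    }

    // A reference over a plain object property:
    class PropertyReference<T> implements Reference<T> {
      constructor(private obj: Record<string, T>, private key: string, public tag: Tag) {}

      value(): T {
        return this.obj[this.key];
      }
    }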

In KVO systems, every time a dynamic value is used in the template, the view usually adds an observer onto the root object. This adds a fair bit of overhead to both rendering and teardown. And if multiple properties change, you have to figure out the optimal re-rendering strategy. (E.g., if a value inside an `if` changes, you probably don't want to re-render it if the conditional also changed from truthy to falsy!)

Today in Ember, the view layer no longer sets up observers. Instead, during rendering, we create a tag for each property. When you mutate a property (this.set('firstName', 'Matthew') for example), two things happen:

1. That tag is marked as dirty.

2. A revalidation of the entire tree is scheduled.

The revalidation process starts from the top of the render tree and walks down, asking every reference/tag "is it possible you've changed?" Because this is just an integer comparison, it's very very fast on modern JavaScript VMs, even if you have lots of data on screen.

The tag is like a Bloom filter, though: it means a change MAY have happened, not that one necessarily did. If the tag is dirty, we do a last-chance identity check for primitive values. Only if the value has actually changed do we update the DOM.
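Revalidating a single rendered value might look something like this (illustrative only; `updateDOM` and the bookkeeping are placeholders):

    // Illustrative sketch of revalidating one rendered value.
    function revalidate(
      ref: { tag: { value(): number }; value(): unknown },
      lastRevision: number,
      lastValue: unknown,
      updateDOM: (v: unknown) => void
    ) {
      const revision = ref.tag.value();
      if (revision === lastRevision) return;  // integer comparison: definitely unchanged

      const newValue = ref.value();           // tag was dirty, so recompute
      if (newValue !== lastValue) {           // last-chance identity check
        updateDOM(newValue);                  // only now touch the DOM
      }
    }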

One nice thing that falls out of this is that the application developer can change as much component state as they'd like at once, and we can avoid doing any expensive computation to figure out the optimal place to start re-rendering (the `if` case I mentioned above). By revalidating the tree from the top down and keeping constant time factors low, you get that optimization "for free."

The other nice thing is that you can express all sorts of cool semantics on top. For example, if you have immutable data, you can attach a tag that always says "I'm never modified." If you don't want to do any bookkeeping at all, you can attach a tag that says "Always recheck me to see if I've changed." Best of all, as Yehuda mentioned, you can mix and match these semantics in your components. It also lets the data, rather than the component, drive the change semantics; the component usually shouldn't have to care about how the model data might change.

If this is interesting to you, there is some WIP documentation in the Glimmer VM repository that talks more about the philosophy behind references and tags:



Thanks, this explanation is very helpful. One last question, when you say:

> Because this is just an integer comparison, it's very very fast on modern JavaScript VMs, even if you have lots of data on screen.

I don't follow: what is an integer comparison? Don't you have to compare the reference's value to the previous value? Meaning you need to get that reference's value, which could be somewhat expensive. Imagine if it is a property getter that calls into a few more functions.

This seems like the disadvantage of not using KVO: you don't know a property's value without asking for it. With KVO the value is cached and only changes when the underlying dependency tree changes. Of course, vdom works without this, so it can't be that bad.

But I'm guessing you've found a clever way to avoid having to call out to get the reference's value to compare, so what is it? :-)

And I'll definitely read through those docs, thanks!

EDIT: Probably should have read the docs first. A global counter id! Very clever... I wonder if you could get away with just a lastModified, though. The view layer could keep track of the last time it updated and then just compare lastModified; if something has changed since the last time it updated, then it must have changed.
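For anyone else following along, the idea as I understand it is roughly this (my own sketch, not Glimmer's code):

    // My own sketch of the global revision counter idea, not Glimmer's code.
    let REVISION = 1;

    class DirtyableTag {
      private revision = REVISION;

      dirty() {
        this.revision = ++REVISION;  // bump the global counter on mutation
      }

      value() {
        return this.revision;
      }
    }

    // The view layer snapshots REVISION after each render; on revalidation,
    // tag.value() > snapshot means "may have changed", so recompute the value.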

Going to think some more about this, thanks for the puzzle!

What is the advantage of implementing your own bytecode VM rather than generating JavaScript code and using eval()? It seems to me that the latter would be more efficient, since you wouldn't have a VM within a VM.

For one, JavaScript code is actually pretty verbose. It has both a large payload size and high parsing overhead. Earlier versions of Ember's rendering engine worked this way, and the change to a data-based wire format in 2.10 made a large improvement: Intercom saw a 28% reduction in their whole application payload size. LinkedIn's uncompressed compiled template size dropped by 71%. As the wire format is all data (arrays for the most part), you would expect some parsing speed improvements as well.

And second, generating imperative code makes it harder to perform runtime optimizations. For example, during initial render Glimmer could note if a property lookup yields an Immutable object and use that knowledge to disregard change tracking for properties off that object. There are some fun things we can do here.

It turns out JavaScript engines are very good at iterating flat arrays and running small, highly optimized functions, and that's what Glimmer is doing at its core. When your compiler output generates functions, you're creating lots of functions specialized for each template, which means a larger surface for the V8 JIT to worry about optimizing. By instead shipping lots of small, hot functions to be reused (instructions fired via opcodes), we can get a better optimization result from V8 and other engines.
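To make the contrast concrete, here's a made-up example (the opcode numbers and names are purely illustrative, not the real wire format):

    // Generated-function approach: one specialized function per template.
    function template1(el: Element): void {
      el.setAttribute("id", "bar");
      el.appendChild(document.createTextNode("Hello"));
    }

    // Data-based wire format: plain arrays interpreted by a few shared, hot functions.
    const SET_ATTRIBUTE = 1;
    const APPEND_TEXT = 2;

    const wire = [
      [SET_ATTRIBUTE, "id", "bar"],
      [APPEND_TEXT, "Hello"],
    ];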

In this particular case, I believe there are 3 reasons:

1) Size - turning each op into a couple of bytes means that the size of your template is significantly smaller. If each instruction is 4 bytes, I could get ~20 instructions in the space of just one function with a setAttribute call:

    function t(e) {e.setAttribute("id", "bar")}

Size is much more important on the web than it is in other places, especially as the next billion people start using the mobile web.

2) Parsing speed - given the size of the JS that templates produce, you start running into JS parsing performance. Just by volume you're going to eat an insane amount of time not just downloading the JS but then trying to turn it into something executable. A correctly implemented bytecode VM could easily beat the cost of parsing.

3) Scheduling - If you just produce raw JS code, you don't have much room to dictate how it executes. Since Glimmer's goal is to never miss a frame, they're going to have to take control of the work that gets executed to make sure that they always pause at a frame boundary. That's a much more straightforward thing to do in a VM, where pausing work is just a matter of yielding the interpreter loop. This gives you complete control over how you schedule the work from the ground up. Have some huge DOM tree to render? Split it across 10 frames without doing a bunch of control inversion.
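As a rough illustration of what yielding at a frame boundary can look like inside an interpreter loop (not Glimmer's actual scheduler):

    // Illustrative interpreter loop that yields before the frame budget runs out.
    function run(vm: { next(): boolean }, budgetMs = 8): void {
      const start = performance.now();
      while (vm.next()) {                      // execute one opcode at a time
        if (performance.now() - start > budgetMs) {
          // Out of budget: resume on the next frame instead of blocking it.
          requestAnimationFrame(() => run(vm, budgetMs));
          return;
        }
      }
    }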

In terms of cost, I haven't looked at their implementation, but I assume these guys did their homework. You can implement interpreters that execute instructions in a few nanoseconds without too much effort. If you really put in the effort, you can get sub-nanosecond, but that's outside the scope of handwritten JS. Even if the overhead were 20x a normal call, the cost of the operations this interpreter is running makes that a rounding error. The DOM is slow, and the other benefits almost assuredly outweigh whatever tiny cost they're paying at the per-instruction level.

There are lots of other potential benefits as well: opportunities for specialized optimizations over the bytecode (you could basically do your own domain specific jit), ease of implementing the base VM for different targets, and so on. There are relatively few times when writing your own interpreter probably makes sense, but it seems like this architecture would give them a ton of headroom to do some great stuff down the line.

For 1) and 2), some frameworks solve the size issue by creating functions for the basic DOM ops, which then get minimized to single character names in production. So in your example, there might be a setNodeAttribute(node, name, val) function, such that the final code isn't `e.setAttribute("id", "bar")` but just `a(e,"id","bar")`, where `a` is the minimized name of setNodeAttribute.

There's really not much noise in that expression, just the "(,,)", so maybe 4 chars that an opcode could save.
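For concreteness, that looks something like this:

    declare const e: Element;  // some element being rendered

    // Before minification:
    function setNodeAttribute(node: Element, name: string, val: string) {
      node.setAttribute(name, val);
    }
    setNodeAttribute(e, "id", "bar");

    // After minification (roughly):
    //   function a(n,t,v){n.setAttribute(t,v)}
    //   a(e,"id","bar");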

As for 3), is that really possible? If you profile most modern frameworks, they're already fast enough that most of the time is spent in rendering, not in javascript DOM manipulation. So even if you cut short your js before 16ms (60fps), you have no idea how long the browser is going to take to render your changes. Plus, the browser will be doing extra work, since it needs to render all the frames in which you've only done part of your updates.

In terms of #3, if you're yielding until the next requestAnimationFrame, the browser is telling you when you have the opportunity to do more work. Is there something that isn't covered by that?

> they're already fast enough that most of the time is spent in rendering, not in javascript DOM manipulation

That hasn't been my experience, but it's been a while since I've benchmarked any of the frameworks in common use. Change tracking, diffing, and then the DOM calls have all been the bulk of the work in large updates. Assuming you're doing those in a DOM fragment, I'm not sure how "rendering time" (I'm taking that to mean compositing and painting?) could be the bottleneck in that scenario.

BTW if you're curious, browser DOM operations have advanced now to the point that doing work in dom fragments is often slower than just doing it right in the main tree. See, for instance, one of the recent optimizations to the vanillajs implementation of js-framework-benchmark: https://github.com/krausest/js-framework-benchmark/commit/2e...

If I understand what Glimmer is proposing, they want to slice a long update process into a set of batches, so that they can pause in the middle to let the browser render a frame. My points were that a) they don't know how long it will take the browser to render that frame, so it's hard to say when to cut off the batch, and b) rendering intermediate states might increase the overall work, sort of a classic throughput vs latency tradeoff.

Paint and composite are usually fast, but calculate styles, layout and hit test may not be. It totally depends on the complexity of the DOM and CSS, of course, but as an extreme example, the js-framework-benchmark tasks are often 90+% time in render. That's why the results converge on 1.00: 0.95 of that is time spent in the browser rendering the DOM, and the time spent in javascript between a framework at 1.00 and one at 1.05 may be 2x difference (0.05 vs 0.10).

Well, Glimmer actually compiles opcodes to just numbers. So it wouldn't be `a(e,"id","bar")`, it's actually [1,'id','bar']. The opcodes' wire format is an array; I haven't seen it be a tree yet. If that's the case, stream parsing is definitely possible.

You could do even better than this if you wanted. Since there are only between 100-200 dom attributes, you can pack that into the opcode itself. Assuming 6 bits for the opcode (64 ops) and 8 bits for the dom attribute (256 attrs), you still have another 18bits to play with in a 32 bit int. If you wanted to allow arbitrary attributes, you could dictionary encode them and then use the full 26 bits for a total of 67M possible attribute names. Alternatively, you can reduce that entire op down to just the opcode by dictionary encoding the operand and packing that in as well. How many programs have more than 262,000 static strings? :)

Compared to the string encoding above we went from 14 chars (1-4 bytes each, we'll just say 2) at 28 bytes to 4 bytes for our 32 bit int.
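A sketch of that kind of packing (the layout here is just for illustration):

    // Illustrative layout: 6-bit opcode | 8-bit attribute index | 18-bit string index.
    function pack(op: number, attr: number, str: number): number {
      return ((op & 0x3f) | ((attr & 0xff) << 6) | ((str & 0x3ffff) << 14)) >>> 0;
    }

    function unpack(word: number): [number, number, number] {
      return [word & 0x3f, (word >>> 6) & 0xff, (word >>> 14) & 0x3ffff];
    }

    const attrs = ["id", "class"];        // table of known DOM attribute names
    const strings = ["bar"];              // dictionary of static strings
    const SET_ATTR = 1;                   // made-up opcode number

    const instr = pack(SET_ATTR, 0, 0);   // SET_ATTR "id" -> "bar", in a single 32-bit int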

Yeah, I was just looking at the Glimmer opcodes format. I'm a bit surprised it's an array of arrays, rather than a flattened array. It looks like it goes `[[1,"id","bar"],[2,<other param>],...]` rather than `[1,"id","bar",2,<other param>,...]`. Wonder why? Monomorphism, or faster dispatch by not having to pass a pc index around? Interesting.

The `a(e,"id","bar")` format is what other frameworks produce. It sounds like in Glimmer it would be `[1,"id","bar"]`. So that's only a single char savings.

Yeah, this is sort of a relic of the initial VM architecture we originally landed in Ember 2.10. That architecture was more like Clojure, i.e. read -> compile -> execute, so the nested arrays are treated as sub-expressions. You are correct that this can be linearized.

Also, this particular format is the "wire format", which is the compact representation that we compile templates into to send to the client.

The client then compiles that representation into flat opcodes, in part by specializing the template based on runtime information (like the exact identity of the components in question).

The runtime opcodes are binary (128 bits apiece at the moment) and optimized for reasonably fast iteration. The wire format is, as chadhietala1 said, not as flat or compact as it could be, but still much more compact than our earlier representations (or the representations of competing rendering engines).

We plan to improve the wire format representation in the near future.
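As an illustration of why a flat, fixed-width representation iterates quickly (made-up layout, not the actual runtime encoding):

    // Made-up encoding: each instruction is four 32-bit slots (op, operand1, operand2, operand3).
    const INSTRUCTION_SIZE = 4;

    type Handler = (a: number, b: number, c: number) => void;

    function execute(program: Uint32Array, handlers: Handler[]): void {
      for (let pc = 0; pc < program.length; pc += INSTRUCTION_SIZE) {
        const op = program[pc];
        handlers[op](program[pc + 1], program[pc + 2], program[pc + 3]);
      }
    }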

Ah, but you can pass it as json and have it render much faster. (If I've understood this thread correctly. Please correct me if I'm wrong)

I don't know the details in this case, but very often, any use of eval() prevents a lot of optimizations. That's because for anything the eval() can see, all static analysis goes out the window. In a dynamic language like javascript, there may have been precious little of it in the first place, but a lot of smart people have figured out a way to scrape together some run-time optimizations based on them. The existence of eval() tends to kill them.

The parent isn't talking about runtime-eval (where you hit the interpreter with strings over and over in a hot loop), but rather "manual JITing", the technique where you take code that would otherwise be very dynamic (looking up method names using variables, etc.), generate a string containing a function definition of one particular concrete specialization of said code, and then eval that string to get a native function handle—just as if said code had been turned into a blob URI and shoved into a <script src=""> attribute. Those functions will then get statically analyzed just like any other functions.
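A generic illustration of the technique (nothing Glimmer-specific), using the Function constructor:

    // Build a specialized function from data, compile it once, then call it like any
    // other function; the engine can analyze and optimize it normally.
    const fields = ["firstName", "lastName"];
    const body = "return " + fields.map(f => "obj." + f).join(' + " " + ') + ";";
    // body === 'return obj.firstName + " " + obj.lastName;'

    const fullName = new Function("obj", body) as (obj: any) => string;

    fullName({ firstName: "Yehuda", lastName: "Katz" });  // "Yehuda Katz"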

It's a fun thing to do and it gives you a lot of insight into what is efficient to implement in a programming language that you don't get by simply using someone else's language.

eval() and Content Security Policy don't mix well, btw.

Why does it need to generate code at all? Why can't it parse the template, create DOM nodes, listen to the "home" binding and do textNode.nodeValue = newVal whenever it changes. What does the bytecode VM provide?
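i.e. something like this, where `onChange` is whatever binding/observer hook the data layer exposes (hypothetical):

    // Naive approach the question describes: build the node once, patch it on change.
    declare function onChange(obj: object, key: string, cb: (newVal: string) => void): void;  // hypothetical hook

    const model = { home: "Home" };
    const textNode = document.createTextNode(model.home);
    document.body.appendChild(textNode);

    onChange(model, "home", (newVal) => {
      textNode.nodeValue = newVal;  // direct DOM update when the binding changes
    });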

This is literally what the article is about.

What IS the glimmer VM? I didn't see any links on the page. Is this it?


Is it JavaScript code or native code?

Here's a playlist of technical videos which explore the Glimmer VM internals:


That is it. It's TypeScript.

OK, my confusion is why it's called a "VM":

> Glimmer is a flexible, low-level rendering pipeline for building a "live" DOM from Handlebars templates that can subsequently be updated cheaply when data changes.

This sounds like something written on top of a JavaScript VM, not a VM itself. Is it implemented with VM-like techniques? What does the instruction set look like?

> This sounds like something written on top of a JavaScript VM, not a VM itself.

You can write VMs in languages that are implemented with a VM.

> Is it implemented with VM-like techniques?

More than that, it is literally a virtual machine.

> What does the instruction set look like?

IIRC, it's a stack-based VM. Here are the opcodes, I believe: https://github.com/glimmerjs/glimmer-vm/blob/master/packages...

(I have mostly a high-level understanding on this, after talking to lots of people and watching presentations; I don't hack on Glimmer myself.)

I like that one of the opcodes is "FixThisBeforeWeMerge".

Looking this over, I can see why my initial reaction of "why aren't you using WebAssembly?" was facile; this really is a VM optimized for templating languages.

I'm reminded of the Bad Old Days when I was a ClearCase admin; we ended up building what my boss at the time called a Revision Control VM on top of it, so that our developers never really learned ClearCase per se; instead, they learned our system that happened to use CC under the hood. Since then, I've treated the term "VM" with a looser interpretation than usual; clearly, I should have applied that here (though it is, strictly speaking, a VM in the classical sense as well).

Thanks. Up to this point I too was scratching my head wondering what the fuss was about -- if this was a JavaScript-in-Typescript VM, or what. It's a custom virtual machine for a domain-specific language for filling templates, right?

Yes, this is a good succinct way to put it. It compiles handlebars templates to its VM, and then as the machine executes, it updates the DOM appropriately. You can think of the Glimmer VM as a VM interface to the DOM rather than the direct DOM interface, and handlebars templates as programs that compile to that VM.

OK so I guess the VM outputs a virtual DOM tree from a handlebars template, rather than text? I don't really see why you can't do that with a simple tree interpreter, but sure why not.

I've implemented template languages in multiple languages. I've also thought of compiling different languages to the same VM with string instructions (IIRC the old Cheetah template language compiles to Python bytecode).

I sort of see where this is going but the docs weren't particularly clear to me. If anything it sounds like there are a lot of other components besides the VM, which weren't really described in the blog post, and I couldn't find any links.

> so I guess the VM outputs a virtual DOM tree from a handlebars template, rather than text?

As the article mentions, this technique is distinct from virtual DOM. The article is light on what specifically it does though. From the article:

> I'll describe the details of this approach in another post, but the short version is that we compile templates into "append-time" opcodes for a bytecode VM. The process of running the "append-time program" produces the "updating program", which is then run every time the inputs change.


> If anything it sounds like there are a lot of other components besides the VM, which weren't really described in the blog post, and I couldn't find any links.

There's a little bit; that is, this post is specifically about Glimmer's VM; Glimmer itself has a few more things, just like React is more than just a virtual DOM implementation. In Glimmer's case, there's glimmer-component, which lets you write web components, and glimmer-application, which lets your register your components into a cohesive whole, etc.

TL;DR: Glimmer as a project is similar to react. It has some significant and novel implementation details that keep it speedy.

OK thanks, that helps. It sounds potentially interesting but in its early stages.

No problem!

It is and it isn't; that is, all of this is extracted from Ember, so it has had a lot more maturity and testing than you might think at first. Not _super_ mature, as it's still relatively new, but it's deployed in lots of big places.

Glimmer intro video, https://youtu.be/i2rwIApjz-4?t=175 might have some answer for you.

Apologies ahead of time if this is a stupid question. I am pretty much the walking stereotype of a web developer with very little experience with anything below javascript/ruby/python/php.

Considering that Glimmer goes quite far in optimizing stuff, at which point would it perhaps make more sense to just emulate HTML elements on a pixel-level?

Or am I vastly underestimating how complex the standard UI elements really are? Or overestimating how much effort went into Glimmer? And if so, as perhaps a more constructive question: are there any areas of 'web' where someone feels a shortcut should be taken?

(On the latter, I'm inclined to feel that WYSIWYG textareas and CSS above class-scope are ripe for 'shortcuts', but perhaps that's opening some cans of whole other worms...)

> Considering that Glimmer goes quite far in optimizing stuff, at which point would it perhaps make more sense to just emulate HTML elements on a pixel-level?

Even if this could be done with reasonable performance (and people have tried this), you shouldn't do this in any browser environment. You can't possibly emulate the behavior of every browser in a satisfactory way; you'll end up behaving gratuitously differently, with subtle breakage of platform conventions, browser conventions, user expectations (including potential extensions), hardware requirements, scaling, rendering, and accessibility.

In addition to what others have said, this would have a significant impact in accessibility and break a lot of the tools used for those with some sort of disability or accessibility issue. You are basically talking about what Flash used to do, and that's not a road we want to revisit.

Whole websites were done in Flash, not long ago. It was not a good idea.

It feels like a bad dream now, but yeah this really did happen. It was awful.

> Or am I vastly underestimating how complex the standard UI elements really are? Or overestimating how much effort went into Glimmer?

Both, especially the first one.

Actually, I'd challenge us to ask ourselves, what do we think emulating HTML elements on a pixel-level actually is exactly? A client-side application rendering layer like X, or Wayland, etc.?

Think of VNC or remote desktop. This is basically an interactive moving image representation of server-side state and logic, which takes inputs from the user in the form of clicks, mouse movements, keyboard input etc. and pipes it to a server which is doing all the rendering and feeding images back to the client.

Now, if you are simply rendering images in the browser that are as good as HTML elements, the only difference between that and VNC really is the presence of client side state, client side event management and response, and somewhat increased interactivity of the client side with the local machine. Otherwise, you are simply collecting user inputs and producing outputs as a function of client/server state interactions and behaviors.

Personally, I see an interesting trend. For example, React seems to be about custom components that have client side local state and event handling, whereas things like Redux seem to be about gracefully handling the interactions between client/server state and behaviors. This same pattern appears in some way in Angular, Vue, etc. - two-way databinding, injection of services representing server-side objects, etc.

Also, consider things like WebAssembly, which will essentially provide an optimized VM in the browser closely linked to Javascript. What do we get when we cross improved client/server side state and behavior interactions plus optimized run-time environment in the client?

I think it's basically like the UI of the application runs in the client (whereas VNC was just an image of the application running), the client handles some of the processing logic and the server handles the rest, often manipulating the view of the user and coordinating that with the user's input.

All this, and it only takes a server, server application, server protocol, client application, client VM, client app/scripting language and client styling.

Looking ahead, this architecture seems to be presented as the future - for example, look at Electron apps or WPF; they actually have a lot in common in some ways. Electron - Node, Chrome, HTML, CSS, JS. WPF - UI host/application main loop, XAML, styling, C#...

I also think isomorphic Javascript concepts (write in one language, this code runs in the client, this code runs on the server) are going to make a massive leap forward when WebAssembly hits (shortly!!!) [1].

Exciting times in my opinion.

[1] https://lists.w3.org/Archives/Public/public-webassembly/2017...

Based on some of the comments about what a VM actually is, I think this 2-part series might help those who are wondering (me included) about the technicalities of building one's own VM:

- (Part 1) https://www.codeproject.com/Articles/43176/How-to-create-you... - (Part 2) https://www.codeproject.com/Articles/61924/How-to-create-you...

I'm a big fan of what I'm seeing in Glimmer so far. I had fun adding support for xcomponent yesterday, making Glimmer components work as cross-domain components.


I'd love it if it were a little more flexible to get started with though. Rather than fire up ember-cli, I'd love to just be able to:

1. Load glimmer.js from a CDN

2. Extend glimmer.Component

3. Render my component into an element

I know ember-cli solves this problem for a lot of users, but I'd love for the number of steps to get a "hello world" working to be as simple as the above. I think that's one of the reasons React has been so successful: it's stupid-simple to get started with, and it draws you in from there.

This is simply not possible with what Glimmer tries to achieve. (Did you even read the article?)

It's not possible to have a slower development build which does template->bytecode conversion on page load?

The point I'm trying to make is that there's already a huge amount of churn around the vast amount of build tooling needed to write any JavaScript these days. Part of making a framework with a low barrier to entry is having the build tooling stay out of your way until you're ready to deploy.

I get that's not necessarily Glimmer/Ember's philosophy, I just think it's a shame that using the framework from day 1 requires buying into all of the technology choices around ember-cli.

Minimising time to first render/interaction is high on the list of what is important for Glimmer. I don't see how this is compatible with moving the conversion engine to the client.

(Not to mention that the app size zealots will eat you alive)

Yeah, that's essential in production. Totally agree. I'm mainly talking about the development experience for first-time users, or for people who just want to throw Glimmer onto one of their apps on localhost and play around with glimmer.Component. The thing that needs to be fast in those scenarios is the developer experience, not the app.

If you want to play around, have a look at Ember Twiddle:


(No Glimmer there yet, only full Ember, but you can get a feel for what it can do, what it is like, test a few things out...)

How do Vue.js's compiled templates compare to Glimmer's?

Is Glimmer VM actually a virtual machine?

The term suggests some kind of Turing complete language with sandboxing and memory management.

Without these features, "domain-specific language" would be a more precise description.

I wonder why the author ignored other big players in those benchmarks such as Angular or Vue.

Someone needs to make a lightweight framework with MobX and Glimmer

Isn't this just re-solving a decades old problem, with the (unnecessary) constraints caused by modern web systems?

While being technically correct, your statement is pretty much pointless.

All development is 'just' re-solving an existing problem with better performance, or 'just' extending an existing algorithm to be more resilient, or 'just' implementing a legacy interface on a new platform.

There are lots of very obvious reasons that the web has the constraints that it does and describing them as 'unnecessary' adds literally nothing useful to that conversation.

All development is just re-solving or extending? That's a sweeping generalization. My own work is definitely the incremental kind, and maybe yours is too --- but surely you'd agree that someone, somewhere in the world is doing truly innovative work.

No, this is false. All "innovation" can be characterized as re-solving or extending. Pick a computing technology, and I'm happy to do it.

Well, of course you can. Pick any human endeavour, including software development, and I can demonstrate that it's just applied philosophy. But it's not a useful exercise, IMO, it's just a reductionist rhetorical device.

I think it's wise to keep ourselves humble -- few of us are blazing new trails, in anything that we do. And I agree that many so-called innovations are nothing of the sort. But if we reduce everything that we do to revision and extension, then the word "innovation" entirely loses meaning. Why would we want to do that?

>All development is 'just' re-solving an existing problem with better performance, or 'just' extending an existing algorithm to be more resilient, or 'just' implementing a legacy interface on a new platform.

Only this is "re-solving an existing problem with worse performance that we had decades ago (on native), but slightly faster than the previous speeds we achieved having it run with its feet tied".

(Where by "feet tied" we refer to the performance penalty imposed by having everything run on the web stack).

Can you explain how "native" solved the problem of performing minimal MVC-style updates to a UI "decades ago"?

Decades ago, we had the Win32 API, which didn't even try to solve this—apps had to roll their own.

>Can you explain how "native" solved the problem of performing minimal MVC-style updates to a UI "decades ago"?

That's not an actual problem people have. Nobody says "what I want is minimal MVC-style updates to a UI". What they want is a fast UI.

So an actual problem is e.g. "having a slow UI" -- and native GUIs solved it "decades ago" by being able to do stuff faster and with less memory and power compared to the web stack.

That said, there were several frameworks that allowed that, especially since MVC didn't magically appear with the web -- the concept originated with Smalltalk. And even "dumb" native GUI frameworks were much closer to the metal than the DOM, and knew how to repaint e.g. only the area of a widget that changed.

> And even "dumb" native GUI frameworks were much closer to the metal than the DOM

GDI wasn't "closer to the metal" than the CSS painting model was. It was reviled for not being close to the metal, in fact (which is why WinG and later DirectX were so important). On Windows NT you had to take a context switch to kernel mode to issue painting commands—how can that be "close to the metal"?

> and knew how to repaint e.g. only the area of a widget that changed

Browsers have been doing this for over a decade.

Actually, I think they should largely stop doing this, since in a double-buffered scenario (which has been the norm since the Vista era) partial repaints become complicated, and GPUs are so good at blitting a window-sized area that it's really a non-issue. The real issues are state changes and overdraw, which native frameworks are exceptionally bad at minimizing (this is largely why Skia-GL is underwhelming on Android) and the declarative CSS model is good at supporting. Native frameworks such as Win32 and GTK are held back by having to support this obsolete model. None of the Win32 designers could have imagined that Z-buffers and early fragment tests would become universal.

A meta-note: Pretty much without fail whenever anybody has mentioned some specific thing that Web browsers supposedly fail to do that native can do (the incorrect idea that browsers don't do partial repaints being just the latest instance of it), browsers have been already doing that thing for years. This indicates to me that most people who complain about "native" vs. "Web" haven't really looked into how the Web works. I'm all for acknowledging the Web's shortcomings, but let's concentrate on real issues. (In my view, legacy browser design, which is largely the direct result of trying to be "native", is the biggest problem, not the Web.)

As mentioned below - Smalltalk. You know, the language MVC was invented on...


Smalltalk isn't native!

It is a matter of implementation.

It was native on the Alto, as the primitives were implemented in microcode, which could be an FPGA nowadays.

Also Squeak and Pharo are implemented in Smalltalk.

When people complain about "Web" vs. "native", they are not talking about the Web versus the Xerox Alto.

Well, I wish the web had been able to achieve what those boring native hypermedia systems from Xerox PARC did, instead of squeezing a VM into a document platform.

Then one of the "RAD" tools from the nineties. Delphi maybe?

If the problems were solved for people on native, there wouldn't be this much focus on reinventing the wheel on mobile.

Clearly for some subset of people, "something" is missing for those native solutions (I can guess - portability, convenient 'moddability', etc) which they believe the standardization-reliant web can better address for them.

Obviously the browser has become the universal VM. It solves the same problem as Java, but with Javascript instead of Java. So this decade Java school became Javascript schools, next decade it will be something else.

This is what I meant!

Yes, but the point here seems to be that once again, the web platform is obstructive and forcing us to build abstractions on abstractions.

As opposed to what, exactly? "abstractions on abstractions" defines all of computing.

As said above, it's not a distinction, thus it's meaningless.

Very obvious? Could you elaborate? I don't mind changing my opinion, but it looks like the web is a mishmash of poor implementations made possible by increasing bandwidth and ram.

The web is the only platform that has solved the sub-100ms install+load problem, which helped it solve the be-on-every-device problem. Solving those problems introduced constraints that did make certain things harder. But IMO it was a good trade-off. Being the biggest software platform in the world certainly suggests it was worth it.

Let me know when Windows gets 100ms install+load and I'll be interested in the rendering model. Until then... measuring install time in seconds, or minutes? You can hardly claim to be performance oriented.

When you can run a real CAD program on a phone, I'll be interested.

The web is for disposable toys, uninteresting ones at that. All of Windows 3.1 could fit in less space than the Apple website. You're telling me a silent install of Windows 3.1 would take more time to install than...Some app to find local hookups by GPS?

If you can conform to those constraints, you can securely ship UI code that automatically runs on computers all over the world, which wasn't a thing before the web.

Smalltalk-80 had a platform independent VM in 1980. If you want to be more recent, Java. Why is there so much confusion about how the web fits into things?

The Smalltalk-80 VM is certainly interesting but as far as I know it wasn't allowing users to run untrusted, possibly malicious code. Taking security seriously came later.

Java was a pioneer with shipping untrusted code to users in a sandbox, but it turns out its sandbox wasn't secure, and there were other UI issues as well. (Similarly with Flash, though that was more of a mobile performance issue.)

To be fair, the early web had a lot of security problems. What we have now is what's left after a lot of hardening. And we still have a lot of security issues.

But anyway, if you use JavaScript this isn't really your problem. Browsers take care of sandboxing and deployment for you. Nothing wrong with building on the hard work of others.

Java applets on the Web were tried! They failed largely due to performance.

The performance problems were because of the way that specific VM was designed, not because it was a VM. You can easily have sub-millisecond initialization time on a VM if you make it a design goal.

Modern web is just as shitty, we just have faster computers.

What's wrong with re-solving an existing problem? Isn't that the entire idea of "fearless refactoring" that several process models use to mitigate reliance on/complacence with archaic systems?

I don't think "it's been done before" has ever in the history of progress been a good reason for something to not be reexamined and made better or easier to use.

It used to be called mail merge. Now that there are so many abstraction layers, it's apparently best performance-wise if you add a VM running inside a VM running inside a VM.

It's not correct to call this a VM. It's an optimized template language/renderer.

I'd like to see an example of the same static gzipped HTML content delivered to the browser and a template rendered with this.

I have doubts that this is any more efficient than the browser's built-in gzip decoder on a time basis, nor do I think it beats gzip in terms of space efficiency.

It's hard to take seriously those who claim to be optimizing things when they don't have any measurements on what they are trying to improve vs. the new method.

Why not? They have byte code (sort of) that translates to actions...

I don't get the gzip comparison because this is about updating views in an SPA framework (Ember)
