Web frameworks are transforming from runtime libraries into optimizing compilers (tomdale.net)
436 points by p4bl0 40 days ago | 227 comments



"The trend started by minifiers like UglifyJS and continued by transpilers like Babel will only accelerate."

Those were predated by Closure Compiler, which was started with Gmail, and Closure Compiler in some ways still doesn't have an equal. Besides being a fully optimizing compiler, it also has a module system for code splitting, and optimization passes designed to move code at the method/property level from initially loaded code into deferred loaded code if it is detected to only be used later.

You can see this in action with photos.google.com, which splits every UI component, model, and controller into a dependency tree of modules that are aggressively optimized. When an action needs to fire, only the transitive dependencies of the code needed to handle it are loaded.
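A toy sketch of that demand-loading idea (the module names and registry shape here are hypothetical illustrations, not Closure's actual API):

```javascript
// Hypothetical module registry: each entry names its dependencies and an
// initializer, standing in for a compiled, deferred-loaded chunk.
const registry = {
  photoEditor: { deps: ['imageUtils'], init: () => 'editor ready' },
  imageUtils: { deps: [], init: () => 'utils ready' },
  sharingDialog: { deps: [], init: () => 'sharing ready' }, // never touched below
};

const loaded = new Set();

function load(name) {
  // Load transitive dependencies first, each exactly once.
  registry[name].deps.forEach((dep) => load(dep));
  if (!loaded.has(name)) {
    loaded.add(name);
    registry[name].init();
  }
}

// Firing the "edit photo" action pulls in only its transitive closure;
// sharingDialog stays unloaded until something actually needs it.
load('photoEditor');
console.log([...loaded]); // sharingDialog is absent
```

The real system is of course far more sophisticated (it moves individual methods between chunks), but the loading discipline is the same shape.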

The closest external version of this is Malte Ubl's "Splittables" for Closure Compiler (https://medium.com/@cramforce/introducing-splittable-1c882ba...)


This seems to be something of a trend in the software library and tool world:

A problem emerges. Some early, sharp thinkers create tools which attack the problem in a high level, abstract, powerful way... but also with the rough edges that come with being early. These tools get a few adopters, but not that many because it is difficult to see the value versus the obvious rough edges.

Then other sharp thinkers, put off by those rough edges, create N new waves of tools which attack the problem less effectively, but more accessibly.

Then eventually, some years later, common practice among the waves of tools circles back around and starts doing the things that early wave started.

It's easy to criticize the "daily new JavaScript framework" phenomenon along these lines, but I don't think it is specific to JavaScript, nor specific to 2017, and not even specific to software development. It's just part of how we humans work.


Somewhat relatedly - when a new technology wave gets started, it's usually proprietary, because there are big competitive advantages that can be had by being the only one with possession of a new technology. As details leak out and enterprising independent hackers get interested, commodity or open-source clones emerge, and they get much wider distribution simply because they're cheap or free. The original innovator is caught flat-footed, loses mindshare, and eventually has to adopt the open-source solution when it becomes dominant in the open-source world.

Altair -> Apple -> IBM -> Dell & clones. UNIX -> System V -> BSD -> Linux. MapReduce -> Hadoop. GMail -> Closure Compiler -> Traceur -> Babel. Google datacenters -> AWS -> Docker -> Kubernetes.


This is a very strange example. Gmail, Closure, and Traceur are all Google products.


And Closure and Traceur are both open source.


And Babel's the one people actually use.

Within the progression, each new generation is decreasingly tied to a proprietary, revenue-generating product, and increasingly intended for outside consumption. So GMail was completely proprietary and closed off from the outside world. The JS library, compiler, and optimizer from GMail was spun off into Closure, which was widely used across Google products and finally open-sourced 5ish years later, but never got widespread external use because it's very tightly tied to the style of JS development within Google. The idea of JS transpilation was proven by Closure, and Traceur was developed from the outset to be open-sourced and not tied to Google products - but it was developed by a member of the Chrome team, and so the developer perspective was that of how internal Google engineers think, and the development process wasn't really inclusive of outside engineers. Finally, Babel was developed entirely outside Google, based on ideas that had been proven out and were widely disseminated, but it was developed from the start with an open community process.


As someone who had some amateur experience with C++ roughly 10 years ago, I felt a sort of déjà vu with JavaScript. Transpilers, aggressive optimization, automation scripts: none of it was too unfamiliar to me.

Webpack is roughly a library linker. Babel is like GCC; a Gruntfile ~= a makefile. I think it's coming of age, but the JS ecosystem is still undergoing growing pains, plus it can be overwhelming to newbies. People are paralyzed by too much choice. Having IDEs like WebStorm would help at least tame or organize the workflow.


Babel is more like cfront, transpiling from a superset down to an older base set of language features. Edit: Maybe you meant to compare it to macro pre-processing, which is often set up in makefiles. But it's a shim loader.


Reading your comment reminded me of a classic piece [1] about a similar phenomenon.

[1] http://discuss.joelonsoftware.com/default.asp?joel.3.219431....


I think some of this might be selection bias. Closure Compiler is popular; plenty of not-so-sexy projects have used it since its inception to a great degree of success. I seem to see it with some consistency in the enterprise world. It's just never been popular with the HN / startup crowd.


This was also predated by the Opa "framework" which is indeed a compiler for the web platform, announced in 2009 and released a bit later.

At the time, people didn't want to hear about a new language so we had to hide behind a framework. But one of the ideas was that there is no sense in bundling libraries at runtime instead of generating exactly the application code that should run.

Cf. http://opalang.org (disclaimer: I was the project creator)


Yup, Opa is pretty nice. It just highlights the difference between "compilers" and "transpilers", which came up in another thread recently. There are a number of transpilers that are fairly simplistic transforms of source languages, or JS->JS translation (lowering ES7/6 to ES5, etc.), which do limited whole-program optimization and mostly consist of peephole passes.

Closure, Opa, GWT, Dart, and a few others (ScalaJS?) run fully optimizing compiler passes like a traditional old school compiler. For example, IIRC, Dart transforms (or used to) to Hydrogen IR/SSA internally, runs lots of optimizations, and then transforms back before JS output.

This is not to bash transpilers, but I don't consider Webpack/Babel/Uglify in the same category.


I was just thinking about Opa the other day - it looked nice but I never used it. People were worried about debugging through the layers of abstraction. Is it still "alive"? You write in the past tense...


It is not really alive as of now. But there is still space for a full-stack language, and Opa is still relevant today. MLstate, the startup behind Opa, shifted to secure communication platforms (built with Opa) and was acquired last year, but without Opa itself.

Many things we did became hype later (implemented in OCaml, a functional language; JSX before React - I know Jordan played with Opa before building React) and we still have bits that are missing in today's stacks. The ideal next step would be to join a foundation so that development on the project could resume.


Tierless programming languages did not start nor end with Opa. Eliom is very much alive and kicking (disclaimer: I'm finishing my PhD on the topic), as well as Hop, Split.js, Ur/Web, Links, WebSharper, ...

I was always very disappointed by the fact that the inner workings of Opa were never described in any way. I can't even really cite it in my PhD thesis, because the documentation for the early (and more interesting, imho) versions has completely disappeared.



What is present in Opa that is missing in other stacks?


The compiler for GWT has done a lot of this for years (and its functionality might have fed into Closure). It's arguably different because it compiles from Java to JavaScript as a first pass. It then does a lot of whole-program optimization, including dead code elimination, lifting virtual method calls to static ones, inlining, reordering code to compress better, and analyzing the call graph to split code. (You have to be explicit about the split points in the Java code, so the analysis is mainly for doing the code splitting and for coalescing components that are small or would force a lot of code into the common segment.)

I believe GWT is moving to Closure for the latter optimization passes in future releases.


Closure Compiler is very, very good at producing small code.

It also takes a fair bit of manual effort unless it's handled for you (e.g. ClojureScript, Scala.js).

Output modules in particular demand some very particular attention.


Douglas Crockford is the guy who pushed JavaScript into this world. He wrote the first JSLint, which put JavaScript in the build pipeline on its own. Also, the YUI Compressor and minifier are old enough to be deprecated.


The Closure Compiler is the only reason I have Java installed, it's that good. And IMHO, a lot of the bloated JS projects nowadays would really profit from its dead code removal...


In case you haven't heard, you can now use it without Java https://developers.googleblog.com/2016/08/closure-compiler-i...


That's actually really amazing - and I didn't know that, thank you!


It's extremely slow compared to the Java version though.


I think it's about 2x slower.


They're the same in name only, not in perf and not in output.

https://github.com/Rich-Harris/butternut/issues/35


I don't see where in that issue it says the output changes from Java to JS, other than the runtime option?


The Google Closure compiler and library are used as targets by the ClojureScript compiler https://clojurescript.org/about/closure


I remember the first publicly available one being Dojo's minifier, which used Mozilla's Rhino to determine which symbols could be safely obfuscated. It's primitive by today's standards, but it was a huge step forward at the time.


Those features are supported in Webpack and other bundlers.

Code splitting: https://webpack.js.org/guides/code-splitting/

Tree shaking: https://webpack.js.org/guides/tree-shaking/
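For reference, both features are driven from the webpack config plus ES module syntax. A minimal sketch (the entry path is hypothetical, and the option names follow recent webpack versions, not necessarily what the 2017-era releases used):

```javascript
// webpack.config.js: a minimal sketch of enabling code splitting and
// tree shaking. Both features also require writing ES modules, since
// unused-export analysis depends on static import/export syntax.
module.exports = {
  entry: './src/index.js',
  optimization: {
    splitChunks: { chunks: 'all' }, // code splitting: factor shared modules into common chunks
    usedExports: true,              // tree shaking: flag unused exports for the minifier to drop
  },
};
```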


It's not quite the same. It has some aspects of Google's module system in terms of de-duping (our internal version automatically splits up all code referenced in more than one place into a tree of synthetic modules). But Closure's optimizations go far beyond the "tree shaking" described here, which is really about pruning unused module dependencies.

Closure does pruning of methods, properties, etc. at a fine-grained level; it moves code at a per-method level between modules; it computes whether functions have side effects and elides unused side-effect-free code; it does type-based optimizations (https://github.com/google/closure-compiler/wiki/Type-Based-P...); the list goes on and on. It's really like having GCC or LLVM for JavaScript code. That's one reason why it's slow: it does far more than the transpilers and minifiers out there.
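A hand-made before/after illustration of that kind of pruning (the class is made up, and the comments describe what Closure would do, not actual compiler output):

```javascript
/** @constructor */
function Photo(url) { this.url = url; }

// Never called anywhere in the program, and provably side-effect-free:
// under ADVANCED_OPTIMIZATIONS Closure can delete this entire definition.
Photo.prototype.unusedChecksum = function () {
  return this.url.length * 31;
};

// Reachable from the call below, so it survives (with `url` and `render`
// renamed to short identifiers in the real output).
Photo.prototype.render = function () {
  return '<img src="' + this.url + '">';
};

console.log(new Photo('a.jpg').render()); // <img src="a.jpg">
```

Ordinary minifiers keep `unusedChecksum` because they can't prove no one ever reads that property; Closure's whole-program property analysis is what makes the deletion safe.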

(As mentioned in another comment, GWT does many of the same things, being a fully optimizing true compiler.)


I want to add that you can easily write your own compile pass for the closure compiler as well. All the optimisations described above are just compile passes.


React moved some parts from Uglify to Closure recently. They're only compiling using "simple" ("none" and "advanced" are the other options), but they're still seeing benefits: https://github.com/facebook/react/pull/10236


https://github.com/webpack/webpack/issues/2867 - "Tree shaking completely broken?"

It turns out that it basically supported "skip loading of unnecessary ES6 modules", so it is kind of like a "warn: unused import" turned into a silent no-op.

Now with Uglify getting the "pure" annotation, and libraries sprinkling that onto their code, it'll help a bit.
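For context, the annotation looks like this (the function names are hypothetical):

```javascript
function createLogger() { return { log: console.log }; }

// Annotated: Uglify/Terser may treat the call as side-effect-free and
// delete this whole line, since `unusedLogger` is never referenced again.
const unusedLogger = /*#__PURE__*/ createLogger();

// Unannotated and unused: the call must still be kept, because the
// minifier can't prove createLogger() has no side effects.
createLogger();

console.log(typeof createLogger().log); // function
```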


Only in a very primitive way.


In the beginning there was GWT. But along with Java, GWT has a steep learning curve so its appeal is limited.


Svelte (https://svelte.technology/) is one similar idea. It's a framework that compiles down to plain JS before being shipped out to browsers.

There’s a couple of concerns though:

#1

How are we going to manage browser inconsistencies? We are in a much better place than we were when jQuery came out, yet we aren't 100% there either. Browser inconsistencies still do exist, and runtime frameworks try to deal with them.

#2

I’ll take Svelte as an example. Say, a component written in Svelte is 2 lines long. The compiled JS file of that component is around 200 lines long (verified). Out of these 200 lines of code, almost a hundred lines are repeating units that occur in every component file.

By the time my app reaches a hundred components, this extra 100 lines gets multiplied a hundred times. This bloat is exactly what runtime frameworks let us avoid.

So, let’s think of the simplest solution.

Why not ship these repeating units of code as one module/set/block of code that gets reused in different places? But wait, isn't that block of code what we call "Runtime Frameworks"?

I still think compile-time frameworks are the best bet we’ve got. Would be helpful if somebody can throw light on these aspects though.


Svelte author here. If you're using a bundler integration like rollup-plugin-svelte or svelte-loader (for webpack), those repeating lines of code are deduplicated across components. There's a bare minimum of repetition, and we're in the process of reducing it further. You'd be surprised at just how well it scales!

Browser inconsistencies are much less of an issue than they used to be. There are only a couple of places (e.g. listening for both 'input' and 'change' events on <input type='range'>, to satisfy IE and everyone else) where we need to accommodate differences.
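A sketch of that workaround (the helper name is mine, not Svelte's actual generated code):

```javascript
// IE fires only 'change' while a range slider is being dragged; other
// browsers fire 'input'. Registering the handler for both covers everyone.
// Note the handler may fire twice at the end of a drag in some browsers,
// so real code should make the handler idempotent.
function onRangeChange(el, handler) {
  el.addEventListener('input', handler);
  el.addEventListener('change', handler);
}
```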


Just wanted to say that I've been using svelte a bit at work to create widgets that our clients can embed on their websites, and I absolutely love it. Creating svelte components is super easy and consuming them is even easier.

Thank you for having the vision, thank you for making it, and thank you for continuing to improve it.


Some of these concerns were already mentioned in HN when Svelte itself got discussed (https://news.ycombinator.com/item?id=13069841), and the suggested solutions for problem #2 were (kudos to user callumlocke):

- Bundling and gzipping several Svelte components together might compress well – a lot of their size comes from repetitive substrings like `.parentNode.removeChild` and `.setAttribute` etc.

- Once downloaded, the Svelte approach would probably be faster than React (at both rendering and updating) and would use less memory (no virtual DOM, no diffing, just fast granular updates).

- The self-contained nature of Svelte components makes it easier to treat them as atomic downloads and use them as needed. For example, you could get to a working UI extremely fast, and then download more components for below-the-fold or other pages in the background. This could work well with HTTP/2.


According to js-framework-benchmark (http://www.stefankrause.net/wp/?p=431) it doesn't just use much less memory than React, it uses less memory than any other framework, because we don't have the overhead of a virtual DOM. And yes, it's significantly faster.

I answered the deduplication point in another reply to the parent, but you make a great point about code-splitting. If you're using a conventional runtime framework, it doesn't matter how aggressively you code-split: your smallest chunk will be at least as large as your framework. Self-contained components solve that problem.


That is fantastic, thanks for the clarification. The problem with calling self-contained components on demand is more of a mindset one: we've grown too used to compiling everything into one single file and calling it a day.


Thanks for sharing!

That's a lot of good news. Yet I find it unsettling that there's always going to be a minimal (after compressing/gzipping) amount of repeating code for every component.

>The self-contained nature of Svelte components makes it easier to treat them as atomic downloads and use them as needed. For example, you could get to a working UI extremely fast, and then download more components for below-the-fold or other pages in the background. This could work well with HTTP/2.

This is a huge win though - if I can load the first component extremely fast with no extra code.


Check Svelte out (https://svelte.technology/). From the docs: "... rather than interpreting your application code at run time, your app is converted into ideal JavaScript at build time. That means you don't pay the performance cost of the framework's abstractions, or incur a penalty when your app first loads."


Svelte is quite appealing, especially for the "reusable widget" layer of web development. But there are two areas where there's probably room for something else to swoop in:

* Pervasively typed with TypeScript, written in TypeScript, completely first-class. (It is very easy to consume a TypeScript codebase from JavaScript, but tacking types onto something written without them almost always yields a much lesser experience.)

* Output standards-compliant web components by default. (Perhaps with room to also output some other variation of component, if there is some API aspect where that helps.)

(Distraction for another day: I think the future of SPA development will be web components at the widget/component level, while application frameworks start evolving upward to provide more features around building applications - because there is less for them to do around building components.)


Svelte does now offer an option to compile directly to web components, though it's experimental at this time. You could give StencilJS a go - it's web components through and through, and uses TypeScript.


This is how we're approaching it with Stencil: https://stenciljs.com/


Whilst I am in favour of the optimisation of resources (and use these tools), we are willingly moving the "View Source" model to for-profit entities like GitHub.

In the way the internet has evolved, we really should look hard at the elephant in the room: the DOM is for documents - not for interfaces.

IMHO optimisation for JavaScript is just a short-term fad (hopefully), and browsers will adopt a more open approach to interface building. This should lead to "View Source" on an open standard for interface building - which I think React comes close to. I know there are projects around that attempt this, but I am not aware of any real successes - at least not in the same way React has grown.

The elegance of HTML has been superseded now, because interfaces are rich and packed with features. The DOM design is lacking and slow because of legacy support.

If the whole community effort were spent creating a better UI/UX standard (a standard web front end), compilers would have no business case - and I'm not sure they ever should have.


> If the whole community effort was spent creating a better UI/UX standard (a standard web front end)

What do you mean by standard? HTML already has a UX/UI standard (technically a standard, with some deviations in design implementation depending on the browser). A <button /> is always a button and looks like a button in any browser.

The problem is that it looks awful if you are not Stallman. Both Android and iOS have been improving and refining (visually!) their UI elements, but this has not been done in the web, basically because there isn't a huge company calling the shots behind it.

If you want to improve that, we are heading in that direction with Web Components, but at the end of the day it's just an abstraction layer over the DOM elements. And I'm not sure I want Google leading the way (with Polymer).

EDIT: Some grammar.


> Both Android and iOS have been improving and refining (visually!) their UI elements, but this has not been done in the web, basically because there isn't a huge company calling the shots behind it.

I don't know about the other browsers, but Firefox defaults to displaying buttons the same way the OS does, so any visual refinement is adopted automatically. Of course the majority of websites override this to get full control over the look, which I find to be quite superfluous in most cases. The text field I'm typing this in and the "reply" button below it work perfectly fine without any fancy styling.


HTML has a UX/UI standard for presenting documents. Applications, however, are not explicitly catered for. HTML standards for web application development are a bit like putting F1 wheels on a bus.

For example : a dashboard is a common interface pattern, but this would (and should) probably never enter the Document Object space.


I think you're preoccupied by the word "document". There are many ways that applications are catered for. WebSockets are of no use to documents, only applications, as one example.


Websockets are of use to live documents.

Application stacks are different.

The definition is extremely important as the current level of interaction richness was never really envisaged in 1996.

A document is a resource that is available specifically over http or https. An application is something entirely different as it can be served via a huge array of protocols and terminated and rendered entirely differently.

The separation of concerns here is my need, not the terminology.

For example, how much time have programmers spent globally repeatedly creating and implementing a user login and password reset flow?

This is simply not the concern of the Document Object Model and it means that we have to put up with a huge array of home brewed solutions that each have their own weaknesses. Imagine the same effort and logic applied to routers or tcp/ip for that matter. The internet would simply not work or break.

My point is - there has been a land grab over the UI / UX space that has meant standards like web components are marginalised in favour of 'frameworks' and compilers.

Seems like an awful lot of duplicated effort to reach the same result.


> Whilst I am in favour of the optimisation of resources (and use these sources) we are willingly moving the "View Source" model to for profit entities like github.

That ship sailed when we 1) made JS too capable (the ability to initiate requests, the ability to handle requests instead of the browser, access to too many user actions) and 2) didn't improve the standard input and UI elements enough (a sortable, searchable table should be built in, for example).


Honest question: apart from nerds, what reason do we have for valuing 'View Source'?

I mean, you can't 'View Source' anything of the computer the browser is running on, or the browser itself.


Besides the educational value mentioned elsewhere, by browsing the web you're promiscuously executing code sent by random strangers, with whom you have no long-term relationship. It's important to be able to review what comes from them. View Source and dev panel are, and forever should be, an important part of the browser.


How do non-nerds become nerds? "View source" can be the lid of Pandora's box for lots of young people who become inspired to be developers.

Also, the web is not just for nerds! HTML is a language that lots of everyday people can write now - it is empowering.


We were already nerds for a few decades before the browser was invented.


And we (my generation) weren't!

"View Source" is not the only way to start on your way to nerddom, but it is, or at least was at some point in time, an important one.


I find computer magazines and programming books more relevant, and those aren't going away.


An opposing data point: I can't recall the last time I used a physical programming book for anything other than a monitor riser (being so thick, they're fantastic for this). And I am from the generation where those books were how you learned to program.

Every resource I consume today is either in HTML or PDF form.


Maybe, but why not all of that? And some more?

Two other things that helped push me into the nerd side when I was a kid:

- QBasic shipping with the OS on my first computer

- Video games that shipped with map editors (Abuse, StarCraft, Unreal Tournament - the latter actually gave access to most of the source code through it)

I'm in favour of everything that exposes the inner workings of computers. Software, in particular, shouldn't be a black box, even if 99% of the time everyone (developers included) treat it as such.


What about the newer nerds who don't have grey beards?


Take a nerd class in high school. Go to nerd university.


None apart from that... but the freedom to easily tinker and understand what's going on has massive impacts on the industry in the long term.

Technology is becoming easier to use, but also more sophisticated and more of a black box, so it is much harder to understand or replicate. Higher barriers to entry in the market mean more power to existing gatekeepers. Fewer and fewer people will understand enough to wield influence, and those people will be employed by big companies for tons of money. It shifts the balance of power, and only a select few people can expect to have a big impact.

Simple example - imagine Google was as easy to replicate as it was in the very beginning. If you disliked the direction ads & privacy are heading, maybe 4-5 people could make their own search engine just as good with more privacy rights and less creepy tracking. Now it will take 100 people and billions of dollars to even get close...

barriers to entry go up => resources required to compete with existing players go way up => existing players get more power => existing players abuse power. Ahh the business cycle.


Most things suck. Interactive environments where the user can inspect the software they are running, and even change it on the fly, are what all computing environments ought to be like. We should be empowering computer users and making it easy to look under the hood.


> I mean, you can't 'View Source' anything of the computer the browser is running on, or the browser itself.

Says you:-)

When I'm running SLIME in emacs talking to SBCL, I can type M-. to look up definitions of SLIME elisp, Swank Lisp and so forth, all the way into emacs's C core and the SBCL internals (more Lisp code). This sort of capability has existed since the 80s at least, and maybe even further back.

Yes, yes, I know: I sound like a broken record, extolling the capabilities dynamic systems give one. But they're awesome, and they've been around forever, and they're well-tested, and they perform well enough. It really is a shame the tech world keeps on chasing the new & shiny (and half-baked) instead of improving the wonderful stuff we've had for decades.


Learning: I learned a big deal of web development (HTML, CSS and JS) from view-source. Tweaking: I sometimes fix broken closed-source/proprietary applications or websites by checking the source and writing a user script. Those two can probably fall into the "Nerd" category, though...


When I was a kid and started doing that, I thought that the compiled scripts were written like this, and I was fascinated (and terrified) by how complex they were.


I recall being both impressed by the complexity and either annoyed or impressed by how well formatted the raw HTML output was.

"How can they possibly stand to work on this when it's all jumbled together on one line??"

I would manually pretty-print so I could actually read what was going on, and now I still like to output tidier HTML when I can, even if no one is ever going to see it.


The Web has more reach than any publishing medium in human history (number and variety of consumers, producers, devices, industries, languages, etc.) Many features combine to make it so, and "view source" is one of them.


Presumably, the parent commenter laments that you cannot easily decipher the code that drives today's webapps, whereas yester-decade's webapps' view-source was simple to decipher (and, presumably, learn/copy from).


> Whilst I am in favour of the optimisation of resources (and use these sources) we are willingly moving the "View Source" model to for profit entities like github.

It's not necessarily an either/or. We invented sourcemaps so that we can continue to debug the code we write. There is nothing stopping us from copying our sourcemaps and source files to production for ourselves (let's face it, we all sometimes try to debug production) and even for those next-generation programmers who get curious how we build what we build.
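A sketch of what that can look like with webpack (`devtool: 'source-map'` is a real option; the file paths and names are illustrative):

```javascript
// webpack.config.js: ship full source maps alongside the production
// bundle, so dev tools can map the minified output back to the
// original source - a modern stand-in for "View Source".
module.exports = {
  entry: './src/index.js',
  output: { filename: 'app.js' },
  devtool: 'source-map', // emits app.js.map, referenced by a comment in app.js
};
```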


I think the DOM, and really the web as a whole, is a really incredible experiment in blurring the lines between documents and interfaces, which really are just two arbitrary delineations of "what a computer can do".

So for example, the DOM is what enabled newspapers to come online, and now it's what's turning newspapers into interactive online narrative machines. I'm typing into a hybrid document-application right now.


> optimisation for javascript is just a short term fad

WebAssembly should offer the best of both worlds...optimized for performance and with at least an answer to the view source complaint. And, perhaps the best benefit, the ability to write in a language other than JavaScript. These compilers have turned a (semi-)human-readable programming language into what is essentially a textual binary format. It makes sense to just embrace that reality with a fully-binary format.


"Whilst I am in favour of the optimisation of resources (and use these sources) we are willingly moving the "View Source" model to for profit entities like github."

I don't see this as a problem. They're providing a service, and doing it well. Why should they not be rewarded for that?


> a small 40MB iOS app

What a sad, sad world we find ourselves to live in.


Indeed, we no longer worry about food, safety and survival and instead our sadness is directed at irrelevant application binary file sizes.


The sadness does not stem from the size of a random binary file per se. The sad part is the prevalence of attitudes deemed unprofessional among people of a certain profession.

It's sad, for example, with all the technologies available to us, to imagine what we could have and compare it with what we do have.


It wouldn't help. If we spent more engineering effort on efficiencies that don't have a substantive effect on consumer response, we're just being inefficient with our time. That will result in less capable software, higher bug counts, or higher software prices. Software is written with budgets, and nothing is free.


How would you like it if a car mechanic left extra garbage in your car because he had no time to clean it up - and you don't know what's underneath the hood anyway, so who cares?

It's kind of hard to trust someone with that attitude, don't you think?

Yet we waste CPU/RAM/disk space on consumers' devices while hardly giving it a second thought.


Fantastic way of putting it. Developers shirk responsibility too easily.


This is more like: what if Ford/GM left additional material in the car that didn't need to be there? Those are weight/dynamics/fuel optimizations left on the table.


That's only true if you believe focusing on things with a "substantive effect on consumer response" is the right thing to do. It sounds like a good heuristic in theory, but practice clearly shows that we ought to be able to do better than that.

I mean, if only fraction of the effort that goes towards making things shiny and sexy went towards making them efficient and actually useful, the computing world would be better, for no loss in actual capabilities (and likely great gains).


> That's only true if you believe focusing on things with a "substantive effect on consumer response" is the right thing to do.

I like eating. Which means I like getting paid. Which means the companies I work for have to focus on things like "substantive effect on consumer response" to keep getting money to pay me.


> I like eating. Which means I like getting paid.

I'd love to pay you (by buying your app), but because everyone was focused on things that generate a "substantive effect on consumer response" I simply don't have the space on my phone to install your app.


I like at least making the attempt to earn the title "engineer".


Adding this to the list of anecdotes about why capitalism must end.


We can be sad at more than one thing at a time. I am not overly sad about binary file sizes myself, but I wouldn't say they are irrelevant just because other issues exist, and the original comment is just using a turn of phrase. I doubt they really think this is cause for great concern.


>We can be sad at more than one thing at a time.

Only if it makes sense as a concern. Else we're just being grumpy.


Hardly irrelevant. It's the very people who do worry about food, safety, survival for whom app binary size matters the most. Your users live everywhere on Earth that has a network connection, and in some of those places there are millions or more who consider that connection a scarce, valuable resource.


It becomes relevant when one doesn't intentionally interpret it at the wrong level of Maslow's hierarchy.


Maslow? You mean the guy behind the obsolete stuff? Or the guy who let another one torture monkeys?

http://journals.sagepub.com/doi/abs/10.1177/135050849743004

https://en.wikipedia.org/wiki/Harry_Harlow


People need to stop pulling these "well what about x?" red herrings in an attempt to undermine the discussion.


People also need to stop complaining about things that really are not a problem.


It really is a problem: I'm out of space on my phone due to bloat, and it prevents me from installing new apps. Sometimes just to update I have to get rid of apps. If you make money off apps, this should concern you.

The next casualty is going to be the local sports league; their app has bloated to 130MB just to show news, ladders, and videos. After I uninstall it I'll be much less engaged with the product they are trying to sell me, all because of a bloated binary.


I wonder whether the idea of “waste” is triggered by different things for different people, and evokes a moral response.


> we no longer worry about food, safety and survival

Do "we"? You are denying the existence of millions or billions of people who worry about food, safety and survival.

Including the ones that work very hard at manufacturing phones so powerful that you don't care about the size of that 40MB app.


The vast majority of people who have the luxury and time to comment on an internet message board on a discussion about the size of binaries are not having to worry about that.


Drop every other issue right now! /s


According to https://sweetpricing.com/blog/2017/02/average-app-file-size/, the average iOS app file size is 38MB. It's much higher for more highly downloaded (and feature-rich) apps; it's probably higher for recent apps.

So 40MB for an iOS app may or may not count as “small”, depending on the comparison set.

But it is small compared to the historical size of applications.

40MB is 0.5% of the storage of an 8GB iPhone.

My first computer, circa 1978, was a TRS-80 with 15KB RAM (+ 1KB for the screen). An application that used 0.5% of its RAM would have taken 75 bytes. I've written but not distributed functionality in programs of that size. I would classify it as small.

An application that used 0.5% the RAM of the original 128K Macintosh would have taken 640 bytes. An application that used 0.5% of its 400K removable storage would be 2K. I suspect there were a lot of 2K applications (bundled as Desk Accessories), but this was considered small at the time.

During my lifetime there's been an explosion in the number, functionality, and quality (as measured in every way other than absolute byte size) of applications, that may be hard to appreciate if you either haven't lived through it, or are focused on one metric. It's pretty cool.


The Facebook app on my iPhone is reporting 1.02 GB. That's the most space used, with the exception of storage-based applications like photos/music.


I knew it was big, but didn't think it was that bad.

I'm not familiar with iOS; does that figure include the app's locally cached data and/or embedded resources (I imagine retina images and icons could add up quickly), or is that mostly Facebook's infamously bloated codebase?


Yeah that includes cached resources. But in my opinion that's even worse since it's not an app that I would anticipate needing so much cached information.

Music or Photo applications I would expect to grow in size with use. But for something like Facebook to grow 5x in size on my phone with use seems rather disingenuous. Like "hey we reduced our upfront app size by grabbing assets after install!".

Maybe I'm misunderstanding what's happening here?


No, you're right. His 1GB is mostly cached data. The app is more like 200MB.


Instead of pixelated graphics now we get high-res images and 1080p videos.

What a sad, sad world we find ourselves to live in.


What is sad about this? Small sizes are nice but as a consumer I don't care much. If a larger file size means that apps can be produced easier then I have zero objections.


Because for many consumers, 40MB where it could be 4MB means e.g. 5 minutes of download instead of 30 seconds, or 4% of their data plan instead of 0.4%.

And waste is kind of like honesty - if you are wasteful with this, you're probably wasteful with everything else. Like with storage usage, which is a hard constraint on anything but top-of-the-line mobile phones. Like with network usage. Like with energy usage, which accumulates over everyone making wasteful software and adds up into lots of unnecessary emissions.

As a consumer, I don't care about who is first to the market. Take extra time to make your application not suck. It doesn't take that much more.


How do you know the 40MB download wasn't aggressively optimized down from 400MB? Equating larger file size with lower quality is presumptuous. Is there any known correlation? Equating larger file size with lower honesty seems dishonest to me.

Optimizing takes time, sometimes a lot of time. I know because I've spent a lot of time optimizing file sizes to fit on game consoles, taking Xbox 360 games over to the Wii for example. If you optimize aggressively when there's no strong need to, when consumers like the one you replied to don't really care and won't change their buying habits, you are prematurely optimizing. That's a significant waste of precious time (and money) for a developer.

We (humans) waste a lot of things that are much more important than people's data plans. Gasoline and food, for examples. Why haven't we reduced those 10x? It is possible.


> How do you know the 40MB download wasn't aggressively optimized down from 400MB?

I don't; I thought the context made it clear that I meant relatively trivial apps that do little but are bloated internally.

> Is there any known correlation

Possibly, I don't know. But I also learned that heuristic from experience. When large app sizes are justified, you usually see this in functionality.

> That's a significant waste of precious time (and money) for a developer.

OTOH, from a consumer point of view, you're saving time by externalizing your waste onto me. I know that "optimize everything" isn't the right answer, but IMO neither is "optimize nothing". If you save one dev day by wasting 10 minutes of each user's time over their use of the app, those 10 minutes multiplied by 100k users suddenly become over two man-years of wasted time.

I know that you can't compare users' time to developers' time 1:1, but the scale at which those decisions affect people is still worth minding. Especially for other resources, as e.g. electricity usage does add up pretty much linearly, unlike small time periods.
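A back-of-the-envelope check of that arithmetic (the 10 minutes per user and 100k users are the figures above; the 2,000-hour working year is my own assumed conversion):

```javascript
// Aggregate user time wasted vs. developer time saved.
// 10 wasted minutes per user and 100k users come from the comment;
// a 2,000-hour working year is an assumed conversion factor.
const wastedMinutesPerUser = 10;
const users = 100000;

const totalMinutes = wastedMinutesPerUser * users;    // 1,000,000 minutes
const totalHours = totalMinutes / 60;                 // ~16,667 hours
const calendarYears = totalMinutes / (60 * 24 * 365); // ~1.9 years of wall-clock time
const workingYears = totalHours / 2000;               // ~8.3 working man-years

console.log({ totalMinutes, calendarYears, workingYears });
```

Measured in calendar time it is roughly the "two man-years" cited; measured in working years, the imbalance is even starker.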

> We (humans) waste a lot of things that are much more important than people's data plans. Gasoline and food, for examples. Why haven't we reduced those 10x? It is possible.

And many people are working on it too. I find wasting those things to be bad as well.


Applications become increasingly bloated when they need to support lots of different environments and runtimes.

To give a modern example: if I want a web app to run on a wide range of browsers, I have to include shims and polyfills to handle missing functionality, API differences, and implementation bugs.

But it's not a problem limited to the web; it exists with native applications as well. This is why Electron apps are huge energy and memory hogs: Electron abstracts every native layer so the application can run pretty much everywhere.

IMO, it's not such a horrible thing in some cases. I usually prefer native applications for anything that I'll be using regularly... But electron apps can be incredibly handy for one-off utilities. To give an example I'm personally familiar with: I occasionally use Etcher [0] to burn images to USB and SD cards. I've used it on macOS 10.12, Windows 10, and Ubuntu 17.04 without any issues! The most famous native alternative I'm familiar with is UNetbootin, which didn't seem to work on macOS when I last tried it. Of course, on macOS and Ubuntu I also have the option of using the terminal, which requires me to look up or remember the platform-specific way of listing all disks, identifying the target, unmounting it, and copying the image.

[0] https://etcher.io


That's all fair. I don't like bloat either. I'd suggest the ecosystem is more to blame than individuals devs. Apple and Google could do more to fix this problem than anyone else. It's too easy to make a basic app that's large, and it takes too much effort to get the file sizes down. If it were easier to make a small app than a large one, apps would be smaller.

The real problem is economic, which is why I brought up food & gas. We are optimizing them slowly, but it's not happening quickly because it takes too much effort and consumers don't care enough. You're right that not optimizing is externalizing the waste onto consumers, and that user waste happens at a vastly different scale than developer waste. Which is why if consumers made a big enough stink, the problems would get fixed more quickly.


I agree with everything you wrote here. I too see the problem as primarily systemic.

The problem here is also that users are kind of "captive consumers" of software; i.e. more often than not, there's no other choice besides either using or not using a piece of software. And even if there is a choice, it's usually between two or three pieces of software, all competing with each other on shiny features and accumulating bloat at tremendous speed.

The frustration is there though; it shows in the rare cases when there is a partial choice. For instance, I know plenty of people who refuse to use Facebook app because of the battery usage, and instead choose to go to Facebook mobile site (which is objectively more annoying and less user-friendly than the app).

> Which is why if consumers made a big enough stink, the problems would get fixed more quickly.

Which is why I do my part in making the stink, complaining about the bloat and waste in places where both users and developers can hear, besides trying to ensure my own backyard (at work and otherwise) is as bloat-free as possible. I guess that's the best I can think of without going into full advocacy mode.


Well, since the problem is economic, the stink has to be economic. For consumers to make a stink, it means they have to withhold their money and stop purchasing wasteful products, not voice their complaints after purchasing.

Despite how much time we all spend solving problems we fear we have but don't really have, most devs don't actually have the option to spend significant portions of their time optimizing, even if they want to. I've been exposed to squeezing large games down onto a Wii console because it was a requirement to ship, but not because we cared about bloat. We did care about bloat, btw, but we didn't spend anywhere near as much time reducing file sizes on the PS3 as on the Wii, even though we could. It was an organizational decision that I had almost no control over.


Gamedev plays by different rules. You guys work too hard already :). I was thinking more about regular run-of-the-mill app and webdev.

> For consumers to make a stink, it means they have to withhold their money and stop purchasing wasteful products, not voice their complaints after purchasing.

Yeah, but that's not easy to do either. I added that point in the ninja-edit to my previous comment. In my experience, in many areas consumers don't really have any real choice; i.e. you choose from what is available on the market, not from the space of all possible products. There is no good channel for feedback from consumers to producers about options not explored, or even about the reasons you refrain from buying something. Again, I currently have no idea how to approach this.


I had to upgrade my phone because it ran out of internal memory and I had to keep removing apps I valued, as they all grew in size.

So bloat cost me real money.

Just because you don't care doesn't mean others don't. This is one of my pet peeves. Developers building things on high spec machines with fantastic network connections. Your website is impossible for others to use as it is so slow and cumbersome but the devs never notice.


Software has been increasing in size over time since the beginning of software. This will never change. It goes hand in hand with why hardware keeps getting bigger capacity too. I agree that some of it is bloat, but some of it is more features and content too. When the OS adds a little, and the libraries add a little, and the apps add a little, the downloads get more than a little bigger, but it can be a side-effect of everyone adding useful features at every level.

There will always be a hardware limit, so if you install apps up to your limit, you will eventually have to delete some of them, no matter how small the apps are and no matter how big your phone's memory capacity is. You can't entirely blame that on developers, it's your budget to deal with. You can buy a bigger phone or install fewer apps. I like to keep a good 20-30% of my memory free just so things I depend on have room to grow.

You also have the option to swap out apps continually. It'd be a pain in the ass for sure, but if the least important app on your phone that you had to delete is really one you couldn't live without, you have the option to delete something else and re-install it, and then swap back later.


Network connections can be artificially limited (even to localhost): https://stackoverflow.com/questions/130354/how-do-i-simulate...

To simulate a lower-spec machine, you can just run some resource-hungry processes in the background.

The problem is of course that almost nobody bothers to run these tests unless they already care about performance on low-end devices.
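One low-effort way to approximate a slow connection from within JavaScript itself is to wrap fetch with an artificial delay derived from payload size and a target bandwidth. The numbers and function names below are illustrative assumptions, not any particular tool's API:

```javascript
// Simulate a slow connection by delaying responses in proportion to their size.
// ~50 KB/s and 300 ms latency are made-up "bad mobile" numbers; tune to taste.
const BYTES_PER_SECOND = 50 * 1024;
const LATENCY_MS = 300;

function simulatedTransferMs(bytes, bytesPerSecond = BYTES_PER_SECOND, latencyMs = LATENCY_MS) {
  return latencyMs + (bytes / bytesPerSecond) * 1000;
}

async function slowFetch(url, opts) {
  const res = await fetch(url, opts);
  const body = await res.arrayBuffer();
  // Hold the response back for as long as the simulated link would take.
  await new Promise(resolve => setTimeout(resolve, simulatedTransferMs(body.byteLength)));
  return new Response(body, { status: res.status, headers: res.headers });
}

// A 40MB download at ~50 KB/s works out to roughly 13-14 minutes:
// simulatedTransferMs(40 * 1024 * 1024) / 1000 / 60 is about 13.7
```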


So you think the world is sad?


No, no one does. Any more than they cast nets to catch wishes, phone their mums every time they step on a crack to make sure that her back is okay etc. etc.

It's an expression, an idiom, even.


Interestingly, an equivalent Android app will usually be about 15-20MB. For larger apps (like Facebook) the difference can be even up to 4x as much (and that's after App Store stripping).


Yet all this is doing so little. It's not like many people are using WebGL and canvases to do interesting graphical things. Mostly they're just messing with scrolling, popping things up, and fading things in and out. All this machinery is way overkill for what it's used for.

(Especially messing with scrolling, badly.)


You'd be surprised how often WebGL comes into play, take those silky smooth animations for example...

or, you know, Google Maps.


Flipboard built their mobile web app in canvas: http://engineering.flipboard.com/2015/02/mobile-web


In 2002, the Laszlo Presentation Server compiled XML with embedded JavaScript to swf (Flash) byte code. [A later version also compiled to HTML. Performant cross-browser dynamic HTML was daunting in 2002.]

We considered HTML's “View Source” feature important, so we emulated this by causing the compiler to place a formatted source file in the output asset directory, and to embed a “View Source” popup menu. You could turn this off, and probably would for a deployed app, but the nudge was there.


Just throwing my side project, Surplus, out there as another example of this strategy (https://github.com/adamhaile/surplus). Surplus compiles JSX into optimized code that produces real DOM elements. The combination of an ahead-of-time compiler plus no virtual DOM layer means that the runtime code only has to do the minimal, truly dynamic work.

As a result, Surplus is the fastest framework in Stefan Krause's latest js-framework-benchmark (http://www.stefankrause.net/wp/?p=431).

DSLs are powerful; compiler support can make them fast.
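For illustration only (this is a sketch of the general technique, not Surplus's actual output): ahead-of-time compilation of a template like `<h1 className={cls}>{count}</h1>` can boil down to real DOM calls plus tiny per-binding updaters, with no tree diffing at runtime:

```javascript
// Hypothetical compiled output: static structure is built once with real DOM
// calls; only the genuinely dynamic bits become small update functions.
function createCounter(getCls, getCount) {
  const h1 = document.createElement('h1');
  const text = document.createTextNode('');
  h1.appendChild(text);

  // Per-binding updaters: direct assignments, no virtual DOM to diff.
  function update() {
    h1.className = getCls();
    text.data = String(getCount());
  }

  update();
  return { node: h1, update };
}
```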


OT, but this jumped out to me:

> In the same way that a compiled Android binary bears little resemblance to the original Java source code

From what I recall, Java binaries actually look a lot like their source code. They're not human-readable of course, but you can decompile them very easily and get fairly good code back out. This made Minecraft an easy game to mod: just decompile a class, change a line or two, recompile, insert back into the JAR and you're done.

Is Android different?


Android code usually goes through ProGuard on release, which is an optimizer (and obfuscator if enabled). It makes a lot of optimizations at the bytecode level, which makes decompiled code noticeably harder to read.


If you disable the obfuscation, ProGuard code still looks basically the same.

I never enable the obfuscation, as my apps are GPL anyway, and I want users to be able to just decompile any version.


Not really - ProGuard does several optimizations (finalization, removal of redundant bytecode instructions, pulling methods up into superclasses, removing needless subclasses, some fixes to exception handling, and several other things depending on optimization settings) which change the bytecode output to the point where you no longer have a two-way mapping between Java and compiled code.


I know, and it gets a lot worse when using Kotlin, but this ensures that users can always take the compiled binary, and check that it doesn’t use any analytics, tracking, or any proprietary code that might violate their privacy.

This is very important for me.


That is only true for a naive compilation that doesn't make much use of language features introduced since Java 7.

Many companies make use of bytecode obfuscators to make it harder to translate back.

In Android, not only is obfuscation part of the build process (ProGuard), there is the additional step of converting from Java bytecode into DalvikVM bytecode.

So if you only have DalvikVM bytecodes, you might translate to something back that looks like Java, but it won't be the original code.


Not really. The bytecode format (.dex) is different, but it's straightforward to convert dex files back to jar files, and then decompile the jar and get readable Java source.


But it won't be "the original Java source code", especially with ProGuard and more advanced Java constructs, or if another JVM language is being used.


> The trend started by minifiers like UglifyJS and continued by transpilers like Babel will only accelerate.

Not to be pedantic, but are UglifyJS and Babel "frameworks"? I'm not an Ember user, so maybe Ember has some sort of built-in source code transformer and that's what the author is referring to?

I think the basic idea that JavaScript developers, especially those working in a browser environment, will increasingly write source code that compiles to JavaScript "bytecode" is not a recent idea. A much more nuanced reflection on that idea can be found here: http://composition.al/blog/2017/07/30/what-do-people-mean-wh....

With all the hand-wringing about "JavaScript fatigue", it's a little bit sad that prominent JavaScript developers use titles like "Compilers are the New Frameworks". The tone of this title is the tone of a bell ringing for the next round of JavaScript fad musical chairs.


> Not to be pedantic, but are UglifyJS and Babel "frameworks"?

The article doesn't make that claim, which kinda invalidates the "pedantry strawman" the rest of your comment is based on.


>...what we call web frameworks are transforming from runtime libraries into optimizing compilers.

I took that to mean that the frameworks are doing the optimization. But the article doesn't give examples of the frameworks. The only examples of code transforming optimizations are Uglify and Babel. So I thought maybe these were the things the author has in mind and not actual frameworks. I was genuinely confused and prefaced with the aside about pedantry because questioning these kinds of semantics can come across as pedantic.

The second half of my comment stands regardless.


> Our job now is figuring out how to adapt the ideas of high-performance native code while preserving what makes the web great: URLs, instant loading, and a security model that allows us to forget that we run thousands and thousands of untrusted scripts every day.

URLs are indeed part of what makes the web great. So, too, is being able to read a document on any device imaginable, from a desktop with multiple monitors to a VT100 to a Plan 9 terminal window to a laptop to a watch to a phone to the custom OS I'm building in my free time. Non-progressively-enhancing JavaScript kills that entirely.

Also, I don't think the security model of the web allows or enables us to forget that we're running thousands (hundreds of thousands?) of untrusted scripts; I think it just encourages us to. Really, we shouldn't forget it at all. The web security model — while nothing to sneer at — has enabled wholesale violations of privacy and the reduction of human beings to marketing data.


It's crazy to me that what amounts to most UIs (basically forms and buttons, lists of things, images, pages) has become so complex.

It's like I've died and woken up in a strange, bizarre world where the laws of physics are different, but I still inhabit the same world.

Does anyone else feel this way? Is this just how technology works?


Not so much. Components make this simpler than ever.

  import React from 'react'
  import { render } from 'react-dom'
  import { Button, Dropdown, Slider } from 'antd' //bootstrap, materialdesign,...
  
  const Header = () => (
      <div>
          <Button primary>Home</Button>
          <Dropdown ... />
          <Slider ... />
      </div>
  )
  
  render(<Header />, document.querySelector('#app'))


This is typical of the maturation of any technology platform. What is actually happening is a change from a long-term growing complexity to short-term stable complexity.

Historically, building something simple in an HTML page has always been simple. A field here, a button there, you're done.

But with increased requirements, that sort of work became exponentially more difficult to maintain. If you have 10 fields each with the same basic functionality, you'd end up repeating the same thing 10 times. If you wanted some complex state management, you're building it all from the ground up. Your code may be "simple" in the sense that you don't need a build tool, preprocessors, or any dependencies to maintain it, but it becomes increasingly difficult to maintain.

React, babel, webpack et al can make things much more complicated _to start_, especially for someone not acquainted with it. But the trade off is that things keep more or less the same level of complexity. Adding a feature is easier, duplicating an element or a screen is easier.


I disagree with this sentiment and it's tautologically false.

Have you ever worked in a mature Babel codebase? I have worked in many and frequently get contracts to clean them up. I have yet to see a clean one. Can you even point to a clean open-source one?

The complexity doesn't reach some steady state, it just keeps growing.

The argument that something simple needs to be complex because of unknown future needs is a prime example of design based on emotion, not engineering.


I think you’re understating the complexity of modern web applications. There are lots of web apps out there that cannot be aptly described as “pages,” or even as simply “UI”.


Mobile apps have complex UI, yet you don't see them switching ui frameworks every 6 months.

JavaScript is in a bizarro land by itself. I think the main reason is that it's a starter language for many folks (the initial learning curve is gentler), and once they get good enough, they decide to re-invent the wheel again and again, just a bit rounder each time.

Also, since a lot of younger folks probably get into JS, they are more gullible/eager to just try new things, no matter how broken they are.

Sometimes this actually creates real progress; often it just increases complexity with no real advantage apart from being "fancy and different".

This probably would have been the case in mobile as well, but the realities there are different:

1. You are stuck with what apple and google provides to you, and at max you can build some util frameworks to make things a bit easier. Most replacements attempts have failed.

2. There is a steeper barrier to entry for shipping good mobile apps, and smarter/more experienced folks steer away from following the latest fad.


Any type of UX interaction was always messy and not very scalable using older JavaScript.

Sure, it was doable, and libraries like jQuery, which have been shown to be slow, made it convenient and easy to write. But ultimately it was messy-looking code.

Sure, using the basic built-in UX, like a button that posts a form or a site that just had images on it, was simple. But once you added interactivity, and the web became a lot more than just dry, dumped information, people wanted a better way to navigate it.

Even the most simple website can be improved with a hamburger menu, to have an always-available nav menu instead of the classic scroll-to-the-top or fat footer at the bottom.

So a convoluted CSS solution is available: using an <input> of type 'checkbox' and checking whether it's checked, in place of the hamburger.

But really, that's shoving your view logic and your UI logic together.

That covers the state of JavaScript before these libraries: what hacks, workarounds, or messy spaghetti code can we create to make it work? Because that worked for us 10 years ago, should making maintainable, readable, and sensible code now be considered bizarre and strange?

I see this pushback a lot. I think it's because originally learning HTML, CSS, and JavaScript was easy compared to what it is now. You learned mostly HTML and CSS, plus the basic idea of functions and making API calls to the browser like document.getElementById or console.log.

But deep down, I've met a lot of these 'older' and 'experienced' JavaScript developers who wonder why we are moving in such a 'necessary' direction. I find most of them don't really understand programming itself: what the JavaScript language is doing, what the concepts are, the fact that a lot of calls are to the browser API exposed via JavaScript and not natively part of the 'language' itself. Sure, Node has some similar functions, because they're built into its library to mimic a similar environment.

Point is, if you enter any other language like Java, Python, C++, C#, Ruby, etc., the ways we used to code in JavaScript were essentially anti-patterns and extremely roundabout ways to solve problems, instead of using proven design patterns that have been established throughout other languages.

So in conclusion, I think the pushback is that right now JavaScript has become more difficult to learn or adopt, despite the notion that 'it's easy'. But the truth is that it has caught up to more mature languages, while the browsers have still lagged behind, requiring extra work to learn the tool chains needed to build compatibly.

I have no issue with people writing a simple website in React, Vue, or whatever SPA or UX framework they want. I think it's crazy if they just serve the raw JavaScript files, as generating static files for their site is rather easy and makes the initial paint quicker while keeping the site friendly to JavaScript-free browsing. Therefore you get both worlds: nice, maintainable, readable code that is easy to modify months later, with the same benefits as your typical static pages.

Long rant but hopefully you can better see why things moved in this direction even for simple websites.


You know that UIs aren't very complex? You know what's scalable? PAGES. The entirety of the (www) internet is done with HTML content that links to other HTML content. The entire Internet!

What would happen if we made the entire Internet a react application? Would that be "scalable"?

Please choose your words more precisely!


I think you are really misunderstanding what vanilla JavaScript and HTML really are.

No UI in plain HTML is scalable. If you have a UI, each piece of code needs to be repeated on each page. Have a ten-page website?

You need your footer, nav, and whatever sidebar on each page, copied and pasted. What if it changed? You need to change each and every page.

How did we get around this? WAY back, we used server-rendered HTML. The server would assemble different pieces, partials, into a piece of content.

This was done largely to PREVENT ugly and repeated code, because making a simple UI in an HTML/JavaScript stack is overly repetitive and complicated.

No one needed server-side rendered pages to solve that issue, but they did make maintaining the website easier. Eventually you could also use server-side rendering to send data tailored to a user.

This same problem happened on the front-end. Both the back-end and front-end started to use templating languages to help with this kind of repeated code.

The reality is that a server creating the UI of a program and sending it to a user was OVER-utilizing the server. All applications on a machine render and put together their own UI; they use their power/CPU to construct it. So the web has started to move those templating languages to the front-end.

Some use build chains to create their pages; others use an SPA to contain it in one page. But the responsibility for constructing the repeated components of the UI has become client-side for many sites. A server-rendered site is overkill for most of these apps.

Things like Middleman, Jekyll, and Gatsby exist to do this for you: create static pages from repeatable components.

Pretending that none of these tools were created to solve an actual problem suggests you are ignoring what it's like to write a plain HTML/JavaScript website with any type of functioning UI.

Any form of website where you repeat components or interactions requires copying and pasting for every page each time you edit one piece, unless we use tools, server-side technology, or frameworks to eliminate this.

If you wish to code in that style, in that world, or you pick and choose the technology frameworks that solve these problems on the back-end (Rails, PHP) but complain about the frameworks that are front-end (React, Vue), then you seem to be closing your eyes to full-stack development. Back-end programmers ran into these issues and solved them with templating engines; now we are just doing the same for the front-end.
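The shared-partials idea described above, in a minimal JavaScript sketch (all names here are illustrative):

```javascript
// Define the repeated pieces of the UI once, instead of copy-pasting the
// markup into every page. Change a partial and every page picks it up.
const partials = {
  header: (title) => `<header><h1>${title}</h1></header>`,
  footer: () => `<footer>(c) example.com</footer>`,
};

function page(title, body) {
  return partials.header(title) + `<main>${body}</main>` + partials.footer();
}
```

This is the same move server-side templating engines and front-end component systems both make; only where the assembly happens differs.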


Domain-specific languages, I suppose, are the next Kool-Aid. Or languages with a comprehensive macro system (like Elixir) can bridge the gap between these ways of thinking. This feels very frontend-focused though; I'm wondering if there is a more holistic approach.

I have an idea for a framework/project that, over WebSockets, views the browser as a thin client for a server-side representation; commands would be sent from the browser but defined on the server. I haven't fully decided how this would work yet, but it would be interesting to do the DOM diffing on the server and have the thin client only receive the changes. This would hopefully make the initial payload extremely small and the changes being transferred minimal as well.

Are there any projects that do anything like this currently?


"I'm wondering if there is a more holistic approach."

The end game, once WebAssembly is fully integrated into the browser, is to expose the 3D API and/or a high-performance low-level 2D API (or both) and slowly, but surely, a "web browser" will become an environment where you download a full rendering engine for a website, written in an arbitrary language that has WebASM support, and in the end both DOM and JS will become merely another option, at which point IMHO neither of them will actually fare very well, excepting their substantial existing install base. Caching will make this feasible as most of the "long tail" isn't going to make their own engine, just use someone else's, and there aren't really all that many sites who can afford to write their own.

While I consider the forces pressing on the browser to virtually predestine this outcome, it is at least 10 years off.

(If it wasn't for the fact that Javascript is still not fast enough to pull this off, I think we'd already see more frameworks that do layout and ship down a lot more divs and such with hard-coded absolute positions, turning the browser into an environment where it simply puts text and images where the server tells it to, and stops doing all this expensive reflowing and layout. You can see on all kinds of sites the desire to do this on the part of web publishers. However, it is impossible to do this on the server side because you need access to font metrics (where you may not be able to force your own), screen size info, browser zoom level, and a whole bunch of other such things that make it impractical to do server-side in a whole bunch of ways. But JS is still at least ten times slower than native code here, plus would take an inevitable penalty accessing a lot of FFI code, so this is currently almost unthinkably infeasible on the client side.)


Unsurprisingly, something along these lines has already sorta been tried [0][1]:

> Famo.us is the only JavaScript framework that includes an open source 3D layout engine fully integrated with a 3D physics animation engine that can render to DOM, Canvas, or WebGL.

They raised a fairly large amount of money, but eventually dropped the idea and moved to a totally different market [2].

Grid Style Sheets [3] gets an honorable mention as well. It compiled GSS rules and applied them as fixed position divs.

I've been researching this subject a bit lately. Surprisingly, many of these problems were already solved in the native world by the end of the 90s. The key innovation from the web is that it's a fully sandboxed environment.

Despite all the web's flaws, one area in which it has done pretty well is accessibility. Many native apps leave a lot to be desired when it comes to basic accessibility... Heck, that goes for many "modern" web apps as well. Hopefully browsers expose good accessibility APIs by the time your vision of developers shipping their own rendering engines becomes reality.

[0] http://deprecated.famous.org

[1] https://github.com/famous/famous

[2] https://techcrunch.com/2015/11/06/nopen-source/

[3] https://github.com/gss/engine


As always, the fantastic Gary Bernhardt takes this through to its logical conclusion https://www.destroyallsoftware.com/talks/the-birth-and-death...


First, while presented humorously, I take it somewhat seriously as well. And one place where I disagree with it is that unless you consider WebAssembly as Javascript, it isn't true. It isn't Javascript destined to take over the world, it's WebAssembly.

You will know WebAssembly is here and in charge when the browsers compile Javascript itself into WebAssembly and take at most a 10% performance hit, thus making it the default. Call it 2022 or so.


James Mickens presents something like this idea: https://www.youtube.com/watch?v=1uflg7LDmzI


Yes, I had the pleasure of seeing him present that live once, though not at that one. I think he was disappointed that people weren't helping him work on it, but I still think that long term the pressures are absolutely inevitable in that direction. It's just that the world at large can't jump there in one shot, it has to get there one very laboriously-worked-out technology at a time first. He's gotten a bit more famous for his sense of humor, but I can attest that he knows his stuff.


It's a very persuasive idea. I also wonder whether Atlantis's stripped-down API could enable a lighter alternative to Electron.


Yes. This Elixir library aims to have a virtual DOM on the server and send minimal diffs to the client: https://github.com/grych/drab

It can already do this for some kinds of diffs, but it doesn't have a full VDOM yet. It basically allows you to build interactive pages without writing any javascript.


Drab is one of the most interesting and impressive libraries to emerge from the Elixir/Phoenix ecosystem.


N2O[0] does exactly that. You write your app in Erlang and the framework delivers the changes via websockets to the browser. You can even choose to have the changes computed in the browser through an Erlang -> Js bridge. Pretty cool.

[0] https://github.com/synrc/n2o


This looks great, I'll try it out!


> DOM diff-ing on the server and have the thin client only transfer the changes. This would hopefully make the initial payload extremely small and the changes being transferred minimal as well.

To achieve similar results, you could granularly partition the build so that scripts are lazy loaded as needed. This would be like sending diffs, except only once since the client can reuse them.

Having a server manage the DOM state of every client, and maintaining sockets between them, seems like a complex apparatus.


I made a proof of concept for such a framework. There are some really cool benefits (like automatic server-side rendering, a really small JS bundle, all code on the backend, etc.), but it falls short for things that require an immediate response, such as drag and drop or text input.


I once wrote an app that worked by sending diffs, not DOM diffs but JSON diffs which were then applied by the client and rendered by React.

The data store was a repository, and so a client could ask "please give me the diff between states X and Y", and the server would make a diff between those trees in the repository.

Most of the time the client would be long-polling for a new revision with the diff based on the previous one, which could be cached efficiently, but sometimes a client would come back from "hibernation" and ask for a diff based on an older revision.

It was a nice system, I never tried to make it "scale" though.
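A toy version of that diff/patch protocol could look like the following. This is a shallow sketch with one loudly stated assumption: `null` is reserved as the deletion marker, as in JSON Merge Patch.

```javascript
// Shallow JSON diff between two revisions: returns only the keys that
// changed. Assumption for this sketch: null means "remove this key".
function jsonDiff(oldState, newState) {
  const patch = {};
  for (const key of new Set([...Object.keys(oldState), ...Object.keys(newState)])) {
    if (!(key in newState)) patch[key] = null;
    else if (JSON.stringify(oldState[key]) !== JSON.stringify(newState[key]))
      patch[key] = newState[key];
  }
  return patch;
}

// Applying the patch on the client reconstructs the newer revision.
function applyPatch(state, patch) {
  const next = { ...state };
  for (const [key, value] of Object.entries(patch)) {
    if (value === null) delete next[key];
    else next[key] = value;
  }
  return next;
}

const rev1 = { title: "Draft", tags: ["a"], wordCount: 10 };
const rev2 = { title: "Final", tags: ["a"] };
console.log(jsonDiff(rev1, rev2)); // → { title: 'Final', wordCount: null }
```

Because a diff is a pure function of the two revision IDs, the common-case diffs (latest vs. previous) can be cached and served to every long-polling client.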


Meteor is a little like that, although it doesn't diff the DOM on the server. It effectively diffs the Mongo database on the server and then updates the Minimongo instances in the clients where appropriate. JS in the clients then updates the DOM.


Seaside from Smalltalk. To some extent the original ASP. There is also a PHP framework that does that, but I forget its name.


I've built some prototypes like this, predating web sockets though. Web sockets are simply an optimization over standard form submission.

The problems of state management are the same though. If your protocol isn't stateless, then you're restricted in how you can manage the server, ie. no live upgrades without breaking all clients.


Not exactly the same but you might want to check out Ur/Web, Eliom and the standard Racket web server lib.


Like an X server for the DOM? While interesting, that means every interaction would need to be transferred to the server, and there'd be no instant one-frame actions. That fits a limited set of UIs, and it would lag with substantial network latency.


I once saw a company that had a product which was basically JSF as network protocol for Swing.


It's funny that companies like Google work so hard to make JavaScript interpretation fast, yet people write in an abstracted dialect that needs to get compiled down to JavaScript for productivity, convenience, sexiness, or whatever else. So many layers...

A developer spends a lot of time debugging. It's so fun to write JavaScript in the console, yet we need source maps to solve the above problem in order to debug the code we wrote.

Sometimes I feel the amount of engineering we pour into defining new frameworks, dialects, transpilers, etc. would be better spent writing features for our end users. No, we absolutely need that hammer++ to hit that nail.


That's funny, because all the deep-learning frameworks made the same move: the API is now just functions building an AST that is then translated by a compiler into parallel CPU or GPU code...


It seems like the less code users have access to, the less control they will have over their computers. Example: it might become harder to block tracking and ads, turning the open WWW into something more like mobile phone apps (which are terrible for the freedom of end users).


I think that's the very thesis of the free and open source software movement.


I don't think so.

Browsers will catch up with what developers want.

We have seen that with ActiveX, Flash, Java applets, jQuery etc. Every time tech was needed because "The browser alone is not good enough" the browser became better and made the additional tech obsolete.


I think the effort going into browsers is a double-edged sword. It's nice that we've got so much work going on, pushing new standards, etc. but it's also making browsers more bloated and making it less viable to bring out new browsers.

As more stuff gets done in Javascript, and Javascript gains more APIs, browsers like Lynx, Dillo, Netsurf, and umpteen yet-to-be-written ones are becoming less able to browse the Web. Likewise, the effort required to "upgrade" them would be massive.

I think a consolidation of features would alleviate the situation a little: have a small "foundation" of required tech, e.g. a subset of JS (or WebAssembly) with few primitives or APIs, and have the browser pull in polyfills for everything else, including interpreters for fuller, newer JS standards, etc. If a browser is graphical, it could provide a bitmap-blitting primitive, and use polyfills for CSS, fonts, layout, etc.

The Chromes and Firefoxes of the world could ignore the polyfills and keep going with their highly-tuned native implementations of these things, but it would prevent "kicking the ladder away" from everyone else.


> browsers like Lynx... less able to browse the Web.

They are still able to browse the web that they were able to browse back then. You can't expect the web not to move on.

The only lament is that most websites don't cater to text-only/low-resource/disabled users. But that's a different story from the OP's point about having lots of capabilities/libs/standards.


While I agree with the overall sentiment, browsers like Dillo and Lynx don't even implement HTTP caching correctly (they cache everything regardless of the headers).

JavaScript is not their biggest problem, as they could plug in one of the existing engines (in fact one of the Links forks does just that).


> as they can plug in one of the existing engines

As someone who attempted this for Lynx back in the [~late 90s~] early 00s with the Mozilla JS engine of the time, no, they can't.

Lynx has no DOM. It parses and renders the HTML to a fixed textual pane without any intermediate format - even `document.write()` is nigh-on impossible to implement, because it requires keeping a copy of the HTML around, jiggering it with the output, reparsing, and going through layout again. And that only covers the most trivial uses of `document.write()`.

To add Javascript to Lynx would require rewriting the entire internals of Lynx - not going to happen.

[Edited to correct the time period]


> the browser became better and made the additional tech obsolete.

And almost every time, a separate browser codebase died and a vendor switched to WebKit. Because with all this cruft, maintaining a browser codebase has become a nightmare only few can afford.


> with all this cruft, maintaining a browser codebase has become a nightmare only few can afford.

Perhaps this is not all too inconvenient for those few ...


Yeah, except it took browsers a decade to get better enough, and the road towards getting better is mired by compatibility problems, requiring tons of expertise to deal with that sort of stuff. But developers want to build useful stuff yesterday, and users demand good experiences the day before yesterday.


I did build useful stuff yesterday. And I'm happy I never used any of the "additional tech". Because now that stuff still runs today and pays my bills.


> Browsers will catch up with what developers want.

Developers want everything from JavaScript to C++, Python and Haskell. There's no way a browser can specifically address all what developers want. Therefore browsers should become better virtual machines.

Edit: yup, WASM is a good start.


Sounds like you need to read up a little bit on WebAssembly, which can be compiled from just about any language, including C, C++, Rust, Python, and others. WebAssembly will make syntax irrelevant for web scripting and JavaScript entirely optional, depending on how much WebASM is allowed to access.


>Web Assembly will make syntax irrelevant for web scripting and JavaScript entirely optional

That's what many think, but that's not what WASM is for though. It's still a foreign world to the DOM, and not a first class citizen like JS.

It's more about driving canvas games or offloading some expensive computations than actually writing your app in WASM as opposed to JS.


As soon as WebAssembly ships garbage collection and the authors are willing to create a Wasm backend for the given language.


The hope is that one day it will be possible to implement an efficient concurrent GC in WASM. However, right now it still lacks the primitives (e.g. locking primitives, shared memory, and memory barrier instructions).


Yeah, webassembly is kinda changing this.


I think things are changing. Angular 4 was definitely "compiling" their code, instead of just running it. However, with VueJS growing in popularity, lightweight apps are on the rise again. People are tired of 5-10mb web apps.


I think Angular2/4 already goes into that direction. Afaik it optionally precompiles all templates, which means what ends up in the user-facing HTML/Javascript is not the source but optimized generated javascript from it.


Angular 4 does code splitting, but its bundle size carries a pretty big baseline.


Beyond code splitting, I wonder if a future Angular version may do more aggressive compilation, as the linked article and much of this discussion describe, to reduce that baseline. I have written quite a lot of Angular code, and I don't think there is really all that much from an application point of view that absolutely requires a large baseline runtime. Much of the same experience and application code base could likely be preserved (perhaps with some API adjustments now and then, with semver taken very seriously...) while making Angular work more and more like an optimizing compiler with a small runtime.


They're definitely aiming to make use of the closure compiler (which is why they went with Typescript as a default)

https://github.com/angular/tsickle


The author mentions WebAssembly, yet writes at the end:

" and a security model that allows us to forget that we run thousands and thousands of untrusted scripts every day."

WebAssembly is designed to run in a sandbox. I suppose that does not by itself make it completely safe, but it can not be much worse than the current situation, can it?


I think WebAssembly has a huge expectations problem:

What many people think it is: I can compile my JS frameworks as assembly and ship binaries to the browser, that'll run near-native and be more compact.

What WebAssembly actually is: a way to compile libraries from other languages in a manner that you can run them in a JS runtime, but with no access to the DOM, etc. (Think of things like shipping a JS based OCR library or a JS based audio-engine that runs at near-native speeds)


Thanks for putting the discussion back into the realm of reality.

For as long as I've been in IT, there has always been the story of an upcoming holy grail of technology that'll change everything, aka the silver bullet. Reading through this thread, that role currently seems occupied by WASM, with wild projections of all kinds of perceived problems of frontend development and beyond onto WASM.

The reality is I have yet to see a single app making use of WASM on the web. And even if there eventually is a WASM app, it won't be able to do anything that other (native) development environments haven't been able to do for ages. I'm not sure the world needs a new bytecode format when only the ARM and x86_64 ISAs are still around. OTOH, what's new with WASM is that the browser and the internet act as a gatekeeper for starting apps on your device, which I'm not sure will be met with enthusiasm by users. Furthermore, since WASM lacks APIs for even the most basic things, it would take ages to establish a platform, with all the fragmentation this entails.


> The reality is I have yet to see a single app making use of WASM on the web.

IMHO a spectacular example is stockfish on lichess.org. On some browsers, like Chrome and Firefox IIRC, it is still slower than PNACL or asm.js, but on Microsoft Edge it is the fastest solution and it's what is currently used for this browser. It should be the fastest solution everywhere once multithreading is added.

The developer of lichess stated that it was fairly trivial to port Stockfish to WebAssembly and compile it for lichess. When I saw it running, I was impressed (and still am).


> but with no access to the DOM

There will be access to the DOM, but even without it, WebAssembly is still useful.

You compile libraries that are performance sensitive, you use them from your JS code, and voilà, you have the best of both worlds: the flexibility and DOM access of JS, and the speed of compiled code with wasm.
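As a concrete sketch of that split: below, a WebAssembly module exporting a single `add(a, b)` function is instantiated and called from JS. The module bytes are hand-assembled here so the example is self-contained; a real project would produce them with a toolchain like Emscripten or the Rust compiler.

```javascript
// A minimal hand-written .wasm binary exporting add(a, b) -> a + b.
// (Real projects compile C/C++/Rust to this format instead of writing bytes.)
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// The JS side stays in charge: it instantiates the module and calls into it.
const { add } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;
console.log(add(2, 3)); // → 5
```

The hot numeric kernel lives in wasm; everything touching the DOM stays in ordinary JS that calls it.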


Good point – I didn't say it wasn't useful, just that there are a lot of people who think that you will be compiling React.js or jQuery as-is to WebAssembly, and just shipping them that way.

That's just not how it'll work.


WebAssembly is not just about compiling libraries to access them from JS. WebAssembly is about compiling other languages to be able to run on the web with reasonable speed. WebAssembly programs will have access to the DOM.


Yeah. I don't understand why people want this, or why people think it will bring huge performance gains… but someday in the future wasm will get direct DOM access. (And right now it's possible by exposing JS functions to wasm.)


But isn't wasm something that is supposed to be compiled into a native binary? Which means, could there be a way to trick the compiler into compiling something that looks innocuous but, when run (or somehow exploited), does something it's not supposed to?

In Chrome, the process sandboxing is supposed to protect you - it doesn't matter whether it's wasm or not. Do other implementations of wasm do something similar, security-wise?


> but isn't wasm something that is supposed to be compiled into native binary?

No, it compiles into a binary code of its own. That gives it near-native speed, but it's no "hardware-binary" code.


Ahh, I thought that the binary format is effectively something you (the runtime) translate into actual assembly (to be then executed).


If you want to make a pure web app without server side "rendering", you should forget about HTML/XML and only use JavaScript and CSS. Managing GUI state in JavaScript sure is boring at first, but the trick is to use functions and abstractions, event listeners/observers, and plugin patterns. Then it becomes fun again, the performance is good, the source code is small, and it's easy to debug. Stop using < and > in your web app code!
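One illustration of that style (all helper names invented here): components become plain functions over data, so there are no angle brackets anywhere in the application code. The sketch renders to a string so it is self-contained; in the browser you would build real DOM nodes with the same function-composition pattern.

```javascript
// A tiny "hyperscript" helper: the UI is plain functions and data, no HTML.
const h = (tag, props = {}, ...children) => ({ tag, props, children });

// String renderer for the sketch; a browser version would call
// document.createElement and appendChild instead.
const renderToString = ({ tag, props, children }) =>
  `<${tag}${Object.entries(props).map(([k, v]) => ` ${k}="${v}"`).join("")}>` +
  children.map(c => (typeof c === "string" ? c : renderToString(c))).join("") +
  `</${tag}>`;

// Components are just functions; reuse and abstraction come for free.
const TodoItem = text => h("li", { class: "todo" }, text);
const TodoList = items => h("ul", {}, ...items.map(TodoItem));

console.log(renderToString(TodoList(["write code", "ship it"])));
// → <ul><li class="todo">write code</li><li class="todo">ship it</li></ul>
```

Event listeners and observers then attach to the nodes these functions produce, which is where the state management the commenter describes comes in.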


> Ember has always been driven by the idea that web apps are becoming more and more like native apps over time.

Well, this is what people keep telling us again and again, as if repeating it were enough to make it real, but I have yet to see one web application that I would prefer to use over a desktop one.

Sure, we have hugely complex webapps, but the browser is a really a shitty environment to think of it as a "platform" to do anything else than to read documents.


> but I still have to see one web application that I would prefer to use over a desktop one.

Sure you have, I would make a huge bet you've used a website that you were unwilling to install the app for. On-demand, truly cross-platform app deployment is a huge feature, not to be underestimated.


> I would make a huge bet you've used a website that you were unwilling to install the app for

HN!


I think that the Ember community beats this drum because they have no real answer to React Native.


I would rather suggest the opposite: the compilers will be moved out of frameworks, because they will become more and more powerful and general purpose, so that there is no need to handle framework specific things.

When we have web assembly and compilers like the Kotlin compiler that can translate a high-level language to web assembly, there is little need for framework specific compilation.


elm is a good example.


Elm is a very good example of a compiler, but (as I recall, unfortunately without information in front of me) it's not that great of an example of doing maximal work at compile time to minimize the size of a runtime support library. I don't remember the kilobytes but they are significant.

That said though, since the Elm language is nicely constrained, it seems possible that a future version of the compiler could keep moving quite far in this direction.


The latest thought on this from the Elm community is that dead code elimination should be done after compilation[1] and that this should be easy[2].

I don't think the OP's main point was about file size optimization though. Elm's killer "optimization" is to remove the mutable parts of JavaScript and add a strong type system that can prevent practically all runtime errors. To achieve that goal the source language is significantly different than the target language.

[1]https://github.com/elm-lang/elm-compiler/issues/1453#issueco... [2] https://github.com/elm-lang/elm-make/issues/91#issuecomment-...


> Native code tends to have the luxury of not really caring about file size—a small 40MB iOS app would get you laughed out of the room on the web.

Sure. And then your investor comes knocking on the door, sits down, looks at you judgingly, inhales smoke from his $200 cigar and says that he wants more returns from your startup. And you have no choice and have to put 10+ spy... - errr, advertisement, I meant advertisement! - scripts, and your web app consumes those 40MB easily.

Don't act like web development is some small elitist circle where things are done extremely well, are optimized, and are with attention to technical detail.

They are not. JS is a shit-show still. Compilers help, I agree. You can't go anywhere without reusable code though. Also known as "runtime frameworks".


> The trend started by minifiers like UglifyJS and continued by transpilers like Babel will only accelerate.

No, the Babel trend is decelerating thanks to ES6 being supported practically everywhere. Minifiers (based on Babel :D) are still used but they're not that necessary thanks to HTTP compression.

We have a component model with Custom Elements, encapsulation with Shadow DOM, variables (custom properties) in CSS, JavaScript with classes and modules, and HTTP/2 push to deliver these modules without extra roundtrips. The future is more platform, less tooling.


Just let me know which open source web framework has the best batteries-included (as part of the main framework) data grid and I'll be happy.


Is there a compiler for server side code? Producing DDL sql-code, views, templates etc from a more abstracted meta model?


Isn't the JavaScript engine itself doing a good enough job of optimizing the JavaScript code !?


Isn't the title backwards? (Still seems so after reading the article)


Yes, I agree it's awkward, I think it's meant as:

> The new web development frameworks are compilers


The thing you have to always remember is there is no specific language that is JavaScript.


What a terrible title. A "web framework" conventionally is the runtime library, e.g. React, or Ruby on Rails.

The optimizing compilers are part of the "build tools" or "build chain" or "make system". They weren't ever runtime libraries (except in isolated examples), and the general patterns have been around since the 70s.

Yes, it's good to learn how a compiler works.


> When it comes to eking performance out of hand-authored JavaScript and accompanying runtime libraries, we’ve reached the point of diminishing returns.

Are you saying that web developers are writing the best code that they can? That future gains are going to come from more advanced js preprocessors instead of more informed developers? I strongly disagree that we have reached any sort of diminishing returns when it comes to the quality of hand written code.

> Between WebAssembly, SharedArrayBuffer and Atomics, and maybe even threads in JavaScript, the building blocks for the next generation of web applications are falling into place.

Read the specs for one of these. Read the specs for any of these. Read the specs for the JS functions you call every day, and imagine what the native implementation looks like. The web will never approach native performance, because the W3C has been sabotaging it for a decade.

> time to learn how compilers work.

I think the author would benefit from learning about how web frameworks work, or perhaps how web specs mandate that your browser works.


> I think the author would benefit from learning about how web frameworks work, or perhaps how web specs mandate that your browser works.

Haha, he is the creator of Ember.js. I think he knows a thing or two about web frameworks ;)


And he has come to the conclusion that he is making an optimizing compiler.

Perhaps the reason he thinks web frameworks will help us 'approach native performance' is because he doesn't know what native performance is? Could you really understand web performance if you don't understand native performance?


There are a lot of people who create things and learn the hard way what they didn't know at the beginning. Creating Ember and then talking about going back to Flash says all that needs to be said about his credibility.


> my advice for anyone who wants to make a dent in the future of web development: time to learn how compilers work.

Many people in web development went there because it's the one place in IT development where they don't need to learn CS stuff like parsing, compilers, and assembly code.


It's far simpler than that. The reason is that there are far more web development jobs.



