The Baseline Costs of JavaScript Frameworks (uncommon.is)
221 points by GeneralMaximus 5 months ago | 223 comments

As a US-based user of an A53-based phone running Firefox, I'm here to tell you that more than 50% of the sites I visit with it (HN being one that actually works well) are completely unusable. There was a saying back in the 1990s that developers should be forced to use obsolete hardware to ensure their software was usable for the regular user, and it's even more true today. Developers running around with the latest iPhones do nothing to represent what the average user actually sees.

Picking on Amazon: I generally don't even bother going to their site unless I'm on my desktop, where I can physically watch them frequently spike my 4+ GHz CPU for a few seconds when I type in the search bar or click a page. Doing the same on my phone or cheap tablet frequently results in 10+ second waits. The other day I was typing on my tablet and the Amazon search completion was literally taking 20+ seconds to display each _character_ and a completion list for it.

At my first job I had a machine so slow I could watch the screen updates progress. For several months I did perf testing with a stopwatch, because changes would knock off that much time.

At two places I worked, I intercepted someone trying to throw out old computers and set up a testing station at an empty cube or a table on an endcap, usually with some older versions of browsers.

They didn’t get used every day but they got used most weeks for sure.

> There was a saying back in the 1990's that developers should be forced to use obsolete hardware to assure that their software was usable for the regular user, and its even more true today

I noticed this "problem" when SSDs came out as well, when it came to physically noticing slow queries while developing. Something that used to be perceivable because it took a second to return results suddenly wasn't perceivable because it took 0.2 ms, but the problems and inefficiencies were still there, masked by the faster speeds of the drives.
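One way to counter that masking effect in code rather than hardware is to enforce an explicit time budget, so a regression gets flagged even when a fast dev machine makes it feel instant. A hypothetical sketch (the name and budget are made up for illustration):

```javascript
// Hypothetical sketch: wrap a function with an explicit time budget so slow
// code is flagged even when fast hardware makes it feel instant.
function withBudget(name, budgetMs, fn) {
  return function (...args) {
    const start = Date.now();
    const result = fn(...args); // run the wrapped function unchanged
    const elapsed = Date.now() - start;
    if (elapsed > budgetMs) {
      console.warn(`${name} took ${elapsed}ms (budget: ${budgetMs}ms)`);
    }
    return result;
  };
}
```

The wrapped function behaves identically; it just complains when it blows the budget, which is the stopwatch approach made permanent.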

Windows 10 is a big offender here! Machines that ran fine on Windows 7 suddenly seem to develop UI lag on Windows 10. All fixed with an SSD.

> There was a saying back in the 1990's that developers should be forced to use obsolete hardware

Back in the mid-to-late 1990s I used a 16 MHz, 24 MB Unix workstation to do all my encryption development, for just this reason.

An unexpected benefit - I can't count the number of times I was able to just shut down FUD arguments about how badly adding transport encryption or authentication would affect performance, by simply offering to demo it on my workstation.

Using Firefox on mobile is the first mistake. Chrome is by far faster on Android; Firefox's SpiderMonkey is quite poor compared to V8.

You don't need the latest iPhone for sites to work. Sites poorly implement their mobile sites when there's an app equivalent; Amazon is one of those. Their site is littered with tracking and mountains of compatibility code, like any other site trying to make a profit.

Firefox lets you use full-fledged uBlock Origin. That means that although SpiderMonkey might have a 30% disadvantage compared to V8, you end up running 90% fewer scripts, never mind the network savings.

I know, that's what I typically use, and it's still slower and buggier.

You can disable JavaScript and still order stuff on Amazon - that's inspired me to take a step back and learn how to build applications around HTML forms and CGI. If it's good enough for Amazon...

It used to be standard practice in web development to ensure the page degraded well if JS was disabled. Of course, you need really good CSS to avoid the result looking like crap.

And this is the first time I've heard someone complain about the Amazon UI being slow. And Amazon's sales and stock are not dropping.

Probably you are not part of the 99% of users that Amazon is targeting.

Amazon is succeeding despite this. Imagine where it could be if it was also fast!

As a workaround, now when I want to search for something on Amazon I just google "amazon [what I want]".

Well there are others https://www.reddit.com/r/amazon/comments/38tttx/why_is_amazo...

And it's weird, because the landing page tends to be fairly fast. But if you happen to be signed in and start clicking around, you can see it load the page and then spend another few seconds loading/running a bunch of crap in the background. Once it's done that a couple of times, it just seems to get progressively slower. Although I suspect a big part of the problem is Firefox...

As far as their target audience, you might be right WRT their target users tending to be more affluent and having high-end phones/etc. But even then, at this point, what are the retail alternatives with such a wide range of products, particularly if one does enough online shopping to have a Prime membership and also use it as a Netflix substitute?

I'm not surprised that Angular ships a larger bundle than React and Vue, it intentionally covers a much larger surface area of functionality. But Angular also includes build tools meant to reduce the size of the final bundle - can anyone tell if those were used in this comparison?

Yes, his Angular page [0] uses AoT compilation, which produces the optimized bundle instead of the individual files served by default (the default is meant for development only).

Also I agree with you. It's a bit unfair to be comparing a barebones Angular app to a barebones React/Vue app (even with a router library for each of those). Angular also has a forms and validation library already installed, and its dependency injection framework, etc. The baseline React/Vue apps don't have those pre-installed, which you would typically install when building an actual app.

[0] https://angular-ngrx-router.netlify.com/

That's kind of the point though, right? If you don't need forms, can you remove them in Angular?

While we're on the subject, does anyone know a good forms and validation library for React? :-)

If you're using redux, I can't recommend redux-form enough. It's genuinely one of the best-designed libraries I've ever used, including in other languages/stacks etc. I think it's a little hard to grok at first, but once it clicks you realise that it's such an elegant way to manage all the form state you could ever need, and it just works out of the box.

Comes with an inbuilt validation mechanism in the form of a function with errors piped through to fields etc., if that's what you meant? The actual validations you have to write yourself, though. I typically do this by hand/regex rather than use a library for that on the client, to be honest, as it's usually pretty simple.
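A hand-rolled validator of the kind described can be very small. A minimal sketch (the field names and rules here are made-up examples, not from any library), using the common "values in, errors object out" shape that redux-form's validate function also expects:

```javascript
// Minimal hand-written form validation: takes the form's values and returns
// an object mapping field names to error messages. Fields and rules are
// illustrative only.
function validate(values) {
  const errors = {};
  if (!values.email) {
    errors.email = 'Required';
  } else if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(values.email)) {
    errors.email = 'Invalid email address';
  }
  if (!values.age) {
    errors.age = 'Required';
  } else if (Number.isNaN(Number(values.age)) || Number(values.age) < 18) {
    errors.age = 'Must be at least 18';
  }
  return errors;
}
```

Because it's a plain function of plain data, it can be unit-tested anywhere, with no form library involved.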

Hi, I'm a Redux maintainer. I would generally recommend _against_ using Redux-Form in almost all cases - it's really not necessary. Most forms can be handled by hand, or with a React-centric form library if you want to abstract that.

I actually just wrote a new Redux FAQ entry yesterday on "should I keep form state in Redux?"


I can see how performance might be a concern but I've never run up against issues with this in my own implementations using redux-form. Maybe if you do some exotic stuff it could be a problem, but yeah, never had an issue with this.

What I do love is the use of redux dev tooling -- for literally every action on my form, I have a log of when it occurred and how exactly it affected the state of the form. This is simply awesome. Yes, there are probably ways to do the same thing with a non-redux implementation, but if I'm already using redux I get this for free in the same tooling I already know and love.

For what it's worth, I have in the past (before understanding the tradeoffs of where redux state vs. component state makes more sense, while trying to do everything completely 'purely' in redux state) foolishly tried to keep all inputs and other UI components' state in the redux store, in a completely hand-written, element-id-based mess. I totally agree with the sentiment that it's not necessary or worth the effort to manage one-off component state in redux in the general case.

However, in this specific case, redux-form encapsulates and manages all of that complexity and you don't have to think about it at all. Ironically, other than the redux dev tools part, I don't really care about the 'in redux' part. I just like the library and its API.

Sure - if you've evaluated the tradeoffs and feel that using Redux-Form is beneficial for you, go for it! I've just seen a lot of people answer questions of "How do I work with forms in React?" by blindly saying "USE REDUX-FORM!", and that's just... no. Totally not necessary. (Right up there with "adding two numbers in JS by using jQuery".)

The author of redux-form created another library, final-form, that doesn't need Redux or React, has most of redux-form's features, and builds in the lessons he learned from writing redux-form. There's a separate react-final-form package with easy React bindings as well, if you're using them together. It's worth a look too (I've used both).

Ah, thanks, wasn't aware of this and will certainly check it out!

redux-form has lots of performance issues, and is now considered a relic of the days when everyone was drinking the Redux Kool-Aid. Formik is a very popular alternative: https://github.com/jaredpalmer/formik

Only with Javascript can a project that's a scant three years old from its first ever commit be called a "relic". :-)

For React forms and validation the combo of Formik and Yup is quite nice. Formik takes a schema prop specifically for integration with Yup, it's covered in the docs.



Formik was nice until I had to do complex logic that needs access to the form from outside the component.

react-final-form simply blew my mind after I had to deal with those horrible reactive Angular forms... It also works pretty well in tandem with Redux.

Depending on how well you do the tree-shaking, all dependencies that are not in use will be removed from the bundle.

I don't know how aggressive the Angular CLI's webpack settings are, but with our own config we remove any noise really effectively.

Also, you can use a tool like https://www.npmjs.com/package/source-map-explorer to see if there's code in your bundle you don't expect!

In Ember 3.0+ you can strip the framework down all the way to just the rendering engine if you'd like. It's all modular now.

Yes you can

>Or consider not using a framework at all. For websites that primarily display content, it’s more efficient and cost-effective to just send some server-rendered HTML down the wire. If there are areas of your website that require interactivity, you can always use JavaScript to build those specific parts.

This should be the takeaway from the article. There's absolutely no need for JavaScript on most websites that use JavaScript. I understand if you're forced into it by the time and economic contingencies of your work, but if you bring JS frameworks home to your personal sites, you're doing it wrong.

Exactly. I avoid JS as much as possible. Unfortunately, most web developers want to build SPAs regardless of the actual needs of the project.

I prefer the MPA (multi-page application) approach for most projects. If interactivity is needed, I use REST services and a data-binding framework on the front end.

That's very interesting - I use a similar approach. In my case, it wasn't deliberate. It came about from working on a variety of projects with management that had unclear and frequently shifting requirements.

A monolithic SPA approach turned into too much work when you'd have to throw out a bunch of code one day, replace it the next day with something just similar enough to cause a bunch of refactoring and bugs, etc.

In that context, it's easier to treat each page or major component as its own mini-SPA, then glue them together with a bare minimum of code.

For the same general reason, and to echo other comments, I recently chose Angular over React (or Vue) simply because Angular has a lot of built-in functionality that helps prevent a downward spiral of dependency maintenance from third party libraries and plugins, even if it's a little slower overall on many benchmarks.

This is the first time I've seen the "MPA" phrase used like this, but I think this is my preferred approach too.

Same, as a user I like the feel of going to a separate page.

Can you recommend any such frameworks?

Stimulus is a pretty good example of a simple JS framework that binds via data attributes. Coupled with Turbolinks or Pjax, it should let you do nearly anything you'd typically use a more popular front-end framework for, for much cheaper. Check it out: https://stimulusjs.org/

Thanks, that looks really cool.

Knockout.js works well for this; I've heard Vue.js is similar but haven't tried it.

That's for highly interactive pages, of course. For simple pages that just need some regex validation or something, no framework is the best framework.

I find JavaScript (especially React) to be an awesome template framework for non-SPA websites. I've used it in situations where I'm just creating server side template components. The JavaScript ecosystem can be extremely enjoyable for fast development.

Saying you have SPA experience with the latest frameworks makes it easier to get a job.

I understand. I just don't accept the incidental complexity and performance tradeoffs these SPAs bring. I've built several of them over the last couple of years but still like to avoid them as much as possible.

SPA-building experience with popular JS frameworks usually sells better than the skill of carefully engineering a solution for the task.

These "carefully engineered solutions" typically aren't; instead they're random JS bolt-ons to whatever server-side language. UI state is smeared across layers, and the ability to test, share, and modularize suffers. SPAs usually are better-engineered UIs than the alternatives.

Do you have data to back that up, or are you speaking from personal experience? I know how it can go wrong, but SPAs can become a mess too.

I had one project where we built a very beautiful, modern site that had about 50 lines of js in total. Of course your page-change times are going to be around 0.5s with an MPA, but SPAs are often slower than that anyway.

I don't know, but with a service worker you could possibly reduce that 0.5s in your MPA further by caching likely next pages ahead of time.

Yes, mainly personal experience. SPAs can definitely be a mess too. I'm mostly calling out that managing state in one place makes unit testing easier, which usually results in a better-quality codebase.
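The testing point can be made concrete: when the state transitions live in one pure function, the tests need no browser or DOM at all. A generic sketch, not tied to any particular library (the cart shape and action names are invented for illustration):

```javascript
// A pure state-update function: given the current state and an action,
// return the next state. Trivially unit-testable with no DOM or framework.
function cartReducer(state = { items: [], total: 0 }, action) {
  switch (action.type) {
    case 'ADD_ITEM':
      return {
        items: [...state.items, action.item],
        total: state.total + action.item.price,
      };
    case 'CLEAR':
      return { items: [], total: 0 };
    default:
      return state; // unknown actions leave state untouched
  }
}
```

Compare that with UI state smeared across server templates and ad-hoc DOM scripts, where exercising a transition means standing up the whole page.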

The backend has to handle all of the scale. That's where the real difficult engineering comes in, and if you're using JS there, you're going to pay a huge price. Many front-end JS developers don't seem to understand that pulling data from different places isn't actually a hard problem; correctly storing and managing that data at scale is where the actual engineering happens.

State management is hard. That is the core issue.

Note you can still use an MPA and have a dynamic UI using React/Knockout/whatever. It is just usually not done that way and most go full SPA.

As opposed to SPAs that take forty seconds to load on a 48-core machine with a gigabit link? Yeah...

Well, in a way it makes sense. It's like learning any other technology stack. The core stack may not be the best fit, but it's considered a better bet to hire someone with experience in the target language/stack because the learning curve is reduced.

AKA, a developer who knows the same framework your app uses only has to learn your specific application, rather than all the background libraries/etc. you're using. It's the same reason for preferring a C programmer for a C position over a Ruby programmer (or flip it). Sure, the Ruby programmer might be awesome and, after a couple of years, more productive, but in the meantime they'll be making a bunch of newbie C mistakes and generally bootstrapping slower.

This is part of the reason I'm in favor of generally reducing the number of languages/frameworks in general use. It's better for everyone if we all agree on some simple baselines, even if they may not be optimal for any given problem. The efficiency of repeatedly avoiding a bunch of rookie mistakes (and then having to debug/fix/test them for the next few years) is well worth it in the long run.

Resume driven development works. Unfortunately.

If your site can render without javascript, it can render without ads.

Some sites explicitly don't want that, unfortunately.

Why is JavaScript required for ads?

It's not. It's required for tracking.

I run several websites where advertisers pay to be on the website so that their brand can be associated with the content, and get viewed — just like a billboard or a TV commercial. None of them have Javascript. They're just plain images and links.

You don't need JavaScript for tracking. You could just use an invisible pixel.
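A tracking pixel is just a 1x1 image whose request hits the tracker's logs. Something like the following (the tracker URL is a placeholder, not a real service):

```html
<img src="https://tracker.example/pixel.gif?page=%2Farticle"
     width="1" height="1" alt="" style="display:none">
```

Every page view fetches the image, so the server sees the visitor's IP, user agent, referrer, and whatever is encoded in the query string, with no script involved.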

Some advertisers don't like to trust hits they can't verify came from a real, live browser. It's like the difference between the original reCAPTCHA (read the letters in this image) and the new ones (mouse, keyboard, and browser behaviour look like a human browsing on a real browser).

In an absolute technical sense of course it isn’t, but virtually all the major players in web ads (Google Adsense etc) typically use JS to ship ads/tracking to your page.

It is not strictly required, but most ad networks want to know about you as much as possible, from display resolution to browsing history (if they can). This requires JS.

Also, a lot of ad content is "rich", animated, etc, also scripted.

In the 1990s and early 2000s, ads were mostly just images ("banners") served by an ad network, or even text links, without any scripting.

Because most ads are auctioned while the page is loading. The client-side script identifies the user however best it can and then contacts ad delivery networks to see who is willing to pay the most to place the ad. (This is second-hand recollection; I might be wrong.)

The problem I have is that my UIs generally end up with complex requirements. At first, it's just a standard CMS-like site with relatively static content. Then they want discussions. Then they want live discussions. Then they want the ability to highlight sections of the article and hang running side-conversations off of those... etc... Without using some kind of front-end framework, I end up organically evolving my own to tackle the increasing complexity of the UI. Now, I've got UI logic spread across two layers: my back-end (Rails or whatever) and in my front end (my hodgepodge JS), both of which need to dynamically render / update markup-- at which point I wish I'd just gone with an SPA framework in the first place.

See, that's the point where I usually wish I hadn't used a framework. As your design gets more and more convoluted, the amount of switches, toggles, and special cases that you have to build into the widgets tends to balloon. Most current frameworks make that an exhausting exercise in state drilling, and it becomes hard to reason about how any given change in application state will affect the view.

It's not just that: due to the difficulty of getting CSS to do what you want, standard practice now seems to have everyone using JS for basic layout/update tasks. Back when I was doing a lot of web development, getting something to sit in the right place with CSS frequently took me much longer than just writing a bit of JS to adjust the size/position, but the results with CSS were much faster to respond.

CSS is easy to use today compared to what we had 10 years ago (float-based grids, clearfixes, arcane margin-collapsing rules). And yet sites managed to do their layout back then without requiring a JavaScript framework.

I hear you about CSS historically being a PITA. In my experience (web-related dev and arch since 1998), modern tooling, CSS-in-JS and component-based UI makes the DX so much nicer than it's ever been. My current project uses react, emotion, tailwind, autoprefix and postcss, and it's intuitive and efficient.

The big challenge for this is different screen layouts.

Mobile + desktop support is basically a hard requirement. A tablet-optimized version is nice to have.

This gets very hard to do well with just CSS.

Not true. Check out CSS media queries.

I'll second this. Flexbox + CSS Grid with the appropriate media queries can handle just about every layout configuration you can throw at them.

When I checked those, searching uncovered statements that most smartphones are lying, which would explain why my tests never worked on my phone.

Maybe I've been led astray, but from what I gathered about a year ago, media queries cannot distinguish between desktop and phone; you need JS to do this.

Please tell me I'm wrong about that (at least, if you give me a hint on how to do it).

Media queries combined with the viewport meta-tag solve your problem.

    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
Using that meta-tag, phones will respond to media queries at around 240-360 CSS pixels, depending on their screen size.
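With that meta-tag in place, ordinary breakpoints behave as expected. A minimal sketch (the breakpoint value and class name are arbitrary examples):

```css
/* Desktop default: sidebar visible. */
.sidebar { display: block; }

/* Narrow screens (most phones, once the viewport meta-tag is set):
   collapse the sidebar. The 480px breakpoint is just an example. */
@media (max-width: 480px) {
  .sidebar { display: none; }
}
```

Without the meta-tag, a phone would render the page in a ~980px virtual viewport and the narrow-screen rule would never fire, which is the "phones are lying" behaviour described above.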

I like to use Bootstrap so I don't have to be bothered with media queries myself. But I only use small parts of Bootstrap, and I opt in and out of specific parts by adding/removing includes for specific modules. (You can do that with the LESS or SASS version of the package: copy the generic "imports" file and comment out everything you don't want/need.)

Bootstrap includes many sane defaults and some fixes for weird browsers. By choosing what to include and what not to, I can choose whether I want Bootstrap's opinionated styles or my own, and I can switch later (start out with Bootstrap forms, customise them later, and finally remove Bootstrap's form styles).

You're right, but often your problem is better served by a different strategy, because doing web design based on device type is kind of unsustainable.

You are right. CSS media queries work on the viewport's resolution and some simple "media" types. Device-type determination requires string calculations on the user agent; you cannot determine what the device is with CSS alone. But a backend can: you can divide your app and serve separate designs by user agent, etc.
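A server-side split on the user agent might look like this. A rough heuristic sketch only: UA sniffing is famously unreliable (the pattern list below is illustrative, not exhaustive), which is part of why the sibling comments prefer responsive CSS.

```javascript
// Crude server-side device detection by User-Agent substring matching.
// Heuristic only: user agents lie, and new devices appear constantly,
// so always fall back to a sensible default design.
function isProbablyMobile(userAgent) {
  return /Mobi|Android|iPhone|iPad/i.test(userAgent || '');
}
```

A backend could pick a template based on this check and serve the desktop design whenever it's unsure.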

And grid for repositioning blocks

Yes, we were targeting mobile/tablet devices at the time too (well, bolting on to an existing code base). For example, the CSS to dynamically swap between a side menu and a hamburger button wasn't complex (nor were a few other layout changes we made based on screen size), but getting everything to resize properly added significant, constant overhead during development, mostly because any given solution took hours to work out.

The alternative of serving separate stylesheets makes development easier but testing much harder, even with the responsive-design buttons in FF/Chrome. That's probably also why I haven't seen many web apps recently that try to do everything with a single set of CSS pages.

So is it time for "CSS libraries" (pls no, though)?

Why do you think so many developers use things like Bootstrap, Foundation, Bourbon, Bulma, etc as starting points and languages like LESS and SASS to keep it modular/readable/maintainable? We've had CSS libraries and frameworks for years now.

> This gets very hard to do well with just CSS.

This was once true, but not anymore. Grid and flex coupled with media queries solve this problem almost perfectly.

Or you could just not care about mobile's inadequacies. That's one of the joys of making personal websites. You get to do what you want not what's required to make the most money.

Amusingly enough, his charts and images don't display without JavaScript. Are they interactive, or is he himself using JavaScript where a simple img tag would suffice?

It's probably because his blog is hosted on Medium.

Blame Medium. Why do people even use this pile of garbage?

* Slow

* Bloated

* No images if you block cookies or js

* Paywalled. Asks for a $5 membership every other day, requiring deleting cookies to read articles

* Probably tracks the living shit out of visitors

* Somehow needs a ton of js to render a blog article

* Frequent divs in the middle of the screen asking to create an account

And they don't even produce content. It's all created by users.

I see business opportunity here.

> I see business opportunity here.

Genuinely curious: do you see one without the annoying aspects you described? How would you monetize it?

Admittedly, I don't know of another blog platform without ads. But I haven't taken time to search either.

> How would you monetize it

I wouldn't. Medium solves a non-problem. It only exists because VC funds have this tendency of pouring money over anything that resembles a business.

However, Medium isn't too big to fail, and I suspect its grace period is long past; business decisions are now mostly driven by shareholders seeking a favorable exit strategy. [1]

[1] https://marketingland.com/medium-will-eliminate-promoted-sto...

But if I really had to come up with a way to monetize Medium, I'd take a look at Wikipedia business model.

Wikipedia is a non-profit. It by definition doesn't have a business model.

And yet the Wikimedia Foundation reported an increase of $22 million in net assets in 2016 alone, totaling $120 million sitting in the bank. [1]

So my suggestion stands: There are lessons to be learned by looking at how Wikipedia manages without ads.

[1] https://annual.wikimedia.org/2017/financials.html

Because people don’t care about those points, it’s easy, and it gets you an audience quickly.

Speak for yourself. Some people do mind shitty websites. That's one of the reasons Medium had to lay off 1/3 of its employees last year.

There is no need to go raw if you can use a library like Preact (3KB) or HyperApp (1KB) and still get the majority of the benefits of using a modern component architecture.

JavaScript has specific client-side features that servers cannot provide. Good luck doing Google Maps without JS.

When I browse the web, I disable JS by default. In the overwhelming majority of cases, it makes everything so much faster and nicer.

I don't understand why this point is brought up on HN every time there's a discussion about JS.

Disabling JavaScript represents an almost non-existent subset of web users, and developers have more pressing things to think about.

I agree. I appreciate that you _can_ disable JavaScript, but I don't really think of it as a realistic option for using the web in 2018, and certainly not for the average user wanting to do average things.

It actually is a completely viable option in 2018 for an advanced user wanting to do average things (with uMatrix, for example), and it's a nice experience. Agreed about average users, though.

But considering how horrifically bad things have gotten, maybe we should be making a concerted effort to make it realistic.

One thing that should be noted with Angular is that work is currently underway on a new renderer that should allow the framework bundle size to be optimized, with the whole framework being tree-shakeable.

I wonder how Angular would compare once that's done, as this has long been known as probably the main negative with Angular performance-wise: the initial load (it should be faster than most of the rest once it does load).

Really faster than React? Having a hard time with that one...

For all numbers except initial load, everything I've seen shows Angular is significantly faster than the rest; the difference is more pronounced with more DOM nodes. Angular also has the advantage of incremental updates, although React will gain that benefit within a year as well.

Note: I used to work primarily with Angular, but I currently use React, since the coding style React encourages works better with the team I'm on, and React is fast enough for us.

>everything I've seen shows Angular is significantly faster than the rest

That contradicts every published test.

It is also hard to imagine. Angular is both reading and writing the DOM.

And I haven't touched it since they switched to the new version, but the old version noticed state changes by continuously comparing all of your state (so more state meant it was slower).

I'm assuming they switched to a more native implementation of the observer pattern in the newer version, rather than that pathetic hack.
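The dirty checking being described works roughly like this toy sketch (an illustration in the spirit of the AngularJS digest cycle, not Angular's actual code). Note that every digest re-reads every watched value, which is why cost grows with the amount of state being watched:

```javascript
// Toy dirty-checking loop: every digest compares every watched value to its
// last known copy, and keeps looping until a full pass sees no changes.
function createScope() {
  const watchers = [];
  return {
    watch(getter, onChange) {
      watchers.push({ getter, last: undefined, onChange });
    },
    digest() {
      let dirty = true;
      while (dirty) {
        dirty = false;
        for (const w of watchers) {
          const value = w.getter();
          if (value !== w.last) {
            w.onChange(value, w.last);
            w.last = value;
            dirty = true; // a change may ripple; re-check everything
          }
        }
      }
    },
  };
}
```

An observer-style design instead notifies only the watchers of the value that actually changed, avoiding the full re-scan.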

Of course, neither React nor Angular is as fast as hand-written code, since both take responsibility away from the developer. In React's case: don't worry about what you need to update, just re-render the whole thing again. Simpler code, and the performance price you pay is somewhat limited because it then diffs a virtual DOM.
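That re-render-then-diff idea can be caricatured in a few lines (a toy sketch, nothing like React's real reconciler): render everything to cheap plain objects, then diff old against new to find the minimal real-DOM change.

```javascript
// Toy virtual DOM: nodes are plain objects, so "re-rendering everything" is
// cheap, and diffing two trees yields the smallest patch to apply for real.
function h(tag, text) {
  return { tag, text };
}

function diff(oldNode, newNode) {
  if (!oldNode) return { type: 'create', node: newNode };
  if (oldNode.tag !== newNode.tag) return { type: 'replace', node: newNode };
  if (oldNode.text !== newNode.text) return { type: 'setText', text: newNode.text };
  return { type: 'none' }; // nothing changed: no DOM work at all
}
```

The expensive DOM is touched only where the diff says so, which is how "just re-render everything" stays affordable.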

Angular's approach, annotating HTML, simply reinforces what we learned later: that it was written by someone with absolutely no experience as a front-end developer. Performance-wise it's about the dumbest thing you could do: every time you switch from writing to the DOM to reading the DOM, you force the browser to do another layout & paint, and all of that happens synchronously, instead of using the concurrent model that modern browsers actually offer.

What has happened is that these frameworks have become the new platforms. People don't realize that everything from online chess games to Google Maps isn't written using these frameworks, and it's questionable whether any of them could do that with good enough performance.

At our company, when recruiting, we now distinguish actual front-end developers from Angular or React developers. They may know these frameworks; that does not imply they could write them, or understand enough of front-end development in general to debug actual cross-browser issues, performance issues, or rendering issues. And we give them a test that requires building something complicated in pure HTML/CSS/JS.

> And i haven't touched it after they switched to a new version, but in the old version they noticed state changes by contiously comparing all of your state. (so more state meant it would be slower).

Are you confusing AngularJS and current Angular? AngularJS is certainly wholly slower than React, but not Angular 2+.

Not sure if it could be faster than React, but you can see what it's accomplishing by reading this:


Or watching this: https://www.youtube.com/watch?v=dIxknqPOWms&feature=youtu.be...

I used to write all my static sites using Hugo, until I saw a site built with Gatsby. Gatsby has a larger overhead on the initial page load but feels much faster, and subsequent page loads have less overhead (or maybe more, if it's prefetching a lot of adjacent pages you'll never navigate to).

It seems to me that the next logical step is a site that ships the initial page request in vanilla JS, then async loads react and then async fetches the overhead and then runs Gatsby. This would be the ideal if we're optimizing for perceived latency. So the first page load is fast, then as long as you stay on that page for a second or two before clicking elsewhere, everything after that point will be snappy too.

Of course, many other devs are optimizing for network usage, etc., so to each their own.

You can attain Gatsby's SPA feel/speed with Hugo via Turbolinks [0]: a much smaller overhead on initial page load, and you can hook up a service worker to do any prefetching/caching.

[0] https://github.com/turbolinks/turbolinks

Awesome, thank you. I dislike working with Gatsby and much preferred Hugo.

But what about prefetching?

From my experience, it seems like you'd have to jump through a few hoops to get prefetching working with Turbolinks. I just relied on having my service worker do all of the prefetching/caching of the site's important pages + static assets.

> It seems to me that the next logical step is a site that ships the initial page request in vanilla JS, then async loads react and then async fetches the overhead and then runs Gatsby.

I don't know about React and Vue, but that's what Angular universal does, and page load is indeed very fast then

> It seems to me that the next logical step is a site that ships the initial page request in vanilla JS, then async loads react

Or just vanilla HTML. (That initial "vanilla JS" needs to get evaluated in some page context anyway...)

Your next logical step probably won't happen for React (outside of server rendering). React is just JavaScript, so you can't capture and serialize event handlers, etc. Though Prepack (by Sebastian on the core React team) is an attempt at doing something similar.

A more likely path is that React enables aggressive and progressive loading of component code, so the initial bundle is smaller.

They've already taken a big step towards that in 16.6, by adding `React.lazy()`: https://reactjs.org/blog/2018/10/23/react-v-16-6.html
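For reference, the shape of the 16.6 API — `React.lazy` plus `Suspense` — looks like this (`./HeavyChart` is a hypothetical component module):

```javascript
import React, { Suspense } from 'react';

// The component's code is split into its own chunk and only fetched the
// first time it renders; Suspense shows the fallback until it arrives.
const HeavyChart = React.lazy(() => import('./HeavyChart'));

function Dashboard() {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <HeavyChart />
    </Suspense>
  );
}
```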

Nuxt handles this issue nicely imho.

Love Nuxt. I use it for all sorts of things. Pair it with something like Vuetify and you can start writing business logic almost immediately. Never had a problem with speed.

Damn, Svelte is not on there. I'd be curious what the author thinks to it as an approach https://svelte.technology/

I was about to say, this is precisely the reason Svelte exists. I rewrote my personal site in it a year ago [1], and the development experience was similar to a Vue or Ractive app, but with compilation (ie, the resulting webpage doesn't have Svelte included - it's just my app specifically compiled) the result was tiny.

[1] https://mikemaccana.com, probably has a lot of small bugs since I spent my time working on more important things.

Couldn't SSR with later JS hydration be just as effective at getting instant startup?

Svelte also does SSR (if you set it up, which I haven't bothered), but you don't have to pull down a whole framework, so the hydration step is much faster.

Right, but with SSR the other frameworks basically have no downside. Their loading is also instant. A 1 second long hydration is not a big deal.

A 1000ms hydration may be a big deal, and hydration may also take more than 1000ms.

What matters to people is the content loading and being able to see the page. For the majority of apps the hydration is almost irrelevant because the time it takes to read the page and decide to interact with it takes vastly more than 1000ms, though there could be exceptions. It's certainly the case for all the apps I've worked on.

It can still cause things like janky scrolling while the JS is parsed. Less JS is never ever going to be a bad thing

> the time it takes to read the page and decide to interact with (the page) takes vastly more than 1000ms

OK. We have different experiences then.

> your React application will never load faster than about 1.1 seconds on an average phone in India, no matter how much you optimize it

What about cache? In that 1.1s figure, 0.9s are for the download of react.

Is it reasonable to assume that for most subsequent loads the website would load in 0.2s?

HTTPS everywhere kills caches, in the sense that only end devices can provide them; intermediate network nodes cannot.

Check Eric Meyer's complaints.

Does it also affect caching via service workers? Pretty standard practice in PWAs.

No, HTTP caching and PWA service worker caching work fine. (In fact, service workers require HTTPS to even function.)

Eric Meyer's "HTTPS busts caching" complaint refers not to HTTP or service worker caching, but to man-in-the-middle local servers caching responses. Such a proxy sits between the internet and your local box and serves cached responses. This breaks with HTTPS, which is designed to defeat man-in-the-middle attacks.

No, but it impacts the initial download, as it needs to travel end-to-end from the origin instead of from the closest access point.

It's also a very good way to waste traffic quota, e.g. a school's network connection vs. its students' computers.

Which interestingly is a problem IPFS would solve, right? But IPFS potentially introduces other drawbacks.

It would be cool to be able to build dual-distribution applications, where publicly available assets and content blobs (e.g. app.js) could be distributed in an immutable, cacheable manner, and server interactions and private assets could still be handled in a more traditional client/server way.

Perhaps then we might even be able to realize the original intent of including dependencies from a CDN, being able to share a cached version of React across websites.

(I'm not an IPFS advocate or detractor.)

> Is it reasonable to assume that for most subsequent loads the website would load in 0.2s?

Wouldn't that number go back to 1.1s every time you deploy a JS change? And isn't it good to deploy often?

With a service worker, after you deploy new changes, the user will still load the cached version first, before downloading the new version in the background.

So the load time is still short.

A nice way to improve the time-to-interactive is to prefetch the framework after you load an initial basic page.

Here's Addy Osmani explaining how it works: https://medium.com/dev-channel/a-netflix-web-performance-cas...
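The declarative version of that idea is a couple of resource hints on the lightweight landing page (bundle paths are hypothetical):

```html
<!-- Ask the browser to fetch the heavy bundles at idle priority so they're
     already in cache when the user navigates into the full app. -->
<link rel="prefetch" href="/static/react-dom.bundle.js" as="script">
<link rel="prefetch" href="/static/app.bundle.js" as="script">
```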

Used that on my first Ajax app. It was authenticated so we pruned the JS on the login page like crazy and added a script that injected the main bundle into the page after page ready.

Worked great for everybody not using a password manager...

That's assuming you don't do any server side rendering right?

I can send completely readable html over the wire that becomes interactive once the JS finishes loading.

What am I missing?

The days of work needed to set up server-side rendering?

I realize I must look stupid saying this, because when you know your tech stack inside out the setup might be 15 minutes instead of days. But a large proportion of developers only learn to do things when they have to, and so you learn server side rendering the first time you have a customer willing to pay for the time it takes you to learn.

"SSR" is the default behavior of most pre-2016 web frameworks. Only when ReactJS became hip did it appear on lists of nice-to-haves that people weigh the pros and cons of.

Yes, I'm talking in the context of SPA frameworks (which the article referred to).

And conversely they would also have to learn these JS frameworks, right? Something like Angular is far harder to master (and debug) than server-side rendering.

Of course, but time is limited and there's a huge number of things to learn. So my point about a customer paying the cost of learning SSR may be moot, but someone has to pay it.

I also assume the ease of implementing SSR is highly dependent on the backend tech stack. My guess is that only Node.js makes it relatively easy.

Additionally, the author of this article used time-to-download-the-bundle as a metric rather than time-to-interactive or the Speed Index metric, which are actually indicative of UX.

I think that if you’re not very serious about front end then ignoring SSR is fine, but if UX or increasing conversions is a concern of yours then it’s vital.

Well yeah it assumes you don't have SSR. But that isn't a useful metric for this "study" since SSR just gives the bare html. This person is just measuring time to interactivity/fully loaded JS and costs from the JS perspective.

Nothing, apart from you’re thinking of users instead of current dev fashions.

I'm the only person on my current team (an internally-focused engineering team) who has experience building websites/apps for external paying clients (e.g. people who will freak out if anything fails to load faster than they can blink, on their phone's 3G connection).

I was taken aback by how many times I had to argue with my team, our sister teams, our consultants/contractors, and even our (non-technical) leadership about NOT building a SPA when we came across 1 form in our large codebase that required some extra interactivity.

For some reason many developers seem to be fine turning a blind eye to the actual page-rendering time, maybe because it's hard to benchmark with tools? (although I'm skeptical of that). I find an overlap however in that those same developers seem to not understand why multiple database query roundtrips are worse than 1 query followed by some in-memory munging.

I'm sure I'm biased by my background, where we were profiling like mad, doing crazy things like purposely modifying images/hand-building sprites to pack better, minifying CSS before SASS/LESS existed, and every millisecond to finish rendering mattered. Good times.

Really depends on what is your product and how users use it.

If the user just visits a few pages every time, you probably don't need an SPA.

However, if your website has hundreds of pages and users need to navigate around a lot, an SPA would actually reduce load time by cutting down the amount of data transferred over the network every time the user goes to a different page. It also makes the experience of navigating between pages smoother.

browser cache solves this problem

Curious: what strategy do you use to invalidate the browser cache for a non-SPA when deploying new changes?

SPAs tend to come with build systems, but that is just a side effect of their complexity. You can have a front-end build system independent of SPA/MPA architecture. In fact, you should, at least for minification and whatnot. That system can add a content hash or some other strategy to force invalidation at build time for deployed artifacts. Barring that, whatever is serving the asset should handle its caching/TTL correctly, but configuring that can be hard (in complexity or domain knowledge). Build hashes have been the most resilient approach I've found so far, mostly because they're easily automated in artifact generation.

Versioned URLs of everything except index.html should work just fine.

As others have mentioned: app.min.js?$hash-or-date-or-releasever$

versioned url

Sorry if it is not clear, I was referring to html files returned by servers when user visits old-fashioned server-rendered web pages (non-SPA pages) like `/page1` or `/page2`.

How do you version urls like those?

Something like `/page1?t=201811292359`? That seems ugly to the users.

Rules I use:

- content under certain url should never change

- if it has to change then use ESI or/and smaller cache expiration times

- if it has to change very often/in realtime, then use static HTML as a skeleton and load only the required information via JavaScript, e.g. info about the currently signed-in user, or the eshop basket

What I find funny about this comment is that your experience of being a flabbergasted subject matter expert is basically the same for every SME in any domain on your team, your sister teams, etc. I've been there too.

It's rare to find developers who've deeply specialized in multiple different domains.

Tangential, but it's frustrating creating a website and optimizing it for performance down to the bone, and then having 'mission critical' plugins totally rip the potential performance gains to shreds.

My ecommerce site [0] is built on a custom stack, not Shopify, making it orders of magnitude faster. I got it down to 600kb with 27 network requests total (and it's heavily media/image based). Now find me any other ecommerce site that can boast a comparable network payload.

Unfortunately, after throwing in Google Analytics + FB ad trackers + a chat widget + a YouTube video, the network requests have ballooned to 100+. It's very ironic how these companies promote the development of a performance-driven web, yet the tools they want us to embed do quite the contrary. They do, however, offer a lot of value. What to do?

[0] — www.getfractals.com

> after throwing google analytics + fb ad trackers + a chat widget + a youtube video

But do you actually need all of this? Don’t get me wrong, this is still way ahead compared to the garbage the usual sites load, but I think you can do better.

For analytics, is there a way you can only include it for a certain % of requests? This should work fine with analytics and still give you a good overview.

The chat widget can be put behind a link on its own page.

For YouTube consider having a static thumbnail that links to the video instead of the embed code, or see if another provider offers a lighter embed (Vimeo?)
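A common pattern for the YouTube suggestion — a static thumbnail that swaps in the real iframe only on click (`VIDEO_ID` is a placeholder):

```html
<a href="https://www.youtube.com/watch?v=VIDEO_ID"
   onclick="
     var f = document.createElement('iframe');
     f.src = 'https://www.youtube.com/embed/VIDEO_ID?autoplay=1';
     f.allow = 'autoplay'; f.allowFullscreen = true;
     this.replaceWith(f);
     return false;">
  <!-- YouTube serves static thumbnails for every video at this URL pattern. -->
  <img src="https://i.ytimg.com/vi/VIDEO_ID/hqdefault.jpg" alt="Play video">
</a>
```

Until the click, the page pays only for one image instead of YouTube's full embed payload.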

Your site is still really fast compared with almost all ecommerce sites. I built the backend for a friend of mine's blog and I always use it as an example of how fast the web could be but yours might be a more impressive example:


Thank you kindly! The site you have attached is also rocket speed!

CMSs like Shopify offer great value and are a great starting place even for technical folks. However, I think the lack of priority they've placed on the performance of sites on their platform has overall been a net negative for e-commerce at large. The mobile experience of most Shopify stores is usually atrocious, and the inclination to bounce before page load is all too real.

I understand they are working with an old stack, and changing course may be nigh impossible at this point. However, I built my stack in 2014, and the dividends of doing something custom vs the status quo have more than paid off.

Check out http://changelog.com for an example of a fast (it's the fastest site I've ever used) site that's rendered server-side without any of the SPA madness. It's open source, built on Phoenix and makes good use of Turbolinks. (https://changelog.com/posts/why-we-chose-turbolinks)

Tiny thing: the purple buttons change border thickness when hovering, you could give them a 1px transparent border in the default state, to prevent content shift around one pixel when hovering over them.
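In CSS, that fix looks something like this (class name is hypothetical):

```css
/* Reserve the border's space up front so hovering doesn't shift layout. */
.button {
  border: 1px solid transparent;
}
.button:hover {
  border-color: rebeccapurple; /* only the color changes, never the box size */
}
```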

Hey, I actually struggled with that a bit, I really appreciate this advice! Will include it tonight :) thank you!

I've been there myself, glad to help, and love the site :)

When transitioning between a different number of properties (e.g. for shadows, gradients) or different types (inset vs normal shadow), this trick is really useful. Here's a fiddle to demonstrate it for box-shadow:


This is why I'm a happy user of Preact - the core feature set of React minus the bloat. Especially with some recent additions to React (context, hooks) which I don't have any need for, I'm glad to stay with the smaller surface area of Preact instead.

Preact is the most React-like small framework, probably the best for your resume and for hiring people who want a good resume.

Mithril is the small framework that most directly addresses the issues brought up in this article. It has routing built in and comes with a small streams library that, for apps I work on, is all the state management I need.

For those who really love JSX, Inferno is a really clean, small React-like framework. Its advantage over Mithril is that JSX expressions evaluate to components, not vnodes, so your HOC code will be slightly cleaner.

> If you want your application to become interactive on your users’ devices in under 5 seconds, can you afford to spend a fifth of that time just booting up React?

If you care about load time you should pick a framework because of the value it brings, rather than what it costs. That one second overhead might make your application code twice as efficient.

For example Next.js and React will give you route based bundle splitting and navigation preloading for free. Yes, a vanilla JS app could implement the same, but requires a lot more boilerplate to achieve. So in practice most devs are more likely to skip that part and their lightweight no-framework app becomes a bloated mess.

I switched a classical HTML stack to a Next.js app, and although the first page load is definitely lagging, the speed gains in inter-page navigation are incredible. People navigate 70 pages within a couple of minutes.

Definitely worth it for many modern-day usages.

I wonder how much bootup time Ember.js takes nowadays. I mean, a few years back Ember was probably the worst when it came to initial loading times (it has other strengths), but I heard it got better over the years.

> Frameworks exist for a reason.

People keep saying that...

> Truth is, when you build your application on top of a modern JavaScript framework, you agree to pay a certain baseline performance cost that can never be optimized away. In exchange for paying this cost, you gain maintainability, developer efficiency, and (hopefully) better performance during runtime.

All of that is completely subjective except for that final bit about runtime performance. I bet nobody can prove runtime performance improves in correlation with layers of abstraction.

It depends. ML or ATS code can perform faster than hand-written C code, despite the fact both MLton and ATS compilers actually generate C code, effectively adding the layers of abstractions.

And you can have the verification for free also.

> We introduce a new approach for implementing cryptographic arithmetic in short high-level code with machine-checked proofs of functional correctness. We further demonstrate that simple partial evaluation is sufficient to transform into the fastest-known C code, breaking the decades-old pattern that the only fast implementations are those whose instruction-level steps were written out by hand.


Compiler output isn't an abstraction. It's what is ultimately executed, where the compiler's source sample is not. For an apples-to-apples comparison, I am claiming the code compiled from a framework is going to be slower than well-written code without a framework. I am also claiming you will be painfully challenged to prove otherwise.

Frameworks don't exist to make code faster. They exist to provide an abstraction layer.

It's simple. Don't use frameworks for content consumption. It adds no value to the end user. No, I don't need your full fledged my-data-sucking SPA just to read an article on your site. Keep it simple.

The best example?

https://www.react.rocks is a site built on react to advocate people to use React.

This is how it looks without JS.


A site built on React to advocate React doesn't serve its primary purpose. The irony.

I don't understand your complaint here. Your issue is that a site written with a JS framework doesn't load without JS? The HN anti-JS brigade will never cease to amaze me.

You think the 0.001% of people who browse with JS disabled are the primary target for such a website?

>You think the 0.001% of people who browse with JS disabled are the primary target for such a website?

Because you obviously grabbed this number out of thin air, I would like the HN community to know that the actual percentage of users without JS is much closer to 1%[0].

[0] https://gds.blog.gov.uk/2013/10/21/how-many-people-are-missi...

Do you have any updated sources? This data is from 2013.

No I don't, unfortunately. But I imagine that this figure will be fairly stable for the foreseeable future. There are always going to be people with disabilities who turn off JavaScript for accessibility reasons, and privacy-conscious individuals and corporations that have it disabled.

I think the parent comment's exaggeration of the number is bad form for web developers, and goes against the spirit of the web, which I believe is based on the principles of universal access and inclusiveness.

I have no data here, I would guess that number would be lower today than in 2013. My reasoning being the proliferation of smart phones in second and third world countries and the general trend of more web 2.0 users, though I fully accept that this is just an educated guess.

As a web developer, for me the problem is that writing anything modern without JavaScript is a much harder task that only really benefits a small minority of users. I'm unfamiliar with any disabilities that would require JavaScript to be turned off but I could see this being a valid usecase.

Just found this from 2016: https://blockmetry.com/blog/javascript-disabled

That post also implies that the figure is stable.

Your second paragraph is a perfect example of the kind of attitude that I think is problematic. Yes it's more work to accommodate non-JS users. But so is building ramps for people in wheelchairs. The discrimination gets "baked in" to your/our development processes, and you/we justify the discrimination based on the idea that the only ones who will suffer from it are a "small minority".

It's like assuming that everyone uses a mouse. Clearly some don't, and if we develop the web based on that assumption, we're going to leave a lot of folks in the lurch. We should hold ourselves to a higher standard[0].


Could you elaborate on the a11y benefits to disabling JS?

This is a shot in the dark as I don't use any a11y tools, but when you load sites and block JavaScript, you often get much simpler markup, which is useful for finding the content you want your screen reader to read.

You don't understand, the primary purpose of a website is to deliver content to its intended audience and if your framework doesn't provide you a failsafe way to do that, then there's no point.

The link I provided is from Google's cache; it isn't only about visitors with no JS. It means that's exactly the way Google sees that page. So what's the use of your "my-awesome-hipster-powered-react" site if it can't even get indexed by search engines and serve its primary purpose of displaying content?

All of that loading time to show a site that has an index page and a single item page, poorly designed/styled. lolz.

So are you against frameworks or JS?

This isn't about JS frameworks, it's about rendering data on page load.

Whether the server renders the data into the HTML it responds with, or there's some XHR request made by a client library (including vanilla JS) or framework, you're always incurring some cost. But data-on-load is what users generally want and expect from modern web apps.

Having recently built an almost 100% HTML-based web app, I can tell you the user interactions feel unpleasant. Removing JS from web apps is silly at this point.

Comparing barebones view-layer libraries (Vue, React, etc.) with full frameworks (Angular, Ember, etc.) is not comparing apples to apples.

I'm assuming you didn't read the article? He was comparing React + Redux + react-router (via create-react-app) to Angular. So it is an apples-to-apples comparison.

Even if you add redux and react-router you still don't have feature parity. The comparison is inherently arbitrary. All I really know from his experiment is if I load up a feature rich framework vs. a bare-bones view+route+state combo the framework is going to be heavier. Is that really news?

Yay, it's comparing apples to pears!

> Methodology:

> ...

> 2. Installed routing and data management libraries in each project, and made sure they were part of the JavaScript bundle.

I am surprised that the author completely ignores client-side caching via the browser and service workers, as well as SSR.

One key point of a PWA is that it works offline and loads faster after the first load (even after you deploy new changes, with the basic service worker configs from sw-precache). So you need 1 second for the first load only. Subsequent loads are much faster.

This is why obsoleting frameworks by expanding the native component model of the web is good for the world.

I'm really loving just casually writing websites with Rust web frameworks, because I get to try out all these new tricks in HTML/CSS/ES7 to avoid pulling in any third-party libraries. Then you get goodies like compile-time templates and regexes, while having only 30ms from initial request, to TLS handshake, to routing, request handling, response, and rendering all done (on the same network).

Why would you compare React/Vue vs Angular??? I don't think the author understands the difference between frameworks and libraries.

React vs Vue can be a valid analysis, same as Angular vs Ember. It's silly to benchmark React/Vue vs Angular/Ember.

Lets create a benchmark of ponies vs zebras/horses!

Seems pretty clear they understand the difference judging by what was written in the article.

Do they though? Have you read the article? Medium says it's a 5min read. 5min of completely wasted time. Btw, it says right here that React is not a framework: https://reactjs.org/.

Always amazed at how people call themselves "engineers" while being unable to grasp such simple, basic nuances.

Including megabytes of unrelated .gifs in the blog post kind of undermines the point here...

I don't agree. Those gifs are explicitly added. The author's point is about load times that are relatively unavoidable.

Please stop using gzip size for these sorts of comparisons. One, gzip affects download speed, but not JS evaluation and compilation. Two, it makes the libraries seem much smaller than they actually are.

> One, gzip affects download speed, but not JS evaluation and compilation

They measured scripting time directly, which is a much better measurement than uncompressed bytes.

> Two, it makes the libraries seem much smaller than they actually are

Would you prefer they gave unminified sizes as well? The gzipped size of a library lets us predict how long it will take to download, and how much that will cost for users.

Raw file size is meaningless though. For example, if I use long variable names I will have a bigger source file. But with gzip it wouldn’t affect anything at all.

You can think of gzipped size as a rough measure of the actual information in a file, as opposed to raw file size, which is largely meaningless in a networked environment.


Assume you have an app, rather than a site.

Your app has a login

Your password is not sent in clear-text, your server uses HTTPS for all its traffic.

You probably don’t want to use compression, then.

Of all the client-side JavaScript libraries available for making responsive web applications, I've found that Elm suits me best. I haven't done any benchmarks, but it feels faster than React and Angular to me and the Elm compiler eliminates virtually all runtime issues before I ever deploy my code.

Another alternative to large client-side libraries I've been looking at lately is Phoenix's LiveView. It is a server-side rendering library that uses web sockets. I've only just started using LiveView, but so far my pages seem to perform better than React or Angular.

Given Live View was only announced two months ago, is in development, and hasn't been released in any form whatsoever, I'm curious as to how you've managed to use it to make that comparison?

Duh! Sorry, I meant Drab. I had LiveView on the brain. You can check it out here: https://tg.pl/drab

He's not using SSR, which is wrong in 2018. Vue's startup performance will also supposedly double in the next version. Looking forward to that.

Do you mean halve? That's what I took away from the twitter post by the creator Evan You.

Performance doubles, time halves. I think my comment was fine.

To better make sense of this data it needs to be compared against the time needed to build a non-trivial interface using each of the frameworks.

This is exactly right. A lot of posts regarding loading times or bundle sizes for frameworks/libraries forget that there are trade-offs when it comes to building an application.

The correct way to frame this is: use a slow/clunky/large/etc framework vs. build everything from scratch (which comes with its own costs).

Sure, you can optimize parts of your application to speed up the JS portion or even remove it completely [1], but it's not always as simple as "your framework is making your application slow so you should think about ditching it."

This article actually ends with a very reasonable conclusion in the section "Are Frameworks Evil?" but I've seen plenty of articles where the author doesn't offer an alternative to some library/framework [2].

[1] https://twitter.com/netflixuie/status/923374215041912833?lan...

[2] https://dev.to/gypsydave5/why-you-shouldnt-use-a-web-framewo...

The main conclusion from this article isn't accurate. This is one reason to load common libraries from a known CDN: if any website has previously downloaded that library version, and the user's cache has not been cleared or expired, the library can then be loaded from cache, even by a different website.

This assumes that both your site and the subsequent site either pull an evergreen version (bad practice) or the exact same version (not altogether likely). Having done enough front-end performance auditing for potential clients, it's clear the latter case is as varied as can be.

We pulled everything in-house onto our own Cloudfront CDN and saved a bunch of time on DNS/TLS negotiation, as each unique third-party request is extremely expensive over middling 3G, on the order of nearly 1 second apiece according to WebPageTest. Leveraging 'preconnect' with that CDN saved just a bit more time on top of that.

That's fine for jQuery, Bootstrap etc, however most modern frameworks are bundled into a single package with all of the templates and logic, making a shared CDN useless.

It would be interesting to see some benchmarks of a cached CDN Angular file and non-cached template/logic file, however lots of other sites would have to jump on board to get the cached file into users' browsers reliably.

Historically that was a major security issue. It’s less of an issue now that SRI exists (you’re using SRI, right?), but it’s still a problem when you have a broken pathway to the CDN but your site loads OK. Well, assuming you’re not building with progressive enhancement. You’re building with progressive enhancement right?

Sorry, what is SRI?

Subresource Integrity. It’s designed specifically for this scenario: a resource included in your page from a separate server that could potentially be compromised. You hash the known-good resource and include that hash in your script or link tags as an integrity attribute. When the user’s browser downloads the resource, it’ll hash it again and check for a match. Only if the two match will it parse and execute the contents.


Don't ignore the time required to parse and execute JS. You don't get as much benefit from cached scripts as you do from other cached resources.

It doesn't mention framework versions..

Not using server side rendering, not using a cdn or some form of caching. Not sure this is a good test of anything other than how to fail at implementing SPAs.

This person doesn't know a thing about Angular and he is benchmarking it?! Angular does not load the whole of RxJS, and I doubt AOT was set up properly.

This is a misleading paper again. I would like 3 experts on each framework to do their best to build the same app; then we could compare.

This isn't a research paper.

> in exchange for paying this cost, you gain maintainability, developer efficiency, and (hopefully) better performance during runtime.

You also gain responsiveness to user interactions, a programmatic way to reason about your application/website & and some animation-sugar if you want. Don't throw the baby out with the bath water simply because people follow trends instead of good design practices.

And in my case, less Java I have to read or write on the back end :-)

I agree, though - it’s much easier to reason about the data integrity and security when there is a smaller code base on the back end, by virtue of splitting user interface concerns off to a front end code base.

I’ve been programming since the 80s, but I’d rather write FP code in JS than OOP code in Java - so long as I’m not doing anything to make the app work badly for people. So I’ve gravitated towards front end work the last 3 years or so.

Another rant for another time...

> your React application will never load faster than about 1.1 seconds on an average phone in India, no matter how much you optimize it

Weren't your assets deployed on Netlify's CDN? Shouldn't that make location roughly irrelevant (assuming enough PoPs)?

He's referring to the typical phone performance (parse and execute) and network bandwidth for an Indian user, not the network latency.
