Picking on Amazon: I generally don't even bother going to their site unless I'm on my desktop, where I can physically watch them frequently spike my 4+ GHz CPU for a few seconds when I type in the search bar or click a page. Doing the same on my phone or cheap tablet frequently results in 10+ second waits. The other day I was typing on my tablet and the Amazon search completion was literally taking 20+ seconds to display each _character_ and a completion list for it.
At two places I worked, I intercepted someone trying to throw out old computers and set up a testing station at an empty cube or on an endcap table, usually with older versions of browsers.
They didn’t get used every day but they got used most weeks for sure.
I noticed this "problem" when SSDs came out as well, when it came to physically noticing slow queries while developing. Something that used to be perceivable because it took a second to return results suddenly wasn't perceivable because it was taking 0.2 ms, but the problems and inefficiencies were still there, masked by the faster speeds of the drives.
Back in the mid-to-late 1990s I used a 16 MHz, 24 MB Unix workstation to do all my encryption development for just this reason.
An unexpected benefit: I can't count the number of times I was able to shut down FUD arguments about how badly adding transport encryption or authentication would affect performance, simply by offering to demo it on my workstation.
You don't need the latest iPhone for sites to work. Sites often implement their mobile version poorly if there's an app equivalent; Amazon is one of those. Their site is littered with tracking and mountains of compatibility code, like any other site trying to make a profit.
You are probably not part of the 99% of users that Amazon is targeting.
As a workaround, when I want to search for something on Amazon, I now just google "amazon [what I want]".
And it's weird, because the landing page tends to be fairly fast; but if you happen to be signed in and start clicking around, you can see it load the page and then spend another few seconds loading/running a bunch of crap in the background. Once it's done that a couple of times, it just seems to get progressively slower. Although I suspect a big part of the problem is Firefox...
As far as their target audience, you might be right WRT their target users tending to be more affluent and having high-end phones/etc. But even then, at this point, what are the retail alternatives with such a wide range of products, particularly if one does enough online shopping to have a Prime membership and also use it as a Netflix substitute?
Also I agree with you. It's a bit unfair to be comparing a barebones Angular app to a barebones React/Vue app (even with a router library for each of those). Angular also has a forms and validation library already installed, and its dependency injection framework, etc. The baseline React/Vue apps don't have those pre-installed, which you would typically install when building an actual app.
While we're on the subject, does anyone know a good forms and validation library for React? :-)
It comes with an inbuilt validation mechanism in the form of a function with errors piped through to fields etc., if that's what you meant? The actual validations you have to write yourself, though. I typically do this by hand/regex rather than use a library for that on the client, to be honest, as it's usually pretty simple.
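Roughly the shape I mean, as a minimal sketch of a record-level validate function (redux-form style: it receives all form values and pipes the returned errors object to the matching fields; the field names here are hypothetical):

    interface Values {
      email?: string;
      age?: string;
    }

    // Receives every field value; returns errors keyed by field name.
    // redux-form surfaces these on each Field as meta.error.
    const validate = (values: Values) => {
      const errors: { [K in keyof Values]?: string } = {};
      if (!values.email) {
        errors.email = 'Required';
      } else if (!/^[^\s@]+@[^\s@]+$/.test(values.email)) {
        errors.email = 'Invalid email address';
      }
      if (values.age && Number.isNaN(Number(values.age))) {
        errors.age = 'Must be a number';
      }
      return errors;
    };

    // Wired up via: reduxForm({ form: 'signup', validate })(SignupForm)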
I actually just wrote a new Redux FAQ entry yesterday on "should I keep form state in Redux?"
What I do love is the use of redux dev tooling -- for literally every action on my form, I have a log of when it occurred and how exactly it affected the state of the form. This is simply awesome. Yes, there are probably ways to do the same thing with a non-redux implementation, but if I'm already using redux I get this for free in the same tooling I already know and love.
For what it's worth, I have in the past (before understanding the tradeoffs of where Redux state vs. component state makes more sense, while trying to do everything completely 'purely' in Redux state) foolishly tried to keep all inputs' and other UI components' state in the Redux store, in a completely hand-written, element-ID-based mess. I totally agree with the sentiment that in the general case it's not necessary or worth the effort to manage one-off component state in Redux.
However in this specific case, redux-form encapsulates and manages all of that complexity and you don't have to think about it at all. Actually ironically other than the redux dev tools part I don't really care about the 'in redux' part. I just like the library and its API.
I do not know how aggressive the Angular CLI settings for webpack are, but with our own config we remove any noise really effectively.
A monolithic SPA approach turned into too much work when you'd have to throw out a bunch of code one day, replace it the next day with something just similar enough to cause a bunch of refactoring and bugs, etc.
In that context, it's easier to treat each page or major component as its own mini-SPA, then glue them together with a bare minimum of code.
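That glue can be tiny — a minimal sketch, where the per-feature mount functions are hypothetical entry points that each boot their own small bundle:

    // Mount each mini-app only if its root element exists on this page.
    function mountIfPresent(
      selector: string,
      mount: (el: Element) => void
    ): void {
      const el = document.querySelector(selector);
      if (el) mount(el);
    }

    // Hypothetical per-feature entry points, each its own mini-SPA.
    declare function mountSearch(el: Element): void;
    declare function mountCheckout(el: Element): void;

    mountIfPresent('#search-root', mountSearch);
    mountIfPresent('#checkout-root', mountCheckout);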
For the same general reason, and to echo other comments, I recently chose Angular over React (or Vue) simply because Angular has a lot of built-in functionality that helps prevent a downward spiral of dependency maintenance from third party libraries and plugins, even if it's a little slower overall on many benchmarks.
That's for highly interactive pages of course, for simple pages that just need some regex validation or something, no framework is the best framework.
I had one project where we built a very beautiful, modern site that had about 50 lines of js in total. Of course your page-change times are going to be around 0.5s with an MPA, but SPAs are often slower than that anyway.
Note you can still use an MPA and have a dynamic UI using React/Knockout/whatever. It is just usually not done that way and most go full SPA.
That is, a developer who knows the same framework your app uses only has to learn your specific application rather than all the background libraries/etc. you're using. It's the same reason for preferring a C programmer for a C position over a Ruby programmer (or flip it). Sure, the Ruby programmer might be awesome and, after a couple of years, more productive, but in the meantime they will be making a bunch of newbie C mistakes and generally bootstrapping more slowly.
This is part of the reason I'm generally in favor of reducing the number of languages/frameworks in general use. It's better for everyone if we all agree on some simple baselines, even if they may not be optimal for any given problem. The efficiency of repeatedly avoiding a bunch of rookie mistakes (and then having to debug/fix/test them for the next few years) is well worth it in the long run.
Some sites explicitly don't want that, unfortunately.
Also, a lot of ad content is "rich": animated, scripted, etc.
In the 1990s and early 2000s, ads were mostly just images ("banners") served by an ad network, or even text links, without any scripting.
Mobile + desktop support is basically a hard requirement. A tablet-optimized version is nice to have.
This gets very hard to do well with just CSS.
Maybe I've been led astray, but from what I gathered about a year ago, media queries cannot distinguish between desktop and phone; you need JS to do this.
Please tell me I'm wrong about that (at least, if you give me a hint on how to do it).
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
I like to use Bootstrap so I'm not bothered with media queries myself. But I only use small parts of Bootstrap, and I opt in/out of specific parts by adding/removing includes of specific modules. (You can do that with the LESS or SASS version of the package: copy the generic "imports" file and comment out everything you don't want/need.)
Bootstrap includes many sane defaults and some fixes for weird browsers. By choosing what to include and what not to include, I can choose whether I want bootstrap's opinionated styles or my own, and I can switch later (start out with bootstrap forms, customise them later & finally remove bootstrap form styles)
The alternative of serving separate style sheets makes it easier to develop, but much harder to test, even with the responsive-design buttons in FF/Chrome. That is probably also why I haven't recently seen many web apps that try to do everything with a single set of CSS files.
This was once true, but not anymore. Grid and flex coupled with media queries solve this problem almost perfectly.
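And on the desktop-vs-phone question above: interaction media features (pointer/hover, from Media Queries Level 4) get you most of the way without user-agent sniffing, usable both in CSS (@media (pointer: coarse) { ... }) and from JS — a minimal sketch:

    // A coarse pointer with no true hover is almost always a phone/tablet.
    const touchFirst =
      window.matchMedia('(pointer: coarse)').matches &&
      !window.matchMedia('(hover: hover)').matches;

    document.body.classList.toggle('touch-first', touchFirst);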
* No images if you block cookies or JS
* Paywalled: asks for a $5 membership every other day, requiring you to delete cookies to read articles
* Probably tracks the living shit out of visitors
* Somehow needs a ton of JS to render a blog article
* Frequent divs in the middle of the screen asking you to create an account
And they don't even produce content. It's all created by users.
I see business opportunity here.
Genuinely curious: do you see one without the annoying aspects you described? How would you monetize it?
> How would you monetize it
I wouldn't. Medium solves a non-problem. It only exists because VC funds have this tendency of pouring money into anything that resembles a business.
However, Medium isn't too big to fail, and I suspect its grace period is long past; business decisions are now mostly driven by shareholders seeking a favorable exit strategy.
But if I really had to come up with a way to monetize Medium, I'd take a look at Wikipedia's business model.
So my suggestion stands: There are lessons to be learned by looking at how Wikipedia manages without ads.
I wonder how Angular would compare once that is done, as the initial load has long been known as probably the main negative for Angular performance-wise (though it should be faster than much of the rest once it does load).
Note: I used to work primarily with Angular, but I currently use React, since the coding style React encourages works better with the team I'm on, and React is fast enough for us.
That contradicts every published test.
It is also hard to imagine. Angular is both reading and writing the DOM.
And I haven't touched it since they switched to a new version, but in the old version it noticed state changes by continuously comparing all of your state (so more state meant it would be slower).
I'm assuming they switched to a more native implementation of the observable pattern in the newer version, rather than that pathetic hack.
Of course, neither React nor Angular is as fast as hand-written code, since both take responsibility away from the developer. In the case of React: don't worry about what you need to update, just re-render the whole thing again. Simpler code, and the performance price you pay for it is somewhat limited because it then diffs a virtual DOM.
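That model in miniature (a minimal sketch): the component function re-runs on every state change and describes the whole tree; React diffs the virtual DOM and patches only what actually changed.

    import React, { useState } from 'react';

    function Counter() {
      const [count, setCount] = useState(0);
      // We "re-render everything" declaratively; React's diff then
      // updates just the one text node that changed in the real DOM.
      return (
        <button onClick={() => setCount(count + 1)}>
          Clicked {count} times
        </button>
      );
    }

    export default Counter;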
Angular's approach of annotating HTML, however, simply reinforces what we learned later: that it was written by someone with absolutely no experience as a front-end developer, because performance-wise it's about the dumbest thing you could do. Every time you switch from writing to the DOM to reading the DOM, you force the browser to do another layout & paint, and all of that happens synchronously, instead of using the concurrent model that modern browsers actually offer.
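For anyone unfamiliar, that's the classic "layout thrashing" pattern — a sketch of the difference (the .row selector is hypothetical):

    const rows = Array.from(document.querySelectorAll<HTMLElement>('.row'));

    // Bad: alternating write -> read forces a synchronous layout
    // ("reflow") on every single iteration.
    for (const el of rows) {
      el.style.height = '40px';      // write: invalidates layout
      console.log(el.offsetWidth);   // read: forces layout right now
    }

    // Better: batch all reads, then all writes -> one layout pass total.
    const widths = rows.map((el) => el.offsetWidth);   // reads
    rows.forEach((el, i) => {                          // writes
      el.style.height = `${widths[i] > 600 ? 48 : 40}px`;
    });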
What has happened is that these frameworks have become the new platforms. People don't realize that everything from online chess games to Google Maps isn't written using these frameworks, and it's questionable whether any of them could do that with good enough performance.
At our company, when recruiting, we now distinguish actual front-end developers from Angular or React developers. They may know these frameworks, but that does not imply they could write them, or that they understand enough of front-end development in general to debug actual cross-browser issues, performance issues, or rendering issues. And we give them a test that requires building something complicated in pure HTML/CSS/JS.
Are you confusing AngularJS and current Angular? AngularJS is certainly wholly slower than React, but not Angular 2+.
Or watching this:
It seems to me that the next logical step is a site that ships the initial page request in vanilla JS, then async loads react and then async fetches the overhead and then runs Gatsby. This would be the ideal if we're optimizing for perceived latency. So the first page load is fast, then as long as you stay on that page for a second or two before clicking elsewhere, everything after that point will be snappy too.
Of course, many other devs are optimizing for network usage, etc., so to each their own.
But what about prefetching?
I don't know about React and Vue, but that's what Angular Universal does, and page load is indeed very fast then.
Or just vanilla HTML. (That initial "vanilla JS" needs to get evaluated in some page context anyway...)
A more likely path is that React enables aggressive and progressive loading of component code, so the initial bundle is smaller.
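Something along those lines already exists with React.lazy/Suspense (React 16.6+) — a sketch, where HeavyChart is a hypothetical component split into its own chunk:

    import React, { Suspense, lazy } from 'react';

    // The import() call becomes a separate bundle, fetched on first render.
    const HeavyChart = lazy(() => import('./HeavyChart'));

    export function Dashboard() {
      return (
        <Suspense fallback={<p>Loading chart…</p>}>
          <HeavyChart />
        </Suspense>
      );
    }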
https://mikemaccana.com — probably has a lot of small bugs, since I spent my time working on more important things.
OK. We have different experiences then.
What about caching? In that 1.1s figure, 0.9s is for the download of React.
Is it reasonable to assume that for most subsequent loads the website would load in 0.2s?
Check Eric Meyer's complaints.
Eric Meyer's "HTTPS busts caching" complaint refers not to HTTP or web-worker caching, but to man-in-the-middle local servers caching responses: a proxy sits between the internet and your local box and serves cached requests. This breaks with HTTPS, which is designed to defeat exactly that kind of man-in-the-middle.
It's also a very good way to waste traffic quota, e.g. a school's network connection vs. its students' computers.
It would be cool to able to build dual-distribution applications, where publicly available assets and content blobs (e.g. app.js) could be distributed in an immutable, cache-able manner, and server interactions and private assets could still be handled in a more traditional client/server way.
Perhaps then we might even be able to realize the original intent of including dependencies from a CDN, being able to share a cached version of React across websites.
(I'm not an IPFS advocate or detractor.)
Wouldn't that number go back to 1.1s every time you deploy a JS change? And isn't it good to deploy often?
So the load time is still short.
Here's Addy Osmani explaining how it works: https://medium.com/dev-channel/a-netflix-web-performance-cas...
Worked great for everybody not using a password manager...
I can send completely readable html over the wire that becomes interactive once the JS finishes loading.
What am I missing?
I realize I must look stupid saying this, because when you know your tech stack inside out the setup might be 15 minutes instead of days. But a large proportion of developers only learn to do things when they have to, and so you learn server side rendering the first time you have a customer willing to pay for the time it takes you to learn.
I also assume the ease of implementing SSR is highly dependent on the backend tech stack. My guess is that only Node.js makes it relatively easy.
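With Node it really is fairly mechanical — a minimal sketch (Express + ReactDOMServer; App is a hypothetical root component):

    import express from 'express';
    import React from 'react';
    import { renderToString } from 'react-dom/server';
    import App from './App';

    const server = express();

    server.get('*', (_req, res) => {
      // Render the same component tree to an HTML string on the server...
      const html = renderToString(<App />);
      // ...then let the client bundle hydrate it once the JS arrives.
      res.send(`<!doctype html>
    <html>
      <body>
        <div id="root">${html}</div>
        <script src="/bundle.js"></script>
      </body>
    </html>`);
    });

    server.listen(3000);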
I think that if you're not very serious about front end then ignoring SSR is fine, but if UX or increasing conversions is a concern of yours then it's vital.
I was taken aback by how many times I had to argue with my team, our sister teams, our consultants/contractors, and even our (non-technical) leadership about NOT building a SPA when we came across 1 form in our large codebase that required some extra interactivity.
For some reason many developers seem to be fine turning a blind eye to the actual page-rendering time, maybe because it's hard to benchmark with tools? (although I'm skeptical of that). I find an overlap however in that those same developers seem to not understand why multiple database query roundtrips are worse than 1 query followed by some in-memory munging.
I'm sure I'm biased by my background, where we were profiling like mad, doing crazy things like purposely modifying images/hand-building sprites to pack better, minifying CSS before SASS/LESS existed, and every millisecond to finish rendering mattered. Good times.
If the user just visits a few pages each time, you probably don't need a SPA.
However, if your website has hundreds of pages and users need to navigate around a lot, a SPA would actually reduce load time by cutting down the amount of data transferred over the network every time a user goes to a different page. It also makes navigating between pages feel smoother.
How do you version urls like those?
Something like `/page1?t=201811292359`? That seems ugly to the users.
- content under a certain URL should never change
- if it has to change, then use ESI and/or shorter cache expiration times (a common pattern for never-changing URLs is sketched below)
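The usual way to get "never changes" URLs for assets is to put a content hash in the filename and cache it forever — a sketch of a webpack config doing that ([contenthash] is webpack's substitution token; the paths are hypothetical):

    import type { Configuration } from 'webpack';

    const config: Configuration = {
      entry: './src/index.ts',
      output: {
        // e.g. main.3a94c1e0.js -- the name changes only when the
        // content does, so it can be served with a far-future cache.
        filename: '[name].[contenthash].js',
        publicPath: '/assets/',
      },
    };

    export default config;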
It's rare to find developers who've deeply specialized in multiple different domains.
My ecommerce site is built on a custom stack, not Shopify, making it orders of magnitude faster. I got it down to 600 kB with 27 network requests total (and it is heavily media/image based). Now find me any other ecommerce site that can boast a comparable network payload.
Unfortunately, after throwing in Google Analytics + FB ad trackers + a chat widget + a YouTube video, the network requests have ballooned to 100+. It's very ironic how these companies promote the development of a performance-driven web while the tools they want us to embed do quite the contrary. They do, however, offer a lot of value. What to do?
 — www.getfractals.com
But do you actually need all of this? Don’t get me wrong, this is still way ahead compared to the garbage the usual sites load, but I think you can do better.
For analytics, is there a way you can only include it for a certain % of requests? This should work fine with analytics and still give you a good overview.
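A sketch of doing that sampling client-side (loadAnalytics is a hypothetical helper that injects the vendor's script tag; persist the coin flip so a session is consistently in or out):

    declare function loadAnalytics(): void;

    const SAMPLE_RATE = 0.1; // track ~10% of sessions, scale up in reports

    let sampled = sessionStorage.getItem('analytics-sampled');
    if (sampled === null) {
      sampled = Math.random() < SAMPLE_RATE ? '1' : '0';
      sessionStorage.setItem('analytics-sampled', sampled);
    }
    if (sampled === '1') loadAnalytics();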
The chat widget can be put behind a link on its own page.
For YouTube, consider having a static thumbnail that links to the video instead of the embed code, or see if another provider offers a lighter embed (Vimeo?).
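The static-thumbnail trick is cheap to hand-roll — a sketch (YouTube serves thumbnails from i.ytimg.com; the heavy iframe is only injected on click):

    function liteYouTube(container: HTMLElement, videoId: string): void {
      const img = document.createElement('img');
      img.src = `https://i.ytimg.com/vi/${videoId}/hqdefault.jpg`;
      img.alt = 'Play video';
      img.style.cursor = 'pointer';
      img.addEventListener('click', () => {
        // Swap in the embed only after the user asks for it.
        const iframe = document.createElement('iframe');
        iframe.src = `https://www.youtube.com/embed/${videoId}?autoplay=1`;
        iframe.allow = 'autoplay; encrypted-media';
        iframe.width = '560';
        iframe.height = '315';
        container.replaceChild(iframe, img);
      });
      container.appendChild(img);
    }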
CMSs like Shopify offer great value and are a great starting place, even for technical folks. However, I think the lack of priority they've placed on the performance of sites on their platform has been a net negative for e-commerce at large. The mobile experience of most Shopify stores is usually atrocious, and the inclination to bounce before the page loads is all too real.
I understand they are working with an old stack, and changing course may be nigh impossible at this point. However, I built my stack in 2014, and the dividends of doing something custom vs the status quo have more than paid off.
When transitioning between a different number of properties (e.g. for shadows or gradients) or different types (inset vs. normal shadow), this trick is really useful. Here's a fiddle demonstrating it for box-shadow:
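The gist of the trick: pad both states with transparent shadows so each keyframe has the same number of layers, letting them interpolate pair-wise. A rough sketch via the Web Animations API (plain CSS transitions interpolate the same way; the .card selector is hypothetical):

    const card = document.querySelector<HTMLElement>('.card');

    card?.animate(
      [
        // A transparent first shadow pads the "one shadow" state to two,
        // so it can interpolate against the two-shadow end state.
        { boxShadow: '0 0 0 rgba(0,0,0,0), 0 1px 2px rgba(0,0,0,0.3)' },
        { boxShadow: '0 8px 16px rgba(0,0,0,0.2), 0 1px 2px rgba(0,0,0,0.3)' },
      ],
      { duration: 300, fill: 'forwards' }
    );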
Mithril is the small framework that most directly addresses the issues brought up in this article. It has routing built in and comes with a small streams library that, for apps I work on, is all the state management I need.
For those who really love JSX, Inferno is a really clean, small React-like framework. Its advantage over Mithril is that JSX expressions evaluate to components, not vnodes, so your HOC code will be slightly cleaner.
If you care about load time you should pick a framework because of the value it brings, rather than what it costs. That one second overhead might make your application code twice as efficient.
For example, Next.js and React will give you route-based bundle splitting and navigation preloading for free. Yes, a vanilla JS app could implement the same, but it requires a lot more boilerplate to achieve. So in practice most devs are likely to skip that part, and their lightweight no-framework app becomes a bloated mess.
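For instance (a sketch against a recent Next.js): each file under pages/ becomes its own route-level chunk, and in production builds <Link> prefetches the target chunk as the link scrolls into view.

    // pages/index.tsx (hypothetical page)
    import Link from 'next/link';

    export default function Home() {
      return (
        <nav>
          {/* /about's bundle is split out and preloaded automatically */}
          <Link href="/about">About</Link>
        </nav>
      );
    }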
Definitely worth it for many modern-day usages.
People keep saying that...
All of that is completely subjective except for that final bit about runtime performance. I bet nobody can prove runtime performance improves in correlation with layers of abstraction.
And you can have the verification for free also.
> We introduce a new approach for implementing cryptographic arithmetic in short high-level code with machine-checked proofs of functional correctness. We further demonstrate that simple partial evaluation is sufficient to transform into the fastest-known C code, breaking the decades-old pattern that the only fast implementations are those whose instruction-level steps were written out by hand.
Frameworks don't exist to make code faster. They exist to provide an abstraction layer.
The best example?
https://www.react.rocks is a site built on React to advocate that people use React.
This is how it looks without JS.
A site built on React to advocate React doesn't serve its primary purpose. The irony.
You think the 0.001% of people who browse with JS disabled are the primary target for such a website?
Because you obviously grabbed this number out of thin air, I would like the HN community to know that the actual percentage of users without JS is much closer to 1%.
I think the parent comment's exaggeration of the number is bad form for web developers, and goes against the spirit of the web, which I believe is based on the principles of universal access and inclusiveness.
That post also implies that the figure is stable.
Your second paragraph is a perfect example of the kind of attitude that I think is problematic. Yes it's more work to accommodate non-JS users. But so is building ramps for people in wheelchairs. The discrimination gets "baked in" to your/our development processes, and you/we justify the discrimination based on the idea that the only ones who will suffer from it are a "small minority".
It's like assuming that everyone uses a mouse. Clearly some don't, and if we develop the web based on that assumption, we're going to leave a lot of folks in the lurch. We should hold ourselves to a higher standard.
The link I provided is from Google's cache, so it isn't only about visitors with no JS: that's exactly the way Google sees the page. So what's the use of your "my-awesome-hipster-powered-react" site if it can't even get indexed by search engines and solve its primary purpose of displaying content?
Whether the server renders the data into the HTML it responds with, or some XHR request is made by a client library (including VanillaJS) or framework, you're always incurring some cost. But data-on-load is what users generally want and expect from modern web apps.
Having recently built an almost 100% HTML-based web app, I can tell you the user interactions feel unpleasant. Removing JS from web apps is silly at this point.
One key point of a PWA is that it works offline and loads faster after the first load (even after you deploy new changes, with basic service-worker configs from sw-precache). So you need the 1 second for the first load only; subsequent loads are much faster.
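A hand-rolled sketch of what those sw-precache configs boil down to (asset names are hypothetical): cache a fixed list on install, then answer fetches cache-first.

    const CACHE = 'app-v1';
    const ASSETS = ['/', '/bundle.js', '/styles.css'];

    self.addEventListener('install', (event: any) => {
      // Precache the app shell once, at install time.
      event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
    });

    self.addEventListener('fetch', (event: any) => {
      // Serve from the cache when possible; fall back to the network.
      event.respondWith(
        caches.match(event.request).then((hit) => hit ?? fetch(event.request))
      );
    });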
React vs Vue can be a valid analysis, same as Angular vs Ember. It's silly to benchmark React/Vue vs Angular/Ember.
Let's create a benchmark of ponies vs. zebras/horses!
I'm always amazed at how people call themselves "engineers" while being unable to grasp such simple, basic nuances.
They measured scripting time directly, which is a much better measurement than uncompressed bytes.
> Two, it makes the libraries seem much smaller than they actually are
Would you prefer they gave unminified sizes as well? The gzipped size of a library lets us predict how long it will take to download, and how much that will cost for users.
You can think of gzipped size as a rough measure of the actual information in a file, as opposed to file size, which is largely meaningless in a networked environment.
Assume you have an app, rather than a site.
Your app has a login.
Your password is not sent in clear text; your server uses HTTPS for all its traffic.
You probably don't want to use compression, then (think CRIME/BREACH-style length side channels, where compressing secrets alongside attacker-influenced content leaks them).
Another alternative to large client-side libraries I've been looking at lately is Phoenix's LiveView, a server-side rendering library that uses WebSockets. I've only just started using LiveView, but so far my pages seem to perform better than React or Angular ones.
The correct way to frame this is: use a slow/clunky/large/etc framework vs. build everything from scratch (which comes with its own costs).
Sure, you can optimize parts of your application to speed up the JS portion, or even remove it completely, but it's not always as simple as "your framework is making your application slow, so you should think about ditching it."
This article actually ends with a very reasonable conclusion in the section "Are Frameworks Evil?", but I've seen plenty of articles where the author doesn't offer an alternative to some library/framework.
We pulled everything in-house onto our own Cloudfront CDN and saved a bunch of time on DNS/TLS negotiation, as each unique third-party request is extremely expensive over middling 3G, on the order of nearly 1 second apiece according to WebPageTest. Leveraging 'preconnect' with that CDN saved just a bit more time on top of that.
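For reference, the preconnect hint is just a head element pointing at an origin you'll hit soon — a sketch of adding it dynamically (the CDN host is hypothetical):

    // Equivalent to <link rel="preconnect" href="..."> in the document head:
    // the browser does DNS + TCP + TLS ahead of the first real request.
    const hint = document.createElement('link');
    hint.rel = 'preconnect';
    hint.href = 'https://cdn.example.com';
    document.head.appendChild(hint);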
It would be interesting to see some benchmarks of a cached CDN Angular file and non-cached template/logic file, however lots of other sites would have to jump on board to get the cached file into users' browsers reliably.
This is a misleading paper, again. I would like three experts on each framework to do their best to build the same app; then we could compare.
You also gain responsiveness to user interactions, a programmatic way to reason about your application/website, and some animation sugar if you want. Don't throw the baby out with the bathwater simply because people follow trends instead of good design practices.
I agree, though - it’s much easier to reason about the data integrity and security when there is a smaller code base on the back end, by virtue of splitting user interface concerns off to a front end code base.
I’ve been programming since the 80s, but I’d rather write FP code in JS than OOP code in Java - so long as I’m not doing anything to make the app work badly for people. So I’ve gravitated towards front end work the last 3 years or so.
Another rant for another time...
Weren't your assets deployed on Netlify's CDN? Shouldn't that make location roughly irrelevant (assuming enough PoPs)?