It's probably in Google's interest to limit web bloat that degrades UX. AMP might be one strategy (with plenty of negative aspects many here will be familiar with); leveraging Chrome's browser share to impose limits might be another.
The two are related. Work on Never Slow Mode stopped last fall. It picked up again after the WebKit/Safari discussions last week. I doubt Chrome will ship a "never slow mode", but I could see looser budgets, à la the WebKit proposal.
Hi, author of the NSM patch. It's still very much under development, but most of the important considerations aren't technical; making progress on a system like this is more about how to roll things out than about implementation.
It's great to see WebKit folks thinking along the same lines, and I hope to be able to discuss it with them. Coalitions -- like the Mozilla/Chrome work on TLS adoption -- are critical to making progress in large ecosystems.
Is there a preliminary plan for how Never-Slow Mode would be deployed? I assume the idea is to eventually have it enabled by default on all mobile devices and to prompt laptop/desktop users to enable it if they have a slow connection?
This approach to web performance smacks of someone who has never had to build an app under real world conditions. "It will be smaller if you rewrite your app to be AMP-first. No, scrap that, the PRPL pattern is in, we'll hold the PWA rewrite for next week" are not viable solutions for engineers at companies that aren't Google.
Try telling your marketing department that they can't integrate third party scripts through Google Tag Manager, or a VP that the SDK for that third party service they championed is too large and the project will have to be shelved. Or put a pause on a critical project because the application is approaching the hardcoded JS limit and another department shipped before you did.
Hardcoded limits are the first tool most people reach for, but they fall apart completely when you have multiple teams working on a product, and when real world deadlines kick in. It's like the corporate IT approach to solving problems — people can't break things if you lock everything down. But you will make them miserable and stop them doing their job.
Firstly: Too bad. The dysfunctional practices of corporate software development don't override people's right not to have their own computer's performance trashed by shitty websites. It's their right to run whatever browser they like, including one that doesn't allow slow, bloated, poorly engineered software to run, no matter how hard that makes life for the marketing departments of the world.
Secondly: For decades, computers operated with extreme limits on their memory and processing capacity relative to today's machines. Despite this, people managed to write software, even at companies with marketing departments and idiot VPs. It might take these people a while to accept that they can no longer insist on pushing whatever garbage they want just because the cost is borne entirely by the end user, but eventually they will. Or they won't, and these dysfunctional companies will die out. Either way, users win.
That’s transfer (compressed with gzip or whatever) size. So it's a bit more than it sounds, but yeah - still tiny.
If this is the future, I know a lot of web apps that will need retooling. That is just enough space for React (+ react-dom). We'll have to split up React and the apps themselves if Chrome goes this route, although I suppose that's marginally better for caching performance.
The restrictions are wire size. `bootstrap.min.js` is ~10KiB on the wire, which means it comfortably fits (as do jQuery, most analytics packages, etc.).
That said, the prototype does break a lot of the web, but that's not a crisis. The intent isn't to have this rolled out everywhere against unwitting content, but rather (like TLS), to let developers opt-in to a single set of rules when they see value. There are also places (e.g. PWAs) where the browser-imposed quality bar needs to be high. Blocking PWA install for sites that don't send the opt-in header seems like a reasonable place to start.
Moment.js is 16.7k. Moment.js plus locale data for every single locale in the world is 68k. There is no justifiable reason to ever load every single locale in the world at once except for a few extremely niche cases. Real world use cases typically need one locale at a time, occasionally two or three, never hundreds.
The whole reason I use a date formatting library is so that I can localize dates into whatever locale the user is on. I can't just assume one locale and only load that.
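For what it's worth, with moment you can keep the full locale data out of the main bundle and pull in just the locale the user needs. A minimal sketch, assuming a bundler that supports dynamic import() (webpack turns the templated path into a set of per-locale chunks); the function name is made up:

```js
// Load moment's core up front, but fetch only the locale the user needs.
import moment from 'moment';

async function formatForUser(date, userLocale) {
  if (userLocale !== 'en') {
    // e.g. 'moment/locale/fr': one locale file over the wire, not all of them.
    await import(`moment/locale/${userLocale}`);
  }
  return moment(date).locale(userLocale).format('LL');
}
```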
"Caps do not apply to workers". I have a limited understanding of them but I understand that if your scripts are fat then you have something closer to an application than a document, and it would be better served with service workers that can run in the background and keep most of your application structure in place ?
Workers work pretty differently from regular scripts. The way a worker is loaded, how it imports other scripts, and how it communicates with your main thread (the window) are almost a standard of their own; it just happens to use JavaScript as its language.
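For anyone unfamiliar, the model looks roughly like this (a minimal sketch; `heavy-lifting.js` and its dependency are hypothetical names):

```js
// main.js: spawn a dedicated worker and talk to it only via messages.
const worker = new Worker('heavy-lifting.js');
worker.postMessage({ cmd: 'sum', payload: [1, 2, 3] });
worker.onmessage = (event) => {
  console.log('result from worker:', event.data);
};
```

```js
// heavy-lifting.js: runs off the main thread and never touches the DOM.
// importScripts() is how a classic worker pulls in its own dependencies.
importScripts('math-helpers.js'); // hypothetical dependency

onmessage = (event) => {
  const result = event.data.payload.reduce((a, b) => a + b, 0);
  postMessage(result);
};
```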
Which is at odds with what popular setups like create-react-app and the like produce. It doesn't make much sense for heavier applications like interactive editors to be artificially split into multiple files just to conform to this limit.
> Instead of downloading the entire app before users can use it, code splitting allows you to split your code into small chunks which you can then load on demand.
You need to make a business case that your whole application is modularly composable.
Some applications might fit under this umbrella, but there's other stuff that doesn't, like a PDF renderer, or something like Google Slides, where (as I just checked) the core editor JS is bundled into a single file of about 1.4MB.
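For the parts of an app that do compose, the splitting the quoted docs describe boils down to a dynamic import(); a minimal sketch, where './pdf-editor' and its export are hypothetical:

```js
// The editor module lives in its own chunk and is only fetched the
// first time the user actually opens it.
async function openEditor(container, doc) {
  const { renderEditor } = await import('./pdf-editor'); // separate chunk on the wire
  renderEditor(container, doc);
}
```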
The main reason you pack scripts is that you're compiling, and your compiler is checking and optimizing your cross-file interactions. For example, you need to merge scripts to detect and delete unused functions; otherwise some unmerged script might call a function without the compiler being able to tell.
It seems like it would be trivial for your compiler to go ahead and split the files up again afterwards. It's not like you cared about file organization on the client side anyway if you were packing them, so it's free to split however it feels.
And of course you'd naturally treat this as just another artifact of optimization, the same way single-file packing was. Which it is, given HTTP/2 and HTTP/3.
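Bundlers can already be nudged in this direction; a sketch with webpack, where the `maxSize` value is just an illustrative target (it's a hint in pre-compression bytes, not a hard guarantee):

```js
// webpack.config.js: ask webpack to re-split merged chunks back under a size target.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      maxSize: 150 * 1024, // aim for chunks that gzip to well under 50 KiB
    },
  },
};
```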
Following up my own comment... looks like current Gmail will have trouble. I loaded it in Chrome with mobile emulation set to "Pixel 2", and there's one JS file that not only blows the per-script max size, but blows the whole "Total Script Budget" by itself, not counting the other 78 JS files Gmail wants to download. Stopped looking there.
(Per-script max size: 50KiB,
Total script budget: 500KiB)
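If anyone wants to check their own site, something along these lines in the DevTools console gives a rough picture (a sketch; note that cross-origin scripts report a transferSize of 0 unless they send Timing-Allow-Origin):

```js
// Rough audit against the prototype's budgets, using the Resource Timing API.
// transferSize is the compressed, on-the-wire size in bytes.
const scripts = performance
  .getEntriesByType('resource')
  .filter((entry) => entry.initiatorType === 'script');

const totalKiB = scripts.reduce((sum, s) => sum + s.transferSize, 0) / 1024;
const overCap = scripts.filter((s) => s.transferSize > 50 * 1024);

console.log(`total script transfer: ${totalKiB.toFixed(1)} KiB (budget: 500 KiB)`);
console.log('scripts over the 50 KiB per-file cap:', overCap.map((s) => s.name));
```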
I don’t think so - the owner of this issue is the tech lead for Progressive Web Apps at Google, according to his LinkedIn. He probably envisions a future where Gmail can run as a web app on a mobile phone with zero lag. Combined with previous additions to the web APIs, such as notifications, Gmail the web app could reach parity with Gmail the native app.
For anything I use regularly, I'll choose a native app over a web site or web app every time because they typically consume less battery, cpu, memory, and bandwidth. They are usually also faster and a well written app is always going to feel better than one running in a web browser.
It’s clearly my personal preference, but it's also my experience that fewer and fewer people in my social circles want “yet another app” when there’s already a website that works fine. Games are the only exception.
And I work with software. None of my colleagues wants to work with or test anything not strictly web-based.
> None of my colleagues wants to work with or test anything not strictly web-based.
If that's true, I'd be willing to bet there's some extreme self-selection going on. Which is fine, but I'd be very careful about generalizing your experience outside of your immediate social circle.
That's really cool. The only thing missing is the total number of DOM elements on a page. I've seen production sites with literally thousands, and it slows most browsers to a crawl (and balloons memory).
It's more than memory use - a big tree makes traversals take longer, which means unbounded style recalculations (common) and side-effecting DOM changes get slower too.
Thousands isn’t really a whole lot. If you have, say, a table with 10 columns and 50 rows with complex content in the cells (including div elements), you’ll be well into the thousands.
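A quick, rough way to check from the console (a sketch; it counts element nodes only):

```js
// Total number of elements in the document.
console.log('elements:', document.querySelectorAll('*').length);

// Which top-level subtrees contribute the most? Useful for spotting a huge table.
const counts = [...document.body.children]
  .map((el) => [el.tagName + (el.id ? '#' + el.id : ''), el.querySelectorAll('*').length])
  .sort((a, b) => b[1] - a[1]);
console.table(counts.slice(0, 10));
```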
One thing that always surprises me with Chrome (and other browsers) is how easy it is to get a browser tab to freeze by running an infinite loop in JS. This happens to me from time to time due to programming errors, and I usually have to close the entire browser to fix it, as the tab will neither close nor let me reach the developer console to turn the script off. That presumably means the script runs in the same thread as the render/UI code for a given tab? Anyway, I find this quite annoying, and I wonder why browsers behave like this (I’m sure there’s a good technical reason; other people must have hit the same problem, so if it were easily fixable someone would probably have fixed it already).
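It does: a page's JS runs on the same main thread that handles layout, paint scheduling and input for that tab, so a loop that never yields starves all of them. The usual workarounds are moving the work to a worker or breaking it into slices that yield back to the event loop; a minimal sketch of the latter, where `items` and `processItem` stand in for your own work:

```js
// Process a large array in slices so rendering and input stay responsive.
function processInChunks(items, processItem, chunkSize = 500) {
  let i = 0;
  (function doChunk() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) {
      processItem(items[i]);
    }
    if (i < items.length) {
      setTimeout(doChunk, 0); // yield to the event loop before the next slice
    }
  })();
}
```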
A similar thing that surprises me is how malicious ad networks are still able to get your Chrome tabs into states where you can't easily close them without killing the entire Chrome process.
While I'm sure there's a lot of complexity I'm not considering, to avoid messing with legitimate uses of tab-close event handling by real apps, it would be nice if there were some kind of alternate tab-close input (e.g. holding ALT or SHIFT, or both, while clicking the tab's 'X') where the user is signalling "yes, I absolutely want to close this tab, no matter what, kill it with extreme prejudice".
If such a thing exists, which it might, someone please correct me by telling me what it is.
But that's unintuitive. Why can't you click the X button on the tab to close it? That's browser UI, not web page UI, so it should never be unresponsive.
I was responding to the parent comment with a way to kill the tab without quitting the browser, that's all.
I didn't make this feature; I don't have the answer to your question. It was intuitive enough for me to find it on my own without reading any documentation. So, shrug.
There's an open issue[1] with people assigned to it, and even documentation for the new, hobbled API, which is available in both the beta and dev channels[2].
With Chromium still being the better browser in most ways, I wonder what could be done about this. Maybe it's time for a simple patchset that does content blocking natively in the engine. With patches like that floating around, they can't very well claim it's an efficiency issue, as they do in the doc describing Manifest V3.
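For context, the Manifest V3 replacement being referenced is (as far as I can tell) declarativeNetRequest, where blocking is expressed as rules the browser evaluates itself rather than a webRequest handler the extension runs. A rough sketch of what a rule looks like per the current docs; the hostname is made up, and the API surface may still shift in the beta/dev channels:

```js
// Register a dynamic blocking rule; the extension never sees the request,
// which is the efficiency argument made in the Manifest V3 design doc.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1], // replace any previous rule with this id
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: 'block' },
      condition: {
        urlFilter: '||ads.example.com', // hypothetical ad host
        resourceTypes: ['script', 'image'],
      },
    },
  ],
});
```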
Chrome feels way slower than Firefox and the only reason keeping me from making the jump to Firefox is laziness. I'm too lazy to find alternatives for all my extensions, move bookmarks, history, passwords etc, but Firefox feels way faster.
If you're comparing a daily-use browser that includes a lot of extensions with another browser that's empty, the latter will win.
Then you start moving your stuff: you install extensions, and while fighting with habits from the old browser's UX you collect history, cookies and all that stuff.
Then some time passes and you look back at your old browser, now with a cleaned profile, so shiny and fast :)
What extensions do you use that make your browser slower? I only use HTTPS Everywhere, which has no effect, and uBlock Origin, which makes most webpages much faster, so I don't see how this is true.
Don't count on it. If they continue with policies like single-sign-on the privacy-centric crowd will flee, followed by mainstream users. It's happened before and no empire has ever lasted an eternity.
5x reduction? Heck, you'd struggle to find common libraries that fit under the existing limits! jQuery is 88KB (yes, everyone can hate on jQuery all they want, but it's still commonly used), which is already over the 50KB limit! At 10KB...
Modern cameras take pictures measured in megabytes, and you're suggesting the per-image max be 200KB, and total images 400KB?! Two crappy-quality images would blow that out of the water (heck, a single crappy image could).
I get that this is a prototype and seems to be intended for mobile devices, etc. But those limits are already something you'd have to work for; I'd argue it's near impossible with 5x fewer resources.
tl;dr: by today's standards, a 5x reduction in those limits is just code golf.
Author of the patch here. Note that the limits are wire size, not disk size. jQuery, post gzip, is closer to ~30KiB, meaning it fits nicely under the per-file restriction. The total JS limit per-interaction is 500KiB gzipped. Uncompressed, that's often more than 3MiB. That's a whole lotta code!
The per-image limit is currently set at 1MiB (not 200KiB).
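If you want to sanity-check a library's wire size yourself, gzipping it locally gets close enough; a quick sketch using Node's built-in zlib (the file path is whatever bundle you care about):

```js
// check-wire-size.js: compare a file's disk size with its gzipped size.
const fs = require('fs');
const zlib = require('zlib');

const file = process.argv[2]; // e.g. node check-wire-size.js dist/jquery.min.js
const raw = fs.readFileSync(file);
const gzipped = zlib.gzipSync(raw, { level: 9 });

console.log(
  `${file}: ${(raw.length / 1024).toFixed(1)} KiB on disk, ` +
  `${(gzipped.length / 1024).toFixed(1)} KiB gzipped`
);
```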
I've been heads down on Chromium + Electron performance with my app, a personal knowledge repository:
https://getpolarized.io/
Electron has been great for this but it can be slow.
We have to render PDF and HTML documents for annotation and this part can get overwhelmed if I lock up the main thread.
We still have a lot of work to do with threads and web workers to make this stuff easier, I think. It's possible, but it's definitely not easy.
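The direction we're heading is roughly this: do the heavy parsing off the main thread and hand the bytes to a worker as a Transferable so nothing gets copied. A sketch, where 'pdf-parser-worker.js', the message shape and renderAnnotations are all hypothetical:

```js
// Fetch a document, then transfer its bytes to a worker for parsing so
// the main thread stays free to render and respond to input.
const worker = new Worker('pdf-parser-worker.js');

async function loadDocument(url) {
  const response = await fetch(url);
  const buffer = await response.arrayBuffer();
  // The second argument transfers ownership of the buffer (no copy);
  // after this call, `buffer` is no longer usable on the main thread.
  worker.postMessage({ cmd: 'parse', buffer }, [buffer]);
}

worker.onmessage = (event) => {
  renderAnnotations(event.data); // back on the UI thread with the parsed result
};
```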