Chrome Never-Slow Mode Prototype (googlesource.com)
96 points by jupp0r on Feb 5, 2019 | 71 comments



It would be cool to get some of this stuff in Electron but I think in that case it's actually possible to just do this in your app yourself.

I've been heads down on Chromium + Electron performance for my app, a personal knowledge repository:

https://getpolarized.io/

Electron has been great for this but it can be slow.

We have to render PDF and HTML documents for annotation and this part can get overwhelmed if I lock up the main thread.

We still have a lot of work to do with threads and web workers to make this stuff easier, I think. It's possible to do, but it's definitely not easy.


Your product looks amazing, but I don't see any mention of encryption for the cloud sync? Unfortunately that's a dealbreaker for me :(


From the product description, that doesn't look like the main use case... it seems more for academics?


It's probably in Google's interest to limit web bloat that degrades UX. AMP might be one strategy (with many negative aspects that many here will be familiar with), leveraging Chrome's browser share to impose limits might be another one.

Also reminds me of https://news.ycombinator.com/item?id=19038092 . I wonder if in the future we'll look at excessive JS bloat the way we look at annoying pop-ups today.


The two are related. Work on Never Slow Mode stopped last fall. It picked up after the WebKit/Safari discussions last week. I doubt Chrome will ship a "never slow mode", but I could see looser budgets, a la the WebKit proposal.


Hi, author of the NSM patch here. It's still very much under development, but most of the important considerations aren't technical; making progress on a system like this is more about how to roll things out than about the implementation.

It's great to see WebKit folks thinking along the same lines, and I hope to be able to discuss this with them. Coalitions -- like the Mozilla/Chrome work on TLS adoption -- are critical to making progress in large ecosystems.


Is there a preliminary plan for how Never-Slow Mode would be deployed? I assume the idea is to eventually have it enabled by default on all mobile devices and to prompt laptop/desktop users to enable it if they have a slow connection?


> It picked up after the WebKit/Safari discussions last week

looks like it picked up on Jan 3 according to that PR?


This approach to web performance smacks of someone who has never had to build an app under real world conditions. "It will be smaller if you rewrite your app to be AMP-first. No, scrap that, the PRPL pattern is in, we'll hold the PWA rewrite for next week" are not viable solutions for engineers at companies that aren't Google.

Try telling your marketing department that they can't integrate third party scripts through Google Tag Manager, or a VP that the SDK for that third party service they championed is too large and the project will have to be shelved. Or put a pause on a critical project because the application is approaching the hardcoded JS limit and another department shipped before you did.

Hardcoded limits are the first tool most people reach for, but they fall apart completely when you have multiple teams working on a product, and when real world deadlines kick in. It's like the corporate IT approach to solving problems — people can't break things if you lock everything down. But you will make them miserable and stop them doing their job.


Firstly: Too bad. The dysfunctional practices of corporate software development don't override people's right not to have their own computer's performance trashed by shitty websites. It's their right to run whatever browser they like, including one that doesn't allow slow, bloated and poorly engineered software to run, no matter how hard that makes life for the marketing departments of the world.

Secondly: For decades computers operated with extreme limits on their available memory and processing capacity, relative to today's computers. Despite this, people managed to write software, even at companies with marketing departments and idiot VPs. It might take these people a while to accept that they can no longer insist on pushing whatever garbage they want just because the cost is borne entirely by the end user, but eventually they will. Or they won't, and these dysfunctional companies will die out. Either way, users win.


> Per-script max size: 50KiB

That is quite small, actually... My application (internal use, not consumer facing, so no panic) has a single packed script file that is around 100KB.


That's transfer size (compressed with gzip or whatever). So it's a bit more than it sounds, but yeah - still tiny.

If this is the future, I know a lot of web apps that will need retooling. That is just enough space for React (+ react-dom). We'll have to split up React and the web apps themselves if Chrome goes this route; although I suppose that's marginally better for caching performance.


A per-script max size of 50 KiB isn't realistic in the wild today.

For example, non-minified bootstrap.js is more than that. Just as a data point.

This will break a bunch of the web. I'll be curious to see how this gets worked out as the work progresses.


Hey, author of the patch here.

The restrictions are wire size. `bootstrap.min.js` is ~10KiB on the wire, which means it comfortably fits (as do jQuery, most analytics packages, etc.).

That said, the prototype does break a lot of the web, but that's not a crisis. The intent isn't to have this rolled out everywhere against unwitting content, but rather (like TLS), to let developers opt-in to a single set of rules when they see value. There are also places (e.g. PWAs) where the browser-imposed quality bar needs to be high. Blocking PWA install for sites that don't send the opt-in header seems like a reasonable place to start.
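
If you want to sanity-check your own bundles against those wire-size budgets, here's a rough Node sketch (gzip here is a stand-in for whatever encoding your server actually uses, and the file name is just an example):

    const fs = require('fs');
    const zlib = require('zlib');

    const raw = fs.readFileSync('bootstrap.min.js');  // on-disk bytes
    const wire = zlib.gzipSync(raw);                  // approximate transfer ("wire") bytes
    console.log(`disk: ${raw.length} B, wire (gzip): ${wire.length} B`);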


And moment.js is 68KB and luxon 64KB.


Moment.js is 16.7k. Moment.js plus locale data for every single locale in the world is 68k. There is no justifiable reason to ever load every single locale in the world at once except for a few extremely niche cases. Real world use cases typically need one locale at a time, occasionally two or three, never hundreds.

Luxon, I believe, is a similar story.
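
For what it's worth, a minimal sketch of the commonly documented way to keep the locales out of the bundle (assuming a webpack 4 style config; paths are your own):

    // webpack.config.js
    const webpack = require('webpack');

    module.exports = {
      plugins: [
        // moment does `require('./locale/' + name)` internally; ignoring that
        // context keeps every locale file out of the bundle by default
        new webpack.IgnorePlugin(/^\.\/locale$/, /moment$/)
      ]
    };

    // in app code, pull in just the locale(s) you actually need
    import moment from 'moment';
    import 'moment/locale/de';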


The whole reason I use a date formatting library is so that I can localize dates into whatever locale the user is on. I can't just assume one locale and only load that.


"Caps do not apply to workers". I have a limited understanding of them but I understand that if your scripts are fat then you have something closer to an application than a document, and it would be better served with service workers that can run in the background and keep most of your application structure in place ?


Workers work pretty differently from regular scripts. The way they are loaded, how you import other scripts into a worker, and how it communicates with your main thread (window) is pretty much a standard of its own; it just happens to use JavaScript as its language.
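
To make that concrete, here's a minimal sketch of the pattern (the file names and `doHeavyWork` are made up for illustration):

    // main.js
    const worker = new Worker('worker.js');
    worker.onmessage = (e) => console.log('result from worker:', e.data);
    worker.postMessage({ cmd: 'parse', payload: '...' }); // data is copied (structured clone), not shared

    // worker.js
    importScripts('heavy-lib.js');        // workers pull in extra code via importScripts, not <script> tags
    self.onmessage = (e) => {
      const result = doHeavyWork(e.data); // assumes heavy-lib.js defines doHeavyWork
      self.postMessage(result);
    };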


Workers impose non-trivial architectural overhead vs. ordinary scripts. They're not at all a drop-in replacement.


I imagine they’re trying to encourage code splitting.


Which is what popular setups like create-react-app and similar are against. It doesn't make much sense for heavier applications like interactive editors to have multiple artificially split files just to conform to this limit.


I don't think it's correct to claim CRA and its peers are against code splitting. Dynamic import is a well-documented way to achieve this:

https://facebook.github.io/create-react-app/docs/code-splitt...
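
For example, a minimal dynamic-import sketch of the kind those docs describe (the module path and export name are hypothetical):

    async function onExportClick() {
      // fetched as a separate chunk only when the user actually triggers it
      const { exportToPdf } = await import('./pdf-exporter.js');
      exportToPdf(document.title);
    }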


> Instead of downloading the entire app before users can use it, code splitting allows you to split your code into small chunks which you can then load on demand.

You need to make a business case that your whole application is modularly composable.

Some applications might fit under this umbrella, but there's other stuff that doesn't, like a PDF renderer, or something like Google Slides, where (as I just checked) the core editor JS is bundled in a single file as large as 1.4MB.


On the other hand, with advancements such as HTTP/2, there's not much reason to pack multiple scripts into a single file anymore.


The main reason you pack scripts is because you're compiling, and your compiler is checking and optimizing your cross-file interactions. For example, you need to merge scripts to be able to detect and delete unused functions, because otherwise some unmerged script might call the function without the compiler being able to detect it.
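
A small illustration of that point (file names are hypothetical): if the bundler sees both files together, it can prove `legacyHelper` is never called and drop it; shipped as separate scripts, it has to assume some other script might call it.

    // utils.js
    export function formatDate(d) { return d.toISOString().slice(0, 10); }
    export function legacyHelper() { /* large and unused, removable only after bundling */ }

    // app.js
    import { formatDate } from './utils.js';
    console.log(formatDate(new Date()));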


It seems like it would be trivial for your compiler to go ahead and split the files up again afterwards. It's not like you cared about file organization on the client side anyway if you were packing them, so it's free to split however it likes.

And of course you'd naturally treat this as just another artifact of optimization, the same way single-file packing was. Which it is, given HTTP/2 and 3.
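
Bundlers can already be asked to do roughly this; a hedged webpack sketch (splitChunks.maxSize exists as of webpack 4.15, and the number below is pre-gzip, so it's only a rough proxy for a wire-size cap):

    // webpack.config.js
    module.exports = {
      optimization: {
        splitChunks: {
          chunks: 'all',       // split shared/vendor code out of every entry point
          maxSize: 50 * 1024   // hint: try to keep emitted chunks under ~50 KiB each
        }
      }
    };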


No, because if you split it into two pieces, common utility functions will be called from both pieces, at which point you have to duplicate them.

Splitting probably makes sense only for an infrequently used feature, like video chat in an app like Gmail or Slack.


I wonder if they ran it against Google properties first. I imagine Gmail wouldn't fare well.


Following up my own comment... looks like current Gmail will have trouble. I loaded it in Chrome with mobile emulation set to "Pixel 2", and there's one JS file that not only blows the per-script max size but blows the whole "Total script budget" by itself, not counting the other 78 JS files Gmail wants to download. I stopped looking there.

(Per-script max size: 50KiB, Total script budget: 500KiB)


These settings look like something that's great for mobile phones. On mobile there's a native Gmail app, so this isn't a problem.


I don't think so - the owner of this issue is the tech lead of Progressive Web Apps at Google, according to his LinkedIn. He probably envisions a future where Gmail can run as a web app on a mobile phone with zero lag. Combined with previous additions to web APIs such as notifications, Gmail the web app could be at parity with Gmail the native app.


That's cool, I have nothing against improving the experience of any site :)


But I prefer webpages over apps. Especially on mobile.

The app frenzy was over years ago. Nobody wants an app anymore.


For anything I use regularly, I'll choose a native app over a web site or web app every time, because they typically consume less battery, CPU, memory, and bandwidth. They're usually also faster, and a well-written app is always going to feel better than one running in a web browser.


> Nobody wants an app anymore.

News to me. You sure you’re not just projecting your personal preference?


It's clearly my personal preference, but it's also my experience that fewer and fewer people in my social circles want "yet another app" when there's already a website that works fine. Games are the only exception.

And I work with software. None of my colleagues wants to work with or test anything not strictly web-based.


> None of my colleagues wants to work with or test anything not strictly web-based.

If that's true, I'd be willing to bet there's some extreme self-selection going on. Which is fine, but I'd be very careful about generalizing your experience outside of your immediate social circle.


> Nobody wants an app anymore.

Citations needed. I, for one, like apps.


Haven't heard that from anyone in my professional circles and, yes, we all work in software.

Anything worth it is native.


WebKit also wanted to do something like this: https://bugs.webkit.org/show_bug.cgi?id=194028


That's really cool. The only thing missing is a cap on the total number of DOM elements on a page. I've seen production sites with literally thousands, and it slows most browsers to a crawl (and balloons memory).


This comments page has over 1194 elements on it right now, with only 34 comments. Thousands is peanuts.
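
(If you want to check a page yourself, a quick devtools-console one-liner:

    document.querySelectorAll('*').length

counts every element currently in the document.)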


If the issue of too many DOM elements is memory use, wouldn't it be better to just limit the memory use of a page?

Apologies if this is a really ignorant question as I know very little about browsers/front end development.


It's more than memory use - it causes tree traversals to take longer, which means unbounded style recalculations (common) and side-effecting DOM changes take longer.


There are legit uses of RAM; fully fledged web apps are ordinary now.


Thousands isn't really a whole lot. If you have, say, a table with 10 columns and 50 rows with complex content in the cells (including div elements), you'll be well into the thousands.


One thing that always surprises me with Chrome (and other browsers) is how easy it is to get a browser tab to freeze by running an infinite loop in JS. This happens to me from time to time due to programming errors, and I usually have to close the entire browser to fix it, as the tab will neither close nor let me open the developer console to turn the script off. That probably means the script runs on the same thread as the render/UI code for a given tab? Anyway, I find this quite annoying and I wonder why the browser behaves like this (I'm sure there's a good technical reason, since other people must have encountered the same problem, so if this were easily fixable someone would probably have fixed it already).


A similar thing that surprises me is how malicious ad networks are still able to effectively get your Chrome tabs into states where you can't easily close them without killing the entire chrome task.

While I'm sure there's a lot of complexity I'm not considering around legitimate uses of tab-close event handling by real apps, it would be nice if there were some kind of alternate tab-close input (e.g. holding ALT or SHIFT, or both, while clicking the tab's 'X') where the user signals "yes, I absolutely want to close this tab, no matter what, kill it with extreme prejudice".

If such a thing exists, which it might, someone please correct me by telling me what it is.


You can kill this tab without closing the browser by using 'end process' in Chrome's Task Manager (available under the Window menu).


But that's unintuitive. Why can't you click the X button on the tab to close it? That's browser UI, not web page UI, so it should never be unresponsive.


I was responding to the parent comment with a way to kill the tab without quitting the browser, that's all.

I didn't make this feature; I don't have the answer to your question. It was intuitive enough for me to find it on my own without reading any documentation. So, shrug.


I wasn't criticising you, just asking the more general question here of why clicking the close button doesn't work reliably.


So the plan is to break the internet or get rid of all rich apps and just use the browser for crappy static pages?


So how do I install this?


Sounds like something they should have done ages ago.


Chrome is already much faster than Firefox, with stuff like this they'll wipe the floor with the competition.


Chrome is about ready to hobble ad blockers[1]. So soon, Firefox will be faster for many sites with ads.

[1] https://www.bleepingcomputer.com/news/security/chrome-extens...


That's only a proposal and it was put out to get community feedback (which was overwhelmingly negative...)


There's an open issue[1] with people assigned to it. And even documentation for the new hobbled API that is available in both the beta and dev channels[2].

Feels a bit past being a proposal.

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=896897

[2] https://developers.chrome.com/extensions/declarativeNetReque...
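
Roughly what a rule looks like under the new API, going by those docs (the domain here is made up): rules are declared statically in a JSON file referenced from manifest.json, the extension never sees the requests themselves, and the total rule count is capped.

    // one declarativeNetRequest rule (shown as a JS object for readability)
    const exampleRule = {
      id: 1,
      priority: 1,
      action: { type: 'block' },
      condition: {
        urlFilter: '||ads.example.com',      // requests to this host...
        resourceTypes: ['script', 'image']   // ...for scripts and images get blocked
      }
    };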


Chromium still being the better browser in most ways, I wonder what could be done about this. Maybe it's time to have a simple patchset for doing content blocking natively in the engine. With patches like that floating around, they can't very well say that it's an efficiency issue like they do in the doc describing Manifest V3.


Chrome feels way slower than Firefox, and the only thing keeping me from making the jump is laziness. I'm too lazy to find alternatives for all my extensions, move bookmarks, history, passwords, etc., but Firefox feels way faster.


> all my extensions, move bookmarks, history, passwords etc

Minus the extensions, migrating everything else should be easily accomplished via importing and should only be a few clicks in a wizard.[0]

[0] https://support.mozilla.org/en-US/kb/import-bookmarks-google...


If you're comparing a daily-use browser that includes a lot of extensions with another browser that's empty, the latter will win.

Then you start moving your stuff: you install extensions and, while fighting your habits from the old browser's UX, you collect history, cookies and all that.

Then some time passes and you look back at your old browser, now with a cleaned profile, so shiny and fast :)


What extensions do you use that make your browser slower? I only use HTTPS Everywhere, which has no effect, and uBlock Origin, which makes most webpages much faster, so I don't see how this is true.


Chrome feels slower to me, plus speed is not the #1 concern, is it now? Firefox.


Firefox isn't exactly stagnating - Servo has a lot of promise for performance.


Don't count on it. If they continue with policies like single sign-on, the privacy-centric crowd will flee, followed by mainstream users. It's happened before, and no empire has ever lasted an eternity.


I'd like to see a 5x reduction in those limits but awesome idea.


5x reduction? Heck, you'd struggle to find common libraries that fit under the existing limits! jQuery is 88kb (yes, everyone can hate on jQuery all you want, but it's still commonly used), which is already over the 50kb limit! At 10kb...

Modern cameras take pictures measured in megabytes, and you're suggesting the per-image max be 200kb and the total image budget 400kb?! Two crappy-quality images would blow that out of the water (heck, a single crappy image could).

I get that this is a prototype and seems to be intended for mobile devices, etc. But those limits are already something you'd have to work for; I'd argue it's near impossible with 5x fewer resources.

tl;dr: by today's standards a 5x reduction in those limits is just code golf.


Author of the patch here. Note that the limits are wire size, not disk size. jQuery, post gzip, is closer to ~30KiB, meaning it fits nicely under the per-file restriction. The total JS limit per-interaction is 500KiB gzipped. Uncompressed, that's often more than 3MiB. That's a whole lotta code!

The per-image limit is currently set at 1MiB (not 200KiB).

Hope that helps.



