No: this doesn't make sense :/. The core problem is that stuff sits around executing some tiny input handler or animation in a loop, burning CPU. Whenever I have tracked a worst-performing tab down to the code causing the problem, it has never been a large amount of code: it is some stupid mechanism that polls the position of something (like the cursor or the scrollbar), or is trying to push some analytics to a server.
This really has nothing to do with the amount of code being downloaded. I realize some people complain about how much stuff they have to download, but that just isn't what is actually causing most people problems. Sure, tracking CPU is sort of annoying, but it absolutely isn't hard. Chrome is already running these things in separate processes (for security), and the operating system is tracking the time used by each thread: you can just ask it and impose some kind of limit if that is what you care about.
I mean, in this article I see ideas for size limits for images, which is at least consistent... but that is going way way too far: 1MB just isn't good enough for a reasonable image. If you care so much about bandwidth, make a bandwidth cap for the page and if it exceeds it--across all media--figure out some way of blocking or punishing the site.
What most of us care about is that there seems to be no limit on the CPU usage of any given page. This is easy to fix--it is a virtual machine, after all!--by just doing the same trick Erlang uses (compiling code into preemptible fibers) and then limiting their execution time slices.
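A toy sketch of what I mean, in plain JS -- generators stand in here for the compiler inserting the preemption points, and the names and numbers are made up:

    // Erlang-style "reduction counting": every task gets a budget of steps per
    // slice and is forcibly suspended when the budget runs out -- no cooperation
    // needed from the task author once the yield points are inserted for them.
    function* busyTask(label) {
      for (let i = 0; ; i++) {
        // ...some tiny handler or animation doing work forever...
        yield; // the engine would insert these preemption points automatically
      }
    }

    const SLICE = 1000; // "reductions" a task may burn per turn

    function runSlice(task) {
      for (let fuel = 0; fuel < SLICE; fuel++) {
        if (task.next().done) return true; // task finished
      }
      return false; // budget exhausted: preempted until its next turn
    }

    // Round-robin the tasks so no single one can monopolise the CPU.
    const tasks = [busyTask('cursor-poller'), busyTask('analytics')];
    setInterval(() => tasks.forEach(runSlice), 16);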
What I know I care a lot about is when a tab I haven't looked at in three days is suddenly using CPU time _at all_. Just make it so background tabs get severely limited in their ability to do background execution and eventually get stopped entirely, and the problem is essentially solved.
(Chrome, which is apparently already big on these size limits, doesn't do this, and I swear it is because it is against Google's interests to do it as it mostly makes it more difficult to do stuff like tracking and advertising :/.)
Chrome does throttle background tabs: https://developers.google.com/web/updates/2017/03/background...
It doesn't throttle them all the way to zero, though. If they did that they'd break things like sites that change their favicon to signal "unread message".
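(For context, the favicon trick is only a couple of lines of page script; the file name here is just an example:)

    // Swap the <link rel="icon"> to an "unread" variant when a message arrives.
    const iconLink = document.querySelector('link[rel~="icon"]');
    if (iconLink) iconLink.href = '/favicon-unread.ico';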
(Disclosure: I work at Google, though not on Chrome)
Many services use an open tcp connection (e.g. websocket) rather than polling. Mobile platforms are doing the same thing, just on a system level. I guess service workers are the closest web analogue.
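Roughly the difference, as a sketch (the endpoint names are made up):

    const handle = (msg) => console.log('update:', msg); // placeholder

    // Polling: the tab wakes up on a timer whether or not anything changed.
    setInterval(async () => {
      const res = await fetch('/api/updates');
      handle(await res.json());
    }, 5000);

    // Push over an open connection: the tab only does work when the server
    // actually sends something.
    const ws = new WebSocket('wss://example.com/updates');
    ws.onmessage = (event) => handle(JSON.parse(event.data));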
Not sure what you mean, Chrome's background tabs are heavily throttled. Killing completely after a day or two would be nice. I use an extension for that.
I think you're missing the goal when you say what "most of us care about", though. Background tabs aren't an issue for people without many tabs open (many people) or who are on mobile (even more). Slow loading pages definitely are.
That didn't make sense to me: the small (~700px wide) images I have in the library are all 50±20kB.
I went to Pexels, which hosts free stock photos, and I took the first one off their front page that had enough colors and didn't show faces: https://www.pexels.com/photo/person-pouring-coffee-on-white-...
Its maximum size is excessive unless you aim to support 4K: 5472×3648. It weighs 4.9MB. A lot!
I then went to Squoosh.app, which allows one to optimize images in various ways. The default option – MozJPEG at 75% quality – reduced the image size by more than half, down to 2.24MB, with no apparent loss of quality even at high zoom. Illustration: https://i.imgur.com/2bkHrot.jpg
Do you really need to serve a 4K+ image, though? I reduced the image to 1920×1280, using the same app, with the same compression settings. 184kB! Illustration: https://i.imgur.com/52MctSN.jpg
At 33% zoom (which is necessary for a reasonable comparison, since Squoosh stretches the smaller image for comparison), the compressed image looks very good. It lacks the noise the original had, and looks more glossy. There are also advanced settings one could tinker with, perhaps achieving better compression with the same loss.
Is it a big deal? Perhaps – especially if you aim to present the image as-is, with minimal loss in the conversion from RAW to, say, PNG. For most websites, though? I reckon it's not going to be a problem: it's the sense of the image that matters, not the details.
And if you regularly serve 1MB+ images, maybe there's some sort of an indicator or tag that you could apply that will tell the browser: "Hey, look, I know you want to save bandwidth, but it's kinda my schtick to show really good images, so let me through, yeah?"
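In the meantime, the existing responsive-image machinery already lets a site ship the 1920px version to most screens and reserve the full-size original for displays that can use it. A sketch (file names made up; a site would normally write the equivalent markup):

    const img = document.createElement('img');
    img.src = 'coffee-1920.jpg'; // sensible default
    img.srcset = 'coffee-1920.jpg 1920w, coffee-5472.jpg 5472w';
    img.sizes = '100vw';
    document.body.appendChild(img); // the browser picks the smallest adequate file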
We're right at the point where people are starting to actively support 4K in web apps. Sure, not many web apps actually need it, but the ones that do (like photo browsers) definitely need it if they want to keep up with relevant trends over the next five years or so.
I don't think that writing lots of lean web sites and hoping for people to switch to them is the right approach. The approach chosen here, using the power of the user agent, seems the right one.
Are they? Have web browsers actually gotten any faster at all over the last, say, 5 years? 10?
JS engines got a bit faster, but what about CSS & HTML parsing? 2D rendering performance? Layout engine performance? DOM performance? Mozilla made a bit of noise about this a year or two ago with their whole Project Quantum push - but had you ever heard a peep about this stuff prior to that? Or since? Nobody benchmarks this stuff, and yet it's insanely critical to interactive performance. But since it's harder to measure than JS performance, the only thing ever measured is JS performance. And occasionally, rarely, page load speeds.
Open up a 10MB plain text file in Chrome and it completely falls over. Zero JS. Zero CSS. Zero HTML. Just plain text. Are modern browsers really fast?
And for what it's worth modern computers are wide - 4 core with SMT is damn near low end these days. Yet the web is still incredibly stuck in the single-thread mode of operation. Both the browser internally and the platform itself (WebWorkers are far too slow, heavy, and restricted to meaningfully be used to offload interactive work). And there's almost no work being done to address this. WASM's threads are the only sliver of light here on the platform side. Is it really surprising that people throw RAM at the problem as a result? Throwing more caches at things is the natural response to being heavily starved for CPU on the single thread you can use.
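To be concrete about the "heavy and restricted" part: handing work to a worker looks roughly like this, everything crosses the boundary as a copied message, and the worker can't touch the DOM at all (the file name is made up):

    // main.js
    const worker = new Worker('crunch.js');
    worker.postMessage({ numbers: [1, 2, 3, 4] }); // input is structured-cloned (copied)
    worker.onmessage = (e) => console.log('sum from worker:', e.data);

    // crunch.js -- no DOM access here, only message passing
    onmessage = (e) => {
      const sum = e.data.numbers.reduce((a, b) => a + b, 0);
      postMessage(sum); // result is copied back
    };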
We can't be piling more and more high-level crap onto the standard and expect Moore's law to keep up with it.
That said, my point stands - I was pointing out how browsers like Firefox and Chrome are still bloated software compared to early Opera (pre-Blink, Presto-based versions). Them "dropping out of the race" is irrelevant to that aspect.
That said, there were often compatibility problems on various websites, because they favored IE (which wasn't standards compliant), and rarely if ever tested on Opera.
JS is absurdly fast (comparatively).
The DOM/rendering is sluggish (comparatively).
The observation is the driving point behind virtual DOM.
Also why high performance rendering uses Canvas or WebGL, at the expense of debugging tools, browser extensions, and accessibility.
That's not true at all; browsers used to be much faster when HTML was simpler.
> ...but web sites are bloatier than ever before, eating up all the hardware and software capabilities.
Actually, even a rather simple website can be very slow to render if it does a significant amount of dynamic updates. DOM performance is an absolute disaster, hence all these "virtual DOM" implementations like React. CSS layout is incredibly expensive. That has nothing to do with website bloat in and of itself; it's the platform coming bloated out of the box.
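A toy example of the kind of thing that makes it a disaster -- interleaving writes with layout-forcing reads versus batching (a sketch, not any framework's actual code):

    const list = document.querySelector('#list'); // example container

    // Naive: reading offsetHeight after each insert forces a synchronous
    // re-layout on every iteration ("layout thrashing").
    for (let i = 0; i < 5000; i++) {
      const li = document.createElement('li');
      li.textContent = 'row ' + i;
      list.appendChild(li);
      void li.offsetHeight;
    }

    // Batched: build off-DOM, insert once, let layout run once.
    const frag = document.createDocumentFragment();
    for (let i = 0; i < 5000; i++) {
      const li = document.createElement('li');
      li.textContent = 'row ' + i;
      frag.appendChild(li);
    }
    list.appendChild(frag);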
This wouldn't be such a problem, except he has huge sway over the language via TC39 and over the web via his work at Google, and keeps trying to foist over-designed, complex solutions like Web Components and PWAs onto developers.
> Total script budget: 500kB
This limit would break nearly all modern SAP apps. Bootstrap 4's minified JS bundle alone is 49kB. Also, the other limits are far from reasonable.
It would be wiser if Chrome/Firefox/etc. targeted ad networks. It would be nice to have optimised ads that are a tiny fraction of the website's size. Some ads download several megabytes just to show video GIFs. These are the bastards that waste a good chunk of my 4G data plan.
Websites need to start being reasonable about the number of ads per page. When I pause my ad blocker, I get scared by the websites I usually visit: so many goddamn ads everywhere, it makes me wonder why ad-blocker users are not 99% instead of the current 30%.
For website owners: if your website/app is slow, users will stop using it and other websites will replace it. It's your own problem, not the community's problem.
I presume you mean SPA, and you’re probably correct. What limits do you think are reasonable?
To me, if a SPA is > 500kb of JS my assumption is it’s unnecessarily bloated.
Gzipped aren’t most modern frameworks/libraries < 150kb? With many being considerably less
Perhaps the issue isn’t proposed limits, but instead the state of most modern SPAs
It used to be innocuous when Google had 20% browser market share, but now they act like they own how users should experience the web. Like the uBlock origin guy said... if Chromium keeps heading down this path it should no longer be called a "user-agent" because it's no longer acting on behalf of users.
There are plenty of reasons why you might want heavy apps or raw photos on the web. This is the beauty of the web: freedom!
Yes, I meant SPA.
The issue with SPAs is simple: you could use, let's say, React with Apollo GraphQL, and the final bundle would be below 500kB. The problem is that when you need some UI components, the obvious way is to search for a plugin for that. Many React libraries use stuff like jQuery, Lodash, etc. under the hood, which increases the final bundle size. Over time, it just adds up with every new feature you want to add. Example: Kendo UI for React is just a wrapper around the jQuery version. Many other libraries follow this approach.
To make things worse, the same package can be added to the client multiple times with different versions, because they're dependencies of different packages.
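A cheap way to at least see the duplication is to point webpack-bundle-analyzer at the build -- a generic sketch, not a fix:

    // webpack.config.js (sketch): emits an interactive treemap of the bundle,
    // which makes duplicate lodash/jQuery copies pulled in at different
    // versions easy to spot.
    const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

    module.exports = {
      // ...the rest of your existing config...
      plugins: [new BundleAnalyzerPlugin()],
    };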
Enforcing limits could bring us to a world where we need to use iframes and subdomains to get an app running. For sure no one wants that.
I'm guessing you've never written something that can, for example, dynamically generate XLSX files out of filtered datasets in the browser. Sure, we could do it on the server - but our users value fast and regular turnaround time on feature updates far, far more than the site loading a little slower every few weeks when the cached JS is replaced.
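(For anyone curious what that looks like: with a library such as SheetJS it's only a few lines of client-side code -- a sketch, not our actual implementation:)

    import * as XLSX from 'xlsx'; // SheetJS

    // Turn an already-filtered array of row objects into a downloadable .xlsx.
    function exportRows(rows) {
      const sheet = XLSX.utils.json_to_sheet(rows);
      const book = XLSX.utils.book_new();
      XLSX.utils.book_append_sheet(book, sheet, 'Export');
      XLSX.writeFile(book, 'export.xlsx'); // triggers the download in the browser
    }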
Guess again :)
> our users value fast and regular turnaround time on feature updates far, far more than the site loading a little slower
That might be acceptable, for a time. Particularly if you're working on a product that doesn't have many/any backend developers. Or a backend. Or a product where the developers are more specialized in frontend. Or a product where the backend does not have a mechanism for handling long-running tasks.
But at some point for features like various file exports in the browser, one might want to fix the “why can’t we build X feature and delight customers as quickly server side”
Or not ¯\_(ツ)_/¯
Sure. It's just that there's a two- or three-year-long to-do list of features before we can get back to optimizing things that none of the users care about anyway.
Personally, I don't see a problem to be solved here. Bloated sites with ads will always exist. The solution is quite simple - don't visit them. They will eventually die or be replaced by something leaner. A great example is GitHub which has replaced Sourceforge.
Libertarians will upvote this post; they believe the market can solve the issue.
Also, libertarians are statists. Weak statists, but statists nonetheless.
If the user doesn't give the developer what he/she wants, the developer will:
- block them until the user allows however much resource abuse the developer desires (ad-block blocking scripts), or nag them to change their settings
- try to evade the blocking using any and every unblocked mechanism (ads via websockets to get around request filters, etc.)
- do absolutely nothing and let the site stay broken
The problem isn't that there are resources to abuse. The problem is the significant motivation to abuse them.
Ads in iframes are not smart enough to show anything related, as there is no page content for them to see.
As for the frames, that's on the embedder to provide the necessary information instead of doing user tracking.
Also, are any of these limits going to apply to XHR? Or can I just use a loader and eval to get unlimited JS? And if the limits do apply, I assume that means gmail and maps simply stop working at some point?
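(The loader I have in mind is nothing exotic -- from the network's point of view this is just a data fetch; the URL is made up:)

    // Ship almost no <script>, then pull the real payload as text and eval it,
    // sidestepping any budget that only counts bytes delivered via <script>.
    fetch('/bundle.payload.txt')
      .then((r) => r.text())
      .then((code) => eval(code)); // or new Function(code)(); only CSP stops this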
Gmail still works?
Seriously though the limits applying to XHR is a really good point. I'm guessing browsers would have to enforce some artificial restriction on the eval function, possibly prompting the user with "This webpage is attempting to run a large amount of [insert user-friendly layperson terminology here]. This may be related to advertiser abuse. Do you want to allow this?"
But it's not clear how they plan to solve it for iframes either which could essentially grant unlimited JS as well.
They have good intentions with the discussion and it's an interesting idea but I don't see it becoming reality until they can solve these problems which seem actually rather hard.
I personally do. But I'm never given the choice to make a site lean. The priority is always "make it work" which is of course the right thing to do first, but after that nobody gives you time to make it smaller and faster. There's always the next ticket in the backlog and you cannot argue.
So please, don't bash backenders. Many of us care and do our best with the very limited time budget we manage to STEAL to do optimizations. But proper optimizations require dedicated time and effort with focused sessions -- and we are never given those.
You have to do that with all of the software you run, and the operating system you run it on, anyway.
This is an emotional statement. Mind elaborating it with facts?
Yeah, like Firefox, Safari and... which other browser exactly?
With the near-monopoly state of the browser ecosystem, that particular argument I quoted above isn't very relevant these days.
Although, that does not do much about small hot spots/loops.
That's right: the ad networks will hyper-optimize their script sizes and runtime footprint. They'll just become much better.
Most websites are reached from Google search. If Google de-ranked slow, ad filled, and paywalled sites, the internet would fix itself overnight.
But they will never do this, because Google cares about protecting their own ads above everything else.
Nowadays, I'm essentially out of touch with the latest ads as I never see any. …And what a truly wonderful condition that is.
Guess what language that browser add-on you are using is written in.
In a user-centric environment, we put people's needs first. So having a language that empowers developers to put features that are useful to the user is the primary focus. If you want to create JS-less websites, you STILL can do so.
While certainly curious, it is in no way hypocritical to use a hammer to smash a hammer factory.
> empowers developers to put features that are useful to the user is the primary focus
No, there is and always was a tension between usability and developers putting up new features. If you truly focus on user experience you definitely can't empower developers to invent features, you have to constrain them and force them to follow UX guidelines.
Form submissions and such have been part of HTML for a long time, and map viewing most certainly doesn't need JS. See this comment from 3 years ago:
Unfortunately the map images in the article there no longer work because Google has decided to "deprecate" that API, but there's absolutely nothing about serving what is essentially a large tiled image that requires the capability to run arbitrary code on the client. In fact, here's a tile URL I just found that still works: http://mt1.google.com/vt/lyrs=y&x=1325&y=3143&z=13
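(The x/y/z in that URL are just the standard "slippy map" tile grid -- something a server can compute with no client-side code at all; roughly:)

    // At zoom z the world is a 2^z by 2^z grid of tiles (Web Mercator);
    // a given lat/lon falls in exactly one tile.
    function tileForLatLon(lat, lon, z) {
      const n = Math.pow(2, z);
      const x = Math.floor(((lon + 180) / 360) * n);
      const latRad = (lat * Math.PI) / 180;
      const y = Math.floor(
        ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
      );
      return { x, y, z };
    }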
You're right, that can be a significant problem. Often what I do is to turn JS on and let the page load then turn it off again. When I encounter Smart Alec sites that try to catch people like me out by refreshing JS every few seconds, I either toggle network access to the internet off or capture the text by various other means.
There's always a solution one way or the other.
The issue is not that the code that websites make your browser execute is written in a bad language, but that websites make your browser execute code.
> In a user-centric environment, we put people's needs first. So having a language that empowers developers to put features that are useful to the user is the primary focus. If you want to create JS-less websites, you STILL can do so.
Have you actually taken the time to survey typical pages these days? I'll bet not, for how could you honestly justify or defend them as they are? It's only possible if you're on the receiving end of those cents. (I'll demonstrate later in a separate reply with some of my own stats.)
I am sorry that you've missed the main thrust of my argument. What many website owners and developers simply do not realize or just blatantly ignore is how truly alienating it is for text-based junkies like me to be, say, well into reading the second paragraph of a story only to have a pop-up (or worse, an overlay) suddenly appear with the aim of having me join a mailing list or such. …And, as we already well know, that's just the beginning of the assault on us users when we visit websites. As far as I am concerned, such behavior is just not on.
If you're not visiting websites that rely on JS, you're not using the same internet as the vast majority of people who use the internet. Good for you, I guess. Good luck looking at a map in a web browser without JS, although I'm sure you'd never sully your computer by visiting a Google website.
Absolutely not, as I have no need to do so: there's so much more on the Web other than Google, Facebook and Amazon. I can't even remember my usernames, let alone the passwords.
Clearly, what I do is irrelevant to everyone else (and no one else would care anyway). My major concern is that those who find themselves having to use Google or Facebook essentially have no other alternatives. In this way Big Tech has effectively monopolized the Web, and I consider that completely unacceptable. So should most other people.
I don't care. I can use:
And so on.
If I presented visitors who have JS disabled with the opportunity to download a native client (yes, real native, not just handing you a single-executable bundle of a browser and the web app), would you download it?
a) If I provided the binaries hosted on my server, served to you over TLS?
b) If I linked you to it in Windows Store / Mac App Store / an Ubuntu PPA on launchpad.net / Google Play Store / iOS App Store / F-Droid?
c) If I told you to install the Rust tool chain via https://rustup.rs/ and to run “cargo install” plus the name of my package that would be hosted on crates.io?
d) If I linked you to the build instructions in the wiki on GitHub that would tell you how to build it from source?
Please rank these from most likely to least likely that you would be willing to do if you were interested in my solitaire game.
And also, how would I best demonstrate the value to you of my solitaire game? Embedded video (plain video tag that doesn’t require js)? Screenshots and text? A link to the video on YouTube?
Most likely yes.
Users could then gain the upper hand over websites by essentially feeding back whatever information would satisfy a website or its trackers. Depending on the circumstances, the data could be accurate, part-accurate, part-obfuscated, misinformation, or wholly or partly randomized, and tailored to all or just specific websites [right, it needs to be very flexible]. For example, all machine and OS parameters could be obfuscated or scrambled, misinformation supplied such as claiming ads were displayed or clicked on when neither was the case, personal information scrambled or obfuscated, and trackers fed misleading and deceptive junk. Furthermore, the process could be fully automated to allow users a smooth and unhindered Web-viewing experience.
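A crude illustration of the kind of scrambling I mean, the way some privacy extensions already shadow fingerprintable properties (the values here are arbitrary, and a real implementation would live in the browser or an extension, not in page script):

    Object.defineProperty(navigator, 'hardwareConcurrency', { get: () => 4 });
    Object.defineProperty(navigator, 'platform', { get: () => 'Win32' });
    Object.defineProperty(screen, 'width', { get: () => 1920 });
    Object.defineProperty(screen, 'height', { get: () => 1080 });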
If you think these suggestions harsh or unfair then I should not have to remind you that this is effectively what thousands of commercial websites and especially Big Tech—Google, Facebook et al—are already doing with your personal data (remember Cambridge Analytica?). Essentially, most users don't have a clue about the extent of the personal data that's collected from them by these websites nor of its contents or how it is actually processed nor do they know to whom it's sold. Moreover, websites unfairly vie for both users' attention and personal data by using tactics which are unethical, overly-invasive, highly-obfuscated and deceptive.
I agree with you that it certainly won't happen in the current web climate. In an earlier reply I've made allowances for this - see my reply above to kirion25.
If I installed ad blockers on some publicly accessible computer, or a computer for multiple users, I think very few would complain about issues. If I disabled JS on the other hand...
Clearly, it depends on how one uses the web (i.e., what sites one frequents). In my case, well over 95% of the sites I visit work without JavaScript. The reason for this is that I primarily visit sites that have text as their main content.