One of my clients loads a 4.5 MB bower.js (including Angular with a lot of components, plus jQuery); they also include an extra jQuery script, the full jQuery UI, and several other scripts on each page load. Nothing is minified. The bower file alone has 300 KB of comments.
The CSS file is also nearly 1 MB.
It's just a simple website with some forms.
They have 2 developers working on the site, a scrum master, a project manager and 2 testers but somehow they can't find the time to find out which legacy code they can remove.
It's only after I pointed out that they have an exceptionally high bounce rate on their (expensive) AdWords traffic and the slowest page load time of all their main competitors that I managed to get some priority for optimizing their site.
Serious question: are there any tools that can scan a full website for unused code and unused CSS?
Note: The color-coding is likely to change in future Chrome releases. <-- Well, someone eventually found out that some people cannot distinguish red and green, but it was too late :-D
I am a little surprised to find such an issue in Google software as it is a topic for first semester CS undergraduates ;-)
I learned it in both my computer graphics course and my user-interaction design course as part of my CS degree (German university). None of these courses are required, but they are popular enough that the most important lessons were pretty universal knowledge among students.
I taught a ton of a11y stuff in my intro to web classes. Even did screen reader demos. I wasn't the only one. There are lots of folks teaching this stuff. Not a critical mass, but I'm so thankful it's happening. :)
If it was actually part of the curriculum I sure hope that it was in some design centered elective class. IMO if your CS degree spent time teaching that as part of the core curriculum, they missed an opportunity to put more Math, PL theory, and interesting algorithms in there, because there's more than can be sanely covered in any one curriculum.
Since it's accessibility based, it's more laudable than teaching CS students how to center a div, but it's not like it really requires a mentor of some sort to express the nuances of, right? Or even if it does, it's still design.
It's been some years now, but there were at least two courses in which it could have come up. One was an elective, HCI. The other was not; it was an introduction to graphics and audio, and since you need to understand the basics of human perception to understand compression in that area (JPEG, MP3), they talked about stuff like that.
A good CS degree definitely has the space to teach some basics in that area: mention the Gestalt principles (Gestaltgesetze), explain human perception a bit, and give an introduction to usability. You do not get a useful developer in the end otherwise.
Seen the same on maps where green is a good route and red is a dangerous route. Switched to a blue -> light blue -> yellow -> orange -> red -> black scale with great success. I also believe chemistry uses blue to mean safe, not green.
Even more off topic, but after Firefox had introduced this feature in dev tools (before Chrome did), they also recently introduced it as an extension (and it might even work on Chrome).
That's interesting! Do you know if it is possible to access this data from an extension? I would love an extension that crawls through the website and combines the data for all pages.
I don't think more powerful tools are the solution here. The problems and solution are obvious. The root problem is that many "product" teams will take any 10x improvement from tooling and adapt by being 10x more inefficient. Because they can.
Hardware (laptops, phones) and browser JS and rendering engines are all 10x faster than they were 10 years ago, giving a 100x to 1000x overall speedup. Think about that: it is truly incredible. But web pages are also 100x larger and less efficient than 10 years ago. This is absolutely not justified by productivity increases. This is just how the world / humans seem to work.
Developers tend to over time expand their programs to fill available memory and take up available CPU, too. My currently open text editor's memory footprint is apparently 36.6MB, and it is displaying a single file composed of ~1000 characters.
The scary thing is that I actually consider that to be remarkably small. My current Emacs session (recently opened) is just over 200 MB in size. And Emacs isn't even considered a hog anymore compared to the real hogs like Atom.
Remember when Emacs meant Eight Megabytes And Constantly Swapping?
> Serious question: are there any tools that can scan a full website for unused code and unused CSS?
From my limited knowledge of your situation this seems like the last move to take. I would try the following, in roughly this order:
Serve libraries (Angular, jQuery) from a CDN to at least have the chance of hitting a browser cache on a visitor's first encounter with your site.
Use a minifier to strip all the unnecessary comments and whitespace (in addition to other benefits from minimization).
Use gzip to compress the remaining minified files.
Those alone will likely significantly impact your load times. At this point, the remaining dead code should be relatively small and you can look into tools for surgically removing it.
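To make steps 2 and 3 concrete, here's a rough sketch assuming a Node build step with the uglify-js package (file names are placeholders, not your actual bundle names):

    // Minify, then pre-compress, a bundle (illustrative sketch only).
    const fs = require('fs');
    const zlib = require('zlib');
    const UglifyJS = require('uglify-js');

    const source = fs.readFileSync('bower.js', 'utf8');
    const result = UglifyJS.minify(source);   // strips comments/whitespace, shortens names
    if (result.error) throw result.error;

    fs.writeFileSync('bower.min.js', result.code);
    // Serve this with "Content-Encoding: gzip", or let the web server gzip on the fly.
    fs.writeFileSync('bower.min.js.gz', zlib.gzipSync(result.code));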
As for JavaScript, Google Closure Compiler does some dead code elimination (DCE, a.k.a. tree shaking in Lisp lingo) as an optimization, but it unfortunately doesn't tell you which parts of the code are dead. It also doesn't do DCE alone but bundles it with some aggressive size optimizations, so, again unfortunately, it cannot output otherwise-identical code with only DCE applied.
This part is more of a shameless plug, but I believe the public/industry/developers should be aware of what tools & techniques academia develops so that researchers' efforts can be more useful. There are some static analyzers for JavaScript such as JSAI, SAFE, and TAJS (disclaimer: I'm working on a project related to JSAI in the lab that developed it). These static analyzers can discover dead code if given all the entry points, however:
- They don't have facilities to report the dead code AFAIK (this is a trivial thing to add _in theory_: if there are parts of the program _unvisited_ by any of these analyzers, those parts are guaranteed to be dead). If somebody wants to add such a facility to any of the tools, they are welcome; in the case of JSAI, I'm willing to provide all the information I can.
- They are conservative (i.e. sound) tools and may find less dead code than is actually there.
- They don't play well with `eval` & friends in general, although they try their best[0]. Web frameworks rely heavily on eval-like, highly dynamic approaches to be generic enough, and that hurts the precision of such analyses dramatically.
> Serious question: are there any tools that can scan a full website for unused code and unused CSS?
We're starting to get there.
JS Modules give us static exports/imports, so we can statically analyze the dependency graph, identify functions that aren't used, and do tree-shaking to remove them. Rollup and Webpack 2 already do this, but there's more to go.
CSS Modules create a contract between your CSS and what uses it. Since you have to import it, and exports are static, you can understand which style files are imported where. The next step would be to understand which classes are used inside that module and strip those that remain unused. This is theoretically possible (provided you use a more 'statically safe' syntax), but tooling hasn't yet popped up to do this.
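A tiny sketch of why the static imports/exports matter (module and function names invented): a bundler can prove parseDate is never imported and drop it from the output.

    // utils.js
    export function formatDate(d) { return d.toISOString().slice(0, 10); }
    export function parseDate(s) { return new Date(s); }   // never imported anywhere

    // main.js
    import { formatDate } from './utils.js';
    console.log(formatDate(new Date()));   // tree-shaking keeps formatDate, drops parseDate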
Yes, it's called "tree shaking"[1]; modern Angular apps can be cut down to around 50-100 KB (assuming no heavy feature set). Find build tools that can do it, like Webpack or Rollup.
Yeah. Except you need to do Ajax and fetch doesn't cut it. So, new lib. And manipulate the DOM with something better than the browser API or you lose your mind. So, new lib. Then normalize browser events. Oh wait, you can do that manually. But now you are writing a new lib. Eventually the code will grow to be the size of jQuery anyway. Only not as well tested, documented and cached.
You can implement a minimal ajax interface in JS (no jQuery) in less than 30 LOC. But I grasp what you mean; it's not just that. There are other concerns and requirements, and you do not want to reinvent the wheel every time...
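For what it's worth, something in that spirit looks roughly like this (a sketch with minimal error handling, JSON-only for brevity; the endpoint in the usage line is made up):

    // Minimal promise-based "ajax" helper on top of XMLHttpRequest, no jQuery.
    function ajax(method, url, data) {
      return new Promise(function (resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open(method, url);
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.onload = function () {
          if (xhr.status >= 200 && xhr.status < 300) {
            resolve(JSON.parse(xhr.responseText));
          } else {
            reject(new Error('HTTP ' + xhr.status));
          }
        };
        xhr.onerror = function () { reject(new Error('network error')); };
        xhr.send(data ? JSON.stringify(data) : null);
      });
    }

    // usage: ajax('GET', '/api/items').then(console.log);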
Sure, if all you do is some onclick() and a few appendChild().
Plus, for the time you spend chasing compat issues, writing wrappers, choosing small libs and packing it all together, you could have built your site. For less than 40 KB.
People don't blink at including the React ecosystem, which is huge. But attack jQuery? Seriously?
> chasing compat issues, writing wrappers, choosing small libs and packing it all together, you could have built your site.
I don't know about you, but usually my projects last longer than a week, so those sorts of savings don't represent a significant chunk of my overall development time. Also, I'm not a shit programmer, so I can actually write a for loop that does what I want on the first try. I've spent more time trying to figure out jQuery's goofy syntax and what the hell it's doing with AJAX queries than it has ever saved me in development.
Network bandwidth usage is also not the only metric of size we care about. After it's ungzipped, it needs to be parsed by the JS engine, and more code = slower execution.
Programming is mostly creating functions and interfaces to decompose problems into reusable parts where reuse makes sense. Nothing wrong with creating a simple function for making XMLHttpRequests or for fetch API. Most of the time there's no need for the level of generality of jQuery interfaces.
Just by making a function you're not creating a library. Sometimes I find the DOM API verbose too. Nevertheless, the fix does not need to be an interface like jQuery that completely hides the DOM API under another interface. It may be enough to extend the HTMLElement prototype a bit or write a function to "compress" the code a bit and make it more readable.
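For instance, a couple of thin helpers (just a sketch; element ids and classes in the usage line are made up) remove most of the verbosity without hiding the DOM API behind a whole new interface:

    // Thin convenience wrappers over the native DOM API (illustrative only).
    const $  = (sel, root = document) => root.querySelector(sel);
    const $$ = (sel, root = document) => Array.from(root.querySelectorAll(sel));
    const on = (el, event, handler) => el.addEventListener(event, handler);

    // usage:
    on($('#save'), 'click', () => $$('.row').forEach(row => row.classList.add('saved')));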
Do you realize JavaScript is a prototype based programming language? It is a major and very useful feature of the language. There's no reason not to use it. There are some things to be careful about with interop if you need it, but that's it.
We've already learned from our past mistakes changing the prototype of the standard library - the Ruby community learned the same thing with monkey patching their standard lib too. The maintenance overhead far outweighs any immediate benefits.
So, you don't like the polyfill, created to the specification. Fine, but depending on who you target, you might not need it. Plenty of modern browsers have good support.
> And even then. You need to encode params manually.
Or use the Request constructor.
> And of course no middleware, so for special decoding or repeating headers, you end up writing a wrapper.
Request constructor will let you handle the headers quite nicely.
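Something along these lines, for example (a sketch; the endpoint and the custom header are invented):

    // Encoding query params and setting headers with the standard fetch/Request APIs.
    const params = new URLSearchParams({ q: 'shoes', page: 2 });
    const req = new Request('/api/search?' + params.toString(), {
      headers: { 'Accept': 'application/json', 'X-Example-Header': 'demo' }
    });
    fetch(req).then(res => res.json()).then(console.log);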
A polyfill is a short snippet to implement a missing feature. Notably almost all recent additions to JS and DOM have been designed to be polyfillable where possible. Generally that means under a thousand lines, by the way.
A library in this context means a set of functionality built on top of the platform, like jQuery, with its own API, etc. "Use a library" and "use a polyfill" are not the same advice.
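For example, the typical polyfill shape is just a guarded snippet (a simplified sketch; a real spec-compliant version also handles NaN and the fromIndex argument):

    // Simplified polyfill sketch for Array.prototype.includes.
    if (!Array.prototype.includes) {
      Array.prototype.includes = function (item) {
        return this.indexOf(item) !== -1;   // enough to show the idea, not fully spec-compliant
      };
    }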
Events + Ajax + DOM is most of jQuery, and it is most of what your webapp is. Now, I'm more of a fan of Vue.js + axios myself. But if I have to go minimal, 40 KB of jQuery is nothing and saves so much trouble. All the "you don't need jQuery" stuff is done by people building prototypes that are not tested on most browsers or assessed in terms of resources vs. reward, which describes a lot of the projects in the JS community.
Oh. Well that's true, but I think the point is more like you end up implementing some significant subset of that, except with homespun code that isn't tested as well.
> Except you need to do Ajax and fetch doesn't cut it
Curious as to why you think this? fetch is pretty convenient to use. We created our own wrappers around it, but that basically just consists of ajax = (url) => fetch(url).then(res => res.json())
Exactly, you had to write a wrapper. And the next project will do so. And if you want to save time, and improve reliability, you'll test it, and write doc for the 3rd. And the new team member.
CDNs are a lie. The privacy implications outweigh the performance benefits since there are many, many, many CDNs and many, many versions of jQuery. It's rare for users to benefit from caching.
Also, the performance benefits of free CDNs with which you have no service level agreement are unsubstantiated at best, but you'll notice very quickly how much of your site is broken when your CDN is over capacity.
Good luck supporting old android browsers, IE and dealing with the inconsistencies between browsers. It only takes one look at the jQuery source code to understand why it's not a good idea to "roll your own jQuery".
Simply don't target old browsers. If people don't use JS, or don't have a modern browser, feel free to go elsewhere. I don't target edge cases as I don't get paid to.
uncss[0] works well for me. It removes the unused styles and then I run the css through a minifier to finish the job. The css filesize is greatly reduced.
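In case it helps, usage is roughly this (a sketch assuming `npm install uncss`; the file names are placeholders):

    // Pass uncss the pages you actually render; it returns only the CSS they use.
    const uncss = require('uncss');
    const fs = require('fs');

    uncss(['index.html', 'about.html'], function (error, output) {
      if (error) throw error;
      fs.writeFileSync('clean.css', output);   // then run this through a minifier
    });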
Lol @ testers. How about getting rid of the project manager and testers, and hire 2 decent developers who have some experience with automated testing, continuous integration and continuous delivery instead? Productivity rises 300% I would presume.
Having browsed nearly JS-free for the last two-ish years, aside from a bit of tweaking at the start, I have to say that it has made things a lot faster, more stable, and just much more of a pleasure to navigate. If anyone is interested, I have found that two tools make life a lot easier:
- NoScript, which blocks execution of scripts save for those you whitelist. This means sites that reasonably require JS (e.g. YouTube, Google Maps) can remain functional.
- A bookmarklet that redirects you to the Google text-only cache of the current page, which is great for text-based articles that inexplicably require JS to show content.
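The bookmarklet is roughly this one-liner (a sketch; the cache URL format and the strip parameter may change over time):

    javascript:location.href='https://webcache.googleusercontent.com/search?q=cache:'+encodeURIComponent(location.href)+'&strip=1'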
My main issue with having no script on is for website checkouts. Especially when buying flight tickets. A lot of sites will introduce new scripts in the middle of a stepper that you didn't need to complete step 1. Now you can't submit step 2 or 3 and when you allow the new scripts the page reloads and you lose all your form data.
Presumably this occurs when the site uses a CDN for their js libraries (or even their main js). After a few days you start to spot which domains need to be enabled and which can be safely ignored.
Best I recall, even if you whitelist a CDN, it only comes into action if you whitelist the root domain of the site you are visiting.
Beyond that I guess there is GRE, where you can build "firewall" rules.
For example, I have Facebook blocked via it unless I visit Facebook.com directly, so they don't see me browsing all manner of sites via their like buttons and comment boxes.
This is everyday for me. Just whitelist the few sites that actually need it and most of the rest are fine. This should be the default. What do I care if some images don't load or CSS is slightly off? The main content is there. For sites that won't do anything without JS, I can consider whether I want to use them or not. Mostly not.

Fuck other people running code unnecessarily, without my permission, on my computer. Especially Javascript. It was an extremely stupid and costly decision to have a scripting language like this run by default in browsers... though great for doing things against the user's wishes and making money off of it. And security holes. And privacy issues. Etc etc.

The irony is that AJAX and SPAs were created as a response to the request/response model that was seen as too slow. Now it's these SPAs that are unbearably slow and buggy while the request/response model has only improved as hardware and networking have gotten better. I think if more people understood how the Internet works and what is actually happening, they would also turn off JS. No chance of that happening though.
> Fuck other people running code unnecessarily, without my permission, on my computer.
You gave them permission. Actually, you asked them to do it when you sent that GET request to their servers that returned a bunch of HTML that included script tags.
And what if the user does not download the .js file?
And what if the user downloads the .js file but does not run it in an interpreter?
What if the browser authors refuse to process what is enclosed in script tags and to run Javascript?
Maybe they believe it consumes too much memory.
Based on your comment (trolling?), I think maybe Javascript is not the problem. It is developers with thinking such as yours.
If we put inline js or links to .js files in files requested via HTTP, then we cannot claim that users "chose" or gave "permission" to fetch these .js files or to run them. As another commenter stated, those requests were not initiated by the user, but by the browser.
The selection of browsers is small, and made smaller by web developers who promote use of certain browsers, i.e., big fat interpreters that will run their needless Javascript. (The keyword is "needless". Almost always users can get the data they seek without running Javascript.)
These "favored" browsers present a massive memory hog and attack surface compared to the minimal tcp client required to retrieve text via HTTP.
Not really. They gave permission for the initial GET request for the HTML. The rest of the requests were inserted into the document by the programmer and executed by the browser. The browser assumed those links should be fetched automatically and executed. :)
The rest of the requests were triggered by the browser, not the programmer, because they were part of the initial HTML and the browser follows the HTML spec. The person complaining about it is technical and knows this; they were under no misconception that they were viewing a document format that doesn't include code execution. I'm having trouble even thinking of an example of such a format, because even Word documents can execute code by default.
If you dislike HTML, then don't use it. Use another format that serves your text-only needs. Gopher is a possibility, just don't click on any HTML files if you're using a web browser though.
Trivial to strip HTML tags and produce useful text.
Send all the extraneous garbage you want to the user. She can remove it and keep only what she wants. Meanwhile from the user's perspective you are wasting bandwidth. Who is paying for that bandwidth?
A request to an HTML document with no JavaScript src links, image links, or CSS links will not make multiple requests. I'm old enough to remember when I had to actively tell images to load.
The code for the page is written by programmers. Programmers cause browsers to make multiple requests. Often times without understanding "the spec".
I find myself doing the same if I'm not on WiFi. I'll tether my samsung phone to my laptop, disable JS on chrome, whitelist what I like, jump into incognito to temporarily whitelist some page that requires JS; when done in incognito, the site isn't permanently whitelisted, it's just for that incognito session.
I know! Laborious, but it must be done.
I got into this habit because of the insane number of websites that have gotten into autoplay, which essentially eats up all my mobile data bundles. I love Forbes, for instance, but it's essentially unreadable when on bundles.
Flash was obsoleted because 1) the iPhone didn't support it, 2) the iPhone didn't support it, 3) the iPhone didn't support it, and 4) JS replaced the non-DRM use cases (and now has even replaced that).
I thought it was fine on Android; I played many games on it that worked well even on those old underpowered phones. It was never good, but probably close to 0% of Flash apps were written to target mobile, so I wouldn't have the expectation of good.
When it worked it was... not great, due to UI issues, but it at least worked.
Trouble is it seemed to not consistently work. I worked at a flash game developer at the time (Zynga) and it would not always load our games even if it had done so previously - the difference was sometimes just refreshing a page.
May very well be the case. All things flash are permanently disabled on my browser. If a site requires flash, I would much rather ignore the content altogether no matter how juicy it is.
I did a lot of Flash/Flex work - it was the bee's knees of front-end dev in the days before jQuery, Firebug, Chrome etc., and was the most consistent across platforms, as the same plugin was used on each, as opposed to trying to make your web-standards-based app work on several incompatible implementations. HTML/JS/CSS didn't come close at that time, but there was hope that it would develop to make Flash obsolete - which it now has.
People always complained about how Flash "drained batteries" / "crashed browser" / "allows all this intrusive advertising". A case of blaming the technology for the way it was being used. People would say "I can't wait until Flash dies and we have the open web standards instead so we don't have these problems any more".
So, fast forward a decade or so, Flash has been replaced by open web standards and ... guess what? We still have those problems...
If I may, it seems to me like there are several possible points of discussion:
1) Javascript is bad (vs. Javascript is good and variations thereof)
2) The use a lot of websites make of Javascript is overcomplex, gratuitous, uncalled for, intrusive, etc.
3) Sites should provide some (even minimal) functionality to people browsing them without Javascript
#1 is largely a matter of personal opinions
#2 is a known, undeniable fact
#3 is where everyone could contribute
Personally I have browsed with javascript turned off by default for years (and I use another browser with the capability to display the site "fully" when really-really needed).
Particularly when browsing HN, I follow given links and often avoid a lot of pop-ups, ads, and what not, and from time to time, when I find a site that won't load AND I really think that the linked to site is worth it, I use the "other" browser.
What is curious is that most "new" products, SaaS, whatever, simply fail, showing just a blank page, which is exactly what a site shouldn't do.
I would gladly accept a "You need Javascript enabled in order to fully appreciate the site" kind of message, but only if some (basic, simplified how much you want it, even text only, etc.) content is shown nonetheless.
I simply cannot bear the "blank page" or the "Your browser is not supported, please download Chrome, Firefox, Internet Explorer and try again".
This makes me sure that the people behind the site in the best case have not understood the "showcase" nature of a website and in the worst case care nothing about user (please read as customer) experience.
So, if you create a site, consider that a part (even a small one) of your visitors may not have javascript enabled but are still potential customers who are somehow interested in your site's contents. By making their no-javascript experience so miserable, you are turning them away from your site, sometimes forever, which is the perfect negation of the reason you put the site together in the first instance (to make your content visible to the world).
I simply will not spend time tailoring a nascent or even mature product to the 0.0001% or whatever minuscule percentage of the population that turns off JS. If you turn off JS, you get a blank screen. Why should I accommodate your cohort? Why write unit tests for someone who takes a standard-ish client with above 1% market share and does something completely nonstandard with it?
I find that it's at least (usually more) double the work to build the JS-free version of something for a fraction of the user experience.
For example, imagine a forum where clicking the "edit post" button turns your post into a <textarea> editor and saves with AJAX so that you can continue scrolling once you make your edit.
To build the JS-free version, you typically need a separate endpoint, a new template, a redirect, and a less empowering editor. And all this for UI that 99% of your users won't see.
It just doesn't seem like an opportunity-cost-savvy way to build a website.
Then there are the UIs that take some real backsplits to accomplish without Javascript like a table that lets you mass-modify the rows with checkboxes and a <select> at the top that lets you choose options like Move | Archive | Delete.
You can wrap the entire <table> with a <form> such that all of your checkboxes submit to your mass-modify endpoint. That works without JS.
But since you can't nest <form> within other <form>, your <form method="DELETE" action="/things/{id}"> delete buttons can't appear in each row.
I have a hard time understanding how so many people in these threads can suggest that the opportunity cost of building for the 1% is always worth it. Does everyone just have basic blogs on the mind when they envision the labor involved in what they preach?
Non js users aren't expecting the website to work perfectly without scripts on. The bare minimum they ask for is that the site can be read without using scripts. Obviously it would be nice if all the moving components didn't need scripts either, but most people can accept that this is too much work.
Depending on the site, even just reading may be problematic. Imagine a webapp where all data is requested and sent as small JSON updates. This is faster and it's less data than full page copies, and can be more targeted to that user.
To build this feature for the tiny fraction of non-JS users, you're looking at duplicating all of this functionality on the server. This greatly increases dev time and complexity of the codebase. It means moving from an entirely API-driven platform to one where your server needs to understand the app logic as well.
Like it or hate it, Javascript is a web standard. You know that websites will break if you disable CSS -- the expectation should be the same for disabling Javascript.
That's an unreasonable expectation. Many websites have complex content and layouts, and no web designer will build their site to gracefully fall back with CSS disabled. That just doesn't happen.
It does happen when the developer cares to do it; I know several that do.
Website layouts need to be made linear (single column) for display on mobile and the content order usually reflects that anyway. They also need to make sense when using screen readers.
Disabling CSS and checking the result gives you a quick sanity test that the content order makes sense and that the markup is used somewhat correctly. It won't be pretty, but it should be useable.
For me, yes. I should be able to see at least the main content of your blog without JS. I don't care if it's perfectly formatted or styled. Just put the text in a div.
When I go to a blog post linked from HN and see a white empty page I will just move on.
Good. Please leave. I don't walk into a restaurant and demand that they cook my food without using a knife. Don't expect modern websites to bend over backwards for a (frankly rather juvenile) view of a modern web standard.
Code execution is part of the web, it's here to stay, the number of sites that support non-JS experiences will continue to dwindle.
You're not some special snowflake. Other people make real decisions based on cost/value models, and you cost way more than you're worth.
A blog post is one thing, but many web developers that venture onto this site -- myself included -- are building complex web applications that would simply not make any sense as a "read only" website.
You are more than welcome to disable JS, but the number of web developers that I've met that actually care about a non-JS user experience is next to zero.
> But since you can't nest <form> within other <form>, your <form method="DELETE" action="/things/{id}"> delete buttons can't appear in each row.
Of course they can. The <form> action submits via POST to an endpoint that disambiguates the selected action, and forwards inside the server side to the appropriate endpoint. Shouldn't be hard with a competent framework.
Then, when JS is enabled, you override the click action on the action buttons, so that the <form> action never gets used.
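Concretely, the enhancement step might look something like this (a sketch; the selector, data attribute and endpoint are invented, and the plain POST fallback stays in place for no-JS users):

    // Progressive enhancement: when JS is available, intercept the per-row action buttons
    // so the wrapper <form>'s full-page POST never fires.
    document.querySelectorAll('button.js-delete').forEach(function (button) {
      button.addEventListener('click', function (e) {
        e.preventDefault();                                   // skip the disambiguating endpoint
        fetch('/things/' + button.dataset.id, { method: 'DELETE' })
          .then(function (res) {
            if (res.ok) button.closest('tr').remove();        // update the table in place
          });
      });
    });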
When you add up all of the ridiculous hoops web developers have to jump through for cross-browser compatibility, a non-JS UX is the least of their concerns.
You're asking web developers to come up with two methods to accomplish the same goal, which doubles the number of ways the app can break. It's totally unrealistic for any moderately complex web application.
I think it depends on the user demographic. If you're actively developing a service that you hope to be used even in the most deprived areas with outdated equipment and poor connections, then it may be worth it. For example, outreach programs, charities, and emergency services.
If you're developing something that is more or less a needless, but useful, product -- your demographic is more than likely the upper 30% of earners in the world... yeah, probably not worth your time to develop for that 1% who won't be using JS (or have the capacity for only older versions).
In my experience, optimizing for perceived latency (time between user request sent and site becoming usable) inevitably leads to server side rendering, in geographically distributed availability zones. If you know this in advance, then you can choose to use tools that support code reuse across the client/server divide. This way, SSR is mostly a one time cost, and it can be implemented at the very beginning of the project, while everything is still nice and simple. It does impose additional constraints during UI development, but many of those are good practices anyway (proper use of HTML elements, prefer native form elements to fully custom inputs, use URL and history to manage state, etc).
It has been somewhat humbling to watch users on low speed connections use the app before the JS finishes loading; it shows everyone the true value (or lack thereof, for the passive content consumers who make up ~98% of the visitors) of the JS portion of the app, since they see it with and without JS. Sure, they can't drag a box on my fancy visualization, but they can see a snapshot of it, and use the peripheral controls as links to navigate to the desired application state. Then, as they are reading something that is useful to them, the ~2mb JS blob slips in and quietly brings the application to life.
Also, I love the way the 3rd party JS blobs that management mandated, are so obviously slower than the rest of the page. This gives me good ammunition for the arguments where I demand an HTTP API, not a JS blob, from our partners. They hate that, because then they can't stuff their own trackers into our clients. Fork 'em.
I use NoScript. If a site interests me I unblock it; I may also unblock one or two external scripts. What I encounter quite commonly is a list of 40 external scripts, which results in a closed tab. There is reasonable use of JavaScript and then there are web devs playing gotta-catch-them-all with tracking scripts and, at the risk of repeating myself, social media integration.
> For the same reason we build features in websites that work for blind people (aria tags everywhere), disabled people (accessible buttons), people who speak different languages (i18n), VIM users (GMail keyboard shortcuts), etc.
Great counter example, thank you. Accommodating people with disabilities is totally worth it though, IMO, as an example of a very minor cohort where it's not all about percentages.
I use Vimium in Chrome to be able to use the keyboard to navigate web pages. Gmail in particular conflicts with this by implementing its own keyboard shortcuts that interfere with my workflow.
Some people may like it, but I'm just saying that it's a feature that some people dislike.
Supporting a JS-disabled experience should not be your first priority. But, reducing needless reliance on javascript will improve your site for every user - making the page load / respond faster, use less CPU and memory, etc. It will also improve the development experience for you - faster to test, less crazy codebase to work in. It's really just architecting the site better.
Some people are so scared of "premature optimization" and "catering to a tiny number of geeks" that they trick themselves into making a huge JS mess. Most websites these days seem to be examples of this.
Because it's good engineering. You don't know what might break your JS in the future. You also don't know which user agents people will use in the future. I was quite surprised to discover that some "UC Browser for Android" had quietly racked up 9% usage[^1]. UC doesn't come without JS, but it has a limited feature set, and without feature testing there's a good chance it will break your site. It would have broken [mine](https://qwtel.com/hydejack/), if it weren't for feature testing and serving vanilla HTML. Also, and I know it seems impossible, but sometimes programmers ship broken code. Again, not relying on JS gives you a safety net to fall back on.
Nonsense. This is definitively YAGNI. Good engineering is accommodating the greatest number of people with the resources you have in the least amount of time. Thinking about some abstract future scenario is categorically not good engineering.
You also don't see car manufacturers change their designs so that they will accommodate drivers who've decided to remove their steering wheel, nor do car manufacturers design their cars for some possible future world without gasoline.
Consumers by and large know driving is dangerous, and safety features are something car manufacturers advertise to consumers. Not to mention that over the last 30-40 years drivers have come to expect these sorts of features in cars because accidents happen, and those features increase survivability, by a lot.
The same sort of consumer expectation doesn't exist for no-javascript. Major browsers default it enabled. The web has developed to be seriously built on top of javascript for the last 15 years. Consumers have expectations for how the web should work in 2017 (for better or worse) and no-javascript isn't anywhere on the radar.
Consumers expect cars to have airbags and seatbelts; consumers expect websites to behave as they always have, and that means javascript.
JS is necessary. But what do we do about the extensive tracking and extremely unoptimized requests, for which we already have numerous articles out there?
If there's no soft solution, disabling JS altogether is a good safety net for a lot of people -- yes, the percentage is close to a rounding error, but the absolute number can reach 1 million or more.
Counter point: many cars aren't made to accommodate people as tall as a professional basketball player. Should I call them all bad (or at least not good) engineering?
Counter-counter-point: I am above 180 cm (far from the 200+ where many basketball players are) and many taxis aren't tuned for me; I bang my head on the ceiling. So yes, I definitely call the manufacturers of those cars bad.
Statistical averages are a back-patting technique between managers, let's face that reality, shall we? They very rarely have the customer's interests in mind.
You have to draw a line somewhere, though. Every engineering decision has a cost, whether time/money, opportunity cost, or design tradeoffs, so it's not practical to cater to every single user you have.
Good engineering requires good bean-counting. Figuring out what you can do with infinite resources is not good engineering (except perhaps when funded by DARPA).
As any good civil engineer knows... “Any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands.”
The essential characteristic of engineering is making mindful, appropriate tradeoffs between various costs and benefits, including features, risks, externalities, financial costs, etc.
what a ludicrous analogy. engineering is an applied science, not a purely theoretical one where you can remove the human impact of your decisions from the equation. comments like this are a prime example of why engineers need to take more liberal arts classes
I agree with you about liberal arts and have the college transcripts to prove it. Unfortunately, your HN comments have been frequently uncivil and/or unsubstantive. Would you please not post like that? It's just what we're trying to avoid here.
It was a slightly exaggerated example. Nothing can be infinitely optimized of course, but the corner cutting in today's tech culture is a bit too much for my taste.
I browse with JS on (but I block ads and most tracking scripts) and only use a separate browser profile with JS off for sites that look fishy.
Requiring JS for a web app or interactive features in a website is fine.
A few years ago I had to use an old PC for a while, and disabling JS by default improved my experience significantly, but too many websites simply didn't display any content without JS, or didn't load images. The "web apps vs. web pages" debate has been beaten to death, but being unable to e.g. read a blog post without enabling JS just feels wrong to me.
>I simply will not spend time tailoring a nascent or even mature product to the 0.0001% or whatever minuscule percentage of the population that turns off JS.
This is reasonable logic, and I think it is an important point in getting more people into a technology movement, whether it is removing JS or anything else. A similar example is encryption: many users don't know or care to use it, so it is usually a side issue if mentioned at all in a company. But if the percentage of people who do use it increases, companies will have a greater interest in supporting it.
Currently, normal users just care that content loads when they click the link for a website, and perhaps don't understand or care that JS is running or what the consequences of that are. A more constructive way to get a better experience without JS is to ask others to also block it, to show companies that this is an issue, rather than just directly asking companies to add support.
It might make sense to think about who disables JS. Likely it's technically knowledgeable people with a concern for privacy and security. If this describes your users or customers you may benefit from at least some level of non-JS functionality.
Adobe Flash has been dead for many years. People were warned that they should not build Flash-only websites, and for the same reason people can browse without JS. What do you tell them now? Their sites aren't useful at all; you failed. No website or software should ever fail so easily. I truly support jaclaz. No message saying "you should use blablabla", or "enable this", or anything else: people just go away.
Even though I am a frequent advocate of leaning on more old school techniques for building web apps (even with new school tools), I vehemently disagree that this is a generally worthy cause. A small list of (I think non-controversial) things:
1. You browse without JS. You're a power user. You already know why the page doesn't work. A courtesy message would be nice, I guess, but it seems fairly pointless.
2. Should my HTML and CSS still target IE7-friendliness? IE7 users still have more market share than actual humans who browse without JS. They get treated like lepers in the Year 0, and we all seem to be okay with that.
3. Well-designed use of JS can help distribute computation load that might otherwise all fall on the server. A very contrived-to-be-important (but fairly common) example of this is scaling/cropping an image before it's uploaded. Or, even though I have some other reasons to hate on SPAs, page rendering itself. The no-JS user is more expensive to serve than the typical user.
4. Except for some extreme cases of JS reliance, search engine spiders are head and shoulders more accommodating to developers' choices than no-JS users are.
I just don't think that making accommodations for no-JS power users, who are not plentiful, and know full well how to turn JS back on, is valuable. Making JS bundles smaller and making them load more efficiently? That's a worthy cause for sure. But people who browse without JS at all are extreme outliers in the 1st world, and all of them know how to toggle it back on. Developing countries? If I were trying to serve those users I would do the research and make choices accordingly.
>1. You browse without JS. You're a power user. You already know why the page doesn't work. A courtesy message would be nice, I guess, but it seems fairly pointless.
Well, the point is that a lot of people post their sites on HN as "Show HN", and the intended audience is then (or should be) "power users". That also means that "power users" (few as they might be) are potentially interested in your site, yet you make sure that only a subset of those power users can access it (not necessarily in full; sometimes a simple text summary of what the thing is about would be enough for me to either discard it, as I do if the page doesn't load, or get interested enough to turn javascript on / use the "other" browser).
If you don't post your site on "Show HN" the missing "no-javascript message" is only a (small) lack of courtesy, if you do post it, it sounds (to me) like a plain lack of attention.
In the real world: you open a new shop in town and advertise in the local newspaper, "everyone is invited to the shop opening Tuesday at 7:00 P.M.", then you put at the door a couple of big guys who only let in those dressed "black tie".
I do have a suitable attire, but I am not going home to change into it, not without having at least the possibility to peek inside your shop and see what it is about.
Will I become a customer? Maybe yes, maybe not, but more likely not.
I would see it as more equivalent to a "No shirt. No shoes. No service." sign. The vast majority of people will already meet the criteria, and those that don't are already well aware that they are likely to run into problems.
This is only a valid analogy if in our real world I could make a tuxedo appear on my body with no more effort than snapping my fingers. (Which would be awesome.)
Honestly, if you're so insistent that you must have a message telling you that you need Javascript, then you're not the type of customer I want viewing my site anyway. Someone that picky will find another problem anyway.
For #2 (JS abuse by site owners) I wonder if it might be useful to set size limits, like: maximum allowed bytes per resource type (.js, .css, .*), per file and per page, possibly with some local-vs-remote adjustments.
You could just read the Content-Length header and drop the connection if the site is over the user's limits (at which point you'd want to display a status message instead of the partial site).
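In page-level terms the idea looks roughly like this (a sketch only: the budgets and types are made up, a real implementation would live in the browser or an extension, and AbortController support still varies):

    // Check Content-Length against a per-type budget and abort oversized downloads.
    const BUDGET = { 'application/javascript': 200 * 1024, 'text/css': 100 * 1024 };

    async function fetchWithinBudget(url) {
      const controller = new AbortController();
      const res = await fetch(url, { signal: controller.signal });
      const type = (res.headers.get('Content-Type') || '').split(';')[0];
      const size = Number(res.headers.get('Content-Length') || 0);
      if (BUDGET[type] && size > BUDGET[type]) {
        controller.abort();                      // drop the connection, as described above
        throw new Error(url + ' exceeds the ' + type + ' budget');
      }
      return res;
    }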
Some sites use a meta refresh inside a noscript tag, which shows a usable flash of the site for one second and then bounces me to a useless "You must enable JavaScript to use this site." page. From that flash I know this is technically not correct.
NoScript includes a toggle to disable exactly this behaviour (meta tags within noscript tags). meta refreshes in general are pretty annoying as a user though
One problem is that more and more, people expect the browsing experience to include a lot of responsive features that require a bunch of client-side JavaScript.
Except JavaScript blows, the emerging JS extension ecosystem that tries to fix that is still highly in flux (and mostly blows), so developers try to keep their server-side stuff NOT JavaScript, and then supporting server-side rendering means re-implementing everything twice.
Of course, this doesn't excuse document-centric sites like news and blogs.
1) It's irrelevant since it's the only language we have on the browser.
2) I think this is actually a cultural problem. Are users steering away from overcomplex/bloated/intrusive websites? If not, why not?
3) That's an argument similar to should a site support IE8? IMO it really boils down to the cost vs the benefit of supporting user without JS. And that of course will depend on your target users.
I honestly think the security/privacy arguments are the only valid argument against requiring JS as standard.
The idea of browsers sending data that the user has explicitly elected to send (in addition to, if we're being pedantic, that the server put there in the first place) is very compelling in this day and age - only problem being that in order for that to have any meaningful impact on web security in the general case, you'd have to more or less universally deprecate JS support, which would set the web in a very different direction indeed.
That would be a valid reason to provide a JS alternative depending on context, not so much a valid argument for JS-free alternatives as standard.
There will always be something you can do to reduce power consumption: expecting (or advocating) JS-free versions of all websites for that reason is a completely arbitrary line to draw.
You know - at one time we built plenty of websites - and in some cases applications - using little to no javascript (either it didn't exist, or it was only used for very minimal things) or any CSS (it didn't exist).
So what could we do today? That is - if we purposefully limited ourselves to just basic HTML, and server-side processing, without any (or minimal) CSS or javascript, and only using the (expanded) tag and attribute HTML we have today?
I don't know for sure - but I believe we could create some amazing things.
Think about it this way: Look at what demoscene people are able to create when they put extremely artificial challenges in front of themselves; we could try to do the same - and see what happens?
Perhaps there needs to be a demoscene-like competition space for this kind of stuff (how much can you get done in under 100k browser-side? 10k? 1k? Limit bandwidth to the backend, too, and maybe server-side size of code?).
Just some random thoughts - it might be something I'd have to try myself, but maybe this post might inspire someone to do it as well...
What kind of things are you imagining people would get done with such a limitation?
Recently I was using an expensive mobile connection and I discovered w3m, a text based browser (runs in the terminal). I was amazed at how much faster everything worked.
Most of the sites were still perfectly usable (w3m has pretty good HTML rendering) but some were very difficult to navigate.
My point is, without JS and CSS, the web is just... text.
And when you browse with w3m you realize that, actually, the web has always been text. It can be quite refreshing to get all the decorative crap out of the way.
--
edit: I now see you didn't say to remove JS, but to challenge yourself to see how much you can make in a tiny amount of JS. Such competitions exist!
WTF? You do realize most of the users on the internet don't even know what CSS/JS is and would be thoroughly displeased at being presented with a page with no styling. They don't appreciate that the rendering is done without JS/CSS. A properly built website is one that caters to most users, not developers or power users concerned about JS being enabled in 2017.
So far as I know there has been exactly one such ad blocker released, and it was promptly abandoned in favor of the kind of adblockers people actually want to use.
Sure, but he only has to wait for the cycle to swing around again. Browsers will officially turn into JS/WASM runtimes. Sites will balloon until instant delivery is no longer feasible. Browsers will respond by allowing sites to version themselves to aggressively cache client code. Eventually browsers will start offering site-specific-browser features to 'new-modern' sites to combat the extreme waste of Electron and the like.
Someone with political capital and influence in the developer world will realize that the web is still mostly text and design a browser optimized for that use case. It will be hailed as a revolution and people will start preferring it over the 'old-web' for accessing static content like news.
Then under pressure from developers pushing the limits of static content they will include a simple, limited scripting language that promises to facilitate basic interactive elements and start the cycle anew.
We should call it something trendy, maybe 'The River'?
Sorry, I won't. I don't trust you and I won't simply run your code by default. It's not like I actually need whatever your web site wants to provide; there are plenty of other things to do on the internet.
I'm amazed at how many websites there are on my phone that take 15+ seconds to load... and they've clearly been "optimized" for phones in the sense that the site reacts to being on a mobile browser and lays itself out for a phone... if I wait for the whole thing to load.
(I see this less often, but it's also amusing how many sites pop up their "hey, you've been here for three seconds so clearly we are an integral part of your life from now on even though the main page layout isn't actually done rendering, would you please like us on facebook, subscribe to our newsletter, and tell us your mother's maiden name and SSN? [allow this site to use location services [yes] [no]] [allow this site to push notifications to your home screen [yes] [no]] [allow this site to enter your bedroom at 3am to remind you how much it loves you [yes] [no]]" dialog where both the X to close out and the submit button is unreachable, because the position and zoom is fixed and the whole thing is too large.)
All Google products. But it is ridiculous to disregard JavaScript because some people can't write it properly. It is the same as saying that I wouldn't use a browser because I once went to a website that displayed a popup.
It is not ridiculous; machine-crippling JS malware is a click away and you will never know until you click. The safe strategy is JS off by default plus whitelisting.
> wouldn't use a browser because I once went to a website that displayed a popup
I have been using the internet nearly daily for about 20 years now. I can't recall a single time I clicked on something that gave me "machine crippling JS malware".
Barring some serious security gaps, JS isn't even capable of doing anything more than lock up the browser and maybe send some extremely minimal information about you back to someone else's server.
> Barring some serious security gaps, JS isn't even capable of doing anything more than lock up the browser and maybe send some extremely minimal information about you back to someone else's server.
I'd be very interested to see some proper analysis of the difference in speed between HTML/CSS vs just JS, it seems obvious to me that the former would be much faster but then again I know how often "obvious" things are wrong.
As a very general answer, JS webapps will typically be slower on the first load because they require an extra step to show the content.
Subsequently loaded pages or resources can be significantly faster in webapps, however, as they can request (or generate) exactly what they need.
The problem of slow initial loads is being addressed by tools like React by offering server-generated pages on the first load, and then transitioning to JS-based loading for subsequent loads. This offers the best of both worlds.
100% of users want websites to load faster though - that's a fairly big market share.
Obviously you can't make a web app without JS, but I still think you should minimise JS, or remove it if the site truly is static. It's just a better experience.
One thing to consider is that at larger companies, we already have a difficult enough time testing all of our site with JavaScript enabled. Netflix building a version that works without JavaScript is like asking them to build a second website, especially from a QA perspective. When you look at your users and find that .5% of them are not running JavaScript, how do you justify spending the money for that few of people?
The NoScript extension is the 5th most downloaded extension for Firefox (https://addons.mozilla.org/en-US/firefox/extensions/?sort=us...). If you want to show a blank screen or a broken website to those users, that's up to you. Some of them may really want to see your site, so they might try to figure out which of the twenty domains they should enable, and which of the thirty they should enable after that one doesn't help them. Most will likely click on the back icon and see what's next on the list.
Yes, and Firefox is the 3rd or 4th most used browser at less than 10% (which is bad). So the subset of actual Firefox users that use NoScript is absolutely not worth supporting.
How do you know the percentage? Some stuff you may just never know. I very consciously avoid websites I know use a lot of JS or are bloated on mobile. Mostly news websites.
I often read in subway, where I need to load the page in 3-4 seconds or I'm in the tunnel again without the signal. In some stations I only get 50kbit/s or so.
Most websites don't cut it so I don't even bother. Those who cut it though, I get engaged with more, because I can't go anywhere else without a signal.
In a nicer world I would write a scraper + an alternate website that hosts only the content and some simple navigation over plain HTML, preferably with as little CSS as possible. There's the annoyance of copyright though, so no way of making this public.
The reason is mobile. While only 0.5% of your users may be non-JavaScript users, it's highly likely that a significant percentage of your users are on mobile, and not optimizing for progressive enhancement means a significantly degraded experience for all of them whenever the JS fails to load or loads too slowly on a mobile connection.
I can stand slow loads; my phone has wifi after all. But JS bloatware doesn't even allow me to scroll without multi-second lags and random misclicks due to delayed event dispatch. I turned JS off not because of security concerns, but because most JS-only sites are not scrollable beyond the first screen anyway.
To help our (internal) customers better troubleshoot, we're adding HTML responses to our HTTP (web) services. Add a header, a footer, maybe some HATEOAS... it starts looking like a barebones web site.
Netflix is one of those sites that makes me want to run NoScript. It regularly makes Firefox unresponsive and eventually pops up a "this script is using too many resources" warning. When a large tech company can't create working JavaScript for a minimally interactive interface, then something is seriously wrong.
Less javascript would be an improvement for everyone, not just the noscript users.
Try doing some automated testing; it'll take care of most of the non-JS testing and a lot of the JS-enabled testing as well. Exploratory testing is another thing that QA really does need to be involved with, but if you can't test static content without human intervention, you have other problems that have nothing to do with whether JS is turned on or off.
They can't justify the cost. I wouldn't want them to, anyway. It would mean fewer improvements for their non-crippled site as resources are diverted for the sliver of people who like to be contrarian.
As much as I hate JavaScript, Google Maps gets a pass. I'm pretty sure the code behind it is thoroughly tested. We can just suck it up and turn JavaScript on for one of the most useful tools from the internet -- how hard is it to find something that whitelists domains to run JS anyway?
While I agree with you, I still want to give them points for admitting their failure. It's a cute image, with a cute line that tells you exactly what the problem is.
Compare that to sites where you just get a blank page.
Firefox has NoScript, and perhaps another tool I forget the name of (never tested it). Not sure if there is anything similar for Chrome (and the various offshoots/rebrands) or Edge, never mind IE or Safari.
If I recall correctly there was an old version of Google Maps which worked without JS.
It was before they replaced the old snappy and functional Google Maps with the bloated and slow app we have now.
It also had a habit of completely freezing an older (Core 2) laptop I was still using (with Firefox). Not just the browser, the whole OS. Had to power cycle. From time to time I'd forget and go back to Google Maps, where the cycle would repeat.
Why was I still using a Core 2? Because it worked fine for almost everything else. I got a better second laptop and no more issues, but still... there wasn't anything wrong with the first.
It's probably WebGL-related. Some Firefox update seems to have fixed it for me recently but for quite a while I could only visit Google Maps in a brand new Firefox instance, because any attempt to use it in a well-aged process would crash firefox and emit graphics errors to dmesg. You may have a poorly-supported WebGL stack.
Agreed. I just dusted off an old laptop with Ubuntu 14.04 (or maybe it was 14.05), tried to run a WebGL app I'm making off localhost, and the whole computer nearly locked up (after updating Firefox too). Chrome ran it at an easy 60fps with whatever version was installed on the computer three years ago.
Now that I've updated to 16.04, Firefox runs better than it did, but Chrome still blows it away.
There should be something as a fallback, though, surely? Take out all the functionality, by all means, but at least give me a map image. If you can't even get link-to-zoom working to display different images, just give me a map image based on my IP location; give me anything rather than nothing.
Remember the HTML map tag? [1]. There is a huge amount they could do to make maps usable without JS (or even CSS). Zooming and panning would be pretty easy to implement, but would involve reloading the page with new coords.
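For anyone curious, a rough sketch of what that could look like, assuming an Express server and a made-up TILE_URL static-image endpoint standing in for whatever map image service you have; every pan/zoom control is just an ordinary <a> link that reloads the page with new coordinates:

// Sketch of no-JS map panning: each control is a plain <a> link that reloads
// the page with new query-string coordinates. TILE_URL is a hypothetical
// static-map endpoint, not a real API.
const express = require('express');
const app = express();

const TILE_URL = (lat, lng, zoom) =>
  `https://example.com/staticmap?lat=${lat}&lng=${lng}&zoom=${zoom}&size=640x480`;

app.get('/map', (req, res) => {
  const lat = parseFloat(req.query.lat) || 52.37;
  const lng = parseFloat(req.query.lng) || 4.89;
  const zoom = parseInt(req.query.zoom, 10) || 12;
  const step = 360 / Math.pow(2, zoom);   // roughly one tile's worth of panning
  const link = (la, ln, z, label) =>
    `<a href="/map?lat=${la}&lng=${ln}&zoom=${z}">${label}</a>`;
  res.send(`
    <img src="${TILE_URL(lat, lng, zoom)}" width="640" height="480" alt="map">
    <p>${link(lat + step, lng, zoom, 'North')}
       ${link(lat - step, lng, zoom, 'South')}
       ${link(lat, lng + step, zoom, 'East')}
       ${link(lat, lng - step, zoom, 'West')}
       ${link(lat, lng, zoom + 1, 'Zoom in')}
       ${link(lat, lng, zoom - 1, 'Zoom out')}</p>`);
});

app.listen(3000);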
Can you do lazy loading with just HTML and CSS? You can load the map around the GPS coordinates you start on, but what if you go 20 miles north? How do you: A) figure out what are the new GPS coordinates that your viewport is looking at? And B) load the map within the viewport centered around the new GPS coordinates that you're looking at without ajax?
It's funny how devs feel threatened by a non-JS trend. Even when their favorite frameworks are providing tools to accomplish minimal functionality when JavaScript is off.
Server side rendering has been a big priority in React, Vue, Redux, ReactRouter, etc.
As a rule, people tend to support the status quo--even when the status quo is problematic. I would argue that there are good reasons for thinking that the current widespread use of JS is problematic. How, for example, is allowing the execution of arbitrary code in your browser not a major security flaw? Didn't we learn anything from Java applets and Flash?
Usability is also a problem. JS heavy sites often break standard web expectations such as the back button. They are often overly complicated, favor design cleverness over usability, and don't work on mobile devices.
Sure, JS is great for web applications, but is it really needed for every single site? Whatever your view, it is at least worth discussing whether or not the status quo is misguided.
I'm not threatened by no-JS, but I'm also not going to maintain two code bases or thoroughly complicate things for such a small percentage of people. Yes, if you have a news/blog site you should probably support NoScript, but for web apps, I'm sorry, it's just not worth the time and effort.
There are ways where you don't have to maintain two code bases. For example with React, when doing SSR, you can import your components from the client and use them on the server, allowing you to keep minimal functionality (links, text, images, etc.) when JavaScript is off.
There is one function on the server that you need to use to render any component you imported.
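For reference, a minimal sketch of that (that function being ReactDOMServer.renderToString; the App component and file paths here are made up):

// Minimal server-side rendering sketch: the same component the client bundle
// imports is rendered to plain HTML on the server, so the page has content
// even before (or without) any client JS. App and the paths are hypothetical.
const express = require('express');
const React = require('react');
const ReactDOMServer = require('react-dom/server');
const App = require('./src/App');   // shared with the client build

const app = express();

app.get('*', (req, res) => {
  const html = ReactDOMServer.renderToString(
    React.createElement(App, { url: req.url })
  );
  res.send(`<!doctype html>
    <div id="root">${html}</div>
    <script src="/bundle.js"></script>`);   // client JS is an enhancement, not a requirement
});

app.listen(3000);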
Are there any JS frameworks that make it easy to build SPAs that degrade gracefully when JS isn't available?
I know there's a lot of work done on making server-side rendering work with React / Vue / whatever, but I'm wondering if there's a framework available that asks for some tradeoffs in how your SPA is structured in order to maximize the amount of functionality available when scripting is off (similar to how Redux asks you to make state-management tradeoffs in order to get hot-reloading, dev tools, etc.).
For example, you'd probably want, at a minimum:
* By default, all code related to fetching or subscribing to data from a server is tied to URL routing.
* By default, all code related to sending data to a server works with standard webforms.
The closest analogue to what I'm thinking about is Turbolinks and Ruby on Rails, but I'm curious if there's some equivalent in JS-land because you'd get less context-switching when adding extra JS-only SPA interactions on top of the core JS-optional form-based interaction.
If you use Redux and React server side rendering, you can quite easily wrap forms/interactions on your page with a <form /> tag and a hidden input with the Redux action name, and then handle the actions/reducers on the server. I've played around with it in the past and managed to convert a SPA which was using redux-form heavily to work fully without Javascript enabled, and it was only like ~200sloc.
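A rough sketch of that pattern, assuming Express and a made-up TODO_ADD action; real code obviously needs validation, per-user state, and so on:

// Sketch: the form carries the Redux action type in a hidden input, and the
// server runs the same reducer the client would, so the form still works with
// JavaScript disabled. The reducer, action name and routes are hypothetical.
//
// Markup rendered by the shared components:
//   <form method="post" action="/todos">
//     <input type="hidden" name="action" value="TODO_ADD">
//     <input name="text"> <button>Add</button>
//   </form>
const express = require('express');
const { createStore } = require('redux');
const reducer = require('./src/reducer');            // same reducer the client uses

const app = express();
app.use(express.urlencoded({ extended: false }));    // parse plain form bodies

app.post('/todos', (req, res) => {
  const store = createStore(reducer /*, state loaded for this user */);
  store.dispatch({ type: req.body.action, payload: { text: req.body.text } });
  // persist store.getState() somewhere, then...
  res.redirect('/todos');                             // classic POST/redirect/GET
});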
I wouldn't say it's "quite easily" done. The most obvious challenge is components that depend on data loaded asynchronously: If the only thing the component renders is a "loading" message while it fetches something from an API, that's all that the server is going to render as well. So you have to figure out a way to render asynchronously on the server-side, and then you also have to come up with a generic way to handle cookies, redirects, fallbacks for interactive features, etc. So it might be easy for really simple apps, but it gets incredibly involved as your app becomes more complex.
The nice thing about this, though, is that most of the work happens up front, and as long as you develop the app within the proper constraints, server-side rendering is more or less "free" after that.
Agreed, but the overhead of coming up with "generic" ways to handle all of these interactive features like data fetching, redirects, etc, isn't that big (at least from what I've experienced).
It's good to have a standardised way of doing all of those things anyway, and when you do, you can make sure they all work on the server.
I don't agree. Dealing with asynchronous API calls in a generic manner is, in my experience, non-trivial. It's been a little while since I've tried it, so maybe things have changed, but when I tried this, the React server rendering API converted the immediate rendering of the component to a string, which means you effectively have to pre-load all the relevant data outside the React app, which may be really difficult depending on how you've architected your app.
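To make the problem concrete, a sketch of the "pre-load outside the app" approach; fetchArticles() and ArticleList are made up:

// renderToString is synchronous, so any async data has to be resolved *before*
// rendering and passed in as props (or preloaded into a store). Otherwise the
// server just emits the component's "loading" state.
const React = require('react');
const ReactDOMServer = require('react-dom/server');
const ArticleList = require('./src/ArticleList');   // hypothetical component
const { fetchArticles } = require('./src/api');     // hypothetical data loader

async function renderArticlesPage() {
  const articles = await fetchArticles();            // resolve data up front
  const html = ReactDOMServer.renderToString(
    React.createElement(ArticleList, { articles })
  );
  // hand the same data to the client so it can rehydrate without refetching
  // (a real implementation must escape this to avoid </script> injection)
  return `<div id="root">${html}</div>
    <script>window.__PRELOADED_STATE__ = ${JSON.stringify({ articles })}</script>`;
}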
You can achieve that with any view library which can rehydrate from a server-rendered version and can support your minimum list of requirements. The specifics depends on the libraries you're using.
I was experimenting with this [1] a couple of years ago with React, react-router and a form library based on django.forms. The key was to handle form submission in components which behaved like they were servicing a request on both sides, passing in the POST body on the server or form data extracted in the same format on the client.
There's some not-so-fun plumbing to make sure everybody has the data they need to re-render with user input and validation errors when they happen, and to handle redirect-after-submit properly on both sides, but I initially had to hack some of the necessary plumbing into a pre v1.0.0 react-router so the specifics are probably no longer relevant.
I don't have a specific answer to your question. What I think would be much more sensible is to design your website without JS, and then sprinkle it here and there.
Ex: Make a request to an API upon page load. Populate tables or whatever you want. Then if JS is enabled, load a script that makes AJAX calls to the API to update the data every x seconds. That way your site gets "enhanced" by JS and is completely usable without it.
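The enhancement script itself can stay tiny. A sketch, assuming a made-up /api/rows endpoint that returns the same rows the server already rendered into the table:

// enhance.js -- only loaded when JavaScript is available. The table is already
// fully rendered by the server; this just refreshes it periodically.
// The endpoint, element id and row shape are hypothetical.
const tbody = document.querySelector('#data-table tbody');

async function refresh() {
  try {
    const res = await fetch('/api/rows');
    if (!res.ok) return;                 // keep the server-rendered rows on failure
    const rows = await res.json();
    tbody.innerHTML = rows
      .map(r => `<tr><td>${r.name}</td><td>${r.value}</td></tr>`)
      .join('');
  } catch (_) {
    // offline or blocked: the page still works, just without live updates
  }
}

setInterval(refresh, 30000);             // update every 30 seconds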
Yes, the issue is then that you have to write two pieces of DOM rendering code (some view code in erb/jinja/whatever to populate a template on the server, and a piece of JS to update the table afterwards).
And if you want to allow JS-enabled browsers to transition between views (and different data subscriptions) without having to reload the entire page, then you'll have to write two pieces of routing code (one route in Rails/Django/whatever and another in JS).
And if you want to allow JS-enabled browsers to post form data back to your server and make minor changes to the page (e.g. edit one field in a table) without reloading the entire DOM, then you'll have to write two different routes on your server -- one that processes the form data and returns the HTML for a newly rendered page, and one that processes the same form data and returns only what's changed.
You could, of course, use server-side-rendering JS to DRY things up for issue one. I'm more curious if anything reduces the boilerplate for issues two and three.
Fastboot (and SSR in general) is nice, but this only ensures that no JS is required for the initial page load. For a true no-JS experience, you'd also need to handle (among other things) posting data back to the server without JS.
I turned off JavaScript and Cookies for all sites a few months ago and only whitelist (or open in another Chrome profile) when they refuse to display or need an authenticated session.
There are some sites that require JS to render, but the great thing is that the majority of pages I visit work just fine as plain HTML/CSS. In general I feel less distraction from popups, overlays, ads, etc.
I use uMatrix as a substitute for NoScript + uBlock, and while it's a pain to get some sites working manually without bulk-enabling everything on them, I'm mostly happy about it, and the web looks better for me.
I use Noscript and uBlock Matrix together. It's a little annoying to have to double-whitelist pages, but they offer some non-overlapping functionality. uBM has finer whitelisting control, while NS has finer control (and better visibility) over specific kinds of objects on the page.
However, neither offers a convenient UI for whitelisting specific scripts, which I see as an important feature in the Year of the CDN. I think NS technically has that feature, but it's hidden away in the preferences.
I tried using uBlock (EDIT: uMatrix) for a while, but I felt like it was a lot of hassle, and most of the time it still wasn't granular enough to actually block the things I wanted to without breaking sites. It seemed like usually all of the bullshit javascript making sites sluggish came from the same domain as the few bits I might actually want. (Or at least needed to make the site useful.)
> wasn't granular enough to actually block the things I wanted to without breaking sites
I don't understand the criticism: it's no less granular than NoScript. If considering only the dynamic filtering panel[1], it can be argued it's more granular given that one can block inline-script tags only, and also one can block/allow on a per-site basis.
Static and dynamic URL filtering of course allows you to filter on a per-URL basis.
Mozilla is partly to blame here by taking away the "Disable Javascript" control, which effectively told websites they could go crazy with JS and ignore the "no-JS" users.
What might be helpful now is something like a MAX-JS-BYTES=n header that tells site owners that the useragent will ignore any JS beyond the first n bytes on the page, and will not request more once that limit is reached. (I know this is a terrible idea; hard byte limits could never work. But somehow site owners need to be made to feel more immediate pain when they screw their customers with too much JS bloat because they've hired lazy developers, or because their idiot VP of marketing is all about TEH BLING.)
>Mozilla is partly to blame here by taking away the "Disable Javascript" control, which effectively told websites they could go crazy with JS and ignore the "no-JS" users.
One might say, for other reasons, that Mozilla is 100% to blame for all of it.
> What might be helpful now is something like a MAX-JS-BYTES=n header that tells site owners that the useragent will ignore any JS beyond the first n bytes on the page, and will not request more once that limit is reached.
I love this idea. Web development seriously needs some discipline. Myself included.
Uh oh, but that would mean frontend developers have to reason about their resource usage, and have a vague idea how many programs they are even using! Javascript usage is simply out of control, things load themselves automagically, asynchronously, unpredictably.
I generally agree with what he is saying, but the web has moved beyond the point where it's reasonable to expect websites to have JS fallbacks. It is just too prominent nowadays.
I've always been a JS hater, though I have to code it frequently. With that said, I find the arguments against the language becoming more and more obsolete as the years go on.
It's not unreasonable to expect hyperlinks or images to work without JS. Fancier stuff sure, but basic navigation through a content only site should be functional without it. This is what HTML does and you have to be profoundly lazy to screw that up.
This is really the problem I think. It's not as much fun to create something that functions properly.
The best FE devs make sure that they create functioning web pages first. Then they go ahead and make them better with some JS. Only the least competent/laziest developers create JS only sites (this is discounting things like JS games etc where there's no text or images, or pretty much anything that can be done in HTML).
I find it frustrating when these devs try to paint the web as something more advanced, or somehow different (seriously, it's not a special snowflake) in order to try to validate their preferred way of working. It's unprofessional to their employers, it's a disservice to themselves and it's a big "screw you!" to their users.
My favorite pet peeve is JS links, that aren't actually <a> links, but spans or divs with click handlers, so there's no way to right-click->open in new tab or ctrl-click->open in new tab. It's like intentionally doing something that is harder, and works shittier; I don't get it.
My favourite pet peeve is the lyrics wikia, which has some css embedded in a <noscript> tag that blocks out the lyrics and replaces it with a "sorry! This site requires javascript" message.
It's not becoming obsolete. Because of legacy code, legacy docs, and the fact that the old behavior is still valid JS, beginners are still completely lost because of this bad design that has never been fixed, only compensated for by adding things.
Now add the fact that:
- the ecosystem is exploding like a holy grenade, only not as funny
- having to learn what "rebinding this in the closure's callback" means is still core to understanding the language
- you have so many ways to do things, especially asynchronously, and yet no simple way to do simple stuff like removing an item from an array
- pre-processors are used everywhere and you have so many of them
- safari is becoming the new IE6
We are still very much paying the price of the terrible language that is Javascript.
No, it hasn't really. I like JS, but I think the attitude that it's a requirement now really needs to go away. I'm hoping it will when the current generation of more junior FE devs start to grow up and realise their work is about the users and their clients, not about what they want to do.
The article shows quite nicely that it's completely possible to create great websites without the requirement for JS to work well. Make it an enhancement and your users will love you for it. Make your users love you, and your employer/client will love you as well.
Not for me, I only use web apps when there are no native alternatives, even though I also do web dev when customers require me to do so.
I take this to extreme on mobile apps, only native ones get into my devices. If it looks like a web widget after being installed, it gets the boot shortly afterwards.
Third-party scripts aren't just a performance problem; consider privacy/trackers as well. I gave a talk at LibrePlanet 2017, where one of the things I showed was the drastic effect disabling JavaScript has on third-party scripts, graphed by Lightbeam.
Well, while I like what you can build with JS, I often don't like what people build. For example, npm makes it so easy to import megabytes as a dependency to your project while you might only need a few kilobytes. And some devs simply do not seem to care about that.
On the other hand, JS allows us to build great user interfaces that are better to use than the old-school point-and-click adventures we used to have. And if you do it right, the performance is also OK, but it is also not hard to fail and ruin the whole user experience by rendering the scene twice every frame ;-)
I actually liked the old point and click adventures better, because you always knew exactly what to expect. There was a fixed suite of tools provided by the browser and that was that. Now... well... every dumbass designer with a bright idea is free to inflict some bit of novelty on you. Fads sweep through the design community and suddenly old sites which worked fine have to be "updated" to work differently, and you have to learn them all over again, for no benefit.
Disclaimer: I am, and have been for the last 5 years, a JavaScript developer
My current off-work project is a web application, that is built with Elixir. I've written 0 lines of JS. It feels really good. This is my way of internalizing some of the points made here[0]
Ironically, this article confirms that investing in no-JS functionality is becoming less and less important, with those major sites already forcing the user into JS. Why then would you invest your time and money to cater to the "eccentric" 1%?
There might be 1 or 2 billion "eccentric" people that rely on old/cheap devices and networks, plus the visually impaired that need to use braille interfaces with little support for dynamic contents.
You’re not wrong about screen readers, no. I wrote a blog post about this last year[1] but I’ll copy and paste a list of possible reasons from it here:
- script hasn’t finished loading, or has stalled and failed to load completely
- application route is up, but the route to the Content Delivery Network is broken
- user has chosen to install a plugin that interferes with the DOM
- user has been infected with a plugin that interferes with the DOM
- user’s company has a proxy that blocks some or all JavaScript
- user’s hotel is intercepting the connection and injecting broken JavaScript
- user’s telecoms provider is intercepting the connection and injecting broken JavaScript
- user’s country is intercepting the connection and injecting broken JavaScript
- JavaScript code has functions not implemented by your user’s browser (older browsers, proxy browsers)
NoScript definitely works without XUL support and is the fourth most popular extension listed on the compatibility page here: https://www.arewee10syet.com/
e10s is multi-process, I thought, which is a different animal from the new plugin framework they are pushing. I'm pretty sure I read NoScript will have problems, but that might have been with regard to Tree Style Tabs instead (which is shown to work in the multi-process branch).
What benefit could Google possibly get out of making Google Maps work without JavaScript? The main takeaway I have here is that older applications are more likely to support a no-JS scenario, and I'd imagine that's because they can just fall back to older legacy code.
I negotiate the javascript issue by using two web browsers -- w3m for reading hypertext and Firefox when I want javascript. I'm fine with my bank or an online store expecting to use my browser as a thin client, but if you require javascript to present your blog, I'm going to find a different blog to read.
This policy makes it pretty much impossible to use social media, because social media is basically blogs-that-require-javascript. I see this as a beneficial side-effect.
http://mbasic.facebook.com
If you ever do miss social media, this gets you a JavaScript-free, minimalist version of Facebook. I use it all the time on mobile and get hours more battery compared to their app or standard mobile site.
I have not used Javascript in years. I boot to textmode and use a text-only browser.
There are exceptions when I use JS: when I have to use someone else's computer or manage certain accounts using the web. The latter not being by choice.
In the 90's, there was an attempt to allow web developers to run their code on the user's computer via the user's web browser. This was called Java applets.
There was no limit to the pie-in-the-sky promises that developers were making back then. All based around the web browser and Java.
Not surprisingly, Java applets failed. I think there were some security issues. Maybe. Not sure.
Javascript reminds me of Java applets.
I am not sure if there are security issues with Javascript. From discussion surrounding this language, today we are asked to believe there is nothing that cannot be accomplished with a web browser and some Javascript. Unlike Java applets, this time, it is real. I think.
But, as far as I know, I too can do anything I want to do without using Javascript. And without using Java applets. If there are websites that truly require Javascript (see below) in order to access data, I cannot find them.
1. require
Here, "require" means that the data could not be served to the user via any other e.g. more direct mechanism than through a series of steps that includes Javascript being run by a web browser. Here, "require" does not mean the choice by the website owner to use Javascript in this way or a message displayed to users that suggests "Javascript is required". The meaning here refers to technical capabilities not design choice.
Re [0], note that Google Maps on the other hand seems to handle lack of JS very well. I guess it can be blamed on different teams in the same big company.
If by "seems to handle lack of JS very well" you mean "does not work at all", that makes sense. Not being sarcastic either, it seems they decided to take an all or nothing approach and force the user to enable JS to use Google Maps, or instead see this page: https://i.imgur.com/Qc156Ds.png
Maybe the maps product wholly depends on JS for all functionality?
Yeah, my point is that they put actual effort to show you a nice-looking page when viewing Maps without JS enabled. Compare to the Chrome download page, which blanks out. This is orthogonal to whether or not a service can/should work without JS enabled.
HTML isn't just simple to the programmer, it's also simple in other ways which gives it unrivalled accessibility (to disabled folks, different browsers and devices, web crawlers). Not sure any SPA framework can compete.
Have you tried relying on SPA crawling in practice? As far as I know it's still messed up and hasn't changed in half a decade, but maybe I'm missing something.
> …it’s a sad indictment of things that they can be so slow on the multi-core hyperpowerful Mac that I use every day, but immediately become fast when JavaScript is disabled.
> It’s even sadder when using a typical site and you realise how much Javascript it downloads. I now know why my 1GB mobile data allowance keeps burning out at least…
As a developer, I understand not wanting to devote a large amount of time catering to the insignificant portion of your audience that disables JavaScript.
And I love JavaScript, but I cringe at the thought that we are needlessly slowing things down with MBs of JS that change too often to be meaningfully cached.
If you are serving MBs of JS after gzip, please make sure that I am not going to have to download all of it every time I pull up your site on my phone every week or so.
I just did the same experiment and, coincidentally, also checked feedly, only to receive a totally blank page. However, a little digging revealed an intent on feedly's part to handle this case. They have a 'feedlyBlocked' element, with content, that is simply fixed to "opacity: 0"; obviously, without javascript enabled, they can't then dynamically display that content. Another approach required.
As an aside, it's very difficult to contact feedly directly, but they do have a uservoice account (https://feedly.uservoice.com).
I surfed without JS on for a long time. I found that it made the web very very pleasant!
What totally breaks is when you do a complicated bank transfer or work on some older site. They often transfer you across multiple domains, so whitelisting doesn't work, as you can't anticipate what site to whitelist (when you check your points, bank.com redirects you to bankrewards.com, things like that).
What's really needed is a chrome extension to let you turn JS off and on, and if you turn it on, it can be set to auto-turn off after some time. What would be even better is if Chrome offered this behavior in an easily accessible way.
My credit union's web site hilariously misinterprets my browser's portrait-mode aspect ratio and lack of Javascript as evidence that I must be using a phone, and serves me up a "mobile" site which is lighter, simpler, faster, and generally more pleasant to use than the normal one. Yes, there are a couple of extra pages to click through when transferring money from one account to another, but the pages load so quickly that it's still actually faster than the Javascript-based menus in the desktop site.
Every time I set up a new browser, NoScript is the second plugin I install, after uBlock Origin. Between the two, the web is far less annoying than it used to be.
There is a famed quote that goes something like this:
"An engineer is someone who can do with ten schillings what any fool could do with a pound."
By this definition one could argue Javascript developers are not engineers.
If the user can get the desired data from a website without having to run 4.5M of js in a large browser, but the developer "needs" 4.5M of js and a team of people to deliver the data, then who is the "engineer"?
This reasoning, i.e. "supporting" people who are not using JavaScript, makes little sense.
It is the Javascript and website complexity, the embellishment of data with needless garbage, that necessitates "support" i.e. work for developers. It creates more work.
Serving text without embellishment requires less work, not more.
At some time or place in every website development project, data exists in plain text or some other raw form.
Some users might just want that data as it is, without embellishment, before web developers even start working.
This requires little if any "web development" work. Basic HTML can be autogenerated with ease.
Or the user can just use a link to a JSON file and generate the text/HTML themselves. For example:
#!/bin/sh
# usage: $0 section
# sections: world, etc.
# Fetch the NYT section-front JSONP, cache it next to this script,
# then print just the article URLs (the "guid" fields).
curl -4 -o ".$0" "https://static01.nyt.com/services/json/sectionfronts/$1/index.jsonp"
exec sed '/"guid" :/!d;s/",//;s/.*"//' ".$0"
For those who "want" and "demand" it (the 99.9% as you would have us believe), web developers can also create a whiz bang version of the site that encapsulates this data in a cutting edge "web app".
Meanwhile we are having this discussion on a web site that does more or less exactly what I am suggesting. It can be easily autogenerated. The HTML could have been written in the 1990's. It is trivial to remove the tags and have plain text.
I guess we are the 0.1% that would ever access a website that did not need Javascript?
The fact that the popular browsers run all manner of javascripts and process all sorts of tags does not obligate anyone to use them. Whether it is web developers or users.
That argument only makes sense for informational websites with heavy focus on textual data. But the article looks at web apps like google maps. Good luck creating a static version of that.
+ I'm not saying 99.99% WANTS javascript, I'm saying 99.99% HAS javascript. And the economics of supporting the other 0.01% often don't add up.
So people can arrogantly claim here: 'If your site requires JavaScript, I don't want to use it.' But it's much more the website owner saying: 'If your browser doesn't support JavaScript, I don't want you as a user', since it probably costs more to have you as a customer than I will make out of it.
I download static maps. I prefer maps as images or images in a PDF. But I understand your comment. Indeed it is the textual data that is at issue with respect to gratuitous javascript use.
A large amount of data, maybe even the majority, is already textual. For this data, from a user's perspective there is no valid argument that "supporting" users who want to read text requires additional work. Serving textual data certainly does not require JavaScript.
The "arrogance", if any, is displayed by a website owner who for some unexplained reason does not want to let any users (e.g. the 0.1%) read text without running JavaScript. As if any user who does not care whether the website makes use of the latest popular browser features is a user they do not want. But maybe that is not really the reason.
It is the last sentence in your comment that is the interesting one. Perhaps the use of javascript is designed to take something from the customer e.g. personal data via some discreet mechanism that requires running code on the user's computer. If this is true, then one might argue it becomes clear why website owners do not want users who do not use javascript. Because the website cannot take something from the user where the mechanism of extraction is powered by javascript.
If that is the case, it may inform the user about the website e.g. any website that tries to force users to use javascript may be one that is trying to extract something from the user.
And as such may be a website that the user should avoid.
As a side note, websites routinely serve content as text without the use of JavaScript because that is how they are most efficiently indexed by Google. Thus the website must "support" Googlebot. Some users may be quite satisfied with the "Googlebot version" of the website.
Feedly being completely blank might be bad, but I had that debate with myself when building my feed reader: do you actually want to use a feed reader without JS? Things like marking an article as read would be really cumbersome, and automatically marking what was just read on screen would not be possible.
I think it's pretty critical to note that, with only one exception, every website that works well without JS is what HTML was built for - static documents - and every website that doesn't, what HTML is famously bad at - apps.
More people should be aware of static blog generators.
It's a kind of software used to make websites where you write your posts in a markup language like reStructuredText or Markdown, and once done a script transforms it into HTML and CSS. Various plugins exist, including for comments, although this tends to sacrifice your "staticness" in varying degrees, especially if you just use Disqus.
While that's true it's not reasonable to assume the user has JS enabled. Ad-blocking plugins that also block JS are increasingly common.
> and it's no longer reasonable to expect all websites to work very well without it.
The question should be: what does "work very well" actually mean?
For a closed application that requires the user to log in, I don't really care. Users won't use the app if it doesn't work for them, so it's entirely up to the developers to build something the users want; talking about web applications is silly. For a public-facing website, though, I would expect it to at least work without JS, even if it doesn't present as good an experience as it would with JS (which is nothing new; it's just progressive enhancement). The problem is websites that present an essentially meaningless page (blank, no content, populated but broken, etc.) to users who have JS disabled. That's where work needs to be done.
Instead of building for 0.5% of users that can just enable JavaScript, why shouldn't I invest the time into building a new feature, writing more unit tests, or writing an app for Windows Phone?
For the same reason we build features in websites that work for blind people (aria tags everywhere), disabled people (accessible buttons), people who speak different languages (i18n), VIM users (GMail keyboard shortcuts), etc. We choose who we build for. If you want to ignore 0.5% of your users then that's your choice, but don't complain when you submit something to HN and get a page of responses saying it doesn't work.
There are more places to have a browser than on desktop computers, also not everyone wants to waste battery with JavaScript junk, the <blink/> tag of the 21st century.
It's 2017 and we should be over this "but it works on my machine!" attitude. It's completely reasonable to expect text and images to render without the need for the extra fluff.
I guess I should expect that an article about not using JavaScript is written on a page with standard Google Analytics tracking and a JS bundle of... partially-minified code that is primarily used to apply styling.
> "(…) but I just can’t get down to less than 30 in any one window"
Just like me! I also suffer from the same disease, but on a much more serious level… lol! Everything just crashed yesterday and the count was on 270+… :-o
I can see the desire to disable JavaScript, but I think it's more useful to disable CSS on most sites. I'd be interested to see this article except without CSS instead of JavaScript.
People should realise that most sites today aren't websites, they're web applications. HTML is just the template/output language.
Without JS you would have to download an even crappier, probably Windows only desktop app since it wouldn't be possible to implement the logic.
No, most websites are still websites, and one of the biggest problems on the web is the plague of people who think their website should be an application instead.
Actual web applications, e.g. office suites, map applications, etc., are obviously exempt from the "should work fine without JS" rule.
Are you really trying to say there's no argument to be made that SPA + prerendering is a design pattern with advantages over flat html, even for static content?
Obviously there are disadvantages as well, but it's a tradeoff, and if executed correctly it's one that, as fewer people turn off JS and browsers become better at running web apps, is increasingly becoming worth the complexity for certain use cases.
In theory, maybe there is an argument. In practice, most of the time it'll be a pretty bad one.
For me it's about how user-hostile those decisions are. I can understand SPA-s when e.g. the service would be infeasible if all the processing was done server-side. But more often than not, it's just laziness. The devs can deploy a sleek-looking SPA in 5 minutes with their JS toolchain, forever dooming users to download tons of pointless JS that adds nothing valuable to their experience. But who cares, today's zeitgeist is "privatize the profits, socialize the costs".
Think of all those sites with articles and blog posts that blank out if you have JS disabled. There's no reasonable argument to be made that they should be SPA. That's just a case of user-hostile laziness.
> The devs can deploy a sleek-looking SPA in 5 minutes with their JS toolchain
Oh, if only that were true...
Our experiments with React took quite a lot of time (like, weeks) before we got things in shape. And then some more time before that shape wasn't a pear.
And if the backend is not in JS, server-side rendering is quite a mess.
If the main work your site is doing is showing pictures and text - and this is still and always will be a large part of the web - the browser has a number of features that work very smoothly and reliably out of the box: scrolling, page history, following a link when it is clicked, right-click save as, searchability, basic accessibility, etc. - and typically much quicker rendering than something which waits for a lot of extra HTTP requests and extra work being done in a VM somewhere.
Reimplementing these features in JS is often pretty feasible, and occasionally there are use cases for it, but it's a lot of additional overhead and it's not at all unusual to introduce bugs that really hurt the UX. (UX is not just adding as much branding as possible on top of vanilla browser behavior.)
The average newspaper website should not load 15 megs of junk from 20 different domains with a massive amount of Angular code just so I can read a single paragraph of plain text. This is an example of a document, it's what the web was made for and browsers already do it well without turning most documents into an SPA.
I think he may well be saying that, and it's actually true. Want to make an SPA for static content? That's fine, but don't kid yourself that it's not worse in every single way. It's not a design pattern either, it's an anti-pattern.
> as fewer people turn off JS and browsers become better at running web apps
In my anecdotal experience, I am running across more people that are turning off JavaScript, partly because browsers are becoming more bloated and worse at running web apps.
Why don't you try browsing the web without CSS enabled? Or just go straight to blocking all HTML?
JavaScript is a core part of website development, even if it is not the only way to do it. It is unfair to judge websites on whether they can run without JS.
What is the motivation behind this mentality? Sure there are still webpages on the web, but there are also web apps. Why or even how would you expect a web app to work without JS? Why would you expect developers to spend time writing a non JS version of their app? Before taking on this endeavor, I would prioritize native apps, optimizations of all sorts, adding features, etc. Unless your app is gmail, why would you ever spend time on creating an HTML/CGI based web app? I keep seeing these posts and I just don't get why anyone cares about this.
The content is the main thing users care about and it should be available as soon as possible even on a low-bandwidth connection, or a low-end device, or using a screen reader. This is what Progressive Enhancement https://en.wikipedia.org/wiki/Progressive_enhancement is about.
Progressive enhancement is a rubbish paradigm because it's incredibly difficult to implement. React isomorphic rendering has made it slightly easier, but it's still one of those catchphrases people throw around lightly until they try it.
- Twitter, is quite literally, a text based feed. There's no reason for this not to work without JS
- Netflix would depend on what you're doing. Much of it can work just fine as text, images and forms
- Youtube has a lot of text and the videos could be played without JS if they wanted to bother (HTML5 ftw!)
- Maps can also work OK without JS (just without the additional stuff, which is fine for progressive enhancement). Google "google maps without JS" for examples of what other people have done. I'm pretty sure you could also get around JS with pure HTML5 + CSS3, but I have no examples of it
So easy to write this in a comment, but none of this is true. Twitter is not a text-based feed. If it were, people would still expect it to be interactive without full page reloads. The same goes for all your examples. DHTML happened, and for a reason. It's not going to unhappen.
Please don't post unsubstantive personal attacks (which this is) to HN. If you know more than someone else, teach them; then we all learn. If you don't want to do that or don't have time, that's fine, but then please just don't post anything.
Commenting just to condescend about how ignorant someone is, instead of helping them, is lame.
Maps would be severely limited without JavaScript. However, Twitter, YouTube, and Netflix could just as easily be implemented using standard HTML <form> and <video> tags without much loss of functionality.
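As a rough illustration (routes, file names and fields here are invented), the no-JS skeleton of a video page really is just standard elements:

// Sketch of a "watch" page that needs no client-side JS at all: a plain GET
// search form plus a native <video> element. Everything here is illustrative.
const express = require('express');
const app = express();

app.get('/watch', (req, res) => {
  const id = req.query.v || 'intro';
  res.send(`
    <form method="get" action="/search">
      <input name="q" placeholder="Search"> <button>Go</button>
    </form>
    <video controls width="640" poster="/thumbs/${id}.jpg">
      <source src="/videos/${id}.mp4" type="video/mp4">
    </video>`);
});

app.listen(3000);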
Looking forward to the author's follow up: "A day without a CPU". I am sick of lazy, profligate coders assuming that my computer is a von Neumann machine. And that just because I run their software, I am happy with it spending billions of my CPU cycles. How did we, as an industry, get to this point!?