A day without JavaScript (sonniesedge.co.uk)
386 points by pmlnr on June 7, 2017 | 394 comments



One of my clients loads a 4.5 MB bower.js (including Angular with a lot of components, and jQuery); they also include an extra jQuery script, a full jQuery UI and several other scripts on each page load. Nothing is minified. The bower file alone has 300 KB of comments.

The CSS file is also nearly 1 MB.

It's just a simple website with some forms.

They have 2 developers working on the site, a scrum master, a project manager and 2 testers, but somehow they can't find the time to work out which legacy code can be removed.

It was only after I pointed out that they have an exceptionally high bounce rate on their (expensive) AdWords traffic and the slowest page load time of all their main competitors that I managed to get some priority to optimize their site.

Serious question: are there any tools that can scan a full website for unused code and unused CSS?


Chrome 59 has a coverage tool built-in: https://developers.google.com/web/updates/2017/04/devtools-r...


Note: The color-coding is likely to change in future Chrome releases. <-- Well, someone eventually found out that some people cannot distinguish red and green, but it was too late :-D

I am a little surprised to find such an issue in Google software as it is a topic for first semester CS undergraduates ;-)


Wait, what? CS students learn about color blindness now? I really wasn't in the good classes!


I didn't learn anything graphics-related in my CS degree. Seems like more of a UX or software development topic than a CS topic.


I learned it in both my computer graphics course and my user-interaction design course as part of my CS degree (German university). None of these courses are required, but they are popular enough that the most important lessons were pretty universal knowledge among students.


I taught a ton of a11y stuff in my intro to web classes. Even did screen reader demos. I wasn't the only one. There are lots of folks teaching this stuff. Not a critical mass, but I'm so thankful it's happening. :)


I think it's a joke.


I learned that in my 1st year HCI module at University nearly 20 years ago. It shouldn't be a joke. It should be normal.


I actually learned that as a CS student.


If it was actually part of the curriculum I sure hope that it was in some design centered elective class. IMO if your CS degree spent time teaching that as part of the core curriculum, they missed an opportunity to put more Math, PL theory, and interesting algorithms in there, because there's more than can be sanely covered in any one curriculum.

Since it's accessibility based, it's more laudable than teaching CS students how to center a div, but it's not like it really requires a mentor of some sort to express the nuances of, right? Or even if it does, it's still design.


It's been some years now, and there were at least two courses in which it could have been. One was elective, HCI. The other was not; it was an introduction to graphics and audio, and since you need to understand basics of human perception to understand compression in that area (jpg, mp3), they talked about stuff like that.

A good CS degree definitely has the space to teach some basics in that area: to mention the Gestalt principles (Gestaltgesetze), to explain human perception a bit, and to give an introduction to usability. You do not get a useful developer in the end otherwise


> You do not get a useful developer in the end otherwise

Not all developers work on UIs, and of those that do, not all work on anything graphical beyond a terminal.

> since you need to understand basics of human perception to understand compression in that area (jpg, mp3), they talked about stuff like that.

That is a good reason to teach it, and counters my overly assertive original comment.


Seen the same on maps where green is a good route and red is a dangerous route. Switched to a blue -> light blue -> yellow -> orange -> red -> black scale with great success. I also believe chemistry uses blue to mean safe, not green.


Off-topic, but:

> Full-page screenshots. Take a screenshot of the entire page, from the top of the viewport to the bottom.

I'm surprised to see the Chrome developers not know what 'viewport' means.


Even more off topic, but I'm surprised this isn't a consumer feature. There are so many sketchy Chrome extensions that try to do this.


Even more off topic, but after Firefox (before Chrome) introduced this feature in dev tools, they also recently released it as an extension (and it might even work on Chrome).


That's interesting! Do you know if it is possible to access this data from an extension? I would love an extension that crawls through the website and combines the data for all pages.


On a minified app it only seems to show line-by-line coverage of the minified files and not the source maps :(


You can pretty-print the file through the Sources panel.


I don't think more powerful tools are the solution here. The problems and solution are obvious. The root problem is that many "product" teams will take any 10x improvement from tooling and adapt by being 10x more inefficient. Because they can.

Hardware (laptops, phones) and browser JS and rendering engines are all 10x faster than they were 10 years ago, giving a 100x to 1000x overall speedup. Think about that; it is truly incredible. But web pages are also 100x larger and less efficient than 10 years ago. This is absolutely not justified by productivity increases. This is just how the world / humans seem to work.


"Work expands to fill the time allocated for its completion." This holds true for computer scripts executing a workload as well.


Developers tend to over time expand their programs to fill available memory and take up available CPU, too. My currently open text editor's memory footprint is apparently 36.6MB, and it is displaying a single file composed of ~1000 characters.


The scary thing is that I actually consider that to be remarkably small. My current Emacs session (recently opened) is just over 200 MB in size. And Emacs isn't even considered a hog anymore compared to the real hogs like Atom.

Remember when Emacs meant Eight Megabytes And Constantly Swapping?


> They have 2 developers working on the site, a scrum master, a project manager and 2 testers

... and daily standups? so busy working that there is no time to do the job


Exactly. And the people around the developers are busy coming up with things that make them look busy.


> Serious question: are there any tools that can scan a full website for unused code and unused CSS?

From my limited knowledge of your situation this seems like the last move to take. I would try the following, in roughly this order:

Serve libraries (Angular, jQuery) from a CDN to at least have the chance of hitting a browser cache on a visitor's first encounter with your site.

Use a minifier to strip all the unnecessary comments and whitespace (in addition to the other benefits of minification).

Use gzip to compress the remaining minified files.

Those alone will likely improve your load times significantly. At this point, the remaining dead code should be relatively small and you can look into tools for surgically removing it.
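To get a rough sense of what the gzip step alone buys you, here is a minimal sketch (assuming Node.js and a hypothetical bundle.js file) using the built-in zlib module; in practice the compression is usually handled by the web server or CDN rather than by hand:

    // Minimal sketch: compare raw vs gzipped size of a bundle.
    // Assumes Node.js; "bundle.js" is a hypothetical file name.
    const fs = require('fs');
    const zlib = require('zlib');

    const raw = fs.readFileSync('bundle.js');   // ideally already minified
    const gzipped = zlib.gzipSync(raw);         // what the server would send with gzip enabled

    console.log('raw:     ' + (raw.length / 1024).toFixed(1) + ' KB');
    console.log('gzipped: ' + (gzipped.length / 1024).toFixed(1) + ' KB');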


As for JavaScript, Google Closure Compiler does some dead code elimination (DCE, a.k.a. tree shaking in Lisp lingo) as an optimization, but it unfortunately doesn't tell you which parts of the code are dead. It also does not do DCE alone but applies some aggressive size optimizations along with it, so, again unfortunately, it cannot output otherwise-identical code with only DCE applied.

This part is more of a shameless plug, but I believe the public/industry/developers should be aware of what tools & techniques academia develops so that researchers' efforts can be more useful. There are some static analyzers for JavaScript such as JSAI, SAFE, and TAJS (disclaimer: I'm working on a project related to JSAI in the lab which developed it). These static analyzers can discover the dead code if given all the entry points, however:

- They don't have facilities to report the dead code AFAIK (this is a trivial thing to add _in theory_: any part of the program left _unvisited_ by the analysis is guaranteed to be dead). If somebody wants to add such a facility to any of the tools, they are welcome. In the case of JSAI, I'm willing to provide them all the information I can.

- They are conservative (i.e. sound) tools and may find too few dead code results.

- They don't play well with `eval` & friends in general, although they try their best[0][1]. Web frameworks rely heavily on eval-like, highly dynamic approaches in order to be generic enough, and that hurts the precision of such analyses dramatically.

[0]: https://pdfs.semanticscholar.org/8140/feec021818815c55a43b54...

[1]: https://www.cs.purdue.edu/sss/projects/dynjs/eval-TR.pdf


> Serious question: are there any tools that can scan a full website for unused code and unused CSS?

We're starting to get there.

JS modules give us static exports/imports, so we can statically analyze the dependency graph, identify functions that aren't used, and do tree-shaking to remove them. Rollup and Webpack 2 already do this, but there's more to go.

CSS Modules creates a contract between your CSS and what uses it. As you have to import it, and exports are static, you can understand which style files are imported where. The next step would be to understand which classes are used inside that module and strip those that remain unused. This is theoretically possible (provided you use a more 'statically safe' syntax), but tooling hasn't yet popped up to do this.
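To illustrate why static imports matter for this, here is a minimal sketch (hypothetical file names) of the kind of module graph a bundler like Rollup or Webpack 2 can tree-shake:

    // math.js -- two named exports, only one of which is ever imported
    export function used() { return 42; }
    export function unused() { return 'dead code'; }

    // main.js -- because the import is static, the bundler can prove
    // that `unused` is never referenced anywhere and drop it from the bundle.
    import { used } from './math.js';
    console.log(used());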


Yes, it's called "tree shaking"[1]; modern Angular apps can be cut down to around 50-100 KB (assuming no heavyweight feature set). Find build tools that can do it, like Webpack or Rollup.

[1] http://blog.mgechev.com/2016/06/26/tree-shaking-angular2-pro...


Not really an answer to your question, but this might be useful: http://youmightnotneedjquery.com


Yeah. Except you need to do Ajax and fetch doesn't cut it. So new lib. And manipulate the DOM with something better than the browser API or you lose your mind. So new lib. Then normalize browser events. Oh wait, you can do that manually. But you are writing a new lib. Eventually the code will grow to be the size of jQuery anyway. Only not as well tested, documented and cached.


> Yeah. Except you need to do Ajax and fetch doesn't cut it. So new lib.

Ah, so this is how front-end got the way it is.


You can implement a minimal Ajax interface in JS (no jQuery) in less than 30 LOC. But I grasp what you mean; it's not just that. There are other concerns and requirements, and you do not want to reinvent the wheel every time...
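For what it's worth, a minimal sketch of such an interface (plain XMLHttpRequest, no library; the helper name is hypothetical), well under 30 lines:

    // Minimal Ajax helper, no jQuery. The name `ajax` is hypothetical.
    function ajax(method, url, data) {
      return new Promise(function (resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open(method, url);
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.onload = function () {
          if (xhr.status >= 200 && xhr.status < 300) {
            resolve(JSON.parse(xhr.responseText));
          } else {
            reject(new Error('HTTP ' + xhr.status));
          }
        };
        xhr.onerror = function () { reject(new Error('Network error')); };
        xhr.send(data ? JSON.stringify(data) : null);
      });
    }

    // Usage: ajax('GET', '/api/items').then(function (items) { console.log(items); });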


I am so sick and tired of this argument. Having done it several times, no, you do not end up with something as big as jQuery, not by a long shot.


Sure, if all you do is some onclick() and a few appendChild().

Plus, for the time you spend chasing compat issues, writing wrappers, choosing small libs and packing it all together, you could have done your site. For less than 40 KB.

People are not blinking at including the React ecosystem, which is huge. But they attack jQuery? Seriously?


> chasing compat issues, writing wrappers, choosing small libs and packing it all together, you could have done your site.

I don't know about you, but usually my projects last longer than a week, so those sorts of savings don't represent a significant chunk of my overall development time. Also, I'm not a shit programmer, so I can actually write a for loop that does what I want on the first try. I've spent more time trying to figure out jQuery's goofy syntax and what the hell it's doing with AJAX queries than it has ever saved me in development.


As big as jQuery? Gzipped it's 27k, a ridiculously small size.


Ridiculously small? That could take 10 s to download on a 2G network.


Yeah, sure, most sites today are totally browsable on 2G.

Even the damn native app for Hacker News, using only JSON, is not usable on 2G.


For me HN seems to be one of the few websites that is usable on 2G.


Network bandwidth usage is also not the only metric of size we care about. After it's ungzipped, it needs to be parsed by the JS engine, and more code = slower execution.


> Yeah. Except you need to do Ajax and fetch doesn't cut it. So new lib.

Why don't fetch and its polyfill cut it?


Polyfill. So new lib.

And even then. You need to encode params manually.

And of course no middleware, so for special decoding or repeating headers, you end up writing a wrapper.


Programming is mostly creating functions and interfaces to decompose problems into reusable parts where reuse makes sense. Nothing wrong with creating a simple function for making XMLHttpRequests or for fetch API. Most of the time there's no need for the level of generality of jQuery interfaces.

Just by making a function you're not creating a library. Sometimes I find the DOM API verbose too. Nevertheless, the fix does not need to be an interface like jQuery that completely hides the DOM API under another interface. It may be enough to extend the HTMLElement prototype a bit, or to write a function to "compress" the code a bit and make it more readable.
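A minimal sketch of both options (the helper and method names are hypothetical; extending the prototype is the controversial part, as the replies below point out):

    // Option 1: a plain helper that "compresses" the DOM API a bit.
    function qs(selector, root) {
      return (root || document).querySelector(selector);
    }

    // Option 2: extending the prototype. Adds a chainable `on` method
    // to every element; the name and approach are a matter of taste.
    HTMLElement.prototype.on = function (event, handler) {
      this.addEventListener(event, handler);
      return this;
    };

    // Usage: qs('#save-button').on('click', function () { console.log('saved'); });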


Just by making a function you're not creating a library.

Have you seen what passes for a library on npm?


Good point. :D


> it may be enough to extend the HTMLElement prototype a bit

You lost me here. Anybody extending standard objects cannot be taken seriously.


Do you realize JavaScript is a prototype based programming language? It is a major and very useful feature of the language. There's no reason not to use it. There are some things to be careful about with interop if you need it, but that's it.


We've already learned from our past mistakes changing the prototype of the standard library - the Ruby community learned the same thing with monkey patching their standard lib too. The maintenance overhead far outweighs any immediate benefits.


What maintenance overhead? Current and future costs are reduced by making code simpler and cleaner in the first place by using prototypes efficiently.

Also, to be clear, I'm suggesting adding new methods to DOM object prototypes, not changing behavior of existing methods.


Applications may be able to do this in a controlled manner. Library authors never should.


> Polyfill. So new lib.

So, you don't like the polyfill, created to the specification. Fine, but depending on who you target, you might not need it. Plenty of modern browsers have good support.

> And even then. You need to encode params manually.

Or use the Request constructor.

> And of course no middleware, so for special decoding or repeating headers, you end up writing a wrapper.

The Request constructor will let you handle the headers quite nicely.
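A minimal sketch of what that looks like with the standard fetch/Request/Headers/URLSearchParams APIs (the endpoint and parameter values are hypothetical):

    // Encode query params without a library.
    var params = new URLSearchParams({ q: 'javascript', page: '2' });

    // The Request constructor carries the method, headers and body.
    var request = new Request('/api/search?' + params.toString(), {
      method: 'GET',
      headers: new Headers({ 'Accept': 'application/json' })
    });

    fetch(request)
      .then(function (res) {
        if (!res.ok) { throw new Error('HTTP ' + res.status); }
        return res.json();
      })
      .then(function (data) { console.log(data); });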


No, a polyfill is not a library. It's a dozen lines, not a thousand.


> No, a polyfill is not a library

Yes, it is.

> It's a dozen lines, not a thousand

A polyfill can be any size, but there's no size minimum to be a library.


Overly-literal response.

A polyfill is a short snippet to implement a missing feature. Notably almost all recent additions to JS and DOM have been designed to be polyfillable where possible. Generally that means under a thousand lines, by the way.

A library in this context means a set of functionality built on top of the platform, like jQuery, with its own API, etc. "Use a library" and "use a polyfill" are not the same advice.

A polyfill is not a library.


Some people are mistaking a book (actually a module) for a library.


>Eventually the code will grow to be the size of jQuery anyway.

Um what? jQuery is huge. You won't possibly use everything in it.


Well, define "huge." It's like 28KB that your user may already have cached if you use a CDN. https://mathiasbynens.be/demo/jquery-size


I meant huge in terms of functionality. It supports so many things that you can't possibly need everything.


Events + Ajax + DOM is most of jQuery. And that is most of what your web app is. Now, I'm more of a fan of Vue.js + axios myself. But if I have to go minimal, 40 KB of jQuery is nothing and saves so much trouble. All the "you don't need jQuery" stuff is done by people building prototypes that are not tested on most browsers or assessed in terms of resources vs reward, which describes a lot of the projects in the JS community.


Oh. Well that's true, but I think the point is more like you end up implementing some significant subset of that, except with homespun code that isn't tested as well.


> Except you need to do Ajax and fetch doesn't cut it

curious as to why you think this? fetch is pretty convenient to use. We created our own wrappers around it, but that basically just consists of ajax = (url) => fetch(url).then(res => res.json())


Exactly, you had to write a wrapper. And the next project will do so too. And if you want to save time and improve reliability, you'll test it, and write docs for the third project. And for the new team member.

Or you can use the few KB of jQuery.


Also, the vast majority of users will have a cached copy of jQuery from the Google CDN already.

Now bundling it with the rest of your JS, that's indeed a mistake.


CDNs are a lie. The privacy implications outweigh the performance benefits since there are many, many, many CDNs and many, many versions of jQuery. It's rare for users to benefit from caching.


I believe this is true, but the world will only be convinced if there are metrics for this. Are you aware of anything that verifies it?


Also, the performance benefits of free CDNs with which you have no service level agreement are unsubstantiated at best, but you'll notice very quickly how much of your site is broken when your CDN is over capacity.


Good luck supporting old Android browsers and IE, and dealing with the inconsistencies between browsers. It only takes one look at the jQuery source code to understand why it's not a good idea to "roll your own jQuery".


Simply don't target old browsers. If people don't use JS, or don't have a modern browser, feel free to go elsewhere. I don't target edge cases as I don't get paid to.


I really envy the industry you work in. In my industry you can't just ignore half a billion outdated Android phones and government seats.


I presume you get paid for this specifically?


uncss[0] works well for me. It removes the unused styles, and then I run the CSS through a minifier to finish the job. The CSS file size is greatly reduced.

[0] https://github.com/giakki/uncss
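For reference, a minimal sketch of driving uncss from Node, roughly as described in the project README (file names and options here are hypothetical examples; check the repo for the current API):

    var uncss = require('uncss');

    // Pass the pages whose markup should be scanned for selectors in use.
    uncss(['index.html', 'about.html'], { ignore: ['.js-only-class'] }, function (error, output) {
      if (error) { throw error; }
      // `output` is the stylesheet with unused rules removed;
      // run it through a minifier afterwards to finish the job.
      console.log(output);
    });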


Have a look at PurifyCSS: https://github.com/purifycss/purifycss


For unused CSS, Helium https://github.com/geuis/helium-css


Lol @ testers. How about getting rid of the project manager and testers, and hiring 2 decent developers who have some experience with automated testing, continuous integration and continuous delivery instead? Productivity would rise 300%, I presume.


I think the best tool for this job in this specific scenario is simply "rm -rf ./"


Having browsed nearly JS-free for the last two-ish years, aside from a bit of tweaking at the start, I have to say that it has made things a lot faster, more stable, and just much more of a pleasure to navigate. If anyone is interested, I have found that two tools make life a lot easier:

- NoScript, which blocks execution of scripts save for those you whitelist. This means sites that reasonably require JS (e.g. YouTube, Google Maps) can remain functional.

- A bookmarklet that redirects you to the Google text-only cache of the current page, which is great for text-based articles that inexplicably require JS to show content.


uMatrix [0] "Point and click matrix to filter net requests according to source, destination and type" is also nice.

0: https://github.com/gorhill/uMatrix


Would you mind sharing that bookmarklet?


Sure:

    javascript:(function()%7Bvar%20loc%3Dwindow.location%3Bif%20(window.location.protocol%20!%3D%20%22https%3A%22)%7Bloc%3Dwindow.location.toString().replace(%2F%5Ehttp%3A%5C%2F%5C%2F%2F%2C'http%3A%2F%2Fwebcache.googleusercontent.com%2Fsearch%3Fq%3Dcache%3A')%3B%7Delse%7Bloc%3Dwindow.location.toString().replace(%2F%5Ehttps%3A%5C%2F%5C%2F%2F%2C'https%3A%2F%2Fwebcache.googleusercontent.com%2Fsearch%3Fq%3Dcache%3A')%3B%7Dwindow.location.replace(loc%20%2B%20'%26num%3D1%26strip%3D1%26vwsrc%3D0')%7D)()
Glancing at it, it just seems to drop the webcache URL in front of the current page location. Credit to http://www.localseoguide.com/easily-check-the-text-only-cach..., if I recall correctly.
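URL-decoded and reformatted for readability, the bookmarklet above is:

    (function () {
      var loc = window.location;
      if (window.location.protocol != "https:") {
        loc = window.location.toString().replace(/^http:\/\//,
          'http://webcache.googleusercontent.com/search?q=cache:');
      } else {
        loc = window.location.toString().replace(/^https:\/\//,
          'https://webcache.googleusercontent.com/search?q=cache:');
      }
      window.location.replace(loc + '&num=1&strip=1&vwsrc=0');
    })()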


My main issue with having NoScript on is website checkouts. Especially when buying flight tickets. A lot of sites will introduce new scripts in the middle of a stepper that you didn't need to complete step 1. Now you can't submit step 2 or 3, and when you allow the new scripts the page reloads and you lose all your form data.


NoScript doesn't actually work half the time; even when you whitelist a site through it the site could well remain busted. Source: bitter experience.


Presumably this occurs when the site uses a CDN for their js libraries (or even their main js). After a few days you start to spot which domains need to be enabled and which can be safely ignored.


Can you whitelist a CDN but only for a specific site?


Best I recall, even if you whitelist a CDN it only comes into action if you whitelist the root domain of the site you are visiting.

Beyond that I guess there is GRE, where you can build "firewall" rules.

For example, I have Facebook blocked via it unless I visit Facebook.com directly, so they don't see me browsing all manner of sites via their like buttons and comment boxes.


Depends on your blocker. uBlock Origin does this well.


Where can we get the bookmarklet from?


This is every day for me. Just whitelist the few sites that actually need it and most of the rest are fine. This should be the default. What do I care if some images don't load or the CSS is slightly off? The main content is there. For sites that won't do anything without JS, I can consider whether I want to use them or not. Mostly not.

Fuck other people running code unnecessarily, without my permission, on my computer. Especially JavaScript. It was an extremely stupid and costly decision to have a scripting language like this run by default in browsers... though great for doing things against the user's wishes and making money off of it. And security holes. And privacy issues. Etc etc.

The irony is that AJAX and SPAs were created as a response to the request/response model that was seen as too slow. Now it's these SPAs that are unbearably slow and buggy, while the request/response model has only improved as hardware and networking have gotten better. I think if more people understood how the Internet works and what is actually happening, they would also turn off JS. No chance of that happening though.


> Fuck other people running code unnecessarily, without my permission, on my computer.

You gave them permission. Actually, you asked them to do it when you sent that GET request to their servers that returned a bunch of HTML that included script tags.


Is this really how developers think?

And what if the user does not download the .js file?

And what if the user downloads the .js file but does not run it in an interpreter?

What if the browser authors refuse to process what is enclosed in script tags and to run Javascript?

Maybe they believe it consumes too much memory.

Based on your comment (trolling?), I think maybe JavaScript is not the problem. It is developers who think the way you do.

If we put inline js or links to .js files in files requested via HTTP, then we cannot claim that users "chose" or gave "permission" to fetch these .js files or to run them. As another commenter stated, those requests were not initiated by the user, but by the browser.

The selection of browsers is small, and made smaller by web developers who promote the use of certain browsers, i.e., big fat interpreters that will run their needless JavaScript. (The keyword is "needless". Almost always, users can get the data they seek without running JavaScript.)

These "favored" browsers present a massive memory hog and attack surface compared to the minimal tcp client required to retrieve text via HTTP.


Not really. They gave permission for the initial GET request for the HTML. The rest of the requests were inserted into the document by the programmer and executed by the browser. The browser assumed those links should be fetched automatically and executed. :)


The rest of the requests were triggered by the browser, not the programmer, because they were part of the initial HTML and the browser follows the HTML spec. The person complaining about it is technical and knows this; they were under no misconception that they were viewing a document format that doesn't include code execution. I'm having trouble even thinking of an example of such a format, because even Word documents can execute code by default.

If you dislike HTML, then don't use it. Use another format that serves your text-only needs. Gopher is a possibility, just don't click on any HTML files if you're using a web browser though.


No, according to the spec, I can enable / disable JS and not run any code, which I'm doing: https://html.spec.whatwg.org/#enabling-and-disabling-scripti...

So an HTML document viewed in a browser with JS off should not itself run code, ever.


Trivial to strip HTML tags and produce useful text.

Send all the extraneous garbage you want to the user. She can remove it and keep only what she wants. Meanwhile from the user's perspective you are wasting bandwidth. Who is paying for that bandwidth?


I was oversimplifying, hence the smiley.

But since you want to argue...

A request to an HTML document with no JavaScript src links, image links, or CSS links will not make multiple requests. I'm old enough to remember when I had to actively tell images to load.

The code for the page is written by programmers. Programmers cause browsers to make multiple requests. Often times without understanding "the spec".


Unless there's a non-JS src link, or a non-CSS link tag, or an xml:base attribute, or an area element with an href attribute, or... you get the idea.


All things put in by a programmer.

You get my point.


I find myself doing the same if I'm not on WiFi. I'll tether my Samsung phone to my laptop, disable JS in Chrome, whitelist what I like, and jump into incognito to temporarily whitelist some page that requires JS; when done in incognito, the site isn't permanently whitelisted, it's just for that incognito session.

I know! Laborious, but it must be done.

I got into this habit because of the insane number of websites that have gotten into autoplay, which essentially eats up all my mobile data bundles. I love Forbes, for instance, but it's essentially unreadable when on bundles.


That's why I use Opera Mini even though the phone is quite capable of rendering the full page.

Just wish there was a way to install Opera Mini on Windows tablets...


I fully agree. Do you stop to think that Flash/ActionScript became "obsolete" because of all this? And now we have the exact same problems...


Flash was obsoleted because 1) the iPhone didn't support it, 2) the iPhone didn't support it, 3) the iPhone didn't support it, and 4) JS replaced the non-DRM use cases (and now has even replaced that).


Well that and Flash was a massive vector of security problems.

Plus it wasn't very good on mobile, even when it was supported (ie - on Android it was still bad and never got out of beta anyhow)


I thought it was fine on Android; I played many games on it that worked well even on those old underpowered phones. It was never good, but probably about 0% of Flash apps were written targeting mobile, so I wouldn't have the expectation of good.


I think "fine" is being generous.

When it worked it was... not great, due to UI issues, but it at least worked.

Trouble is it seemed to not consistently work. I worked at a flash game developer at the time (Zynga) and it would not always load our games even if it had done so previously - the difference was sometimes just refreshing a page.


May very well be the case. All things flash are permanently disabled on my browser. If a site requires flash, I would much rather ignore the content altogether no matter how juicy it is.


I did a lot of Flash/Flex work - it was the bee's knees of front-end dev in the days before jQuery, Firebug, Chrome etc., and was the most consistent across platforms, as the same plugin was used on each, as opposed to trying to make your web-standards-based app work on several incompatible implementations. HTML/JS/CSS didn't come close at that time, but there was hope that it would develop to make Flash obsolete - which it now has.

People always complained about how Flash "drained batteries" / "crashed browser" / "allows all this intrusive advertising". A case of blaming the technology for the way it was being used. People would say "I can't wait until Flash dies and we have the open web standards instead so we don't have these problems any more".

So, fast forward a decade or so, Flash has been replaced by open web standards and ... guess what? We still have those problems...


Because designers keep trying to turn static pages into "interactive" glossy booklet emulators, never mind "app"...


If I may, it seems to me like there are several possible points of discussion:

1) Javascript is bad (vs. Javascript is good and variations thereof)

2) The use a lot of websites make of Javascript is overcomplex, gratuitious, uncalled for, intrusive, etc.

3) Sites should provide some (even minimal) functionality to people browsing them without Javascript

#1 is largely a matter of personal opinions

#2 is a known, undeniable fact

#3 is where everyone could contribute

Personally, I have browsed normally with JavaScript turned off for years (and I use another browser with the capability to display the site "fully" when really, really needed).

Particularly when browsing HN, I follow given links and often avoid a lot of pop-ups, ads, and whatnot, and from time to time, when I find a site that won't load AND I really think that the linked-to site is worth it, I use the "other" browser.

What is curious is that most "new" products, SaaS, whatever, simply fail, showing just a blank page, which is exactly what the site shouldn't do.

I would gladly accept a "You need JavaScript enabled in order to fully appreciate the site" kind of message, but only if some (basic, simplified as much as you want, even text-only, etc.) content is shown nonetheless.

I simply cannot bear the "blank page" or the "Your browser is not supported, please download Chrome, Firefox, Internet Explorer and try again".

This makes me sure that the people behind the site in the best case have not understood the "showcase" nature of a website and in the worst case care nothing about user (please read as customer) experience.

So, if you create a site, consider that a part (even a small one) of your visitors may not have JavaScript enabled but are still potential customers, somehow interested in your site's contents anyway. By making their no-JavaScript experience so miserable you are turning them away from your site, sometimes forever, which is the perfect negation of the reason you put the site together in the first place (to make your content visible to the world).


I simply will not spend time tailoring a nascent or even mature product to 0.0001% or whatever minuscule percentage of the population turns off JS. If you turn off JS, you get a blank screen. Why should I accommodate your cohort? Why write unit tests for someone who takes a standardish client with above 1% market share and does something completely unstandard with it?


I find that it's at least (usually more) double the work to build the JS-free version of something for a fraction of the user experience.

For example, imagine a forum where clicking the "edit post" button turns your post into a <textarea> editor and saves with AJAX so that you can continue scrolling once you make your edit.

To build the JS-free version, you typically need a separate endpoint, a new template, a redirect, and a less empowering editor. And all this for UI that 99% of your users won't see.

It just doesn't seem like an opportunity-cost-savvy way to build a website.

Then there are the UIs that take some real backsplits to accomplish without Javascript like a table that lets you mass-modify the rows with checkboxes and a <select> at the top that lets you choose options like Move | Archive | Delete.

You can wrap the entire <table> with a <form> such that all of your checkboxes submit to your mass-modify endpoint. That works without JS.

But since you can't nest <form> within other <form>, your <form method="DELETE" action="/things/{id}"> delete buttons can't appear in each row.

I have a hard time understanding how so many people in these threads can suggest that the opportunity cost of building for the 1% is always worth it. Does everyone just have basic blogs on the mind when they envision the labor involved in what they preach?


Non js users aren't expecting the website to work perfectly without scripts on. The bare minimum they ask for is that the site can be read without using scripts. Obviously it would be nice if all the moving components didn't need scripts either, but most people can accept that this is too much work.


Depending on the site, even just reading may be problematic. Imagine a webapp where all data is requested and sent as small JSON updates. This is faster and it's less data than full page copies, and can be more targeted to that user.

To build this feature for the tiny fraction of non-JS users, you're looking at duplicating all of this functionality on the server. This greatly increases dev time and complexity of the codebase. It means moving from an entirely API-driven platform to one where your server needs to understand the app logic as well.

Like it or hate it, Javascript is a web standard. You know that websites will break if you disable CSS -- the expectation should be the same for disabling Javascript.


Websites shouldn't break if you disable CSS - just look worse. The text should still be perfectly readable and images viewable.


That's an unreasonable expectation. Many websites have complex content and layouts, and no web designer will build their site to gracefully fall back with CSS disabled. That just doesn't happen.


It does happen when the developer cares to do it; I know several that do.

Website layouts need to be made linear (single column) for display on mobile and the content order usually reflects that anyway. They also need to make sense when using screen readers.

Disabling CSS and checking the result gives you a quick sanity test that the content order makes sense and that the markup is used somewhat correctly. It won't be pretty, but it should be useable.


For me, yes. I should be able to see at least the main content of your blog without JS. I don't care if it's perfectly formatted or styled. Just put the text in a div.

When I go to a blog post linked from HN and see a white empty page I will just move on.


Good. Please leave. I don't walk into a restaurant and demand that they cook my food without using a knife. Don't expect modern websites to bend over backwards for a (frankly rather juvenile) view of a modern web standard.

Code execution is part of the web, it's here to stay, the number of sites that support non-JS experiences will continue to dwindle.

You're not some special snowflake. Other people make real decisions based on cost/value models, and you cost way more than you're worth.


Where does the cost of making a blog readable without JavaScript come from? It takes a lot of extra effort to make a blog that requires JavaScript.

As far as I'm concerned, if it's not readable without javascript then you've got nothing intelligent to say anyway.


>As far as I'm concerned, if it's not readable without javascript then you've got nothing intelligent to say anyway.

Great! Leave please. I have no desire to interact with folks who approach the world in that manner.


A blog post is one thing, but many web developers that venture onto this site -- myself included -- are building complex web applications that would simply not make any sense as a "read only" website.

You are more than welcome to disable JS, but the number of web developers that I've met that actually care about a non-JS user experience is next to zero.


> But since you can't nest <form> within other <form>, your <form method="DELETE" action="/things/{id}"> delete buttons can't appear in each row.

Of course they can. The <form> action submits via POST to an endpoint that disambiguates the selected action, and forwards inside the server side to the appropriate endpoint. Shouldn't be hard with a competent framework.

Then, when JS is enabled, you override the click action on the action buttons, so that the <form> action never gets used.
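A minimal sketch of that progressive-enhancement pattern (the endpoint, form id and field names are hypothetical):

    // Without JS: the whole <table> sits inside one <form> that POSTs the
    // checked row ids plus the chosen action to a single endpoint
    // (e.g. POST /things/bulk), which dispatches server-side to move/archive/delete.
    //
    // With JS: intercept the submit and use nicer per-action requests instead.
    document.querySelector('#bulk-form').addEventListener('submit', function (event) {
      event.preventDefault();
      var form = event.target;
      var action = form.querySelector('select[name=action]').value; // Move | Archive | Delete
      var ids = Array.from(form.querySelectorAll('input[name=id]:checked'))
        .map(function (input) { return input.value; });

      // fetch, unlike <form method>, is not limited to GET/POST.
      fetch('/things/bulk', {
        method: action === 'Delete' ? 'DELETE' : 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ action: action, ids: ids })
      }).then(function () { /* update the affected rows in place */ });
    });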


When you add up all of the ridiculous hoops web developers have to jump through for cross-browser compatibility, a non-JS UX is the least of their concerns.

You're asking web developers to come up with two methods to accomplish the same goal, which doubles the number of ways the app can break. It's totally unrealistic for any moderately complex web application.


I agree almost fully.

I think it depends on the user demographic. If you're actively developing a service that you hope to be used even in the most deprived areas with outdated equipment and poor connections, then it may be worth it. For example, outreach programs, charities, and emergency services.

If you're developing something that is more or less a needless, but useful, product -- your demographic is more than likely the upper 30% of earners in the world... yeah, probably not worth your time to develop for that 1% who won't be using JS (or have the capacity for only older versions).


In my experience, optimizing for perceived latency (time between user request sent and site becoming usable) inevitably leads to server side rendering, in geographically distributed availability zones. If you know this in advance, then you can choose to use tools that support code reuse across the client/server divide. This way, SSR is mostly a one time cost, and it can be implemented at the very beginning of the project, while everything is still nice and simple. It does impose additional constraints during UI development, but many of those are good practices anyway (proper use of HTML elements, prefer native form elements to fully custom inputs, use URL and history to manage state, etc).

It has been somewhat humbling to watch users on low speed connections use the app before the JS finishes loading; it shows everyone the true value (or lack thereof, for the passive content consumers who make up ~98% of the visitors) of the JS portion of the app, since they see it with and without JS. Sure, they can't drag a box on my fancy visualization, but they can see a snapshot of it, and use the peripheral controls as links to navigate to the desired application state. Then, as they are reading something that is useful to them, the ~2mb JS blob slips in and quietly brings the application to life.

Also, I love the way the 3rd party JS blobs that management mandated, are so obviously slower than the rest of the page. This gives me good ammunition for the arguments where I demand an HTTP API, not a JS blob, from our partners. They hate that, because then they can't stuff their own trackers into our clients. Fork 'em.


I use NoScript. If a site interests me I unblock it; I may also unblock one or two external scripts. What I encounter quite commonly is a list of 40 external scripts, which results in a closed tab. There is reasonable use of JavaScript, and then there are web devs playing gotta-catch-them-all with tracking scripts and, at the risk of repeating myself, social media integration.


> For the same reason we build features in websites that work for blind people (aria tags everywhere), disabled people (accessible buttons), people who speak different languages (i18n), VIM users (GMail keyboard shortcuts), etc.

From a comment below.


Great counter example, thank you. Accommodating people with disabilities is totally worth it though, IMO, as an example of a very minor cohort where it's not all about percentages.


I use Vimium in Chrome to be able to use the keyboard to navigate web pages. Gmail in particular conflicts with this by implementing its own keyboard shortcuts that interfere with my workflow.

Some people may like it, but I'm just saying that it's a feature that some people dislike.


Agreed. Before reading further down I commented on another response with a similar message.


Supporting a JS-disabled experience should not be your first priority. But, reducing needless reliance on javascript will improve your site for every user - making the page load / respond faster, use less CPU and memory, etc. It will also improve the development experience for you - faster to test, less crazy codebase to work in. It's really just architecting the site better.

Some people are so scared of "premature optimization" and "catering to a tiny number of geeks" that they trick themselves into making a huge JS mess. Most websites these days seem to be examples of this.


It really depends on your pages.

If you are building a blog or a news site (a classic "content" site), it should absolutely work without javascript.

If you are building an admin interface for said news site, where articles are edited and published, it is okay to require javascript.


Because it's good engineering. You don't know what might break your JS in the future. You also don't know which user agents people will use in the future. I was quite surprised to discover that some "UC Browser for Android" had quietly racked up 9% usage[^1]. UC doesn't come without JS, but it has a limited feature set, and without feature testing there's a good chance it will break your site. It would have broken [mine](https://qwtel.com/hydejack/), if it weren't for feature testing and serving vanilla HTML. Also, and I know it seems impossible, but sometimes programmers ship broken code. Again, not relying on JS gives you a safety net to fall back on.

[^1]: http://caniuse.com/usage-table


Nonsense. This is definitively YAGNI. Good engineering is accommodating the greatest number of people with the resources you have in the least amount of time. Thinking about some abstract future scenario is categorically not good engineering.


You misspelled "good bean-counting". Good engineering requires you to cover for everything you can.

Case in point: car seatbelts and airbags. Most people don't crash, right? But you aren't seeing the car manufacturers remove seatbelts and airbags.


You also don't see car manufacturers change their designs so that they will accommodate drivers who've decided to remove their steering wheel, nor do car manufacturers design their cars for some possible future world without gasoline.


I had this one coming. Serves me right for believing a contrived example wouldn't be nitpicked. :D


When everything is made from beans, the bean supply must be kept in mind.


Consumers by and large know driving is dangerous, and safety features are something car manufacturers advertise to consumers. Not to mention that over the last 30-40 years drivers have come to expect these sorts of features in cars, because accidents happen and those features increase survivability, by a lot.

The same sort of consumer expectation doesn't exist for no-JavaScript. Major browsers enable it by default. The web has been built seriously on top of JavaScript for the last 15 years. Consumers have expectations for how the web should work in 2017 (for better or worse), and no-JavaScript isn't anywhere on the radar.

Consumers expect cars to have airbags and seatbelts; consumers expect websites to behave as they always have, and that means JavaScript.


JS is necessary. But what do we do about the extensive tracking and extremely unoptimal requests, for which we already have numerous articles out there?

If there's no soft solution, disabling JS altogether is a good fish net for a lot of people -- yes, the percentage is close to a rounding error, but the absolute number can be 1 million people or more.


Counter point: many cars aren't made to accommodate people as tall as a professional basketball player. Should I call them all bad (or at least not good) engineering?


Counter-counter-point: I am above 180cm (far from 200+ where many basketball players are) and many taxis aren't tuned for me and I bang my head on the ceiling so yes, I definitely call the manufacturers of those cars bad.

Statistical averages are a back-patting technique between managers, let's face that reality, shall we? They very rarely have the customer's interests in mind.


You have to draw a line somewhere, though. Every engineering decision has a cost, whether time/money, opportunity cost, or design tradeoffs, so it's not practical to cater to every single user you have.


There are many people above 180cm.

Again, let's face it -- it's bean counting, some medium-level product manager wants a raise by saving expenses.

That's all it is, that's all it ever was, and it ain't ever gonna change while there are people on these positions.


Good engineering requires good bean-counting. Figuring out what you can do with infinite resources is not good engineering (except perhaps when funded by DARPA).

As any good civil engineer knows... “Any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands.”

The essential characteristic of engineering is making mindful, appropriate tradeoffs between various costs and benefits, including features, risks, externalities, financial costs, etc.


As I mentioned in another comment, it was a somewhat exaggerated example to illustrate a point, not an invitation to nitpick it.

The corner-cutting in the current tech culture is a bit too much for my taste.


What a ludicrous analogy. Engineering is an applied science, not a purely theoretical one where you can remove the human impact of your decisions from the equation. Comments like this are a prime example of why engineers need to take more liberal arts classes.


I agree with you about liberal arts and have the college transcripts to prove it. Unfortunately, your HN comments have been frequently uncivil and/or unsubstantive. Would you please not post like that? It's just what we're trying to avoid here.

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/newswelcome.html


Would you be so kind as to point out how liberal arts knowledge would help people like me understand the issues better?

I think I understand where my opponents stand, but I disagree with part of it.


I don't know how the point applies in this case, or if it does at all. It's pretty hard to draw specific lines like that.


It was a slightly exaggerated example. Nothing can be infinitely optimized of course, but the corner cutting in today's tech culture is a bit too much for my taste.


you're right. let's get rid of safety nets, because YAGNI. it's so efficient!


> If you turn off JS, you get a blank screen.

I browse with JS on (but I block ads and most tracking scripts) and only use a separate browser profile with JS off for sites that look fishy.

Requiring JS for a web app or interactive features in a website is fine.

A few years ago I had to use an old PC for a while, and disabling JS by default improved my experience significantly, but too many websites simply didn't display any content without JS, or didn't load images. The "web apps vs. web pages" debate has been beaten to death, but being unable to e.g. read a blog post without enabling JS just feels wrong to me.

I'm not sure if things are as bad now.


>I simply will not spend time tailoring a nascent or even mature product to 0.0001% or whatever minuscule percentage of the population turns off JS.

This is reasonable logic, and I think it is an important point in getting more people into a technology movement, whether it is removing JS or anything else. A similar example is encryption: many users don't know or care to use it, so it is usually a side issue if mentioned at all in a company. But if the percentage of people who do use it increases, companies will have a greater interest in supporting it.

Currently, normal users just care that content loads when they click the link for a website, and perhaps don't understand or care that JS is running or what the consequences of that are. A more constructive way to get a better experience without JS is to ask others to also block it, to show companies that this is an issue, rather than just directly asking companies to add support.


It might make sense to think about who disables JS. Likely it's technically knowledgeable people with a concern for privacy and security. If this describes your users or customers you may benefit from at least some level of non-JS functionality.


Those knowledgeable users might be installing JS blockers on their friends' and family members' computers right now.


And then they get "since you played with my computer the internet doesn't work properly".


"you broke google!"


That sounds fine if you don't see any problem with showing a blank page to no-JS users on a page that was only supposed to display some text/images/links.


Adobe Flash has been dead for many years. People were warned that they should not build Flash-only websites, and for the same reason people can browse without JS; what do you tell them now? Their sites aren't useful at all; you failed. No website or software should fail so easily. I truly support jaclaz. With no message ("you should use blablabla", or "enable this", or anything else), people go away.


I agree, maybe kids at Google get paid big wedges for this, I don't


Even though I am a frequent advocate of leaning on more old school techniques for building web apps (even with new school tools), I vehemently disagree that this is a generally worthy cause. A small list of (I think non-controversial) things:

1. You browse without JS. You're a power user. You already know why the page doesn't work. A courtesy message would be nice, I guess, but it seems fairly pointless.

2. Should my HTML and CSS still target IE7-friendliness? IE7 users still have more market share than actual humans who browse without JS. They get treated like lepers in the Year 0, and we all seem to be okay with that.

3. Well-designed use of JS can help distribute computation load that might otherwise all fall on the server. A very contrived-to-be-important (but fairly common) example of this is scaling/cropping an image before it's uploaded. Or, even though I have some other reasons to hate on SPAs, page rendering itself. The no-JS user is more expensive to serve than the typical user.

4. Except for some extreme cases of JS reliance, search engine spiders are head and shoulders more accommodating of developers' choices than no-JS users are.

I just don't think that making accommodations for no-JS power users, who are not plentiful, and know full well how to turn JS back on, is valuable. Making JS bundles smaller and making them load more efficiently? That's a worthy cause for sure. But people who browse without JS at all are extreme outliers in the 1st world, and all of them know how to toggle it back on. Developing countries? If I were trying to serve those users I would do the research and make choices accordingly.


>1. You browse without JS. You're a power user. You already know why the page doesn't work. A courtesy message would be nice, I guess, but it seems fairly pointless.

Well, the point is that a lot of people post their site on HN as "Show HN", and the "intended audience" is then (or should be) "power users", which should also mean that "power users" (few as they might be) are potentially interested in your site, but you make sure that only a subset of these power users can access it (not necessarily in full; sometimes a simple text summing up what the thing is about would be enough to either - as I do if the page doesn't load - discard it or get me interested enough to turn JavaScript on / use the "other" browser).

If you don't post your site on "Show HN", the missing "no-JavaScript message" is only a (small) lack of courtesy; if you do post it, it sounds (to me) like a plain lack of attention.

In the real world, you open a new shop in town and advertise in the local newspaper, "everyone is invited to the shop opening Tuesday at 7:00 P.M.", then you put at the door a couple of big guys who only let in those dressed "black tie". I do have a suitable attire, but I am not going home to change into it, not without having at least the possibility to peek inside your shop and see what it is about. Will I become a customer? Maybe yes, maybe not, but more likely not.


I would see it as more equivalent to a "No shirt. No shoes. No service." sign. The vast majority of people will already meet the criteria, and those that don't are already well aware that they are likely to run into problems.


This is only a valid analogy if in our real world I could make a tuxedo appear on my body with no more effort than snapping my fingers. (Which would be awesome.)


Honestly if you're so persistent that you must have a message that you need Javascript then you're not the type of customer I want viewing my site anyway. Someone that picky will find another problem anyway.


For #2 (JS abuse by site owners) I wonder if it might be useful to set size limits, like: maximum allowed bytes per resource type (.js, .css, .*), per file and per page, possibly with some local-vs-remote adjustments.

You could just read the Content-Length header and drop the connection if the site is over the user's limits (at which point you'd want to display a status message instead of the partial site).
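A rough sketch of how a browser extension could enforce that with the WebExtensions webRequest API (permissions and exact behaviour vary by browser, and the 100 KB limit is an arbitrary example):

    // Cancel any script over a per-resource size limit. Needs the
    // "webRequest", "webRequestBlocking" and host permissions in the manifest.
    var LIMIT_BYTES = 100 * 1024; // arbitrary example limit

    chrome.webRequest.onHeadersReceived.addListener(
      function (details) {
        var header = (details.responseHeaders || []).find(function (h) {
          return h.name.toLowerCase() === 'content-length';
        });
        if (header && parseInt(header.value, 10) > LIMIT_BYTES) {
          return { cancel: true }; // drop the oversized resource
        }
      },
      { urls: ['<all_urls>'], types: ['script'] },
      ['blocking', 'responseHeaders']
    );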


The sites that REALLY piss me off are the ones with a timed redirect like:

<meta http-equiv="refresh" content="1;url=http://www.example.com/nojavascript" />

which show a usable flash of the site for one second and then bounce me to a useless "You must enable JavaScript to use this site." page. By the flash I know that this is technically not correct.


NoScript includes a toggle to disable exactly this behaviour (meta tags within noscript tags). meta refreshes in general are pretty annoying as a user though


One problem is that more and more, people expect the browsing experience to include a lot of responsive features that require a bunch of client-side JavaScript.

Except JavaScript blows, and the emerging JS extension ecosystem that tries to fix that is still highly in flux (and mostly blows), so developers try to keep their server-side stuff NOT JavaScript, and then supporting server-side rendering means implementing everything twice.

Of course, this doesn't excuse document-centric sites like news and blogs.


1) It's irrelevant since it's the only language we have on the browser.

2) I think this is actually a cultural problem. Are users steering away from overcomplex/bloated/intrusive websites? If not, why not?

3) That's an argument similar to should a site support IE8? IMO it really boils down to the cost vs the benefit of supporting user without JS. And that of course will depend on your target users.


[flagged]


>#1 is largely a matter of personal opinions

>This is wrong.

... in your opinion ...


>>>#1 is largely a matter of personal opinions

>>This is wrong.

>... in your opinion ...

Yep. But sometimes my opinion is right ;-)


> For me it’s a matter of elegance and simplicity over unnecessary complexity.

Simplicity is having one place where the DOM is created and managed, and optimizing for the 99% use case.

Complexity is splitting up DOM rendering over two networked systems for servicing a 1% use case.

Browsers are JS runtimes now. Get over it.


Naw, I will keep running noscript and blocking script by default.

If your site doesn't work, I don't care, I will go to a different one.

I may be the minority now, and while there is some great use of js out there; most of it is bloated, slow, insecure, and often privacy destroying.


Not all websites that avoid JS are good: http://www.theworldsworstwebsiteever.com/

It is not a question of JavaScript or not. It is a question of a well-developed site.


At least that site loads quickly, doesn't call home with tracking info, or use up all of my expensive mobile bandwidth.


According to my uBlock Origin plugin, this page does use Google Analytics and Easy Counter.


I honestly think the security/privacy arguments are the only valid argument against requiring JS as standard.

The idea of browsers only sending data that the user has explicitly elected to send (in addition to, if we're being pedantic, whatever the server put there in the first place) is very compelling in this day and age. The only problem is that for that to have any meaningful impact on web security in the general case, you'd have to more or less universally deprecate JS support, which would set the web in a very different direction indeed.


What about power consumption as another valid argument? Your battery will last longer without Javascript.


That would be a valid reason to provide a JS alternative depending on context, not so much a valid argument for JS-free alternatives as standard.

There will always be something you can do to reduce power consumption: expecting (or advocating) JS-free versions of all websites for that reason is a completely arbitrary line to draw.


It will last even longer without CSS


And most websites that work without JS also work fine without CSS. Almost like a pattern that can be found in properly built websites.


You know - at one time we built plenty of websites - and in some cases applications - using little to no javascript (either it didn't exist, or it was only used for very minimal things) or any CSS (it didn't exist).

So what could we do today? That is, if we purposefully limited ourselves to just basic HTML and server-side processing, with no (or minimal) CSS or JavaScript, using only the (expanded) set of HTML tags and attributes we have now?

I don't know for sure - but I believe we could create some amazing things.

Think about it this way: Look at what demoscene people are able to create when they put extremely artificial challenges in front of themselves; we could try to do the same - and see what happens?

Perhaps there needs to be a demoscene-like competition space for this kind of stuff (how much can you get done in under 100k browser-side? 10k? 1k? Limit bandwidth to the backend, too, and maybe server-side size of code?).

Just some random thoughts - it might be something I'd have to try myself, but maybe this post might inspire someone to do it as well...


What kind of things are you imagining people would get done with such a limitation?

Recently I was using an expensive mobile connection and I discovered w3m, a text based browser (runs in the terminal). I was amazed at how much faster everything worked.

Most of the sites were still perfectly usable (w3m has pretty good HTML rendering) but some were very difficult to navigate.

My point is, without JS and CSS, the web is just... text.

And when you browse with w3m you realize, that actually the web has always been text. It can be quite refreshing to get all the decorative crap out of the way.

--

edit: I now see you didn't say to remove JS, but to challenge yourself to see how much you can make in a tiny amount of JS. Such competitions exist!

web pages https://a-k-apart.com/

java games http://www.java4k.com/


WTF? You do realize most of the users on the internet don't even know what CSS/JS is and would be thoroughly displeased at being presented with a page with no styling. They don't appreciate that the rendering is done without JS/CSS. A properly built website is one that caters to most users, not developers or power users concerned about JS being enabled in 2017.


Why doesn't the market care? Where is the JS-less Facebook/Twitter/Airbnb competitor that blows the customers away with "properly built" websites?


As the number of people who won't put up with the JS crap anymore and start using JS blockers increases, we will see such websites.

Same thing that is happening with ads and ad blockers.


Ad blockers are now going in the other direction with non-intrusive ads being shown.


So far as I know there has been exactly one such ad blocker released, and it was promptly abandoned in favor of the kind of adblockers people actually want to use.

All ads are intrusive to some degree.


I would be extremely happy to use a JS blocker that allows non-intrusive JS.


Privacy Badger is what you're looking for.

https://www.eff.org/privacybadger


In the Facebook case this is supported by Facebook themselves. mbasic.facebook.com :)


It will last longer without a screen too.


>I may be the minority now,

You will also be in the minority later


Sure, but he only has to wait for the cycle to swing around again. Browsers will officially turn into JS/WASM runtimes. Sites will balloon until instant delivery is no longer feasible. Browsers will respond by allowing sites to version themselves to aggressively cache client code. Eventually browsers will start offering site-specific-browser features to 'new-modern' sites to combat the extreme waste of Electron and the like.

Someone with political capital and influence in the developer world will realize that the web is still mostly text and design a browser optimized for that use case. It will be hailed as a revolution and people will start preferring it over the 'old-web' for accessing static content like news.

Then under pressure from developers pushing the limits of static content they will include a simple, limited scripting language that promises to facilitate basic interactive elements and start the cycle anew.

We should call it something trendy, maybe 'The River'?


> Browsers are JS runtimes now. Get over it.

Sorry, I won't. I don't trust you and I won't simply run your code by default. It's not like I actually need whatever your web site wants to provide; there are plenty of other things to do on the internet.


> Browsers are JS runtimes now. Get over it.

Sure, dude, ignore anyone without a machine that can run MBs of JS within a reasonable time.


> Sure, dude, ignore anyone without a machine that can run MBs of JS within a reasonable time.

I'd wager 90% of machines can run even the strawman you're building.


"Cell phones."

I'm amazed at how many websites there are on my phone that take 15+ seconds to load... and they've clearly been "optimized" for phones in the sense that the site reacts to being on a mobile browser and lays itself out for a phone... if I wait for the whole thing to load.

(I see this less often, but it's also amusing how many sites pop up their "hey, you've been here for three seconds so clearly we are an integral part of your life from now on even though the main page layout isn't actually done rendering, would you please like us on facebook, subscribe to our newsletter, and tell us your mother's maiden name and SSN? [allow this site to use location services [yes] [no]] [allow this site to push notifications to your home screen [yes] [no]] [allow this site to enter your bedroom at 3am to remind you how much it loves you [yes] [no]]" dialog where both the X to close out and the submit button is unreachable, because the position and zoom is fixed and the whole thing is too large.)


This is a question about JS vs no JS, not about optimizations of JS.


The fastest code is the code which never gets written.


That is fair, but when was the last time a JS-only site was actually running optimized JS?


All Google products. But it is ridiculous to disregard JavaScript because some people can't write it properly. It is the same as saying I wouldn't use a browser because I once went to a website that displayed a popup.


It is not ridiculous; machine-crippling JS malware is a click away and you will never know until you click. The safe strategy is JS off by default plus whitelisting.

> wouldn't use a browser because I once went to a website that displayed a popup

You would use a popup blocker, though.


> machine crippling JS malware is a click a way

I have been using the internet nearly daily for about 20 years now. I can't recall a single time I clicked on something that gave me "machine crippling JS malware".

Barring some serious security gaps, JS isn't even capable of doing anything more than lock up the browser and maybe send some extremely minimal information about you back to someone else's server.


> Barring some serious security gaps, JS isn't even capable of doing anything more than lock up the browser and maybe send some extremely minimal information about you back to someone elses server.

Not true. The first JavaScript Rowhammer exploit was publicized in 2015: https://www.youtube.com/watch?v=9L5MJ43nbkI


Interesting. I stand corrected.


Crippling in the sense that I have to forcibly kill whatever browser I'm using because it is using 100% CPU and the machine becomes very unresponsive.


There is a massive difference between downloading tonnes of JS every other day because there's a new React bundle and having that annoyance _once_.


Why are you trying to browse the modern Web on your Windows 98 potato from last century?


My core 2 potato^Wlaptop from 2008 can run pretty much everything I throw at it quite well, except for the "modern" web.


What are you throwing at it? Word 2008? Command line? Winamp?


If what you say is true, Gopher needs to make a resurgence; it's immune to the kind of cancers riddling the modern web.


I'd be very interested to see some proper analysis of the speed difference between plain HTML/CSS and JS-driven pages. It seems obvious to me that the former would be much faster, but then again I know how often "obvious" things are wrong.


As a very general answer, JS webapps will typically be slower on the first load because they require an extra step to show the content.

Subsequently loaded pages or resources can be significantly faster in webapps, however, as they can request (or generate) exactly what they need.

The problem of slow initial loads is being addressed by tools like React by offering server-generated pages on the first load, and then transitioning to JS-based loading for subsequent loads. This offers the best of both worlds.
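A minimal sketch of that server-render-then-hydrate pattern (illustrative file and component names, not any particular site's code):

  // --- server.js: render the initial HTML on the server (Express + React assumed) ---
  const express = require('express');
  const React = require('react');
  const { renderToString } = require('react-dom/server');
  const App = require('./App'); // hypothetical app component

  express().get('*', (req, res) => {
    const html = renderToString(React.createElement(App, { url: req.url }));
    res.send(`<div id="root">${html}</div><script src="/bundle.js"></script>`);
  }).listen(3000);

  // --- client.js: attach event handlers to the already-rendered markup ---
  const React = require('react');
  const ReactDOM = require('react-dom');
  const App = require('./App');

  ReactDOM.hydrate(
    React.createElement(App, { url: window.location.pathname }),
    document.getElementById('root')
  );

First load gets real HTML from the server; once hydrated, subsequent navigation can be handled client-side.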


Compare the list of sites on https://hnpwa.com/ with this: https://news.ycombinator.com/


Would you prefer that Hacker News reloaded the page every time you voted on a comment?


I wouldn't mind. But it doesn't work that way, look:

https://vimeo.com/58808364


Interesting video, but it still seems to be a JS-driven technique?


Even without JS, there are tricks to make the upvote button work without refreshing the page. HN used to do this, not sure if it still does.


Do you have a reference to any of those tricks? Sounds interesting.


You could use a link with a background-image

  <style>
    .vote-42:active { background-image: url(/api/vote/42); }
  </style>
  <a class="vote-42">Vote</a>


I ran a quick test. That actually seems to work, at least in Chrome. Very cool!

https://codepen.io/anon/pen/weMwRz

Realistically, I'd probably never use it because it's so dependent on browser behavior, but still a cool alternative.


100% of users want websites to load faster though - that's a fairly big market share.

Obviously you can't make a web app without JS, but I still think you should minimise JS, or remove it if the site truly is static. It's just a better experience.


[flagged]


I'm not


One thing to consider is that at larger companies, we already have a difficult enough time testing the whole site with JavaScript enabled. Netflix building a version that works without JavaScript is like asking them to build a second website, especially from a QA perspective. When you look at your users and find that 0.5% of them are not running JavaScript, how do you justify spending the money on that few people?


The NoScript extension is the 5th most downloaded extension for Firefox (https://addons.mozilla.org/en-US/firefox/extensions/?sort=us...). If you want to show a blank screen or a broken website to those users, that's up to you. Some of them may really want to see your site, so they might try to figure out which of the twenty domains they should enable, and which of the thirty they should enable after that one doesn't help them. Most will likely click on the back icon and see what's next on the list.


Yes, and Firefox is the 3rd or 4th most used browser, at less than 10% share (which is bad). So the subset of actual Firefox users that use NoScript is absolutely not worth supporting.


Sure, but we're not talking extreme niche here or anything. It was downloaded by over two million users.


And the vast majority of those, if they want to use netflix, just whitelist it anyway. The amount of revenue lost must be tiny compared to the costs.


How do you know the percentage? Some stuff you may just never know. I very consciously avoid websites I know use a lot of JS or are bloated on mobile, mostly news websites.

I often read on the subway, where I need to load the page in 3-4 seconds or I'm in the tunnel again without a signal. In some stations I only get 50 kbit/s or so.

Most websites don't cut it so I don't even bother. Those who cut it though, I get engaged with more, because I can't go anywhere else without a signal.

In an ideal world I would write a scraper plus an alternate website that hosts only the content and some simple navigation over plain HTML, preferably with as little CSS as possible. There's the annoyance of copyright though, so no way of making this public.


The reason is mobile. While only 0.5% of your users may be non-JavaScript users, it's highly likely that a significant percentage of your users are on mobile, and not optimizing for progressive enhancement means a significantly degraded experience for all of them whenever the JavaScript fails to load, or loads too slowly, on a mobile connection.


I can stand slow loads; my phone has wifi after all. But JS bloatware doesn't even allow me to scroll without seconds of lag and random misclicks due to delayed event dispatch. I turned JS off not because of security concerns, but because most JS-only sites are not scrollable beyond the first screen anyway.


To help our (internal) customers better troubleshoot, we're adding HTML responses to our HTTP (web) services. Add a header, a footer, maybe some HATEOAS... It starts looking like a barebones web site.


Netflix is one of those sites that makes me want to run NoScript. It regularly makes Firefox unresponsive and eventually pops up a "this script is using too many resources" warning. When a large tech company can't create working JavaScript for a minimally interactive interface, something is seriously wrong.

Less JavaScript would be an improvement for everyone, not just the NoScript users.


Try doing some automated testing; it'll take care of most of the non-JS testing and a lot of the JS-enabled testing as well. Exploratory testing is another thing that QA really does need to be involved with, but if you can't test static content without human intervention, you have other problems that have nothing to do with whether JS is turned on or off.


They can't justify the cost. I wouldn't want them to, anyway. It would mean fewer improvements for their non-crippled site as resources are diverted for the sliver of people who like to be contrarian.


> fewer improvements

so, win-win?


>Verdict: Cartography catastrophe.

As much as I hate JavaScript, Google Maps gets a pass. I'm pretty sure the code behind it is thoroughly tested. We can just suck it up and turn JavaScript on for one of the most useful tools on the internet -- how hard is it to find something that whitelists domains to run JS anyway?


While I agree with you, I still want to give them points for admitting their failure. It's a cute image, with a cute line that tells you exactly what the problem is.

Compare that to sites where you just get a blank page.


Firefox has NoScript, and perhaps another tool I forget the name of (never tested it). Not sure if there is anything similar for Chrome (and the various offshoots/rebrands) or Edge, never mind IE or Safari.


uMatrix (from the uBlock Origin author) has very fine whitelisting controls.


If I recall correctly there was an old version of Google Maps which worked without JS. It was before they replaced the old snappy and functional Google Maps with the bloated and slow app we have now.


Google maps is a useful tool.

It also had a habit of completely freezing an older (Core 2) laptop I was still using (with Firefox). Not just the browser, the whole OS. Had to power cycle. From time to time I'd forget and go back to Google Maps, where the cycle would repeat.

Why was I still using a core2? Because it worked fine for almost everything else. Got a better second laptop and no more issues but still... there wasn't anything wrong with the first.


It's probably WebGL-related. Some Firefox update seems to have fixed it for me recently, but for quite a while I could only visit Google Maps in a brand new Firefox instance, because any attempt to use it in a well-aged process would crash Firefox and emit graphics errors to dmesg. You may have a poorly-supported WebGL stack.


Agreed. I just dusted off an old laptop with Ubuntu 14.04 (or maybe it was 14.05), tried to run a WebGL app I'm making off of localhost, and the whole computer nearly locked up (even after updating Firefox). Chrome ran it at an easy 60fps with whatever version was installed on the computer three years ago.

Now that I've updated to 16.04, Firefox runs better than it did, but Chrome still blows it away.


There should be something as a fallback, though, surely? Take out all the functionality, by all means, but at least give me a map image. If you can't even get link-to-zoom working to display different images, just give me a map image based on my IP location; give me anything rather than nothing.


Even if you could just put in two addresses and get directions, that would be better than nothing.


Remember the HTML map tag? [1] There is a huge amount they could do to make maps usable without JS (or even CSS). Zooming and panning would be pretty easy to implement, but would involve reloading the page with new coords (rough sketch below).

1. https://developer.mozilla.org/en/docs/Web/HTML/Element/map
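Something along these lines, for instance (hypothetical tile-rendering endpoint and coordinates); each click is just a normal page load with new query parameters:

  <!-- server renders a static image for the current view -->
  <img src="/render?lat=51.50&lng=-0.12&zoom=12" usemap="#nav" width="600" height="400" alt="Map">
  <map name="nav">
    <area shape="rect" coords="0,0,600,40"      href="/map?lat=51.55&lng=-0.12&zoom=12" alt="Pan north">
    <area shape="rect" coords="0,360,600,400"   href="/map?lat=51.45&lng=-0.12&zoom=12" alt="Pan south">
    <area shape="rect" coords="250,150,350,250" href="/map?lat=51.50&lng=-0.12&zoom=13" alt="Zoom in">
  </map>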


Can you do lazy loading with just HTML and CSS? You can load the map around the GPS coordinates you start on, but what if you go 20 miles north? How do you A) figure out the new GPS coordinates your viewport is looking at, and B) load the map within the viewport, centered around those new coordinates, without AJAX?


How would you "go 20 miles north" without JavaScript? The only sensible way would be by clicking a link, at which point you can load your new tiles.


I think OpenLayers did this a looooooong time ago.


Google maps did this a looooooong time ago.


They did provide a fallback: an old MapQuest-like UI without JS, until a few years ago. Then they stopped, possibly because of the lack of users.

