Living without the modern browser (an3223.github.io)
206 points by skilled 10 days ago | 261 comments





> JavaScript, captchas, and logins, are the main “gotchas” for text-based browsing, if the functionality of a site (or part of the site) relies on any of these then most likely it will not work.

Really, there is no reason why a form should require JavaScript. When jQuery was hip, the js community celebrated "Graceful degradation". I frequently have the feeling this attitude is lost. That's super sad because it also excludes all the handycapped people.

Does anybody even remember the good old HTTP basic access authentication? This is one of the most accessible ways of protecting resources, it can be consumed by any HTTP client (!), and it is reduced to the basics.

One of the worst things is these terrible captchas everywhere. I wonder why we cannot come up with a web standard interface in a way the captchas could be offloaded to customizable and adaptable GUIs rendered by the browsing client. This way, clients (users) could even declare what kind of captcha they can solve (blind people obviously cannot read images).


> Does anybody even remember the good old HTTP basic access authentication?

old, yes. good, no.

* Browsers never implemented a way to "log out", so creds end up cached in memory until the browser is restarted.

* There is no concept of a session, so the credentials need to be included in each request and verified every time. If you use something like bcrypt or PBKDF2 on the backend, that means every request will take hundreds of milliseconds.

* Since every request needs to be verified, you can't easily use 2FA without ugly hacks that 'remember' that the earlier creds were valid.

Hopefully things like WebAuthn will result in a resurgence of browser-native authentication.
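To make the per-request cost concrete, here is a minimal sketch of a Basic auth check (not from the thread; it assumes Node with Express and bcrypt, and the user table is just a stub):

    // Minimal sketch: HTTP Basic auth verified on every single request.
    const express = require('express');
    const bcrypt = require('bcrypt');

    const app = express();
    const users = { alice: bcrypt.hashSync('correct horse', 10) }; // stubbed user table

    app.use((req, res, next) => {
      const header = req.headers.authorization || '';
      if (!header.startsWith('Basic ')) {
        res.set('WWW-Authenticate', 'Basic realm="example"');
        return res.status(401).end();
      }
      const decoded = Buffer.from(header.slice(6), 'base64').toString();
      const sep = decoded.indexOf(':');
      const user = decoded.slice(0, sep);
      const pass = decoded.slice(sep + 1);
      // The password travels with every request, so every request pays
      // the (deliberately slow) bcrypt comparison mentioned above.
      if (!users[user] || !bcrypt.compareSync(pass, users[user])) {
        return res.status(401).end();
      }
      next();
    });

    app.get('/', (req, res) => res.send('hello'));
    app.listen(3000);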


How does a "session" exist without "credentials" that have to be verified every time? HTTP is stateless, so even non-native authentication solutions have to deal with these problems. Why can't your server generate a "session token" and require it to be passed using basic auth? Wouldn't you want to hash these "session tokens" on your server in case it becomes compromised?

Non-native authentication solutions do deal with these problems. HTTP Basic auth does not. Full stop.

> Does anybody even remember the good old HTTP basic access authentication?

Yes! It was great! People would just transmit username and password in plaintext!

Wait. That's actually the opposite of great. That's why nobody uses it any more.

And no, handicapped people aren't excluded just because something doesn't degrade gracefully into a no-JS environment. You can (and many people do) make perfectly accessible web sites that still require JS.

As for offloading captchas to the client? That suffers a similar problem to basic auth - it's extremely vulnerable because attackers can obtain access to information that makes their job trivial. That's why we cannot "come up with a standard" - the client has to be treated as hostile, because there's no way to verify their integrity.


> Yes! It was great! People would just transmit username and password in plaintext!

> Wait. That's actually the opposite of great. That's why nobody uses it any more.

Unless, of course, you're using HTTPS, in which case HTTP basic auth is fine and very simple to configure for many use cases.


Yeah, I'm kind of surprised that the age of HTTPS-All-The-Things hasn't led to a resurgence in the use of Basic Auth. As you note, the security problems that have been the big knock on Basic Auth for decades are completely obviated if your whole site is served over HTTPS.

My guess on the reason why is two-fold. First, we've been telling people that Basic Auth is a terrible thing only stupid people would use for more than two decades now, and once a lesson like that gets hammered deep enough into the mass consciousness it tends to stay there even long after the underlying facts have changed. And second, using Basic Auth means giving up any possibility of styling/designing your login experience and relying instead on the browser's built-in dialogs, which tend to be clunky and ugly in the extreme.


The UX for basic auth is terrible on most browsers. In Firefox, the dialog box refuses to give up focus, preventing me from copying the password from my password manager (which, while integrated into the browser, doesn't support filling basic auth dialogs).

I've had reasonably good experiences with filling out Basic Auth dialogs in Firefox via KeePassXC/KeePassXC-Browser. But by "reasonably good" I mean that it works, the password gets filled in, but the whole experience is still more clunky and error-prone than filling in an HTML login form would be. So yeah, lots of room for improvement here.

It's still a security risk. MITM happens regularly. Which means you better constrain it to either low-risk auth, or to networks you completely control.

For further risks, see e.g. https://security.stackexchange.com/questions/988/is-basic-au...

Yes, it's simple. And you're trading off security for simplicity. Make that choice, maybe, for your Intranet or your small blog. Commercial sites that have stored PII really, REALLY shouldn't.


> It's still a security risk. MITM happens regularly.

Then it's more or less as broken as alternative authentication methods


Nope. Because other methods never transmit a password in cleartext. (And they especially don't transmit it with every single request)

How do they manage this feat over unencrypted stateless http? Surely they must transmit something with every request?

Thanks for the link, it's a good summary of the trade offs.

I think any thorough investigation will conclude that there is, in fact, no way to securely authenticate a web user over unencrypted http.

> Really, there is no reason why a form should require JavaScript.

I’m a web developer who has never done significant forms without JavaScript.

So I’m curious: how would you handle reactive form fields without client side scripting? For example (perhaps a bad example): the user is entering data in “rows”. The user clicks “add row” and then a new row of fields appears. The user can also delete rows on a whim. Their changes should only be persisted once submit is clicked.

Would the “right” way be a full page reload with the row added/deleted, and caching all the values?

Not to mention, if fields require cross validation, is it customary to have to submit the form to get the validation error messages to occur? On some of the forms I’ve worked on, this would be very complex.


Split the form into pages, and use session variables to maintain state. Regarding form fields appearing and all your fancy front end logic: none of this would fly in user testing.

I find it mind-boggling there’s a generation of developers after me that doesn’t know the basics of server side state. No offence intended to yourself, and good on you for putting yourself out there to ask :)
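For illustration, a rough sketch of that approach (Express plus express-session here; routes, field names and the inline HTML are invented, and values are not HTML-escaped): "add row" and "delete row" are ordinary form posts, and the rows live in the session until the final submit.

    // Sketch: add/delete rows with full page reloads, state kept server-side.
    const express = require('express');
    const session = require('express-session');

    const app = express();
    app.use(express.urlencoded({ extended: false }));
    app.use(session({ secret: 'dev only', resave: false, saveUninitialized: true }));

    const render = rows => `
      <form method="POST">
        ${rows.map((row, i) => `
          <p><input name="row${i}" value="${row}">
          <button name="delete" value="${i}">Delete row</button></p>`).join('')}
        <button name="add" value="1">Add row</button>
        <button name="save" value="1">Submit</button>
      </form>`;

    app.get('/', (req, res) => {
      req.session.rows = req.session.rows || [''];
      res.send(render(req.session.rows));
    });

    app.post('/', (req, res) => {
      // Pick up whatever the user typed so nothing is lost across the reload.
      const rows = req.session.rows.map((_, i) => req.body['row' + i] ?? '');
      if (req.body.add) rows.push('');
      if (req.body.delete !== undefined) rows.splice(Number(req.body.delete), 1);
      if (req.body.save) { /* validate and persist here, then clear the session */ }
      req.session.rows = rows;
      res.redirect('/'); // POST-redirect-GET, so the back button behaves
    });

    app.listen(3000);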


As a web developer for the past 20 years, I do know how server-side state works. What baffles me, while there is certainly much to complain about in the state of front-end JS dev today, is the attitude of some that moving the web back 15 years is a good thing.

There is a reason single-page apps became a thing, so I'm not clear why you think a page refresh for every user interaction is somehow a good thing. The dismissive attitude that "Regarding form fields appearing and all your fancy front end logic: none of this would fly in user testing" flies in the face of how 90% of heavy consumer front ends out there work today.


> There is a reason single-page apps became a thing, so I'm not clear why you think a page refresh for every user interaction is somehow a good thing.

So the reason I always hear is performance: fewer HTTP requests, less latency, therefore better user experience. While that might be the case for flaky mobile connections (although executing lots of Javascript on resource-constrained devices might not be the best idea either), with fast wired connections I experience the opposite: the single page apps of today feel a lot slower to me than the websites with the same functionality did 10 years ago, thanks to all the computation which has to be done on the client side. Just take Youtube for example: what a bad experience nowadays, especially with Firefox. That was way better back in a time when they didn't try to avoid page refreshes. And it's not just Youtube, it's all over the web. A welcome change to all of that is HN, because it still feels as fast as websites did before frontend development was even a thing.


You can disable the new polymer layout for YouTube on Firefox using a console command. It brings back the old layout which is much faster and more functional.

Performance of JS is much more controllable than round-trip latency.

> the attitude of some that moving the web back 15 years is a good thing.

Straw man. We don’t want to go 15 years back, only 10 :-)

Seriously, the web was at a local maximum around 2009. Google worked, GMail was OK, Ajax was expected for new stuff but was still written with graceful degradation in mind.


> What baffles me, while there is certainly much to complain about in the state of front-end JS dev today, is the attitude of some that moving the web back 15 years is a good thing.

The problem is that many people, for various reasons (accessibility, poor network connection, low-end hardware, etc.) are stuck "in the past" with regards to the web. This doesn't mean you have to make this the experience for everyone using your website: just make sure it degrades gracefully for those who can't use the new shiny stuff.


This is often much easier said than done. Graceful degradation isn't free, and for many types of applications, simply isn't worth the effort to support.

> Graceful degradation isn't free, and for many types of applications, simply isn't worth the effort to support.

I hate to say it, but you're essentially saying "we can't afford to support people who are disabled, cannot afford the newest hardware, or are simply driving through a place with poor reception".


Yes, unfortunately that's exactly what I'm saying. In a perfect world, every app would be fully accessible, colorblind-friendly, work offline, etc etc. But in reality, all of those things cost money, and sometimes the cost outweighs the benefits. To give an extreme example, if my app has 1 million users, but only one is blind, do you expect me to spend (potentially significant) development resources on that single user?

You have to do that cost-benefit analysis based on how important those things are to your business—for instance, screen reader support is an absolute necessity for the public sector and certain information-heavy apps, but probably not as important for visual tools.


It's absolutely ridiculous how some content-based websites require several MB of js to function. And the devs say this is because it is 'easier' to write such apps. And if the thought of optimization even comes to their minds, it's just adding another layer on top of this madness to manage the layers below, or saying it's too hard. Of course it is hard if you start with an overcomplex solution in the first place and try to trim away stuff from it. How hard exactly is it to write some plain HTML and CSS, with js doing some optional improvements? Most problems of web development are self-inflicted.

I was talking to someone once about how ridiculously bloated Imgur is and that I had to use Imgoat because of the slow internet I was experiencing then. Of course they were trying to make all kinds of excuses, like Imgur not having servers nearby, etc. But the really simple fact they couldn't get into their heads is that Imgur downloads maybe 2-3 MB or more of cruft on top of the actual image, whereas Imgoat is something like 500 KB. No amount of network optimizations, caches, etc. is going to get over this fact. And such optimizations are over and above the page download size improvements anyway; if Imgoat used such techniques it'd be even faster. Most importantly: both sites have the exact same simple core function, yet one site is so many times more bloated than the other. I will let you derive your own conclusions as to what this means.

And imgur also has the typical horrendous js misuses such as scrollbar hijacking that makes absolutely no sense etc.


There is nothing preventing a disabled person’s browser from executing JavaScript; screen readers are generally not concerned with how an HTML element was created (server side or client side), but just read what’s on the page.

>just make sure it degrades gracefully

Okay, example: a client pays me to make a somewhat interactive website. Client wants it to be done using React as that is easy to get other devs for in the future, if the need arises. Client doesn't want to pay double (random estimate) to make everything behave in e.g. Lynx. Do I do it out of pocket, or just say no to the client?


Yeah, there are a lot of ivory tower developers here with little to no real-world experience, who have never done anything of any substance or complexity, pushing their tiny, limited experience and knowledge, and you can spot them from miles away in these comment sections. Those of us who work in the real world know how silly some of these comments are.

Please don't post personal attacks here; they contribute nothing to the discussion.

I think you have a duty to your client to tell them why they should be paying you double to do the job right. If they don't, well, Hacker News is full of people like me to shame them for excluding people from using their website.

> If they don't, well, Hacker News is full of people like me to shame them for excluding people from using their website.

And the absolute number of HN people like you shaming folks for not supporting text-mode browsers like lynx is a rounding error for most websites.


Develop with React but use something like Next.js to do Server Side Rendering (SSR)?

There's a huge spectrum between SPAs and pure JS-less apps.
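For what it's worth, the core of SSR is small enough to sketch without Next.js specifically (react-dom/server only; the component and route below are invented for the example): the server sends real HTML up front, and a client bundle, if it ever loads, can hydrate it.

    // Sketch: serve the same React component as plain HTML first.
    const express = require('express');
    const React = require('react');
    const { renderToString } = require('react-dom/server');

    // A hypothetical component; in a real app this is shared with the client bundle.
    const App = ({ items }) =>
      React.createElement('ul', null,
        items.map((item, i) => React.createElement('li', { key: i }, item)));

    const app = express();
    app.get('/', (req, res) => {
      const html = renderToString(React.createElement(App, { items: ['a', 'b', 'c'] }));
      res.send(`<!doctype html>
        <div id="root">${html}</div>
        <script src="/bundle.js"></script> <!-- hydrates only if JS is available -->`);
    });
    app.listen(3000);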

Single-page apps became a thing because of lack of development resources to build native apps or properly engineer server side solutions (as well as a business desire to centralize and lock down the product so people have to experience ads, pay subscriptions...). It has nothing to do with creating a user friendly experience. 100% of the time a user would benefit from a native app or properly engineered server-side rendered web page over a SPA.

> It has nothing to do with creating a user friendly experience.

This is absolutely not true. Think about any kind of dynamic dashboard or data management platform. Having full page reloads for any minor change to filters, etc, is simply an unacceptable UX. But it has to be web-based because you need to be able to access it from any device, anywhere, without installing an app.

Plus, on a personal level, there are many services for which I will never install a native app—because I don't use it often, or because the native app provides no added benefit, or for one of a hundred other reasons. Those services don't get my business if they don't have a web app.


That is a significant, but limited-use, case.

There's no reason a static content page must require JS simply to render text, with a blank page as the fallback.

Better ways of addressing the dashboard example might be worth considering.

Too: bullshit dashboards with JS-based fakery;

https://www.theverge.com/2016/11/8/13571216/new-york-times-e...


Limited? That describes about half the applications I've built in my career!

In all seriousness, there is a big difference between static content pages and full-blown applications; the cost to make the latter work properly without JS enabled is almost never worth it. But the same is true even for some static pages; for instance, I would never expect that NY times election forecast page to work without JavaScript, and don't see any reason that it should. (The "JS-based fakery" is pretty irrelevant to the discussion, IMO.)


What fraction of all HTML pageloads?

And: Sampling bias. A simple static page shouldn't require development. Hence, anyone doing web dev will have experience skewed strongly to more complex designs.

How would you describe the basic underlying structure and mechanics of a typical app you've built?


> 100% of the time a user would benefit from a native app

I strongly prefer single-page web apps over native apps. Web apps are constrained by my browser. Native apps get a lot more trust.


That's a problem we should have fixed on the platform side long ago, but didn't. Now we're stuck with the garbage fire of the modern web. That doesn't make it better.

I have uBlock Origin on my browser. I can even change the font-size on a web page if I need to. A native app can be doing anything it wants without my knowledge, and I can't even change the font-size.

Native apps have a performance benefit at the expense of everything we take for granted on the web, like being able to open up a developer console and inspect it.

For example, I noticed that the top language translation app on the Mac App Store incurs a Google Analytics request for every keystroke. And since there's no easy network tab I can open on any app, I only found it out because I was curious how it worked and mitm'd it with a proxy. And I couldn't just toss uBlock Origin at it to de-trackify it like I can any web app.

Doesn't seem like a 100% epic win for users to me. In fact, the only upside I can even think of is that it probably uses less RAM than a Chrome tab opened to Google Translate. Let's slow down and acknowledge the trade-offs before we go all-in on HN memes like native app superiority.


> 100% of the time

Nothing in software engineering is absolute. And even if something were to be 100% the correct solution one day, the next it could change.


I agree, but there is one problem: most people would rather use an SPA than have to download an app and have to keep it up to date themselves.

I wonder if there's a market for an alternative "browser" that loads "native" code (or as close as practically possible) and runs it in a sandbox.


> I wonder if there's a market for an alternative "browser" that loads "native" code (or as close as practically possible) and runs it in a sandbox.

This is probably not what you had in mind, and I'm not sure how good it would be at sandboxing, but REBOL Reblets imho come close to this <http://www.rebol.com/reblets.html>


The question is what "browser" means. If viewing data connected by hyperlinks is the goal, that's pretty easy. Rebol's approach to X-Net (Executable Internet) apps is very different, with the goal of being a way to share both data and active content. You can do both, the web we know just wasn't designed for that.

> I wonder if there's a market for an alternative "browser" that loads "native" code (or as close as practically possible) and runs it in a sandbox.

Electron comes close, but many don’t like it at all. (I have somewhat mixed experiences, I love VS Code but I’m not too fond of Slack.)


Electron isn't native in any sense of the word. I meant something more like OS-specific binary executables that get loaded and executed in a sandbox.

Mike Acton once said something about having created a browser plugin his team used that loaded and ran binary exe tools on demand, but that it was also wildly insecure. At least I think he said something like that. My memory could be wrong about that.


We had this 20 years ago with Java (not native obviously) and ActiveX (Windows x86 only.)

> most people would rather use an SPA than have to download an app and have to keep it up to date themselves

Citation?


> Citation?

No.


Hah, you mean Java applets? Everything old is new again.

I meant something more like OS-specific binary executables that get loaded and executed in a sandbox.

Since they specifically point out back-end state and full page reloads as an option, they're clearly asking from a place of incredulity. As in, "Wait, HNers actually think this is good UX?"

And no, moving client-side logic to the back-end isn't self-evidently superior. Not for your implementation nor for user UX. So you'd have to use more than condescension to make your point. Full page reloads for an "Add Row" button died for more reasons than just "web developers are idiots and don't realize how everyone actually loved that."


Until someone presses the back button. :)

I find it mind-boggling that people forget the hell of flash scope and the back button and broken history and POST-redirect-GET. Session state was miserable and the best thing about SPAs is avoiding that.


The truth is SPAs have to re-invent all of that on the client side. More SPAs have broken back button and history than server side apps ever did.

Yes. Use JavaScript for adding rows and validation, and fall back to new requests for both tasks if JavaScript is disabled. This is not difficult to do.
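A rough sketch of that pattern on the client side (the data-enhance hook and the #rows container are invented for the example): the form is a normal POST that works on its own, and the script only upgrades it when it actually runs.

    // The plain <form action="/rows" method="POST"> keeps working without this
    // script; when JS does run, it takes over the same endpoint in place.
    document.querySelectorAll('form[data-enhance]').forEach(form => {
      form.addEventListener('submit', async event => {
        event.preventDefault(); // only reached when JavaScript is enabled
        const response = await fetch(form.action, {
          method: form.method,
          body: new FormData(form),
        });
        if (!response.ok) {
          form.submit(); // fall back to an ordinary full-page submission
          return;
        }
        // Swap in the server-rendered fragment instead of reloading the page.
        document.querySelector('#rows').innerHTML = await response.text();
      });
    });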

It's not difficult to do, I agree, but time spent doing it, testing it, maintaining it carries the opportunity cost of doing other things that may be more valuable.

Usually, such things are in abundance and take priority over supporting browsers without JS. For the vast majority of apps and businesses, JS as a minimum entry requirement is a perfectly acceptable tradeoff of lost users vs additional development cost.


I don't disagree. Heck, all of the websites I maintain for work require JS (although to be fair I also control the requirements for users: the latest version of Firefox or Chrome). I do wish the cost of requiring JavaScript was higher, though.

Genuinely interested, not assuming/implying I'm right - why?

I think just insisting on JS is, for most applications, the best thing for the most people. If the app is such that it has maximum availability as one of its top goals (government public service, public infrastructure, bill paying etc) I completely get it, but otherwise I'm just not convinced that not requiring JS is somehow inherently good.

If you'll allow a pretty weak analogy, it's almost like refusing to use a car and insisting upon using a horse and cart. You may have your own good reasons for doing that, and that's completely fine, you are free to make that choice, but you absolutely should expect the modern roads not to cater to you, and to be very often inconvenienced. Indeed, you shouldn't really expect society to spend lots of tax money catering to your choice and designing all roads, junctions, etc with it in mind, beyond the very minimum level so you can at least use the most important roads.


It's more like suggesting that perhaps sometimes walking is better than taking a car, while more fashionable developers insist on trying to use the car to go from the kitchen to the living room. And you still have to walk to the car.

I would use instead the analogy of a car and a bike. And indeed society does cater toward the bike, in many places.

But rather than any analogy, think about how many times more powerful a modern computer is than a computer from 2000. Now think about how many times more "things" you can do on that computer simultaneously. The latter is much smaller than the former.


Well, it depends on who your users are and the regulatory requirements you are operating within.

A US college or university site can be sued for lack of Section 508 compliance.

So, it goes beyond development costs to legal risk of lack of compliance.

OTOH, a site selling cars probably does not need to worry about potential car-buyers using screen-readers.


As other comments have pointed out, there is no causal relationship between the use of JS and lack of accessibility, screen reader compatibility etc. So supporting no-JS and being accessible are two different concerns.

I'm absolutely all for accessibility and would never argue against making things accessible as part of some cost-based tradeoff. Frankly this shouldn't even be an issue that comes up if you 'just do things right' from the get go - but even if it does, the imperative is always on you to just fix it.

There might be a correlation where apps that happen to use JS are more likely to be written in a way which is less accessible by default, but that's another conversation.

With that out of the way - totally agree with you, for certain apps with regulatory requirements insisting that JS not be required for use, that's what you do - and for good reason - max availability to anyone for public services etc - that makes complete sense.


I don’t disagree.

The point I may have misunderstood is putting development effort into a SPA that only uses JS and works well with a screenreader versus the non-JS alternative that also works well with a screenreader.

Of course, it depends on where in the development process the developer becomes ‘aware’ of making the site compliant.

As you said, ‘do the right thing from the get go’.


In the old days the user would submit the form, the validation would happen on the server, then return a result. If the result was a failure, you would render the errors on a new page load. If the result was a success, you could redirect the user to a "confirmation" page or reload the same page but with a different output.

> I’m a web developer who has never done significant forms without JavaScript.

Perhaps you should give it a try and learn how to do it. That way you'll know when building non-js fallbacks is feasible and how to do it easily.

> is it customary to have to submit the form to get the validation error messages to occur?

You trust the client to perform validation? Client-side validation should only be used to improve UX, not as a substitute for server-side validation.


99% of the forms I run into in the wild are one-page forms without any interactivity, and just a single "Submit" button. Yet these web sites usually require Javascript (usually for tracking and ad delivery).

Something like 90% of sites I run into deliver static content, yet almost all use Javascript (again, for tracking and ad delivery).

Fortunately, the overwhelming majority of these sites still work great or at least good enough without Javascript.

Most sites out there really have no legitimate need for Javascript, with the exception of truly interactive sites like ones that let you play games in the browser, let you use some sort of application (like a spreadsheet, word processor, graphics program, or, arguably, web mail) in the browser, or do something like let you see live updates of things like stock tickers. But most sites are not like this. They just serve static content that could be served up just as well or even better without JS.


> Would the “right” way be a full page reload with the row added/deleted, and caching all the values?

I don't know if this is the "right" way, but it sure sounds like a more reasonable fallback for clients without JS than serving them a blank page.


The main article was talking about login forms, posting comments, and other simple use cases.

I would add a new @type property to table elements and publish it as an HTML6 standard. @type=reactive would let you add/delete rows in the table.

> Would the “right” way be a full page reload with the row added/deleted, and caching all the values?

If the user didn't have Javascript enabled, then yes.

You'll have to do the validation server side anyway; client side scripting is just a "nice to have", it shouldn't be mandatory.

There are also probably too many people using the web to build applications that should be native.


> There are also probably too many people using the web to build applications that should be native.

I hear this a lot, but I'm personally glad I don't need to download an app to my phone/computer to order pizza or find directions.

Obviously this is a balance, but of the two extremes that could exist, having too many web-apps seems to be better than having too few. The quickest, simplest step you can take ATM to improve your device security is to install fewer apps -- particularly if you use an OS like Android.

It's not clear to me that situation would get any better if half of the web disappeared.


> Does anybody even remember the good old HTTP basic access authentication? This is one of the most accessible ways of protecting resources, it can be consumed by any HTTP client (!), and it is reduced to the basics.

Yes! I recently used an e-shop that uses HTTP auth for user accounts. It was a bit unsettling, when the browser asked me for login credentials. :)


It's also quite disappointing that I can't use my 1Password vault when a website asks for my HTTP password via a standard dialog in Firefox.

PS: I am using Ubuntu so I can't use 1Password as a standalone application; but how is it that a website is perfectly capable of blocking my interaction with the browser? Why doesn't Firefox ask for the password as part of the actual webpage instead?

Hadn't tried it in other browsers - does it work at all?


In this case, the u/p needs to be sent along with the request - which means it needs to be asked for by the browser before the request to the site is ever sent.

No, I mean just show a customised page for the 401 that also has a username/password that you could put in and log in.

The cross-browser modal dialog is entirely anti-UX.


Bitwarden is able to auto-fill basic auth dialogs in Firefox, but only if there is only a single entry for the page in the store. Though sometimes it does not work.

I do wish Mozilla would finally do away with the modal auth dialog and use in-tab content that extensions can interact with.


I just set up my own central Git hosting using HTTP basic auth (with cgit and Gitolite). It is refreshingly simple and also looks good in text-based browsers.

https://github.com/h2o/h2o/wiki/Hosting-private-and-public-r...


Yup, I also use basic auth for in-development websites for my customers. It's just a few lines in nginx, and very flexible. The website doesn't even need to know.

>Really, there is no reason why a form should require JavaScript.

This is one of those things that's technically true, but misleading.

Without scripting, you can't do any client-side validation, so you have to submit the form.

Without scripting, it's then harder to repopulate the form for the user when it fails.

I'm not saying things aren't excessive today -- they clearly are. But tossing client-side scripting entirely seems like a baby/bathwater situation.


HTML validation is actually quite powerful and can help with most "quality of life" validations. You know the "this field is required" or "this field needs to be a number" kind of thing.

Even more, HTML validation has by far a much better UX than most custom-made JS solutions. For example, if there is an error in the validation, your cursor is immediately transported to the offending field where you can do the correction, with the appropriate CSS classes available to highlight the field properly.

And for the stuff where HTML validation is not powerful enough, well, then you most probably want to check that on the server as well anyway.


HTML5 validation is excellent, even better if you use JS to customise it...

You need to re-do all your client-side user assistance prompts+ with real validation server-side anyway...

+ client-side validation is not validation.
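To show what "use JS to customise it" can look like (a small sketch; the field and message are invented), the native constraint validation stays in charge and the script only adjusts the message:

    // Lean on the browser's built-in validation, customise only the message.
    // Assumes an <input name="age" type="number" min="18" required> on the page.
    const age = document.querySelector('input[name=age]');

    age.addEventListener('input', () => {
      if (age.validity.rangeUnderflow) {
        // Replaces the generic browser message but keeps the native UI,
        // focus handling and the :invalid styling hooks.
        age.setCustomValidity('You must be at least 18 to register.');
      } else {
        age.setCustomValidity(''); // an empty string marks the field as valid again
      }
    });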


Server side validation is a backstop. It doesn't need to have fancy error messages or anything like that. Client side can have friendly, nice error handling and validation. Server side can damn near barf on the user's shoes when it encounters a validation issue.

You absolutely have to do server-side validation anyway, since clients can lie.

Since you're doing the server-side validation you have to do anyway, it's child's play to return the HTML form with the values the client submitted pre-filled.

Client-side validation is a user convenience; it's awesome, but not sufficient in itself, because it can only protect the client, not the server, from invalid values.
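Concretely, the repopulation trick looks something like this (an Express-flavoured sketch; the route, field name and escaping helper are all made up for the example): validate on the server and, on failure, send back the same form with the submitted value and an error message.

    // Sketch: server-side validation that re-renders the form pre-filled on failure.
    const express = require('express');
    const app = express();
    app.use(express.urlencoded({ extended: false }));

    const esc = s => String(s).replace(/[&<>"]/g,
      c => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[c]));

    const form = (email = '', error = '') => `
      ${error && `<p>${esc(error)}</p>`}
      <form method="POST" action="/signup">
        <input name="email" value="${esc(email)}">
        <button>Sign up</button>
      </form>`;

    app.get('/signup', (req, res) => res.send(form()));

    app.post('/signup', (req, res) => {
      const email = req.body.email || '';
      if (!/^[^@\s]+@[^@\s]+$/.test(email)) {
        // Never trust the client: the authoritative check happens here,
        // and the user gets their input back instead of an empty form.
        return res.status(422).send(form(email, 'That does not look like an email address.'));
      }
      res.send('Thanks!');
    });

    app.listen(3000);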


Just use Webtoolkit (Wt, http://webtoolkit.eu/). It handles graceful degradation on no-JS automatically. This is one of the neatest things about it, you get full single page interaction w/ sockets if supported, or just normal HTTP pages if Javascript is disabled or not supported.

Yes you can do client-side validation. You're constrained to the pattern fields and the browser implementation, but it's enough to block most invalid submissions.

Most forms either don't need any kind of validation or you still have to validate the data on the server and do something reasonable when it fails.

Repopulating the form on the server side after validation failure is trivial and even more so when there are libraries that abstract that away (eg. wtforms).

On a similar note I somehow fail to see the benefit of writing (and often duplicating) large parts of business logic in client-side javascript and replacing perfectly good HTML form-submission mechanism with random JSON "REST" APIs.


> Repopulating the form on the server side after validation failure is trivial

You'd think so, but the number of sites out there that still require me to re-type the full form because one field failed validation is mind-boggling.

> Really, there is no reason why a form should require JavaScript.

True. But I'm glad to use Javascript on forms where fields are depending on input from other fields.

Or when you want to give the user the option to submit multiple entries of the same field type. For example when you ask for a job history.

Yes, you can submit the form on every step, but this is much harder to develop and has a lot of negative interactions for the user. For example, the scroll position is gone on each submit. And the user has to wait for a reply from the server on each submit.

Javascript is not the problem, bloated and spammy websites are.


> For example, the scroll position is gone on each submit.

Could this be fixed by dynamically-created anchors, so for instance when one is adding a new row to the "job experience" section of an application form, the reload to add the new row could also direct the user to "somesite.com/jobapp#JobExp2"?


That would be where the new row is, but not where you were scrolled to previously.

Javascript itself is not the problem, it's the means. Same as we want gun control (not 100% confident using this analogy though).

> Javascript is not the problem, bloated and spammy websites are.

It is part of the problem. Because of what Javascript allows and makes possible, bloat can thrive and endure in the modern web.


Well the same applies to images because they can be used for tracking.

Images don't access your device and check your system's configuration, extensions, plugins and more, as far as I know.

>I wonder why we cannot come up with a web standard interface in a way the captchas could be offloaded to customizable and adaptable GUIs rendered by the browsing client.

Because then Google would lose a large free pool of labor for training its machine learning.


So that’s why it’s always traffic stuff.

Yes, and that's why it used to be text from books, where they wanted to confirm their OCR.

The OCR actually helped digitize books, whereas the new captcha only helps Google's self-driving efforts…

I highly doubt that selecting traffic signs in pictures does anything to help Google's self-driving efforts. The models used in autonomous vehicles are extremely state of the art and object detection is largely a solved problem (many people make object detection apps as Hello World projects).

I'm no expert in this area, but as far as I'm aware object detection at the level required for self-driving vehicles is far from a "solved problem" (and requires significantly lower error rates than a "Hello World" project…).

So, if it is largely a solved problem, the captcha is near useless, no?

Yes, people have used Google Cloud's own public APIs to bypass reCAPTCHA v2.

It helped digitize books... for Google. AFAIK they didn't release it to anyone else?

Also, for a while they used it to determine street numbers for Google Maps.


It started out as scanned books 10? years ago before Google bought them up.

>That's super sad because it also excludes all the handycapped people.

That’s not how accessibility works. Requiring JavaScript does not change how accessible your site is to persons with disabilities.

What does hurt is making sites that are difficult to navigate by keyboard, without proper accessibility hints, and actually it’s quite easy to do that without JavaScript.


> the captchas could be offloaded to customizable and adaptable GUIs rendered by the browsing client

Using Chrome with all tracking enabled is basically that. You just don't see CAPTCHAs. Blocking bots is a hard problem to solve.


> Does anybody even remember the good old HTTP basic access authentication? This is one of the most accessible ways of protecting resources, it can be consumed by any HTTP client (!), and it is reduced to the basics.

Basic authentication is fine if all you need is to control access to documents, but it can not be used for access delegation or any scope control.


> but it can not be used for access delegation or any scope control

Why not? It's just a header with a username and password. Your app can do whatever you want with that info. It also doesn't strictly have to be a username and password; you can allow app tokens or whatever you want in there, that's just what the browser UI calls it.


That depends entirely on how/where you implement this.

You can make your application code check the authorization header, get the username from there and do authorization based on that.
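For example (a rough sketch of that idea; the token format and scope names are invented), the "password" slot can carry an app token and the server can attach scopes to it:

    // Sketch: Basic auth carrying an app token instead of a real password,
    // with scopes decided server-side per token.
    const tokens = {
      'ci-deploy-7f3a': { user: 'alice', scopes: ['read', 'deploy'] }, // invented data
    };

    function authorize(req, requiredScope) {
      const header = req.headers.authorization || '';
      if (!header.startsWith('Basic ')) return null;
      const decoded = Buffer.from(header.slice(6), 'base64').toString();
      const token = decoded.slice(decoded.indexOf(':') + 1); // the "password" field
      const grant = tokens[token];
      if (!grant || !grant.scopes.includes(requiredScope)) return null;
      return grant.user; // the application decides what the credentials mean
    }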


I assume you're not a .NET developer...

No I'm not. Care to elaborate?

> Does anybody even remember the good old HTTP basic access authentication?

Yes, I use it on some of my sites. For some reason chrome password manager won't work with it. Not sure why.


HTTP basic auth has also been broken on Firefox for iOS for a long time

https://bugzilla.mozilla.org/show_bug.cgi?id=1437817


> I wonder why we cannot come up with a web standard interface in a way the captchas could be offloaded to customizable and adaptable GUIs rendered by the browsing client.

What would prevent the client from not rendering anything and always replying "yes this is a human"?


That's not how any captchas work. It's more like a challenge/response mechanism, so there would have to be a standard for defining a challenge and accepting a response.

The client would take care of displaying the challenge, and of taking and sending the response.
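Purely as a hypothetical illustration (no such standard exists; every endpoint and field name below is invented), the exchange might boil down to an opaque challenge and a server-side check, with the client free to render the challenge in whatever form the user can handle:

    // Hypothetical sketch only: a challenge/response captcha exchange.
    // The server never trusts how the client rendered the challenge;
    // it only checks that the returned answer matches.

    // 1. Server issues a challenge (image, audio clip, puzzle, ...) plus an
    //    opaque token tied to the expected answer, e.g.
    //    GET /captcha -> { "token": "abc123", "kinds": ["image", "audio"],
    //                      "payload": "<base64 image or audio>" }

    // 2. The client renders it however suits the user (visual, audio, etc.)
    //    and posts back the answer together with the token.
    async function solveCaptcha(answer, token) {
      const res = await fetch('/captcha/verify', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ token, answer }),
      });
      // 3. Only the server-side comparison counts; a client that skips
      //    rendering still has to produce the correct answer somehow.
      return (await res.json()).ok;
    }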


Wow; of course there are plenty of reasons why forms require Javascript. Unless you've never coded anything of substance before, in which case yes, in your limited experience no forms require Javascript.

If you write even mildly complex business applications with business rules and want a professional and non-shitty user experience, you absolutely need Javascript; there's no substitute.

If you make simple sign-up forms, then by all means don't use Javascript.


I've written a SaaS system for SOX compliance tracking that ended up being a multiuser, role-based task manager that had to track dependencies between assignees and tasks. Did the application require JavaScript? Yes. But the JS had nothing to do with it being a "complex business application with business rules." If I'd implemented business rules in JavaScript, running on the client, I'd have been exposing the company to lawsuits.

JavaScript doesn't deserve all the hate it gets, but the flip side of that is that it gets most of that hate because of the way developers keep using it.


You can do the logic on postback. Yes, it requires a round trip to the server, but it works perfectly fine.

It's doable, but filling out forms with no immediate feedback about whether you entered your data correctly doesn't seem great from a UI perspective. Consider how useful a calendar picker is for travel.

Also, no JavaScript means no Google Maps.


Are people really suggesting basic auth again? Is it 1993 outside? Basic auth is horrendous security practice: you're passing a header that has your user and password in plaintext. It's insane to suggest that in 2019.

What makes submitting a plaintext password in an Auth header worse than submitting a plaintext password in a POST body?

There's no way to log out, and the browser has to store the password itself instead of a session cookie, which can be invalidated if it's stolen.

plus what's in this comment https://security.stackexchange.com/questions/988/is-basic-au...


Thanks, I didn't realize that the password was re-sent for every request to the site.

HTTP auth is a mess of a system that has been a net negative for security by making people screw up hostname parsing because they didn't expect the auth syntax.

Why are you parsing URLs yourself? It's been a solved problem for decades, so use a library.

Because captchas do more than just "user clicked on image", and they have to be very wary of creating openings bots can abuse.

From the server's perspective (which is the only one you should trust), captchas are a challenge sent as some bytes in an HTTP response and then returned later as some bytes in an HTTP request. What the client does with them should be hard for a computer. It may involve Javascript, clicking on images, hearing something and entering it as text, or solving the 3rd Riemann conjecture. You cannot prove the Javascript has even been interpreted client-side. All you can be sure of is that the answer is correct and that it's hard to do by a computer.

The problem is that we're quickly running out of things that:

1. can be done by humans most of the time

2. cannot be done by machines most of the time

3. fit into the workflow of a web form

Point 3 is the interesting bit: You cannot build a captcha that tests dexterity, emotional intelligence or creative problem solving but is still accessible to a large internet audience.


[flagged]


If there existed some form of global consensus across the English-speaking world on what the inoffensive version is, I'd be happy to abide by it. In the meantime, suggesting that people on a venue as global as HN have some sort of a responsibility to follow and observe the random cultural variations in whichever narrow bubble you were referring to strikes me as presumptuous and absurd.

I also wish more people valued clear language over these battles of ever more contrived euphemisms. George Carlin has a brilliant sketch on this very issue [1], which is both thoughtful and inoffensive (specifically references "handicapped" around the 5:30 mark).

[1] https://www.youtube.com/watch?v=o25I2fzFGoY


> have some sort of a responsibility to follow and observe the random cultural variations in whichever narrow bubble you were referring to strikes me as presumptious and absurd

I suggested no such thing. I just pointed out that it felt odd to me to read that. From the rest of the discussion below it's pretty clear that there isn't much of a global consensus. However, it's also clear that, whatever term is preferred, 'handicapped' isn't it.


Might be a straightforward translation for a non-native speaker. In my language, we regularly use a similar-sounding word, with no negative connotations.

I'd guess this is the case here, since the word is spelled incorrectly, too.


In my cultural bubble, handicapped and disabled are used relatively interchangeably, and I'm a native English speaker.

That's one of the reasons I felt it was important to point out. I'm not even sure there is a negative connotation in American English (I have vague recollection that "handicapped parking" is sometimes mentioned in American TV shows). The fact that it's being downvoted to oblivion probably indicates that there is not.

Handicap or handicapped-accessible seems to be widely applied to facilities like parking or restrooms. I still sometimes see it used to describe people, but disabled appears to be the preferred term in that context.

Speaking as an American.


I'm at least as annoyed by people who say you did something wrong without saying what the right way would be.

What is the current best word to use?

I tried to think of a good synonym (“differently abled” “visually challenged”) but nothing really felt right.

I think the phrasing itself is problematic because it describes some people as broken or less than whole. That puts the emphasis on something wrong with a person rather than something wrong with the system.

Better to talk about the system: “this makes the site less accessible.” Because really, that’s the issue. It’s what you have control over and where the problem lies. Software engineers can change websites but not the people using them. People should expect websites to accommodate their needs, not the other way around.

Respect is the core principle here. Respect your users and enable them to come as they are. Don’t place the focus on their weaknesses, regardless of what those are or why they have appeared. Focus on how you can build a system that’s accessible to everyone.


> can build a system that’s accessible to everyone.

you can't build something that is accessible to "everyone". It's a myth. Because some people will stray so far away from the "median capabilities you can expect", you can't even imagine their problems or use cases. It's a very, very hard problem and well known in numerous areas of product design.


This is true, but also overly pedantic -- we can give the original poster a more generous reading and assume that they understand that "make this available to everyone" doesn't literally mean accessible to every single person imaginable. ("Your web site isn't accessible to people without computers! Checkmate!") We can work to make our systems more accessible to more people.

> “this makes the site less accessible.”

The problem is that when someone then asks - "to whom?" - you are left unable to answer.


Is that really an issue though? A website could be made less accessible to people with low powered devices, slow connections or small screens, their accessibility issues need not be medical.

Practically, there is no way to identify all the potential issues a user of your site might face.


I work for a non-profit that helps disabled children and their families in the USA. Up until recently we were using "special needs", now it's "disabled" or disability. This is what the people with disabilities want to be called currently and I hope this helps.

> This is what the people with disabilities want to be called

Like most large communities, there is less consensus on this than that kind of statement suggests. The terminology doesn't change because everyone in the community flips overnight in their preferences, but because the ebb and flow of how many people find each term preferable (or intolerable) ends up changing what is on balance the best (or least bad) term.


In the US, go with "visually impaired." That's an accurate description without any baggage, and it also tells you what needs to be done to make the technology accessible to them.

It also describes people who might have limited sight now, but will have full sight later (recovering from surgery, etc). And that is something that could happen to any of us, so we should be building all our tech to cover at least common edge cases, if resources don't permit covering all of them.


It's probably culture / location dependant. In the UK disabled or lesser-abled would probably be preferred. I'd probably have skirted the issue by saying it may exclude people using screen readers.

Avoiding discussing a particular issue simply because you don't want to refer to people as handicapped or disabled seems absurd to me. Language is a tool, and we shouldn't reduce its effectiveness simply because we are afraid of offending someone. Some people in the world are disabled, and I think it is far more offensive to avoid talking about them than to use the term disabled or handicapped - how do you think it would feel to be a disabled person, and have people think that it is so offensive that they don't even feel comfortable saying that disabled people have a hard time on the modern internet?

I don't think we should avoid discussing the issue at all. But I also feel that groups have the right to determine what they find acceptable as a collective noun. If disabled people in the UK are uncomfortable with the term handicapped then I won't use it; that has been the case for as long as I can remember so it struck me when I read it.

> But I also feel that groups have the right to determine what they find acceptable as a collective noun.

The group of people with disabilities is not one hive mind with a collective opinion of which nouns they prefer: some will like some terms and some won't. Handicapped seems to be generally accepted as inoffensive to most people, so I don't see any reason to not use it.


I agree, and it is a very neutral term for a wide range of disabilities. Now, "cripple" would have some negative connotations to it.

Though I suppose context and use of any term could be used to degrade, especially something like this, which does point to capabilities, but because of a condition and specific to the disability.


> "Handicapped" is a word which many disabled people consider to be the equivalent of nigger.

http://news.bbc.co.uk/1/hi/magazine/3708576.stm

I wouldn't describe it as neutral at all.


I would not put it in that category, but I did address how it could be construed.

How would you describe the term "handicap" in the sense of a lack of capability specific to the disability?


That's your call. I couldn't have been clearer that this applies to my understanding of the majority view within the UK. If those people aren't part of your audience then knock yourself out.

It could very well be true that disabled people are more offended by third parties policing language, especially if the expression was used in a neutral context. Because that can be very humiliating and shifts the topic to something many people want to overcome in their lives.

I was more referring to you saying you would skirt the issue rather than using the term disabled or handicapped. If there is an alternative word that is less offensive, of course there is no problem with using it if it means the same thing.

Ah, I see what you mean. No, the issue I intended to skirt was choosing the most appropriate label, not the whole issue that JS may make websites less accessible.

In 10 years we will be back to preferring "handicapped" as "disabled" slowly slides into offensive territory and we still need a word to talk about this concept. Grandparent was probably just being considerate of future Internet Archive readers.

Yeah, in my culture / location we call them "people that have to go through extra obstacles".

People with disabilities. Emphasises 'people' rather than their disability.


> "Graceful degradation". I frequently have the feeling this attitude is lost.

Lost? It is just ignored for the sake of cool SPAs and hottest new js stuff.


Captchas are an anti-pattern. Make account creation easy then handle any spam issues within the system yourself. Why make your users suffer for what is basically the service's problem to fix? It's a lazy way out.

Because most users won't subject themselves to the signup fee to make this revenue neutral.

It's an interesting alternative, though. Pay 5 bucks with the credit card to validate your identity if you don't want to subject yourself to captcha...


Tutanota does this, you choose between solving a Captcha and donating a minimum amount

With invisible reCAPTCHA, most users won't even see a captcha challenge when they sign up via forms.

Recently, there seems to be this ever-growing, romantic vision of turning the clock back to a time when the browser was simpler.

Certainly, in terms of accessibility, the JavaScript (and browser) landscape has never been better than it is now. General awareness and tooling for building online experiences that consider disabled users have led to an infinitely better time for these people when using the internet, overall.

Sure, some people build shitty products that make your fan spin and ignore disabled users. But that's because they've built a shitty product, not because of the 'modern browser' itself.

> The web can be a resource hog, sometimes devouring CPU and memory. But it doesn’t need to be that way.

It also doesn't mean returning back to horse and cart!


Whenever I see lines like that, "the web can be a resource hog, sometimes devouring CPU and memory", I really want to respond "no one cares".

OK, not no one. And people certainly DO care if a site is very slow, or user interactions are bogged down and choppy. But there is this somewhat weird (IMO) belief, very common among developers and tech-savvy users, that heavy use of resources is a bad thing in and of itself. I certainly understand where that belief comes from, but it's just not relevant to 98%+ of people if it doesn't get to the point of being heavily noticeable in user interactions.


Heavy use of resources IS a bad thing in and of itself (if it's not necessary for the task being done - I fully expect a game to eat up my CPU and GPU). We're not on DOS with only one application running at a time. If all my resources are being eaten up by bloated websites and Discord and Atom and other crappy Electron apps, that's fewer resources for other programs that might actually need them.

I can't help but get the feeling the only reason my Google phone has an 8-core processor is so news sites can play more ads.

It is bad because if the browser uses too much memory, it swaps out other programs like gnome-shell, to the point that you cannot use Alt-Tab because the code responsible for it got swapped out.

It would be better if poorly designed sites (for example newspaper or TV channel sites) were slow or displayed without images and iframes with ads, rather than the browser swapping out everything else to help a poorly designed site run faster.


no one cares

Such a compelling argument! But seriously, MOST websites are slow and bogged down, especially on cheap hardware; we have just grown accustomed to it. The fault doesn't lie in hardware not being there yet or over-zealous power users, it's in webdevs being just plain lazy. And it's just gonna get worse with each iteration of new js library woo, each one another brick in this clusterfuck of a Babel Tower.


> Whenever I see lines like that, "the web can be a resource hog, sometimes devouring CPU and memory", I really want to respond "no one cares".

That's an excellent summary of modern web development. Folks with 32 GB of RAM and the latest and greatest Intel CPU + the most expensive cutting edge video card decide what's acceptable resource usage. Cause hey, it works for me.


A lot of people use portable devices such as laptops or phones: they do care when your website makes their battery last an hour less.

How many people would know what caused it, though? Maybe they spent a while on a few different websites, but only one was a resource hog...

Hoping the user doesn't notice that you've been using their resources has always seemed like an awful gamble to me: both in the sense that you're deceiving your users by pinning the blame on other websites/their browser/their computer in their mind, and also in the sense that you're just making it more likely that someday users will have access to this information.

It's pretty noticeable which websites hog resources if you use your laptop or phone regularly. People are well aware that streaming and social media apps sink battery like a boulder compared to looking at a Wikipedia article. My MacBook is dead in 5 hours or 1 hour depending on whether I'm on reddit/HN or content-heavy websites. Fans kick into gear when Netflix starts streaming and the laptop becomes untouchable. Back when I used Messenger on my phone I used to watch my battery percentage tick down by the minute.

Imagine how fast our computers would be if all developers tuned for performance and did their best to conserve resources. We've gone from people trying to squeeze every last bit of performance out of an extremely low resource system to the current state of "just go for it, most people have enough resources for my one app I'm developing".

It blows my mind that companies think that delivering a full-featured browser with a web page to make it into a native application makes any sense whatsoever. In a lot of cases, the same app, written 30 years ago, would fit on a floppy disk.


Now imagine how slow development would be in your hypothetical world. Most of the apps and sites you use wouldn't be built, because the cost of development would be so high.

We throw tons of CPU and memory at these problems precisely so we can build fast.


And we build fast so that we can make tons of low quality apps. I think we're over saturated as it is.

I thought so too. When I was 16 I moved to a university dormitory. It was a flat with 3 rooms and 2 people in each room. I was a FreeBSD mega-geek and my roommate was a Linux fan (later he moved to FreeBSD too). We were writing our own fork of DWM (https://dwm.suckless.org/) with functionality like in hacker films, using text-based clients for email, IRC, ICQ (it was really popular in Russia at the time), lynx, etc. It was like we were hippie-hackers from the '70s with long hair (mine was about 70 cm) and very creepy beards. It was really awesome, because no one except us had such deep knowledge of networks and Unix systems. We were kings of the local network: the building was new and had no network, so we built it from scratch.

And now, 15 years later... I have a MacBook Pro and an iPhone. I'm so lazy. Every day at work I write tons of C/C++ code. I know how the Linux TCP/IP stack, TLS, and HTTP/1/2/3 work. But after the workday I just want to watch the new Stranger Things episode. I don't have any customization apps for OS X. No special hotkeys or c00l h4x0r terminal app.


I've started using Brow.sh[1] occasionally - it runs headless Firefox and renders the output to a terminal.

Try `ssh brow.sh` for a 5 minute demo (`ssh brow.sh -t https://maps.google.com` is pretty cool too)

I really need to try out the experimental vim branch[2] to emulate Vimium/Tridactyl. Firefox rendered to a shell with Vim keybindings would be so amazing to use!

[1]: https://www.brow.sh/ [2]: https://github.com/browsh-org/browsh/pull/264


Browsh author here. I feel so guilty about not having merged that branch :( It's been there 8 months or so. It's by far the most requested feature. I'm just so overwhelmed with other things to do. I'd love Browsh to be my main focus, but I've got to earn money. There's so many more things I want to do with Browsh.

So basically I just wanted to say sorry to the people that have submitted code and not had it merged, and sorry to the people that really want this feature, like the handful of blind users who keep emailing me about full keyboard support. I haven't forgotten about Browsh, I've just got to earn some money first.


Open source maintainership can be an exercise in stress and guilt. You don't owe people anything and self-care is very important. Thanks for creating the project, and good job handing out some commit bits. Take care of yourself first, the code will come along eventually.

Are you willing to cede some control and let someone help you maintain it?

For sure. In fact there are already 4 people with commit permissions to the repo.


Interesting. And https://html.brow.sh/maps.google.com generates a timeout.

The only thing I can think of is the filtering I had to set up after Browsh got blacklisted for rendering https://html.brow.sh/https://mail.google.com as it was seen as a phishing attempt. Now I 301 redirect all matches for `[mail|accounts].google.com`.

If you install Browsh yourself you can customise the redirect filters however you want.


No need to feel guilty - thanks so much for your amazing work :)

I also enjoy reading your blog - your article on China was eye opening.


If you want to make brow.sh usable over a network connection, I would highly recommend using mosh. Also, a couple of relatively basic things don’t work, such as file uploads (somebody made an issue a while ago but there’s been no activity on it since November of last year).

Agreed - Browsh over mosh on a cheap VPS is good fun!

Also, some commands interfere with Tmux I think, such as closing tabs. I need to figure that one out!


It doesn't make much sense to use a terminal instead of a normal GUI. A terminal has no advantages over a normal GUI for displaying web pages.

Try loading up 20 JS-heavy tabs in Firefox/Chrome and looking at your RAM/CPU/battery use :) Offloading that computation to a VPS somewhere with a fast internet pipe is attractive.

Most webpages are mostly text, and terminals are great at text.


If you use your browser for media consumption, this is a great shell addition (add to ~/.bashrc):

  function streamer() {
    youtube-dl -o - "$1" | vlc -
  }

Then:

  $ streamer <just about any media url>

Of course, you need youtube-dl and VLC installed.

A while back I migrated from VLC to `mpv`. I much later learned from an HN comment that `mpv` automatically calls `youtube-dl` if it's installed; you can run

    $ mpv <just about any media url>
without setting up any aliases or functions or mucking with calling youtube-dl yourself.

I have this in my zshrc to save the trouble of pasting URLs manually:

  mpv() {
    if [ $# -eq 0 ]; then
      command mpv "$(xsel -b)"
    else
      env mpv "$@"
    fi
  }

Thanks for suggesting mpv! I had never heard of it, but I'm not too happy with VLC or smplayer on MacOS so I gave it a shot. It's fantastic. Fast, responsive and it plays everything I've thrown at it.

To anyone on MacOS who wants the efficiency of mpv with a nice GUI: IINA [1] is a slick video player for Mac, with minimal UI, web browser plugins, and gesture-based controls, and it's FOSS with mpv and youtube-dl on the backend.

[1]: https://iina.io/


I gave IINA a shot. It has a nicer UI, but it's not nearly as fast or responsive as mpv. No sale.

Interesting. I’ve found it to be significantly more responsive than VLC while being much more consistent with the Mac UI, but to each their own!

Very nice, I will give this a try!

VLC is natively capable of playing YouTube videos.

    $ vlc <URL of YouTube video>
Or just copy the URL and hit Ctrl-V at the main screen of the UI on Windows/Linux

But what if you want more than just YouTube? :)

https://ytdl-org.github.io/youtube-dl/supportedsites.html

Very useful even for strange sites, like my local news station that uses a broken Flash media player.


You can combine it with something like xclip and a desktop launcher, so all you have to do is copy the URL and click the icon (see the sketch below).

A browser extension would be even fancier, but I don't think that would work so easily, permission-wise :)
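
Something along these lines would do it - just a rough sketch, assuming xclip and mpv are installed, and the filename is made up; swap in the `youtube-dl -o - ... | vlc -` pipe from above if you prefer VLC:

  #!/bin/sh
  # play-clipboard.sh (hypothetical name): play whatever URL is in the clipboard.
  # mpv hands the URL to youtube-dl itself when youtube-dl is installed.
  url="$(xclip -selection clipboard -o)"
  exec mpv "$url"

Point a desktop launcher or panel icon at the script, copy a URL, click, and the stream starts.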


YouTube also has RSS feeds you can use in fun ways to fetch your content automatically.
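
For example (a rough sketch, assuming the per-channel feed URL format videos.xml?channel_id=... and that curl is installed; CHANNEL_ID is a placeholder and the grep is deliberately crude):

  # Print the newest video link from a channel's feed; pipe it into mpv from there.
  feed="https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID"
  curl -s "$feed" | grep -o 'https://www.youtube.com/watch?v=[^"&<]*' | head -n 1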

What I don't like is that modern browsers tend to trade memory for performance, i.e. they get slightly better numbers in benchmarks by using significantly more memory. For example, have you heard about the image decoding cache? A browser can waste as much as 20 or 30 megabytes storing uncompressed bitmaps, although a better solution would be not to store them at all. If a website uses too many images, that is not the browser's problem.
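
As a back-of-the-envelope illustration (my own numbers, not a measurement): a single 1920x1080 image decoded to 4-byte RGBA pixels is already about 8 MB, so a handful of large photos quickly adds up.

  # width * height * bytes per pixel for one decoded 1080p RGBA bitmap
  echo $((1920 * 1080 * 4))   # 8294400 bytes, roughly 8 MB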

Well, wasting 100-200 MB on one tab is fine, but what if I have many tabs open? The browser quickly uses up all available memory and starts swapping out other programs, including the shell.

I think the browser could at least reduce memory consumption for background tabs: they don't need to run while they are in the background. A browser could free everything that is not necessary, including the image decoding cache, caches that speed up DOM and CSS, canvases, compiled JS bytecode (it takes more memory than the source code) and JITted code, any network caches, and the SQLite cache. It could also gzip text resources like CSS or JS code in memory.
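
To get a feel for what that would save (app.js here is just a stand-in for some bundle on disk; minified JS often compresses to a third of its size or less):

  wc -c < app.js              # bytes if kept in memory as-is
  gzip -9 -c app.js | wc -c   # bytes if it were stored gzipped instead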

Note that this approach is better than simply unloading a page like mobile browsers do. But if the user didn't input anything into a page, I think it should be fine to unload it completely after some timeout (for example, 30 minutes).

The same approach could be used with iframes. Typically they are used mostly for ads and analytics. There is no need to waste precious memory for slightly better animation of an ad banner.

I remember that many years ago Opera could handle tens of tabs with 512 Mb of memory and not get swapped out.


> I remember that many years ago Opera could handle tens of tabs with 512 Mb of memory and not get swapped out.

Tabs back then weren't full page SPAs, though.


Most of the time I use my own browser [1] because it has JavaScript set to false, so I can "roughly" do without an adblocker. Unfortunately, no social media or other login services work. For YouTube, Twitch and other video services I also use mpv, which turned out to be a very good and fast solution.

The author is right when he says in the conclusion that it is really difficult to do without Firefox. Although many services can be handled with external programs, these are often only stopgap solutions.

But I have also learned to distinguish between the "information" web and the "entertainment" web, and since I do a lot of research, programming and reading in data sources, I don't need JavaScript 95% of the time. Other points are important to me, e.g. fast loading times, low CPU usage and minimalist design. But you have to do without many features.

Text-based web browsers are cool, yes. But if you're not in the Linux scene (riding on, e.g., Manjaro with i3), it's incredibly cumbersome and exhausting.

[1] https://voidnill.gitlab.io/cosmic_voidspace/alligator.html


I used to do this many years ago. Then I added JS to upvote and downvote on HN and reddit. Then I gave up. Now I just use umatrix on Chrome or FF

You don’t need javascript to upvote and downvote on Hacker News.

Whoa! And as I just noticed from your comment, not for login either. I had only tested the HN search function once, and it only worked with JavaScript, so I assumed that was the case for all other functions. Thank you.

Yeah, it uses a service called Algolia, and unfortunately their frontend depends on JS. But you can use http://hnapp.com/ for JS-free search.

(I'm not affiliated)


After a workday, I used to be repelled by the thought of using a computer. I started to use one again when I uninstalled Firefox. Now I just use elinks for basic stuff, and I'm happy to hack on things or simply use a computer again, instead of a bloated web browser.

If I want to do "fancy" things, like book a flight, I just do that on the iPad.


I used to have access to the Amadeus API, and the experience of booking flights by `curl`ing it was very interesting :P

Booking flights via curl sounds like it’s a much better experience than navigating the endless choices, options & dark patterns on a typical airline website.

Don't check this checkbox if you want to miss out on not getting amazing promotional emails, great poems and the occasional bobcat.

On the subject of JavaScript and NoScript.

I'm a JS dev working with some relatively large institutions building single page web applications using JS frameworks. Unfortunately I have seen there is often an unwillingness from both management and some developers to invest in server side rendering of SPAs. There is always a willingness to support users who have visual or other accessibility requirements, yet users who want or need to disable JS are often left out in the cold.

I love JavaScript, but I still believe it is important to provide alternatives for those who don't or can't run it.


> yet users who want or need to disable JS are often left out in the cold.

Because it's simply not worth it. Do the math on the number of people who want to be able to use a site without JS (and the even smaller number who would outright refuse to use a site that requires JS) vs. the large cost of maintaining essentially 2 versions of the site. Couple that with the fact that the types of people who disable JS tend to make horrible customers (and I'm not saying that's at all a bad thing, but from the pure economic perspective of the site owners it means their opinion counts less).


>There is always a willingness to support users who have visual or other accessibility requirements, yet users who want or need to disable JS are often left out in the cold.

Because accommodating disabled people is mandated by the ADA. No such requirement exists for noscript users.


> yet users who want or need to disable JS are often left out in the cold.

Isn't that just a preference of the user, one that has repercussions? In fact, WCAG 2.0 specifically says that a JS fallback is _not_ a requirement for accessibility. A common misconception is that disabled users disable JavaScript - this isn't actually true.

Fair enough if you want to turn off JS for tracking reasons or whatever, but you have to accept that some things might not work without it.


Exactly! Whilst I understand the reasons, it is nevertheless a shame that the willingness is born of requirement rather than a genuine desire to make things accessible to as many users as possible.

The marginal benefit of making a site available to the relative handful of people who refuse to run JavaScript is minuscule. The marginal cost of doing so is huge.

Stupid question: How would you even accomplish an SPA with server-side rendering but no (client-side) JS at all?

SPA to me implies that essentially all the content is fetched not via traditional navigation but AJAX requests fetching pages/content etc. and the local JS code making sure to switch out the currently displayed contents.

My understanding was that server side rendering simply speeds up first load of an SPA since it doesn't need to be rendered on the client, but then you still need JS to actually get any additional content and navigate.

I guess one could build something that looks/behaves like a traditional web app with links leading to new pages that are rendered by the server and if JS is enabled switch to what I described earlier. Would the former really be considered an SPA though?


If you have a blog that lives at `/` and posts that live at `/post/:num`, for example, your SPA will have an href="/post/5" (or whatever) link, and when you navigate there the app gets that post's content from the server and renders it. I agree, though, that something like comment functionality would probably be made through an AJAX request in JavaScript, and it probably wouldn't be possible to automatically switch that to a form.

> A single-page application (SPA) is a web application or web site that interacts with the user by dynamically rewriting the current page rather than loading entire new pages from a server.

I guess the definition of an SPA is not as strict as Wikipedia says? But what makes such a web app (like you mention with actual page navigation) an SPA? That's just a ... normal web app/site.

I guess the point was that a perfect SPA has a fallback mode that works without JS? I'm just not sure that exists at the moment. If you invest in an SPA and its (perceived) advantages aren't you choosing it precisely because you want it to be more than a traditional web app with individual pages? Hence all the JS and reactiveness in the browser. Building a degraded fallback mode would be admirable but it's a whole lot of added complexity (have to find alternative UX to accomplish those JS interactive pieces).

If your SPA isn't actually an SPA maybe just stick with a traditional setup. That's why I don't understand the drift in that original comment.


I agree. But you don't need to completely abandon the GUI. There are modern browsers like Pale Moon that reject the idea of the browser as the new OS layer and don't implement all the new attack surfaces and CPU wasters. Combined with NoScript in temporary-whitelist-only mode, you can browse normally but without most of the resource waste (i.e., RAM usage from multi-process, WebGL, and the like). I regularly have hundreds of tabs (300+) loaded with less than 2 GB of RAM used.

That said, I still use tools like youtube-dl on most of my computers and IRC and RSS and all the rest.


Agreed on not abandoning the GUI, I'm writing this from Dillo for example. Also agree on the idea of not having the browser as the new OS and not expanding attack surfaces so much. But Pale Moon seems to be a far cry from that, especially seeing as it keeps NPAPI plugin support.

I now use a dedicated Chrome profile for a handful of sites I trust like my banks and gmail. The profile has cookies and JavaScript disabled by default and I have to opt in to let each domain use them.

Then I do all other browsing on a Chrome Guest session. It's cut down my web usage a bit as I have to log in to sites like HN or LinkedIn, Facebook, in the guest session to use them.


Just to give a heads-up re: the Dillo browser: https://www.dillo.org/

On modern hardware it's effectively instantaneous. Click the icon to start it and it's up and rendered the homepage before I release the mouse button.

Many sites don't work well in Dillo, but then many sites do.


There's also NetSurf <https://www.netsurf-browser.org/>, which is in active development and which is almost as lightweight as Dillo, while having better support for CSS3.

It doesn't really support JS yet - it's included and you can compile it in, but IIRC there's still no DOM - and 3.8 behaves a bit weirdly with word wrapping and fonts (I think the latter is because they are compiled into the executable), but it works pretty well, and it even works outside X: it has a framebuffer version.


While I do use GUI JS-enabled browsers, when I'm at a real keyboard with a shell and terminal available (or even in Termux on Android), fast command-line lookups are useful. Unfortunately, much of the Web formats poorly (HN included).

I have locally defined bash functions for DuckDuckGo (ddg), the Online Etymological Dictionary (etym), weather, and stock quotes (well, plus extensive cruft-stripping for the latter). DDG + bang searches get me numerous other services, particularly WorldCat, which wins the award for least-useful web design: it breaks on both mobile GUI and console devices, though at full screen rendered by w3m it's passable (I'm considering how to extract and present useful info bypassing the web design entirely).
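
A minimal sketch of the kind of thing I mean (not my exact functions; assumes w3m and curl are installed, that DuckDuckGo's lite endpoint accepts a q= query, and that wttr.in is still up):

  # DuckDuckGo lookup in w3m; spaces in the query become '+'
  ddg() {
    w3m "https://lite.duckduckgo.com/lite/?q=$(echo "$*" | tr ' ' '+')"
  }

  # Terminal weather report via wttr.in (no argument = guess location by IP)
  weather() {
    curl -s "wttr.in/${1:-}"
  }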

Third-party utilities give further resources: surfraw for a whole slew of sites, rvt (no longer maintained :-( ) for Reddit, and a set of Wikipedia tools. There's dict/dictd for dictionary lookups (including forwarding OSX queries to my Debian Linux box which has both a server and far better dictionaries).

For RSS I wrote a far-too-complex "rsstail-pretty" which reads from a set of feeds and restructures output -- because there's all kinds of useless and annoying crap in RSS feed content. Similar tricks for stockquote (a bash function wrapper around an awk script) and weather (bash function, sed, and awk -- because features of each are required).

The browser or http agent (w3m, curl, wget, generally) fires once, grabs content, and gets out of the way in many cases. No 25GB VSS memory allocations (yes! Firefox) resident for hours or days.

The amount of useless cruft wrapped around desired information payloads makes the Saturn V launch / return capsule mass ratio look downright sleek.

Particular disappointments: Archive.org, entirely useless without JS.


When traveling, to keep bandwidth usage down I exclusively use Firefox as my iPhone browser because of its ability to turn off images. Turns out most web pages are actually a lot better with only text.

> Probably the most effective thing is going to be an ad blocker.

If for some reason I was unable to use an ad blocker then my web browsing would drop to such an extent that you could say I no longer used "the web" but rather a handful of sites. Similarly, if my only TV option was cable with commercials, I'd just stop watching TV altogether.

I normally browse without JS, and JS-only sites are increasingly common. 95% of the time I just close the tab, even if I'm researching local businesses. Based on the other 5% of the time, requiring JS to read some text has become such a reliable predictor of regret that the choice (to just close the tab) is easy and immediate.

On the flipside, people here often complain about not being able to view a site or that its format is unreadable, when it works great without JS, so it goes both ways. You only ever hear that blocking JS makes things fail, but not that it makes things work.


I remember disabling Javascript for a couple of months in Chrome some time ago and it barely broke anything. Had a great browsing experience. This might be because of my own personal preferences when it comes to sites, but I don't think that JS is a necessary evil in today's modern web.

A few months ago I decided to try uMatrix out, which defaults to blocking third-party JS, but allowing first-party JS. It broke a lot of things. But I persevered, and manually whitelisted various things as time went by; I decided to always whitelist by individual third-party for each domain, rather than just throwing in the towel and letting a site do what it wants. Some things were very painful to get working; Dropbox login, which I occasionally need to use for work, was excruciating.

More recently I’ve been playing around with heading in the other direction: when things break, I try blocking first-party JavaScript instead. This has often been successful, so that I’m considering disabling all JS by default and only turning it on when needed.

All this definitely speeds things up. I installed NoScript on my fairly slow phone (Firefox for Android) a month ago, which I don’t use for much, but do a little web reading on sometimes. Disabling JS definitely sped things up a lot.

A few sites do stupid things like visually hiding all content if you don't have JS. Those I either don't use, or use reader mode on, or have (once or twice) added a user stylesheet for, if I'm going to touch them more than once but decide not to whitelist them, for probably capricious reasons.

But this disabling-JS lark is definitely not for the consumer.


Yeah, on Android, Firefox+uBlock is definitely an interesting tradeoff. Without it, Firefox is (or at least I perceive it that way) a lot slower than Chrome; with adblocking, things get even. I seriously hope they don't mess up their new Android browser in this regard. I think a faster experience without ads will make a lot of technical Chrome users (who were supposedly key to the ~70% market penetration it has now) think twice about supporting Google.

Sites don't "visually hide all content" without JS, they are using JS to render content in the first place, thus no JS, no content.

No. Many sites put a style like "display: none" on the body element to hide content until the JS has loaded. But of course they have the content in the HTML for search engine optimization. So they show content to the Google robot instantly, but the human user has to wait.

Google also uses this trick in search results if I remember correctly.


Exactly. Here’s an example I encountered today: https://status.digitalocean.com/, all the content is in a div that is affected by `.layout-content.status.status-index { opacity: 0 }`. JavaScript code then sets `.style.opacity = 1` on the element to undo that.

This is a bad technique. If you really want that effect, something friendlier would be this in the head of the document:

  <script>document.documentElement.className+=' js'</script>
… then prefix that earlier selector with `.js`.

Or another approach: remove the `opacity: 0` from the stylesheet, and immediately inside the element to be hidden while loading, add

  <script>document.currentScript.parentNode.style.opacity=0</script>
Remember that external JS is never guaranteed to load; even on normal people’s computers, networks are not reliable and even first-party resources can fail to load, though it’s ones on a different origin that are the least reliable.

A browsing experience without NoScript is painful.

Add Pi-hole and it gets even better.

If you’re already adding extensions, uBlock Origin does a better job than Pi-hole (client-side is more powerful than DNS). Enable advanced features and then you can use the uMatrix-esque dynamic filtering panel to prevent 3rd party or all JavaScript. I recently learned to use its power and now no longer feel the need to use uMatrix or NoScript.

Why not both? A client-side adblocker can obviously do a better job at blocking, but DNS-level blocking removes crap from most applications on any device, not just from browsers.

If you enjoyed navigating with JavaScript turned off, why did you stop doing so (or so it seems)?

Chrome even has (what I consider) a great interface for minimally controlling JavaScript and cookies, by placing an icon on the omnibar when either are blocked, letting you enable them on a per-site basis.


I changed OS, had to reinstall, then forgot about it. I also don't browse as much as I used to? I'm mostly working on personal projects when I'm home, so I barely spend time outside of podcasts or youtube videos.

Everything was fine for me until I noticed all my bank and cc and utility websites broke without JS.

One way to avoid the browser is to scrape. These days I mostly use Node.js for that. I tend to do it for websites that I would otherwise visit often, or websites that have a terrible UI but a simple data model.
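
The same idea in plain shell, as a minimal sketch rather than my Node.js setup (example.com and the grep pattern are placeholders; the point is that one request plus a little text processing often replaces a browser visit):

  # Fetch the page once and pull out just the bit you care about.
  curl -s https://example.com/ | grep -o '<h1>[^<]*</h1>'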

This post is a bit short on information. I like that the take is "living without the modern browser" rather than "living without a GUI" (hence CLI/TUI). But you'd be surprised how much stuff is flying over HTTP(S). For example, consider VLC and its subtitle-download extension. Or think of Beautiful Soup 4 or similar clients (I didn't check whether the extension uses that; it's just a Python example).

Which leads me to the question: when is a browser considered modern? If it has a GUI? If it supports Web 2.0? If it supports Web 3.0? If it supports JavaScript? If it supports all JavaScript by default?

"There are multiple text-based browsers to choose from. The oldest and maybe the most well known is Lynx. Personally I use Links instead, which provides a similar experience, but also includes a “graphical” mode that is capable of displaying images, and the default background color is black as opposed to gray (only for the non-graphical mode, though the background color can be changed in graphical mode). Here’s a list of text-based browsers from Wikipedia (not all of them are maintained)."

Every once in a while I check these out. They're all either horribly out of date, or they do not work with current technologies, or they have some experimental features but are out-of-date ports from Firefox (with security vulnerabilities!). Hence I recommend Browsh instead, which is based on Firefox. But I would not say it fits the narrative of "living without the modern browser", as it is Firefox under the hood. It fits better into the living-without-a-GUI narrative.

