It's not popular because it's a fad, and it's not about replacing good old static websites with fancy over-engineered JS code.
It's about making desktop-class applications more accessible via the web. Desktop-class apps have lower latency requirements than server-rendered frameworks are capable of delivering, plain and simple. You could certainly build Facebook as a fully server-rendered PHP app, but that would hurt Facebook's business because its servers would need to do more work and its users would have to wait longer for content.
Fully server-rendered frameworks are not capable of delivering low-latency desktop-class applications. If your app doesn't require low-latency updates, then you can certainly use a classic PHP or Ruby on Rails stack with no problem.
Sure, you can use jQuery-style code to make your PHP app more interactive, but you're probably just going to end up with a messy, hard-to-understand JS codebase eventually if you don't have some sort of low-latency client-side declarative templating framework.
There are certainly some companies caught up in the hype that build an SPA when they would probably be better served by a simple PHP site. But on every React project I've worked on across a few companies (interactive apps for, e.g., cropping and manipulating images, building visualizations, and reporting on data), not using an SPA framework would have slowed down development dramatically or left us with a really poor product.
Everyone understands this is what devs are trying to do. The complaint is that my local newspaper doesn’t need a desktop class application. Nor does my bank, nor does Reddit for that matter.
There are vanishingly few websites that need a “desktop application” performance profile. Most websites are just viewing documents. The size of the SPA frameworks is frequently higher than the actual content being viewed.
Finally, if matching desktop app performance is truly the goal, the majority of SPAs fail horribly. Poor request patterns and transitions make the pages just as slow as server-side rendered HTML. I would rather wait one second to get a JS-free response than look at another damn spinning circle for 5 seconds as the page sputters new elements in.
Another point, it's true Reddit doesn't need a SPA. However, while I hate their new design, at the scale of Reddit an SPA can have real effects on server costs as well as user engagement. And, if you look at their growth numbers it seems to be working out for them, despite their design churning me as a user.
Third, the ecosystem of well-built components means that even if you don't really need a SPA, using React could save you money in launching your MVP, which is a big consideration for startups.
In the end, widgets with lower-latency interactivity, even when not absolutely necessary, make for a better experience (for example, a form field that tells you what's wrong with your password as you're typing). Also, because the SPA crowd is building richer applications, you're going to have access to richer open-source widgets when building with that technology. So it's a hard thing to ask people to avoid React when it could be the difference between a great user experience and a mediocre one down the line.
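As a rough sketch of that kind of widget, assuming plain JS, made-up element IDs, and arbitrary rules (a React version would be similarly small):

    // Hypothetical markup: <input id="password"> and <p id="password-hint"></p>
    const input = document.querySelector('#password');
    const hint = document.querySelector('#password-hint');

    input.addEventListener('input', () => {
      const value = input.value;
      const problems = [];
      if (value.length < 12) problems.push('at least 12 characters');
      if (!/[0-9]/.test(value)) problems.push('a digit');
      if (!/[A-Z]/.test(value)) problems.push('an uppercase letter');

      // Update the hint on every keystroke -- no round trip to the server.
      hint.textContent = problems.length
        ? `Password still needs: ${problems.join(', ')}`
        : 'Looks good!';
    });

That's the whole appeal in miniature: feedback at keystroke latency instead of request latency.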
Let's just continue to build better SSR tools so we can have the best of both worlds. We can build tools to help developers cut JS from their app, because it's true that eventually the latency bottleneck isn't the speed of a round-trip request with PHP but the speed of downloading and executing all that JS in your SPA.
I believe this is a losing battle, not because of hype, but because of the business reasons for using frameworks that are capable of delivering richer widgets. We can have our cake and eat it too, instead of fighting the shift in technology.
If you want to check hospital bed occupancy by state, it's 3 clicks, a 5 second spinner while it loads the results and shows the first 20 rows. It doesn't sound that bad, unless you refresh, in that case it throws you back at the landing page.
This should be a static site, but even if not, this is actually slower than a server-side rendered page. SPAs are not desktop-like when they constantly request server data, even if they hide it with slow animations and spinners.
Compare their approach to Worldometers' COVID statistics page, which, while less functional, is very lightweight and responsive.
In my experience, that ecosystem of components does not compensate for the cost of having to make a thousand decisions and have an equal number of discussions, considerations, and trade-off evaluations within your team (how to do state management, which router to use, class or function components, do we do SSR or not, should we use TypeScript, etc., etc.) vs. just using Ruby on Rails / Symfony / etc.
> Let's just continue to build better SSR tools so we can have the best of both worlds
Yes, just adding more and more complexity to compensate for what you get by default with traditional MVC seems to be the right approach (irony).
Almost none of the SPAs I've been involved in over the last 5 years required "desktop-like" interactivity. All of them would have been served a lot better by more traditional approaches.
I think SPAs, and frontend-heavy frameworks in general, are an amazing technology, but they're certainly overused. Business-wise, it makes no sense. Most of us building SPAs shouldn't be. The problem is, it's not trendy or hip to use RoR, and everyone wants to have fun too.
In 2014-2015, React was eating the world, and newspapers didn't want to get left behind.
Yes, I encounter it frequently. Some slow fucking main page loads for 5 seconds with no article in sight (presumably chewing on all of the trackers, assets, whatever, but no story nonetheless). Then a spinner starts to load the article, something stutters in after another 5 seconds, and I get false hope that I can start reading. Another two seconds and the GDPR cookie notice pops over, then a subscribe widget, then maybe a local weather widget. Close all this shit and now read the article on half of my screen, because the top half is occupied by a banner with a breaking-news ticker.
> Another point, it's true Reddit doesn't need a SPA. However, while I hate their new design, at the scale of Reddit an SPA can have real effects on server costs as well as user engagement. And, if you look at their growth numbers it seems to be working out for them
I’m sure their developers smoke some pot too, but we don’t attribute that to their growth. Why would you think an SPA helps when it’s one of the most widely hated features of the site? Has it occurred to you that the growth might be happening for a different reason?
> Third, the ecosystem of well-built components means that even if you don't really need a SPA, using React could save you money in launching your MVP, which is a big consideration for startups.
The ecosystem of server side stuff is far deeper and more mature. I don’t buy this argument.
> I believe this is a losing battle, not because of hype, but because of the business reasons for using frameworks that are capable of delivering richer widgets. We can have our cake and eat it too, instead of fighting the shift in technology.
Unlikely. Once the hype dies down people will realize SPAs are like mobile apps. You don’t need them for the vast majority of use cases and their instability and general shit performance will result in punishment by the search engines to the point where people with html5+css sites will rank higher and suffocate the bloated turds.
One thing I can tell is: they get worse over time, and the more JS they involve, the worse they end up being. I can talk a lot about the UI design itself and it won't be nice (suffice to say: it's getting worse, and it's most likely because the goal of the bank isn't to provide a productive financial management UI, but to confuse you and upsell you financial products), but beyond that, performance goes down with each iteration.
(My least favorite piece of garbage among user interfaces is the offering of IDEA Bank in Poland - an Angular.js monstrosity where every operation - like opening account history, or downloading a PDF with transaction details - seems to take 30 seconds to one minute on a good day. The interface itself lags a lot and starts to visibly slow down when listing more than a couple dozen items - say, transaction history for the past 6 months. But showing such long lists isn't a good idea anyway, because if you scroll to the end of it, some random XHR will fire and reset the list to "last month" or something like that, because of course that's a reasonable thing to do.)
Banking pages absolutely do not have anything in them that would require "desktop like functionality". They're the poster child of the document model - their job is literally to be digitized bureaucracy. They present you with forms and respond to queries. Every interaction you want to have on a banking interface boils down to that. Request a list of this. Request details about that. Send that much money there.
I think this is heavily dependent on how fast the screen refresh is. Page reloads can be quick enough to appear pretty responsive - but if the website takes 3 seconds to serve the page up that’s obviously no good.
As a React developer this makes me anxious, lol.
I also bank with another bank that does full-page reloads and it's a huge pain. And it's frustrating when I have to do something because of how slow it is comparatively.
On the other hand, web applications that provide full-featured experiences are only possible because of the full spectrum of web technologies. Choosing not to run JS and claiming that Google Docs should work without JS is ridiculous.
In the middle, you have countless SPAs, but also slightly interactive websites that are 90% content, and 10% calculators/maps/widgets.
I increasingly rely on either Internet Archive's Wayback Machine (which also fails remarkably often to fully, or even partially, present SPAs), or Archive.today (archive.is, archive.fo, and friends), which is painfully slow to acquire content but does manage to render most of what it attempts.
Yeah, like your example, Facebook. Ever since their redesign, I've switched to only using the mobile+noscript site (on desktop), because the SPA version is resource-hungry to the point that it regularly DoSes whatever browser thread it gets assigned to and has UX that, ironically, seems to be terrible as anything other than a mobile app (they've replaced text with abstract square "touchable" buttons and introduced airy spacing everywhere that allows you to see maybe about half a post per screen).
It's as if its designers have been trying to ram their mobile app down my throat for years (by nagging screens at first, and then by outright removing the ability to view private messages from the mobile page - except sometimes by refreshing sufficiently many times you could trigger a bug and still drop through to the old messenger view, adding insult to injury), and when I still didn't bite, they decided to replace the whole service (which up until then had been one of the last remaining decent mainstream websites) with a facsimile of one.
I have literally not heard of anyone complaining about this (not that your argument is invalid). A whole lot of people are just not gonna bother or check how resource intensive it is.
EVERYONE complains about the speed of "new" (2020?) Facebook.
I am not sure what "web" you're using, but as someone who uses noscript and has been enabling scripts for over a year now, I can firmly say JS is NOT used for making "desktop-class applications more accessible".
It's used for ads.
And spying. Lots and lots of spying.
Googletagmanager/analytics is everywhere, it doesn't deserve to be, it doesn't need to be. That domain needs to die a painful, horrible death.
Facebook, Twitter, SessionCam, and many others are used to bloat pages, increase my energy usage, drain my battery quicker, and contribute to the waste of energy on an unprecedented scale.
Just ask yourself how much money, battery life, and bandwidth are spent every year on downloading useless scripts that, as far as I can see, offer no value whatsoever. By selectively deciding which scripts to enable I get the following results:
1: pages are lighter, less bloated, and STILL WORK
2: I download fewer scripts, use less battery, and save bandwidth and energy for myself and all humanity.
3: There is less spying, as fetch() requests are blocked, and there can be hundreds of them in a single web-page session (watch a really bloated page for 10 minutes and there can easily be hundreds of requests).
Test: load the following two pages, study their usage, test each with JS enabled/disabled:
1: https://old.reddit.com (with JS: 2.69 MB, without: 2.34 MB)
2: https://reddit.com (with JS: 8.58 MB, without: 8.10 kB, but the page is broken...)
Bearing in mind that old.reddit works fine with JS disabled, this shows JS is not needed for a site like Reddit to work at all.
Yet, "web developers" use them as if it's nothing. Pages that execute over seconds, possibly MB's of data, all the additional requests that are made to enrich the likes of FB/Google et al.
So no, SPAs are a scam, and the web is worse today than I remember it being back in the 2000s; at least back then we didn't have cookie pop-up boxes just because people can't help but abuse JS.
Most of the GDPR violations I've found so far are from scripts that have no place or purpose, that slurp up user data without remorse, that, if disabled, don't impact the functionality of the page, and that enable the great surveillance capitalism and data-raping we are seeing today.
Yes it is. It is used for ads and spying too, but to say it is "NOT" used for desktop-class apps is wrong.
> at least we didn't have cookie pop up boxes because people can't help but abuse JS
pop-up and pop-under banners with IE6 were BAD.
Why does that necessitate disabling all JS? You can enjoy the performance of an SPA while selectively disabling tracking.
Then there's the issue of [randomstring].cloudfront.com - what does this script do? Do I need it?
Often finding the right combination of scripts to enable to get one piece of functionality to work makes the whole experience painful and frustrating, often I just forget what I was trying to do and go elsewhere.
You might not care, but most people publishing any sort of content online probably want to know who visits it (from where, mobile or web, in what numbers, when, etc.). The easiest way to achieve that is via Google Analytics, considering there's shared hosting and that most people who care about those numbers aren't the people who installed or developed whatever software runs the site (the average website runs WordPress).
Yet, in practice, this is what happens.
When turning JS off is an option, that's usually one of the biggest improvements I can make to my page surfing experience. Things load faster, ads don't expand over content, the page reflows less often as things are injected, shit doesn't autoplay, and spinners and animations don't distract. I can just read the content.
It's possible to use JS well, but I don't see it happen very often. And the unpleasantness from abuse usually outweighs the benefits from good use.
So far I have seen no evidence that this would be the case (rather, I have reasons to believe the contrary).
> and its users would have to wait longer for content.
They would actually have to wait less. Recently, two of the sites that I used to use switched to React. Before, I was able to have literally hundreds of tabs open with them, each loading almost instantly (excluding media). Now I can barely hold 2 open, and they load slowly (even if we ignore the time it takes to load the media).
Go and take a look at a "desktop class application" from 10 years ago - say Photoshop 6. Compare the speed, the UI, the native look & feel.
Go to a SPA today.
To be clear, an SPA is generally a front end for user input, such as a form (or series of forms) cobbled together so that a page is loaded fewer times. This is a traditional web path that exchanges page traversal for interaction, often to maintain state locally. Conversely, a browser application is an application that executes in the browser without regard for server-side data and state, such as Photoshop or a spreadsheet in the browser, which aren't concerned with any server application.
The problem I see is folks unnecessarily turning their websites into desktop-class applications. It's especially popular on ecommerce sites -- today I tried to look at some lumber prices and the website had input latency measured in seconds before my phone's browser just crashed.
If we, for a second, stick to the meaning accessibility used to have, namely usability by people with disabilities, the reality is quite the contrary. The SPA trend of "let's just move every app into the web" fuels the digital divide like nothing else. It has become harder and harder to actually use the modern web, and a lot of why that is comes down to SPAs and JS.
Most people understand the promises of SPAs, but there are several forces that play against everything you said:
- Some websites don't need a desktop experience yet they go the SPA route.
- SPAs are -in my experience- significantly harder to get right than server-rendered apps. For sites that are a good fit for "the regular old web" we are speaking about at least an order of magnitude here.
- Oftentimes it is companies/products with significant resources that embark on the SPA ordeal. This usually means that they also have several teams demanding product analytics, A/B testing and what not, and hence their sites end up loaded with random shit (gtm, analytics, optimizely, facebook pixel and the kitchen-sink).
For all these reasons, it takes an extraordinary (i.e: significantly better than average) team, from the developers all the way to management, to deliver on the SPA promise.
As a result, most SPAs suck, and hence a lot of people cultivated an aversion to them. It really is that simple.
I think they do that mostly for the users, who would otherwise have to download the header 100 times, once for each action they take. I'm not really sure all the benefits of an SPA are hidden in company costs; mostly they come from the modern tech approach.
Writing this, I still agree with the article: an SPA is needed when you are behind authentication, but for public pages you can live with a progressive web approach.
If only they weren't deploying some changes every other day, pretty much eliminating any benefit from caching...
One spreads the load over CPUs better.
Better for companies running high-powered servers with economic benefits of scale, or for users running low-end devices? This is just dumping an externality.
Also, arguably, most performance issues of SPAs don't come from business-relevant calculations being slow. They come from scaffolding and a bunch of other gratuitous overhead that would not happen at all server-side.
It's usually not the hobbyists who pack their sites with all the modern fancy js.
Remember when the web was for people to communicate? Not just to 'satisfy business needs'?
What are you referring to?
I really think that the business-first mindset made the internet rot faster than necessary. No monetization means no tracking, which means no cookie warnings... and so on and so forth.
Then Netscape came. They were the first having a primitive idea of the potential of the Web beyond simply linking pages.
The dot-com bubble of 1999 and the first boom of e-commerce happened precisely because, in the few years preceding, everyone was scrambling to dominate this Web thing. This included Microsoft, Sun, and so on.
Now, none of this is new.
The entire idea of "Rich Web Applications" is as old as the Web. Silverlight, Java Web Applets, Flash,... They were all about this idea of building applications that could be loaded - and more importantly: controlled - via the Web.
Over the past 15 years, Google succeeded in dominating the browser market and building a set of APIs that made it easy to build such Rich Web Applications without needing extra plugins or third-party sandboxes. You can just do it using JavaScript, HTML and CSS.
And while that's not a bad thing in itself, the problem is that an entire generation of engineers has created an entire layer of abstractions on top of the browser engine in order to reinvent the exact same things which already existed back in the '90s. Only now, instead of running separate native programs and applications, everything is corralled into a single browser environment... which tends to be entirely controlled by Google, if you use anything powered by Chromium.
None of that is truly necessary when it comes to plain text. Heck, none of that is even necessary to render a single image on the browser canvas.
Plenty of major, highly visible, high traffic outlets like newspapers or media use these layers of complexity to publish text. Why? Because it gives them control over your experience and what you can do with the content e.g. paywalls, DRM, advertising, intricate metrics,...
Now, plenty of the Web still offers plain HTML and CSS, just like 20 years ago. And that's awesome. But that's a long tail of websites which largely remains in the dark since the vast majority of Internet users have been corralled into a set of centralized services that tend to promote links to these highly visible outlets through their recommendation engines.
Remember, bandwidth, machines, and access used to cost way more back then than they do now, so resource owners wanted to get the best bang for their buck from the get-go. Note that "resource owners" means those who actually paid for network resources (companies, administrations, schools, labs), not people who merely benefited from them for free because of their positions (teachers, students, lab rats, etc.).
You seem to be the unfortunate victim of a false memory, a similar phenomenon to that experienced by the "things used to be better" crowd.
It's a harsh reality, but it's very much the case and it's the first thing you learn on any business course.
Communication with coworkers, customers, investors, regulators, partners, etc. One of the main reasons people interact and communicate with humans who aren't in their close circle is business.
A charity is not a business, it's a different type of organization. The fact/reason that most of them decide to operate like businesses is outside of the scope of what I stated.
That's not a proud age; it was less than what we have now.
Maybe a right-leaning news site gets archived (for lack of JS) and a left-leaning one doesn't. I would certainly want to see what both sides were writing about at the same time.
Maintenance is hard when your site is littered with code you never wrote and never understood beyond the API.
Access as in the original manuscript of the book is still around somewhere in a readable format so new ebooks can be published in whatever format is in demand then? I certainly hope so.
But should ebooks sold now be readable by anyone in 50 years? I don't think many people worry about that, nor should they. Archiving should be for persistence, so done in some format we can be absolutely sure we can read (such as plain text).
Distribution should be for convenience, i.e. likely whatever proprietary format users want.
But they really effing should. 10 years is not a long time at all.
> it's usually not the hobbyists who pack their sites with all the modern fancy js.
BS. News outlets, Google Anything, Facebook itself, etc., are usually way worse.
Same with worrying about personal photos. I don't have many photos of my grandparents' youth, and none of my great-grandparents'. But that's okay. I don't want to be defined by them, and I don't want my great-grandkids to know the tiny details of my life. Legends and stories are just the right amount to know. It's a feature that the past decays, not a bug.
In 10 years, most of the data on the web is outdated and useless to everyone but historians.
What does that even mean? Saying browsers won't support JS in 10 years is an idiotic claim. Even more so considering it was written 5 years ago.
Beyond that, I don't know why people think that the web should just be either simple pages for serving content or massive desktop-class applications. There is an entire world between these two that is perfectly valid. Yes I want to host a blog but I also want it full of widgets and other fancy JS. I want to use new frameworks and rewrite it every few months. It may be messy, it may be sometimes inaccessible, and yes it may not be available a few years from now. But the web is and always has been about creativity and expressing yourself in any way you want. Heck my personal site ran on Flash once upon a time.
People here are the kind who would complain about geocities or myspace pages back in the day ("why is it all so flashy? Why can't it just be a simple page of text?")
If the same content was just a normal HTML document it could at least be easily archived. It's also trivial to keep online. Even if some CDN dies and CSS or JS doesn't load it's still readable.
I wonder if brick-and-mortar stores worry about what happens to their place of business if they fail? Not likely.
The internet is diverse, some things will be archived, some won’t.
My issue, and I think a serious issue with the web today, is all of the essentially static content treated like it has the same needs as some web app. Your gig delivery app needs to represent the instantaneous state of the database back end. Your blog post does not.
True, and that is fine. However, by using JS + APIs, we risk that others who want to immortalize content for purposes of historical or cultural research fail to capture quite a bit of our current culture and content. I would be sad if my grandchildren are unable to get a feel for the world I live in like I could get a feel for the world my grandparents lived in by means of archives, magazines, newspapers from their day.
If archive.org isn't doing the same, that's a bridgeable tech gap.
This means that archiving services need to do a lot more expensive work. It raises the bar, excluding smaller, more diverse web crawlers and indexers.
And regardless of that, it's a big gamble to hope that the third-party CDNs that you rely on will continue to host exactly the same JS libraries ad infinitum.
Yes, if a site loads a ton of external content it has the potential to break. But that doesn't apply to just JS but also images and all other static assets. If a JS-heavy site is hosted completely from its own domain there's nothing stopping easy archival of it.
I don't agree with this:
> Search engines and archival tools can handle all this very well today.
I think most developers have been in a situation where they had to explain to users that a moon on a stick isn't easy just because Google can do it. It may be commonplace, but it's not easy or cheap.
The difference between real HTML and JS execution is orders of magnitude in time, expense, maintenance, and stuff to break. That really can't be overstated. It's the difference between a 500 millisecond call to curl and a whole headless browser with a 5 second wait.
I hope you don't think I'm putting this too strongly, but this raised bar is a force in the monopolisation and de-diversification of the web.
Not to mention potentially rewriting the executable code to still work on the archived copy, or potentially ending up with a copy where all clickable links are broken depending on the specifics (and sometimes also content further down the page, if scrolling triggers dynamic loading). And if the site doesn't update the URLs to be unique for each "page", well, good luck finding that dynamically loaded content in the archive at all to begin with.
- you will not have it running 10 years from now
- I am not able to make a snapshot of the content using tools like curl
Google search hasn't died because sites use JS.
I believe it's the other way around. Websites use just as much JS as they can get away with, without sinking in Google results. If Google never ran JS in the first place, most websites wouldn't depend on it.
And now every crawler has to keep up with Googlebot's computing power if they want to archive the web.
If the content is only visible when executing JS, then an archive crawler which does not execute JS will not archive it, and a search engine crawler will not index it.
Sadly yes. Occasionally, sites use Subresource Integrity (SRI, https://caniuse.com/subresource-integrity) hashes to ensure the scripts are identical to what they expect. But it still surprises me to see secure, sensitive sites loading third-party scripts.
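For what it's worth, the hash in an integrity attribute is nothing exotic; here's a sketch of how you could compute one yourself in Node (the file path is made up):

    // Compute an SRI value like "sha384-..." for a local copy of a script.
    const crypto = require('crypto');
    const fs = require('fs');

    const file = fs.readFileSync('vendor/some-library.min.js'); // hypothetical path
    const digest = crypto.createHash('sha384').update(file).digest('base64');
    console.log(`sha384-${digest}`); // goes into the script tag's integrity attribute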
And the issue of CDNs being offline or deleting packages can be solved by vendoring dependencies, which is possible without Webpack and was already the common practice for more than a decade before using CDNs for dependencies became super popular.
That doesn’t mean there aren’t dumb packages like `is-array` that are nothing more than `Array.isArray` with a polyfill for those working with Chrome 4 or earlier. Even IE9 has `Array.isArray` according to MDN! And that package somehow had 59,000 downloads this past week?!
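To make that concrete (assuming the package's usual single-function export):

    const isArray = require('is-array'); // the package mentioned above
    isArray([1, 2, 3]);                  // true
    Array.isArray([1, 2, 3]);            // true -- built in since ES5 / IE9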
The adjacent profitable is ever so much more compelling.
The Risk Makers. The nuclear, auto, and food industries
The way you write a comment on HN is likely to change over the course of decades, I doubt it'll still be the same HTML, HTTP, networking stack etc.
I think his reductionist take on it has resulted in you missing his point. Lemme take a shot at expanding on that for you.
1. Yes, browser will run JS in 10 years. However, in 10 years, the market will have changed. Firefox might be dead, and everything might standardize on a Chromium-derived monoculture. Backwards compatibility will be kept to a modest extent but we may see a new 'quirks mode' shift in the ecosystem or other attempts to carve off backwards compatibility.
Look at how Apple uses this to their advantage to gain internal agility, while MS takes the opposite approach and sacrifices it for stability. The former can eat the latter's lunch when it comes to technology on the cutting edge, even if the developer ecosystem and end users pay the price in subtle ways. If Google follows this lead with their monopolistic control of Chromium, we could easily have a world in 10 years where JS is slightly broken on a lot of the sites that use it deeply: sites that explore the limits of the APIs, sites that already experience cross-browser compat issues, and so on.
The problem may only get worse over time; after all, we already have many use cases within IT to run a VM of Windows XP and IE 6 to access older devices that simply don't work any more with anything newer. If they'd been designed with simple, server-side frameworks and vanilla Web 1.0 stuff they would have aged a lot better. Isn't it possible that we're on the wrong evolutionary path here if that's the case?
Think also of "applications" written in Node that you might want to run on your own server - it might not be possible to build them anymore, the dependencies might have dried up, making them difficult to rehydrate without doing extensive surgery.
2. Archival is another layer of the problem. If your single-page app doesn't present some mechanism where a scraper can meaningfully put in a URL and get back some amount of HTML reflecting the desired article / content / etc, it may not end up in the Wayback Machine or other archive sites, which are absolutely critical civilizational infrastructure if we want to have any hope at giving future historians anything even approaching a balanced and truthful perspective on this pivotal age.
Is Composer inadequate? Or maybe just not widely adopted by the community? I only used it for a short period of time several years ago.
It would help if software was more self-contained, or at least offered the option to be that. This has nothing to do with JS specifically, of course.
I'm also fond of this comment from Roy Fielding:
> As architectural styles go, REST is very simple. REST is software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency. Unfortunately, people are fairly good at short-term design, and usually awful at long-term design. Most don't think they need to design past the current release.
You simply can't get away from this fact, even if you don't like it.
Sometimes I wonder what exactly folks want, desktop apps only, the web as purely as basic html?
All sites like FB, Twitter should by law be required to work without JS at a minimum due to how important they are in our society today.
It makes sense in that context; but I agree it's a shame they won't keep a small alternative frontend just to view tweets.
Flatly, this is my observation, and as of the time of this post I hadn't really seen it mentioned:
JS is overused not just because of tracking and ads, though that is a big part of it on every popular website I've visited in the last 30 days (side note: thanks, uBlock Origin!).
Web components are somewhat of an exception to this; as API considerations go, they do attempt to strike a balance between server-side rendered content and dynamic client-side content. Our industry just isn't heading in a direction where that balance gets struck.
I am not sure websites are designed to be anything more than "it's here today in this society, in this context, and may break in a few hours to a few years". Much of the "early" internet is now gone, just as FB, YT and others will also one day, be "gone".
It's entirely possible the concept of "data storage" may also one day be gone, we can't be sure the technology of today will be there in 5 years or 500 years, we'll all be long dead by that point anyway.
This message will likely be read by humans alive today, right now, and never seen again for the rest of time. Not everything needs to be archived and remembered.
Honestly, I'd prefer that our internal business app DIDN'T appear in google...
Imagine if people conflated Android development with programming in Java. (This might be a more appropriate metaphor than intended, since most of the time the people who strike me as being out for blood regarding JS tend to exhibit the arrogance and nuance of someone coming straight from their university's second year Java course.)
I tend to browse with JS disabled as a default (images too, that's another conversation maybe), and for sites that are really important to me, I enable it in browser settings, just for that site (sometimes temporarily, depending). I leave a couple of tabs open that I can get to very quickly for that purpose, but fortunately most of the time I can ignore sites that require JS.
Reasons include speed, convenience, and security. (I also do most browsing in separate user accounts, depending on the security level of what I am doing, and what other data that account handles.)
Edit: Sometimes for those same reasons, or for automation purposes of some tasks (like checking a particular page for a certain fact that I want to act on when it changes, such as some security updates), it's nice to be able to use text browsers like links (or wget & curl) too, and have the info available w/o requiring JS for it.
Bizarre argument. Seems like something you don't need to worry about but whatever floats your boat
It's a pre-emptive response to "my site is JS; if you don't like it, leave it!"
Curl isn't even that great to archive with anyway is it? wget --mirror used to be the hotness.
I think it's a valid point to raise, but how much you care about it is going to depend hugely on what you're doing.
Obviously if a company wants tight control over their content then this may work to their benefit - although they may still find that their ability to render their own content requires non-trivial maintenance over time.
But that's a pretty narrow view of the web, and not everyone publishing on the web fits that model.
One of the things I enjoyed doing without JS was using a comma-separated tag system for posts and then filtering the post list by tag.
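For anyone curious how that works with zero client-side JS, here's a hedged sketch assuming an Express-style server and posts stored with a comma-separated tags field (all names are made up); the "filter" is just an ordinary link like /posts?tag=webdev that the server answers with plain HTML:

    const express = require('express');
    const app = express();

    // Toy data; in practice this would come from files or a database.
    const posts = [
      { title: 'Hello world', tags: 'intro,meta' },
      { title: 'On SPAs', tags: 'webdev,js' },
    ];

    app.get('/posts', (req, res) => {
      const tag = req.query.tag;
      const visible = tag
        ? posts.filter(p => p.tags.split(',').includes(tag))
        : posts;
      // Plain HTML response: the browser never runs a line of JS.
      res.send(visible.map(p => `<p>${p.title}</p>`).join('\n'));
    });

    app.listen(3000);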
Pretending to know anything 10 years in advance is foolish.
There’s a degree of contempt here. That makes me evaluate this as a rant.
If my target audience is developers, I’ll consider using server side rendering. For normal people I doubt it matters either way.
Did you ask? Like, honest to god talk to people about how they feel about the website - not just rely on deep telemetry that mostly serves to let people lie to themselves and whitewash doing whatever they want to do as being "data-driven".
A big problem with modern technology is that the feedback from users does not reach the vendors.
I have plenty of things to talk to users about without getting them into whatever this is (still deciding if I class this as paranoia or being cautious or some kind of elitism). Based on the level of vitriol, looks like one of the outer 2 so far.
If you want to persuade developers to do things differently, bad mouthing their tech choices is a really bad strategy.
You can't expect normal people to browse with JS disabled, because by definition of "normal" they have not enough technical knowledge to do it, and to deal with the issues it causes on the modern web.
> I have plenty of things to talk to users about without getting them into whatever this is
This is the real answer to the question, "why is my computer so slow"? Regular people still mostly think "it's viruses", but it's not viruses - just web developers making websites with no consideration of end-user performance.
> If you want to persuade developers to do things differently, bad mouthing their tech choices is a really bad strategy.
I think this is a viable (even if not most efficient) strategy, because the current standard practice came from praising these choices. Counterbalancing the cargo cult through pointing out the trade-offs being implicitly made is a good idea, IMO.
> This is the real answer to the question, "why is my computer so slow"?
Yes I can. That's one of my criteria to start taking this seriously. Though I could be persuaded other ways.
Do you have any data to back this up?
Do any internet security suites (whose job it is to "protect people from viruses") advocate disabling JS?
Is Apple Safari switching off JS?
Are there popular Chrome extensions that do this?
If it's really that important to you, you can probably go help make one of the above happen.
Actually I care about front-end performance a great deal. I advocate for it, as well as server side rendering.
I'm just not convinced about the evils of JS. I routinely have plenty of tabs open and if my fan starts running I close the one that's using the most CPU. Half the time it's Slack.
So far, "I've disabled JS" translates in my mind to "tech extremist". Not seeing a lot of people doing it and not believing the reasons. At least not yet.
My technical decisions are driven by efficacy, expediency, skill marketability and what I enjoy working with. If you want to change my mind you'll need to speak to those factors.
OTOH, Twitter, Slack, and package tracking are consistently changing. It makes more sense for something like that to load content via JS, since there's no guarantee the page looks the same between two loads.
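A minimal sketch of that pattern, with a made-up endpoint and element just to show the shape of it:

    // Poll for content that genuinely changes between loads (package status, chat, etc.).
    async function refreshStatus() {
      const res = await fetch('/api/package-status'); // hypothetical endpoint
      const data = await res.json();
      document.querySelector('#status').textContent = data.lastUpdate;
    }

    refreshStatus();
    setInterval(refreshStatus, 30000); // re-check every 30 seconds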
If the search feature fails due to me not having JS enabled, fine. But the entire documentation walled off from non-js browsers? For what purpose!?
- Just Read: https://chrome.google.com/webstore/detail/just-read/dgmanlpm...
aa;dr = Ad auction. Didn't Read.
Coincidence? Or, is the world of technology really that small? :)
* Reddit: https://web.archive.org/web/20210106020050/https://www.reddi... and https://archive.vn/https://www.reddit.com/
* Youtube videos are understandably not archived because of their size; but the rest of the website (comments, collections, ...) is barely usable because of the consent popup https://web.archive.org/web/20201112024051if_/https://www.yo...
* Microsoft support: https://web.archive.org/web/20210109125722/https://support.m...
* Instagram, even before it was all login-walled
* Amazon product pages are partially broken (no images, the rest loads without JS)
That's what web archiving technologies are for. See for example: https://github.com/webrecorder.
Does the author discount this kind of use as well?
I think JS is quite fine by comparison.
At the same time, JS-heavy sites often have dynamic elements that are difficult for screen readers to handle (e.g. an element pops up somewhere: easy to see for a sighted user, but tricky for a screen reader to decide whether it's important, and if it moves focus there, the user loses their position on the page, which is confusing). They are also just as likely as non-JS pages to not have been built with accessibility in mind, while often being more complex, which makes the structure harder for both the screen-reader software and the user to understand.