A few weeks back, individual files became unreadable with JavaScript off, and the browsing experience got much worse even with JS on. Now it's main repo pages too.
I've not seen any announcement about these changes.
My gripe with their switch to an SPA (or whatever they changed in the last few weeks/months) is that viewing files got really jarring, so I always switch to the raw view. Searching through a file is really bad with their lazy loading, scrolling horizontally in Firefox on Android is near impossible, and sometimes hitting the browser back button won't switch the page.
I've been dealing with this ever since the beta. Reviewing PRs on mobile is really, really frustrating. I almost feel like they're trying to push us into their mobile apps a la Reddit.
JavaScript itself is not usually the problem (aside from certain fundamental issues). The computational gap between a developer on an M3 MacBook and a person on a used Dell from 2008 is already quite large, meaning what feels "snappy" and "responsive" to the former can feel "sluggish" and "annoying" to the latter.
The other gap between those two people is network speed: many areas of the world have access to extremely limited bandwidth, very slow connections, or both!
Combine the computational gap with the bandwidth gap, and large JavaScript files blocking access to information on a global website become an accessibility concern. People who block JS are pushing for a more accessible web, either directly or indirectly.
I won't fight tooth and nail for all websites to _not_ have JavaScript, but I wish they would be progressively enhanced by that JS and still (at least in the core sense) usable without it.
The websites are also badly written, and appear to be optimized for being badly written, because their goals aren't aligned with their users'.
If you do a View Source on Reuters.com's main landing page, its script is a single line that's 1,300,000 characters long. And every time you land on a page it tries to dump 1,300,000 characters of script on you (on top of the multi-MB videos it auto-loads) [and the pop-ups]
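(If you want to check for yourself, here's a quick sketch; the URL is Reuters' landing page and the exact numbers will vary over time:)

# print the length of the longest line in the HTML the landing page serves;
# a single-line megabyte-scale <script> shows up as a huge number here
curl -s https://www.reuters.com/ | awk 'length > max { max = length } END { print max+0 }'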
Most major corporate websites are that way: all written by algorithms, with massive JS downloads and huge 64-bit hash keys on every <div>, tracking 64-bit click counters on each element and dumping all the computation on the user.
Thanks. That's a cool site I had no idea existed. Frankly, most major news websites should offer this as a normal alternative. The news browsing experience would be so superior in many ways.
I think the general consensus is that people would like it if anything that doesn't need javascript didn't use it.
There are a few who just won't engage with the utility of it, but for most people I've talked to, it's just a matter of oversaturation. Case in point with github: why should you need a script to handle documents? The web was built for documents. Browsers are made to interpret documents. Putting javascript in the middle of that should require a good reason. And, unfortunately, for a lot of applications (and even static websites) devs tend to just use javascript because it's easier than reasoning about how to handle static content differently from the actually dynamic content they're working with.
But, like others have said, Github does have some pretty compelling reasons to be an SPA. So there's a little bit of reasonable understanding here. The problem, in this instance, is compound: first, github WAS document-first and has only now been made to require an unnecessary technology layer, which is pretty galling; and second, Microsoft is not some 5-dev operation prioritizing tasks for their first app. They are one of the most well-established companies in the world, who are famous for their backward compatibility. If ANYONE should be able to produce a competent update to a product (github) that threads the needle between implementing more functionality and preserving best practices for content delivery, it's Microsoft. And yet, they've chosen to roll out an undeniable degradation in service to the production site. It's just kind of baffling.
Personally I have nothing against it (or to be more exact - I've given up hope for a Javascript-optional web) but I see the argument of people who keep it disabled because:
– it consumes more battery and bandwidth (the latter still being expensive on mobile in some parts of the world);
– it's often used to track you (including tracking everything you do: mouse movement, things you type into forms without even submitting them, etc.), serve you ads, and so on;
– most security issues (XSS, browser-based 0-day exploits that escape the sandbox) rely on Javascript being turned on;
– it overall slows down many pages with no tangible benefit to the user (this one is obviously not always the case; it depends on the website).
I have js disabled by default on my phone for these reasons.
It also eliminates most of the incredibly annoying bits of the web: cookie banners, modal dialogs asking me to subscribe to newsletters, auto-playing videos.
On my phone 90% of the time I'm looking to read your content not interact with your spa. Enabling js is a 2 touch process if I need it.
I think it is fairly stupid on my part, but it does add another layer of effort between me and doomscrolling.
We love javascript running in the browser. We just want it to be optional. I'm not always using a machine or connection capable of handling today's (or even yesterday's) javascript frameworks. Sometimes I just have a machine that can read HTML perfectly well, but that's about it.
From my point of view, there are several reasons:
• It’s just not necessary. The Web is, at its heart, about resources and actions on them. Those resources are mostly documents. There is just no need to execute a Turing-complete programming language in order to display a list of links to files (which is what GitHub is).
• It hinders use of lightweight browsers. There is no good reason that I should be forced to fire up a VM like Firefox or Chrome when I could use eww, w3m or elinks (see the sketch at the end of this comment).
• It is insecure. Javascript enables whole classes of exploits. Large browsers such as Firefox and Chrome have a much larger attack surface than small HTML viewers such as eww, w3m or elinks.
• It hinders privacy. Javascript and large browsers enable much more persistent user tracking than lightweight browsers such as eww, w3m or elinks.
For me, the first reason is the strongest: I think that those who push Javascript fundamentally misunderstand the Web. I am grudgingly fine with using it to provide functionality which would be impossible without it, but I also think that experiments such as htmx show a way forward to put more behaviour into the browser itself. Browser apps are certainly neat! But the Web is ultimately about linked documents. It is built on the Hyper Text Transfer Protocol and the Hyper Text Markup Language, not the Network App Protocol and the Network App Programming Paradigm for Inexperienced Engineering Students (although I do sometimes think Javascript proponents just need a nap and to have their nappies changed *grin*).
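A minimal illustration of the lightweight-browser point above, assuming w3m is installed (eww and elinks behave similarly, and example.com is just a stand-in URL):

# read a page as plain text in the terminal; this only works when the
# content is actually in the served HTML rather than assembled by JS
w3m https://example.com/                  # interactive text-mode browsing
w3m -dump https://example.com/ | less     # one-shot plain-text dump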
Try disabling Javascript and browsing the web. On the sites that work, it's a lot nicer. No cookie pop-ups, no auto-playing videos that chase you as you scroll down, no random pop-ups asking you to sign up for a newsletter. No browser fingerprinting to sell my behavior to advertisers or worse. Just no more dirty tricks in general. Unfortunately, these dirty tricks are how companies make money, so they have an incentive to make websites not work for users with Javascript disabled.
I think the point is that you can provide really good, lightweight, uncomplicated access to data in pure HTML, but instead websites keep creating "solutions" and complications for non-problems, making simple things more complicated for no reason.
People have the ability to decide what does and does not run on their machine. I'm sure a major motivation is to stop annoyances (e.g. ads, sounds, video players, etc).
That being said, there is a certain entitlement that comes along with it: while they're absolutely entitled to decide what runs on their machine, they also expect sites to dedicate engineering/QA/time/etc. to this niche, in essence giving a sub-3% user base a disproportionate amount of attention. IE11 has higher usage, and you likely shouldn't support that either.
Sites should, both morally and legally, support ADA users. But screen readers and other accessibility technologies have had full JavaScript support for going on 20 years now. If you're spending energy/money on this no-JS cause, you're doing it for a small handful of contrarians who won't thank you.
> In essence give a sub-3% user base a disproportionate amount of attention.
Some comments in this thread are arguing that many less economically developed countries provide poorer connectivity and lesser bandwidth than elsewhere. Are the users in these countries truly "sub-3%" of the global user base? I honestly don't know.
Depends on the site, naturally, but it seems to me that devoting dev resources to serve users in less developed countries is a good thing. Wikipedia, for instance, renders essentially the same with or without Javascript. That helps to account for its vast international uptake, is my guess.
This is a really important point, because the World Wide Web (WWW) was designed for openness, collaboration, and compatibility.
This is also the origin of ideas like XML, which were designed to have schemas, namespaces, and transformability.
A lot of these founding ideas have been lost in the name of productivity or profitability.
That undermines innovation and the free movement of data.
JavaScript also used to be a weapon in the browser wars, used to create incompatibilities between browsers (see Internet Explorer).
So the saga is rather complex.
But the conclusion is clear: if you care, insist on openness and compatibility on the web.
And study the classics, the original design principles of the web, and their motivations.
I don't want to leave my editor to read an HTML page that has content that does not require javascript to display. I have no problem with js in a full-on web-browser, but up until 4 hours ago, Github displayed readmes and code listings without the need for javascript, so I could run it in the editor's web browser, which doesn't have a js engine.
I can do that via more complicated emacs web browsers that essentially render chrome in a buffer. But I shouldn't have to for things that are only text in the first place.
A lot of people don't have access to the latest laptops and phones or fast internet connections, yet they still need to use the same bloated walled gardens online, each shipping megabytes of JavaScript libraries.
I didn't turn off JavaScript on my devices until I took a 2-month trip to East Africa and needed to do so to get any sane functionality. Since then I have been a strong proponent of testing on slow devices and connections for usability, because that is the norm for a lot more of the world than we ever think about.
JavaScript is fairly slow due to technical constraints. It interrupts the rendering of the page twice over: through the extra network traffic it takes to download, and again while it executes.
It's usually better to avoid it unless necessary, but many web developers overengineer their pages with JavaScript to the point where it's a problem.
Check the performance gap, if you still can (the old version is being deprecated this January), between the old HTML Gmail (~20 KB of pure pre-formatted HTML) and the new JavaScript one (over 1 MB of JS/JSON to be processed and slowly rendered one div at a time, client side).
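A rough, hedged way to eyeball such a gap yourself with curl (this measures only the initial document, not the sub-resources it pulls in, and example.com is just a stand-in URL):

# report status code, bytes transferred and total time for the initial HTML
curl -sL -o /dev/null \
  -w 'HTTP %{http_code}: %{size_download} bytes in %{time_total}s\n' \
  https://example.com/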
I think GitHub being an SPA is largely fine, it has good reasons to be in many situations, what I object to is when a shopping site or, even worse, a content site requires JS to work. This seems terrible when serving HTML is a much better experience for most users.
No, it's not fine, there's absolutely no need whatsoever for Github to be an SPA. What functionality do you think can't be done ... in HTML 4.0 for that matter, much less HTML5? Sure, a good editor inside the browser is one, but then let that be a standalone app; it has nothing to do with the repository viewer.
(As an aside I hate the Jira UI for being the same.)
I agree, but I'll take it a step further. I actually rather dislike the in-browser editor when I'm just browsing code. It messes with my standard keybindings and has weird focus issues. Give me plain, syntax-highlighted text, with optional symbol referencing to make it easier to look around. If I get to the point that I want a real text editor, I'll just clone the repo.
GitHub displays text! And it is representative of a trend of websites that really only display text yet require JavaScript to work.
Which is fine ... but something has gone wrong somewhere if we require a full blown programming language to display text on the internet. It is absurd. It is one thing for HTML with a bit of extra JS on the side, but to be unable to display text without scripting enabled is comedy.
And if nothing else this really makes searching and indexing harder. That isn't good for the average internet denizen.
I dunno, I semi-frequently want to look at a handful of files beyond the readme: the license for obvious reasons, the Dockerfile often tells me a lot about how to actually compile a thing, and a handful of language-specific files indicate e.g. what compiler version it needs. (Though I agree it's a pretty hard 80/20 rule, with the readme being most of the value, then a tiny handful of files being the next 80% of the remaining value.)
I'm really struggling to see why a shopping site, which is inherently interactive, would be less reasonable than github, which displays text. I guess maybe if you're using it as an in-browser IDE, but I'm not convinced that that's how most people are using it.
It would seem that GitHub is (was?) not doing that, though? Remember when they announced that they dropped the cookie banner because they didn't need it: https://github.blog/2020-12-17-no-cookie-for-you/
I didn't pass judgement on whether it's good or bad for the user in my previous post, though my opinion is mostly negative.
Very poorly collected data, analyzed by people without much understanding of UX, product, or data, and without much sensibility or intelligence, took the place of thorough user interviews.
An example: at an e-commerce company I worked for, a much better image gallery for showing products was released to replace the previous one. All the data showed users interacting with it much more. But it also showed conversion going down. The old gallery nobody used was picked again, and conversion went back up.
Now, what really happened was that users liked the gallery; what they didn't like were the pictures of the items. Those made them rethink buying.
Thus, a crappier version of the website was released again, and users ended up having a worse experience, all in the name of the better conversion.
Seriously, with all due respect, I have worked enough in front end and data collection to know that all this tracking is generally harmful to the user: it slows down websites considerably, violates their privacy, leads to a worse experience, and only rarely shows anything meaningful that user interviews wouldn't have found.
It's mostly snake oil for c-suite, data "analysts" and product people and marketing so they can pretend they are providing any value. They generally aren't.
It doesn't? I'm confused. I mean, no one uses pure HTML anymore; everyone has at least some JavaScript, and GA is a JavaScript file that can be used to track people pretty reliably.
Pure HTML is, paradoxically, harder to maintain nowadays, though.
Can confirm, the header & sidebar do (mostly) load, but the file list and README contents are now loaded asynchronously via Javascript and with JS disabled it results in a broken page.
If you don't care about issues or wikis, it's now time to build a habit of using the command line (or the dillo plugin, which automates this process):
dir=$(mktemp -d)
# shallow clone, no working-tree checkout: downloads only the latest revision
git clone --no-checkout --depth 1 https://github.com/torvalds/linux.git "$dir"
cd "$dir"
git ls-tree -r HEAD                               # list every file in the repo
git checkout HEAD -- README.md ; less README.md   # materialize and read one file
# etc.
# if you just wished to take a gander at the repo, clean up afterwards:
cd
rm -rf "$dir"
Yeah, if you don't have a *nix of some sort it'd be harder, and yeah, you need some spare space, but the shallow-cloned bare repo (only the latest revision downloaded) is usually under 1 MB. Whatever.
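If even a shallow clone feels heavy, one alternative sketch: GitHub also serves snapshot tarballs at a predictable /archive/ URL (this is GitHub-specific, OWNER/REPO is a placeholder, and I'm assuming GNU tar for --wildcards):

# fetch a snapshot of the default branch as a tarball, no git required
curl -sL https://github.com/OWNER/REPO/archive/HEAD.tar.gz -o /tmp/repo.tar.gz
tar -tzf /tmp/repo.tar.gz | less                              # list the files
tar -xzf /tmp/repo.tar.gz --wildcards '*/README*' -O | less   # print the README to stdout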
I've given up disabling JS for my most visited pages, but you can still disable JIT if you want to drop a million LOC of attack surface. The slow-down is barely noticeable. At least not on the sites I visit, including GitHub. I still use NoScript, of course.
GitHub was bought by Microsoft, now it's turned into a platform for surveillance and other crapware. Time for all privacy-respecting developers to find alternative means of hosting code.
They changed something recently; now I'm no longer able to browse GitHub's web-based file viewer, for either individual files or directories, using my SeaMonkey 2.53.18.1[0], released just 9 days ago.
Sure, it might be missing a few JavaScript features compared to mainstream browsers like Firefox or Chrome, but not that many.
Yep. Can't read readmes from the emacs eww browser anymore or browse individual code listings. This is ridiculous. I get it for their code editor, but it should at least SHOW the code and line numbers and allow text search via form submission like it used to from text-only browsers.
it was sad to see another website being ruined by javascript
hopefully once htmx becomes a thing and the html standard absorbs those ideas, we can go back to having useful websites without javascript garbage on top
I do have JS turned on, but it won't work for me either. The reason is that I use Firefox 78, the last version supported on my OS, which can't be upgraded on this computer.
For sure, as of today GitHub is a web app, but it's questionable whether it should force you to use it as one. In essence, GitHub is just a remote server for a .git folder. That does not need any JS.
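To make that concrete, a tiny sketch of talking to GitHub purely as a git remote, no browser or JS involved:

# ask the remote for the tip commit of the default branch
git ls-remote https://github.com/torvalds/linux.git HEAD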
I see that GitHub is now trying to become, at the same time, a social medium for hacker-friendly people and some sort of super-CI server with automation. For sure these are good features. However, I also see why people are sad that the old (and defining) functionality stopped working without JS: looking at commits, browsing the sources, reading and submitting issues/PRs, and code review. It saddens me a bit to so often see working websites getting more and more complex, adding more and more features, probably to remain "fancy" and competitive.
While I can understand the annoyance at the change, if your perspective is that GitHub is just a server for a .git directory, then why even bother accessing it via a web browser in the first place? The git CLI seems like it should be more than sufficient in that case, and preferable, since it's much less resource-intensive and designed for the purpose.
For sure there is a discussion to be had about whether the addons that GitHub provides over and above being a server for a .git directory need JavaScript, but that feels like a different discussion than "it's just a server for .git directories".
When working in a team, having a lightweight shared git remote for sources is super handy. For instance Gerrit, or what Google uses (Critique, I think it's called). For sure these services have a little bit of JS for things such as code reviews, but they're usable without any.
SourceHut is modelled on the notion that people can contribute to your code even if they don’t have a SourceHut account. Wouldn’t that make it more social?
Because it used to work without JS, but more importantly because it used to work better without JS – stuff loaded more quickly and the overall experience felt snappier. Not sure if it is front page worthy but it is still sad.
Probably because they also significantly regressed page load times when they decided to asynchronously load the most important part of the page (i.e. the source file or Readme you are navigating to).
Lean, mean javascript only bothers hardliners. But everybody suffers a user-experience hit from async-javascript-for-the-sake-of-it.
This is disingenuous to the point of being insulting.
Their modus operandi is to be the social-development hub of as many projects as they can, and they are successful.
Their platform has lock-in and network effects that you must acknowledge: it is often the only bug tracker available, the only mechanism to submit patches, and the only source of truth for a given repository of code.
Unless you're going to tell me that a private company can do what it likes, in which case I will reignite efforts to put FOSS on FOSS platforms. If Github wants to hold its market position (made precarious by its owners and the fact that its backend is closed-source), then it will have to keep goodwill.