This one can be abandoned to spyware and apps. Whatever. That's fine. But I don't want safe hypertext browsing commingled with the same program that can and constantly does load spyware and other abusive junk.
Unfortunately, what you can't do is force everybody else to participate. HTML3 still works great in modern browsers, but the people making the sites want to use modern features.
Making yourself out to be a victim doesn't make it so.
What, that people have too much freedom to express themselves?
When did hacker culture become so... fascist?
> HTML3 still works great in modern browsers, but the people making the sites want to use modern features.
I'm not disagreeing with your point, but I disagree with words like "modern" and "older".
The most "modern" browser that I'm aware of (not counting the myriad Webkit/Gecko skins) is EWW, which was first released in 2013. It's now a built-in feature of Emacs, a reasonably widely used piece of software. I think this fits the parent's desire for a readable hypertext renderer.
In contrast, Google Chrome dates from 2008. If we include that in our definition of "modern" then I think we should include Netsurf (first stable release in 2007), which is a full graphical browser but might still be acceptable to the parent. If we call Firefox "modern" then I'd say it includes everything back to Netscape Navigator.
If we're instead talking about maintained and up-to-date browsers then w3m, Links and lynx have all had releases in 2019. Again, those are readable hypertext systems like the parent describes.
We might consider "modern" to mean implementing certain Web standards; yet that seems precisely against the nature of this discussion. Plus, many of those described above have support for e.g. HTML5, certain ECMAScript standards, etc.
In contrast, "older" browsers would presumably include bloat like Internet Explorer's support for ActiveX and VBScript; and the IRC client + WYSIWYG editor + mail reader + news reader from Netscape/Mozilla/SeaMonkey. Precisely against what the parent asked for.
My browser could have maybe 20 page templates built in, and sites could specify which templates work best for their content (article, video, forum, search results, recipe, how-to).
Edit: and don't forget to add the broken website to your "remove from search results" addon, so you never see it again in search results.
Sure they offer an in person alternative, but due to the prevalence of online usage they've been able to dramatically cut the number of physical agencies.
Try booking a flight and hotel without JS. It's just not possible; you can't simply "use another airline", you have to use an entirely different booking medium.
Content-Security-Policy: sandbox allow-forms allow-top-navigation;
I don't want some restricted version of web sites that work OK in current web browsers, provided you're very careful not to leave those sites. I don't want constant fiddling with uMatrix or custom privacy settings that break 90% of the Web. I want a safe hypertext browsing system back, in its own program that will kick open a "real" web browser (with warnings and flashing lights) if you try to link from it to the spyware web, like if you try to open a PDF. As long as I'm in it, I shouldn't have to worry about any links I open trying to spy on me. I shouldn't have to track down which tab in this system is making my nigh-supercomputer laptop's UI stutter (hi, Asana, Trello, Slack, Jira). None of them should break because of my privacy settings—privacy, old-school web style, should be the only way it operates.
I want a safe-only Web that is for browsing hypertext documents. The current web can keep the spying, security vulnerabilities, resource-eating, and apps. I know I'd still have to use it, and that's fine, I just don't want safe, secure, resource friendly browsing to happen alongside this shitshow, because then it's not safe and secure.
How do you propose the no-spying part? With laws passed by country? Nothing in your setup prevents spying, and spying is extremely lucrative in a capitalistic society, so people will still find a way to spy.
You are putting all your trust in the server. But those servers are where the spying would move to if it needed to. You cannot create a technological system that prevents spying, since you have not removed the incentive to spy.
Make it illegal to do targeted marketing and then you will have a dramatic reduction in spying. You'll also have a short-term dramatic drop in economic efficiency, something the HN crowd is likely to decry as equal to the end of the world. But I think it's much more likely that we build a sustainable society with good, long-term-growth economics if we put more limits on capitalism.
Edit: Facebook doesn't care if you're reading those documents on some simple text-web. They'll find out anyway, either via your absence from the 'regular web' or any other clue they can find. The spying won't stop just because you're using some primitive technology.
I agree we also need to outlaw mass collection of info about people, or at least put in place such massive penalties for leaks that no sane company without a really compelling reason would dare store such things. Hopefully that'd end the damn credit card company and loyalty card and credit system spying, too, or at least force them to shape the eff up. I'd just also like a hypertext network that cannot constantly spy on me or eat an entire core because "programmer productivity". That's all.
That said, I'd like my browser to be much more on my side -- by default disclose very little about my setup, by default partition cookies according to the headline site I visited, and delete them soon. I'd like JS completely off on most sites I visit, and more convenient ways to turn on as much of it as needed on some sites.
Just don't allow those things. Those aren't hypertext browsing, they're app-things. Wikipedia or Timmy's Ninja Turtle Fan Club or even banking and ordinary ecommerce sites don't really need those things, and having those things on the same platform as all these "apps" get them all mixed up in a spying and vulnerability nightmare for no good reason. I think hypertext browsing is distinct enough from what the current web is, and that it's still a useful enough idea, that it'd be nice to split that off from... whatever this has become.
My current plan is to make lots of Fluid-app browsers, for certain sites, and then leave JS off most of the time. But there are enough sites which don't work at all, yet I want to visit, to make this a bit of a pain.
I know for instance that UK govt. websites are designed for such use first, which just means sticking close to the original spirit of hypertext. Bank websites I have little hope of... and anyway I trust my bank quite a lot! But commerce would be great.
To me, the "heaviness" of the web is making it much less enjoyable than it could be.
As a user, I regularly read the HN discussion of an article rather than the article itself. Simply because HN is so lean and other websites are usually bloated. So bloated, that I prefer to infer the content from the HN discussion just so I can avoid visiting the website.
I run multiple very lightweight websites myself:
I categorically refuse to add heavy assets or frontend libraries.
Many of my developer friends insist that "Normal users don't care!". I am not so sure. I wonder how big the target audience for "Lightweight websites" really is. Would love to see a poll that asks people "How much do you like the typical website? What are the most common reasons that make you dislike a website?".
I am with you. I've spoken to a lot of non-technical users in my life. Their typical response on the matter is "it is how it is, I cannot change it, of course it's annoying but I have no alternative" -- this is about ads, heavyweight sites and possible tracking/spying.
So IMO most non-technical people accept it as a fact of life but would rejoice if the Web became lightweight again.
Yup. There's also a subset of non-tech people who don't have a basic mental model for how things work; from those, I hear "my computer is slow, probably has viruses, could you help?". I come, and the computer is often fine - it's popular websites that just went through another bloat-up cycle, and the computer struggles with them. Installing uBlock can make such a computer live for another year, but ultimately people end up discarding perfectly good hardware just because the web keeps accruing bloat.
What most non-technical people want is for the web they're using now to work faster. The problem is, the people who want to make the web lightweight again seem to want to do so by eliminating the modern web entirely. No one seems to be asking how we could make the web that exists, and that billions of people actually use, better.
Not entirely, though it may seem that way, because many proponents of the lightweight web - like myself - are just tired of the modern one, and know from experience how to make a site many times more functional while using a fraction of the resources.
> No one seems to be asking how we could make the web that exists, and that billions of people actually use, better
Circling back to 'pdimitar point upthread: people use what they're given. They have no love for it, they just don't have any other choice. This way people adjust to the growing bloat of the web, and the same way they could adjust back to the leaner web.
And yes, many are asking how to make the web that exists better. Not many want to hear the answers, because they begin with "stop doing the crap you're doing right now". The answers are, stop doing SPAs for pages that fit the document model (which is 90%+ of them). Stop putting useless gimmicks on the frontend - like that Cleave.js thing from yesterday, for example. Stop pulling in all the trackers and ad network scripts. Design your page function-first, form-second.
It's a hard sell, because it goes against several strong social phenomena, like CV-oriented and fashion-driven development, very lucrative surveillance capitalism business models, vendor lock-in and SaaS-ification of everything. That's why at least part of the solution will have to be technological - just asking people to behave isn't going to cut it.
 - https://news.ycombinator.com/item?id=19233637
Forrester Research did some research into why people return to web sites. There were 4 main reasons: good content (thank goodness this was number one!): 75%, usability: 66%, download speed: 58%, frequency of updating: 54%. All other reasons were noise compared with these 4.
It's open source, so feel free to make some PRs if you need a place to add resources or something to go with the subreddit.
Otherwise I will take a look at them later on myself
There is no section for individual blogs.
Not sure what your $2 would be doing other than adding an actor between me and the page I wish to read. For privacy purposes, I'll be keeping well clear of such "solutions".
Old reddit is smooth as silk. I honestly don't understand why they changed it.
Netlify is also built with React and its homepage loads amazingly fast (<1MB in size).
Another example could be Facebook, which falls into the "complex apps" category and at the same time is very well known for sluggish performance.
I have a minimal chat application that relies heavily on websockets and JS; that said, it strives to be data minimal. The heaviest part is the initial load of emoji images, which get cached. Other than that there are no third party scripts or data transfer.
I get that JS can suck and be abused, but I don't really understand the complete allergy to it. A websocket implementation is going to have a lot less overhead than page loads for everything.
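A back-of-envelope comparison of the per-message overhead (the request line and headers below are a made-up minimal example; real browsers send more headers, which only widens the gap):

```javascript
// Rough sketch: bytes of protocol overhead per message, HTTP polling
// vs. a WebSocket frame.
const httpRequest =
  "GET /chat/poll HTTP/1.1\r\n" +
  "Host: example.com\r\n" +
  "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Firefox/70.0\r\n" +
  "Accept: text/html\r\n" +
  "Connection: keep-alive\r\n" +
  "\r\n";
const httpOverhead = Buffer.byteLength(httpRequest); // bytes before any payload

// RFC 6455: a frame with a payload under 126 bytes needs a 2-byte header,
// plus a 4-byte masking key on client-to-server frames.
const wsOverhead = 2 + 4;

console.log(httpOverhead, wsOverhead);
```

So each polled request costs well over a hundred bytes of headers before any content, while a small websocket frame costs six.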
Consider HN as an example of a document-oriented social media site actually implemented as documents.
Every now and then there's an odd top story on HN which makes me wonder who actually reads this site. I guess I'm not hip enough!
I'd like to see a browser come out that does this. Reading mode by default, along with some p2p network of userstyles and templates. Opera sort of did this once, where the community maintained .js patches to make websites work better in the browser. It would be nice to have a set of standard browser templates, and then be able to have an on/off button for "allow most popular community patches to load automatically," those patches basically being css tweaks for sites that don't quite fit a template.
The web has HTML5 elements and the CSS grid layout engine, which can be made fully responsive with maintainable CSS variables to look excellent in evergreen browsers. However, nobody codes up pages this way, as there is so much technical debt in how web development teams work, and so many hacks that can't be dispensed with overnight.
Recently I tried to make a CMS form look pretty. The forms needed to have the simplest markup, e.g. label followed by input, with those inputs having HTML5 niceties such as placeholders, and the required fields styled with simple CSS keyed off the 'required' attribute on the input boxes. However, the templates had two divs and spans around the labels and two more divs around the inputs, with a container div holding it all together, everything decorated with untold class tags. To make this look good and responsive in CSS grid I needed to get rid of all of it apart from the inputs and labels. If I didn't have confidence in what I was doing I would have just added yet more CSS and yet more complexity.
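For illustration, a minimal sketch of the kind of markup I mean (field names and colors are made up; the point is label/input pairs with no wrapper divs, laid out with CSS grid, and required fields styled via the :required pseudo-class rather than extra classes):

```html
<!-- Sketch: just labels and inputs, no wrapper divs or class soup. -->
<form class="contact">
  <label for="name">Name</label>
  <input id="name" type="text" placeholder="Jane Doe" required>
  <label for="email">Email</label>
  <input id="email" type="email" placeholder="jane@example.com">
</form>
<style>
  .contact {
    display: grid;
    grid-template-columns: max-content 1fr; /* label column, input column */
    gap: 0.5rem 1rem;
  }
  .contact input:required {
    border: 1px solid firebrick; /* flag required fields, no extra classes */
  }
</style>
```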
Anyone can make stuff complex, making stuff simple and concise is an entirely different skill. None of the examples on this list have fully grasped what a modern web page should be like. It is as if nobody has taken the time to learn how to write content with the correct elements and then style it up for the evergreen browsers people actually use with modern CSS.
Given there is no guide out there I think I had better write one.
Before and after: https://imgur.com/a/EJympO9
So, could a better separation of content from software improve the situation?
It turns out that lite is basically a modifier for product names. Early in the 20th century it was used for products like "Auto-Lite" and "Adjusto-Lite", apparently as a diminutive form of "light". But then later it was used to connote fewer calories (as in "Miller Lite", "Kikkoman Lite") whereas other products still use the full form ("Yoplait Light"). Lite's other definition is "diminished or lacking in substance or seriousness ... innocuous or unthreatening", implying an almost childish, immature, or unthreatening form.
In this website's case, the list is specifically referring to the "weight" or "heft" of the websites, relative to the content which isn't solely what the user is looking for. It also is not referring to a product name. That falls squarely under the definition for light, and not lite.
But the name of the repo, awesome-lite-websites, can be seen as a product name. So the word lite fits for the name, but not in the description. (indeed, in the README the word "lightweight" is correctly used)
 https://grammarist.com/words/lite/  https://blog.apastyle.org/apastyle/2012/10/lite-or-light-whi...  https://www.etymonline.com/word/lite