Maybe we need a content oriented alternative to the web browser.
Maybe a modern Gopher.
Display preferences are handled by the client. Sure, support video and audio. Support tables. But keep it minimal. Put all the scripting on the server side, as it should be. Lock down the specification - do everything to stop it becoming web browser 2.0.
Then we wouldn't need junk like AMP.
Maybe it would take off well in some niche to start with.
The web is really well suited to content heavy sites, it's just being used to do all the things humans like to do (spam, ads, control over access), as gopher would be if it were popular. I think your problem is more with the behaviour than the medium, and if you restrict the medium people would just look elsewhere.
> creators of value want to be paid for it.
The WWW has been a proof of concept of how much people will do _without_ direct monetary compensation. If you're thinking who's going to build Kayak on a glorified Gopher though, maybe you should start by asking where audiences will go when they find something that provides a better experience than Chrome.
1. They don't care about the money and don't want ads on the site.
2. They're someone whose content is the advertising. A database consultant might write an article on using indices to speed up low-selectivity queries. He doesn't need to plaster his site with ads, but his articles do serve a secondary purpose of advertising himself. Or something like Angie's List, where people literally visit the site with the goal of being advertised to.
3. They don't mind unintrusive ads, and these help provide money and motivation to work more on the site. Google ads would be acceptable here.
4. In addition to their site's primary content, they also write sponsored content for companies that give them enough money.
5. They only care about extracting as much value from their site as possible. They might hire ghostwriters on Fiverr to write cheap articles, apply SEO techniques to get them to rank higher than they should, and then plaster noisy ads and popups everywhere. They might buy cheap traffic on low-competition keywords and redirect it to sites with more expensive ads to arbitrage the traffic. They might have a form to collect your name, email address, and phone number, and then sell your information to a mortgage reselling company. They might have loud ads that play music on page load, that automatically play video, that pretend to be a Windows error dialog, that look like download buttons, etc. Some of these are arguably the ad networks' fault, but the maintainer of the website is ultimately responsible for anything that appears on his website. They might release news articles with clickbaity headlines just to drive traffic to their ads.
I think most people take offense to #5. A minority also take offense to #3, since these track location and can form profiles of you between pages. I don't think anybody really minds the #2 or #1 people.
If you banned ads entirely, you'd still be able to monetize using method #2.
This is where I think tracking went wrong. Advertisers were so happy they could track users that they (kind of) forgot to track the content. I think basing ads on the page content is, in the end, way safer and more beneficial to everybody.
In other words, creating things is not enough.
There's rather more discussion of this in the economic literature than might be expected, both early (pre-Smith and 19th century) and contemporary.
Trying to make all three of these agree is a bit of a problem. Economic orthodoxy variously tries to do this / pretends they do (and mind, "agree" != "equal").
Content has cost. There's time and effort necessary to create it.
Content has value. It can improve, even drastically change, the lives of those it reaches. That value may be negative as well -- disinformation or distraction, for example.
What content doesn't have much of is exchange value -- it's difficult to draw walls, fences, boxes, or whatever else it is that prevents free access around content, such that people must pay to get it. And doing that ... has its own set of problems (generally: under-provisioning of content, particularly to the economically disadvantaged).
Advertising essentially hijacks one content stream (that the audience seeks) with another (that the producer wants distributed). Advertising has value to the author, not audience, and so the author is willing to pay for it to be disseminated. This ... produces numerous perverse incentives. Crappy Web pages among them.
But claiming that no market value (exchange value / price) means no use value, or no cost basis, is simply false. Common, but false.
"Therefore, in the course of the economic transactions (buying and selling) that constitute market exchange, people ascribe subjective values to the commodities (goods and services), which the buyers and the sellers then perceive as objective values, the market-exchange prices that people will pay for the commodities."
It's not a personal point of view, but it describes the reality for most people well.
Give me HTML. Let my browser render it. If I need some fancy-sauce like websockets, then I will selectively enable them.
Every time I open the uBlock panel, I see even more new domains pop up that aren't on block lists yet. One big UK site had over 50 external scripts. Worst is when a site's main layout JS relies on a function in some third-party ad's JS (trackThisGuy()) and completely fails when that code isn't present -- block the ad provider, and the site breaks.
That said, it doesn't prevent ads or tracking scripts. It's just HTML stuffed in XML.
Here's a Rebol one liner that opens a GUI, reads web page and sends it as email:
view layout [u: field "email@example.com" h: field "http://" btn "Send" [send to-email u/text read to-url h/text alert "Sent"]]
Most software systems have become needlessly complex. We rebel against that complexity, fighting it with the most powerful tool available, language itself.
which is a descendant of the original Readability.js published by Arc90, before they decided to turn it into a proprietary Instapaper clone.
It extracts what looks like article text from the page markup. DOM elements included for a printer-friendly view could possibly be helping, but it doesn't target that directly.
I suspect it's looking at the page before it's fully rendered and determining that it doesn't meet some criterion for reader mode.
The separation of structure and formatting is not practical for serious media of any kind. You can't separate form and content. Nor would most people want to, because (visual and other) complexity is an inevitable and rewarding part of human experience...
Your blog post may be beautifully styled, but that makes no difference if I can't even load it, or the fonts are so thin I have to squint to try to read, or the glaring white background makes me see floaters. A blind person doesn't care about the whizz-bang of $framework if your JS-only site breaks their screen reader.
For other use-cases... that's why we have HTML and stylesheets. The problem has been solved, mostly sort-of.
As already mentioned, this is demonstrably false. I'm choosing to give you the benefit of the doubt and assume you don't understand how a great amount of serious media is served nowadays, since the alternative means you are attempting to score internet points with a second-level contrarian post. But I have to wonder where you've been.
The separation of structure and formatting is not practical for serious advertising platforms of any kind.
They may not be on the device you chose to support.
Kept the page size below 1MB, server time around 0.3s (ProcessWire), onload below 3s.
And too many Kindle books...
I haven't ever paid for any blog-type content, but I probably would if someone marketed it right.
With advertising, I don't see why everyone's assuming it would be impossible on a plain text medium.
I listen to podcasts that have sponsors. They just talk about their sponsors in the middle of the show. They try to make it sincere ("I use this product myself" etc). It seems to me like this is a very high quality type of advertising, compared to blockable little ad server banners.
The problem with this is that the content producers have incentives to add content they get paid for, and people who pay to have others shove content in front of readers have an incentive to make that content as eye-catching as possible.
So first it's just text. Then it's bold and underline, both of which have legitimate uses in real content but both of which are also obviously useful for ads. Then it's blinking, and to hell with light-sensitive epileptics. Then it's all over.
The alternative is the old radio trick of weaving all of the content together as tightly as possible, so it all gets consumed in one big gulp. That has implications for long-term credibility, but that's a long-term problem beyond the time horizon of the bills your financial people just got today, and then there's payroll...
(Tables first arrived in Mosaic 2.0 Alpha 8 in December 1994, according to http://www.barrypearson.co.uk/articles/layout_tables/history...)
A post from Cameron Kaiser explaining some features of gopher:
Something lightweight like gopher and markdown is enough for a lot of people.
"by divorcing interface from information, Gopher sites stand and shine on the strength of their content and not the glitz of their bling."
Forcing content creators to focus on the strength of their content is pretty much a non-starter for most modern commercial enterprises. Which is why it's amazingly useful, but likely futile.
However - those who enjoyed text + links were left behind. I really just wanted text with some links. I'm thinking about setting up my own websites as gopher sites; I just don't know the best way to proxy them back to HTTP.
I think pocket, instapaper, readability etc were all essentially peeks into a style of web we could have had where content presentation was something a user has decent control over.
Someone wrote up their experience using Lynx on the web around 2012.  Conclusion: "Not all the sites are usable with Lynx, but many of them offer at least basic functionality using the text-only web browser."
As long as there's HTML output, a more humane web browser could allow better customization of the information; it could infer the usual human organizational methods (lists, metrics, groupings, buttons, etc.) and present them with your favorite background color, font size, line length, image size, videos, etc., based on the hierarchical structure. The other camp is web apps, which is a different use case.
I'm not sure why this can't be done with HTTP? I have a Neocities page which is nothing but text and links, with just enough formatting to be slightly better than the defaults.
Examples (NSFW language, as obvious from the URLs, but otherwise SFW):
(Note: Above two examples aren't my sites.)
"On the Web, even if such a group of confederated webmasters existed, it requires their active and willful participation to maintain such a hierarchical style and the seamlessness of that joint interface breaks down abruptly as soon as one leaves for another page. Within Gopherspace, all Gophers work the same way and all Gophers organize themselves around similar menus and interface conceits. It is not only easy and fast to create gopher content in this structured and organized way, it is mandatory by its nature. Resulting from this mandate is the ability for users to navigate every Gopher installation in the same way they navigated the one they came from, and the next one they will go to."
Simply put, it can't reasonably be done with HTTP/HTML because it requires active management (which as I pointed out is basically a non-starter), instead of being 'baked in' like it is with Gopher.
Since the announcement of WebAssembly, I've had a scenario in mind. As you say, the VM can become a complete hypervisor that can run a full OS. At that point, the browser will become useless and we could get rid of all the complexity of CSS and HTML and close that episode of the internet.
Another point: for AI and robots, beautiful design is not a priority.
And worry about it deleting your root file system (https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/commi...).
I agree with your points, but we should remember why the web got so popular: a safe, seamless software distribution mechanism. Recent OS's have approached this. (https://xkcd.com/1367/)
It can also be a tool for programmers or geeks (like markdown). It's more a niche.
I've also been thinking about NNTP. Usenet never went away, entirely, but maybe it will have a little renaissance as well. Old-school distributed, independent networking. And again, where content rules over graphic design.
P.S. That second part is NOT a poke at MetaFilter, which I rather like -- although, I guess I've been away from it for a while, now.
But, I'm interested in communities that are orthogonal to any one particular platform, or perhaps I should say, host. Among other things.
It sorta came back as Reddit. I was a huge Usenet junkie from the first time I saw it, but it eventually turned into 99% binaries and junk. All the interesting discussions in the areas I was involved in went to sites like HN, Metafilter, Reddit, Web-based forums, etc.
It seems like we are going through a period where everything that used to be its own protocol is now recreated using HTTP and JSON.
Hell, it may well be that the web browser has become the new X server.
I've never used Slack but their front page alone is enough for me to turn away. The first two words I see are "Product" and "Pricing". Nothing on the page hints at an open protocol so I guess it is proprietary.
A really interesting (and probably really hard) project would be a proper decentralised Usenet replacement, with proper identity and reputation control. It might actually be something that blockchains could be good at, if you could avoid the requirement to replicate the entire database on every machine.
It might be feasible to have a single blockchain for identity and reputation, and then have each 'newsgroup' have its own, referring to it; that way I should be able to reduce the disk and bandwidth footprint to something reasonable.
Coming up with an appropriate costing system so that spammers were priced out without also requiring real money to actually use the thing would be the trickiest part, I think.
I've been trying to come up with some metrics for the size of "traditional" Usenet -- Big 8 hierarchy, say, early 1990s. Gene "spaf" Spafford thought that 50k - 500k users was probably in the right ballpark.
Note that to gain access you needed to be student or faculty at a research university, or work for one of a handful of tech companies (and almost certainly within their engineering divisions), or for a government agency with access. Very limited independent options existed.
The Usenet Cabal who managed things, such as they were managed, was about the same size as Reddit's technical staff, if that.
And the system proved highly vulnerable not only to spam and crap, but to users not acculturated into the system itself, and behavior protocols. The Eternal September was a thing for a reason.
These days, I've found a free provider for my text group access, of which there are several.
Here's the actual search query:
Ah well. I should turn my own site into gopher!
I really hope Alon will run again at some point so I can jump back into action.
http interface: http://themade.org:70/ (there's a proper gopher server as well, of course)
Gopherspace ain't dead!
: a printed book!
They may not quite suit the modern world very well, but I like protocols that I can use via telnet/nc
nc gopher.metafilter.com 70
Below is an 'ascii screenshot'
__ __ _ _____ _ _ _
| \/ | ___| |_ __ _| ___(_) | |_ ___ _ __
| |\/| |/ _ \ __/ _` | |_ | | | __/ _ \ '__|
| | | | __/ || (_| | _| | | | || __/ |
|_| |_|\___|\__\__,_|_| |_|_|\__\___|_|
sharing and discussing neat stuff on the web
(DIR) Ask MetaFilter
asking questions and getting answers
pop culture discussion -- TV, movies, podcast, books
creative work by MetaFilter community members
original musical and audio recordings by MeFites
employment opportunities and member availabilities
organizing meetups and community events in real life
where the community talks about MetaFilter itself
frequently asked questions
Commands: Use arrow keys to move, '?' for help, 'q' to quit, '<-' to go back.
Arrow keys: Up and Down to move. Right to follow a link; Left to go back.
H)elp O)ptions P)rint G)o M)ain screen Q)uit /=search [delete]=history list
Not that I believe it'll really be useful for anything. :b
My real plan is to write a web-to-gopher gateway that actually looks nice (because the existing ones look like they're from 1995) and then use it to host my homepage / blog / CV before my next job search (in London!) for the retro-hilarity factor.
On another note, I think I remember web browsers used to support gopher natively. I wonder when it was removed from the last major browser.
I'm submitting a bug report right now.
Now, if only there was a way to browse Reddit and HN using 'tin' or 'nn'.
The same kind of thinking (Reddit->FUSE->Files) could be applied to a Reddit -> Gopher proxy. To reduce the cost of the project, I'd make it self-host the Gopher server on the user's PC and make the API calls from there, rather than setting up gopher-hackernews.xyz:70
My gut feeling is that the slowest part of this would be the API call to HN or Reddit.
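For what it's worth, here's a rough Python sketch of that local-proxy idea, assuming Reddit's public .json listing endpoint (the subreddit, the User-Agent string, and the localhost placeholder host are mine, not anything from the thread):

    import json
    import urllib.request

    def subreddit_to_gopher_menu(subreddit, limit=10):
        req = urllib.request.Request(
            "https://www.reddit.com/r/%s/.json?limit=%d" % (subreddit, limit),
            headers={"User-Agent": "gopher-proxy-sketch/0.1"},  # Reddit rejects the default UA
        )
        with urllib.request.urlopen(req) as resp:
            listing = json.load(resp)
        lines = []
        for child in listing["data"]["children"]:
            post = child["data"]
            # Gopher menu line: <type><display>\t<selector>\t<host>\t<port>
            # "h" items with a "URL:" selector are the usual way to point at web links.
            lines.append("h%s\tURL:%s\tlocalhost\t70" % (post["title"], post["url"]))
        lines.append(".")  # menu terminator
        return "\r\n".join(lines)

    print(subreddit_to_gopher_menu("programming"))

A real version would sit behind a tiny Gopher listener and cache responses, since the HTTP round trip to the API is the slow part, as noted above.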
I have a repository somewhere with a thing that does that, through Reddit's .json URLs, mirroring subreddits into Git repositories full of JSON files, and then serving a minimal local web interface with real time updates. I'll make a note to clean it up and publish it...
I've been playing with FreeBSD's ports system this afternoon. There's something to be said for using the filesystem instead of a database, API calls or whatever.
HN used to not be mobile friendly at all, but recently they added the viewport meta tags and now it works very well on iOS. I now prefer vanilla HN over any other HN client on iOS.
Oh, but it also differentiates between text and binary files. Binary files are sent as-is, text files are sent line by line with a \r\n terminator. .\r\n indicates the end of file. Leading .s must therefore be doubled. And then directories are sent in another format (type/name, path, hostname, port -- tab separated).
Which means the client has to know beforehand whether it's a text file, a binary file, or a directory. How does it know that? Well, there's a 1-character code at the start of the filename which gives the filetype. 0 is a text file, 1 is a directory, 9 is a binary file. There are also other filetypes which will make you reminisce about the 80s (uuencoded file, BinHexed file, tn3270 session, etc.).
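For illustration, a minimal Python sketch of fetching a menu that way: raw socket, selector terminated by \r\n, tab-separated menu lines, lone "." as the terminator (the MetaFilter host is just borrowed from the example elsewhere in the thread, and this isn't a full client):

    import socket

    def fetch_gopher_menu(host, selector="", port=70):
        # A Gopher request is just the selector followed by CRLF.
        with socket.create_connection((host, port)) as sock:
            sock.sendall(selector.encode("ascii") + b"\r\n")
            data = b""
            while chunk := sock.recv(4096):
                data += chunk
        items = []
        for line in data.decode("latin-1").splitlines():
            if line == ".":        # a lone dot ends a text/menu response
                break
            if not line:
                continue
            # Menu line: <type char><display>\t<selector>\t<host>\t<port>
            item_type, rest = line[0], line[1:]
            fields = rest.split("\t")
            if len(fields) >= 4:
                items.append((item_type, fields[0], fields[1], fields[2], fields[3]))
        return items

    for item in fetch_gopher_menu("gopher.metafilter.com"):
        print(item)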
The whole text vs. binary thing reminds me of FTP, where most clients default to requesting everything as binary transfers.
Edit: MIME is no cure-all; I wonder how many times I have seen Firefox get royally confused because the MIME type is wrong.
1. Your URL looks like gopher://dipstick.io/0file or gopher://watchingpaintdry.museum/9folder/file. The filetype is part of the URL, but only the client is aware of it -- it's not passed to the server (a small sketch of this follows below).
2. When using a URL, the client has one idea of the file type but it does not necessarily match what the server thinks the file type is.
3. Error messages. HTTP has a status code to indicate the file doesn't exist (or was moved, etc). Gopher can send the error message back as the payload but... is that in text format or binary format? The server has no idea what format the client expects.
Always sending binary data and using out-of-band status codes and file type just keeps life simpler.
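To make point 1 concrete, here's a small Python sketch of how a client pulls the type character out of a gopher URL before deciding how to treat the payload (the hosts are the joke examples above; the parsing follows the usual gopher URL layout, and nothing here is sent to the server except the selector):

    from urllib.parse import urlparse

    def split_gopher_url(url):
        parsed = urlparse(url)
        path = parsed.path or "/1"   # an empty path means "give me the root menu"
        item_type = path[1] if len(path) > 1 else "1"
        selector = path[2:]          # only this part ever reaches the server
        return parsed.hostname, parsed.port or 70, item_type, selector

    print(split_gopher_url("gopher://dipstick.io/0file"))
    # ('dipstick.io', 70, '0', 'file')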
What is the point of serving that on gopher? :)
Flashy ads and social network drop-ins need to go away. A good content author can figure out how to work sponsorships and other revenue generation into their work. I don't mind reading through someone's text-based plug for a product; it creates a stronger focus on writing quality content rather than link-baiting.