It feels like the main feature of this project is incompatibility with HTTP. The protocol is new and primitive, it's easy to write tools around it, write texts about it, etc. A small community forms, and you're part of it. You can advertise it to others, or rant against the mainstream, or whatever. But now what? You lose interest and abandon it, and eventually it dies.
We already have a transport protocol that "everybody" agrees on: HTTP. We can define a specification for a subset of it, and a subset of HTML/browser features we want, create tools around them, and form a small community.
The advantage here is that the community is not an island. Users of Big Browser can still read your latest rants. They can even learn about this project and, while perhaps not using Mom-and-Pop browser, may support it in their sites, since it wouldn't require another server; mostly just having their site work without JavaScript would be a huge step forward. Granted, you won't have Google filtering based on accessibility, but the community can create a search engine that does. Now what? You just get on with your life, producing and consuming AccessibleWeb content without the gratuitous incompatibility.
Incompatibility with HTTP is a feature, and a good one at that. It's drawing a line in the sand: this is bad, and we don't want to be a part of it. We don't want a subset of HTTP. We don't want a subset of browser features. Yes, we are fully aware that it would not work with "modern" "browsers". We want something which isn't rotten from the start. We don't want to click a link and end up back on another maggot infested pile of JavaScript crap - we want a network where everyone who's playing is playing on a sane baseline.
The whole point of this project is to shrug off legacy. Yes, that means reinventing a few wheels.
I like tiny communities as much as anyone, but I personally see no need for this to exist. Firefox might consume 2GB, but it is also running hundreds of tabs. 8GB of RAM was more than enough; on Linux I rarely went over 4.5GB. The worst offenders are JVM-based applications; notably, modded Minecraft can consume absurd amounts of RAM.
I'm more terrified of the average electron app or java app than I am scared of a single browser tab.
I wonder if everyone in this thread is willfully missing the point, or if we're just so deep into the brainwashing of the modern web that we can no longer see the surface.
I’m just not convinced that HTML and especially HTTP are “rotten to the core”. You can make http://motherfuckingwebsite.com and the markup is all very clean.
the markup might be clean but it's pretty ugly and network-inefficient, because it's full of boilerplate that is only required because HTML also supports 1000 times more nonsense than this site uses. you have to type <p></p> to render every paragraph of that text, <a href=""></a> to just render a link, and <h1></h1> for headings. if all you want to do in your content is those things, that markup is ridiculous.
> meaning the net effect is just to make browsers more complicated.
Maybe it would, slightly. But would it make the browser any more complicated than the unessential features that Firefox has in its default build right now?
Pocket integration, FF Sync, some screenshot function?
The web is so broken that my Firefox even has some Protection Dashboard thing (about:protections). Not that I've ever used or noticed it before.
I totally agree with that. It is not like some web police force is holding people at gunpoint to add javascript and angular or react. People use all those features because they want to and they find it useful.
While web sites can avoid JavaScript all right, browsing most sites does require JavaScript. As a NoScript user, I'm keenly aware of how many web sites simply do not work without JavaScript.
And I'm not even talking about all the third party spyware.
If you're already using uBlock Origin you can disable JS per site (or by default, like NoScript). I've found that the UI is way easier to understand compared to NoScript.
I'm not sure what you would gain from disabling things like the ability to upload multiple files, autocomplete fields and rich text editing. Raging against and disabling JavaScript is just stupid, in my view. I'm not going to cut out absolutely essential features just to satisfy some fringe group.
What if JS over gemini:// would only be able to render some things and would not have network access?
It just is a question of what your html renderer allows.
Add Lua if you think that's better!
But you will have the same problems if embedded Lua can do network stuff.
...or let's just P2P our Emacsens... the last non divisive browser... :-P
gemini doesn't at the moment have any formalized system for client-side scripting, which makes the question moot, and most of the gemini community seems pretty opposed to ever implementing such a thing, so it doesn't seem like that will ever change.
This is more intended to be a comfier gopher than a less-shitty http. Most of its early adherents are ex-phloggers attracted to its features relative to gopher, not ex-webbers attracted to its lack of client-side scripting or whatever.
The fact that this has attracted a fair number of gopher users (who have always very strongly opposed http/s) would seem proof enough of its success, imo, at least within particular circles, and at least within the context of its goals. It was never intended to draw people away from the web; it was intended to be a sort of superpowered gopher.
This protocol doesn't seem to be aimed at commercial use, but the "web police" called "the manager" or "the marketing department" or whatever are the ones forcing the use of the privacy-invading tools.
Legacy and momentum play a greater role in adopting a framework than sheer choice. If I could choose, I would use frameworks in Haskell or Rust 100% of the time where I work. I don't, because there is nothing built around them there and I need to get the job done right now. I would like to be the change I want to see, but sometimes there is just not enough time.
That is a solid point. At the end of the day, programming languages and frameworks are just tools we use to build a product that has some use to someone (be that monetary value or art or whatever). At work, it is almost always better to iterate on the existing tool stack rather than try to spin up a new one. I love writing Rust, but I'd need a good reason (or at least a big project to amortize the cost over) to reach for it over the existing, very functional C++ libraries I already use for our embedded work.
The original Facebook and Google did not use much JavaScript, so I'm always skeptical when literal documents need it for anything other than ads (and even then...).
I never understood the hangup on screen refreshes - I mean, why is it important that the screen doesn't refresh?
And if it's that important that EVERYBODY deploys JS to render client side to avoid the refreshes, why isn't this handled at the browser level in the first place (i.e. declare "ExtendDisplayTime"-something on your document and the browser should replace the screen content only AFTER the new page is completely painted).
But at the core of it, the web is a document-display system, and back-hammering and shoe-horning apps that masquerade as documents will always be painful.
Maybe, instead of the spec allowing/banning JavaScript, it would only allow usage of a curated list of "apps" to be part of the page?
For example, I'm thinking of a Roam-Research-like content platform, without tracking and ads. It would be interesting to see shared content around that.
Actually they do.
If you do not file your taxes, which are powered by JavaScript, people with guns are going to come for you and take you hostage, in order to rob you of your "tax" money.
I wonder if the main usage of the internet isn't the web as we think about it, but largely facebook, youtube, twitter, maybe some google+wikipedia. People don't see it in terms of 'websites' anymore.
A lot of people seem to believe that, yet plenty of people have to deal with commercial sites, bank sites, school sites, etc., to say nothing of the third party sites linked to by aggregators. I think people still know what websites are.
From my POV facebook is not an app on my devices but accessed via the web, meaning HTML, HTTPS, a ton of JavaScript and whatever adapted PHP facebook uses today. What I don't get about this project is why it should need its own browser, besides the user adaptability, which you could technically do via a browser plugin rather than reinventing the wheel altogether. Coming from the BBS world, I like this idea, though I will miss the illustrations (read: pictures). I will give it a go, however.
I think that is the case, especially with smartphones and apps. The internet isn't about the "web" anymore. Even hyperlinks point to "apps".
However, the web still has its place for Google, wikis, and shopping. The three have one thing in common: they need multiple tabs to keep data and information around.
For most people email is just gmail.com or outlook.com. They don't realize that email is another protocol separate from the web. I am talking from personal experience :D
yes, I think websites are becoming anachronistic at that point.. the web will fade into ubiquitous peers for high-bandwidth data exchange. messages, multimedia, ar/vr .. the end result will be what matters.
I dunno. I see a backlash happening, people building their own blog sites again. The whole "push to your own site, then syndicate that link to social media" thing seems to be happening more.
I'm tempted to create a new GeoCities and see how that goes for non-technical folks
I don't think this will last. It's not my personal opinion or preference; I care little about the modern web, but watching people interacting with computers and browsers, and how they use whatsapp or instagram, I see no value for them[0] in the early simple web. It's like betting against television in the 90s.
[0] meaning in the average folk psychology; of course a simpler, cleaner web has value, just not to them AFAIK
Well said. The problem with the Gemini protocol and its proponents is that they are trying to push a protocol (if you can call it that) which makes no sense when what they want to achieve can be achieved much more easily using an alternate approach.
This is like pushing for stone cart wheels when spoked wheels are already available.
For instance, if the issue is with the use of cookies, all they need to implement is a mechanism for the server to not respond to cookies in any form.
If their issue is with other forms of tracking, they could implement a browser which supports a small subset of the HTTP protocol which does not allow any tracking of any kind.
The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way. It's impossible to know in advance whether what's on the other side of a https:// URL will be within the subset or outside it. It's very tedious to verify that a website claiming to use only the subset actually does, as many of the features we want to avoid are invisible (but not harmless!) to the user. It's difficult or even impossible to deactivate support for all the unwanted features in mainstream browsers, so if somebody breaks the rules you'll pay the consequences. Writing a dumbed-down web browser which gracefully ignores all the unwanted features is much harder than writing a Gemini client from scratch. Even if you did it, you'd have a very difficult time discovering the minuscule fraction of websites it could render.
I wonder what magic technology they use that makes Gemini servers more discoverable than a regular website. It certainly can't be a Gemini specific search engine because Gemini doesn't have a monopoly on search engines.
You can constrain the features used by an HTML document using CSP headers. For example setting "img-src 'none'; style-src 'none'" will disable images and CSS styling. So this comment is wrong, basically.
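For reference, a sketch of the full response header being described, with script-src added to disable JavaScript as well:

```
Content-Security-Policy: script-src 'none'; img-src 'none'; style-src 'none'
```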
This doesn't make much sense. You can clearly demarcate it by replacing the "http" in the URL with "httpsubset" or something, and by running it on a separate port.
> would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way.
That misses the fact that people don't want any of that; people just want to continue using the current infrastructure minus some functionality, i.e. relying on a limited subset of the features made available by the current infrastructure.
And that doesn't justify the effort of reinventing the wheel.
Clearly some people do want that, though, especially here. Every week or so we have a thread about how terrible the modern web is, and inevitably there's a subthread about how someone just needs to create an extremist, minimal fork of HTML with no CSS and no JS and create a new, hip web with blackjack and hookers.
I imagine the exclusivity of something like this is part of the appeal. They don't want to be part of the current infrastructure, they don't want to interact with it, or anyone on it.
Nothing is keeping it as a demarcated space other than the same conventions that you would have to follow anyway if you followed a "minimal subset" approach. You could easily write a Gemini client that handles application/javascript. In fact if this protocol got any level of popularity, I would expect that to happen extremely quickly.
> If their issue is with other forms of tracking, they could implement a browser which supports a small subset of the HTTP protocol which does not allow any tracking of any kind.
They did. That's the Gemini protocol. I recommend reading the spec to properly understand the constraints they were trying to meet.
With a text like this, you can't blame the readers for assuming incompatibility and losing interest
> Now, what does Gemini currently have to offer? The best way to find out is to head over to the official site: gemini.circumlunar.space in your Gemini browser.
The Gemini protocol throws away a huge number of advances - in effect throwing the baby out with the bath water.
The Gemini protocol could have been a subset of HTTP, and the document format could have been a subset of HTML. For instance, if you decide not to implement cookies, JavaScript etc. but retain the ability to have formatting for text, tables, images etc., it would have been sufficient.
And even in the case of images, the protocol could have mandated that images cannot be accessed cross-site. The same restrictions could have been placed on other aspects such as the style sheets.
A subset of HTTP and HTML is not an option if you want to keep existing semantics. Both are by default broken, unsafe and unusable on some systems, so you will need boilerplate that clutters your subsets unnecessarily, e.g. a crapload of headers like "Content-Security-Policy", <meta> tags ("viewport") etc.
Just create servers that consume AccessibleHTML, and then adulterate it into fancy regular HTTP/HTML for consumption by most UAs (or, optionally, leave it alone for consumption by UAs that send a `X-Rendering-Policy: AccessibleHTML` header. Either way you can still get at the "real" AccessibleHTML source by sending `Cache-Control: no-transform`.) Think of these as "AccessibleWeb to RegularWeb gateways"—except they'd be deployed as reverse-proxies, so RegularWeb users wouldn't have to know they were there.
This is similar to the driving motivation behind RSS: it was supposed to be something for simple static sites to put up, such that gateways could then poll it, before turning around and doing something more technologically-complex to actually deliver the events, like using WebHooks, or sending emails, or doing whatever PuSH does.
I do not see the "incompatibility". There is a web-to-gemini gateway at https://portal.mozz.us which I think is an example of how easy it is to write one.
Big Browser is the cause of so many problems that people on HN complain about. Without the control over Big Browser that certain offending corporations have, their empires are considerably weaker.
Mom-and-Pop Browser is probably not an accurate caricature. Maybe something like End User Browser is more apropos.
If most HTTP is being sent over TLS these days, and Gemini is also over TLS, one could argue there is no "incompatibility". Gemini just doesn't need all the extra ad-hoc functionality that has been built on top of HTTP. It is intended for data retrieval, not for things that rely on help from the 20+ million lines of code in Big Browser.
If you are looking for a 1:1 clone of the early web, you will probably be disappointed. Gemini takes way more design hints from Gopher than it ever will from the web.
It looks to be a reimagined gopher with some early web parts. Which makes me wonder how close their protocol is to the gopher one.
in terms of semantics it is almost entirely dissimilar except that it's line-oriented. instead of having any of the content type fields or whatever which enforce half of gopher's grammar, links are just "=> gemini://some.url/tunes.mp3", and when you hit that content it gives you a header with a mime type telling you it's an mp3. so now you don't need any of the structure of gopher and can treat literally everything but links as raw text, and clients can optionally format some agreed-upon subset of markdown, and otherwise, that subset of markdown is so readable without formatting that you don't even really need it.
this means you can create a far simpler protocol semantically: requests are just the uri, responses are just a header of a two digit response code and the mime type, and the rest of the response is just the content as raw text. that's the entire protocol grammar.
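To make that concrete, here is a minimal sketch of such a client in Python (assuming Python 3.8+; a real client would do TOFU certificate pinning instead of skipping verification as this toy does):

```python
import socket
import ssl

# Capsule to fetch; 1965 is the standard Gemini port.
host = "gemini.circumlunar.space"
url = "gemini://gemini.circumlunar.space/"

ctx = ssl.create_default_context()
ctx.check_hostname = False       # Gemini servers commonly use self-signed certs,
ctx.verify_mode = ssl.CERT_NONE  # so this toy skips CA checks (real clients do TOFU)

with socket.create_connection((host, 1965)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        tls.sendall((url + "\r\n").encode("utf-8"))  # the request is just the URL plus CRLF
        response = b""
        while chunk := tls.recv(4096):
            response += chunk

header, _, body = response.partition(b"\r\n")
print(header.decode("utf-8"))        # e.g. "20 text/gemini": two-digit status plus mime type
print(body.decode("utf-8")[:200])    # the rest of the response is raw content
```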
I am mostly with you. It should be a subset of HTTP, and the content should be a subset of HTML/CSS/JS --- but the scheme ought to have a different prefix. So if I navigate to acc://example.com, Mom-and-Pop browser would perform the same steps that Big Browser would take to fetch http://example.com
This allows hyperlinking to the accessible version of a page, and using different default browsers for the different protocols, so that I (as someone who wants to use Mom-and-Pop browser) can easily fall back to Big Browser when necessary to view a page that won't render over acc://
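For what it's worth, the per-scheme default-browser part already works on freedesktop systems; a sketch of the registration (mompop-browser is a made-up name):

```
[Desktop Entry]
Type=Application
Name=Mom-and-Pop Browser
Exec=mompop-browser %u
MimeType=x-scheme-handler/acc;
```

Running "xdg-mime default mompop-browser.desktop x-scheme-handler/acc" would then route acc:// links to it, while http:// links keep opening in Big Browser.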
And part of that restricted spec would allow linking only to such links, to stay within the network?
And the biggest challenge with Gemini is creating a great search engine. But now, searching site:gemini.* KEYWORD gives us the power of Google and other search engines.
> And part of that restricted spec would allow linking only to such links, to stay within the network?
Doesn't work over time. You link somewhere, the other site owner a month later decides they want to use Google Analytics, now you're linking your readers back to the web they're trying to avoid.
> And the biggest challenge with Gemini is creating a great search engine.
Not really, it's discovery in general, which can be solved in many ways that don't involve search engines; Wikipedia's references are often a good discovery tool, for example. I use aggregators to find Gemini content and follow discussions which are happening across the space.
> You link somewhere, the other site owner a month later decides they want to use Google Analytics, now you're linking your readers back to the web they're trying to avoid.
What if instead of a direct link, there was an intermediary that verified that the source and target of the link conformed to spec? If either side didn't conform, the link would just not work. Ideally the intermediary would be built in to the source's web server for privacy reasons. If the target site decided to quit and break spec, people could still access the site from links posted outside of the network.
Yeah, so what you really need is a browser that only supports your subset. It’ll ensure that whatever sites you view are fast/safe/whatever, but Gemini sites you make will be equally accessible to users of “legacy” browsers.
Except that then 99% of people reading your page will be using Firefox, so it's not that big of a deal to just say fuck it and not do this whole SafeHTML thing any more when you want that one extra feature in a year. The 1% of readers who cared are just a minority.
The entire purpose here is to build a community of people who care about this sort of thing, who write content for that community, where taking your content away from that community is not an easy decision of adding a <script> or <style> tag.
Missing 99% of your potential readers is a big problem, though.
Maybe there's another way?
For example, Gemini could only link through a centralized link-management server, and that link server would verify links to be "clean"; if a link isn't clean, it becomes dead?
Of course, that depends on whether said link gets most of its traffic from within Gemini or outside.
I spend most of my time in small spaces - 100 people, max, of whom about 10 might be around at any given time. I'm not worried that people are going to miss what I have to say, because I'm talking to the people who are there. The people who aren't might as well be irrelevant.
Have you ever spent any time in small communities? It's lovely to just... not have to care about gaining followers or making arbitrary counters go up or whatever it is people do, and just talk/create for the sake of it. There's no "brand" to care about.
Counterpoint: All these brain structures that make us crave power and influence evolved in a time in which humans exclusively dealt with small communities (going by your "100 people max" criterion).
Many users also expect javascript-heavy experiences, that they'll be tracked across the web, and that every resource they're likely to access on the internet is commercial in nature. Gemini is quite explicitly a project to try different things.
This is how I am approaching one of my app framework projects. HTTP is the delivery vehicle, but I only utilize a very small subset of the various full APIs (HTTP/HTML/JS/CSS) in order to deliver the framework's functionality. One of my objectives is for the framework to support a wider range of browsers than most modern websites are able to handle today. If vendors like Apple and Google begin fully-embracing things like PWA, I could wind up in a really good position regarding this approach.
This is the only part that I don't really understand about Gemini. Even the most basic printed publications can include illustrations. <img> got added to HTML very early on because sometimes it's hard to share some piece of information in anything but a visual form.
I write a (mostly) technical blog that certainly focuses more on text content than images. I would be happy to throw away the header, sidebar and the rest of the "design" cruft (in fact my blog is perfectly usable in a browser that doesn't support CSS or JavaScript). But I can't imagine having my posts without the graphs, diagrams and photos inserted in the text.
If the fear is that in-line images would lead to frivolous use as ads or "useless multi-megabyte header images", then maybe a better approach would be to limit the number, or size, of images on each page? Some scientific publications do exactly that in an attempt to force the authors to focus on selecting only the most important images that need to accompany their papers.
No technical limitation is suitable. The appropriate number or size of images depends on the accompanying text. Setting it high enough to allow all legitimate uses makes it weak enough that you might as well have no limit. And even a low limit does nothing to prevent annoying use up to that limit.
The best possible limit is "must convince the reader to click it".
I guess that's a reasonable position to take. It reminds me of paper publications where you have all the figures on color plates bound in the middle/end, so I guess it isn't without precedent
On the other hand, it made me think of old Usenet posts and discussions. That was another medium where you were limited to plain text only. Posters were often forced to resort to awful ASCII-art drawings of things they wanted to explain, and that was just a horrible experience altogether (not to mention how fun those drawings are to decipher today, when modern archives have mostly messed up the whitespace).
Binary attachments were somewhat fiddly in Usenet, AIUI? I don't think MIME/8-bit clean support was really consistently there at the time. In Gemini, you'd just serve it as a binary file.
I'm also not really sure how a protocol can claim to be simpler than HTTP (send bytes to port 80, get bytes, print bytes...) if it has baked-in content limitations...
What about "inline images can't be linkable?" Caption text could still link to an enlarged/detailed version. But if images can never be clicked/linked, it would be hard to abuse them for ads the way we see today.
The lack of inline links is less about aesthetics and more about predictability. When you request a gemini resource you know that there will be two things happening: a TLS handshake followed by, hopefully, the server response (hopefully with your requested document).
Adding images requires more requests and breaks the concept of "one url/document == one request". I love that I know that my client will do nothing I do not tell it to do.
If you want to use gemini and you want inline images I believe https://proxy.vulpes.one does inline images of some form or other.
That said, images have other issues beyond causing page loads/requests to be unpredictable: they are an accessibility nightmare (as we have seen on the web).
> they are an accessibility nightmare (as we have seen on the web).
Audio is an accessibility nightmare for people who can't see, text is an accessibility nightmare for people who can't see or people who can't read, German is an accessibility nightmare for people who can't speak German.
At some point we have to accept that not every way something is presented is going to be equally accessible by every person, but the solution isn't to just decide we should jettison a rich form of communication because there is a small subset that can't fully benefit from it. Even books that are expected to be used by people who can see all the images usually describe the purpose of the image, what it is illustrating, and why it was included.
> I love that I know that my client will do nothing I do not tell it to do.
By this line of reasoning, you would have to manually approve every init process that your computer would want to start every time you boot it up.
"images" == "client doing things I don't tell it to do" is completely false. Clients have been built that have configurable policies for loading images and scripts, and they're conceptually very simple and easy to use - e.g. "don't load images by default, click to load temporarily, control-click to load and permanently whitelist" is an example of a user-agent policy that not only supports images, but conforms to your extremely convoluted definition of a user-agent "do[ing] nothing I do not tell it to do."
> requires more requests
Is the purpose of a document browser to minimize requests, or to actually serve useful information? Images can encode data that cannot be encoded in text, and a vast quantity of information is much more easily read and understood in graphical form. If you want to minimize requests, then just don't use the web at all.
Also, this isn't even necessarily the case. You could encode images as part of the page, as base64 or something (see the sketch below).
> they are an accessibility nightmare (as we have seen on the web)
The web supports alt-text for images. When people don't provide alt-text, that's not a technical problem, that's a social one.
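As a sketch of the base64 idea above (diagram.png is a hypothetical file):

```python
import base64

# Inline an image into the page itself as a data URI, avoiding a second request.
with open("diagram.png", "rb") as f:
    payload = base64.b64encode(f.read()).decode("ascii")

data_uri = f"data:image/png;base64,{payload}"
print(data_uri[:60] + "...")  # usable wherever the format allows an image reference
```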
i don't know if you understand what you're saying?? are you suggesting the binary content of a png file be inserted into the text-readable markup for a document? this doesn't make any sense. in-line images are linking to other resources. nobody just copy-pastes the hex content of an image into their website's html, because it will end up being 95% of the file size and a nightmare to look at in your editor.
I suspect, were gemini format to catch on, user agents would likely get an option to render images from links inline (either instead of links or as thumbnail previews attached to the links.)
The ability for clients to render the markup as they please is actually one of the most important features, and a stark distinction from HTML, which only has one correct way to render.
I exploit this in my Unnamed Gopher Client[0], a client for the predecessor of the Gemini protocol, where I render links in a familiar files/folders format.
A website can be rendered at different resolutions, with or without stylesheets, in dark mode, in printer-friendly formats, in a text-like format, with user stylesheets, with some elements hidden, as plain text, etc.
True, but at a given resolution, with CSS turned on, and in dark mode all users will basically see the same HTML view. The user can't have a dark mode HTML page unless the web site offers one or the user has an extension that makes a best effort to create one. HTML with complex Javascript rendering makes it hard to give the user control.
The concept of a user agent that gives the user much greater ability to choose how they want to view content could mean each user will:
* Pick their own font, font size, line spacing, margins
* Pick their own text color and background color. Like dark mode, high contrast, etc.
* Choose how linked images are shown: inline, click to load, load in new window, expandable thumbnails, etc
* How sections, section headers are displayed. Add a table of contents? Add a button to jump to the next section? The user can choose.
I like reader view, which gives me the ability to choose how I view HTML, but only when reader view can figure out how to extract the content (sometimes disastrously missing paragraphs of text...)
This right here should be the headline selling point on Gemini existing as its own separate protocol segregated from standard HTTP.
This thread is the first I've heard of it, and up until this comment I was thinking in my head, "sheesh, what kind of value proposition would justify that amount of work? I'm just not seeing it."
It's kind of like what REST was meant to be. More about entities than verbs. Cool. I get it now.
Most of my list is available as extensions on other browsers [1] (which I'd generally prefer to reduce bloat).
However, in my experience, the DOM for some sites is such a mess that trying to apply user preferences is a hack, e.g. reader view accidentally losing text. Does the implementation Opera has always work? That would be cool, although I'd still avoid Opera for privacy reasons.
Gemini seems to throw away all that complexity, which makes user customization easier, i.e. the problem is HTML/JS/DOM complexity, not a browser or its extensions.
Images are generally not the point. Formulas, diagrams and illustrations are.
Managing image assets is tedious. The web design community still by and large hasn’t figured out a great standard way to do for images (versions with reversible/cherry-pickable diffs) what git does for code.
Instead, diagrams and formulas could follow the lovely ideas of mermaid, graphviz, dot, and mathjax, inlined into the markdown as text (see the sketch below). Tooling for VSCode handles inline diagrams beautifully for Markdown already.[1]
And then, inline SVG would let you illustrate nearly anything.
WSJ got by fine without photos, as did most journals for most of my lifetime, and Kindle books mostly don’t have them today. I wouldn’t be too quick to say a medium has to be filled with photos.
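To give a flavor of the mermaid idea above: a diagram is just a few lines of plain text inlined in the document, with rendering left entirely to tooling. A minimal sketch:

```
graph LR
    A[plain text source] --> B[mermaid renderer]
    B --> C[diagram in the client]
```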
A large part of HTML, and a large part of modern browsers and other web technologies, are focused on ensuring that the publisher has control over the user's experience. I really like the fact that Gemini breaks that, and I hope that the project owners mercilessly reject any attempt to introduce features that allow the server to control or influence user experience.
I would really like to see structured text that is self-descriptive (e.g. this is the document title, this is a paragraph, this is a header, bullet list, etc.) but have no ability to influence HOW those things are displayed- eventually maybe we'll have browsers that can support rich theming, etc.
Others have noted that lack of images is an oversight. Perhaps the language needs a "binary file download" structure, and if the binary in question is a media file, then the browser could choose to display it. Maybe signal with mime types?
> focused on ensuring that the publisher has control over the user's experience
Worth noting that this is modern browsers/web, and was initially not like this.
The term "user-agent" comes from that the _user_ has control over the experience, no matter what the publisher thinks. The agent (browser) acts for the user, hence user-agent.
User-agent CSS files were rampant back in the days, when a lot of content was unstyled. So you could navigate between websites and they looked the same, as they would use your user agent css files.
But then everyone decided they had to have a unique look on the web (CSS). Then they decided they needed unique functionality on the web (JavaScript). And here we are :)
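A sketch of what such a user-agent stylesheet looks like (e.g. Firefox's userContent.css; the specific values are just one reader's taste):

```css
/* Apply the reader's preferred typography to every site. */
body {
  font-family: serif !important;
  max-width: 40em !important;
  margin: 0 auto !important;
  background: #fdf6e3 !important;
  color: #333 !important;
}
```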
as an extension of this, user stylesheets are interesting conceptually, but practically, web pages became quickly indistinguishable when the same stylesheet was applied to every page. it hindered memory (mental categorization) and recall.
users then predictably wanted pages to look different, to have style, and that's likely the principal cause of user stylesheets' decline, not corporate coercion. that's not to argue against user stylesheets per se, but that they'll likely never have wide usage.
Would it be too simple to say that you're looking for HTML without CSS? Because HTML already has semantic tags describing "this is the title", "this is a list" etc...
As a producer you can always circumvent the constraints imposed by software. You can use a "this-is-a-title" tag to make text appear bigger, you can use a "this-is-a-list" tag to make content linear instead of using paragraphs, etc...
What I'm saying is that there can't be a format that isn't hacked and exploited to allow the publisher to do what they want, because ultimately it's their content so they control it. Maybe limiting the existing tags in HTML is a good idea (AFAIK that's one of the strategies of AMP) but reinventing a structured format will just lead to HTML-but-less.
If you want to give control to the user, then you have to do that from the User Agent: forbidding any publisher-provided styling and allowing only certain tags are what will actually do what you want, instead of inventing yet another format
You'd want HTML without CSS and without all the ancient legacy HTML 3.2 styling attributes (i.e. no way to set table cell sizes/padding, no way to set colors on anything)
> A large part of HTML, and a large part of modern browsers and other web technologies, are focused on ensuring that the publisher has control over the user's experience.
It's also the reason why it caught on. On one hand people reject the ability to express individuality on the web, on the other a similar crowd is nostalgic about geocities and praises similar revivals. It's either one or the other.
> I would really like to see structured text that is self-descriptive (e.g. this is the document title, this is a paragraph, this is a header, bullet list, etc.) but have no ability to influence HOW those things are displayed- eventually maybe we'll have browsers that can support rich theming, etc.
How about publishing markdown over HTTPS? Then make a client that renders just that?
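A toy version of that client fits in a few lines of Python (assuming the third-party rich library; the URL is hypothetical):

```python
import urllib.request

from rich.console import Console
from rich.markdown import Markdown

# Fetch a markdown document over HTTPS and render it in the terminal,
# with all presentation decided client-side.
url = "https://example.com/index.md"
text = urllib.request.urlopen(url).read().decode("utf-8")
Console().print(Markdown(text))
```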
Why's it either one or the other? You could have one browser application for no-nonsense textual or informational pages and one for wacky personal geocities/myspace style pages. Feel free to enjoy both, either separately, or if you want more integration you're free to use the underlying OS shell to switch between them, read them side-by-side, link between them, etc.
> You could have one browser application for no-nonsense textual or informational pages and one for wacky personal geocities/myspace style pages.
Hence the idea of a separate markdown only browser. I don’t think http is the problem here. So it would be better to reuse as much existing tech as possible.
Note: Personally I don’t think it would catch on, the convenience of handling everything in one program is just too high.
> On one hand people reject the ability to express individuality on the web, on the other a similar crowd is nostalgic about geocities and praises similar revivals.
I like to think of books as an example where content is way more important than presentation. Most web sites these days spend a lot more effort on how you present than on what: in the end, a 1000-word article now has a very complex architecture behind it, hundreds or thousands of times larger than the content.
as an early gemini convert this is one of the reasons I wish solderpunk would split the gemini protocol and text/gemini mimetype specs. gemini can serve more than text/gemini (e.g. markdown as you suggest), so embedding the text/gemini mimetype into the protocol spec seems rather like embedding the html spec into the http one
I agree in part, and disagree in part. I like that gemini has a "native" markup format, and that it's simple and bare-bones as it is. It's a communications baseline, and other things are negotiable between client and server?
There are a few gemini clients that support theming (including fonts, font sizes, text color for various elements, list bullet style, link color based on scheme, and page background color). This one comes to mind: https://github.com/MasterQ32/kristall
I've tried a few and this one seems to be the most user-friendly, tho it looks like it has trouble displaying menus with multiple entries (spacebar interpretation)
I would take an alternative position in this matter. What we need is a simple yet functional subset of web. The point is to be able to build a browser in a reasonable amount of time with many languages reusing some commonly used libraries, while being able to use latest Chrome to browse those websites as well.
TLS: keep it as it is. Crypto is hard and TLS is proven crypto. Mandate something like 1.2+ and be done with it. Every mature language has TLS implementation or bindings.
HTTP: use a subset of HTTP/1.1. Parsing is very easy: it's just a bunch of lines. Full HTTP/1.1 is hard and probably unnecessary. Things like connection reuse are not necessary and should be excluded for simplicity.
HTML: use a subset of XHTML. It must be valid XML, so parsing is just one call to the XML library which is available in every language (see the sketch below).
CSS: I don't really know, that's a tough one. Something like CSS 2 I guess. There must be a balance between complexity of implementation and richness of presentation.
JavaScript: just nope. That rabbit hole is too deep.
If you take this position to the extreme, you can even reduce HTML + CSS to some kind of markdown-like language, but I don't think that we need to go that far.
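To illustrate the XHTML point above: parsing strict XHTML really is a single call to a stock XML library, e.g. in Python:

```python
import xml.etree.ElementTree as ET

# Strict XHTML is valid XML, so a stock XML parser handles it in one call.
doc = ET.fromstring(
    "<html><head><title>Hi</title></head>"
    "<body><p>Hello, <a href='https://example.org/'>world</a>!</p></body></html>"
)
for link in doc.iter("a"):
    print(link.get("href"), link.text)  # https://example.org/ world
```

The flip side, raised further down the thread, is that one malformed tag makes the whole document unparseable.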
In my opinion, there should be no CSS and no styling at all. The original idea of logical document markup and letting the client render the document is best. It shouldn't be the content provider's business if and how the content is processed on the client machine. Once you open that door, you just duplicate the kind of aberration we already have.
A good WWW provides linked documents in a format that is easy to display and process (e.g. extract links, text, headlines, images, etc.) and makes it impossible to hide content. If you publish a document, it should be publicly accessible.
Honestly, you've just described about 95% of gemini exactly. It requires TLS (even 1.2+) and line-based parsing with text/gemini content, which is actually even simpler to parse than XML, since it's line-based (just peek at the first 3 characters and you know the line type). It doesn't use CSS or JS at all; styling is totally up to the client.
So like I said, you've basically just described gemini :)
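To make the line-type point concrete, the whole text/gemini format is just a handful of line types, each identified by its first characters:

```
# A heading
Plain text needs no markup at all.
=> gemini://example.org/tunes.mp3 A link, one per line, with an optional label
* a bullet point
> a quoted line
```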
The difference is you can't just open the Gemini protocol in the browser, so you're limiting the audience of your resource to a small minority. That's what I want to highlight: something that is compatible with the modern web, yet simple enough that alternative browsers could be implemented.
You actually can open Gemini in a web browser, using a proxy like proxy.vulpes.one or portal.mozz.us. I tried to write a Firefox extension a la OverbiteWX for gemini to automatically redirect links, but my JS-fu isn't strong enough.
I think "just" going for CSS 2 would be a mistake. In my opinion, it would be preferable take new concepts that are simple to implement and simplify life for everybody (e.g. vh, vw units and flexbox, box-sizing). Goal should be to get a minimal deterministic rendering engine for content that's easily written manually with a minimal subset of modern XHLTML+CSS.
Re:XHMTL: as somebody pointed out here, there are rules to "normalize" unbalanced HTML5, but they have to be implemented and add to the mountain of "implicit" knowledge one has to have and implement...
a body with a sequence of <img> tags, displayed vertically. you can look at imgur and see such a format has been used for simple messages, blogs, collections of memes, recipes, news, informational content, engineering content, fitness advice etc etc.
no css, no nothing, the user agent takes care of formatting them according to the display device etc.
I can't think of anything more flexible, simpler and yet capable of doing 90% of what the static web can do today. you can even have a comments section: just add an <img> to the bottom and put the commenting user & timestamp in the alt text.
I think the grandparent's point is specifically against embedded scripts rather than JavaScript itself, since it can be used to make HTML less like a document, and there's also the proverbial can of worms where you automatically run Turing complete code from an unknown person.
Actually I'm thinking about complexity of implementation and security consequences. I'm not sure that JavaScript interpreters are so common, and bundling V8 just kind of defeats the whole purpose... Implementing JavaScript is not an easy task; it also requires implementing plenty of APIs like DOM access, XHR, and a complex event system to be any use. And the ability to evaluate Turing-complete code poses just another level of security issues.
Whether it makes HTML less like a document is up to author to decide, IMO. Some JS snippets are pretty useful, some are not. You can use JS to implement an interactive learning system or you can use JS to spy on users.
Wasm probably is easier to implement than JavaScript. But it still carries the other issues mentioned above.
I'm a big fan of WASM. And I imagine that a naive but functional implementation is easy to make. However, as the web becomes increasingly heavy over time, there will be pressure on the dominant browsers to accelerate WASM, and they might resort to the same complex tricks that modern JS interpreters must do to stay competitive.
I think anything real Web 2.0 (I'm takin' it back, it's not their word anymore) would allow for both models. Whatever is next needs to be both simpler, more generically useful, and fill the requirements of this project as well as the most progressive "web app". At this point, I am open to "starting over."
I think you could keep JavaScript at this stage. It’s mature enough and very useful. There are a few strong implementations out there. Make it an add-on with well specified interfaces and object models.
'use subset of XHTML. It must be valid XML, so parsing is just one call to the XML library which is available on every language.'
HTML5 defined a concrete, final mechanism for parsing tag soup and presenting it as a standardized tree. While the library itself isn't simple, using it is, and being standardized, most non-fringe languages ought to have a library for it by now. It should probably use that, for all the same reasons trying to use XHTML didn't work the first time. XHTML raises the bar on writing correct HTML too far.
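For instance, in Python the standardized algorithm is available via the third-party html5lib package; tag soup parses deterministically into a normal element tree:

```python
import html5lib  # implements the WHATWG HTML5 parsing algorithm

# Unclosed tags are legal HTML5 and parse the same way in every conforming parser.
tree = html5lib.parse(
    "<title>Hi</title><p>Hello <a href='https://example.org/'>world",
    namespaceHTMLElements=False,
)
for a in tree.iter("a"):
    print(a.get("href"), a.text)  # https://example.org/ world
```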
I feel like the lack of image support is a missed opportunity.
I know it is an ideological choice to only have text, but being able to embed standard image formats (in a totally plain, non-fancy way) would increase the utility of this hugely. They mention blogs and tutorials and recipes here - those would benefit hugely from having simple inline images within the body of the text, just like you expect in a newspaper etc.
I understand why they didn't allow inline images, but I agree with you it limits a lot of use cases.
If I were designing it, I would say: "you can have images, but they always display as a 'block element', with nothing to either side. No worries about wrapping text; no background images under other elements, etc." I think that keeps the spirit of simplicity.
you can have images and they don't always display as anything. the author of the user agent decides how they are displayed. you could in-line them if you wanted to, but only a few clients do that at the moment. there are no hints to the client about how some content could, should, or should "always display as".
It's text. The client displays that text and renders links, headings, etc, however it wishes. If it really wants to, it could just not format them at all. There's a gemini client made for plan9's acme text editor that doesn't render links, and instead displays them verbatim, because the plan9 plumber can handle the hyperlinking aspect. All of that is eye candy and fluff.
If a client finds a link to an image, it can in-line it if it wants. If you wrote a client, when it found a link to an image, it would in-line it "with nothing to either side." That's not something that has to be specced.
You could have a setting on the client that lets the user specify that that's how they always want to see images; I, on the other hand, might specify that I want to open them in a new window, or see a thumbnail until I hover over it, etc.
Same with how headers are displayed (maybe I want folding or something), whether a ToC is displayed, colours, fonts, etc.
The point is that the user can decide all this stuff, without having to hack it around the author's own styles and scripts.
Technical paper is what I was thinking about there. Furthermore, since Gemini apparently lacks support for mathematical notation images would be necessary for such even if the paper doesn't intrinsically contain non-textual images (e.g. pictures, charts, or graphs, which are common though not universal).
I'm not sure how recipes, for example, would be an issue without inline images. You click (or otherwise trigger) the image link and look at the image (potentially in a new tab or window), then go back to reading content. It isn't hard or even a bad experience. It is just different from current expectations, which come from comparing something that isn't the web to the web.
Being able to look at the image of the preparation step while reading the instructions is nice when making a recipe. Having to go to a separate page is annoying.
I agree, but I see this as a client-concern. I could imagine clients fetching and inlining images if the user directed it to, or maybe media-focused clients having a text pane alongside a media pane where the images would be rendered. The main advantage I see of this approach is that it takes us from "the server decides what the client does" to "the client decides what the client does".
Ok, but the recipe maker is going to want to suggest where to put the images (so they are in the proper spot on the recipe).... at which point, how is that different than html? A client can decide not to render the image where it is suggested already.
So did I. But I'm not going to let one drawback distract me from something otherwise very good, nothing is perfect after all.
I can even understand why they did it. To keep the doc format very simple.
I hope that more clients will add unique rendering features that will turn this drawback on its head. It could be in-line rendering or a gallery-like feature.
I think calling gemini an alternative to the web has a very limited view of the web. It takes an idea someone has of what the web should be: a set of text documents. That's a very small subset of the web, not just of today.
Building separate protocols for all the various use-cases of the web would be interesting, but would still need some interconnection. But I'm not convinced that has many advantages besides not being accidentally linked to a website of the "old web" - a problem that could be reduced by a browser extension that strictly blocks any external URLs and JavaScript.
The separate protocol for everything is essentially what things were like prior to about 1994.
There was a protocol for searching documents, a protocol for looking up someone's email, it was all partitioned out.
The web was seen as just another fish in the pond.
After the web became big, these things still lasted for a while
However, spam and crooks changed it all. Usenet became useless; DNS full-domain lookups (you used to be able to get a list of all the subdomains of a domain through the command line and just browse them out of curiosity) and using whois for email (you could just query for a name and get an email address over whois) are all gone, because there are too many snakes trying to scam people and flood the network.
Things used to be much better tools but it turns out they were too good and had no defenses. The dream of everybody connecting has sort of been retracted a bit. RMS, TBL, Torvalds, I could just send them an email in the 90s and they'd respond, it was pretty remarkable.
It's not the case any more. Not even minor players in history (such as an author from a 25 year old book) respond to my questions. People just don't do that anymore.
Spam, harassment, criminals, ill will, this all has to be a big priority if we want to try it again.
The future should be the dreams of our better angels, building better tomorrows...
> RMS, TBL, Torvalds, I could just send them an email in the 90s and they'd respond, it was pretty remarkable.
I don't think this stopped just because of spam, harassment, or other bad behavior. A big part of it is just community size. When the community of internet users was smaller, you could interact with everyone who reached out in a reasonable amount of time. As it got bigger, that is no longer possible because of the sheer number of people.
This is an exceptionally good point. Security is also one of the top problems with the web (alongside the asymmetrical difficulty of hosting content vs consuming it and the lack of consistency for web content). The problems with the web are mitigated by "good enough" solutions from browser vendors, out-of-band third party extensions and even some services on the web itself (e.g. archive.org, though I don't know how sustainable that is, and it's far from perfect).
This is hard to understand for someone not familiar with Gopher, but I’m interested in how menus are handled.
I've always wished browsers handled site menus in their chrome, so that the document can be focused on content, not navigation. It's the browser's job!
For a while Opera supported these related links in the head for some pages, but the dev was unable to add their own; it was limited to a small number of standard items such as Index. These were shown in a browser toolbar.
Nonstandard navigation has always been a point of friction for users, as it precludes universal access by making them relearn how each site works.
The web is partly broken due to the sheer expanse of it. The author of the article alludes to how they enjoy looking at the aggregator, and notes they can always find something interesting. If Gemini ever became as popular as the web, that would stop being true.
So the selling point of Gemini is that by staying rudimentary, it can limit its appeal, and subsequently stay unpopular enough to be more like "The Old Web". I think that's worth noting, because you could get trapped into thinking this is a technology problem, but it's really a people problem.
However, Gemini does not exist in a vacuum. The web will be there. There will be social media platforms, multimedia, awesome webapps and all that. And Gemini is just text.
When you have the choice between easily consumable infinite multimedia and just text, you only pick the latter when you really care about the quality of text content. It's not sexy so all the spammers, content marketers and ego boosters have nothing to gain on Gemini. And so there can be this esoteric little corner of the internet, with down-to-earth text content written by ordinary people.
> If Gemini ever became as popular as the web, that would stop being true.
Your implication may be true, but Gemini will never become as popular as the web (well unless the web becomes extremely unpopular at the expense of something else besides Gemini).
My wife would see "no images" and that would be the beginning and end of using Gemini for her.
2. Someone realises it would be great to have simple inline images, and makes a cool client that supports “gemini+img” syntax that they make up. The syntax gracefully degrades, so you can use it in your docs even if your users aren’t using the new browser!
3. Protocol is technically text-only but in reality everyone uses img-enabled browser
4. Repeat with basic styling, then simple scripts. Eventually authors rely on more and more “optional” features and syntax extensions, and we end up with a similar feature set to what we have today.
5. Advertisers move in as Gemini gains mainstream adoption, and we’re back to www
How is a network protocol proof against being used to transport CSS files? Does the network stack inspect what you're shipping and ensure you're only sending 100% Pure Plain Text?
> The Gemini transport protocol is unsuitable for the transfer of large files, since it misses many features that protocols such as FTP or HTTP use to recover from network instability.
Isn't that TCP's job? Is this person saying Gemini doesn't use TCP?
Finally:
> Now, what does Gemini currently have to offer? The best way to find out is to head over to the official site: gemini.circumlunar.space in your Gemini browser.
Back in the Gopher days, my "Gemini browser" would be my Web browser. That was one of the reasons Web browsers took off: You could use them to access all of the information on the Internet, including the WWW, Gopher, Usenet, and Email. Only more recently did Mozilla morph from the Netscape Communicator software suite into the slimmed-down Firefox browser without email, spinning off Thunderbird in the process, and only much later did Firefox drop Gopher support from the core binary.
> Isn't that TCP's job? Is this person saying Gemini doesn't use TCP?
maybe he’s talking about higher level features, like the possibility to restart a download from a certain point, without redownloading the initial part? haven’t used this since dialup days, though
> Gemini has no support for caching, compression, or resumption of interrupted downloads. As such, it's not very well suited to distributing large files, for values of "large" which depend upon the speed and reliability of your network connection.
Download managers that speed up downloads use this feature, e.g. they use two threads, one downloading from the beginning and the other continuing from the middle of the file. This fools single-connection throttling measures.
I suspect that it is this partial download that they are talking about.
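For comparison, this is the HTTP mechanism in question: a Range request asking the server to resume mid-file (the URL is hypothetical):

```python
import urllib.request

# Ask for everything from byte 1,000,000 onward; servers that support
# ranges answer "206 Partial Content" instead of resending the whole file.
req = urllib.request.Request(
    "https://example.com/big-file.iso",
    headers={"Range": "bytes=1000000-"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 206 if ranges are supported, 200 otherwise
    with open("big-file.iso", "ab") as out:
        out.write(resp.read())
```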
That said, I can't tell whether or not I've used it recently. I know I don't use it when I play with personal projects, but I don't know what other sites do because I rarely have a console pulled up in my browser when I'm just using it, rather than developing.
It's often used for bulk file downloads (e.g. curl-ing a multi-GB file while transferring between wifi networks), as well as for video streaming sometimes (buffering)
> How is a network protocol proof against being used to transport CSS files? Does the network stack inspect what you're shipping and ensure you're only sending 100% Pure Plain Text?
The Gemini specification includes its own format for pages, which is a text-based scheme inspired by Markdown and Gopher menus. You can use the Gemini protocol to transmit things other than Gemini pages, sort of like how you can use HTTP to transmit PDFs and Word documents, but you wouldn't build your whole site out of them. (At least that's my impression, I haven't gotten around to actually visiting many Gemini sites yet.)
> How is a network protocol proof against being used to transport CSS files? Does the network stack inspect what you're shipping and ensure you're only sending 100% Pure Plain Text?
It's just like the web, the transport protocol (HTTP/S) can be used on any file. But there is a separate spec for the document format (HTML etc.). You could transport CSS over Gemini, just don't expect any of the browsers to render it. Just like how web browsers won't execute alternate scripting languages natively.
> Isn't that TCP's job? Is this person saying Gemini doesn't use TCP?
I didn't really elaborate on this point while writing, because I had nothing to add. I will quote from the project's FAQ:
>> Gemini has no support for caching, compression, or resumption of interrupted downloads. As such, it's not very well suited to distributing large files, for values of "large" which depend upon the speed and reliability of your network connection.
Hopefully that clears up what I meant.
> Back in the Gopher days, my "Gemini browser" would be my Web browser.
You might be interested in Castor[1]. It's a browser for the minimalist internet. Rolls support for Gemini, Gopher and Finger all in one.
I can understand why FF removed support. But hopefully smaller applications, like Castor, can fill this gap.
>>Gemini has no support for caching, compression, or resumption of interrupted downloads. As such, it's not very well suited to distributing large files, for values of "large" which depend upon the speed and reliability of your network connection.
Which honestly is pretty silly, as lots of caching is about reducing latency for small files, not saving bandwidth for large files. I suppose it matters less if the documents are self-contained.
> You could transport CSS over Gemini, just don't expect any of the browsers to render it.
If Gemini is ever of even domain-specific serious use, I'd expect both the format and protocol to be added to what is supported by existing major web browsers (it can't be both tractable for small implementers and intractable for Apple/Google/Mozilla), which, as it turns out, know how to support the combination of HTML/CSS/JS just fine and won't likely forget just because a different transfer protocol is involved. Presenting a DOM mapping for Gemini format pages and exposing it at least to extensions even if there is no way to include page scripts doesn't seem unlikely, either.
One of the things I find refreshing about Gemini is that there is no standard scripting language, and the implementations vary wildly on the client side, from Rust to Lua to Python to Go, as well as on the server side. It made me realize that browser technology for the Web perhaps got locked into specific domain-centric technologies which have held it back. There is so much C/C++ required for JavaScriptCore and friends in a modern browser that there is only one real choice of language to code in. Mozilla has made great advancements with Rust in Firefox, but is still a long way off from a total conversion. It's not that it's not possible, but if you want to tap into the work which has already been done in JavaScriptCore or other technologies, you certainly cannot just pick your own backend or language.
Gemini's implementations, on the other hand, are being brought up in parallel and in the open, so the ecosystem is already much broader from the beginning; that is a major strength. Building a modern browser from source nowadays is an intensive process on a single mid-range workstation, just because so much of the extra functionality is compulsory rather than opt-in. Many of these modules were meant to be pluggable, but somewhere along the way they became coupled dependencies of each other. A good example is Electron: in theory it should be just the things you need, a subset of a browser where applicable, but instead you get the whole browser engine every single time.
That's actually covered in the deeper linked docs on Gemini.
Setting it up on a separate protocol / markup lets you reason concretely about what kind of privacy, features and protection you get as a user, rather than relying on the goodwill or current promises of your content provider.
It’s not meant to “get traction” in the Silicon Valley sense of “everyone must use this or there is DOOM”, it’s meant to be useful for the communities that use it.
If you open the Gemini link posted below about why just defining a new doctype wouldn’t work, give it a read.
>It’s not meant to “get traction” in the Silicon Valley sense of “everyone must use this or there is DOOM”, it’s meant to be useful for the communities that use it.
If you're building a protocol that enforces certain strict standards, like in this case being text only because the internet is 'bloated' according to the author, then the only point of having it is adoption beyond your community.
If all you want to do is communicate with ardent non-bloat advocates you can already do this on the regular internet, because everyone in that community does it voluntarily already
There's no point in codifying standards for a community that follows your standards to begin with.
Attracting people who agree with your community's standards to join your community, while hinting that maybe other people might not be interested, is perfectly reasonable. That's... pretty much how communities work.
Yes, but you don't need a distinct technical foundation that is incompatible with your surroundings. If you want to play chess you can simply go to the chess club, where everyone is interested in chess; you don't need to start a new chess-players-only micro-nation in the middle of a forest where bloated cars can't reach, just to keep the other people out. That would be pretty unnecessary. In fact, if you want to attract new people, it's a really bad idea.
Solderpunk has explained the issues with specifying a safe subset of HTML, the article has been linked elsewhere in this thread. If you don't want to use a Gemini browser to access it, you can use this link: https://portal.mozz.us/gemini/gemini.circumlunar.space/users...
This would seem to be a similar sort of issue to when people say "come chat on IRC", or use a mailing list to communicate about a project, or whatever else - you're not going to use those if all you ever want to use is a web browser. And that's ok in my book. I'll hang out with the people who do want to use those tools.
Except, of course, that there are IRC clients accessed through browsers, mail through browsers, Usenet through browsers... so you don't actually exclude people that way.
If Gemini catches on, someone will write an add-on for Firefox that reads it. And then it's just part of the Web that is fast and looks a little different.
And that link is a long-winded way of saying "It's good to put an artificial barrier in the way."
There's actually already portal.mozz.us as well as proxy.vulpes.one, both of which are Gemini "portals" ala gopher.floodgap.com. I actually began to try and write an addon to open gemini:// links in Firefox based on Overbite, but I couldn't figure it out (and I'd have to change the gemini protocol to something like gemini+web, due to Firefox limitations).
Just because you can do that, does not make it part of the Web -- it's still a different space. Those portals are basically web-based clients to the protocol, which means they're still bound by the rules of the protocol -- they're not going to have JS in them, for example.
Hey, I just want to jump in here as a counterexample to your argument -- I came into Gemini from "beyond the community." I was a standard web user; I found out about gemini from another discussion on HN or Masto or somewhere and jumped in, and now I absolutely love the community I joined.
So it's absolutely become adopted beyond its beginning community.
I think what the parent is trying to say is: if this protocol is a subset of what we have today with HTTP and HTML, why not create more of a political movement cheering for this subset to be used, the same way Google does with AMP?
On the technical side of things, it looks like an old battle replayed with older weapons, just to go down the same path HTML did. "Oh, but we will not add more features." OK, but people won't use it then, because they can already serve text, markdown, etc. over HTTP and relax, knowing that if they later need to serve images, videos or graphics, they can do it with HTML.
And I say this as a person who is also trying to create some alternatives to the web. But instead of going back to the nineties, I tried to think about what the technology of 10 years ahead would look like. I've probably not managed it, because it's really hard to push the envelope when things are almost at the state of the art, as the Web is. But I also don't think the answer for the future lies in the past.
You know what I think would be a really badass movement? Creating a simple spec of the Web, even without JavaScript. Because through Chrome's feature creep, Google is making it impossible for other players to create competing browser engines.
So if two folks decide to create a web engine in this new language they like, it won't be an impossible goal, because there's this simple version of the spec, with far fewer features.
The people behind this might be very good at convincing people, and real believers working hard can keep this thing afloat for some time. But it will be really hard to get it out of a small niche.
Anyway, I love the thinking behind this. The meditation, the koan, is really on the right track. We need more rebels and fighters on this front. But I just can't see how it can compete as a subset of a massively popular and deployed protocol with clients everywhere. How can it really differentiate itself, apart from what the web today can already serve to people?
The problem is that then you link to https://yourfriendsblog.org, and a couple years down the line your friend decides this safe subset is too restricting for them and decides to replace their blog with Wordpress with all the plugins and Cloudflare captchas and Google Analytics and evercookies and whatever else. The reader is dumped, unceremoniously, back into the big bad web, but could never know this before clicking the link.
When you link to a Gemini URL, you're linking to something you know can't be replaced with something privacy-violating in the future. The worst that can happen is the server shuts down, which is a very different failure mode. And someone is less likely to do that than to switch to a different brand of HTML - or so's the hope.
It's not going to be a major thing that everyone uses. That's ok! Neither is IRC, mailing lists, and whatever else - but people still use them, every day. There's ideas exchanged, friendships made, relationships formed, and they serve a not-insignificant community's needs.
> The reader is dumped, unceremoniously, back into the big bad web, but could never know this before clicking the link.
OK, but you know the big majority of users don't care where the content comes from, or how it's delivered to them. They care about what is being served instead of how.
If you guys manage to have some 'killer apps' on this protocol, content that people will try to reach no matter how it is implemented, then there's a chance.
IRC's killer app is IRC itself, and it managed to establish itself as an alternative in the nineties, when a lot of popular protocols and alternatives like the web were still in their infancy.
Anyway, if you convince people over time to serve their content through this medium, with enough interesting content, users will try/learn to reach them.
But I don't know; I think they should at least be trying to use some P2P DNS system, making it easy for people to serve their own content, or revisiting BBSes and serving content in tree-like structures akin to directories.
I feel that there must be something to really differentiate it from everything else: some things that are unique, that the web and others are not covering. If you think about IRC or email, they have distinct features that the web could never cover even being a mammoth protocol, while the same doesn't hold when you think of Gemini's proposal.
Anyway, people trying to do something, to change things for what they perceive as the better, is a good thing, and it should always be celebrated. Even when the thing doesn't stick, it might just need adjustments or incremental evolution, or it can serve as an influence on something else, or through experience inspire the creators to build something even better.
> OK, but you know the big majority of users don't care where the content comes from, or how it's delivered to them. They care about what is being served instead of how.
Who, exactly, without a profit motive, wants the majority of users? I don't want to talk to most people, and I don't want most people to read what I have to say. I, like most people I think, want to spend most of my time in my community sharing things I find interesting with people in my community.
It's the same as tilde servers, or MUDs. They're not going to take over the world. They're small communities and most people will never even know they exist, and that's fine.
I don't think he makes his point very well or convincingly. You could, if you wanted, make a plugin for a browser that blocks non-simple tags, blocks cookies, blocks images, blocks scripts, etc, and I suspect he's wrong to say that '...such an undertaking would be an order of magnitude more work than writing a fully featured Gemini client'.
He goes on to say 'Supposing you had such a browser, what would you do with it? The overwhelming majority of websites would not render correctly on it.' - A very good point, but equally applicable to a Gemini browser.
IMO, they have confused the network protocol with the presentation. You don't need to drop HTTP in order to change the way websites look. Likewise, you don't have to implement HTTP features that you don't like (e.g. cookies). This just strikes me as another mistaken belief that rewriting code from scratch will solve all your problems.
The proposition of Gemini is that it creates a separate, deliberately incompatible, ringfenced part of the internet that is self-sufficient; not operating as a subset of a larger whole but as something sovereign and self-contained. This fosters a community spirit and allows one to remain 'within the fence' in a way that would be very hard to do if inhabiting simply a subset of the existing web.
You can read it with a Gemini browser, along with a lot of other content! There's a list of clients at https://gemini.circumlunar.space/clients.html, along with SSH and HTTP bridges.
True, but before you visit that website, you don't know anything about how the site was implemented, or what JS is going to run, or whether there will be a big autoplay video advert, etc.
When you visit a gemini URL, you know it's going to serve you a limited-capability, text-based document, styled according to your own rules.
Author of elpher here. If you were running under MacOS you were probably hitting the restriction that GUI Emacs explicitly forbids the display of coloured Unicode characters on that platform.
Since the response body is not encoded, there's no safe end-of-response marker byte(s) to use.
So content-length seems like the way to go. But knowing content-length ahead of time is difficult for dynamically generated content (CGI is supported after all), so they also need something similar to HTTP chunked encoding, which does complicate things a little.
I understand that keeping the Gemini client simple to implement is one of their design goals, but I don't think the same is true for the Gemini server. So I hope that they would consider adding these to the protocol. They could probably stuff the content-length or the word "chunked" in the <META> string.
But since they explicitly state that it is not suitable for large content, the server could just buffer the response until it has all of it. Most clients will likely wait for the whole response anyway.
I feel like that would strike a reasonable balance. Clients are still simple (arguably simpler, since they don't have to guess whether they got everything) and the protocol is still trivial. For dynamic content it would increase time-to-first-byte and RAM usage on the server, but IMHO neither would be an issue for the type of content Gemini aims for.
I agree that always sending content-length would be ideal, if it didn't come with the extra work and costs on the server that you mentioned.
Chunked encoding is simple enough to implement, avoids all those issues, and would allow a Gemini server to serve more requests faster given the same resources, or to run on hardware with more limited resources such as embedded. So I think it's well worth the slight cost in simplicity.
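To be concrete, the framing I have in mind is the HTTP/1.1 style - a minimal sketch, hypothetical for Gemini (which today just closes the connection to end the body):

    def write_chunked(sock, chunks):
        # hypothetical framing borrowed from HTTP/1.1: each chunk is
        # prefixed with its length in hex, and a zero-length chunk
        # marks the end of the body
        for chunk in chunks:
            sock.sendall(b"%x\r\n" % len(chunk))
            sock.sendall(chunk + b"\r\n")
        sock.sendall(b"0\r\n\r\n")

The server could signal this mode by stuffing the word "chunked" in the <META> string, as suggested above.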
I wrote an ncurses-based gemini and gopher client a while ago: https://github.com/jansc/ncgopher I really like the gemini protocol because of its simplicity and its text-based nature. Spending most days in a terminal, text consumption is so much easier with a gemini client than with e.g. lynx for webpages (which won't work 99% of the time).
I've been thinking about something similar for a while, but kind of disagree with Gemini's scope and implementation. I think that the lack of inline images is too limiting. In my opinion, a good replacement for the WWW should have the following features:
- ToS that strictly prohibits commercial use and advertising. We have the WWW for that, no need to duplicate it.
- Uses HTTPS or something similar. This allows use of efficient servers like Nginx.
- Based on a virtual display with fixed dimensions and orientations: 2-3 aspect ratios, vertical/horizontal orientation, and a fixed virtual pixel resolution. Every page is fixed in size and in the length of Unicode text it can display.
- Uses a structured document format with a limited number of logical tags. The client displays the page as it likes (no styling directives in the document markup). Every page written in this format is compiled into an efficient and compressed binary representation for transmission.
- Limited number of links, overlays, and images per page. Input fields with validation should be allowed. Inline images and movies are limited in size.
I'm planning to implement something like this in my forthcoming virtual Lisp machine (z3s5.com), though it's going to be a bit less general and probably not be based on HTTPS.
Currently it’s a TUI, but I will add a GUI eventually. It’s fun to have a protocol small enough that you can implement it yourself, but I currently have a weird bug where some Gemini servers work and others don’t, because they don’t seem to follow the SSL spec.
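For a sense of scale, the whole request/response round trip fits in a couple dozen lines of Python (naive URL handling, and verification is disabled here for brevity - a real client should pin certificates instead):

    import socket, ssl

    def fetch(url):
        # naive parse: assumes gemini://host/path on the default port 1965
        host = url.split("//", 1)[1].split("/", 1)[0]
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE   # demo only; pin certs instead
        with socket.create_connection((host, 1965)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall((url + "\r\n").encode("utf-8"))
                data = b""
                while chunk := tls.recv(4096):
                    data += chunk
        # response: "<status> <meta>\r\n" then the body until close
        header, _, body = data.partition(b"\r\n")
        return header.decode("utf-8"), body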
No mention of JS; not even a Gemini server in Node[0]. Finally a sane place to hang out! Kidding, but only half. The 'everything must be done in JS' attitude is fairly annoying IMHO.
> Gemini, being a recent protocol, mandates the use of TLS. There is no unencrypted version of Gemini available.
Mandating the reliance on third parties in the protocol itself does not seem to be a great choice. If I have a simple webpage which I use as a daily diary, why should I go to the trouble of asking a random third party to provide me with a certificate?
Gemini does SSH-style TOFU, you don't ask a third party for the certificate. From the spec: "Clients can validate TLS connections however they like (including not at all) but the strongly RECOMMENDED approach is to implement a lightweight "TOFU" certificate-pinning system which treats self-signed certificates as first- class citizens."
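A TOFU check is only a few lines - a rough sketch, with the pin-file format and path made up for illustration:

    import hashlib, os, socket, ssl

    PIN_FILE = os.path.expanduser("~/.gemini_known_hosts")  # made-up path

    def tofu_ok(host, port=1965):
        # fetch the server cert without CA validation; trust comes
        # from the pin below, not from a certificate authority
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        fp = hashlib.sha256(der).hexdigest()
        pins = {}
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as f:
                pins = dict(line.split() for line in f if line.strip())
        if host not in pins:
            with open(PIN_FILE, "a") as f:
                f.write(f"{host} {fp}\n")  # first use: record the pin
            return True
        return pins[host] == fp            # afterwards: cert must match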
I see a lot of comments expressing that all we need is markdown plus this or that little bit. I think that's unreasonable. It might suit Joe developer just fine for reading blogs and news, but the world benefits enormously from the ability to build complex software applications at low cost. Imagine the alternative: Welcome to Mario's Pizza - you can order right from your own computer after we mail a disc* to your house (*requires Windows 8 or newer)!
Also, some of the CSS and JS hatred is piffle. Publishers absolutely abuse these languages and it gets pretty bad on news websites especially. But I do not find that most or even many of the sites I visit perform badly on my hardware (2016 iPhone SE and a 2017 MBP). They work fine. Moreover, I appreciate nicely designed and competently implemented experiences on the modern web.
I have no interest in trading the modern web - warts and all - for some spartan plaintext utopia.
I think this might be too simple. In particular, the absence of a way for the client to specify a desired language or supported MIME types, query strings as the only way for the client to send data (what about uploads, or non-idempotent requests like registration?), and the absence of compression all seem to go a little too far to me (compression could be done using a parameter to the MIME type).
And really, why replace HTTP? The complaint seems to be mostly with HTML, so why not just make a Gemini text format, build some browsers that use it as the default instead of HTML, specify semantics for how TLS works (like custom status codes to request a client certificate), and recommend TOFU certificate trust? And maybe specify certain headers that shouldn't be used, like Cookie.
The web has become much more than a protocol for reading documents: controls that communicate both ways with the server and can be updated in real time are really useful, and that use case won't go away anytime soon. I rather wonder whether the browser is the best interface for that, or whether HTML is the best format for that use.
The answer would probably be obvious: "one software doing all is cheaper to produce and maintain than two or three doing each one its own business (and we can still blame the user hardware for the added slowness)."
I still don't understand the point of this project.
I mean QUIC is here, waiting for us.
Why throw away 30 years of HTTP ideas?
If images and cookies are to be forbidden, that should be the web developer's decision.
Choosing Gemini is like accepting restriction by law.
I feel like this is old behaviour, and that confidence and responsibility are the way forward.
In a way, Gemini could have been drafted by the writers of European Union, North Korean or Soviet Union laws. I can't believe this is a US product, as it contains too many constraints on liberty ;)
> If images and cookies are to be forbidden, that should be the web developer's decision.
Unfortunately, even today that's not 100% under your control: with the proper settings or browser, I don't have to accept cookies or load images from your site.
You seem to be forgetting HTTP is a user-initiated protocol and while a lot of it is hidden behind Javascript and common browser features, ultimately it's the user agent that initiates everything. You as a web developer can only control the files on your webserver, put cookies in headers (which the user agent has to send back to you) and possibly take advantage of some other Javascript features like DOM storage (which I can turn off).
I'm 100% on board with Gemini being a 'constraint on liberty' for those who put information online - honestly, it's not necessary for remote systems to be executing code on my system just to display text. Yes, you can't monetize it as easily. That's a feature, not a bug, for a user like me.
How is introducing yet another new technology, incompatible with what everyone else is using, any better than just creating a minimalist static HTML website?
Because it's not about the individual, but rather it's about the ecosystem.
From an individual point of view, there's not much of a difference. If I wanted to migrate my blog to Gemini, it would take very little and it would lose almost nothing. But the second you click a link to another domain you are back in the bloated web.
Once you open a Gemini page, however, you know exactly which experience you are getting. No bloated websites are allowed, so you won't see those at all, ever.
How is the issue not bloated websites? You can easily create a text only website in HTML which is compatible with every existing internet capable device. All technology can be misused, it isn’t necessarily a problem with the underlying technology. By introducing yet more standards and technologies, you end up creating bloat and fragmentation of a different kind.
There is also an allure in having a client that will never run some weird, god-knows-what-doing javascript code.
Of course you can just disable javascript in your HTTP browser. But as the author states, it's easier to write a completely new client than to disable all the bloat in e.g. firefox.
Agreed, but like I’ve said elsewhere the effort would be better spent modifying an existing browser to be lightweight. That would gain more widespread adoption than a whole new client-server protocol. You’d also gain the benefit of better interoperability with screen readers and such like.
Nobody involved in Gemini wants "widespread adoption", because that inevitably means big commercial organisations coming into the space where right now they're having a lovely time building a community.
Screen readers handle Gemini just fine - one of the clients I've tried outputs text and buttons in a GTK window, another literally just outputs text into a terminal and takes command line input, if a screenreader couldn't handle that I'd be horrified. The format is paragraph-based, there's very little styling, no guessing at what the main content on a page is.
You say "effort". There are 50-some-line Gemini clients, and more than a few people have coded graphical clients in a few days by themselves. Same with servers. Is modifying Firefox or Pale Moon really going to be a "better spent" effort?
I think screen readers can read text pretty well, also, since you've declared that to be a goal for some reason.
Unfortunately, having worked on both Chromium and Firefox, I can inform you that modifying an existing browser to be lightweight is an unfeasible project.
It's a waste of time compared to just making something new. That doesn't mean that normal browsers couldn't also get gopher support for those not interested in a lightweight browser.
> Because the problem is the technology everyone else is using
Introducing new technology doesn't change the technology everyone else is using if it doesn’t provide a reason for them to switch. Gemini seems to appeal very much to the issues of a very narrow group of users and, I suspect, an even smaller proportion of content creators. I don't see how it has any effect on the technology everyone else is using.
It's a subculture. I think the key is in the definition of 'everyone'; I doubt anybody imagines everybody in the world switching to Gemini from the web. So 'everyone' in this context means all the people who have collectively agreed to stick to minimal text format.
You could try the same thing with a community dedicated to minimal HTML--heck, you could say that's what HN itself is, almost--but it would have no effective border. You'd keep falling back into the bad old web. This makes the border explicit, instead.
Good point, but I doubt there will be any kind of meaningful adoption, I think the effort would be better spent modding chromium or such like to strip out crap from existing websites rather than creating a new client-server protocol.
I've recently switched from Markdown to AsciiDoc ... The article makes it sound like the browser is optimized for text, but isn't it really the server that renders the text that's sent? In any case, I like the idea of minimal styling, mostly because my eyes are getting old, but as others have asked: why isn't this specification a small subset of the existing internet?
I was just talking to someone about an idea like this. I even thought about making a Gtk+ widget for it, having no idea this existed. I really like the idea of viewers being the ones who decide presentation and something like Markdown (with images and videos) could work well for that. I'm not sure why we'd need a new protocol for it though, other than to escape the web.
If I wear a polo and dress pants, and I'm on the train at 8am, I'm going to white-collar work. If I wear jeans at the same time and have my partner with me, I'm a middle-class, middle-aged, probably-parent on personal business that could probably be described as "running an errand". If I'm a 42-year-old man wearing a bright purple halter top, with black lipstick and combat boots, on the train at 8am--you don't know me! Where the hell am I going? How am I living? You don't understand how to talk to me, so don't bother unless you are a 27-year-old woman with red eyeliner, green hair, blue lipstick and a Brady Bunch t-shirt on! We wouldn't understand each other!
Elaborate protocol jokes aside, it's an apt analogy. It works--for some people, some of the time. They are dreamers living life waiting for a tomorrow that may never come, but the cause gives them their identity. Chances are that Bob and Linda still have to straighten up on Monday mornings and assimilate to the general protocol. But that's the world we live in, is it not?
I like it. I don't like to deal with it when I'm at the bank or the brokerage, but Friday night, it's alright with me.
I don't begrudge them their identities. However, it's a weak, unopinionated protocol. No inline images? Just because you don't support them doesn't mean they don't exist.
Users will have their inline images, and they will build clients, and the protocol will be defined. If not by you, then by someone else.
That is the story of the modern web, and that is, funny enough, how we got here.
Nostalgia isn't productive. A new protocol isn't insurmountable, but it has to be native to the time you are living in. You don't necessarily need to inline videos and graphics, but you need to define how these should be handled, or at the very least, the boundaries of the protocol. Waxing whimsical about people creating readers to handle inline images in an ad-hoc way... lol, why? Why are they doing this?! They were just bemoaning the current state of the web! My god, have some foresight!
Make a protocol, make a browser, define the boundaries and perhaps adjacent protocols, set your browser roadmap accordingly. Stop when you feel the coverage of the desired space is sufficient. If that's text-only, fine. But you must have answers to how an image or video should be served to the good people of this community.
The thing is, when the web grew up, there was no not-web. Gemini occupies a swim lane, and if you want more, even a gemini page can link to a web page to do the heavy lifting of the modern web. So aside from making the bear dance, there's little of interest in forcing inline images. It would be not that the bear dances well, but that it dances at all, right? There is definitely a part of the Gemini community--from what I've seen--that is really excited to see the HTTP 0.9 they missed the first time grow up, but so far cooler heads are using their persuasion to knock most of that urge down. What will save gemini from turning into the modern web is that the modern web will be right there the whole time. I guess a perverse server and client could collude to deform the gemini protocol to look like http, but why? "=> http://www.mysite/recipe-with-inline-images/ Step by step recipes " in your .gmi file would do this so much better.
What if the problem statement (what is something like this trying to solve) is:
1. No tracking beyond what the server gets via standard logging
2. Document styling control in the hands of the user agent
3. Navigation control in the hands of the user agent
4. Compatibility with existing web browsers would be a bonus
Points 2 and 3 imply some kind of semantically-clear markup. Would definitely want to think of forward/backward compatibility (something the web has been fairly good at).
Point 1 means no cookies, likely nothing like JS, images need to come from the same server (either packaged in the initial request or forced to come from the same host).
Point 4 is a nice to have and may be possible if this is built on a subset of HTML and HTTP. One possibility is that if the server receives a request that looks like it's coming from a standard browser, it can serve up the page with some JS and CSS that fill in the stuff that would normally be done by the user agent.
IMHO, something built on a stack that looks like that would be a better fit for 2020 than something based off of Gopher's approach.
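Point 4 could look something like this on the server - a toy sketch, where I'm assuming the minimal client advertises itself via its Accept header (the sniffing rule and page contents are invented):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    TEXT = "# Hello\nJust a document; the client decides how it looks.\n"
    HTML = "<!doctype html><h1>Hello</h1><p>Just a document.</p>"

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # crude sniff: full browsers send Accept: text/html, so a
            # minimal client could simply ask for text/plain instead
            wants_html = "text/html" in self.headers.get("Accept", "")
            body = (HTML if wants_html else TEXT).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type",
                             "text/html" if wants_html else "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), Handler).serve_forever()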
Could most of this not be achieved by adding a markdown mode to Firefox? Whenever text/markdown content type is received it would display it using a stylesheet set by the user. It could skip cookies, etc in this mode to reduce, if not completely eliminate, tracking.
I'm waiting for folks to look at gemini, see only part of what they want, and make a markdown-native web protocol - a wiki protocol, basically. I think that would be very cool, and it is what drew me to gemini before I figured out it was really super-gopher. Like, what if there was a mediawiki protocol? You'd have your tables! Anyway, there's no one stopping anybody from trying to make that happen. The true interwiki promised in the days of c2 could actually happen!... if some people want to make it happen.
A better approach to Gemini would be to create a UGC web app that allows people to create simple, text-only, markdown-enabled content that does not track users. And I think plenty of those exist already. It would be the circa 2004 era blogging platforms.
And then someone adds some privacy violating javascript to their site, and everybody who linked to them before is now pointing their readers right back into the big bad web.
The creator of Gemini has a post about this, linked above.
My understanding of Gemini’s raison d’être is that the JavaScript-laden app-filled WWW is an anti-feature so doing anything with Gemini and web apps would be contradictory.
Well, there's nothing stopping you from serving HTML/CSS/JS web apps over the Gemini protocol.
(Nothing stopping you from building a browser that handles gemini format documents and does special things with JS links, either, though to avoid accidental execution issues you would probably need to extend the format to distinguish “links I want to execute” from other links.)
I'd like to say again that by competing with the modern web to do things that gemini isn't good at, gemini gets to lose those contests, and this should limit the success of efforts to make dancing bears out of gemini.
This would be interesting for a terminal based web alternative. A practical use would be something like O'Reilly's Safari Books Online.
The author mentions that fancy Gemini reader apps could allow linking to images; since some terminals allow images to be displayed[1], that would fit nicely too.
Why not just use text based websites and something like w3m? Well it's hard to tell when opening a link in the terminal will be useful and when it would be better to open the link in a "real" browser like Firefox.
e.g. after a git push to GitHub I'm provided with a link to create a PR. When I click that link, I really want it to open in Firefox.
[1] Off the top of my head Kitty and iTerm2 both allow image display from apps such as the ranger file manager.
I agree with some of your thoughts, and thank you for putting this together. In order for this to be successful (and I truly wish it becomes so; I very much miss the internet before the web sometimes), it needs to work side by side with http. By that, I mean your web browser needs to become a 'dual' browser, able to switch back and forth between the two seamlessly, and documents built for the two protocols need to be able to link back and forth between them. If a browser can do that, my question then becomes: is it even worth it, or do we just need more web sites out there that focus on their text and their usability?
I just wish they had not insisted on TLS and on closing connections. It defeats the purpose to me, because you are going to be delayed constantly by fresh TLS handshakes.
Yip, and gemini's markup is so simple that it is useless for me. :-(
Combining a restricted HTML, maybe with a 1-file-only limit (using data URLs for insets), with gemini's protocol could make sense for me.
"rHTML": like PDF, all in one file, maybe a bit of JS for rendering maths, but maybe there are other/better ways to do it (no maths as pictures, please!).
Or brutally different?
Distribute DVIs over gemini://?
I like the 1-file-only style in the Gemini dimension, and apart from not yet having automated image-to-data-URL conversion ... sigh ... my image-less HTML stuff already is 1-file-only.
Gopher will be faster, by nature of gemini using TLS. However, gopher is inherently less secure than gemini. If you want TLS, gemini _should_ load your document faster than https (even with minimal headers included for the https version, there is still more header overhead just to make the request, and a lot more header overhead in most responses).
> although Gemini lacks in-line images, you can still use in-line links to images
This is telling. Not only is it inconvenient, but claiming links to images are good enough for blog and recipe pages makes me think the audience is completely misunderstood.
Weird, because text-driven interfaces like airline reservation systems still exist, and porting them to a common protocol would provide immediate tangible benefits today, without the backward-looking thinking about media.
You're pretty much correct. Even in this thread there are people saying "oh the Gemini browser can decide whether to render inline images", which is exactly what Mosaic did.
The main advantage this has is that it will not get popular enough to experience the scope creep that the web did so will probably remain relatively pure.
This seems like a fair enough idea, except that there should be provision for "rich content boxes" that might contain images, forms, WebGL or other things that Gemini intentionally forgoes.
Having a "web browser" that can't do interactive content at all (no, server-side CGI doesn't count) is a non-starter in 2020. What if I need an interactive chart?
The overall web itself is going WASM for a lot of projects. Pure javascript that is actually readable isn't exactly common now either, as more frameworks build multiple layers on top.
I see this as an interesting step sideways. Less is more and all that. Perhaps check in 2 years from now and see what major shifts it caused.
"When I picture it in my head I think of the early web as more of a library.
Over time it has transitioned into a shopping mall."
- chris_f (Hacker News comments)
There are still books in a shopping mall. You have to know where to look and not get distracted at every corner, though.
I believe there is a lobste.rs mirror (not sure about Hacker News). It includes the comments, but in a read-only presentation (gemini is more or less inherently read-only, though there is a separate protocol called Titan that handles writing, for gemini servers that support it).
Is gemini featureful enough to build a sign-up form where I could pay for stuff? I ask only because it seems like the commitment of commercial players seems like a prerequisite for the success of stuff like this.
Almost the entire point of this is to REMOVE the ability of "commercial players" to have any role or interest in the platform at all. They have already ruined the web and society as a whole (but that gets into a whole argument for/against capitalism... take a wild guess where I stand).
I'm not a great fan of capitalism, but really my favourite thing about Gemini is the seeming inability of commercial players to surveil. I don't like the idea of building in the price of content as being nothing - I want to donate to bloggers etc.
A way around this is to mix gemini and http in a single browser. Only one can be rendered on a page, but they can link to one another, with a warning when you transition between the two. So blogs could have a donate button that just hits paypal etc.
Just make a regular old HTTP(S) site. You can host plain text files and they'll work just fine in any modern browser. Or you can use a subset of HTML and CSS for accessibility reasons (screen readers, etc).
I'd find an effort to standardize text/terminal-friendly, accessible HTML+CSS much more appealing. And the TLS requirement seems to make it harder to get started on pure "fun" projects.
One of the goals of the project seems to be eliminating all of the ways the current web can violate users' privacy. Without encryption, I don't think Gemini could claim to be better.
About TLS: it means the spec requires Gemini to be encapsulated in TLS; the same way RFC 2616 recommends HTTP be encapsulated in both TCP and IP ("HTTP communication usually takes place over TCP/IP connections"). Yes, technically they are abstraction violations, but they are commonplace for network protocols.
About network instability: it doesn't refer to congestion and packet-loss handling in TCP, but to the "Range" feature of HTTP, which allows downloading only a subset of a file, e.g. to resume an interrupted download.
I think you might have misunderstood the latter, which just refers to the Gopher model having no transfer resume functionality, such as HTTP Ranges, meaning an interrupted transfer must restart.
Won't work. As soon as it gains any traction, hackers/crackers will figure out how to exploit people. Marketers will figure out how to optimise it to sell you crap you don't need. Left-wing/right-wing censors and moderators will silence at least a portion of the users, if not the whole project, because "reasons". Oh yeah, the government will also be there to ensure that you can't do anything without them knowing about it.
Honestly the web could be great with the technologies that are out now. At this very moment. But it isn't.
I feel similarly. Projects like gemini work because they are niche, not despite it. If it became mainstream, the same forces that drove the web to what it is would take hold.
Yeah, designers would hack around the limitations until the hacks become official (against the original protocol inventors' intent, if necessary), because every company wants to control the experience of their content to an unreasonable degree. I recently turned off the "Allow pages to choose their own fonts" setting in Firefox, and it's refreshing to read text in O(1) rather than O(N) fonts. Now if only similar tweaks were as easy to apply to web forms, menus etc.
Thank the gods it was! I'm still traumatised by the <blink>-tags and dancing banana GIFs of the old days...
I mean, sure - 404 pages with dry "Oops" or "Not found" messages are way more boring, but I really don't miss getting to see GOATSE or TUB GIRL (don't look up either if you're not familiar with it) every other time I click an outdated link...
There's very little that works if its development is driven by a community as large and diverse (in terms of sometimes opposing interests and goals) as "the public".
Meanwhile, the other post at the top of HN is a visual explanation of sphere eversion [1] that relies on the web being a system for distributing _sandboxed applications_, and so would be completely impossible in this Gemini system.
I don't get this fad of hate, to be honest. The web, fucked as it is, works. Obviously there's a load of cruft, because humans will try and exploit any system, but if our only problem is our Macbooks spinning their fans up a bit, it's a beef that really only irks purists. Day to day, talking to people, people don't bitch about the web other than about invasive adverts.
Frankly I think a lot of commentary stems from backend people being annoyed that frontend people earn a bunch of money for work they deem insignificant.
> I have really come to hate the World Wide Web. It is bloated at every level!
I have to admit that this opening statement almost stopped me reading the rest of the article. Which is a pity, because Gemini does sound like an interesting endeavour. I went searching (via the bloated World Wide Web) and found the Gemini FAQ page (https://gemini.circumlunar.space/docs/faq.html) which (in my opinion) makes a much better argument for considering this alternative approach to delivering content over the wire.
> I could totally see Gemini being used as an alternative, particularly for the non-commercial individuals who use text as a primary medium. Blogs, poems, recipes, tutorials are perfect for the Gemini format.
The one big thought I had - as a content developer (I write poems: I will never apologise for this) - as I read through the FAQ was: "Yet another distribution channel to maintain" ... because Gemini reminds me (probably unfairly) of WAP and its Wireless Markup Language. Back in the day I was very keen to inflict my poems on everybody in the world: I posted them on Usenet (mainly RAP), various web-based bulletin boards, a Blogger Blog (with a blogroll!) and, of course, my own poetry website. Reaching out to a mobile-centred audience was the next logical step. But the mechanics of the effort defeated me, and I soon grew to truly detest WML and its stupid limitations.
I suppose, in 2020, adding another communications channel to my current website's toolchain should be a lot easier ... but I don't want to do the work. If the people around Gemini can get more passionate content creators to do the necessary work, then maybe it will have an interesting/exciting future?
If you're seeing Gemini as a "distribution channel", it's... probably not really for you? What's there, right now, is a community of people, sharing ideas and talking to each other. Pretty much all the content there was made to be shared with that community. And I very much have the impression they'd like to keep it that way as long as possible.
I'm glad they're building Gemini - a monoculture of anything is an unhealthy environment and projects like Gemini can help break up our current HTML/CSS/JS monoculture. But it is also a distribution channel: a method for content creators to get their content out to content consumers.
At the moment the 'coolest' way for poets to share their poems with their readers is through Instagram[1]. I've tried it; I don't like it. But I don't resent the poets who have made the channel work for them - anything that gets people reading more poems is a Win in my book!