We already have a transport protocol that "everybody" agrees on: HTTP. We can define a specification for a subset of it, and a subset of HTML/browser features we want, create tools around them, and form a small community.
And I'm not even talking about all the third party spyware.
I encourage every web developer to try running NoScript for a week. You will find it enlightening.
For frequently visited sites you can save which bits you want to allow, so it's not as onerous as you'd think.
Blocks Google Analytics and Facebook's poison by default.
...or let's just P2P our Emacsens... the last non divisive browser... :-P
The fact that this has attracted a fair number of gopher users (who have always very strongly opposed http/s) would seem proof enough of its success, imo, at least within particular circles, and at least within the context of its goals. It was never intended to draw people away from the web; it was intended to be a sort of superpowered gopher.
And if it's that important that EVERYBODY deploys JS to render client side to avoid the refreshes, why isn't this handled at the browser level in the first place (i.e. declare "ExtendDisplayTime"-something on your document and the browser should replace the screen content only AFTER the new page is completely painted).
But at the core of it, the web is a document-display system, and back-hammering and shoe-horning apps that masquerade as documents will always be painful.
For example, I'm thinking of a Roam Research-like content platform, without tracking and ads. It would be interesting to see shared content around that.
Sometimes it is just the latest trend...
The whole point of this project is to shrug off legacy. Yes, that means reinventing a few wheels.
I'm more terrified of the average electron app or java app than I am scared of a single browser tab.
Why? I actually think it’s pretty clean. You have a one-to-two letter opening tag and an equivalent closing one.
Markdown looks aesthetically nicer and is easier to type, but it’s less precise.
Maybe it would, slightly. But would it make the browser any more complicated than the unessential features that Firefox has in its default build right now?
Pocket integration, FF Sync, some screenshot function?
The web is so broken that my Firefox even has some Protection Dashboard thing (about:protections). Not that I've ever used or noticed it before.
However, the Web still has its place for Google, wikis, and shopping. The three have one thing in common: they need multiple tabs to keep data and information around.
I'm tempted to create a new GeoCities and see how that goes for non-technical folks
Meaning, in the average folk psychology: of course a simpler, cleaner web has value, just not to them, as far as I believe.
This is like pushing for stone cart wheels when spoked wheels are already available.
If their issue is with other forms of tracking, they could implement a browser which supports a small subset of the HTTP protocol that does not allow any tracking of any kind.
The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way. It's impossible to know in advance whether what's on the other side of an https:// URL will be within the subset or outside it. It's very tedious to verify that a website claiming to use only the subset actually does, as many of the features we want to avoid are invisible (but not harmless!) to the user. It's difficult or even impossible to deactivate support for all the unwanted features in mainstream browsers, so if somebody breaks the rules you'll pay the consequences. Writing a dumbed-down web browser which gracefully ignores all the unwanted features is much harder than writing a Gemini client from scratch. Even if you did it, you'd have a very difficult time discovering the minuscule fraction of websites it could render.
That misses the fact that people don't want any of that; they just want to continue using the current infrastructure, restricted to a limited subset of the features it makes available.
And that doesn't justify the effort of reinventing the wheel.
I imagine the exclusivity of something like this is part of the appeal. They don't want to be part of the current infrastructure, they don't want to interact with it, or anyone on it.
They did. That's the Gemini protocol. I recommend reading the spec to properly understand the constraints they were trying to meet.
> Now, what does Gemini currently have to offer? The best way to find out is to head over to the official site: gemini.circumlunar.space in your Gemini browser.
And even in the case of images, the protocol could have mandated that images cannot be accessed cross-site. The same restrictions could have been placed on other aspects such as the style sheets.
Please no. No more everything-over-http. There are other ports besides 443; there are other protocols besides HTTP.
The Internet was once a general purpose peer-to-peer network, and we should try to keep it that way.
This is similar to the driving motivation behind RSS: it was supposed to be something for simple static sites to put up, such that gateways could then poll it, before turning around and doing something more technologically-complex to actually deliver the events, like using WebHooks, or sending emails, or doing whatever PuSH does.
Big Browser is the cause of so many problems that people on HN complain about. Without the control over Big Browser that certain offending corporations have, their empires are considerably weaker.
Mom-and-Pop Browser is probably not an accurate caricature. Maybe something like End User Browser is more apropos.
If most HTTP is being sent over TLS these days, and Gemini is also over TLS, one could argue there is no "incompatibility". Gemini just doesn't need all the extra ad-hoc functionality that has been built on top of HTTP. It is intended for data retrieval, not something that relies on help from the 20+ million lines of code in Big Browser.
It looks to be a reimagined gopher with some early web parts. Which makes me wonder how close their protocol is to the gopher one.
This means you can create a far simpler protocol semantically: requests are just the URI; responses are just a header with a two-digit response code and the MIME type, and the rest of the response is the content as raw text. That's the entire protocol grammar.
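For illustration, here's roughly what a complete client for that grammar looks like in Python (an untested sketch; a real client would also do TOFU certificate pinning, which this skips):

    import socket
    import ssl

    def gemini_fetch(host, path="/", port=1965):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False       # Gemini favors TOFU over CA chains;
        ctx.verify_mode = ssl.CERT_NONE  # do proper pinning in real code
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # The entire request: one absolute URL, CRLF-terminated.
                tls.sendall(f"gemini://{host}{path}\r\n".encode("utf-8"))
                data = tls.makefile("rb").read()  # server closes when done
        header, _, body = data.partition(b"\r\n")
        status, meta = header.decode("utf-8").split(" ", 1)
        return status, meta, body  # e.g. ("20", "text/gemini", b"# Hello\n")

That's the whole thing: open a TLS socket, send the URI, read until the server closes.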
This allows hyperlinking to the accessible version of a page, and using different default browsers for the different protocols, so that I (as someone who wants to use Mom-and-Pop browser) can easily fall back to Big Browser when necessary to view a page that won't render over accs://
What about using something like https://gemini.site.com ?
And part of that restricted spec would allow linking only to such links, to stay within the network?
And the biggest challenge with Gemini is creating a great search engine. But then, searching site:gemini.* KEYWORD gives us the power of Google and other search engines.
Doesn't work over time. You link somewhere, the other site owner a month later decides they want to use Google Analytics, now you're linking your readers back to the web they're trying to avoid.
> And the biggest challenge with Gemini is creating a great search engine.
Not really, it's discovery in general, which can be solved in many ways that don't involve search engines; Wikipedia's references are often a good discovery tool, for example. I use aggregators to find Gemini content and follow discussions which are happening across the space.
What if instead of a direct link, there was an intermediary that verified that the source and target of the link conformed to spec? If either side didn't conform, the link would just not work. Ideally the intermediary would be built in to the source's web server for privacy reasons. If the target site decided to quit and break spec, people could still access the site from links posted outside of the network.
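To sketch that idea (everything here is made up for illustration: the marker list, the names, the flow):

    from urllib.request import urlopen

    # Hypothetical "clean link" intermediary: resolve a link only if the
    # target still appears to conform to the restricted subset.
    FORBIDDEN = (b"<script", b"<style", b"onclick=")

    def resolve(target_url):
        with urlopen(target_url, timeout=5) as resp:
            page = resp.read(256 * 1024)  # sample the first 256 KiB
        if any(marker in page.lower() for marker in FORBIDDEN):
            return None  # the link goes dead instead of leaking readers out
        return target_url

Obviously a real verifier would need to parse rather than grep, but the shape is simple.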
The entire purpose here is to build a community of people who care about this sort of thing, who write content for that community, where taking your content away from that community is not an easy decision of adding a <script> or <style> tag.
Maybe there's another way?
For example, Gemini could only link through a centralized link-management server, and that link server would verify links to be "clean"; if a link isn't clean, it would go dead?
Of course, that depends on whether said link gets most of its traffic from within Gemini or outside.
Have you ever spent any time in small communities? It's lovely to just... not have to care about gaining followers or making arbitrary counters go up or whatever it is people do, and just talk/create for the sake of it. There's no "brand" to care about.
But will that work, and compete with the web for knowledge sharing communities? I'm not sure.
But many users will expect search. And it's really hard to change people's minds.
But with a lightweight protocol like this, it seems easy to set up a proxy that would let anyone access the content via web.
- https://proxy.vulpes.one/ (also does gopher)
My question is: what's stopping us from doing that?
> Gemini lacks in-line images
This is the only part that I don't really understand about Gemini. Even the most basic printed publications can include illustrations. <img> got added to HTML very early on because sometimes it's hard to share some piece of information in anything but a visual form.
If the fear is that in-line images would lead to frivolous use as ads or "useless multi-megabyte header images", then maybe a better approach would be to limit the number, or size, of images on each page? Some scientific publications do exactly that in an attempt to force the authors to focus on selecting only the most important images that need to accompany their papers.
The best possible limit is "must convince the reader to click it".
On the other hand, it made me think of old Usenet posts and discussions. That was another medium where you were limited to plain text only. Posters were often forced to resort to awful ASCII-art drawings of things they wanted to explain, and that was just a horrible experience altogether (not to mention how fun those drawings are to decipher today, when modern archives have mostly messed up the whitespace).
Adding images requires more requests and breaks the concept of "one url/document == one request". I love that I know that my client will do nothing I do not tell it to do.
If you want to use gemini and you want inline images I believe https://proxy.vulpes.one does inline images of some form or other.
That said, images have other issues beyond causing page loads/requests to be unpredictable: they are an accessibility nightmare (as we have seen on the web).
Audio is an accessibility nightmare for people who can't see, text is an accessibility nightmare for people who can't see or people who can't read, German is an accessibility nightmare for people who can't speak German.
At some point we have to accept that not every way something is presented is going to be equally accessible by every person, but the solution isn't to just decide we should jettison a rich form of communication because there is a small subset that can't fully benefit from it. Even books that are expected to be used by people who can see all the images usually describe the purpose of the image, what it is illustrating, and why it was included.
By this line of reasoning, you would have to manually approve every init process that your computer would want to start every time you boot it up.
"images" == "client doing things I don't tell it to do" is completely false. Clients have been built that have configurable policies for loading images and scripts, and they're conceptually very simple and easy to use - e.g. "don't load images by default, click to load temporarily, control-click to load and permanently whitelist" is an example of a user-agent policy that not only supports images, but conforms to your extremely convoluted definition of a user-agent "do[ing] nothing I do not tell it to do."
> requires more requests
Is the purpose of a document browser to minimize requests, or to actually serve useful information? Images can encode data that cannot be encoded in text, and a vast quantity of information is much more easily read and understood in graphical form. If you want to minimize requests, then just don't use the web at all.
Also, this isn't even necessarily the case. You could encode images as part of the page, as base64 or something.
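That's just the standard data-URI trick, e.g. in Python (the filename is a placeholder):

    import base64

    with open("diagram.png", "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")

    # One document, one request: the image travels inside the page itself.
    page = f'<img alt="wiring diagram" src="data:image/png;base64,{b64}">'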
> they are an accessibility nightmare (as we have seen on the web)
The web supports alt-text for images. When people don't provide alt-text, that's not a technical problem, that's a social one.
This is solved by HTTP/2 multiplexing https://en.wikipedia.org/wiki/HTTP/2
> breaks the concept of "one url/document == one request"
I don't think anyone cares about this "concept"
I exploit this in my Unnamed Gopher Client, a client for the predecessor of the Gemini protocol, where I render links in a familiar files/folder format:
And there are many more creative things that can be done with this.
A website can be rendered at different resolutions, with or without stylesheets, in dark mode, in printer-friendly formats, in a text-like format, with user stylesheets, with some elements hidden, as plain text, etc.
The concept of a user agent that gives the user much greater ability to choose how they want to view content could mean each user will:
* Pick their own font, font size, line spacing, margins
* Pick their own text color and background color. Like dark mode, high contrast, etc.
* Choose how linked images are shown: inline, click to load, load in new window, expandable thumbnails, etc
* Choose how sections and section headers are displayed. Add a table of contents? Add a button to jump to the next section? The user can choose.
I like reader view, which gives me the ability to choose how I view HTML, but only when reader view can figure out how to extract the content (sometimes disastrously missing paragraphs of text...)
This thread is the first I've heard of it, and up until this comment I was thinking in my head, "sheesh, what kind of value proposition would justify that amount of work? I'm just not seeing it."
It's kind of like what REST was meant to be. More about entities than verbs. Cool. I get it now.
However, in my experience, the DOM for some sites is such a mess that trying to apply user preferences is a hack, e.g. reader view accidentally losing text. Does Opera's implementation always work? That would be cool, although I'd still avoid Opera for privacy reasons.
Gemini seems to throw away all that complexity, which makes user customization easier. I.e. the problem is HTML/JS/DOM complexity, not a browser or its extensions.
Related: why can’t we just point the blind at a protocol optimized for just sharing text documents?
Managing image assets is tedious. The web design community still by and large hasn’t figured out a great standard way to do for images (versions with reversible/cherry-pickable diffs) what git does for code.
Instead, diagrams and formulas could follow the lovely ideas of mermaid, graphviz, dot, and mathjax inlined into the markdown as text. Tooling for VSCode handles inline diagrams beautifully for Markdown already.
And then, inline SVG would let you illustrate nearly anything.
WSJ got by fine without photos, as did most journals for most of my lifetime, and Kindle books mostly don’t have them today. I wouldn’t be too quick to say a medium has to be filled with photos.
If you mean something like Base64 encoded inline images, then those might be viable.
If we can have spam filters for email, we can have ad filters for images.
Streaming video is probably here to stay. /s
I would really like to see structured text that is self-descriptive (e.g. this is the document title, this is a paragraph, this is a header, bullet list, etc.) but have no ability to influence HOW those things are displayed- eventually maybe we'll have browsers that can support rich theming, etc.
Others have noted that lack of images is an oversight. Perhaps the language needs a "binary file download" structure, and if the binary in question is a media file, then the browser could choose to display it. Maybe signal with mime types?
Worth noting that this is modern browsers/web, and was initially not like this.
The term "user-agent" comes from that the _user_ has control over the experience, no matter what the publisher thinks. The agent (browser) acts for the user, hence user-agent.
User-agent CSS files were rampant back in the day, when a lot of content was unstyled. So you could navigate between websites and they would look the same, as they would use your user-agent CSS files.
users then predictably wanted pages to look different, to have style, and that's likely the principal cause of user stylesheets' decline, not corporate coercion. that's not to argue against user stylesheets per se, but that they'll likely never have wide usage.
What I'm saying is that there can't be a format that isn't hacked and exploited to allow the publisher to do what they want, because ultimately it's their content so they control it. Maybe limiting the existing tags in HTML is a good idea (AFAIK that's one of the strategies of AMP) but reinventing a structured format will just lead to HTML-but-less.
If you want to give control to the user, then you have to do that from the User Agent: forbidding any publisher-provided styling and allowing only certain tags are actually going to do what you want, instead of inventing yet another format.
It's also the reason why it caught on. On one hand people reject the ability to express individuality on the web; on the other, a similar crowd is nostalgic about GeoCities and praises similar revivals. It's either one or the other.
> I would really like to see structured text that is self-descriptive (e.g. this is the document title, this is a paragraph, this is a header, bullet list, etc.) but have no ability to influence HOW those things are displayed- eventually maybe we'll have browsers that can support rich theming, etc.
How about publishing markdown over HTTPS? Then make a client that renders just that?
Hence the idea of a separate markdown-only browser. I don't think HTTP is the problem here, so it would be better to reuse as much existing tech as possible.
Note: Personally I don’t think it would catch on, the convenience of handling everything in one program is just too high.
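Still, it's striking how little code such a client needs. A terminal sketch, assuming the third-party requests and rich packages (the URL is a placeholder):

    import requests
    from rich.console import Console
    from rich.markdown import Markdown

    def browse(url):
        # Fetch raw markdown over HTTPS and render it in the terminal.
        text = requests.get(url, timeout=10).text
        Console().print(Markdown(text))

    browse("https://example.com/notes.md")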
I like to think of books as an example where content is way more important than presentation. Most websites these days spend a lot more effort on how they present than on what: in the end, a 1,000-word article now has a very complex architecture behind it, hundreds or thousands of times larger than the content itself.
For that matter you can serve images or whatever binary files as individual requests (that is, not inline with another response).
I've been playing with it and I rather like it.
If Gemini sounds like a dumb idea, I'd highly encourage you to move along. If Gemini sounds intriguing, you'll probably have fun.
Lots of opinions in this thread, but doesn't look like many armchairs have tried it. Personally, I've enjoyed the rabbit hole.
TLS: keep it as it is. Crypto is hard and TLS is proven crypto. Mandate something like 1.2+ and be done with it. Every mature language has a TLS implementation or bindings.
HTTP: use a subset of HTTP/1.1. Parsing is very easy: it's just a bunch of lines. Full HTTP/1.1 is hard and probably unnecessary. Things like connection reuse are not necessary and should be excluded for simplicity.
HTML: use subset of XHTML. It must be valid XML, so parsing is just one call to the XML library which is available on every language.
CSS: I don't really know, that's a tough one. Something like CSS 2 I guess. There must be a balance between complexity of implementation and richness of presentation.
If you take this position to the extreme, you can even reduce HTML + CSS to some kind of markdown-like language, but I don't think that we need to go that far.
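As a sketch of how small a client for that stack could be (untested; it ignores chunked transfer encoding, treating that as part of the excluded "unnecessary" features):

    import socket
    import ssl
    import xml.etree.ElementTree as ET

    def fetch_xhtml(host, path="/"):
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # "mandate 1.2+"
        with socket.create_connection((host, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall((f"GET {path} HTTP/1.1\r\n"
                             f"Host: {host}\r\n"
                             "Connection: close\r\n\r\n").encode())
                raw = tls.makefile("rb").read()
        headers, _, body = raw.partition(b"\r\n\r\n")
        # "Parsing is just one call to the XML library":
        return ET.fromstring(body)  # raises if the page isn't valid XML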
A good WWW provides linked documents in a format that is easy to display and process (e.g. extract links, text, headlines, images, etc.) and makes it impossible to hide content. If you publish a document, it should be publicly accessible.
So like I said, you've basically just described gemini :)
Re: XHTML: as somebody pointed out here, there are rules to "normalize" unbalanced HTML5, but they have to be implemented and add to the mountain of "implicit" knowledge one has to have and implement...
A body with a sequence of <img>, displayed vertically. You can look at imgur and see such a format has been used for simple messages, blogs, collections of memes, recipes, news, informational content, engineering content, fitness advice, etc.
No CSS, no nothing; the user agent takes care of formatting them according to the display device, etc.
I can't think of anything more flexible, simpler, and yet capable of doing 90% of what the static web can do today. You can even have a comments section: just add an <img> to the bottom, with the commenting user and timestamp in the alt text.
Whether it makes HTML less like a document is up to the author to decide, IMO. Some JS snippets are pretty useful, some are not. You can use JS to implement an interactive learning system or you can use JS to spy on users.
HTML5 defined a concrete, final mechanism for parsing tag soup and presenting it as a standardized tree. While the library itself isn't simple, using it is, and being standardized, most non-fringe languages ought to have a library for it by now. It should probably use that, for all the same reasons trying to use XHTML didn't work the first time. XHTML raises the bar on writing correct HTML too far.
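For example, in Python that standardized tag-soup parsing really is one call, via the third-party html5lib package:

    import html5lib  # pip install html5lib

    tree = html5lib.parse("<p>unclosed <b>tag soup", treebuilder="etree")
    # The parser inserts <html>, <head>, <body> and closes the open tags
    # exactly as the spec's recovery rules dictate.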
I know it is an ideological choice to only have text, but being able to embed standard image formats (in a totally plain, non-fancy way) would increase the utility of this hugely. They mention blogs and tutorials and recipes here; those would benefit hugely from having simple inline images within the body of the text, just like you expect in a newspaper etc.
I guess I am not the target market then.
If I were designing it, I would say: "you can have images, but they always display as a 'block element', with nothing to either side. No worries about wrapping text; no background images under other elements, etc." I think that keeps the spirit of simplicity.
It's text. The client displays that text and renders links, headings, etc, however it wishes. If it really wants to, it could just not format them at all. There's a gemini client made for plan9's acme text editor that doesn't render links, and instead displays them verbatim, because the plan9 plumber can handle the hyperlinking aspect. All of that is eye candy and fluff.
If a client finds a link to an image, it can in-line it if it wants. If you wrote a client, when it found a link to an image, it would in-line it "with nothing to either side." That's not something that has to be specced.
Same with how headers are displayed (maybe I want folding or something), whether a ToC is displayed, colours, fonts, etc.
The point is that the user can decide all this stuff, without having to hack it around the author's own styles and scripts.
A technical paper is what I was thinking of there. Furthermore, since Gemini apparently lacks support for mathematical notation, images would be necessary even if the paper doesn't intrinsically contain non-textual figures (e.g. pictures, charts, or graphs), which are common though not universal.
I can even understand why they did it. To keep the doc format very simple.
I hope that more clients will add unique rendering features that will turn this drawback on its head. It could be in-line rendering or a gallery-like feature.
There was a protocol for searching documents, a protocol for looking up someone's email, it was all partitioned out.
The web was seen as just another fish in the pond.
After the web became big, these things still lasted for a while
However, spam and crooks changed it all. Usenet became useless. Full DNS domain lookups (you used to be able to get a list of all the subdomains of a domain through the command line and just browse them out of curiosity) and using whois for email (you could just query for a name and get an email address over whois) are all gone, because there are too many snakes trying to scam people and flood the network.
Things used to be much better tools but it turns out they were too good and had no defenses. The dream of everybody connecting has sort of been retracted a bit. RMS, TBL, Torvalds, I could just send them an email in the 90s and they'd respond, it was pretty remarkable.
It's not the case any more. Not even minor players in history (such as the author of a 25-year-old book) respond to my questions. People just don't do that anymore.
Spam, harassment, criminals, ill will, this all has to be a big priority if we want to try it again.
The future should be the dreams of our better angels, building better tomorrows...
I don't think this stopped just because of spam, harassment, or other bad behavior. A big part of it is just community size. When the community of internet users was smaller, you could interact with everyone who reached out in a reasonable amount of time. As it got bigger, that is no longer possible because of the sheer number of people.
This is an exceptionally good point. Security is also one of the top problems with the web (alongside the asymmetrical difficulty of hosting content vs consuming it and the lack of consistency for web content). The problems with the web are mitigated by "good enough" solutions from browser vendors, out-of-band third party extensions and even some services on the web itself (e.g. archive.org, though I don't know how sustainable that is, and it's far from perfect).
While using Castor, www URLs would auto-open in Firefox, and the other way around.
I've always wished browsers handled site menus in their chrome, so that the document can focus on content, not navigation. It's the browser's job!
For a while Opera supported these related links in the head of some pages, but devs were unable to add their own; it was limited to a small number of standard items such as Index. These were shown in a browser toolbar.
Nonstandard navigation has always been a point of friction for users, as it precludes universal access by forcing them to relearn how each site works.
So the selling point of Gemini is that by staying rudimentary, it can limit its appeal, and subsequently stay unpopular enough to be more like "The Old Web". I think that's worth noting, because you could get trapped into thinking this is a technology problem, but it's really a people problem.
However, Gemini does not exist in a vacuum. The web will be there. There will be social media platforms, multimedia, awesome webapps and all that. And Gemini is just text.
When you have the choice between easily consumable infinite multimedia and just text, you only pick the latter when you really care about the quality of text content. It's not sexy so all the spammers, content marketers and ego boosters have nothing to gain on Gemini. And so there can be this esoteric little corner of the internet, with down-to-earth text content written by ordinary people.
Your implication may be true, but Gemini will never become as popular as the web (well unless the web becomes extremely unpopular at the expense of something else besides Gemini).
My wife would see "no images" and that would be the beginning and end of using Gemini for her.
1. Gemini is great, no spam and commercial crap
2. Someone realises it would be great to have simple inline images, and makes a cool client that supports “gemini+img” syntax that they make up. The syntax gracefully degrades, so you can use it in your docs even if your users aren’t using the new browser!
3. Protocol is technically text-only but in reality everyone uses img-enabled browser
4. Repeat with basic styling, then simple scripts. Eventually authors rely on more and more “optional” features and syntax extensions, and we end up with a similar feature set to what we have today.
5. Advertisers move in as Gemini gains mainstream adoption, and we’re back to www
How is a network protocol proof against being used to transport CSS files? Does the network stack inspect what you're shipping and ensure you're only sending 100% Pure Plain Text?
> The Gemini transport protocol is unsuitable for the transfer of large files, since it misses many features that protocols such as FTP or HTTP use to recover from network instability.
Isn't that TCP's job? Is this person saying Gemini doesn't use TCP?
Back in the Gopher days, my "Gemini browser" would be my Web browser. That was one of the reasons Web browsers took off: You could use them to access all of the information on the Internet, including the WWW, Gopher, Usenet, and Email. Only more recently did Mozilla morph from the Netscape Communicator software suite into the slimmed-down Firefox browser without email, spinning off Thunderbird in the process, and only much later did Firefox drop Gopher support from the core binary.
maybe he’s talking about higher level features, like the possibility to restart a download from a certain point, without redownloading the initial part? haven’t used this since dialup days, though
> Gemini has no support for caching, compression, or resumption of interrupted downloads. As such, it's not very well suited to distributing large files, for values of "large" which depend upon the speed and reliability of your network connection.
That said, I can't tell whether or not I've used it recently. I know I don't use it when I play with personal projects, but I don't know what other sites do because I rarely have a console pulled up in my browser when I'm just using it, rather than developing.
The Gemini specification includes its own format for pages, which is a text-based scheme inspired by Markdown and Gopher menus. You can use the Gemini protocol to transmit things other than Gemini pages, sort of like how you can use HTTP to transmit PDFs and Word documents, but you wouldn't build your whole site out of them. (At least that's my impression, I haven't gotten around to actually visiting many Gemini sites yet.)
It's just like the web, the transport protocol (HTTP/S) can be used on any file. But there is a separate spec for the document format (HTML etc.). You could transport CSS over Gemini, just don't expect any of the browsers to render it. Just like how web browsers won't execute alternate scripting languages natively.
> Isn't that TCP's job? Is this person saying Gemini doesn't use TCP?
I didn't really elaborate on this point while writing, because I had nothing to add. I will quote from the project's FAQ:
>> Gemini has no support for caching, compression, or resumption of interrupted downloads. As such, it's not very well suited to distributing large files, for values of "large" which depend upon the speed and reliability of your network connection.
Hopefully that clears up what I meant.
> Back in the Gopher days, my "Gemini browser" would be my Web browser.
You might be interested in Castor. It's a browser for the minimalist internet. Rolls support for Gemini, Gopher and Finger all in one.
I can understand why FF removed support. But hopefully smaller applications, like Castor, can fill this gap.
Which honestly is pretty silly, as lots of caching is about reducing latency for small files, not saving bandwidth for large files. I suppose it matters less if the documents are self-contained.
If Gemini is ever of even domain-specific serious use, I'd expect both the format and protocol to be added to what is supported by existing major web browsers (it can't be both tractable for small implementers and intractable for Apple/Google/Mozilla). Those browsers, as it turns out, know how to support the combination of HTML/CSS/JS just fine and won't likely forget just because a different transfer protocol is involved. Presenting a DOM mapping for Gemini-format pages, and exposing it at least to extensions even if there is no way to include page scripts, doesn't seem unlikely either.
Go read the mailing list. Implement a client in your favorite language, or that weird language you've been wanting to try...
Write a blog, or some fan fiction, or a screed or some poetry. (There's a choose-your-own-adventure you can play.)
Have fun with it.
Setting it up on a separate protocol / markup allows you to reason rigorously about what kind of privacy, features, and protection you get as a user, rather than relying on the goodwill or current promises of your content provider.
It'd be better to declare a new doctype and use reduced HTML. Just make it simple and make ads and JS bloat impossible.
Enforcing SSL is kind of silly, since browsers are starting to do that anyway, independent of this. It's orthogonal.
If you open the Gemini link posted below about why just defining a new doctype wouldn’t work, give it a read.
If you're building a protocol that enforces certain strict standards, like in this case being text only because the internet is 'bloated' according to the author, then the only point of having it is adoption beyond your community.
If all you want to do is communicate with ardent non-bloat advocates you can already do this on the regular internet, because everyone in that community does it voluntarily already
There's no point in codifying standards for a community that follows your standards to begin with
This would seem to be a similar sort of issue to when people say "come chat on IRC", or use a mailing list to communicate about a project, or whatever else - you're not going to use those if all you ever want to use is a web browser. And that's ok in my book. I'll hang out with the people who do want to use those tools.
If Gemini catches on, someone will write an add-on for Firefox that reads it. And then it's just part of the Web that is fast and looks a little different.
And that link is a long-winded way of saying "It's good to put an artificial barrier in the way."
Just because you can do that, does not make it part of the Web -- it's still a different space. Those portals are basically web-based clients to the protocol, which means they're still bound by the rules of the protocol -- they're not going to have JS in them, for example.
So it's absolutely become adopted beyond its beginning community.
On the technical side of things, it looks like an old battle replayed with older weapons, just going down the same path HTML took. "Oh, but we will not add more features." OK, but then people won't use it, because they can already serve text, markdown, etc. and chill out, knowing that if they later need to serve images, video, or graphics, they can do it with HTML.
And I say this as a person who is also trying to create some alternatives to the web. But instead of going back to the nineties, I tried to think about how the technology of 10 years ahead would look. I've probably not made it, because it's really hard to push the envelope when things are almost at the state of the art, as the Web is. But I also don't think the answer for the future lies in the past.
So if two folks decide that they will create a web engine in this new language they like, it won't be an impossible goal, because there's this simple version of the spec, with far fewer features.
The people behind this might be very good at convincing people, and with true believers working really hard, this thing can float for some time. But it will be really hard to get this out of a small niche.
Anyway, I love the thinking behind this. The meditation, the koan, is really on the right track. We need more rebels and fighters on this front. But I just can't see how this can compete as a subset of a massively popular and deployed protocol with clients everywhere. How can it really differentiate itself, apart from what the web today can already serve to people?
When you link to a Gemini URL, you're linking to something you know can't be replaced with something privacy-violating in the future. The worst that can happen is the server shuts down, which is a very different failure mode. And someone is less likely to do that than to switch to a different brand of HTML - or so's the hope.
It's not going to be a major thing that everyone uses. That's ok! Neither are IRC, mailing lists, and whatever else, but people still use them, every day. There are ideas exchanged, friendships made, relationships formed, and they serve a not-insignificant community's needs.
Ok, but you know the big majority of users don't care where the content comes from, or how it's delivered to them. They care about what is being served instead of how.
If you guys manage to have some 'killer apps' on this protocol, which people will try to reach no matter how they are implemented, then there's a chance.
IRC's killer app is IRC itself, and it managed to push itself as an alternative in the nineties, when a lot of popular protocols and alternatives like the web were still in their beginnings.
Anyway, if you convince people over time to serve their content through this medium, with enough interesting content, users will try and learn to reach them.
But I don't know; I think at least they should be trying to use some P2P DNS system, making it easy for people to serve their own content, or revisit BBSes and serve content in tree-like structures akin to directories.
I feel that there must be something to really differentiate it from everything else. Some things that are unique, that the web and others are not covering. Because if you think about IRC or email, they have distinct features that the web could never cover even as a mammoth protocol, while the same doesn't hold when you think of the Gemini proposal.
Anyway, people trying to do something, to change things for what they perceive as the better, is a good thing, and it should always be celebrated. Even when the thing doesn't stick, it might need adjustments or incremental evolution, or just serve as an influence on something else, or through experience inspire the creators to create something even better.
Who, exactly, without a profit motive, wants the majority of users? I don't want to talk to most people, and I don't want most people to read what I have to say. I, like most people I think, want to spend most of my time in my community sharing things I find interesting with people in my community.
It's the same as tilde servers, or MUDs. They're not going to take over the world. They're small communities and most people will never even know they exist, and that's fine.
He goes on to say 'Supposing you had such a browser, what would you do with it? The overwhelming majority of websites would not render correctly on it.' - A very good point, but equally applicable to a Gemini browser.
IMO, they have confused the network protocol with the presentation. You don't need to drop HTTP in order to change the way websites look. Likewise, you don't have to implement HTTP features that you don't like (e.g. cookies). This just strikes me as another mistaken belief that rewriting code from scratch will solve all your problems.
I’m still not sold on it (you’re allowed to do websites without js/css!), but the effort and skill involved is commendable
When you visit a gemini URL, you know it's going to serve you a limited-capability, text-based document, styled according to your own rules.
> The server closes the connection after the final byte, there is no "end of response" signal like gopher's lonely dot.
This just seems like a bad idea, especially if one is on a shitty connection.
Since the response body is not encoded, there's no safe end-of-response marker byte(s) to use.
So content-length seems like the way to go. But knowing content-length ahead of time is difficult for dynamically generated content (CGI is supported after all), so they also need something similar to HTTP chunked encoding, which does complicate things a little.
I understand that keeping the Gemini client simple to implement is one of their design goals, but I don't think the same is true for the Gemini server. So I hope that they would consider adding these to the protocol. They could probably stuff the content-length or the word "chunked" in the <META> string.
I feel like that would strike a reasonable balance. Clients are still simple (arguably simpler, since they don't have to guess whether they got everything) and the protocol is still trivial. For dynamic content it would increase time-to-first-byte and RAM usage on the server, but imho neither would be an issue for the type of content Gemini aims for.
Chunked encoding is simple enough to implement, avoids all those issues, and would allow a Gemini server to serve more requests faster given the same resources, or to run on hardware with more limited resources such as embedded. So I think it's well worth the slight cost in simplicity.
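For concreteness, here's roughly what that framing costs in code (a sketch borrowing HTTP's chunk format; a zero-length chunk is the explicit end-of-response signal, so a dropped connection becomes detectable truncation rather than a silently shortened document):

    def write_chunked(out, chunks):
        # Each chunk: hex size line, payload, trailing CRLF.
        for chunk in chunks:
            out.write(f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n")
        out.write(b"0\r\n\r\n")  # explicit end-of-response marker

    def read_chunked(inp):
        body = b""
        while True:
            size = int(inp.readline().strip(), 16)
            if size == 0:
                return body
            body += inp.read(size)
            inp.readline()  # consume the trailing CRLF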
- ToS that strictly prohibits commercial use and advertising. We have the WWW for that, no need to duplicate it.
- Uses HTTPS or something similar. This allows use of efficient servers like Nginx.
- Based on a virtual display with fixed dimensions: 2-3 aspect ratios and vertical/horizontal orientation. Fixed virtual pixel resolution. Every page is fixed in size and in the length of Unicode text it can display.
- Uses a structured document format with a limited number of logical tags. The client displays the page as it likes (no styling directives in the document markup). Every page written in this format is compiled into an efficient and compressed binary representation for transmission.
- Limited number of links, overlays, and images per page. Input fields with validation should be allowed. Inline images and movies are limited in size.
I'm planning to implement something like this in my forthcoming virtual Lisp machine (z3s5.com), though it's going to be a bit less general and probably not be based on HTTPS.
Currently it’s TUI, but will add GUI eventually. It’s fun to have a protocol small enough you can implement it yourself, but I currently have a weird bug where some Gemini servers work and others don’t because they don’t seem to follow the SSL spec.
Also, some of the CSS and JS hatred is piffle. Publishers absolutely abuse these languages and it gets pretty bad on news websites especially. But I do not find that most or even many of the sites I visit perform badly on my hardware (2016 iPhone SE and a 2017 MBP). They work fine. Moreover, I appreciate nicely designed and competently implemented experiences on the modern web.
I have no interest in trading the modern web - warts and all - for some spartan plaintext utopia.
And really, why replace HTTP? The complaint seems to be mostly with HTML, so why not just make a Gemini text format, build some browsers that use it as the default instead of HTML, specify semantics for how TLS works (like custom status codes to request a client certificate), and recommend TOFU certificate trust? And maybe specify certain headers that shouldn't be used, like Cookie.
I mean QUIC is here, waiting for us.
Why throw away 30 years of HTTP ideas?
If images and cookies are to be forbidden, that should be the web developer's decision.
Choosing Gemini is like accepting restriction by law.
I feel like this is old behaviour; confidence and responsibility are the way forward.
In a way, Gemini could have been written by the lawmakers of the European Union, North Korea, or the Soviet Union. I can't believe this is a US product, as it contains too much of a 'liberty constraint' ;)
Unfortunately, even today that's not 100% under your control. I don't have to accept cookies or load images from your site, given the proper settings or browser.
I'm 100% on board with Gemini being a 'liberty constraint' for those who put information online; honestly, it's not necessary for remote systems to be executing code on my system just to display text. Yes, you can't monetize it as easily. That's a feature, not a bug, for a user like me.