Gemini's "uselessness" is its killer feature (flounder.online)
243 points by Melchizedek on June 13, 2021 | 193 comments



I really like the idea but reading this in the FAQ disappoints me: "Gemini has no support for caching, compression, or resumption of interrupted downloads"

Back in the day, in the time of BBSes with Fidonet or Usenet, I really liked the offline-first mentality. With zmodem you had compressed and resumable downloads; with Echomail and Fidonet and POP3/Usenet you had offline-first forums.

But the same FAQ says Gemini is also for those: "Interested in low-power computing and/or low-speed networks, either by choice or necessity"

This seems contradictory. Have compression, caching, and resumable downloads been omitted for the sake of simplicity, or can they still be added to the protocol?

What's the rationale? A more formal separation between simplicity of presentation and use within slow and frequently disconnected networks?

That if you want, say, zstd compression, you can use a VPN? Or that if you want resumable downloads or caching, you can use a local proxy?

With the right proxy or browser one could emulate Gemini experience with the regular www, similar to using the old Opera Mini or force Reader View, block all images, javascript and ads on all sites. Wouldn't that be 'better'?


I'm not a part of the Gemini community, but if I had to guess:

* caching is like alcohol: it is both the cause of and solution to all of life's problems

* nothing Gemini serves should be so large as to need resumable downloads or compression

In other words they seem like deliberate choices to prevent web-app-ification of Gemini.


I think you’re right, and I think those complaints are a little ridiculous for a protocol primarily designed for publishing text. I have a tiny Gemini site because it’s fun to play with, even if its audience is tiny. My average file size is about 2KB. There’s practically zero benefit for caching or compression here, and much detriment toward making the protocol more complicated to support them (again, given the context of primarily moving small text files around).


Yes, I just wrote about this on the Gemini mailing list. Even with small 2K files there is a ~50% reduction in size (and more if you use dictionary-based compression). If you are on an extremely slow link at sea, that can mean a few seconds of extra loading time, which is significant. Of course, in reality few people are in this situation. But if you are designing a protocol where the fun, and perhaps also the goal, is getting textual information across as efficiently as possible within any conceivable constraints on computing resources, it's something I'd want to consider. A simple check of whether a .gmi file is binary (i.e., compressed), with decompression if so, would solve this for the lone Gemini users on the moon or at sea.
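
To put rough numbers on it, here's a sketch (Python stdlib zlib; the sample page and the preset dictionary are invented for illustration):

  import zlib

  # A made-up ~2 KB gemtext page, with the kind of repetition real pages have.
  page = ("# My gemlog\n"
          "=> gemini://example.org/posts/01.gmi 2021-06-13 A post title\n"
          "Some ordinary paragraph text about small protocols.\n") * 18
  raw = page.encode("utf-8")

  plain = zlib.compress(raw, 9)  # plain DEFLATE: often ~50% on small text

  # A preset dictionary of strings common to the site/format helps small
  # files, where DEFLATE's window never gets a chance to warm up.
  zdict = b"# => gemini://example.org/posts/ .gmi paragraph text"
  c = zlib.compressobj(level=9, zdict=zdict)
  with_dict = c.compress(raw) + c.flush()

  # On a 200 byte/s radio link, every byte costs 5 ms.
  print(len(raw), len(plain), len(with_dict))

The client side is the mirror image (zlib.decompressobj(zdict=...) plus a magic-byte sniff), still only a few lines.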


Cryptographic content-addressability (with inherent data deduplication) is superior to simple caching in every single way: efficiency, facilitation of distributed storage, and security.

A link should point to a hash. Now you can go get that hash from the same host you used to get the link or any other host that stores it anywhere in the world. That hash could unpack to data or to a list of other hashes. Go get those. Where you get them doesn't matter one bit. You can verify the correctness of the data trivially.
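
A toy version of the whole scheme fits in a few lines (Python; the in-memory dict stands in for "any host anywhere"):

  import hashlib, json

  store = {}  # hash -> bytes; in reality, any host anywhere

  def put(data: bytes) -> str:
      h = hashlib.sha256(data).hexdigest()
      store[h] = data
      return h

  def get(h: str) -> bytes:
      data = store[h]  # fetched from whoever has it
      assert hashlib.sha256(data).hexdigest() == h  # verified trivially
      return data

  # Large content becomes a list of hashes ("unpack to ... other hashes").
  manifest = put(json.dumps([put(b"chunk one"), put(b"chunk two")]).encode())
  print(b"".join(get(h) for h in json.loads(get(manifest))))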

This is how BitTorrent works but it could work equally well on top of a simpler protocol.


Content-hash-based distribution has downsides too: e.g., updating content can result in people still being stuck on old versions because the links weren't updated.

Content-hash distribution can also easily have privacy problems. Say there is a site for rubber band fetishists that you don't want people to know you've visited. I can probe whether you've visited that site by embedding some of its content: if it loads instantly because you have it cached, then I can be fairly confident that you've visited that site before.


This last attack is possible regardless of whether content is addressed by hash or not, it only needs caching.


The first point is no different from regular dead links or outdated URLs.

The second can be mitigated but is a legitimate issue.


Unfortunately, stale hashes are a worse problem in content-centric networks than dead links.

So Alice links to some content in Bob's site, but Bob made an embarrassing error and wants to update it. Very well, and Alice has to update the hash pointing to Bob's content: but now any hash pointing to Alice's content is stale, and so on, ad infinitum.

There are ways to solve this problem, but none of them force a linear forward history. Basically, you can request all Merkle chains featuring a base hash, and verify that Bob signed all of the branches, but only discipline can keep this collection in a linear order.

This is probably fine in practice, just follow the longest branch.

And now you have Secure Scuttlebutt!
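
A toy sketch of the signed-chain idea (Python with the third-party `cryptography` package; note that nothing below stops Bob from signing two competing successors, which is exactly the linearity problem described above):

  import hashlib, json
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  sk = Ed25519PrivateKey.generate()  # Bob's identity key
  pk = sk.public_key()

  def publish(content: bytes, prev):
      entry = {"hash": hashlib.sha256(content).hexdigest(), "prev": prev}
      blob = json.dumps(entry, sort_keys=True).encode()
      return {"entry": entry, "sig": sk.sign(blob)}

  v1 = publish(b"original post", prev=None)
  v2 = publish(b"post with the embarrassing error fixed",
               prev=v1["entry"]["hash"])

  # Anyone can verify that Bob signed an update...
  pk.verify(v2["sig"], json.dumps(v2["entry"], sort_keys=True).encode())
  # ...but Bob could just as well sign a *different* successor with the
  # same prev, so "follow the longest branch" is discipline, not enforcement.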


This seems like it would be vulnerable to accidental or malicious hash collision generation.

You need a hash plus some "authorized mirrors list" or some other anti-poisoning measure. Not sure offhand how BitTorrent does it.


Any hash algorithm for which a single collision has ever been generated would be an inappropriate choice for a URN system.

Re: malicious collisions, any modern hash algorithm currently in wide use, if "broken" by some kind of vulnerability, would cause far worse problems for the world than just some URN system breaking down. As such, literal man-millennia of cryptanalysis has gone into ensuring that there are no vulnerabilities in popular algorithms.

Re: accidental collisions, please note that 128 bits of entropy (i.e. a UUID) is already enough to ensure no accidental collisions before the heat-death of the universe. And most modern hashes have ≥256-bit digests.
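
The back-of-the-envelope arithmetic behind that, via the birthday bound:

  # For n random b-bit values, P(any collision) ~= n^2 / 2^(b+1) while small.
  def collision_p(n: float, bits: int) -> float:
      return n * n / 2.0 ** (bits + 1)

  n = 1e6 * 3600 * 24 * 365 * 30   # a million IDs per second for 30 years
  print(collision_p(n, 122))       # ~8e-8 for a UUIDv4
  print(collision_p(n, 256))       # ~4e-48 for a 256-bit digest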


FYI, UUIDs are 128 bits but only have 122 bits of entropy. You're still not going to get a collision with a good implementation.


UUIDs are long enough but are not cryptographic hashes. They can be deliberately collided pretty easily.

A cryptographic hash like sha384 or blake2 makes it infeasible to come up with any input that would ever yield the same output. That’s what you would use for such a network.


You're thinking of UUIDv1s, which pretty much nobody uses.

UUIDv4s — the ones that are specified as being populated with 122 bits of entropy from a random source — cannot "be collided pretty easily." Those are what anyone clearly means if they're talking about "bits of entropy" in relation to UUIDs.

(Also, as it happens, UUIDv3s and UUIDv5s are specified as having their payload bits populated with the cryptographic hash of a user-defined input. But, to muddle your point further, the hashing algorithms used in v3 and v5 are not ones that you'd trust today for collision resistance; MD5 [v3] and SHA1 [v5] have both had successful collision attacks demonstrated against them.)
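
The stdlib makes the contrast easy to see:

  import uuid

  # v4: 122 random bits; collisions are a birthday-bound question.
  print(uuid.uuid4())

  # v5: SHA-1 of (namespace, name), deterministic by design; the same
  # input always "collides" with itself.
  print(uuid.uuid5(uuid.NAMESPACE_DNS, "example.org"))
  print(uuid.uuid5(uuid.NAMESPACE_DNS, "example.org"))  # identical

  # v3 is the same construction over MD5.
  print(uuid.uuid3(uuid.NAMESPACE_DNS, "example.org"))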


Also, the concern needs to be around 'accidental' collisions of the same uuid within the same logical namespace. For example, generating the same UUID as a key for two rows in the same table. Even with weak non-v4 uuids, this is extremely unlikely with any dataset you are likely to be working with, just because even the worst UUID versions still have a fair amount of entropy, so population sizes have to be absurdly large to get even a tiny probability of a collision.

UUIDs are not meant to be "secret" or "hard to guess", and implementations cannot be relied upon to take care in generating the right kind of random state to ensure they aren't predictable.

From the RFC: https://datatracker.ietf.org/doc/html/rfc4122

  Do not assume that UUIDs are hard to guess; they should not be used as security capabilities (identifiers whose mere possession grants access), for example.  A predictable random number source will exacerbate the situation.


Unless you are using an older hash function that doesn't have sufficient collision or preimage resistance, that should not be an issue, because finding 2 files that hash to the same output is infeasible. It does point to the need to be able to upgrade the hash function in your protocol at some point, and preferably associate a set of hashes generated by different hash functions to the same file contents.


Would IPFS about cover this use case?


IPFS uses this, but it's more complex than necessary and has a ton of added features that are not core to the principle. (Not saying they're all bad... just that the core principle doesn't require thousands of lines of code to implement.)

Here's the bottom line:

The cryptographic content addressing I described sounds complex but is actually super simple and can be implemented in a page or two of code that will almost never need to change because it is "correct." It could be added to the gemini protocol in an afternoon.

Caching sounds simple but is horrifically complex and an endless source of bugs and failure modes.


> It could be added to the gemini protocol in an afternoon.

I'd actually like this to be a separate protocol that's co-hosted with Gemini. It would make Gemini simpler, and it would enable other protocols to reuse content addressable storage.


A Gemini-like protocol designed around a global content-addressed structure would be a lot more interesting, imho. It doesn't even need to be tied to Gemini. If you know how to do that then you should do it!


Nothing stopping the client from caching locally. I think Lagrange and possibly the other clients display last download time and allow a refresh.


... Or simply a text-mode browser, yes.

To be fair, full HTTP is ludicrously complicated on the server and almost nobody actually does or uses it. I don’t know of a single web app framework or library that properly and conveniently does all of content encodings, transfer encodings, content negotiation, caching, ranges, ETags, and so on; what’s worse, if you’re doing anything dynamic much of this is application-dependent and so I don’t really know how it should be done. The parsing (on both sides) also kind of sucks, if not as much as for 822. Gopher (on which Gemini is based) is almost laughably simple by comparison, you can do it manually in shell script and not feel like a dirty hack.

I do not understand why the Gemini people don’t do NNTP, though. The relative inaccessibility of Usenet in the modern world? Still, you don’t have to federate with it if you don’t want to, and the tech is quite nice, if somewhat difficult due to the inherent complexity of the problem.


Gemini is symbiotic with other "old web" protocols, so Usenet fits. If one is running Gemini, then their gemlog will likely list an email address (SMTP) and host large files over FTP, HTTP or IPFS.


Lack of guides for setting up a private nntp, I'd say. If you have some resources, please share them.


I don't remember Usenet being federated so much as based on replication. It was really organized around large centers with many users at each center, and the need to store and forward everything faithfully.


I’m ready to defer to your expertise (or any expertise, really), but aren’t we just disagreeing over terminology?

By a “federated system” I mean that there is a relatively small number of independently operated first-class entities (servers, nodes) that are more permanently connected and host a large number of second-class entities (users, points) that are more intermittently connected and generally correspond to individuals. Internet mail, IRC, Jabber and Matrix are federated; Slack (centralized) and BitTorrent with DHT (peer-to-peer) are not. The Internet itself is complex enough that it probably doesn’t usefully fit into this classification.

In this sense Usenet and FidoNet are both federated store-and-forward mail systems. Or am I wrong?


I'm not sure how 'low' low-power computing goes, but consider this: a JPEG is compressed, and on a 1990 Acorn Archimedes (ARM, 8 MHz, 32-bit, and the fastest home computer at the time), you could watch a JPEG image being rendered in real time. Uncompressed images, unsurprisingly, rendered instantly.


Well, I enjoyed using the “cached images only” mode in Opera as recently as 2007 on a pay-per-minute 56K dial-up connection that was more like 28K due to the crappy cheap modem. (It wasn’t impossible to get broadband, but nobody felt it was important enough to bother until the last couple of years, and then it took some time to actually get around to it.) The relief at seeing that the JPEG you just commanded to load turned out to be progressive (not that I knew how it worked) was visceral, and the hate towards people making GIF buttons with no alt-text practically unbounded. Uncompressed images were beyond the limit of practicality.


If you design your low-power computer to handle JPEG (e.g. with a hardware codec), then using JPEGs will be cheaper than uncompressed formats because there are fewer bits to move around.


Gemini has no media support, so it's only text. I'm not sure what computer people are using that struggles to decompress text.


Yes, but suppose I'm on a long-distance radio link at 200 bytes per second. One average UTF-8 encoded page would perhaps take 10 seconds to load. With zstd or brotli compression it would take half that.

For ultra low power computing (for solar power) with zstd or brotli decompression support in a gemini client it would for example be doable on a raspberry pico running fuzix driving an e-ink display with keyboard. https://liliputing.com/2021/05/picomputer-kit-turns-a-raspbe... https://learn.adafruit.com/adafruit-qt-py-2040 https://www.tomshardware.com/reviews/adafruit-qt-py-rp2040-r...


This is a legit scenario. Many sailors use SSB [1] for data access and have ultra low bandwidth.

[1] https://www.practical-sailor.com/marine-electronics/a-second...


Some clients can be configured to load images inline. And if they don't, it's not a big problem to click on an image link for illustration.


> with Echomail and Fidonet and POP3/Usenet you had offline-first forums.

One of the side effects of ActivityPub is to standardize a "post new content" endpoint that might be used by offline-first clients with intermittent connectivity. This is in addition to getting content via ActivityStreams format, which can be implemented very easily, even by a cheap, statically-hosted site with no reliance on JS webtech.
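
A hedged sketch of what such a client POST could look like (hypothetical instance and actor; real servers also demand authentication, e.g. OAuth or HTTP Signatures, omitted here):

  import json, requests

  OUTBOX = "https://social.example/users/alice/outbox"  # hypothetical

  note = {
      "@context": "https://www.w3.org/ns/activitystreams",
      "type": "Create",
      "actor": "https://social.example/users/alice",
      "object": {"type": "Note", "content": "Written offline, posted later."},
  }

  # An offline-first client can queue activities like this locally and
  # replay the POSTs whenever connectivity returns.
  requests.post(OUTBOX, data=json.dumps(note),
                headers={"Content-Type": "application/activity+json"})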


That's the other direction, isn't it? The point of UUCP, NNTP, and Echomail (and SMTP, for that matter) is message distribution between servers with intermittent connectivity by batching and relaying messages. Can I host an ActivityPub node that both publishes and receives (and, if we're talking Echomail, relays) stuff if I only have an Internet connection for an hour per day? Can the whole federation function like that?

(An ActivityPub-NNTP interconnect sounds like something that public-inbox really should support, now that I think about it.)


AIUI, ActivityPub allows for both POSTing content and GETting it in either direction, that is, a node can be acting as an "inbox", an "outbox" or both.


Activity*: let's overcomplicate xml by making it json!


I guess if you don't have CSS, JavaScript, or other subresources that stay constant, caching is less important.


"With the right proxy or browser one could emulate Gemini experience with the regular www, similar to using the old Opera Mini or force Reader View, block all images, javascript and ads on all sites. Wouldn't that be 'better'?"

One of the neat things about Gemini is how quickly and easily people started writing clients for it. Perhaps this illustrates how we can influence client design, e.g., limit complexity (and perhaps decrease insecurity), through protocol design.

I use a text-only browser to read HTML. I often make HTTP requests from the command-line using a variety of different TCP clients in order to utilise a feature of HTTP/1.1 called pipelining. I have been using the web this way for decades. Needless to say, "new" protocols like HTTP/2 and HTTP/3 come across as being 100% commercial in their vision of what the web is; they are designed to accommodate a web full of advertising. Their sponsors justify them based on problems that one might encounter when trying to use the web for commerce. However, they fail to account for non-commercial uses. For example, there's no "HOL blocking" problem when all the resources are coming from the same server and the site operator is not trying to "build complex, interactive [graphical] web pages." There is an underlying bias in these protocols toward enabling a commercial, advertising-filled web. As one would expect, considering who is paying the salaries of their implementors.

Then we have something like QUIC, whose sponsors similarly make justification arguments that assume every site's goals are "complex [graphical] web pages", likely to be supported through advertising. I have read articles arguing in favour of QUIC that suggest TCP never achieves its true potential for speed because we only ever make large numbers of small TCP "connections". That usage pattern is indeed something we see with "complex, interactive [graphical] webpages".[1] The kind that serve advertising. However, with HTTP/1.1 pipelining, as I use it, I have different goals. I make a single, longer-lived TCP connection and make 10s to 100s of HTTP requests to the same server. I am not using the web as an interactive, graphical interface. I am doing [text] records retrieval. I am using the web as an information source. Again, what I am seeing in these "next gen" protocols is an underlying bias and assumptions in favour of a commercial web filled with ads.
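
To make that usage pattern concrete, a minimal sketch (Python sockets, plain HTTP, made-up paths): all requests are written up front over one connection, then the responses are read back in order.

  import socket

  HOST, PATHS = "example.com", ["/a.txt", "/b.txt", "/c.txt"]  # hypothetical

  reqs = b""
  for i, p in enumerate(PATHS):
      # Ask the server to close after the last response so EOF ends the read.
      close = "Connection: close\r\n" if i == len(PATHS) - 1 else ""
      reqs += f"GET {p} HTTP/1.1\r\nHost: {HOST}\r\n{close}\r\n".encode()

  with socket.create_connection((HOST, 80)) as s:
      s.sendall(reqs)            # one write, many requests: pipelining
      resp = b""
      while chunk := s.recv(4096):
          resp += chunk          # responses arrive back-to-back, in order
  print(resp[:120])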

As such, I agree the "limitations" of Gemini which make it cool to some of us, i.e., it is inherently not advertising-friendly, could be achieved using HTTP either by

1. users' choice of client, i.e., stop using graphical browsers that run Javascript, written by companies that rely on "surveillance capitalism", or

2. a new standard that websites could adopt whereby they (also) served content in a non-advertising-friendly way.

#2 is outside the control of the user. It is squarely under the control of website operators and those with the power ($$) to influence website operators. Indeed, it can also be brought about through use of a proxy. I have also become a die-hard localhost-bound proxy user. What one can do with a good proxy is seemingly limitless; however, it can quickly lead to some rather complex configurations, and much of the result can alternatively be achieved by #1, simply using the "right" client. I use both #1 and #2 together.

[1] I'm not sure that Gemini supports pipelining. If not, how does it allow for efficient, bulk text retrieval? Does it encourage short, successive TCP connections?

The mistake that the original lobste.rs "Gemini is useless" commenter made is that he assumed Gemini is meant to "replace" the web. Yet no one, certainly not the creator of the protocol, ever claimed anything of the sort. In fact, Gemini documentation explicitly disclaims this as a goal. The commenter has constructed a strawman. He also assumed the only worthwhile use for an internet is a web, and the only worthwhile use for a web is to engage in commerce. To me, that looks like myopia. The internet has historically been multi-use, as the parent comment describes. The "www" is but one use. Alas, it has been co-opted by those seeking to use it as an advertising vector.


I am not convinced. Gemini is useless, and the Web is tracked, because Gemini is a platform for distributing data/documents, and the Web has become a platform for distributing random executable code.

If Gemini documents could support static HTML and CSS (i.e. without anything like SCRIPT elements or cookies), it would be more useful to the people it is intended to benefit - you could port Wikipedia or distribute journal articles - without being a tracker haven. I don't hate the modern web because I can read the results of a scientific study that presents its statistics in tabular form. I hate the modern web because the first thing I have to do is sign away my right to be treated like a customer in a business, a curious child in a library or a guest at a friend's house, and instead must agree to involuntarily yield wealth to the company.


> I am not convinced.

Frankly, I don't think that it even makes any sense to convince anyone. There are people who like Gemini and for them it's good enough, the rest will never use it. Both camps have excellent arguments and I see no point of arguing here. It's like Vim vs Emacs wars in the old days, as if people couldn't use their $EDITOR of choice.


A client for HTML and CSS is much more complicated to implement than a client for Gemini and its rich text format. But you can serve HTML pages over Gemini either way.


Why a whole new protocol? Why not start a collective of like-minded users who design plain HTML/CSS websites that work for users who have JS disabled by default? As far as I can tell that would be a strict improvement over Gemini.


Probably. But you can see what would happen - without a new protocol or new document structure, traditional web browsers would work with the system. And because of that it wouldn't have an identity outside "the web, but worse".

Gemini's different protocol isn't important for technical reasons. It's important for social reasons - to make it effortful to move content to Gemini, and effortful for users to start using it. The result is that all the content and users on Gemini are people who have done the work to opt in to it. And a space populated by a small community like that will necessarily end up different from the culture on the web. (If they were happy with the web, they would have stayed there.)


>And a space populated by a small community like that will necessarily end up different from the culture on the web. (If they were happy with the web, they would have stayed there.)

But they have stayed there. Gemini users still use the modern web all the time, mostly to remind everyone that they exist and are rejecting the modern web so hard you guys.


Well duh. If you have a race bike, of course you'll also keep a metro pass or a car. You still need to get places. But the race bike is your instrument of leisure and a place to unwind. This role cannot be fulfilled by a car or a metro pass for some people, because it just doesn't have the same relaxed atmosphere.


That’s an excellent analogy. I make my living building web stuff. When I get home, I want to play with other things as a hobby. I also have a ham radio license, but that doesn’t mean I only want to use that instead of the gigabit Internet drop at my house.


Yep absolutely. Another way to think about it is that using the web doesn't mean we delete all our local native apps. As users we don't gather into camps who only use one type of software or another. Modern computing environments are mixed spaces full of all sorts of different kinds of software. It very rarely makes sense to be exclusionary or exclusive about any particular type.

Gemini is no different. There's overlap with the web's use cases, but it's its own thing. As a user, why not both?


>Gemini is no different. There's overlap with the web's use cases, but it's its own thing. As a user, why not both?

What does Gemini have to offer to web users that the web itself doesn't? It's entirely possible to write small, simple HTML documents without javascript on the web. Indeed, one could replace Gemini entirely with a Geocities-like service that enforced the same rules, including only allowing linking within the platform.

Alongside the web, Gemini is just a worse version of the web, it isn't different enough to be its own thing, nor does it offer anything unique. Gemini only makes sense as a means of forging a separate identity from and exclusionary of the web, as you yourself said in the original comment I replied to[0]: (If they were happy with the web, they would have stayed there.)

[0]https://news.ycombinator.com/item?id=27491175


People have said this for years, and yet, no such formal community has been created, while Gemini only grows in popularity. I think such a community would be great, but it's hard to maintain a strict separation with the "rest of the web" in the way that Gemini has.


Because it's easier to create an in-group of sites that follow the same standard. Gemini sites link only to other Gemini sites, which you know will be minimal before clicking.


That's trivial to do in a participating "minimal web code" group too.

You could e.g. color the links to other participating sites to highlight them (and still have the option to link to a conventional bloated website).


One advantage of Gemini is that it is easier to attract new users to the cause: even if a particular user is being helped by the collective you describe, he or she probably won't realize (because most users are oblivious to many/most of the effects of UI and architectural design decisions). He or she will ascribe the good experience provided by the collective to the web.

The web is like a badly architected government such that most citizens are unaware of the badness. The collective you describe is like a group who use their own awareness of the badness to improve the country, but that doesn't raise awareness of the government's badness, whereas starting fresh by founding a new country does.

Actually, the analogy I just gave is a little too harsh on the web. A more charitable description is that a subset of the web's users is poorly served by the architecture of the web, but most of that subset is unaware of that fact (what with not being experts on the effects of subtle technical decisions on something as vast, open and complex as the internet).

I am not defending Gemini in particular; I use it but have yet to form an opinion of it; I am defending the strategy of founding a new internet service to compete (for the time and attention of users) with the web.


>One advantage of Gemini is that it is easier to attract new users to the cause: even if a particular user is being helped by the collective you describe, he or she probably won't realize (because most users are oblivious to many/most of the effects of UI and architectural design decisions). He or she will ascribe the good experience provided by the collective to the web.

I don't buy this argument.

Even if "he/she ascribes the good experience (...) to the web":

(a) the subset of users that could fall into one of the "participating sites" would be much larger than the subset of users that would install something like Gemini. The first subset is open, with no barrier at all, to every web user: they just need to visit a page. The second subset requires them to have heard about Gemini, to care enough about it, and to actively download a client.

(b) For this reason, even the subset of (a) that could tell "Ahh, this site is fast and nice because it's simple web code" would be much larger than the subset that would install something like Gemini.

(c) If the subset that recognizes the benefits of the simple web design decides to join the movement, they just need to write simple HTML and deliver it to everybody (an easy sell, no new tools needed). For the subset that discovered and likes Gemini, they'd need to find a host and install a Gemini server (much fewer options, no turnkey host that supports it, unknown code quantity/quality), and then their potential audience is severely limited to Gemini client users (a much harder sell).


My response to you is to say that although it is nice to help users in the short term, it is more important to increase the number of web users who know that the web poorly serves some of their needs, specifically their need to read, to discover things to read and to publish plain-text writings.

(The main problem with using the web to publish plain-text writings is that unless the person doing the publishing is an expert on the technology, in which case he will use something like Github Pages, there is no cheap way to publish a few paragraphs or a few dozen paragraphs where it will find a decently-sized audience without enriching some intermediary and subjecting most of the people who will read the writings over the years to either advertising or paywalls.)

I offer no criticism of the web as a way to buy things online or to apply for a passport online or such.

I am avoiding saying that the web is bad: the web might be bad only for a small fraction of web users. I don't know enough about how web users differ from each other to say how large the fraction is. I do know enough about software and how site owners will respond to incentives to be able to tell that the web is bad for users sufficiently like me. Well, I'm fairly certain, too, that the web is severely sub-optimal for blind users (and I wonder if attempts have been made yet to tell blind users about Gemini and to make it easy for blind users to get started with Gemini).

Someone could write a whole book about how seemingly insignificant decisions in the design of something like a web browser (e.g., decisions in the design of web standards) can have profound effects, but I think in this case it is better to show than to tell.

Improving Gemini might eventually show a lot of people that the web is a sub-optimal solution for some of their needs. Note that Gemini need not completely replace the web in a user's life: one session with the improved Gemini might get the point across to a new user (and the user might do nothing but browse Wikipedia during that session).

In contrast, the strategy I originally replied to, namely, a collective of web sites, might significantly improve the experience of many web users, but will cause approximately zero web-tech non-experts to become dissatisfied with the web.

I have no road map for how the large number of people dissatisfied with how the web serves some of their needs will eventually find their way to something better. I figure it will take many years. I do believe that increasing the number of dissatisfied users is probably a necessary first step.


I am aware of at least one blind person on the Gemini mailing list, talking about their experiences with it (mostly about Gemini's native text format).


You could also build a community around a static site generator (and maybe a hosted version), where everybody would pledge to not customise anything except a set of offered css variables. Then federate those blogs somehow. Though I somehow doubt that such a community would stay “pure” for long.


The FAQ answers your question:

=> https://gemini.circumlunar.space/docs/faq.gmi

> 2.5 Why not just use a subset of HTTP and HTML?

> Many people are confused as to why it's worth creating a new protocol to address perceived problems with optional, non-essential features of the web. Just because websites can track users and run CPU-hogging Javascript and pull in useless multi-megabyte header images or even larger autoplaying videos, doesn't mean they have to. Why not just build non-evil websites using the existing technology?

> Of course, this is possible. "The Gemini experience" is roughly equivalent to HTTP where the only request header is "Host" and the only response header is "Content-type" and HTML where the only tags are <p>, <pre>, <a>, <h1> through <h3>, <ul> and <li> and <blockquote> - and the https://gemini.circumlunar.space website offers pretty much this experience. We know it can be done.

> The problem is that deciding upon a strictly limited subset of HTTP and HTML, slapping a label on it and calling it a day would do almost nothing to create a clearly demarcated space where people can go to consume only that kind of content in only that kind of way. It's impossible to know in advance whether what's on the other side of a https:// URL will be within the subset or outside it. It's very tedious to verify that a website claiming to use only the subset actually does, as many of the features we want to avoid are invisible (but not harmless!) to the user. It's difficult or even impossible to deactivate support for all the unwanted features in mainstream browsers, so if somebody breaks the rules you'll pay the consequences. Writing a dumbed down web browser which gracefully ignores all the unwanted features is much harder than writing a Gemini client from scratch. Even if you did it, you'd have a very difficult time discovering the minuscule fraction of websites it could render.

> Alternative, simple-by-design protocols like Gopher and Gemini create alternative, simple-by-design spaces with obvious boundaries and hard restrictions. You know for sure when you enter Geminispace, and you can know for sure and in advance when following a certain link will cause you to leave it. While you're there, you know for sure and in advance that everybody else there is playing by the same rules. You can relax and get on with your browsing, and follow links to sites you've never heard of before, which just popped up yesterday, and be confident that they won't try to track you or serve you garbage because they can't. You can do all this with a client you wrote yourself, so you know you can trust it. It's a very different, much more liberating and much more empowering experience than trying to carve out a tiny, invisible sub-sub-sub-sub-space of the web.
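
For a sense of the scale involved: the following is, give or take error handling, an entire working Gemini fetcher (a sketch; no redirects, no certificate pinning). The protocol really is: send the URL, read a one-line header, read the body.

  import socket, ssl, urllib.parse

  def fetch(url: str) -> str:
      u = urllib.parse.urlparse(url)
      ctx = ssl.create_default_context()
      ctx.check_hostname = False       # the Gemini convention is TOFU:
      ctx.verify_mode = ssl.CERT_NONE  # self-signed certs are the norm
      with socket.create_connection((u.hostname, u.port or 1965)) as raw:
          with ctx.wrap_socket(raw, server_hostname=u.hostname) as s:
              s.sendall(url.encode() + b"\r\n")   # the entire request
              data = b""
              while chunk := s.recv(4096):
                  data += chunk
      header, _, body = data.partition(b"\r\n")   # e.g. b"20 text/gemini"
      return body.decode("utf-8", "replace")

  print(fetch("gemini://gemini.circumlunar.space/"))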


I wrote an HTML parser while waiting for Firefox to build once; it didn't take a ton of work to turn it into a minimal browser. Web browsers are hard when they have to be an application runtime instead of a document viewer. HTML grammar is easy.


I don’t agree that HTML grammar is easy, but either way it’s objectively more complicated than Gemini markup. I also don’t agree that web layout is easy. Dillo, for example, is not the best document-viewing experience (even when I’ve designed for it specifically) despite putting in much more effort than I’d be willing to.

Also, if this[1] is your HTML parser, it doesn’t look like an actual HTML parser with implicit tags and error correction, just a parser for some easy subset of HTML. If we’re not going to be web-compatible anyway, why not spec a language that’s actually good?

[1] https://github.com/s2607/webthing/blob/d6dd43aa5a3cc6dee15f8...


An HTML parser is trivial. There's a huge difference between an HTML parser and an HTML rendering engine. The "minimal browser" you allegedly wrote isn't something you or anyone else uses, or would use.


I didn't "allegedly" write it, it's on my github. Yeah it doesn't do box model layout but (and this was my point) that's not as big a deal without js because the pages are still perfectly usable.


I wrote a toy browser in a weekend too; I just used an existing HTML and CSS parser. The hard part isn't the HTML parsing, that's pretty easy and mostly just basic lexing/parsing stuff. It's applying the CSS to the HTML and rendering a document that vaguely looks right. There are loads of different interactions, and even Firefox and Chrome actually get it wrong in many edge cases (and also frequently in not-so-edge cases). I wrote a list some time ago of all the interactions "position: sticky" breaks in Firefox, and there's loads of them (I can't find that list now though, it was a HN comment from last year). It was pretty basic stuff like z-index, tables, flexbox, etc. Nothing too obscure.

If a lot of the cruft would be removed then actually, it probably wouldn't be so bad. But you know, compatibility and such. Still, a <link rel="newstylesheet" href="style.newcss"> probably wouldn't be such a bad idea. The base CSS specification can probably be reduced by half if not more, both by removing cruft and by learning from mistakes.


Point taken, but you probably know that some pages will not be usable in a browser that does not implement CSS. As in, completely unreadable.


There are a great many pages that are not usable on a browser that does not implement JS. None of these things are "hypertext" anymore, the original idea was that all web locations would fall back to simple text rendering of the content, the fact that much of the modern web can't be read on a base HTML renderer is more of a statement of what a website is now - an application server.


Okay, but how many implementations do you need for a protocol that is proudly useless and actively targeted at a narrow community? At first glance it seems like there's a little bit of circular reasoning here.

Gemini's selling point is that it's a narrow spec that's basically good for one thing: displaying largely unstructured text. It's not good for PDFs or layout, it's not good for media-heavy documents, it's not good for apps. You probably wouldn't write something like a book in Gemini. It knows exactly what it's doing, what kind of community it wants, and it focuses only on that use-case -- which is why its community likes it.

But how many different implementations do you need to build for this kind of narrow project? Are we really expecting that 50 different hobbyists are going to build custom Gemini parsers? Why? I'm not sure I understand what reason there would be for that kind of implementation diversity with a spec that is deliberately designed to decrease application diversity.

On the web, we have a lot of reasons why we are scared of having only one browser rendering engine. But most of those reasons don't seem to apply to Gemini, because it has different goals from the web. So say there was one zero-dependency C/WASM binary for parsing that was compiled for every platform and could be imported or transpiled into other languages for use in your custom browser/app. What do you feel would be missing from Gemini in that world?


When I say that not being complicated to implement is an advantage, I mean I can have much more confidence that whatever implementation I’m using is bug-free and secure (e.g. by auditing the entire thing and its dependencies, something not even remotely feasible with a web browser), not that it should be constantly reimplemented. Contributing to a Gemini client or even just building it is a lot easier than doing the same for a web browser component.

Plus, being able to implement something from the ground up in not much time means it’s possible to explore foundational advances, like building the entire client/server in a memory-safe language.


Great answer, I buy all of those reasons as strong justifications to prioritize simplicity.

I think it seems a little bit extreme in some ways, because when we talk about Gemini's implementation being simple, the FAQ isn't just talking about structure/readability, it's saying 100-200 lines for a basic client. To me that crosses from 'should be auditable' into something else entirely -- I'm not sure why having a 1000-line client would make it significantly harder to audit, or build, or re-implement in Rust. I'm not sure Rust is that complicated of a language.

But :shrug:, I guess if you shoot for an extreme metric, then you know you'll end up with something that meets all of your other goals as well, so those requirements do make some sense to me when phrased as part of an overall architectural stability/reliability goal.


Yeah, I would have liked to see different trade-offs made too (especially giving the same treatment to TLS), but I’m satisfied that it’s at least a good direction to start exploring and putting some level of support behind.


That's true. Perhaps the solution is to explicitly agree that the Gemini format will expand toward a certain goal complexity - so that there are always several implementations, and it's possible to ramp up to something with more flexibility.

> But you can serve HTML pages over Gemini either way.

The important question is what people can read over Gemini, I guess. Do Gemini clients effectively render that? I imagine it would be difficult to take Webkit and make it work without supporting JS.


Basic HTML doesn't need WebKit. If what you want is the basic HTML 4 tag set with a couple of HTML 5 tags, this is not a big task.


What good are competing implementations if they all have to keep to the same spec?


Different quality, support, tradeoffs, maturity levels, focuses, and add-ons?

Might as well ask "what good is competing webservers".


The same reason that it's good to have different clients for other protocols. Some clients lean heavily to one side on the CLI vs GUI vs VR debate, some choose to have convenience over technical complexity and so make choices that other Gemini users would disagree with, some allow a degree of automation, some are a multi-protocol client that has Gemini as just one supported option.


Read the spec, it's not long. It leaves plenty of room for variation.


Not keeping to the same spec is one of the reasons why the modern web is a dumpster fire.


You wouldn't need something that complex, and it seems the author has a philosophical disagreement with CSS as such.

I would love to be able to write a gem document where I can say: this is a quote, this is lightly emphasized text and this is emphasized text, this is a table and this is a link, and then it would be up to the client to choose how it wishes to display each, including just as text.

I prefer inline links strongly, but I require inline images and tables for it to have any value directly. It would be an obvious benefit to include support for inline math too, something that we still don't have on the web.

And no, a link to a table or an image is not a replacement for an inline one (though it is an obvious fallback for terminal renderers).

Such a format would be really nice to access information in; it would be easy to save/proxy for offline access, and it would still be possible to include ads (of the still-image, non-tracking variety).


People can and have already ported Wikipedia.


So you're asking for something a bit richer, say maybe Markdown, that is still well-contained and limited.


It's a bit of a slippery slope though. Even though Markdown allows arbitrary HTML, several competing extensions have emerged because people always want one more feature.

I think the issue with Gemini is that while it appears to be a general-purpose protocol, it actually favours a very particular type of content. This maybe seems self-evident, but even if you want to write anything much more than a blog post it's not a good fit. I wouldn't mind if it were really pure, but a code block syntax has been included to appeal to a tech crowd, while other basic elements of modern written communication are not. And now we're back to the start of my comment.


Seems like some folks like simple stuff because they feel more included in the magic of it all.

Kinda like.. some folks like manual-transmission cars with simple constructions that they can modify. Or machine-code/assembly. Or building their own computer. Because then they're not just a user -- an outsider to those who've actually created the technology -- but rather a fellow creator and equal participant.

Projects like this give me the same vibe. That is, I'm skeptical that anyone's doing this because it's practical, even in its minimalism -- instead, it seems more like a hobbyist project where folks can feel included.


> Kinda like.. some folks like manual-transmission cars with simple constructions that they can modify.

Let me give you my perspective: the modern web client experience is that of a sturdy, advanced alien spaceship. It gets you from A to B and knows how to dock to all the popular alien space station airlock models automatically (aka all the HTTP/2/3/TLS/cookies, stuff that's expected by sites to work). You chuck plenty of fuel at it (RAM and CPU) and cross your fingers that it lifts off.

If you prod the wrong part of the (browser) engine, the universal constants change and you get turned inside out. You still occasionally do it because the ship can be made to block all the alien ads once you're on the space station.

Gemini is just an old ICE car with an antigrav module bolted on (the TLS engine). Its construction is simple enough that you can hold it in your head. It doesn't change the universal constants when you prod the wrong part of the engine, so it's refreshingly free of the risk of turning inside out. It goes into space just fine, but doesn't know how to dock on the alien space stations. This is fine - you knew that from the start, and the new human space stations are where all the quirky fun stuff is happening anyway. And the human space stations have no ads, for now.


Liking simplicity is not always about being "included in the magic" IMHO (but it's a very valid and fine point).

My observation is that simple stuff is generally less wasteful and designed with a different set of goals. The simple stuff I particularly like lasts a lifetime, its maintenance is minimal, and it doesn't lock you into a single ecosystem.

Since these items are used for much longer than their modern counterparts, they become yours more.

In the digital realm, simple stuff is less distracting, works faster, is not short on essential features, and can do 95% of what the more complicated stuff does with minimal overhead.

At the end of the day, people may like simple stuff more because it does most of the work done by the complicated thing with much less mental load. Funnily, sometimes the simpler stuff actually does more than the complicated one.


>Seems like some folks like simple stuff because they feel more included in the magic of it all.

Or, well, because:

(a) it's more resilient (and they already have lost tons of formats of the last 40 years that can only be read with emulators now)

(b) it's simpler

(c) it does what they want (document web) without bloat

(d) it's hella fast

(e) it's light on resources

So, it's not "kinda like some folks like manual-transmission cars", but more like "some folks like a toaster that isn't internet-connected with 30 toast programs and 5 buttons", or to squeeze their own juice packs without a $400 juicer [1], or they like the plain old Hacker News webpage and not some social media sharing monstrosity with 200 options, and prefer static file serving instead of a CMS+Database+...

Gemini might not be practical, but that's because the majority won't know about it, as it won't be pushed by companies, and thus it won't be adopted in any large numbers (heck, advertising-wise, it's against the whole FAANG model).

I don't have any plans on hacking with it, but I was looking for something like a Gopher revival to cut back on web bloat years before Gemini started. That said, I'd prefer if it could somehow be part of the web (not another server/protocol, but a forcefully cut-down web - maybe with something like an HTML version of JavaScript's "strict mode": when toggled, only a minimal set of HTML and no JS would be supported).

[1] https://www.bloomberg.com/news/features/2017-04-19/silicon-v...


People are attracted to Gemini for different reasons. While feeling included in the magic of it all may appeal to those developing clients, servers, or sites, it won't mean much to those who simply want to read what others have to say.

In my case, I appreciate the environment it creates while reading. It is more like reading a book. The text is front and center, progression is mostly linear, and the visual transition between sites is less jarring. Minimalism is nice in the sense that it makes the experience more cohesive, rather than being an end in itself.


> Seems like some folks like simple stuff because they feel more included in the magic of it all.

Gemini vs. HTTP ~ HN vs. Reddit


Wait, I like HN very much, but that's just wrong. HN is basically what one tech-subreddit would be. This is fine, but not very inclusive, just because it is focused. People also like Facebook which is far from simple stuff.

Inclusiveness via simplicity is just one axis in the space of why people like things.


> HN is basically what one tech-subreddit would be

Not really. It may have been true ten years ago, but during that time Reddit has slowly accreted "features" (Reddit gifts, Reddit Premium, sponsored content, a chat...) whereas HN didn't.


> Reddit has slowly accreted "features" (Reddit gifts, Reddit Premium, sponsored content, a chat...) whereas HN didn't.

Also, HN has no native media support (video/GIF, images, photo albums) and no linked-media previews.

Additionally, users on HN have no administration or moderation rights.


>This is fine, but not very inclusive, just because it is focused

That's neither here nor there, though. You could have 20,000 other individual Hacker News-style sites on every subject, and the simplicity would still be there on each individual one.

Not every thing has to be everything to everybody, for the category of the thing to be inclusive.

And Gemini is not content, or a specifically themed website like HN; it's a content delivery format (like HTTP/Gopher/etc.). The analogy between Gemini and HN was about the simplicity of the presentation, not the kind of content HN carries.

On a site with HN's design, the content could still be anything: from whaling to development, and from rocket science to religious practices.

(If anything, something like Gemini would be much easier to make inclusive of, e.g., low-powered devices in the developing world, or screen readers, compared to the modern ultra-dynamic news and content websites that have tons of moving parts, trackers, popups, interstitials, animations, and random JS just to deliver news and text content.)


I think you like manual because that's all you knew at first, but when you try a hill start in an automatic for the first time, you're sold on its comfort.

Now the next step is deciding whether you enjoy the precise control of the manual vs the liberating comfort of the automatic.

I'd argue that a Tiptronic-like solution that does both is perfect and should be the default in all cars. But then it's a matter of cost.

At least you're totally wrong thinking we buy manual because they're simple enough we can modify them :D We're 70M people where I come from, 95% of cars are manual, never ever heard this one.


Manual transmissions are very rare in the USA now. Most auto manufacturers don't use them anymore (even in trucks). And the few that do, only do so in certain models that have to be ordered. I bought a manual in 2019. I had to drive to another state to get it. That was the last year the car I bought came with an option for a manual.


>At least you're totally wrong thinking we buy manual because they're simple enough we can modify them :D We're 70M people where I come from, 95% of cars are manual, never ever heard this one.

And 400M people with manual in Europe, where an equal share (95% or so) is manual. And 99% of the people seldom if ever "modify" or hot-rod their cars.

That's more of an American thing, in the country where they have the automatics as the first choice.


That will probably change once EVs become more popular. All EVs I've heard of use either single-ratio transmissions, or automatic transmissions, even in Europe.


It's that "precise control of the manual" that's enabling.

Presumably we'll see the same thing with fully self-driving cars in the future. That is, a mature self-driving technology should eventually out-perform (unmodified) humans, but even then, when there's no argument for manual driving's practicality, some folks would probably want to drive manually anyway.


To all: please consider reading the Gemini faq [1] before chiming in with impressions of the project. You definitely don't have to agree with the rationales, but the discussion is impoverished without understanding their angle on those decisions. (Esp. seeing a lot of people who might be interested by "2.5 Why not just use a subset of HTTP and HTML?")

[1]: https://gemini.circumlunar.space/docs/faq.gmi


Gemini feels like a job half done: the presentation layer is simplified, but the TCP/TLS stack is still there in all its monstrosity, and so is the model that the user agent downloads stuff from the authoritative source live in response to interaction, with all the associated issues, like leaking meta-data. It also lacks a discoverability protocol improving on the search engine model.

These are hard problems arguably.


TCP is not hard. It's not perfect, but for what Gemini needs (ordered, guaranteed delivery of packets) it's a good tradeoff. Many firewalls happily drop non-TCP/UDP traffic as well, which would hurt adoption.

For TLS, I agree it's a monstrosity, but it was an intentional protocol tradeoff. We really want to have privacy, so the only other way is to homebrew your crypto protocols (which is just as dangerous as homebrewing your crypto algorithms).

Since we're basically stuck with TLS, Gemini asks how to get your money's worth out of TLS - and the answer is to use TLS asymmetric crypto keys for persistent or semi-persistent identity handled by the client. The HTTP web still doesn't have single sign-on, but Gemini does. Just present the same key everywhere.

As for search, it's not like the HTTP web has a good discoverability story either. The only reason why we have search that can pull up any page based on any keyword is because we've spent decades selling our eyeballs to a company that indexed it all by sheer brute force. The Gemini web is arguably no different here - if someone wants to build a search engine, they are free to do so.
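
A sketch of that "same key everywhere" idea in Python (assumes an identity generated once with something like `openssl req -x509 -newkey ed25519 -keyout id.key -out id.crt -days 3650 -nodes`; the host and path are hypothetical):

  import socket, ssl

  ctx = ssl.create_default_context()
  ctx.check_hostname = False
  ctx.verify_mode = ssl.CERT_NONE            # server certs are TOFU anyway
  ctx.load_cert_chain("id.crt", "id.key")    # the same key, presented everywhere

  host = "capsule.example"                   # hypothetical
  with socket.create_connection((host, 1965)) as raw:
      with ctx.wrap_socket(raw, server_hostname=host) as s:
          s.sendall(b"gemini://capsule.example/private/\r\n")
          print(s.recv(4096))  # server decides access by the cert it sees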


Gemini needs neither ordered nor guaranteed delivery of packets; that's the point.

It's a pretty context-free format: each line can be rendered separately, and the user agent can display things as they arrive (with some visual indication of "holes" for lost packets while they're being re-requested). A partial page may also be enough for the user to decide they want to move on, saving pointless delivery of the entire page.

Besides, many Gemini pages fit in 1 or 2 packets, so there is more wastage in the TCP setup dance.
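
For what it's worth, here's how little per-line state a gemtext renderer needs (a sketch; the ``` preformat toggle is the one stateful exception, noted below):

  # Every gemtext line renders on its own; only the ``` preformat
  # toggle carries state between lines.
  def render_line(line: str) -> str:
      if line.startswith("=>"):
          parts = line[2:].strip().split(maxsplit=1)
          url = parts[0] if parts else ""
          label = parts[1] if len(parts) > 1 else url
          return f"[{label}] <{url}>"
      if line.startswith("#"):
          return line.lstrip("#").strip().upper()
      if line.startswith("* "):
          return "  - " + line[2:]
      if line.startswith("> "):
          return "  | " + line[2:]
      return line

  doc = "# Hi\n=> gemini://example.org/ my capsule\n* a list item\n> a quote\nplain text"
  print("\n".join(render_line(l) for l in doc.splitlines()))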


> Since we're basically stuck with TLS, Gemini asks how to get your money's worth out of TLS - and the answer is to use TLS asymmetric crypto keys for persistent or semi-persistent identity handled by the client. The HTTP web still doesn't have single sign-on, but Gemini does. Just present the same key everywhere.

and the selling point is 'no tracking'? I must be missing something.


Well, the client doesn't need to send the key all the time, only when you want auth.

And one could easily implement one keypair per server based on user preference.


TLS is kind of icky (so many size fields), but it's not really a monster if you limit your scope; you can use ciphers from elsewhere, maybe limit to TLS 1.3 if daring, or just 1.3 and 1.2 if you want broad (but not infinite) compatibility.

The monster is x.509 certificates.


> Gemini will remain as it is, frustrating anyone trying to extract value out of it.

Why am I going to use anything if I'm not trying to extract value? I don't mean in the sense of "make money off of lusers", I mean as in "do something that provides value for my effort".

I used gopher to "extract value" 25 years ago, but it turns out I want inline links and images and formatting and interaction (and compression and caching et al) because they're really useful. Look no further than HN for the Gemini ideal use-case, but even here the experience would be worse because of its restrictions.

Reading the spec, Gemini seems like it wants to be the elitist / hipster version of both gopher and HT(TP|ML). It forgets that a lot of user-agent features are pro-user, even if they've been co-opted for nefarious means. It's the CLI tech nerd's dream of a user interface, and discounts both the evolution of the web and of UI/UX design over the past 30 years.


Those who use Gemini find value in it. Their definition of value may be different. It may be expressed in terms of the type of community they wish to participate in, the type of environment they wish to expose themselves to, the hobbies they pursue, how they entertain themselves, or any other reason that keeps them returning.

Painting people with negative stereotypes simply because they seek something different from their online experience is uncalled for. Nothing compels people to use Gemini aside from their own personal interest. Its imprint upon the world is so minuscule that it is impossible to complain of incidental impacts. Simply put, it will not affect the lives of those who choose to ignore it. Contrast that to the web, which is virtually impossible to escape while remaining functional in the modern world.

And why paint it in terms of negatives? People tend to compare Gemini to a crippled version of the web simply because that's what most of the world uses, but it is more accurate to call it a modernized version of Gopher. It simplifies linking to other documents. It is possible to reformat text for small screens. It adds a layer of security. These are all positives.


> Those who use Gemini find value in it.

The fact that your primary answer is a tautology is telling.

I'm asking what value I should find in it: what value do I get from going down this bespoke, inaccessible, poorly tooled path versus using the web with its widely used tools, billions of users and a million onramps?

> Nothing compels people to use Gemini aside from their own personal interest.

Nothing compels me to use it, but nothing incentivizes me to use it, either. I'd like a static site generator for my blog. Why would I pick Gemini over Jekyll, Hugo, etc.? That's the missing part of the story.

> And why paint it in terms of negatives? People tend to compare Gemini to a crippled version of the web simply because that's what most of the world uses, but is more accurate to claim it is a modernized version of Gopher.

People tend to compare Gemini to a crippled version of the web simply because that's what it is. All of your arguments on how it's better than Gopher ignore the fact that the web has all of those same features.

It's negative because the web already did that, it's already a modernized version of Gopher. That's why we use it and not Gopher. That battle was lost when either NCSA Mosaic or Netscape Navigator was released 20something years ago.


There's a difference between delivering value (a user searches for or accesses a resource and fairly expects a resource to deliver a value) and extracting value (a predator inserts itself into the user->searching->accessing->resource chain and opportunistically seizes value from stages in that process).


I was treating "extracting value" as my ability to get value out of something that is "delivering value". Your definition is probably better as a more complete spectrum, so my question becomes "How does Gemini deliver value relative to the effort I need to exert in order to use it?" Essentially, how does it overcome its own learning curve?


When I view a Gemini document in a Gemini browser, I have inherent assurance that:

- I'm looking at something I just downloaded from the URL I entered or navigated to. If I'm really paranoid I can open it up in a text editor and not have to wade through tag soup and potential Javascript obfuscation.

- Nothing is hiding from me anywhere, anything that was retrieved from the server is in the current document view.

- No scripting has modified the document.

- No scripting is listening to input device events and sending those to a server.

- If I tell my client to render stuff in a certain way, such as font preference, font size, how to deal with links, the Gemini document won't break.

- If the Gemini document is a big list or index of a bunch of things I want, such as pictures or other media, I can parse those links ridiculously easily with standard UNIX tools (see the sketch at the end of this comment) and download them when I want with a multitude of download managers. Trying to parse things like Twitter's direct image or video links is hard.

So if you want those things, Gemini is worth the learning curve, which isn't steep (download a client and start browsing).
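To show just how easy that parsing is: every gemtext link lives on its own line starting with "=>", so a few lines of Python (a rough sketch; the saved file name is hypothetical) pull out every URL:

    import re

    # A gemtext link line is "=>", optional whitespace, the URL,
    # then an optional human-readable label.
    LINK = re.compile(r"^=>\s*(\S+)")

    with open("index.gmi") as f:       # hypothetical saved Gemini page
        for line in f:
            m = LINK.match(line)
            if m:
                print(m.group(1))      # pipe into any download manager

The equivalent awk one-liner, awk '/^=> / {print $2}' index.gmi, covers the common case where a space follows the arrow.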


> Why am I going to use anything if I'm not trying to extract value?

(Author here)

What "value" is there to extract from a public park? Some things are just places meant to be appreciated and enjoyed, and lack a "practical" purpose.


I think you misunderstand my argument, and actually are reinforcing it with this comment.

If I take effort to go and visit a park then I get some kind of value from it. I get to relax in nature, see some pretty trees, have a space to play on a playground, etc. Those uses are evident even if I personally prefer concrete and steel, have pollen allergies or don't have kids.

This project sounds more like a single tree on a tiny island. You can go and sit under it, but you need to swim out to the island. It has no low branches, so you can't do anything except sit under it. It's scraggly, so it doesn't provide any shade, so sitting under it isn't very pleasant in the hot sun. Oh, and no boats are allowed on the lake, so you have to swim the half-mile each way.

It's the opposite of your story: the only possible reason someone could have for visiting the island is to log the tree, or just to say they've been there.

And my argument is more about the practicality. Why am I going to go from writing my blog about how fluffy alpacas are on the HT(TP|ML) web to writing it on Gemini? What do I get for my effort and for the restrictions that are now placed on my writing?

I could appreciate a text file on my local disk about as much as a gemtext page because nobody will see it. Network effects are really real.

Using Gemini is building an enormous mountain, and the only reason to climb it is "because it's there".


As far as I can see, the value seems to be in using code golfing as a form of artistic programming. Probably useless, but not any more so than your average video game, or theater performance, or art museum...


The analogy with the useless tree breaks down because gemini still has to be useful to some people. I can understand the tree living as an end in itself, less so a web protocol. Gemini has a mission, and if its usage is too restricted it will barely be a safe space for anybody, because barely anybody will be using it.

It's a tradeoff and I can't judge if gemini got it right, but gemini obviously still wants to be useful in some respects, even as it stays useless in others.


>Gemini has a mission, and if its usage is too restricted it will barely be a safe space for anybody, because barely anybody will be using it.

I think that's the point. When you build a protocol so restrictive and ascetically minimalist that even HN and Lobste.rs, epicenters of the very sort of contrarian tech-hipster anti-modernist elitism that brought it about, mostly complain about it, it's clear that what you're really optimizing for isn't utility but ideological purity and cultural homogeneity.


TIL: protocols have a culture.


Protocols can be built to reinforce the ideals of their creators, and thus attempt to create a culture around themselves, yes. For an example of this, see Urbit.

Gemini exists to satisfy political and cultural, not technical, needs. Most of what Gemini users have to say regarding their preference for it, its benefits, and its purpose is almost entirely about culture, not technology: the web is too complex, too mainstream, too commercial, no longer a small "quirky" community, too full of the wrong people using it the wrong way.

We're here replying to an article whose entire thesis is that Gemini's technical shortcomings, its "uselessness", are its killer feature for promoting a specific kind of culture: for preventing certain forms of expression and encouraging others, inviting certain kinds of people in, and keeping other kinds of people out. That it will never be extended is a decision based on enforcing culture, in the belief that any extra complexity, regardless of its utility, risks the protocol being used in ways that undermine its ideological basis.

So yes, culture.


Browsing around the Gemini community is a thousand times more relaxing than reading on the web. I really recommend picking a nice client and enjoying a look around.


This is funny, because the only way for Gemini to stay that way is by not becoming popular. Because that's exactly what happened to the web.

It was also an idyllic place in the beginning that got crowded, and given that we haven't figured out the next step beyond capitalism, the trash from our material world started to fill the digital realm, because we are still in the same culture.

So the problem Gemini is trying to solve is not solved by fixing web standards, but by fixing our culture, which is a much harder and bigger problem.

If Gemini keeps working, it will figure out a way to sustain itself economically, which in turn will make it possible to create profitable services on it, and then it will be just a matter of time before it becomes exactly what the web is now.

If it doesn't take that path, then given that the Web is a superset of it and it doesn't provide unique value on its own, it will probably fade away or stay very niche.


The whole reason the Web became what it is is that HTML tags and JavaScript allowed for unbounded feature creep. Gemini purposely specifies a simple protocol that puts boundaries around what's allowed.

You would need a critical mass to get an extended Gemini off the ground now. Between the die-hardness of the early-adopter community, which realises that exactly that kind of extensibility is why they needed Gemini in the first place, the support for HTML and Markdown mimetypes over the core protocol (to alleviate some pain points), and the lack of commercial attention, I have high hopes for Gemini.

> it will figure it out a way to sustain itself economically

Why should it? It's not a platform for commerce. Hosting a Gemini server on a residential connection has a negligible cost (a Pi and a few kilowatt-hours per year). Some small web communities host it for free. For personal expression, Gemini should be very cheap.
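(Back of the envelope, assuming a Pi Zero idling around half a watt: 0.5 W × 8760 h ≈ 4.4 kWh per year, which at typical residential rates is on the order of a dollar.)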


When the web arrived, it had no historical record to rely on; everything that happened was unprecedented, because it was the "killer app" of the internet, and by virtue of being the killer app it made the internet something people wanted.

Now in 2021 we have a lot of historical record of how things took shape and got to where they are now. Everything you are saying about Gemini is exactly how the early web was in the beginning, and yet it changed because a lot of forces acted on it.

I completely agree that the Web now is a whale and it's too fat (BTW, this is the point that Gemini is getting right), but I don't agree with the approach at all, as it's not possible to have a time machine and roll the technological status quo back to where it was in the nineties so that a project like Gemini can be a successful branch of the WWW.

A "skinny web" standard would be much more useful, and here i think that Tim Berners Lee is missing the point by creating something like Solid instead.

Anyway, it's a missed chance to point to something for the future; it points to the past instead, leading a lot of good folks astray down the wrong path.

I consider Gemini part of the "retro technology movement", which, while cool and with its heart in the right place, misses a chance to help us all avert the very dark future ahead of us: digital feudalism imposed by the Big Tech goliath monopoly.

We, the tech crowd, need to step back and stop following them into that bleak future. Gemini is subversive in that sense, but while the goals are great, its alienating nature, negating the status quo and forgetting to learn from historical events, is not a step in the right direction.

But if it at least takes some people away from the WWDCs and Google I/Os, that's a few fewer people in the army leading us into the dystopian future we are heading toward right now... and I just hope that Gemini, with a little more historical awareness, could be at least a stone in their path.


A couple of things to unpack here. The main argument that I would like to make is that hindsight about where we ended up by picking the HTTP way matters a great deal for Gemini's adoption.

Back when Gopher and HTTP were slugging it out, the future was unwritten. No matter which nascent vision of the Web we picked, it would be cool. 20+ years later, having run the capability-maximalist (excuse the term) experiment to its logical conclusion, we know for a fact that a web in the HTTP/HTML spirit leads to a hellscape inhospitable to the privacy conscious.

In that sense, Gemini is not a nostalgic look backward. It's a recognition that we picked the wrong evolutionary branch and that we must step back and redo the evolution.

I don't think it can win against the Goliaths of Big Tech, but then I think that no-one will. It's a political problem with no technical solution, because ultimately it involves money. The only way to kill those Goliaths is to get the governments off their seats and get them to sling some stones.

Ultimately, I think that Gemini recognises that slaying Goliaths is not something it can accomplish. It refuses to play that game entirely, and instead focuses on carving out little chunks in the hellscape for those that seek them. It's not forgetting to learn from historical mistakes, it's simply abandoning the pretense that the HTTP web can be unseated.


> ..the only way for Gemini to keep that way, is by not becoming popular. Because that's exactly what happened to the web.

I felt like the web's "change" was when it became skewed towards high-bandwidth content like images and videos, more so than popularity alone.

I can't imagine Gemini becoming as popular, but if it does, another new protocol can grow. :)

> it will probably fade away or keep very niche

Sounds nice, actually.


The big problem I have with Gemini is TLS.

That makes it way too complex and also breaks the ideal of a complete, unmoving spec, since ciphers tend to change as weaknesses are found. It also requires certificate management with roots of trust, expiration dates, etc. You basically need libs, which is a problem on its own. TCP is also a bit of complexity, but not as much, and it is not mandatory, unlike TLS.

And now, since uselessness is the topic (or more precisely, the claim that Gemini can't be polluted like the web because it is useless): don't you think mandating the lack of encryption would make it even more "useless"?

I mean, really, why do people want https? Why does Google insist so much on https? The biggest reason is e-commerce. The secret you have that everyone wants to know is your credit card number, and more generally, everything that allows someone to make a transaction on your behalf. Really, for most people, https protects the bank account.

So, if you think that the commercial web is the enemy, making a protocol that is unsuitable for commerce can be an idea.

I took the reasoning from the ham radio rule that bans encryption. Simply, it is a way to keep most commercial operations out and keep it an amateur thing.


Indeed, tables are an essential thing which should be there, but they should be implemented in a way which makes it impossible to abuse them for layout; they should only be used to represent actual tables. I would even add column data types so the browser would be able to let the user sort them in a meaningful way (e.g. I hate Wikipedia tables which have sorting but can only be sorted alphabetically or something like that).

As for multi-column layout: that's work for the browser to do on the user's demand. View should be decoupled from the content.

For cases when you want to control every aspect of how the information is displayed, there is HTML.

Things like Gemini should not have formatting features beyond the semantic ones Markdown has.


Even better: Gemini supports alternative content types, so there's nothing to prevent you from serving up Markdown documents in response to a Gemini request. Worst case, your client will display it as plaintext (which will still be very readable); best case, it will kindly render it into something pretty but still simple.


The fact that tables must be sorted with JavaScript is a fundamental flaw in the HTML specification. Sortable tables that the website cannot ask to be unsortable are a strong disincentive to use those tables for layout. Mandatory thick & ugly borders would also help.


I like the approach modern gopher clients are taking by natively supporting markdown content. Does Gemini not support that?

Some people (maybe they call themselves "purists") rage against this and kind of insist that gopher is all about text and only text.

To a certain extent they are right - gopher has traditionally been about text. But the designers of gopher didn't have this dogmatic obsession with text-only; they actually wanted to do "VR", i.e. a graphical 3D interface for gopher (complete with "flying" transitions between servers), as well as a visual overlay over a globe showing where gopher servers were physically located. They had dreams of a much richer interface, so I don't think we should artificially constrain ourselves.

The real modern benefit of gopher is the lack of pervasive scripting and cookies/tracking (and FYI modern gopher clients and servers support TLS)


> I like the approach modern gopher clients are taking by natively supporting markdown content. Does Gemini not support that?

Gemini supports RFC 2046 MIME types. So you can serve its own text/gemini, as well as text/markdown, text/html, and so on. You can go ahead and serve images and video, too.

It's up to the client to support the MIME type, but a lot of clients have pretty wide-ranging support (a few actually reuse a web browser engine, so they have full support for a lot of formats). (And markdown will fall back to plain text when unsupported, which is what it is designed for, so it would be fine.)
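For anyone curious what that looks like on the wire: as I read the spec, the response is a single header line (two-digit status, a space, a META field carrying the MIME type, then CRLF) followed by the body, so serving Markdown is just a different META. A toy exchange (hostname invented):

    C: gemini://example.org/notes.md<CR><LF>
    S: 20 text/markdown; charset=utf-8<CR><LF>
    S: # Notes
    S: Plain old markdown, rendered (or not) at the client's discretion.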


I dislike that Gemini requires yet another web server and another web browser.

Couldn't Gemini be simplified to a convention that forbids CSS/JavaScript and allows only basic HTML tags (<h1>-<h6>, <p>, <a>, etc.)?

IMO this could be a better move, because it would benefit from any performance improvements made to the modern web stack (caching, compression, etc.).


Yeah I was thinking that if the system just piggybacked on HTTP's Accept headers, it could be run on any common web hosting service. I could, for example, add a Gemini endpoint to my PHP site.


I don't think the survival of a new protocol is predicated on it being useless, like a gnarly tree to a carpenter. It may be that there are niches in which survival for gnarly technology is possible, but like the quoted survivor tree, they will be in hard-to-assault spaces: gnarly use cases.


The author’s response doesn’t really address the complaint in the post quoted.

The complaint was that Gemini lacks basic utility. Intentionally making things that lack utility is a great way to express an idea, but to me that’s an art form or a statement, not a tool.

Gemini is a communication tool. While it’s noble to make something that is difficult to extract value from, a tool has to serve a purpose. The complaints, to me, feel legitimate critiques for a communication tool.

Whenever I venture into the Gemini-verse, I feel like I’m peeking into personal diaries left out at random. I am not a person to shy away from a good search for interesting information. But on Gemini, I don’t see a point in digging. That makes me feel like it’s still lacking as a communication tool.


> that’s an art form or a statement, not a tool.

(Author here) I would definitely view Gemini more along those lines. Gemini is an interesting space to inhabit and explore -- a space dramatically different than the modern web due to its design. That's the value in it for me!


People make out fine without a useless protocol, I think. There are still a large number of people creating blogs and websites without analytics etc., and with no expectation of being able to monetize their work. As the lobste.rs quote points out, gemini goes a step beyond that, in banning many features that even very minimal websites might want to use.

When I think of a minimal website, I think of danluu.com. But even danluu occasionally uses tables and images: https://danluu.com/cli-complexity/ https://danluu.com/branch-prediction/ . I think these articles would be made worse if those features were removed. I suppose they could be replaced with carefully formatted code blocks, but those would be harder for danluu to make, and would look worse to the reader. Consider the dictum from motherfuckingwebsite.com: "shit works on all devices"; code blocks work very poorly across devices.

I think claiming that gemini is intentionally useless is a bad excuse. Gemini includes some features but not others. There needs to be better reasoning behind the features it excludes.


> There are still a large number of people creating blogs and websites without analytics etc., and with no expectation of being able to monetize their work.

Gemini is a social space as well as a protocol. Part of why it makes such a sharp break with HTTP, rather than e.g. being a subset of HTTP with a special client, is to demarcate such a space for those who want it. Minimal websites still operate within the larger context of the HTTP web, where this demarcation doesn't exist. Click a link to a Facebook profile from that small blog and you've unwittingly chosen to load half a megabyte of markup and tracking scripts. This is not possible with Gemini, because such a capability simply is outside the protocol.

> But even danluu occasionally uses tables and images. https://danluu.com/cli-complexity/ https://danluu.com/branch-prediction/ . I think these articles would be made worse if these features were removed.

I agree that this could be annoying. Gemini has support for alternate mimetypes, so it's possible to return Markdown to the client. I think that Markdown is still in the spirit of Gemini's minimalism, but it does support more things (including tables). If a particular client refuses to render markdown, it's still readable as plaintext.


It's funny and poignant to see this sincere discussion here, at the very heart of the commercial web


Heh.

It seems to me that the discussion here is just rehashing all the discussion on the Gemini mailing list from the past few years.


The lack of images is really an important "feature."

Images seem to do something to the human brain. They grab attention in ways text does not as evidenced by the fact that a dumb image meme can be used as an anchor point to advance the most inane bullshit in either meme or short-attention-span text form.

Image boards and their capacity to originate and amplify utterly inane bullshit are the canonical case, and the format illustrates the principle exactly. Each post consists of an attention grabbing catchy image along with text that is usually one or more of banal, mean spirited, or irrational, and the image meme sells the message to the point that these sites almost feel like cults. Take the images away and there is virtually nothing compelling there.

Edit: there was interesting stuff on the chans once. 4chan was not always identical to /pol. The fact that /pol, originally a containment board, became 4chan and edged out everything else tells you everything you need to know about the format and the medium and what kind of thinking it encourages.

Media guides discourse. This is the most important idea of Marshall McLuhan and why we still talk about him; "the medium is the message." When the quality of the ideas on a medium almost universally declines over time, the medium is the problem not the users of it.


The lack of images limits the ability to create Gemini content on many subjects, however. Just off the top of my head I could name discussion of artworks, some travel writing (no, it isn't all just inane "influencer" shots – bicycle tourers and overlanders like getting some photos to have an idea of what the terrain to traverse is like), building stuff, classical philology where the original edition of a text as it was typeset has to be examined, and so on.

These are subjects for which there are already many minimalist HTML blogs around, but the lack of image support on Gemini makes it impossible for them to support that ecosystem.


Nothing about Gemini prevents you from linking to images.

Furthermore, nothing prevents Gemini clients from eagerly fetching linked images (based on the URI) and displaying them inline- if the user chooses such a client.

Being strictly text-oriented is simply the default, and a default that must be taken seriously by anyone writing a page.


Sourcehut supports Gemini pages. Sourcehut sysop Drew DeVault has a Gemini presence as well.

https://portal.drewdevault.com/x/srht.site

https://drewdevault.com/2020/11/01/What-is-Gemini-anyway.htm...


How about replacing HTML with Markdown?

Markdown is simpler than HTML, but has more to offer than Gemini. You could have tables, images, and, depending on the flavor, also checkboxes. Client-side styling would also be easily possible. And if you want a simple browser, you don't even have to parse it; simply show the plain text.

You do not have to be a programmer to write Markdown and many writers already write in Markdown and then convert to HTML.


It is possible to serve markdown files over the gemini protocol (so says the spec: https://gemini.circumlunar.space/docs/specification.gmi). As for tables, images, etc., it's up to the client (browser) to render them. For example, even though gemtext doesn't specify displaying images among text, some clients can display links to images as actual images inside the same page. Luckily, gemtext is at least very similar to markdown, save for the link markup.
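Side by side (toy URLs), the link markup is the main divergence: gemtext gives each link its own line, while Markdown inlines it:

    gemtext:   => gemini://example.org/cat.jpg A cat photo
    markdown:  [A cat photo](https://example.org/cat.jpg)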


Both Markdown and Gemini's markup are missing some semantic tagging useful for screenreaders. The disabled should not be second-class citizens in any new web community.


Never thought of it this way (privileged enough to never have had to). Are there any good resources that discuss this a little further?


What semantic tagging does markdown miss?


Take ISO 639 tagging for words in a foreign language different from the main language of the document. For example, "real" should be pronounced by the screenreader differently in English and Spanish, "coin" differently in English, French or Irish, etc. A screenreader might be able to use some heuristic to identify the language of longer snippets of text, but for one-off words it doesn't have enough data to work with. HTML has the lang="" attribute to guide screenreaders, but I am not aware of any Markdown equivalent.


How common is the use of these affordances in practice? I'm just now discovering that it's possible.

I'll say this: it's very cool that it's possible to make text more accessible to people using a screenreader than it is to people reading it raw. For instance, if I were to refer to the real real, I would do what I just did, and italicize the word to indicate that it's foreign. Someone who knows how Spanish works can interpolate from "italic real" and get /real/ instead of /ri:l/ out of it, and that's basically what a screenreader would do, literally say "italic real".

I'm not convinced that accessibility demands that words be pronounced properly under such circumstances. Note that someone completely ignorant of Spanish would just see "italic real" and maybe think it represents emphasis, so the sighted are not at an advantage here.

Again, it's pretty cool that there's a way to do it in HTML! Semantic markup is neat, even if it doesn't get used much.


Foreign-language words used in a text are not always set in italic. For example, the author might use a proper noun (so unitalicized) like the name of a minor city in a foreign country that isn’t going to be in the screenreader's dictionary. In these cases, language tagging helps the screenreader output something intelligible instead of garbage.

Generally, use of this tagging is obligatory to meet accessibility guidelines. If in practice people don't use the tags so much, this is a problem. Again, the disabled should not be second-class citizens on the web or any medium that aims to substitute for it.


I don't know how common it is, but I do it. I encode foreign terms as <span lang="de" title="above everything">über alles</span>. How the client deals with that is up to the client, but most of the browsers I've used will show the title attribute if you hover over the term. I also style the <span> tags with the language attribute.


I'm glad to see this article and posts like the one mentioned because Gemini really doesn't make sense. It seems like a hobbyist project that people now are getting defensive about when questioned on the value of its spread and existence. Gemini misses the point. The problem with the web is Javascript, not italics. It's a good example of throwing the baby out with the bathwater.


Why not simply use HTML with JS off, plus userscripts to block cookie banners and sticky headers? You get modern CSS features like column display.


It's a shame it can't add on any innovations that didn't make the cut for the current web, such as Ted Nelson's parallel documents.

Seems like any interest in making technology more than a replica of reality has disappeared. Maybe it's even going in the opposite direction: what we have in reality is too complicated to recreate, so the digital becomes more constrictive than the real thing.


We need a foothold first. The Web 3.0 (4.0?) took all the oxygen out of the room for Web+ technologies, so we first need a healthier ecosystem in which to frame the discussion.


Could you elaborate please?


Gemini always seemed like more of a political statement than a useful tool, imo


Direct browser support for Markdown would accomplish many of the same goals.


Other products that “can’t emulate typography produced on a daily basis – often in a hurry – 150 years ago”, but that you may have heard of: Twitter, WhatsApp.

Also, how well does html do multiple columns with text flowing between them nowadays?

⇒ I don’t think the original argument is a strong one.


> Also, how well does html do multiple columns with text flowing between them nowadays?

With the CSS property `columns`, this is pretty easy to do (short sketch below):

https://jsfiddle.net/52j401rg/

https://developer.mozilla.org/en-US/docs/Web/CSS/columns
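A minimal sketch (the selector and measurements are arbitrary):

    article {
      columns: 3;                    /* text flows between the columns */
      column-gap: 2em;
      column-rule: 1px solid #ccc;   /* optional vertical rule */
    }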


The gemini FAQ talks about not having any feature, intentional or unintentional, that can be used for user tracking... and then talks about TLS client certificates being first-class citizens.

Guess I'm a bit confused about the design criteria.


Can TLS certificates in the way that Gemini uses them be used to track end users?


Presumably? I haven't read the gemini spec and don't know how they use them, but the entire point of a TLS certificate (whether client or server) is to give a persistent identity that can't be faked.

Edit: found the spec - https://gemini.circumlunar.space/docs/specification.html

Sounds like they're passing the buck a bit to the client. The client can decide how long-lived the certificate is, so it's up to the user whether they want a long-lived identity or a short-lived one. Doesn't sound all that different from cookies to me. In old web browsers you were always prompted whether to accept each and every cookie. Everyone clicked yes, so web browsers stopped asking. This seems the same.
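To make the cookie comparison concrete, here's a rough, untested sketch (hostname made up) of a Gemini request in Python. The identity exists only if the user chooses to load a keypair; "clearing cookies" is just generating a new self-signed cert:

    import socket
    import ssl

    def gemini_fetch(url, host, port=1965, certfile=None, keyfile=None):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # keep the TLS scope narrow
        ctx.check_hostname = False                    # Gemini clients typically do TOFU
        ctx.verify_mode = ssl.CERT_NONE               # ...instead of CA chain validation
        if certfile:
            # Presenting this self-signed keypair IS the persistent identity;
            # omit it and the server gets nothing cookie-like at all.
            ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall((url + "\r\n").encode("utf-8"))  # request = URL + CRLF
                # Header is "<status> <meta>\r\n", at most 1029 bytes;
                # a single recv is a simplification for the sketch.
                return tls.recv(1029).decode("utf-8")

    print(gemini_fetch("gemini://example.org/", "example.org"))

The difference from cookies is who holds the switch: the server can't set or refresh the identity, it can only ask for it.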


The initial quote is very odd. Someone complaining you can't do newspaper layouts in Gemini. Newspaper layouts are determined by the needs of the medium, that being fixed content printed on paper of specific proportions.

Gemini isn't useless because it can't do some arbitrary thing, but I think it's useless because it has no pragmatic purpose. The article author even claims it won't be modified once it's formalized. It's like the definition of "dead on arrival".


My problem is that protocols and software are not trees with their own imperatives for existence. They're just tools for people, and a useless tool gets neglected. Even a fun tool that's not as capable as others gets used less and less over time, once you get past a period of infatuation. And with a small user base, that trend turns into a death spiral.

I'm curious to see how long Gemini keeps an active community.


Gopher seemed to have a healthy (if small) adoption for years after it was declared dead. If Gemini becomes a Gopher+, then it has a strong chance at staying around.


Gopher has actually evolved since, though. The Gemini community doesn't want their protocol to evolve.


The Gopher+ protocol doesn't seem to be that popular in practice, and the only evolution of the gopher protocol since the mid-90s is the introduction of a few new selector types not found in RFC-1436 (like 'i' and 'h'), and the switch to UTF-8 (instead of ISO-8859-1 mentioned in RFC-1436). There are a few servers and clients that support TLS for gopher, but it's not exactly a good fit [1].

[1] http://boston.conman.org/2019/03/31.1


My bad, I didn't realise that Gopher+ was an actual protocol. I simply meant to use that name as a placeholder for something that's slightly more than basic Gopher, not to refer to any specific proposal.


> If your technology can’t emulate typography produced on a daily basis – often in a hurry – 150 years ago then your tech is rather limited.

Just wait until Alex hears about illuminated manuscripts from the 13th century. Gemini's inability to depict historiated initials is an even bigger tragedy.


I would say Gemini has about the same characteristics as Hacker News, which is why we love it so much!


HN is IMHO a good counter-response: it works just as it is on the web, and AFAIK you couldn't build it on Gemini, because having forms etc. is "going too far" for Gemini. (And to be fair, if your model is the imaginary "good old web", then maybe forums are a step too far, because that's what mailing lists are for? It's an exercise in making things purposefully extra effort, after all.)


You can have forms on Gemini.

> if your model is the imaginary "good old web"

Is your assertion that the original cookie-free, javascript-free web was a hellscape of arbitrary executable code intended to exfiltrate as much monetisable information from the user as possible? Just because someone doesn't like the current situation doesn't mean the past was imaginary.


> You can have forms on Gemini.

As far as I can see this doesn't cover most uses of forms, since the result is appended to the URL after `?` and the URL is limited to 1024 bytes.
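For reference, here's what the whole "form" mechanism looks like as I read the spec (a toy exchange; the URL is invented): a status-10 response carries a prompt, and the client repeats the request with the input percent-encoded into the query string. That query string is exactly what has to fit inside the 1024-byte URL:

    C: gemini://example.org/guestbook<CR><LF>
    S: 10 Leave a message:<CR><LF>
    C: gemini://example.org/guestbook?Hello%20from%20HN<CR><LF>
    S: 20 text/gemini<CR><LF>
    S: Thanks for signing!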


For short to mid-length posts, it should probably be enough. Long-form content could be handled by e.g. uploading posts via SFTP or IPFS. As a bonus, the user can also download all of their previous posts and take them elsewhere if they want to jump ship or export.


I don't consider 1024 bytes "mid length", especially considering non-Latin scripts. For comparison, Twitter's 280-characters-of-a-sort limit translates to a maximum of 840 UTF-8 bytes (reachable for many Middle Eastern writing systems).

I also don't find the usage of SFTP or IPFS reasonable, because that would mean two separate clients for a single Gemini site. I personally imagined email instead, and email alone can already handle the forum usage (i.e. mailing lists). Gemini in its current state is simply not meant to be used as a full-featured forum. I do think there is a rather simple extension to allow that usage (add an equivalent of HTTP POST to the request protocol, and extend INPUT to give a pre-filled template), but any additional feature to Gemini can possibly weaken its merits.


> You can have forms on Gemini.

Oh interesting, had missed that when I took a look at it.

> Is your assertion ...

Obviously not, and responses like this from its proponents are a main reason I'm not interested in it, despite quite a lot of ideological overlap. Saying that not everything in the past was as perfect as it's sometimes presented is not asserting that everything today is better.


> Obviously not, and responses like this from its proponents are a main reason I'm not interested in it, despite quite a lot of ideological overlap. Saying that not everything in the past was as perfect as it's sometimes presented is not asserting that everything today is better.

Some people are optimising for different properties than the current web. They are developing new technologies to do that.

It is entirely hostile to say that their model is 'the imaginary "good old web"' or to imply that they present the past as perfect (as you do here - clearly in poor faith). They do neither. They are learning from the past and the present and trying to develop some other technologies and a new community that addresses some of their concerns. With great success!

It's strictly impossible for a person who advocates for Gemini to believe the past was perfect. A person who felt the past was perfect would be using the technologies of the past. The people who use Gemini want some things to be different, and find it productive to learn from the past and the present in building this new space. It is a new and fulfilling project. This entire thread is based around the premise that it is sufficient as it is, even if a bazillion people aren't posting on it. No imaginary models need enter into it.

Twice now you've used standard but false tropes: anyone who wants to learn from the past and the present instead of accepting the present as good enough has an imaginary view of the past and believes the past is perfect. Sometimes a person runs "git checkout -- some/file" while fully acknowledging that the reason they started editing some/file was that it wasn't perfect when they started. Sometimes a person runs "git bisect" to see how the next version of the code can have the wins of the past and the wins of the present. Acknowledging value in the past is not presenting it as perfect, nor mere imagination.

If you don't like getting pushback for mischaracterising someone's opinions and projects, feel free to not do that. If the main reason you're not interested in it is that people defend themselves against such unreasonable mischaracterisations, it's probably Gemini that's winning.


We're not looking for perfect. We're looking for good enough. The web isn't.


Why is it that people who go, "Is [it] your assertion-" or "Is it your contention-" never actually get their rhetorical question right?


But you can actually build HN on Gemini!

gemini://geddit.glv.one https://proxy.vulpes.one/gemini/geddit.glv.one


I can imagine a PhpBB alternative written for Gemini very easily, both on the posting and reading side. A 1024-byte post limit is plenty for a thriving discussion.


I think you will like this gemini site, then: gemini://geddit.glv.one

It's a Reddit/HN clone for gemini.

cf. https://github.com/pitr/geddit


I tried gemini. I like the idea of a frugal web protocol; it's just too esoteric as it is, IMO. I like different, but it felt just a tad too much and too crude.


What is Gemini?


A modern successor to gopher, with some extra features to make it more useful in the modern day.

Gopher was a competitor to the early web. It had a hard distinction between contentful pages and link pages, so it was less flexible than the HTML-based web. I'm pretty sure this is one of the things Gemini fixes.



Or the direct answer to the question in the FAQ:

https://gemini.circumlunar.space/docs/faq.gmi


Better question maybe: why is Gemini?


Gemini is because some people are sick of the modern web.

In the (idealised) olden days, the web was a place where people posted content. An amateur could make a GeoCities page that showed people their interests, an academic could have a collection of pages that acted as notes for their lectures, or a company could advertise the products that it sold.

In the (dystopianised) modern days, the web is a giant network of interlinked computer programs, none of which can be trusted, but most of which offer at least some attractive distraction, and whose primary purpose is to develop a small number of competing databases about you, to maximise the amount of money that can be extracted from you while minimising the amount of value returned to you. The providers of these computer programs take particular steps which should cause rational people to distrust them (e.g. hiding the button that says "Save my choices" to discourage you from doing what you want and what they are obliged to permit you to do), but healthy people can only tolerate so much distrust in their day-to-day lives before they become exhausted.

Gemini starts with extensions to Gopher to develop something a little bit more like the first one. It acts as something like a safe space. It is based on a similar sort of principle to the black-and-white mode on a lot of modern phones, to discourage you from overusing it by making it less attractive. Although Gemini does support form submissions and CGI, the primary form of interaction as far as I know is to have multiple gemlogs.

(I tried using Gemini last year when it was mentioned in this place. But the content I want - e.g. programming language API documentation - is not on Gemini, and I think the Hacker News proxy is read-only, so I began to forget about it. I think Gemini is perhaps a little bit too far over.)


> the web is a giant network of interlinked computer programs, none of which can be trusted

Agreed, but Gemini doesn't address the problem of surveillance.

All metadata are still leaked: IP addresses, DNS queries, FQDNs in the TLS session opening. Also timing attacks.

Furthermore, there's nothing that Gemini can do to prevent unofficial extensions, e.g. browsers detecting and loading HTML/CSS/javascript found on a Gemini page.


> All metadata are still leaked: IP addresses, DNS queries, FQDNs in the TLS session opening. Also timing attacks.

That's the problem with the current iteration of the internet. If you run a Gemini-based hidden service, they go away.

> Furthermore, there's nothing that Gemini can do to prevent unofficial extensions, e.g. browsers detecting and loading HTML/CSS/javascript found on a Gemini page.

That's a problem for the client to take care of. Clients that aren't built with web technologies are unlikely to be subject to accidental web technology execution.


The only issue with gemini is that it should parse gopher for backwards compatibility.


Lagrange handles both gemini and gopher URLs. It's pretty seamless.


I mean the gemini servers.



