Hacker News
Gemini – A new, collaboratively designed internet protocol (circumlunar.space)
200 points by _emacsomancer_ on May 1, 2020 | 62 comments

This is so intensely cool.

Urbit was on its own island until recently - the only project challenging the domination of WWW.

This + Tildeverse feels like the very, very, very early days of hackers starting to play with alternate protocols, and the style and format of WWW.

It's fun, non-commercial, social, and aimed at hobbyists - just like the early WWW.

Given another 10 years of experimentation, I could 100% imagine WWW being seen as corporate, commercial and professional space.

Right now there isn't a good pseudonymous layer, because everything is very tied to real identity. Privacy doesn't really exist in a world of shadow profiles. And, with my commercial hat on, nor do I want it to. I want to be able to retarget the shit out of email lists, and run intensive Facebook ad campaigns.

The needs of commercial space are not the needs of personal space. One solution could be different protocols, or at least different spaces, for each.

"Given another 10 years of experimentation, I could 100% imagine WWW being seen as corporate, commercial and professional space."

Excerpt below is from the file /scripts/web included with the original netcat in 1995.

   #! /bin/sh
   ## The web sucks.  It is a mighty dismal kludge built out of a thousand
   ## tiny dismal kludges all band-aided together, and now these bottom-line
   ## clueless pinheads who never heard of "TCP handshake" want to run
   ## *commerce* over the damn thing.  Ye godz.  Welcome to TV of the next
   ## century -- six million channels of worthless shit to choose from, and
   ## about as much security as today's cable industry!
   ## Having grown mightily tired of pain in the ass browsers, I decided
   ## to build the minimalist client.  It doesn't handle POST, just GETs, but
   ## the majority of cgi forms handlers apparently ignore the method anyway.
   ## A distinct advantage is that it *doesn't* pass on any other information
   ## to the server, like Referer: or info about your local machine such as
   ## Netscum tries to!
   ## Since the first version, this has become the *almost*-minimalist client,
   ## but it saves a lot of typing now.  And with netcat as its backend, it's
   ## totally the balls.  Don't have netcat?  Get it here in /src/hacks!
   ## _H* 950824, updated 951009 et seq.
FWIW, I still use original nc and similar TCP clients to "interact" with the www. Works great.

Not a fan of SSL, now TLS, which was in fact created to facilitate commercial use of the www, or "e-commerce", in the 1990s.

As an ongoing experiment in a different protocol/space, I run CurveCP on home LAN.

Incidentally, "Netscum" is a reference to Netscape Communications Corporation. They introduced SSL. This company was founded to commercialise a web browser, NCSA Mosaic, originally co-written by Marc Andreessen (now the venture capitalist of a16z.com) as part of an academic project at the U. of Illinois. The name "Mosaic" refers to its support for a variety of internet protocols, not all of them web-based. Netscum code-named their derivative browser "Mozilla", marketed under names like "Navigator" and "Communicator".

Fearing the growth of the web, Microsoft acquired rights to use the NCSA Mosaic code and produced a derivative browser called "Internet Explorer", which they included as part of Windows.

As web use grew, Mozilla Foundation, who inherited rights to publish the Netscape source code, added a "search box" to their derivative browser called "Firefox" and started directing search queries to Google, receiving large payouts in return. Mozilla Corporation was formed.

Google later hired developers from Mozilla to write another derivative browser called "Chrome".

For mobile users:

The web sucks. It is a mighty dismal kludge built out of a thousand tiny dismal kludges all band-aided together, and now these bottom-line clueless pinheads who never heard of "TCP handshake" want to run commerce over the damn thing. Ye godz. Welcome to TV of the next century -- six million channels of worthless shit to choose from, and about as much security as today's cable industry!

Having grown mightily tired of pain in the ass browsers, I decided to build the minimalist client. It doesn't handle POST, just GETs, but the majority of cgi forms handlers apparently ignore the method anyway.

A distinct advantage is that it doesn't pass on any other information to the server, like Referer: or info about your local machine such as Netscum tries to!

Since the first version, this has become the almost-minimalist client, but it saves a lot of typing now. And with netcat as its backend, it's totally the balls. Don't have netcat? Get it here in /src/hacks!

_H* 950824, updated 951009 et seq.

Wish I had scrolled down first :D

Whoa CurveCP looks super interesting.

Any resources you’d recommend beyond the main web site?

+1 to all of this. I hope more innovation happens in this area over the next years. This also makes DAT, IPFS, etc. to be interesting technologies (IMO) as they help to decentralize our highly centralized/commercial "web" as it is today.

> Urbit was on its own island until recently

In what way?

This might be me missing some awesome projects, but I haven't heard of many other projects with Urbit's scale of ambition and scope.

WWW is so powerful, everything exists within it. There's not much competition at the OS layer, and virtually none at the protocol level.

When people go back to using Gopher, you know the Web has turned to shit. I hope something good can come from these alternative protocols, a new space where oldtimers like me can focus on honest information exchange (what the Internet used to be), without the corporate behemoths tracking our every move.

Google's motto used to be "to organize the world's information". It's a daily occurrence where I'm wasting enormous amounts of time because Google search produces results of such low quality as to defy belief (startup opportunity right there, it's gotten so bad somebody should eat their lunch by doing a better search).

I'm yearning for an alternative that's close to what I experienced two decades ago.

After recently discovering Gopher and falling down a deep (gopher)hole of exploration, I’m convinced that it, or something similar is a great alternative to the commercial web. What a volunteer-run co-op which sells food in bulk bins is to Walmart.

Simple, fast and open. The exact opposite to something like Urbit, which while it might be open source is designed to be as closed as possible.

Incidentally, I recently found a good Gopher client for iOS. It has some rough edges but is definitely one of the most usable clients.


I installed `elpher` via emacs and then didn't know where to go to actually see any content. Eventually I followed a link from "The Elpher Project Page" - it linked to "Project Gemini (hosted using gemini)". From there I found something called CAPCOM and am having fun exploring... but my suggestion is "list some gemini content on the gemini site!"

(Update: oh dear, I've never played with Gopher before. This is my kind of internet! There goes the rest of my day...)

If you're looking for a small week-end project, try checking out the spec and implementing a Gemini client or server. I wrote up a server over two days during my lunch breaks and it was relaxing working with such a small and simple protocol.

There's also a lot of neat stuff at CAPCOM[1] (which is sort of like a public RSS feed) if you're looking for capsules (what Gemini calls websites) to check out.

[1] gemini://gemini.circumlunar.space/capcom/
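For anyone tempted by that weekend project, a rough sketch of the client side in Python (names are my own; certificate verification is disabled here because Gemini servers commonly use self-signed certificates with trust-on-first-use, so treat this as a toy, not a real client):

```python
import socket
import ssl

def parse_header(line):
    """Split a Gemini response header into (status, meta).
    Per the spec the header is '<STATUS><SPACE><META>' ending in CRLF."""
    status, _, meta = line.strip().partition(" ")
    return int(status), meta

def fetch(url, timeout=10):
    """Fetch a Gemini URL: one TLS connection, one request line, one response."""
    host = url.split("//", 1)[1].split("/", 1)[0]
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # TOFU in the wild; a real client
    ctx.verify_mode = ssl.CERT_NONE     # should pin certificates instead
    with socket.create_connection((host, 1965), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall((url + "\r\n").encode("utf-8"))
            data = b""
            while chunk := tls.recv(4096):
                data += chunk
    header, _, body = data.partition(b"\r\n")
    status, meta = parse_header(header.decode("utf-8"))
    return status, meta, body
```

The whole request is the URL plus CRLF, and the connection closes after one response, which is why the client fits in a screenful.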

Very cool, this is essentially just an updated (modernised) Gopher protocol.

I wish the protocol was designed so that the server signed the document itself [well, most likely a hash of the document]. That would allow caches, archives, and proxies to prove that a document did in fact come from the claimed origin.

Unfortunately the Gemini protocol uses TLS, and so only offers the standard guarantee of HTTPS: a client can confirm it is communicating with the origin server, but it is unable to transfer that guarantee to anyone else.

Having played with this now via their "kiosk"[1] it's very cool and I think there's something here.

I expect only the kind of people who visit HN will ever try it and fewer will ever use it, but the protocol is very nice and simple. I wish it would take off and that a nice GUI client was written so that it was easier to use.

[1]: ssh kiosk@gemini.circumlunar.space

Awesome, this makes me feel like it's 1998 again, in the best possible way :)

To the sibling comment: It's a bit non-obvious indeed. It allows you to use their AV-98 Gemini client. [0] The source seems to be basically the documentation here ;)

Try this:

AV-98> tour gemini://gemini.circumlunar.space/

Then add links that you want to visit (they're numbered) with, for example

AV-98> tour 1

and navigate there with

AV-98> tour

[0] https://tildegit.org/solderpunk/AV-98/src/branch/master/av98...

There is the Castor[1] GUI client, which ends up being a bit nicer to use. But the protocol draws heavily from Gopher so text-based clients are the default.

[1] https://sr.ht/~julienxx/Castor/

What on earth is this kiosk thing? There's no explanation whatsoever. `help` does not do much.

I'm acquainted with some of the folks involved in this project, and it's been a privilege to see them bring the concept so much to life in such a short span of time. For those who share an active interest in alternatives to the farrago that the modern web has become, I can unreservedly recommend Gemini to your attention.

Gemini is such a widely used name for systems. I wish people would stop reusing it, in place of finding new unique names.

Of course, someone on HN has to make this ultimate bike-shed comment about a project's name.

Last time I tried to grouch that grouch, someone pointed out how the ratio of ambiguously-named projects to all projects approaches unity. Try it yourself, it's fun:

Haskell (named for a guy)

Curry (also named for that guy)

...you can go on aaaaaall day...

I see that they've specified both a transport protocol, to replace HTTPS, and a document format, to replace HTML.

BLUF: They should have just run the text/gemini format on top of HTTP/1.1, made a gemini --> HTML formatter, and maybe a restricted subset of HTTPS, and called it a day. Replacing HTTPS is a waste of time. Also, most of the benefits of the document format could be gotten with a sane subset of HTML. There are no mandatory bad parts to HTTP or HTML.

I've seen this "The Web is too complex, we need Gopher" sentiment on the Fediverse a few times and it looks like the same class of thinking as "C++ / Rust is too complex, we need C."

They are complaining about how _bad_ parties use HTTP and HTML and concluding that _good_ people should disavow HTTP and HTML as a result. It is like refusing to drive your pickup truck because someone else's truck has truck nuts on it.

But I've run websites with the "Motherfucking website" HTML style and it's fine.

All the complexity of the web is opt-in. Switching my site to Gemini wouldn't prevent, say, the New York Times from wanting a complex HTML website. All I'm doing is shooting myself in the foot to spite my enemy. The FAQ says they intend to co-exist with the web, so I'm sure they agree with me on this. They just want to lead by example. I also don't think it's a good example.

About extensibility, from Section 2.1.2 in the FAQ:

"Gemini is designed with an acute awareness that the modern web is a privacy disaster, and that the internet is not a safe place for plaintext. Things like browser fingerprinting and Etag-based "supercookies" are an important cautionary tale: user tracking can and will be snuck in via the backdoor using protocol features which were not designed to facilitate it. Thus, protocol designers must not only avoid designing in tracking features (which is easy), but also assume active malicious intent and avoid designing anything which could be subverted to provide effective tracking. This concern manifests as a deliberate non-extensibility in many parts of the Gemini protocol."

These claims are made:

- Privacy violations are inherent to HTTP/HTTPS/HTML

- Making a protocol non-extensible is feasible

But if you're specifying a completely new client and server, you could also just refuse to send and accept the ETag and cookie headers that are known to allow privacy violation.

And no protocol is non-extensible. They seem to think that software and ideas are controlled and owned by the first people to think of them. But if Gemini catches on, then it can be forked. This should be obvious to people working in FLOSS. I seem to recall it happened to IRC. Designed simple, forked into incompatible competing versions, the official next version is in dev hell, and now it's also competing with XMPP and Matrix.

Perhaps that belief is why they chose to make a new spec instead of defining a subset of HTTP and HTML. They think that HTTP and HTML are atomic and we must not reuse any good ideas from them, they've been tainted with bad ideas, so we have to change everything all at once.

To this end they even made the status codes different from HTTP.

"Importantly, the first digit of Gemini status codes do not group codes into vague categories like "client error" and "server error" as per HTTP. Instead, the first digit alone provides enough information for a client to determine how to handle the response."

They could have just specified a subset of HTTP status codes, to make it easier to remember which codes are which. Personally I like having 4xx and 5xx separate. Maybe they were really happy to save 33% of status code bytes compared to HTTP.
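For what it's worth, the first-digit scheme is trivial to act on in code. A toy dispatch, with category names paraphrased from the Gemini spec (the dict and function are my own illustration):

```python
# Gemini clients key their behaviour off the first digit of the
# two-digit status code, rather than HTTP's 4xx/5xx grouping.
GEMINI_CATEGORIES = {
    1: "input expected",
    2: "success",
    3: "redirect",
    4: "temporary failure",
    5: "permanent failure",
    6: "client certificate required",
}

def handle(status):
    """Return the client-facing category for a Gemini status code."""
    return GEMINI_CATEGORIES.get(status // 10, "unknown")
```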

Regarding performance, the spec says, "Connections are closed at the end of a single transaction and cannot be reused."

I believe there's also no inline media, so 1 document == 1 connection == 1 request.

Again, this is completely possible with a sane subset of HTML and HTTP - Just write a server that can't reuse connections, and write HTML that doesn't have inline media. Use a linter or transpiler (from text/gemini to HTML) to enforce that.

But if you _do_ reuse connections, or use something like QUIC, then you can get better performance. So they are making that impossible. Again, until someone forks it and adds it anyway.

I feel like I'm the crazy one because there's clearly a few people working on this project seriously, and I'm one person writing a rambling comment. But I don't see the point. Now I feel like I owe the world a subset of HTTP and HTML to put my money where my mouth is.

I've wanted to make something like this, and I'm happy to see it's here because implementing a client looks like a lot of fun.

Here is why I want a simpler web: The New York Times won't be there! Neither will Google or Facebook or Shopify or influencers or clickbait even! I want a web that is hard to make money from. Something that doesn't support fingerprinting and ad tracking, where my interests aren't at odds with the "platforms".

I want a place where people are posting ideas and creations and info and software and labors of love for free, with weird one-off communities that don't get embroiled in national censorship debates.

I might even want the relatively high barrier to entry, the fact that other people there would be looking for the same thing instead of being directed there by browser defaults and content portals.

That sounds like an online utopia, almost like the feeling of discovering the joy of the internet again. I want that too.

That is how I felt when I discovered gopher and then went on to work on gemini. I can wholeheartedly recommend both communities.

I can tell you, from having been involved in a number of the discussions on the mailing list, that these points were definitely discussed. There were a number of people on your side. HTML subsets were discussed, as was using markdown (or a markdown subset). In the end a simple link syntax was chosen along with a few optional items. The simple link syntax is much more than a stripped down html 1.0... it is gopher but with a better syntax that is easier for non-developers to generate content with.

Gemini isn't perfect and is definitely a work in progress that serves a purpose and fits a niche. In my opinion it is geared more to converting people from gopher to gemini rather than from the web to gemini, though I hope we'll get some of both :)
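To illustrate how little machinery that link syntax needs, here's a toy text/gemini line parser (heading handling simplified; the function name and tuple shapes are my own):

```python
# text/gemini is line-oriented: a line starting '=>' is a link
# ('=>' + whitespace + URL + optional label), '#' starts a heading,
# and everything else is plain text.
def parse_line(line):
    if line.startswith("=>"):
        rest = line[2:].strip()
        url, _, label = rest.partition(" ")
        return ("link", url, label.strip() or url)  # label defaults to the URL
    if line.startswith("#"):
        return ("heading", line.lstrip("#").strip())
    return ("text", line)
```

That a non-developer can hand-write a valid link line is most of the point of the syntax.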

Did someone also point out that CRLF is a pointless Microsoftian archaism? There's no need anymore to use two characters to mark the end of a line.

Yes, several times. It was kept in because other text based protocols (like SMTP, HTTP) also specify the use of CRLF, at least for the request.

It also doesn’t really hurt anyone, though it would be nice if new protocols could try to remove the need from their specs.

We’re going to build new clients and servers anyway, so we might as well handle just \n

> It also doesn’t really hurt anyone

Well it does. Extra characters in the HTTP spec end up getting sent trillions of times per year over the Internet costing real money in bandwidth in aggregate.

> They are complaining about how _bad_ parties use HTTP and HTML and concluding that _good_ people should disavow HTTP and HTML as a result.

A protocol that can be easily abused is perhaps not a good protocol.

A verbose markup language with loads of historical baggage, inconsistent implementations (much better these days), and whose spec is half-baked and incomplete at best, is perhaps not a good markup language.

With respect to the status codes, the original spec was worse (single digit response codes) than what exists now. I didn't agree with the original spec, so when I wrote my own Gemini server [1][2] I reused codes from HTTP. It was only after a drawn out discussion between myself and the original designer that a compromise was struck and we ended up with two digit response codes (and redirects---those weren't in the original spec either).

[1] https://github.com/spc476/GLV-1.12556

[2] It was the first Gemini server to be written.

I still don't see why Gemini couldn't use a small subset of HTTP itself. HTTP is really great, just a little too complicated with all the extra RFC adding stuff to it (cookies, CORS)... if you ignore all that and remove a few of the fancy things in HTTP/1.1 (as defined in RFC-7230), like content-negotiation, you get pretty much what Gemini wants, no?

God forbid someone explores a few ideas, huh?

> Now I feel like I owe the world a subset of HTTP and HTML to put my money where my mouth is.

There are such subsets around: the one used by Texinfo's HTML export for portability [1] and by XMPP for XHTML-IM [2], the ones supported by simpler web browsers, the ones individual developers or projects stick to. An issue with that is the differences between those subsets (along with presence of multiple ones), although they seem to have quite a bit in common: attempted accessible websites tend to be usable in basic (or restricted) web browsers, possibly about as often as random/unrestricted websites are usable in major web browsers.

[1] https://www.gnu.org/software/texinfo/manual/texinfo/texinfo....

[2] https://xmpp.org/extensions/xep-0071.html

> But if you _do_ reuse connections, or use something like QUIC, then you can get better performance

I don't have opinions about the rest, but this is a good point.

With text-only I think one could easily slurp a whole Gemini site in one go. Although it could complicate the spec, specifying a way to download the whole site as a compressed archive could be nice. Clients could then offer an offline mode and it could help with mirroring sites.

Think of it more as a beefed-up Gopher than a pared-down web.

Here's the source for one of the clients (Bombadillo): https://tildegit.org/sloum/bombadillo

Thanks for linking! More info can be found here: http://bombadillo.colorfield.space for those that are interested. A number of other clients also host at tildegit.org (a search for gemini will probably yield them).

OMG, this spec was so readable I actually read the entire thing for pleasure.

I think this is a first.

I agree it is neat.

My experience tells me it is a sign of too much ambiguity and gaps that will bite you later though. See Markdown for example.

It’s like religion - one of the reasons they say Christianity won over Judaism is because they didn’t require one to circumcise before the initiation. Yet both are equally rejecting of any other religion.

It also helps that the Jewish books predicted Christianity. Isaiah 53, for example.

Only if you choose to interpret it that way, while ignoring other bits.

Judaism is also quite "racist". There is only one "chosen" people. Everyone else would be/is a second citizen. Who would join a religion to be 2nd citizen?

I'd encourage you to actually attend a synagogue service, or even just have a chat with a real-life Jewish person, before exposing yourself to ridicule as you have here.

In my life as a Jewish person, I've never _once_ heard a rabbi even hint at such a sentiment. Converts are treated exactly as Jewish as those who are born into the faith. What leads you to think otherwise?

Most every religion does have that bias of "we know better, we've got the holy texts, if you're not with us you're doomed/clueless/need to be saved/unlucky".

The fact that modern practitioners in your geographic locale of choice aren't as bigoted as their more literal brethren doesn't mean much, except perhaps that there is hope in undoing the more harmful superstitiousness of religion by raising quality of living (for all, regardless of faith).

Sure, but that's not what the GP was talking about. They directly stated that those who joined would forever be "second-class citizens", which simply isn't factual.

I'd suspect that OP was referring to the first century CE when Christianity could be said to have "won out" over Judaism, and yes then Jewish people often were racist in that way, just like people of most cultures then. Nowadays Jewish people aren't racist at all, or at least less so than the average American or European - and OP should have acknowledged and phrased that better.

Yes, I should have made it more clear that I am speaking of the fact that there is no philosophical reconciliation between Christianity and Judaism, rather than of people being racist.

I'm not saying that Jewish people are racist but (as far as I read) "in Judaism, "chosenness" is the belief that the Jews, via descent from the ancient Israelites, are the chosen people, i.e. chosen to be in a covenant with God."

Apparently "God" decided that not all people are equal. I could adopt Judaism's customs but I could never be descended from the ancient Israelites. This reduces the number of people interested in joining Judaism. That was my whole point.

As far as discrimination/racism is concerned, I think most religions (including Judaism and Christianity) have some extreme (I won't name them) and less extreme (e.g. interfaith marriage) views, but fortunately most people care less about them (and religion in general) and more about their well-being/getting along with everybody.

Judaism is clear that God's choosing is not a thing to be desired.

Gemini sounds neat. Not sure I'll continue playing with it, but from the spec I have a couple of suggestions you are free to implement or ignore.

# Denial of Service

Permitting zero or more whitespace characters in a request introduces a denial of service because the request is no longer bounded: just keep sending whitespace to keep the connection open. Mandating a single character closes this particular hole.

It's possibly a minor DoS considering other attack vectors, but why leave any low-hanging fruit?
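A server-side sketch of closing that hole, assuming the spec's 1024-byte request limit (the function name, timeout, and error handling are my own choices):

```python
# Bound both the size and the duration of the request read, so a client
# can't keep the connection open by dribbling whitespace forever.
MAX_REQUEST = 1024 + 2  # URL bytes plus the terminating CRLF

def read_request(conn):
    conn.settimeout(5)  # drop clients that stall mid-request
    buf = b""
    while b"\r\n" not in buf:
        chunk = conn.recv(1024)
        if not chunk or len(buf) + len(chunk) > MAX_REQUEST:
            raise ValueError("request too long or connection closed")
        buf += chunk
    return buf.split(b"\r\n", 1)[0].decode("utf-8")
```

The timeout handles the slow-drip case and the byte cap handles the flood case; together the request read is strictly bounded.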

# Caching, ETags and Tracking

Caching is great, and I'm sure Gemini devs wouldn't object to caching if it could be handled without introducing user-facing privacy issues. Here's a sketch for an ad network protocol that I think would work with Gemini:

1. Client requests URL X.

2. Server replies with a redirect to "X?tracking-id", where tracking-id is associated with a set of IP addresses.

3. All links in the document append the tracking-id to preserve it.

Anytime a client accesses a document in the ad network, the tracking-id quickly returns. People often have multiple IPs for their laptop, desktop, phone, tablet, etc. but there would be enough overlap in these sorts of requests to correctly tie a set of IPs to a unique client.

What cookies, etags, etc. permit on the WWW is doing this without requiring sharing the client IP database.

So maybe what we can do is still enjoy the benefits of caching while at least detecting when some shenanigans are at play. The ad network outline above is observable behaviour from which we might be able to infer shenanigans.

To reintroduce caching without amplifying the tracking powers, add the ability for the client to verify the server's ETag is legit, rather than treating it as an opaque identifier. If an etag for content specified the hash function used, then the client can verify the integrity of the document. So take the HTTP ETag header to a syntax like "ETag(sha256): ..."

Like the ad network outlined above, malicious servers could craft slightly different versions of a document for each IP and so generate unique ETags that may permit some sort of tracking in a similar way. However, the gemini document has no notion of hidden data (like comments and hidden fields in HTML), so any such shenanigans will always be visible in some way in the document itself, eg. embedding a unique hash code at the end, for instance.

So in the end, adding caching in this way should improve efficiency without appreciably amplifying tracking.
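A minimal sketch of that verifiable-ETag idea (the comment proposes an "ETag(sha256): ..." header syntax; I use a simpler `sha256:` prefix on the tag value here, and the function names are mine):

```python
import hashlib

def make_etag(body: bytes) -> str:
    """Derive a content-addressed ETag that names its hash function."""
    return "sha256:" + hashlib.sha256(body).hexdigest()

def verify_etag(etag: str, body: bytes) -> bool:
    """A client or cache can recompute the hash from the body and
    confirm the tag is a genuine content hash, not an opaque tracking ID."""
    algo, _, digest = etag.partition(":")
    if algo != "sha256":
        return False  # unknown algorithm: treat the tag as unverifiable
    return hashlib.sha256(body).hexdigest() == digest
```

Any server that tries to smuggle a tracking ID into the tag fails verification, since the tag no longer matches the body it claims to describe.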

# Transclusion

A language with a construct becomes more expressive if it also includes the construct's dual. Gemini has document-level references, where a document points to another document, but it lacks transclusion, the dereferencing operation that lets you embed another document into the current document.

So links in Gemini are:

    => gemini://some/url
Transclusion would be its inverse:

    <= gemini://some/url
This is pretty handy actually. Kinda like iframes without the headaches. However, unlike iframes, I don't think the client should load the URL itself, but rather all communication should go via the server for the top-level document. This is again to avoid amplifying the tracking powers, since a top-level document server could append tracking-id to transcluded URLs and then they act just like tracking cookies.
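Since all transcluded fetches go via the top-level server under this proposal, resolution might look like the following sketch (the `fetch` callback, function name, and depth cap are my assumptions):

```python
# The server expands '<=' lines itself before responding, so the client
# never contacts the transcluded origin and no tracking-id can be
# threaded through transcluded URLs. The depth cap guards against
# transclusion loops (A transcluding B transcluding A).
def expand(document: str, fetch, depth=0, max_depth=3):
    if depth >= max_depth:
        return document
    out = []
    for line in document.splitlines():
        if line.startswith("<= "):
            url = line[3:].strip()
            out.append(expand(fetch(url), fetch, depth + 1, max_depth))
        else:
            out.append(line)
    return "\n".join(out)
```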

I like your first suggestion, the rest don’t sound so great to me.

It’s simple now, let’s keep it like that.

Sure, it's simple, but a stated goal is power to weight ratio. Transclusion definitely fits under that. Authoring and organizing information with transclusion is so much easier than without, because it avoids much of the need for special tooling to assemble a final document from fragments that might be reused. It's too unwieldy for very large documents though, so it's perfect for authoring small to medium sized documents, which fits right in with Gemini's goals.

There are many ways to handle it client-side:

1. Behaviour like iframes, although I don't like the tracking implications.

2. Origin server fetch, as I mentioned.

3. Another possibility is to render it as a button which the client must trigger to load the document.

> "In the same way that many people currently serve the same content via gopher and the web"

Is this actually true?

In my experience the gopher community is very split on this concept with many people doing just that, but many others (myself included) wishing that gopher/web proxies and cross posting did not exist at all.

Is there much (any) gopher community left?

There is! I read this on a gopher mirror of the Hacker News, so I thought I'd better come on over to the web version to respond...

In fact, several regular gopherites have commented on this thread.

Funny that this came up on the Hacker News on the weekend that the server is going down. Probably a good thing, since it only has 128MB of RAM.

There are some. It's had something of a resurgence in the last couple of years and a few new gopherholes have opened up.

You can peruse gopherspace via the overbite gateway, and the big gopher search engine, Veronica, is still up and running.

Take a look! :)

SDF[1] and many of the various tilde PUBNIXes have gopher communities, and if you're on the fediverse (Mastodon/Pleroma/etc.) you'll find people listing gopherholes in their profiles

[1] sdf.org
