Where Have all the Gophers Gone? Why the Web beat Gopher (1999) (unc.edu)
115 points by alokrai 11 days ago | 42 comments

The NCSA and CERN web servers had the ability to let users on a system publish content from a specially named "~/public_html" directory. At UIUC, the engineering labs quietly enabled this functionality for all engineering students.

Within a week, dozens of students had their own web pages up, and by the end of the year, hundreds did. Encouraged by that initial success, students went on to stand up their own web servers so they could run CGI scripts and the like.

As far as I knew, there was no equivalently easy way to try publishing Gopher content. You either stood up your own server or got special access to someone else's, both of which were hard to do at the time.

RPI had this, too, and I remember taking full advantage of it oh so long ago.

In the late 90s, Oz universities generally gave their students email via Pine plus a 10 MB web folder as standard. Many STEM students made random websites about their hobbies or poetry or whatever. It was like a proto-GeoCities.

And once you did that, your content was on the "World Wide Web", not just some "Gopher server". The WWW name made the venue sound more intriguing.

Oh god, pine - I completely forgot about that. Thank you for bringing it up!

I was pretty jealous of that over here in NZ.

As a student I worked as a sysadmin in the engineering department at Iowa State. We had something similar running in the mid-2000s out of anybody's home directory, and it looks like it's still up and running. Some graduate students had really great information up.

Are free shell accounts still a thing?

looks like it..


they have gopher hosting still...

It is, and there's plenty of small communities built around them: https://tildeverse.org

And yes, Gopher is usually included. :)

The primary Gopher killer: around February 1993, the University of Minnesota announced that it would charge licensing fees for the use of its implementation of the Gopher server. In contrast, CERN said anyone could implement the WWW. Practically all implementation work from then on went to the WWW. In addition, the Gopher developers basically wouldn't work with open standards groups like the IETF.

By trying to control & license everything, they lost everything.

I wonder what will happen to the next gen of video codecs. Proprietary video codecs look to be on the verge of collapse, with most new devices now shipping AV1 decoders and streaming services looking at switching to AV1 where available.

Has Apple announced AV1 support in macOS and iOS? (Sorry if I missed it; that would be great news.)

They are in the group behind AV1, but they have yet to support it. VLC on iOS can play it, however, so streaming apps could include their own decoders while waiting for Apple. It looks like Netflix is using AV1 for Android users right now, and TVs are coming with AV1 support now.

Good riddance to proprietary codecs.

There is a sort of Gopher "successor" in the works called Gemini that was featured recently:


It aims to fix flaws in the Gopher protocol while still making it easy to implement clients.

If you haven't dug into Gopher, there's lots of cool stuff in it from ASCII art and old computer manuals to games and lots of blogs (called "phlogs"). I suggest grabbing a client and heading to the Gopher Lawn to get a taste:


(Lynx works as a client, but there are a ton more out there with fun UIs.)
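If you'd rather poke at the protocol directly than install a client, Gopher (RFC 1436) is simple enough that a toy client fits in a few lines: send a selector, read until the connection closes, split menu lines on tabs. A minimal sketch in Python; the host and selector in the comment are illustrative, not an endorsement of any particular server:

```python
import socket

def fetch_menu(host, selector="", port=70, timeout=10):
    """Send a selector and return the raw Gopher menu text (RFC 1436)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def parse_menu_line(line):
    """Split one menu line into (item_type, display, selector, host, port)."""
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return item_type, display, selector, host, int(port)

# e.g. fetch_menu("gopher.floodgap.com") and parse each non-"." line
```

The single leading character is the item type: `0` for a text file, `1` for a submenu, `7` for a search, and so on.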

I wrote a 15-line Python server behind stunnel and xinetd for Gemini. I love the protocol; where a lot of the Gopher world seemed to venerate the old, Gemini really is a great, low-fat, content-oriented protocol and community. Come join the fun!
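That size claim is plausible: strip away the TLS (which stunnel handles here) and a Gemini exchange is one URL request line, then a `<status> <meta>` header and an optional body. A rough sketch of the response side, assuming the two-digit status codes from the Gemini spec (20 = success, 51 = not found); the page table is made up:

```python
def gemini_response(body, status=20, meta="text/gemini"):
    """Build a Gemini response: '<status> <meta>\\r\\n' header, then the body."""
    return f"{status} {meta}\r\n".encode("utf-8") + body.encode("utf-8")

def handle_request(request_line, pages):
    """Map a requested URL to a page body, or answer 51 (not found)."""
    url = request_line.strip()
    if url in pages:
        return gemini_response(pages[url])
    return gemini_response("", status=51, meta="Not found")
```

Under xinetd the request line arrives on stdin and the response goes to stdout, which is why the whole server stays so small.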

If you want to connect now: Web Portal: https://portal.mozz.us/gemini/gemini.circumlunar.space/ Clients: https://portal.mozz.us/gemini/gemini.circumlunar.space/softw...

I'm in the process of writing a Tcl graphical client, to let folks hack their browser as if it were a running Lisp process. For day-to-day browsing I'm mostly using Elpher right now, an Emacs Gemini and Gopher client written in Elisp, and it's fantastic.

I'm at gemini://acidic.website/

I understand why acidic, thanks to your explanation, but why website, when you don't have a website?

(I also like acidic food sometimes. I'm not sure if I like acidic coffee; so far, the only coffee I have successfully consumed is espresso without milk, water or sugar and some is surely better than others. I was certainly surprised to discover that plain espresso is actually better than adding milk, water or sugar.)

Is there a way to actually reply within Gemini? I could set up my own server, I guess, but you would never know I had.

> I understand why acidic, thanks to your explanation, but why website, when you don't have a website?

Because the domain was incredibly cheap, and I may eventually host some HTTP content there as well, but I'm not sure. The main reason was just the price heh.

> Is there a way to actually reply within Gemini? I could set up my own server, I guess, but you would never know I had.

Nope, no comments. You could email me, of course, but I'm thinking of setting up an ActivityPub server on the box so folks can message me if they're interested.

I've had a lot of fun with this recently.

One of my favourite gemini sites is this [1] page where a guy is hosting music that he's made over the years.

[1] gemini://gemini.circumlunar.space:1965/~sloum/

The platform is really nascent, but I think it has a lot of potential. If I get some time I might hack together a client that's better suited to me than the extant ones (although Bombadillo works great for the moment).

Shameless plug time again. If you are on Windows, why not try my Gopher client:


One thing I never see in these analyses is ease of setting up a server. Back when I was in college and got my first computer, I could install a simple web server, drop any file in its folder, and view that file in my web browser. If it was HTML, and I got the HTML wrong, it would still display most of the page.

Gopher wasn't like that at all. I downloaded the Gopher server and ran it. Then I put a file in its folder, and it didn't show up. You had to (IIRC) write a special index file to tell it how to serve each file. If you didn't get it perfectly right, it wouldn't show up at all. And of course the error messages and documentation were somewhere between "terrible" and "missing".
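For context on that index file: in later servers (Gophernicus, pygopherd, and friends) it's called a gophermap, and the strictness described above is real. Each entry is one line: an item-type character fused to the display text, then selector, host, and port, separated by actual tab characters. A hypothetical example (the filenames are made up; a malformed line simply doesn't appear in the menu):

```
0About this server	/about.txt	example.org	70
1My papers	/papers	example.org	70
7Search my notes	/search	example.org	70
```

Compare that to a web server, where any file dropped into the document root is immediately servable with no per-file registration.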

I wanted Gopher to succeed, because I liked the simple, regular organization of information, rather than the crazy anything-goes world of the World Wide Web. I just couldn't figure out how to get it to work.

I still maintain a gopher presence[1] as a mirror. Once it was determined that gopher could neither be monetized nor weaponized, it was doomed to obscurity. It is an open academic tool for open academic purposes. As an aside, I actually saw the initial announcement on USENET about CERN releasing something called a browser for something else they called the World Wide Web. I couldn't see the point since we already had gopher, veronica, jughead, et al. Absolutely prescient on my part...


> Once it was determined that gopher could neither be monetized nor weaponized, it was doomed to obscurity.

This is a cynical rewriting of history. More likely, Gopher lost to HTTP due to a combination of random chance, network effects, and simply not being as user friendly to e.g. set up a server. HTTP was also an "open academic tool" for "open academic purposes". Only later, due to HTTP's success, was it monetized and "weaponized".

> This is a cynical rewriting of history.

Seconded. I was on the Internet before gopher, and I never really saw the point over regular ftp (and, a quick glance at the wikipedia page right now doesn't really tell me what the real extra value over ftp is).

Conversely, the web's value was immediately obvious.

Gopher was a menu-driven interface via ftp to search, find and download files for academics who weren't particularly computer savvy. It also gave them a platform to publish their own research for others and have it indexed and searchable. It was never meant to be a popular format...

Well, I think popular in the context of the Internet meant something different then than it does now. But yes, gopher seemed then like just 'nother service, while the WWW was a revolution in the mid-nineties.

Perhaps. Gopher is simply a single, world-wide, hierarchical, indexed and searchable directory tree. Its sole means of transport is ftp. It can move all manner of formats about but only display ASCII text. The only thing resembling hyperlinks are the lowly gophermaps, one per directory. Gopher servers are small, simple and require few resources. Gopher clients are minuscule. Gopher with veronica is essentially a library card catalog that directs you to the proper stack and shelf to fulfill your query. And there is not a single commercial farthing to be made anywhere...

> It can move all manner of formats about but only display ASCII text. The only thing resembling hyperlinks are the lowly gophermaps, one per directory.

That’s the point you should be focusing on: it’s not a competitive user experience. The commercialization angle isn’t “no way to extract money” but “no users”.

I remember the era of BBSes, Fidonet, and then getting access to the internet. FTP was obviously useful. The web was extremely useful. Gopher was … “what’s the point?”

Gopher was designed for and by academia. It is simply a research tool: a way to post, find and retrieve textual data on a topic. It still does exactly what it was designed to do. That what it was designed to do is no longer relevant, given the web, is beside the point. It still functions as designed, much like the ed editor. And that it is now used almost exclusively by hobbyists trying to recreate an 80's dial-up BBS aura is also beside the point...

I remember, I think, connecting to the local library to access all of the above using a text/curses type interface on a 2400 bps modem borrowed from my high school. I'm thinking 1993ish?

The Portland, Oregon public library system offered this in the 1990s. You could dial in and get the library's own text-terminal catalog system, Gopher, or the Lynx browser. You could download files as well as browsing text. That's how I first started playing with ray tracers and downloading shareware games.

You could even quit the browser and you'd be sitting in a Unix shell account.

There are still some gopher holes. For instance, the bitreich is quite active:

Some of the content is rather funny; other parts are a bit too insider to be comprehensible. It seems to be a "pure" fork of the suckless.org community.

When I mapped Gopherspace back in 2018, there were over 300 active gopher holes:


I have a list of 634 gopher servers, which I check daily for their "aliveness":


There are some duplicates (different domains for the same content, IP addresses) but roughly 364 domains are constantly online.

I used Gopher to download MPEG-1 videos back in 1993. A whole two of them. One was an egg shot by a bullet, and the second was Michael Jackson's Smooth Criminal anti-gravity lean clip. Exciting times.

I was in college getting my MIS degree a little before this article was written, and gopher was already being described as antiquated, vintage technology.

The web had a much broader appeal to me, and as soon as I could I set up a page on our school's web server. I then took a deep dive into CGI scripting, and that got me to start coding.

I remember the first time I read about the web was on gopher.

A few years later (1997), part of my job was to migrate a bunch of university gopher holes to the web.

I just built a rough gopher server for createaforum.com, for fun: https://www.youtube.com/watch?v=Nws5oVpPY_g
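A rough server really is a small project: accept a connection, read the selector line, write back tab-separated menu lines, and end with a lone dot. A toy sketch in Python; the menu entries are invented, and real servers would dispatch on the selector instead of ignoring it:

```python
import socketserver

# Hypothetical menu: (item type, display text, selector, host, port)
MENU = [
    ("1", "Forum boards", "/boards", "localhost", 70),
    ("0", "Welcome", "/welcome.txt", "localhost", 70),
]

def menu_bytes(items):
    """Render menu items as RFC 1436 lines, terminated by a lone dot."""
    lines = [f"{t}{display}\t{sel}\t{host}\t{port}\r\n"
             for t, display, sel, host, port in items]
    return "".join(lines).encode("ascii") + b".\r\n"

class GopherHandler(socketserver.StreamRequestHandler):
    def handle(self):
        self.rfile.readline()  # selector line (ignored: always serve the menu)
        self.wfile.write(menu_bytes(MENU))

# To run: uncomment and point a gopher client at port 7070.
# with socketserver.TCPServer(("0.0.0.0", 7070), GopherHandler) as srv:
#     srv.serve_forever()
```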

TL;DR Hypertext links and graphics.

You say graphics, I read 'porn'... I never used gopher. I did get an application/icon when installing internet software for one of the first ISPs I used, but I guess there was little adult content to be found on gopher?

Cynical. Generally speaking, articles with images and color are more interesting to random browsers. Even variable width fonts are more interesting than fixed width fonts.

Given an HTML page using the default serif font that shows a picture of you and a cat and talks a little about your research, or a plain text file using the default fixed-width font that talks a little about your research and says "in the menu below, clyde.jpg is a photo of my cat", most people are going to prefer the first option.
