Hacker News
The Rise and Fall of the Gopher Protocol (minnpost.com)
235 points by lindner on Aug 12, 2016 | 62 comments



This was posted yesterday but looks so good, and spent so little time on the front page, that we've put it into the second-chance pool (described at https://news.ycombinator.com/item?id=11662380 and the links back from there). We wouldn't normally do that for a day-old post with 28 comments, but I think the HN software penalized this submission by mistake, which was bad. Plus, it's the gopher protocol.


Good call, I would have otherwise missed this awesome article.


When I got my first shell account in 1993 it was as a student at the University of Minnesota, and I remember that the first time you logged in it didn't dump you directly into tcsh or whatever; there was an easy shell that let you navigate by number. I recall vividly that both Gopher and the World Wide Web were on this list (previously I had only had experience with Usenet and FTP). Initially it seemed like Gopher had all the things, and the WWW was some weird novelty. Of course within a year I had seen Mosaic running, and building web pages became a career-long obsession for me. It's hard for me to gauge the politics (I only turned 15 in 1993), but at least from where I sat, the multimedia document with embedded hyperlinks was the secret sauce. It was the same thing that had gotten me obsessed with HyperCard back in the day. The web just felt like this creative playground with unlimited untapped potential, and Gopher felt sort of bureaucratic (again, maybe just from where I sat at the U of M). Ironic that Gopher is now maintained as a labor of love by hobbyists.


You know how Paul Graham says "build something people want"? Well, somehow I headed in exactly the opposite direction and designed a strange, nondescript beast that mashed up linked data (remember that concept? No? Doesn't matter, you missed nothing) with JavaScript and the web into some sort of Gopher-ish linked data browser.

Basically, with about 15 lines of JavaScript you could navigate any HTTP-based open data API.

I named it NeoGopher (New Gopher): https://www.youtube.com/watch?v=yuSDU0JiI2c The madness starts at 1:30 (turn the sound down; the words serve only to confuse).

Essentially it let you browse through linked lists. Each list item could link on to another list or to a web page. Each list was created by about 15 lines of JavaScript, which could be loaded from any web server, and the data came from any HTTP web API.

Ahhh... so many meaningless words in the demo. Even I, who designed it, found it hard to explain what it was. Today I would say "It's a linked-list browser," which is what Gopher was.
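To make that concrete, here is a minimal sketch of the idea in browser JavaScript (not the actual NeoGopher code; the starting URL and the "items"/"label"/"next"/"url" field names are invented for illustration):

  // Hypothetical example: render any JSON list API as a navigable menu.
  async function browse(url) {
    const data = await (await fetch(url)).json();
    const ul = document.createElement('ul');
    for (const item of data.items || []) {
      const a = document.createElement('a');
      a.textContent = item.label;
      a.href = '#';
      // Each entry either drills into another list or opens a web page.
      a.onclick = (e) => {
        e.preventDefault();
        if (item.next) browse(item.next); else window.open(item.url);
      };
      const li = document.createElement('li');
      li.appendChild(a);
      ul.appendChild(li);
    }
    document.body.replaceChildren(ul);
  }
  browse('https://example.org/api/root.json');   // made-up starting point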

Of all the stupid ideas I have had (there have been many), re-inventing Gopher was probably the worst. What was I thinking?

BTW, this is an edited repost of a comment I made a while back on an earlier Gopher thread.


Very interesting ideas.

It reminds me of Xanadu (without the entanglement):

http://xanadu.com/Ping'n'Jeff.png

My idea for Gopher is to convert webpages on the fly with a text browser like elinks.
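Assuming Node.js and an elinks binary on the PATH, a rough sketch of that gateway idea might look like this (treating the selector as a URL is my own convention for the sketch, not any standard):

  // Minimal web-to-gopher gateway sketch: take the selector the client sends,
  // treat it as a URL, render it to plain text with elinks, and send it back.
  const net = require('net');
  const { execFile } = require('child_process');

  net.createServer(socket => {
    socket.once('data', line => {
      const url = line.toString().trim() || 'http://example.com/';
      execFile('elinks', ['-dump', url], (err, text) => {
        socket.end((err ? 'conversion failed' : text) + '\r\n.\r\n');
      });
    });
  }).listen(7070);   // gopher's real port is 70, but that needs root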


"Eventually, though, the U did want some money — for itself. At GopherCon ’93, Yen announced that for-profit Gopher users would need to pay the U a licensing fee: hundreds or thousands of dollars, depending on the size and nature of their business."

I wonder how many futures have been destroyed by the desire to profit from eating the seeds instead of the fruit?


The University of Minnesota office of commercialization of technology does this habitually, asking for far too much money from startup entrepreneurs. Many of the academics have no idea how business works, and don't care. 90% of their success has come from one patent. So from the standpoint of this one institution, you could actually answer that question by auditing their past performance.

On the other hand, it is a publicly funded institution, funded by taxes and student fees. So one also has to ask whether it would be fair for a small few having little to do with the University to be able to profit off of investments made with public dollars and overly high student debt.


This is so true. You have to remember that finances were really tight at this time. The University budget was getting cut left and right throughout the history of Gopher's evolution. At one point there were plans to outsource everyone to the Minnesota Supercomputer Institute.

Of course in hindsight obtaining grants or forming a partnership with a non-profit org or an academic department might have been a better choice, especially for all the professional services requests.

edit: Also, you have to remember that computing was a LOT more expensive then. I have old quotes for SparcStations and RS/6000s that were in the $20-40k range, even with an educational discount. The Mac IIcis were not cheap either, around $5k when loaded up with RAM.


> The University budget was getting cut left and right throughout the history of Gopher's evolution.

Was it actually getting cut, or were they just not getting the increase they wanted? I've lived in MN off and on since the '90s, and it seems like they call it a "cut" every time they don't get the increase they want.

> edit: Also, you have to remember that computing was a LOT more expensive then. I have old quotes for SparcStations and RS/6000s that were in the $20-40k range, even with an educational discount. The Mac IIcis were not cheap either, around $5k when loaded up with RAM.

Looking back, when people see the price of the NeXT cube and freak out, they forget how much a Mac IIfx was. It's amazing that the era had basically expensive computers at the high end and machines like the Sinclairs and Commodores at the very low end.


Regarding cuts, yes. We had to make do with less year over year. From Hasselmo's 1991 State of the U address:

> """We lost at least $25 million to inflation, and $16 million through a base cut this year. In addition to a potential $25 million loss to inflation next year again, the Governor's vetoes of IT and systemwide special appropriations cut another $23 million in funding -- for which we are aggressively seeking full restoration."""

The mainframe teams had a harder time of things. For Microcomputers we were lucky - our hardware costs decreased and we had a deal with the University Bookstore to support their computer hardware sales.

That stuff was still expensive. Here's some educational pricing for a workstation with substantial education discount in 1994.

                                        list          discount   
  IBM model 25T                         $8495         $5400.00
         80Mhz upgrade                  $1500         $ 953.50
         64MB upgrade                   $             $2912.00
         2GB disk upgrade               $             $1463.00
                                                       -------
                                                      10728.50


But, Honeycrisp apples are so good!


In hindsight it was a stupid, limiting move. Yes, we did get the Gopher T-shirt on MTV, and a number of companies did end up paying. I was even loaned out for a few weeks to Schlumberger to assist with putting Gopher servers on oil platforms.

But it was optimizing for a local maximum. The damage was done; the community was broken.


During the first dot-com boom, an MIT spin-off company tried to sell a new server-side language. It looked something like Lisp with curly braces. I don't remember the name of the company or language. I just remember that they expected customers to pay per byte served (from the customers' own web servers)!


Curly braces? The language was called 'Curl'. ;-)

http://www.curl.com



I'm sure UIUC could chime in on that with Mosaic.


Incidentally, when I matriculated at UIUC in 1997, most of the computer-lab Macs ran NCSA Mosaic. (The PCs and UNIX machines had Netscape.) I remember receiving some materials on internet resources when I got to the dorms. At the time, many freshmen required instructions on how to use the internet. These materials had been prepared a few years earlier and still included references to Gopher.


> “I still remember a woman in pumps jumping up and down and shouting, ‘You can’t do that!’ ”

> Among the team's offenses: Gopher didn’t use a mainframe computer and its server-client setup empowered anyone with a PC, not a central authority.

File this under "straw-man responses that happened in their feverish anti-establishment imaginations."


Not sure what dmd is implying here. But I will say that the committee was looking at using either X.500 or CSO protocols instead. Thus it would rely on a centralized authority and a publishing model that would not allow for individuals to run their own servers.

Also remember that this was the old internet, where we didn't have NAT or really any firewalling. Most of the University of Minnesota's class B was open to the world at the time. In fact I used to send email directly to my workstation instead of the central mail host to avoid delays.


Actually, the article very accurately describes exactly how the University of Minnesota works, with surprisingly vivid realism. It's likely that the department was receiving significant funding from companies asking it to do mainframe research of some kind. Working on what were considered "toys" at the time probably would have looked extremely bad.

Now, does this mean that Gopher would have "won out" over the WWW? No. But the description holds.


Source? Context? Authority even?


A contemporary article about Hyper-G: http://much.iicm.edu/projects/hyper-g/9.htm/

My impression was that they tried to monetize Hyper-G immediately, with the negative results mentioned in the original article. Having an open source server may have helped avoid that for both; I think Apache was a powerful force for the web.


Gopher is still alive :)

http://gopher.floodgap.com/gopher/gw

Veronica-2 is the gopherspace search engine:

http://gopher.floodgap.com/gopher/gw?ss=gopher%3A%2F%2Fgophe...


And there are even blogs (known as phlogs) in gopher. After reading one (gopher://sdf.org/1/users/jstg/phlog) I was motivated enough to modify my blogging engine (https://github.com/spc476/mod_blog) to support gopher. The gopher protocol itself is very simple, so getting a simple server up and running was rather easy: gopher://gopher.conman.org/
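It really is that simple: the client sends a selector terminated by CRLF, and the server replies with either the document or a menu of tab-separated lines (type, display string, selector, host, port), ending with a line containing only a dot. A toy sketch in Node.js (the menu contents and port are just placeholders, not anyone's real server):

  // Toy gopher server: one menu plus one text file.
  const net = require('net');

  const menu =
    '0About this server\t/about.txt\tlocalhost\t7070\r\n' +   // type 0 = text file
    'iJust an informational line\t\tlocalhost\t7070\r\n' +    // type i = info (common extension)
    '.\r\n';

  net.createServer(socket => {
    socket.once('data', line => {
      const selector = line.toString().trim();
      if (selector === '/about.txt') {
        socket.end('Hello from gopherspace.\r\n.\r\n');
      } else {
        socket.end(menu);   // the empty selector gets the root menu
      }
    });
  }).listen(7070);   // the real gopher port is 70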


The article says there are just over a hundred gopher sites in existence. Yay - mine is one of them! I've got 200 or so articles on my website, and one day I decided to make them all available over gopher as well. It's easy to do: install pygopherd on your machine for the server, run your HTML files through an emacs macro that strips the HTML and outputs 80-column text, and you're in business!

It feels whimsical to have a gopher site up and running, but it costs me nothing and earns me nothing. It's there because I like it and because I feel happy knowing it's there. In a way, that's the sentiment behind the Internet in its earliest days. And that's what I like about it - the philosophical purity of doing something for its own sake. And now, for the simple pleasure of posting a link with that old protocol:

gopher://therandymon.com

How awesome does that look? :)


Inspired by Cameron Kaiser's (and others') ongoing work to keep Gopher alive, I spent a good few months this year writing a new Windows client for it:

http://www.weegeeks.com/upload/Jar-Gopher-Browser-Full-Scree...

Unfortunately real life has got in the way, and I've not found time to release it yet.


Not sure how you'd visit gopher sites in Windows. Add the Overbite Firefox plugin available at Floodgap, I suppose. On Unix/Linux "forg" is really nice on the GUI side, and lynx is your best bet at the CLI. Although you'll find the original gopher client in a lot of repos too - all derivatives of Debian/Ubuntu and FreeBSD, at least.


> Not sure how you'd visit gopher sites in Windows.

By using a Windows Gopher client. There are several, but they're mostly outdated, which is why I've written a new one. I guess you didn't check the screenshot I posted.


Gopher is great. However, an effort should be made to offer StartTLS on port 70 so that we can have encrypted gopher traffic.


I miss Gopher. I remember when the Gopher sites started to display: "This service has been discontinued. Please use WWW instead". I was completely disappointed.


Actually there are still a number of Gopher servers in operation, and you can use Firefox to visit them.


"All the gopher servers (that we know of)"

http://gopher.floodgap.com/gopher/gw?gopher://gopher.floodga...

SDF still provides a gopherspace inside its free shell:

http://sdf.org/?tutorials/gopher


I remember discovering gopher through the wonderful TurboGopher program. All the fiddling around with FTP was revealed to be pointless and dumb. For a brief period, really until AltaVista came along to make Mosaic useful, Gopher was so cool. But at school we all had our public_html directories published to the world, and you could nag the sysadmin to install cgi-bin scripts, whereas gopher was an institutional thing and focused on files.

Tangent: NNTP is still better than any web forum, Facebook, or Twitter. Too bad the only way to solve the spam problem is via centralised identity platforms.


"The most popular protocol, or method of retrieving information from another computer, was FTP (file transfer protocol), the primitive, labor-intensive equivalent of knocking on someone’s door and asking if you could carry away his piano."

lol, I wonder what the computing equivalent of FTP is today.


Still FTP.


Sftp, hopefully.


> Sftp, hopefully.

Ha! I've had precisely one vendor ask for an SSH public key to set up file transfer.

The world runs on FTP. It's sad. Even sadder, it's often a Windows FTP server!


If there were a sane way to set up an anonymous sftp server, never mind having web browsers actually understand sftp links, things might change.


Does anyone have a list of modern alternative (experimental even) protocols out there?


The recent Decentralized Web Summit touched on a number of alternates:

http://www.decentralizedweb.net/

Interplanetary File System (IPFS) was discussed quite a bit. Namecoin was the most popular DNS replacement. ZeroNet is a really interesting project that does all of the above.

http://ipfs.io/ http://namecoin.info/ http://zeronet.io/


I also collect old models, centralized or not, in case they're useful in modern situations (esp. intranets) for reliability or security. What do you think of Tanenbaum et al.'s Globe model as a WWW alternative?

https://cds.cern.ch/record/400321/files/p117.pdf

I thought it was an interesting design that provided a nice way of reducing abstraction gaps and rework in the various layers/techs. They integrated it with their Amoeba OS, which ran on a cluster of workstations with a single system image.


Wasn't familiar with this so I skimmed the IEEE paper:

http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7...

Interesting that it references the Legion OS work that Greg Lindahl spoke about at the Summit:

http://legion.virginia.edu/

At first glance Globe feels like a low-level solution for the problem. Mapping objects to binaries via a broker and all that.

The web we have today solved many of those issues in other ways. Anycast DNS and CDNs allow for content distribution. Storing state in cookies allows for operational transforms and other eventually consistent techniques.

Thanks for the reference and connecting a few dots!


The article mentions Hyper-G, which appears to be more of an evolution of the WWW than a completely different protocol.

On the surface it looks a little overcomplicated to me, for not a whole lot of benefit now that Google exists.

https://www.chemie.fu-berlin.de/outerspace/doc/hyper-g-abs.h...


We liked Hyper-G because it had a bi-directional link model and the Harmony browser was able to render VRML.

Maybe 2016 will be the year VR takes off (again)


MaidSafe is an alternative protocol as well as an application platform focusing on strong security and privacy: http://maidsafe.net


Interesting! I guess I knew the basic outlines of that story, having been an internet user in 1993... but it's a neat story. Good work, guys!


In February, Metafilter brought back their Gopher server "after fifteen years of downtime": https://metatalk.metafilter.com/24019/Direct-your-gopher-cli...


> To the curious who stayed behind, Berners-Lee explained that the Web could be used to connect all the information on the internet through hyperlinks. You could click on a word or a phrase in a document and immediately retrieve a related document, click again on a phrase in that document, and so on. It acted like a web laid over the internet, so you could spider from one source of information to another on nearly invisible threads.

One of the saddest things about the modern Internet is how little hypertext is used. As an example, I would have expected 'Mark McCahill' to link to his personal or academic site [0], 'San Diego' to link to the city government's site [1], 'Hyatt Islandia' to link to that hotel's site [2] (perhaps with a note that its name has changed), and 'Mission Bay' to link to an appropriate page [3] — and that's in the first paragraph alone.

It's also interesting how those slides from 1992 look current as of 2016: flat, bullet-pointed, sans-serif font.

> But the internet was not yet open for business. It had been built on dot-mil and dot-edu, on public funds. Programmers shared source code; if you needed something, someone gave it to you. A dot-com address was considered crass. It was “as though all of TV was PBS,” Lindner says. “No commercials.”

Those really were the days. I remember the Canter & Siegel spam, and how appalled we all were to see it. The Internet then was more genteel. Honestly, I wish it were still non-commercial: available to be used by companies (as it was even in the 90s), but not as an advertising medium.

> At GopherCon ’93, Yen announced that for-profit Gopher users would need to pay the U a licensing fee: hundreds or thousands of dollars, depending on the size and nature of their business. Many users felt betrayed. In the open-source computing spirit of the day, they had contributed code to Gopher, helping the team keep up with the times. Now they were being asked to pony up.

As much as it's become fashionable to group-hate esr over the last decade or so, I think it's important to recognise his major contribution: persuading folks that free software (under a different name) can be good for business. Imagine if the university had kept Gopher as GPLed software, rather than demanding money from its users.

> At its peak, the Mother Gopher consisted of 10 Apple IIci computers. But when it was finally euthanized, who knows what shape it was in. There was no ceremony. Nothing was carted off to a museum. Gopherspace simply became emptier, and the world without the Web became harder to imagine.

That's definitely sad. We need to do a better job of honouring our history and marking important events. Even in the late 90s or early 2000s, folks ought to have recognised the historic import of gopher.

[0] https://fds.duke.edu/db/aas/ISIS/faculty/mark.mccahill

[1] https://www.sandiego.gov/

[2] http://missionbay.regency.hyatt.com/en/hotel/home.html

[3] https://www.sandiego.gov/park-and-recreation/parks/regional/...


"One of the saddest things about the modern Internet is how little hypertext is used. As an example, I would have expected, 'Mark McCahill' to link to his personal or academic site[0], 'San Diego' to link to the city government's site[1], 'Hyatt Islandia' to link to that hotel's site [2] (perhaps with a note that its name has changed), and 'Mission Bay' to link to an appropriate page [3] — and that's in the first paragraph alone."

If it's trivial for you to come up with the link, then it's not very useful because it's trivial for me, too. Similarly for something trying to automatically add links; I've actually seen that functionality and it's more annoying than helpful, obscuring the links added by humans with intention. If it's challenging for you to come up with it, then it takes effort and on average people won't do it.

Pretty much no matter how you slice it, there's no story that results in the sort of hyperlinking you mention.


> One of the saddest things about the modern Internet is how little hypertext is used.

Right click, search Google (or however FF or an extension does it), without the hard-to-read hyperlinking standard (hard to read because it's for important stuff; you should stop there and think).

Wikipedia is a joke in many articles in this respect, hyperlinking irrelevant points.

A hyperlink should reinforce the article, not detract from it in a random manner.

We read linearly, not in a hash.


> Right click, search google (Or however FF does it/extension)

Which of course relies on a third-party, proprietary service.

> A hyperlink should reinforce the article, not detract from it in a random manner.

Back in the old days, being able to jump around from article to article was considered a virtue. It made the Web like a piece of interactive fiction, an adventure. It was awesome.


I miss Suck, which felt like it pioneered using links for something other than straightforward exposition:

> Cavanaugh pointed out that one particular lasting legacy of Suck's is the idea of using a link as a rhetorical effect. "People still used italics to make a point in a sentence back then," he said, explaining that the site was one of the first to use a link to let readers know what it was writers were discussing, or to point to a joke. "That was what knocked my socks off about Suck right away, was the idea that oh, the link is this funny thing."[1]

I enjoyed it when the link was like a footnote at odds with the text.

[1] https://www.engadget.com/2015/09/16/suck-dot-com-20th-annive...


>> Wikipedia is a joke in many articles around this hyperlinking irrelevant points.

I think this is subjective. I find that I'm far more often frustrated by a wiki page I'm reading not linking something that I wanted to click on (necessitating search etc), than I am frustrated by overabundance of links. This varies between different wikis, though.

>> We read linearly not in a hash.

But do we do so because of the constraints that the legacy information mediums have imposed on us, or because it's really the only way our brain can process information? Perhaps our brains can, in fact, just as easily adapt to the web model?


I really have to disagree here.

Depending on the starting page, I "open in new tab" a whole bunch of links and then switch between tabs, doing a breadth-first search, finding my way to either deeper information on a topic or related information that may then become my focus.

And don't call Wikipedia a joke. I vastly prefer it to the horde of content-mill, listicle-ridden garbage that makes up most of the popular web these days.


I tend to read more in a stack than just linearly - looking up terms, taking notes, following links, then back to where I left off. That is what hyperlinks should be for.


A gopher-take on wikipedia:

gopher://gopherpedia.com/

As even Firefox finally removed its support for the gopher protocol several years ago, "OverbiteFF" or the equivalent may need to be installed in Firefox or Chrome to view the above without issue.


The pervasive hyperlinking you mention seems to have a (large) niche in Wikipedia.


> The pervasive hyperlinking you mention seems to have a (large) niche in Wikipedia.

True enough, but back when the Web was young, everyone linked pervasively. My impression back then was that that was what the Web was for.


We also didn't have great search capability yet. A document lived alone unless the author took pains to connect it. A lot of my surfing was done via other people's linkdumps.

This was good in some respects since it encouraged a high degree of cooperation, but it's definitely easier to access most things now.


That's why it's "the Web," no? Many almost randomly interconnected threads? That's how I think of it.


So much so that they use themselves as the canonical example in the image for it:

https://en.wikipedia.org/wiki/Hyperlink


Lots of photos and quotes from the original Gopher team in this long-form article.



