I named it NeoGopher (New Gopher) https://www.youtube.com/watch?v=yuSDU0JiI2c The madness starts at 1:30 (turn the sound down, the words serve only to confuse)
Ahhh... so many meaningless words in the demo. Even I, who designed it, found it hard to explain what it was - today I would say "It's a linked-list browser," which is what Gopher was.
Of all the stupid ideas I have had (there have been many), re-inventing Gopher was probably the worst. What was I thinking?
btw this is an edited repost of a comment I made a while back on the Gopher topic.
It reminds me of Xanadu (without the entanglement):
My idea for Gopher is to convert webpages on the fly with a text browser like elinks.
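To make that concrete, here's a rough sketch of such a gateway in Python. Everything here is illustrative rather than real: the host/port and the example.com fallback are made up, it assumes elinks is on the PATH, and a real gateway would also have to rewrite the page's links into Gopher menu items and handle errors.

    # Sketch of a tiny Gopher server that treats the selector as an HTTP URL
    # and renders it to plain text with `elinks -dump`. No error handling;
    # assumes the whole request arrives in one recv().
    import socket
    import subprocess

    HOST, PORT = "127.0.0.1", 7070   # hypothetical local gateway address

    def render(url: str) -> bytes:
        # elinks -dump writes a plain-text rendering of the page to stdout
        result = subprocess.run(["elinks", "-dump", url],
                                capture_output=True, timeout=30)
        return result.stdout or b"(elinks produced no output)\r\n"

    with socket.create_server((HOST, PORT)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                # A Gopher request is just the selector followed by CRLF
                selector = conn.recv(1024).split(b"\r\n")[0].decode("latin-1")
                conn.sendall(render(selector or "http://example.com/"))
                conn.sendall(b"\r\n.\r\n")   # Gopher end-of-text marker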
I wonder how many futures have been destroyed by the desire to profit off eating the seeds instead of the fruit?
On the other hand, it is a publicly funded institution, funded by taxes and student fees. So one also has to ask whether it would be fair for a small few, having little to do with the University, to be able to profit off of investments made with public dollars and overly high student debt.
Of course in hindsight obtaining grants or forming a partnership with a non-profit org or an academic department might have been a better choice, especially for all the professional services requests.
edit: Also, you have to remember that computing was a LOT more expensive then. I have old quotes for SPARCstations and RS/6000s that were in the $20-40k range, even with an educational discount. The Mac IIcis were not cheap either, ~$5k when loaded up with RAM.
Was it actually getting cut, or were they not getting the increase they wanted? I've lived in MN off and on since the '90s, and it seems like they call it a "cut" every time they don't get the increase they want.
> edit: Also, you have to remember that computing was a LOT more expensive then. I have old quotes for SPARCstations and RS/6000s that were in the $20-40k range, even with an educational discount. The Mac IIcis were not cheap either, ~$5k when loaded up with RAM.
Looking back, when people see the price of the NeXT cube and freak out, they forget how much a Mac IIfx was. It's amazing that the era had basically expensive computers at one end and machines like the Sinclairs and Commodores at the very low end.
> """We lost at least $25 million to inflation, and $16 million through a base cut this year. In addition to a potential $25 million loss to inflation next year again, the Governor's vetoes of IT and systemwide special appropriations cut another $23 million in funding -- for which we are aggressively seeking full restoration."""
The mainframe teams had a harder time of things. For Microcomputers we were lucky - our hardware costs decreased and we had a deal with the University Bookstore to support their computer hardware sales.
That stuff was still expensive. Here's some educational pricing for a workstation with a substantial education discount in 1994.
    Item                List price   Edu. price
    IBM model 25T       $8495        $5400.00
    80MHz upgrade       $1500        $953.50
    64MB upgrade                     $2912.00
    2GB disk upgrade                 $1463.00
But it was optimizing for a local maximum. The damage was done; the community was broken.
> Among the team's offenses: Gopher didn’t use a mainframe computer and its server-client setup empowered anyone with a PC, not a central authority.
File this under "straw-man responses that happened in their feverish anti-establishment imaginations."
Also remember that this was the old internet, where we didn't have NAT or really any firewalling. Most of the University of Minnesota's class B was open to the world at the time. In fact, I used to send email directly to my workstation instead of the central mail host to avoid delays.
Now, does this mean that Gopher would have "won out" over the WWW? No. But the description holds.
My impression was that they tried to monetize Hyper-G immediately, with the negative results mentioned in the original article. Having an open source server may have helped avoid that for both; I think Apache was a powerful force for the web.
Veronica-2 is the gopherspace search engine:
It feels whimsical to have a gopher site up and running, but it costs me nothing and earns me nothing. It's there because I like it and because I feel happy knowing it's there. In a way, that's the sentiment behind the Internet in its earliest days. And that's what I like about it - the philosophical purity of doing something for its own sake. And now, for the simple pleasure of posting a link with that old protocol:
How awesome does that look? :)
Unfortunately real life has got in the way, and I've not found time to release it yet.
By using a Windows Gopher client. There are several, but they're mostly outdated, which is why I have written a new one. I guess you didn't check the screenshot I posted.
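For anyone curious, the protocol such a client has to speak is about as small as protocols get: open TCP port 70, send the selector plus CRLF, read until the server closes the connection, and split menu lines on tabs. Here's a rough sketch in Python; it uses the well-known public server gopher.floodgap.com purely as an example and has nothing to do with my client.

    # Minimal Gopher fetch: selector + CRLF out, raw bytes back.
    import socket

    def gopher_fetch(host: str, selector: str = "", port: int = 70) -> bytes:
        with socket.create_connection((host, port), timeout=10) as s:
            s.sendall(selector.encode("latin-1") + b"\r\n")
            chunks = []
            while chunk := s.recv(4096):    # server closes the socket when done
                chunks.append(chunk)
            return b"".join(chunks)

    # Print the item type and display string of each line in the root menu.
    # Menu lines are tab-separated: type+display, selector, host, port.
    for line in gopher_fetch("gopher.floodgap.com").splitlines():
        fields = line.decode("latin-1", "replace").split("\t")
        if fields[0] and fields[0] != ".":   # a lone "." marks end of menu
            print(fields[0][0], fields[0][1:])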
SDF still provides a gopherspace inside its free shell:
Tangent: NNTP is still better than any web forum, FB, or Twitter. Too bad the only apparent way to solve the spam problem is via centralised identity platforms.
Ha! I've had precisely one vendor ask for an SSH public key to set up file transfer.
The world runs on FTP. It's sad. Even sadder, it's often a Windows FTP server!
The InterPlanetary File System (IPFS) was discussed quite a bit. NameCoin was the most popular DNS replacement. ZeroNet was a really interesting project that does all of the above.
I thought it was an interesting design that provided a nice way of reducing abstraction gaps and rework across the various layers/techs. They integrated it with their Amoeba OS, which ran on a cluster of workstations with a single system image.
Interesting that it references the Legion OS work that Greg Lindahl spoke about at the Summit:
At first glance, Globe feels like a low-level solution to the problem: mapping objects to binaries via a broker and all that.
The web we have today solved many of these issues in other ways. Anycast DNS and CDNs allow for content distribution. Storing state in cookies allows for operational transforms and other eventually consistent techniques.
Thanks for the reference and connecting a few dots!
On the surface it looks a little overcomplicated to me, for not a whole lot of benefit now that Google exists.
Maybe 2016 will be the year VR takes off (again)
One of the saddest things about the modern Internet is how little hypertext is used. As an example, I would have expected 'Mark McCahill' to link to his personal or academic site, 'San Diego' to link to the city government's site, 'Hyatt Islandia' to link to that hotel's site (perhaps with a note that its name has changed), and 'Mission Bay' to link to an appropriate page — and that's in the first paragraph alone.
It's also interesting how those slides from 1992 look current as of 2016: flat, bullet-pointed, sans-serif font.
> But the internet was not yet open for business. It had been built on dot-mil and dot-edu, on public funds. Programmers shared source code; if you needed something, someone gave it to you. A dot-com address was considered crass. It was “as though all of TV was PBS,” Lindner says. “No commercials.”
Those really were the days. I remember the Canter & Siegel spam, and how appalled we all were to see it. The Internet then was more genteel. Honestly, I wish it were still non-commercial: available to be used by companies (as it was even in the 90s), but not as an advertising medium.
> At GopherCon ’93, Yen announced that for-profit Gopher users would need to pay the U a licensing fee: hundreds or thousands of dollars, depending on the size and nature of their business. Many users felt betrayed. In the open-source computing spirit of the day, they had contributed code to Gopher, helping the team keep up with the times. Now they were being asked to pony up.
As much as it's become fashionable to group-hate esr over the last decade or so, I think it's important to recognise his major contribution: persuading folks that free software (under a different name) can be good for business. Imagine if the university had kept Gopher as GPLed software rather than demanding money from its users.
> At its peak, the Mother Gopher consisted of 10 Apple IIci computers. But when it was finally euthanized, who knows what shape it was in. There was no ceremony. Nothing was carted off to a museum. Gopherspace simply became emptier, and the world without the Web became harder to imagine.
That's definitely sad. We need to do a better job of honouring our history and marking important events. Even in the late 90s or early 2000s, folks ought to have recognised the historic import of Gopher.
If it's trivial for you to come up with the link, then it's not very useful because it's trivial for me, too. Similarly for something trying to automatically add links; I've actually seen that functionality and it's more annoying than helpful, obscuring the links added by humans with intention. If it's challenging for you to come up with it, then it takes effort and on average people won't do it.
Pretty much no matter how you slice it, there's no story that results in the sort of hyperlinking you mention.
Right click, search Google (or however FF/an extension does it), without the hard-to-read hyperlinking standard (hard to read because it's for important stuff; you should stop there and think).
Wikipedia is a joke in many articles in this regard, hyperlinking irrelevant points.
A hyperlink should reinforce the article, not detract from it in a random manner.
We read linearly, not in a hash.
Which of course relies on a third-party, proprietary service.
> A hyperlink should reinforce the article, not detract from it in a random manner.
Back in the old days, being able to jump around from article to article was considered a virtue. It made the Web like a piece of interactive fiction, an adventure. It was awesome.
> Cavanaugh pointed out that one particular lasting legacy of Suck's is the idea of using a link as a rhetorical effect. "People still used italics to make a point in a sentence back then," he said, explaining that the site was one of the first to use a link to let readers know what it was writers were discussing, or to point to a joke. "That was what knocked my socks off about Suck right away, was the idea that oh, the link is this funny thing."
I enjoyed it when the link was like a footnote at odds with the text.
I think this is subjective. I find that I'm far more often frustrated by a wiki page I'm reading not linking something that I wanted to click on (necessitating search etc), than I am frustrated by overabundance of links. This varies between different wikis, though.
>> We read linearly, not in a hash.
But do we do so because of the constraints that the legacy information mediums have imposed on us, or because it's really the only way our brain can process information? Perhaps our brains can, in fact, just as easily adapt to the web model?
Depending on the starting page, I "open in new tab" a whole bunch of links and then switch between tabs, doing a breadth-first search, finding my way either to deeper information on a topic or to related information that may then become my focus.
And don't call Wikipedia a joke. I vastly prefer it to the horde of content-mill, listicle-ridden garbage that makes up most of the popular web these days.
As even Firefox finally removed its support for the gopher protocol several years ago, "OverbiteFF" or the equivalent may need to be installed in Firefox or Chrome to view the above without issue.
True enough, but back when the Web was young, everyone linked pervasively. My impression back then was that that was what the Web was for.
This was good in some respects since it encouraged a high degree of cooperation, but it's definitely easier to access most things now.