Ted Nelson remembers Engelbart 5/11/2022 4PM-5PM Pacific (stanford.edu)
11 points by drallison on May 10, 2022 | 5 comments



Will definitely be watching this!

As I understand it, Nelson's vision is still not grasped today.

His original hyperlink was bi-directional.

Pause and think about that for a moment.

In practice, at the time, the target in a client-server implementation had no way to store "return" links pointing to it. The WWW could not rise to Nelson's ambition. Maybe that's changed today, and we could certainly rewrite servers or create a more peer-oriented way of going about things.
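To make that concrete, here's a toy illustration (hypothetical URLs, Python just for sketching): a link lives only inside the source document, so nothing ever reaches the target server to tell it a return link exists.

    # A link is just a string inside the *source* document. Nothing here is
    # ever sent to target.example, so that server cannot record a "return"
    # link back to source.example without extra machinery.
    source_document = """
    <p>See also <a href="https://target.example/page">this page</a>.</p>
    """
    # The target only learns of this link if someone crawls source.example
    # and tells it -- plain HTML/HTTP never requires that.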

The upshot would be a totally changed "Web", in which search is fundamentally different from a centralised index à la Google.


Not sure I understand. Can you explain? How would a bidirectional link work? What would be an example? And how would that free us from the need for centralized search engines, for example? I ask not because I doubt, but because I have a hunch that you may be right. Thanks!


Yes, it's a profound concept, isn't it?

I suppose there are three things to talk about: Ted Nelson, the idea, and possible implementation.

First, Ted.

He's clearly a genius, but also the type given to a lot of magical thinking. I mean that in a mostly good way.

Then there's the theory. Graph theory. The WWW (at its most basic) is an ephemeral directed graph under asymmetrical navigation: traversal is initiated by clients (requests) toward destinations on listening servers (respondents). It's a stateless, static document model. Within a document, the outdegree is the (known) number of links it contains, and the indegree is the (unknown) number of links in all the other documents out there that point to it. Links are just text strings (URLs).

As it stands, the only way to know the indegree is to spider all documents exhaustively and count. Larry Page's "rank" did this, giving a rough metric of how "important" a document was by its popularity (indegree), which launched Google.
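In sketch form (made-up URLs, Python only for illustration):

    from collections import defaultdict

    # Toy "web": URL -> outgoing link URLs found in that document.
    pages = {
        "a.example/doc1": ["b.example/doc2", "c.example/doc3"],
        "b.example/doc2": ["c.example/doc3"],
        "c.example/doc3": [],
    }

    # Outdegree is known from the page itself.
    outdegree = {url: len(links) for url, links in pages.items()}

    # Indegree only emerges after crawling everything else and counting.
    indegree = defaultdict(int)
    for links in pages.values():
        for target in links:
            indegree[target] += 1

    print(outdegree, dict(indegree))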

As I understood it, Ted wanted links that were containers of other links that referred to them. The server would somehow need to watch for these coming in and remember them. The upshot would be a much richer information space.
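Here's a hypothetical sketch of what a target server could do if sources announced their links; the names and storage are invented for illustration, not any real protocol:

    from collections import defaultdict

    class BacklinkStore:
        # Remembers "document X links to you" announcements as return links.
        def __init__(self):
            self.backlinks = defaultdict(set)   # target path -> source URLs

        def register(self, target_path, source_url):
            # Called whenever another document announces a link to target_path.
            self.backlinks[target_path].add(source_url)

        def return_links(self, target_path):
            # What bidirectionality would give a reader: every document known
            # to point here.
            return sorted(self.backlinks[target_path])

    store = BacklinkStore()
    store.register("/doc3", "https://a.example/doc1")
    store.register("/doc3", "https://b.example/doc2")
    print(store.return_links("/doc3"))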

Implementation. Can-o-worms. What happens if a million pages refer to yours? How to prune dead return links? How to trust servers to be honest? It's a completely different privacy/visibility universe.
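For pruning and honesty, one naive approach is to periodically re-fetch each claimed source and keep only links you can verify; a sketch, with fetch() faked by a dict where a real version would do an HTTP GET:

    fake_web = {
        "https://a.example/doc1": '<a href="https://me.example/page">me</a>',
        "https://b.example/doc2": "no link here any more",
    }

    def fetch(url):
        return fake_web.get(url)    # None means the source is gone

    def prune(claimed_sources, my_url):
        kept = set()
        for source_url in claimed_sources:
            body = fetch(source_url)
            if body is not None and my_url in body:
                kept.add(source_url)    # still exists and still links here
        return kept

    print(prune({"https://a.example/doc1",
                 "https://b.example/doc2",
                 "https://gone.example/x"}, "https://me.example/page"))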

So I think it's just a theoretical pipe dream.

But I hope I may be proved wrong by really smart people. I think the key is to move away from client-server to something a bit more flattened, perhaps with IPFS or suchlike underneath. Who knows.


Ted Nelson's vision and inventive genius helped create today's Internet. Engelbart and the Augment Project at SRI made some of the vision real.


Where's the YouTube recording?



