
Freeing the Web from the Browser - joesavage
https://www.reinterpretcast.com/open-hypermedia
======
oblio
While this is somewhat cool, I have a few comments:

> The Web is, without a doubt, the most powerful research tool currently
> available to man. No longer must researchers comb through endless indices
> and catalogues to find what they are looking for.

True, but most people aren't researchers. Heck, I think most people don't even
know what indices are :)

> The vast majority of those interested in a piece of work are merely readers,
> unable to contribute, only to consume.

Guess what, most people, 99% of the time, "consume".

> Billions across the globe rely on the Web to enhance their intellectual
> capabilities on a daily basis, building understanding through its rich mesh
> of connections.

Not really, billions across the globe check out funny cat pics, play games,
watch you-know-what, etc. :)

Anyway, what I'm saying is: it's a nice vision of the world and the web, but
that's not what the world mostly is. Good luck with it, but don't expect that
a super contributor-friendly medium will turn the vast majority of people into
constant contributors.

~~~
joesavage
I largely agree. I'm very much coming at this from the angle of 'knowledge
work', and think a system of this kind is most useful to (though definitely
not only useful to!) scientists, engineers, designers, lawyers, journalists,
etc. While the population of knowledge workers is admittedly much smaller than
the population as a whole, it's still sizable. Knowledge workers play an
incredibly important role in our society, and anything that can amplify their
intellectual capabilities is well worthwhile in my view.

~~~
michaelscott
If the target audience is knowledge workers then this is a nice step in a
better direction. Bear in mind, though, that the goal of a knowledge worker is
to understand content, so the focus of any such project probably shouldn't be
on enhancing the manifestation of graph theory on the web but rather on the
methods of education available.

Focus on how the content is presented to the user rather than the connections
between content, because at the end of the day it's the content I care about
and not the connections.

------
machiaweliczny
I would love a web where you can comment with your forum circle on any URI
available on the internet.

E.g. I open some research paper and click "comments" in my web browser, and I
see comments from /r/machinelearning, Hacker News, etc. Real-time chat for
each website would also be awesome to have.

Nowadays it works for me like this: find something interesting X, then type "X
site:myforum" into Google to learn more about it.

Found a bug or a typo, or want to contribute a related resource? You need to
go to GitHub, email the author, etc. You can't just open a "comment" box on a
page without comments and contribute :(

I think the structure of the internet we have right now relies too much on
Google/search engines when it could be much better organised.

~~~
wruza
That could erase the forum-ness of people and turn comments into a big heap of
voices. Personally I wouldn’t like my forums if they were just random people
hanging around. Moderators and implicit rules form what a forum is, _what you
do on it_, but with “just comments on a URI” all of that would be lost. While
your wish feels great, it has downsides, since as an entire humanity we’re
still not ready to discuss anything.

~~~
wild_preference
Just seems like a downgrade from posting it on said forum.

For example, now you need a UI that enumerates the articles that have comments
on them. Maybe new comments even bump to the top. Starts to sound like a
regular forum but with a small gimmick to me. There’s a reason this never
caught on.

------
dcuthbertson
This looked interesting, so I started reading the author's dissertation, but
I've been sidetracked/put-off by its overbearing copyright statement.

    This copy of the dissertation has been supplied on
    condition that anyone who consults it is understood to
    recognise that its copyright rests with its author and
    that no quotation from the dissertation and no
    information derived from it may be published without
    the prior written consent of the author.

At least in the US, the fair use doctrine under copyright law allows for
limited quotes/excerpts w/o asking permission.

~~~
joesavage
Sorry about this! It was a stock copyright clause provided by my university.
I’ll look into getting it removed, and in the meantime am perfectly happy for
you to use the work (with proper attribution) as you see fit.

EDIT: The problematic clause in question has now been removed.

~~~
dcuthbertson
Nicely done! Thanks for being so responsive!

------
bastawhiz
We almost got there, with Pingbacks being the first step. Then they devolved
into meaningless spam. Without a system of manual curation, it's impossible to
build something where _everyone_ contributes. Spam scales easily, and
moderation and curation do not. A thousand good links buried under a million
spam links don't add any value.

And we can talk about reputation and proof of stake systems until we're blue
in the face, but so far, nothing exists that actually works. If it did, we'd
already be using it.

~~~
yjftsjthsd-h
I never understood that; PoW seems like not merely an elegant solution, but a
simple one. What am I missing?

~~~
Yen
My gut sense is that proof-of-work as a spam deterrent can _improve_ the
signal-to-noise ratio, but won't bring it up to an acceptable level.

Say you have a simple proof-of-work protecting some action (posting a comment,
voting up/down a story, etc.), and you have its difficulty tuned to allow a
median-productivity human on a typical desktop computer to do that action at
their typical rate.

A spambot doesn't need to sleep or take days off, and it can get illicit
access to much more computing power than any one human.

It _would_ probably shift spamming activities to focus on more central, high-
value venues, though. hmm.

------
skadamat
Surprised nobody has mentioned HyperCard -
[https://arstechnica.com/gadgets/2012/05/25-years-of-
hypercar...](https://arstechnica.com/gadgets/2012/05/25-years-of-hypercard-
the-missing-link-to-the-web/)

There was a very compelling vision for the web from Doug Engelbart and others
in the '60s and '70s. Unfortunately, because of the computing culture's
attitude of forgetting even recent history rather than understanding what
foundational work was done (like real scientific fields do!), the web folks
didn't have a lot of that context.

Alan Kay, in many of his talks, has discussed how the browser should really be
more like an operating system kernel. The web is a mess and we can still build
interesting things with lots of hacking & engineering, but it's fallen short
of the original vision. And now we're locked into the tooling we've built.

~~~
nradov
WebAssembly may eventually get us closer to that vision.

~~~
skadamat
WebAssembly will no doubt give us more freedom, but still has a lot of
constraints. Also, it's fundamentally a hack built on top of the browser
ecosystem, not replacing browsers entirely!

~~~
tim333
I hadn't heard of HyperCard so I watched the video on it here
[http://www.openculture.com/2017/08/apples-hypercard-
software...](http://www.openculture.com/2017/08/apples-hypercard-software-the-
innovative-1980s-precursor-to-hypertext-now-made-available-by-archive-
org.html)

It seemed kind of cool but doesn't do much you can't do with webpages and
JavaScript. Its scripting language seemed much easier to do some things with
than JavaScript, though. I kind of imagine WebAssembly will go the other way
and make things more complicated.

~~~
scroot
> It seemed kind of cool but doesn't do much you can't do with webpages and
> JavaScript. Its scripting language seemed much easier to do some things with
> than JavaScript, though.

In our current developer culture we tend to view things less holistically and
more in terms of this-or-that language or system. HyperCard (which was
composed of both the visual system and HyperTalk) was one holistic system that
fit extremely well with the personal computing of the early to mid '90s (which
is why recreations of HyperCard on today's machines miss the point).

This was a completely different view of personal computing, one that sought to
reduce the chasm between "developer/programmer" and "user". HyperCard allowed
authoring in the computing medium, and did so by permitting users to take
advantage of what was new about computing as opposed to other, older media. In
fact, they were called "authors" and there were thousands of them; most of
them weren't professional programmers.

There is a lot that HyperCard could do that the modern web cannot. I cannot
copy and paste a button, retaining all of its internal functionality in a
different context, from one web page to my own (at least not without a lot of
trouble). This was the de facto way to get started in HyperCard.

HyperTalk is another interesting part of bridging the divide. It is difficult
for entrenched programmers to reckon with because it's more like natural
English and (unlike most programming languages) is easier to read than it is
to write. But for regular people it makes sense.

Final point: everything good about computing is about metaphors. HyperCard had
one of the best metaphors since spreadsheets: the concept of stacks, cards,
and objects on cards. That's all there was, and it was easy to understand how
these pieces interacted. The web does not have anything like this for its
"authors" because it is inherently unfriendly to them.

------
TuringTest
_Most saddening, perhaps, is the way in which the Web constrains the use of
links. For example: although the link is the primary form of reference on the
Web, underpinning the tangle of connections that make the system so useful,
the ability to create new links is a privilege granted only to content
producers. The vast majority of those interested in a piece of work are merely
readers, unable to contribute, only to consume._

The sad part is, we already have the technical infrastructure in place to
support those user contributions - it's the Comments section of any blog-
shaped site.

So called "Web 2.0" was all about readers contributing feedback to whichever
content was being published through a channel. But the shape it took was not
the original hypermedia vision, but a conversation of loosely related comments
that could potentially go off-topic.

To support the annotation feature described in the article, it would just
require that common web platforms allow their current comment systems to
attach comments to paragraphs in the article, and show these comments as side
notes. Current moderation functions could be used to separate the wheat from
the chaff. But it would require readers to adapt and learn to tap this
resource to its fullest potential.
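
As a rough sketch of what that might look like on the data side (the field
names are my own invention, not any real platform's API), a comment just gains
an anchor into the article, and rendering groups notes by paragraph:

    // Hypothetical shape for a paragraph-anchored comment ("side note").
    interface SideNote {
      articleUrl: string;      // the resource being annotated
      paragraphIndex: number;  // which paragraph the note attaches to
      author: string;
      body: string;
      score: number;           // existing moderation signals still apply
    }

    // Group notes by paragraph so they can be rendered in the margin.
    function byParagraph(notes: SideNote[]): Map<number, SideNote[]> {
      const groups = new Map<number, SideNote[]>();
      for (const note of notes) {
        const bucket = groups.get(note.paragraphIndex) ?? [];
        bucket.push(note);
        groups.set(note.paragraphIndex, bucket);
      }
      return groups;
    }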

~~~
superflyguy
" it's the Comments section of any blog-shaped site."

Something I never read on any site. It's just pure racism/muppets talking
shit. If there were a plugin to block them I'd use it; I feel dirty even
knowing they're down there.

~~~
TuringTest
You're reading the comments section of a website right now.

Places like Ars Technica or Stack Exchange have a healthy environment in the
comments section, so it can be done.

~~~
superflyguy
This site works because of the small number of people here, who are educated,
and because there's sensible moderation.

~~~
TuringTest
Yes, precisely.

So, it can be done.

~~~
superflyguy
So what? I should start reading Nazis' comments on other sites now?

------
robertkrahn01
Great to see those ideas discussed! It is a little strange, however, that Ted
Nelson's ideas around Xanadu and its link representations aren't even
mentioned.

[https://youtu.be/hMKy52Intac?t=1m44s](https://youtu.be/hMKy52Intac?t=1m44s)

~~~
goodmachine
Check the full dissertation, they are.

[https://www.reinterpretcast.com/pdfs/savage-j-
dissertation-2...](https://www.reinterpretcast.com/pdfs/savage-j-
dissertation-2018-05.pdf)

------
ChrisSD
Didn't Google once have a project that allowed readers to annotate the web?

If I understand it correctly, it sounds like the author wants something
similar, only categorised by field of expertise instead of being a free-for-all
(and not owned by one company). This would require some kind of moderation, in
one form or another.

 _EDIT_: Wikipedia says I was most likely thinking of Sidewiki, which wasn't
actually a wiki:
[https://en.wikipedia.org/wiki/Google_Sidewiki](https://en.wikipedia.org/wiki/Google_Sidewiki)

~~~
ChrisSD
Another thing my search brought up was Hypothesis[0], which sounds almost, but
perhaps not quite, like what the author describes.

[0] [https://web.hypothes.is/](https://web.hypothes.is/)

------
tim333
>One could imagine a system in which multiple sets of links could be
associated with a single resource to accommodate this, allowing for a range of
different viewpoints on how things are connected.

You can kind of do it yourself by quoting the thing in a public comment eg
"Freeing the Web from the Browser" [https://www.reinterpretcast.com/open-
hypermedia](https://www.reinterpretcast.com/open-hypermedia)

and then saying it reminds me of whatever eg. the semantic web
[https://en.wikipedia.org/wiki/Semantic_Web](https://en.wikipedia.org/wiki/Semantic_Web)
a bit

Then anyone googling the title may come across stuff like this.

------
icc97
Although this talks about freeing the web from the browser, this seems like a
pretty good case for augmenting the browser experience. The first thing would
be a browser plugin that ignores all links (which is probably a good default
for anyone interested in reading an article and not getting distracted), which
then allows a layer on top for highlighting sections and creating your own
links. I expect this probably already exists.
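
For the link-ignoring part at least, a browser-extension content script could
be very small. A minimal sketch, assuming a standard WebExtension content
script (the user-created link layer would then build on top of this):

    // Unwrap every anchor: keep its text, drop the author's link target,
    // leaving a clean page for a user-controlled link layer.
    for (const a of Array.from(document.querySelectorAll("a[href]"))) {
      const span = document.createElement("span");
      span.append(...Array.from(a.childNodes));
      a.replaceWith(span);
    }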

I think firefox's reading mode should have an option to turn off links.

The missing step here is connecting to other programs, but this is a first
step.

~~~
joesavage
Author here. It’s definitely true that extending existing browsers is the
fastest route to this kind of behaviour (though it’s not clear to me if it’s
the best route). In fact, if you squint a bit (or maybe a lot), it could be
argued that with application-specific URL schemes we kind-of sort-of already
have the primitives we need to make something like this work.

Practically, though, if you want the multi-program side of this (which is kind
of orthogonal to the 'multiple perspectives on how things are connected'
side), then to make this kind of multi-window hypermedia system usable I think
you need to have deep integration with the window manager. While Chrome OS
tries to achieve this kind of integration by making the browser the OS, I
propose that the best way forward here is to effectively make the OS the
browser, as I discuss in the article. (Of course I’m not talking about the
kernel when I say ‘OS’ here, but the desktop environment). At that point, I’d
say the browser is different enough from the browsers of today that the
description of ‘freeing the Web from the browser’ is still accurate.

~~~
icc97
Thanks for the reply! It's a really interesting piece of research. Yes, you're
quite right, the browser alone can't interact between all the programs, I was
purely thinking of low hanging fruit for the browser plugin.

I watched a Google interview with Douglas Engelbart [0] where people asked him
a couple of times if Wikipedia was what he'd envisioned for hypertext. He was
very polite about it and said it "was a good start", but he'd clearly wished
that we'd got much further by now.

What you're suggesting is definitely a step forward.

[0]: [https://www.youtube.com/watch?v=xQx-
tuW9A4Q](https://www.youtube.com/watch?v=xQx-tuW9A4Q)

------
jerjerjer
The problem, I think, is that most sites and businesses would be up in arms
over this idea. FB wants to have links pointing to other FB pages. Many
websites do a lot to prevent users from leaving. How do you plan to overcome
this?

~~~
joesavage
That's a good point, and not one that I've considered particularly deeply to
be honest. (I'd love to hear other people's point of view on the topic!) I
guess in many ways the situation is similar to that around adblock.
Ultimately, the links that are overlaid on a particular page should be solely
and completely under the control of the user. If the technology that everyone
is using permits this kind of behaviour, I'm not sure companies have much
choice in the matter.

~~~
TeMPOraL
> _If the technology that everyone is using permits this kind of behaviour,
> I'm not sure companies have much choice in the matter._

The technology _used to_ permit this; the current trends go against this
direction. I'm of course referring to JS-rendered content and SPAs. I imagine,
were your idea deployed, most of the time would be spent on fixing broken
links and link anchor points.

I support the goal you're trying to achieve. But between greedy publishers and
their ToS and JavaScript infecting everything like a pathogen, I fear that
we'll have to spin up an alternative Internet for knowledge work. That
Internet would be reader-friendly (both human and machine kind) and much more
static.

------
xtf
Most documents are linked according to their topic. If you go to a math page,
you get math links. Ancient and holy texts can be referenced in multiple ways,
because they can be interpreted in multiple ways (or the correct way is
unknown), but not something like guides. If a guide can be read with more than
one meaning, it is not a good guide. Learning should be seen more like a tree:
you go down a route of branches, specializing further in that direction, and
the branches lead to the references. At the beginning you learn the language;
later you know it. Otherwise every word would need to be linked.

~~~
romaniv
The simplest counter-example is Wikipedia. Most links on any page could lead
to generic term definitions, or they could link to explanations of how those
terms work within the context of the page.

I.e. a link to "synthesizers" on a page about FM synthesis could lead to a
generic article on synthesizers, or to a list of FM synthesizers released to
date.

And that's just the most obvious example. Having different "linking contexts"
would allow adding more links without turning the original document into a
mess.

Again, using Wikipedia as an example, you could add another "context" to the
page by linking various paragraphs to citations. That would be much more user-
friendly than what they do right now with bracketed numbers.
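
A minimal sketch of what a "linking context" could be as data (the context
names are illustrative and the second URL is hypothetical): the same anchor
text maps to different targets depending on which context the reader selects:

    // Several independent link sets over one resource.
    interface OverlayLink {
      anchorText: string; // text in the document to link from
      href: string;       // where this context says it should point
    }

    const linkContexts: Record<string, OverlayLink[]> = {
      "generic definitions": [
        { anchorText: "synthesizers", href: "https://en.wikipedia.org/wiki/Synthesizer" },
      ],
      "FM-specific": [
        // hypothetical target for a "list of FM synthesizers" page
        { anchorText: "synthesizers", href: "https://example.com/fm-synthesizers" },
      ],
    };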

------
Sidth
Great article, thanks for sharing. Of relevance here is TiddlyWiki
(www.tiddlywiki.com), a personal note-taking tool that includes some of the
salient features mentioned in the article, like transclusion and
user-generated linking. In addition, the approach supports a built-in
programming capability to allow computation on content (e.g., dynamic
filtering and content generation) and extension via plugins. All information
(content and JavaScript code) is within a single HTML document.

------
dredmorbius
Joe: some good thoughts, and ideas I'm seeing from several quarters that I
have been thinking about myself.

I'd also like to see a tool that's useful for research, a _readers'_ and
_writers'_ Web, not merely a consumers' funnel.

I've been looking at the history of the Web and information in general (guilty
pleasure:
[http://www.historyofinformation.com/index.php](http://www.historyofinformation.com/index.php)).
Bush, Nelson, TBL. Early browser history, especially Viola, an entire system:
[http://www.viola.org](http://www.viola.org). Plan 9's 9P and /net:
[https://en.m.wikipedia.org/wiki/Plan_9_from_Bell_Labs#/net](https://en.m.wikipedia.org/wiki/Plan_9_from_Bell_Labs#/net).

I'm thinking of possibly presenting the Web as a filesystem, or other forms
not typical of contemporary browsers:

[https://old.reddit.com/r/dredmorbius/comments/6bgowu/what_if...](https://old.reddit.com/r/dredmorbius/comments/6bgowu/what_if_the_web_was_filesystemaccessible/)

And I've a long list of concerns, largely referenced in the last link above.

Taking a look at your doc.

------
code_coyote
Comments and feedback won't work for a site with a lot of readers or viewers;
they don't scale. If just 20k people read your post and one-quarter of them
comment, you'll be flooded and unable to find anything meaningful. I'm already
following some YT channels where the creators have stated, "we can't read the
comments; requests here are ignored."

~~~
wruza
YT comments simply suck in every area. Creators can’t make polls, sorting
seems random, and upvoting has no filters like funny/insightful. Moderation
seems not to exist at all. No groups, no sections (those 4-5 in the sidebar
don’t count). It is the worst implementation possible, and YT has done nothing
to fix it for over a decade. They have time to make “material designs”,
though. If you imagine that videos are just OPs in a forum, you’ll see how
crappy it is.

------
tomc1985
I had to stop reading after a few paragraphs. His assertion that linking
belongs to "content producers" is ludicrous. Those content producers have
given users the tools to do linking themselves, and they express themselves in
a variety of ways over a variety of mediums.

You need to learn how to write to express yourself with the written word, yet
how many people do we hear harping on how difficult it is to learn language?

At some point we can draw a line and say, "if you want these abilities you
need to learn these things". We did so with literacy, with driving, and with
so many professional trades. We can do so with basic internet literacy.

~~~
ergothus
> Those content producers have given users the tools to do linking themselves,
> and they express themselves in a variety of ways over a variety of mediums.

? How can I link to, say, a quote in that article that offends you? I can't.
How can I link to a youtube video and add my own commentary links? I can't (I
think) without creating my own video that explicitly copies the original
(rather than consuming it).

> You need to learn how to write to express yourself with the written word,
> yet how many people do we hear harping on how difficult it is to learn
> language?

I definitely wish people took the time to work on that skill instead of
assuming it's both automatic and that their level is adequate. Nonetheless,
this doesn't seem related to the point of the article - not that linking is
HARD, but that, outside of whatever the creator enabled, all we get are top
level URLs.

~~~
ianbicking
> How can I link to a youtube video and add my own commentary links? I can't
> (I think) without creating my own video that explicitly copies the original
> (rather than consuming it).

You can write a blog post and embed the video with timing information.

Of course embedding is largely the same as transclusion, among the features
touched upon by open hypermedia.

While you don't get full expressivity without a blog or something that allows
full HTML, you can get most of this in other mediums (e.g., Twitter) where the
video is embedded automatically given a link. In theory OEmbed
([https://oembed.com/](https://oembed.com/)) is a standard for something like
transclusion, though it's not very widely supported.

Constructing a link to a point in time in a video is a non-standard operation
(you just have to know the YouTube interface). Similarly there aren't great
patterns for finding a link to a position in a web page. But the pieces are
all kind of there, though missing the controls and patterns to bring them
together. Which is a failing of browsers, though that points in the opposite
direction of the claim in the title of this piece (i.e., it implies to me that
we need browsers to go deeper, not increase the breadth of linked
applications).
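
To illustrate how ad hoc this is, here is the YouTube-specific construction
sketched in TypeScript (the `t` and `start` query parameters are YouTube
conventions, not a cross-site standard, which is exactly the point):

    // Deep link into a video: each site invents its own parameter.
    function timestampedWatchUrl(videoId: string, seconds: number): string {
      return `https://www.youtube.com/watch?v=${videoId}&t=${seconds}`;
    }

    // The embedded player uses a different parameter for the same thing.
    function timestampedEmbedUrl(videoId: string, seconds: number): string {
      return `https://www.youtube.com/embed/${videoId}?start=${seconds}`;
    }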

------
einrealist
I don't get what the actual problem is, or how today's technology prevents
anyone from creating something like a link-sharing site (e.g. Reddit) or a
link redirect (hello, URL shorteners). Of course, if you need control over the
content, then you have to build a content management platform too (centralised
or decentralised does not matter). But then you deal with boring copyright and
other legal stuff.

And hey, browsers do allow extensions nowadays. And if that's not enough,
build your own.

------
bb010g
This smells a lot like Xanadu.

~~~
moviuro
Saved you a click:
[https://en.wikipedia.org/wiki/Project_Xanadu](https://en.wikipedia.org/wiki/Project_Xanadu)

------
echan00
Super small feedback: you should put the video at the top of your post.

I really dig the commentary btw, maybe it's the English-sounding voice :)

------
mertnesvat
It would be awesome if they had a social media plugin which migrates all posts
from other platforms into Mastodon.

------
mark_l_watson
There used to be a browser plugin that allowed its users to register comments
on parts of web pages.

In general I like the idea behind the article, of enriching content by
allowing readers to add links, but in practice this opens the door for
spammers.

------
achow
For those who are on mobile:

[https://www.outline.com/hvF3cS](https://www.outline.com/hvF3cS)

------
masukomi
did i miss something or does this article COMPLETELY ignore the amazing
possibility for abuse this opens up?

Now any a-hole can make a public link in your page (or whatever future form
that takes)? Nah, no way _that_ could go wrong. The word "abuse" appears
literally zero times in the 200-page PDF.

~~~
vxNsr
This is exactly where my mind went. The amount of spam even small-time
bloggers have to contend with in their comment sections is astounding; now
imagine if every page on the internet had a comment section and no filtering
or moderation... Even today YouTube comments suck because of the near-total
lack of moderation.

Even on sites like this one or The Verge, with paid and volunteer moderators
who in theory monitor things 24/7, I still see useless spam. I'm not even
talking about individual people with opinions some might find offensive, I
mean just outright spam advertising.

------
textmode
"Different people have different perspectives on how information should be
connected, so why do we not allow this range of perspectives to be
represented and shared digitally? Why limit ourselves to just one point of
view?

...

Why re-create code editors, simulators, spreadsheets, and more in the browser
when we already have native programs much better suited to these tasks?"

The title is something I contemplated and began to address long ago, only on a
personal level.

With respect to the first question, perhaps this traces back to the poor
mechanism promoted by Google: ranking the www's contents by "popularity".

This mechanism obviously succeeds for purposes of measuring _www user_ opinion
and _selling advertising_ (the latter not anticipated by the founders in the
early years). However, it falls short in the non-commercial context, e.g., the
academic setting out of which the company grew. Anyone remember "Knol"?

Today Google search (and probably others seeking to emulate its commercial
success) intentionally promotes a pattern of usage of its cache/database where
users never reach "page 2" of search results. The company has built
their ad sales business on the idea that _one_ perspective ("the top search
result") should not only prevail but also that, optimally, other results need
not even be considered. It should be obvious that in a _non-commercial
research_ context, this is not optimal.

If the www is 100% commercial then of course this is not an issue. But "the
www" is difficult to define. All httpd's on any accessible network? All
httpd's listening on accessible addresses with corresponding ICANN-registered
domainnames? All pages crawled by a commercial bot, deposited in a commercial
www cache and made accessible to the public? And so on. In any event, if users
only view the www's supposed contents through the lens of a commercial
entity, the perception of what the www actually comprises may be manipulated
in a way that suits commercial interests, e.g. the sale of advertising.

As to the second question, when given the choice I do not use a popular web
browser. The author mentions the utility of "native programs". I would prefer
the term "dedicated programs". Programs that perform essentially one task, or
"do one thing". Whether such programs can perform their dedicated tasks better
than an omnibus-styled program that performs many, varied tasks is a question
for the user to decide. For example, the author answers that native programs
are "better suited" than the web browser.

The "web browser" has become a conglomeration of once dedicated programs.

There are such dedicated programs for making TCP connections over which HTTP
commands can be sent and www content retrieved. This is a task that web
browsers can perform, although some users may prefer a dedicated program. In
this way content retrieval can be separated from content consumption,
alleviating many of the www annoyances such as user tracking, manipulation and
advertising.
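
As a toy illustration of that separation, a retrieval-only sketch in
TypeScript on Node (using HTTP/1.0 so the server closes the connection; a real
dedicated client would of course handle TLS, redirects, and so on):

    import * as net from "net";

    // Retrieval only: open a TCP connection, send one HTTP request, dump the
    // raw response. No rendering, no scripts, no tracking; consumption is
    // left to whatever dedicated program the user prefers.
    const socket = net.connect(80, "example.com", () => {
      socket.write("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
    });
    socket.on("data", (chunk) => process.stdout.write(chunk));
    socket.on("end", () => socket.destroy());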

------
thawkins
Select text, right click, search with google, job done....

------
ianbicking
We do have a way to create links between two documents without editing either
document: create a new document that links to both documents. This is a
normal, though informal, activity.

And of course simply linking two documents together isn't that useful, you
have to say WHY they are linked. I.e., the semantic triple
([https://en.wikipedia.org/wiki/Semantic_triple](https://en.wikipedia.org/wiki/Semantic_triple))
of subject–predicate–object, or maybe more informally you are simply saying X
relates to Y because of Z, where Z is akin to the predicate.

Currently in HTML hypertext we're stuffing Z into the link text, which
sometimes works nicely and sometimes works very poorly. But in an external
document you have all the space you want to explain the relation between the
documents.
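
Concretely, such an external link document could be as small as a triple. A
sketch (the field names are mine, though the structure follows the
semantic-triple article linked above):

    // X relates to Y because of Z.
    interface LinkDocument {
      subject: string;   // X: URL of the first document
      object: string;    // Y: URL of the second document
      predicate: string; // Z: the stated relation between them
    }

    const example: LinkDocument = {
      subject: "https://www.reinterpretcast.com/open-hypermedia",
      object: "https://en.wikipedia.org/wiki/Semantic_triple",
      predicate: "gives a formal name to the kind of relation described",
    };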

Obviously there are lots of shortcomings to adding a new document to the web
to explain every relation between existing documents. But I think it's a good
starting point. We're missing things like:

1\. Reliable deep linking to documents. We have ids, YouTube timestamps, etc.,
but finding these is an ad hoc process and they aren't always available.

2\. Widespread transclusion tools. We actually have some now, in the form of
link previews or OEmbed. When you post a link in a comment or post on Twitter
or Facebook, they effectively transclude the link into the document. Not fully
interactive, but it might be a better balance between linking and viewing than
traditional/literal transclusion.

3\. Discovery of these annotations or commentary. There's a hard CS problem
here, to maintain privacy while also trying to find serendipitous results.
Maybe it involves pre-loading lists of documents from the locations you want
to "discover" from. Maybe it requires some understanding of privacy levels, or
whether content is personalized or public. Or we use the technique we have
now: lead with commentary, with no attempt to discover it after the fact.
I.e., I know there are comments on [https://www.reinterpretcast.com/open-
hypermedia](https://www.reinterpretcast.com/open-hypermedia) at
[https://news.ycombinator.com/item?id=17690865](https://news.ycombinator.com/item?id=17690865)
because I found the document on
[https://news.ycombinator.com/news](https://news.ycombinator.com/news) – is
serendipity even a thing in a place as large as the web?

4\. Maybe publishing tools... do I want to post a Tweet to describe every
relation I see? But maybe I do, because even if organic discovery is possible
I probably also want to publish a feed of my own annotations, and I want to be
part of a community of people doing this, and Twitter is a reasonable example
of this.

5\. Some sort of representation of these links when they've been found. Even
without fancy discovery this is necessary. Right now if I click on a link from
a post like: "OMG this is the stupidest argument ever:
[http://example.com/some-stupid-document](http://example.com/some-stupid-document)"
it will look like any other page I've opened. Only if I remember
well why I clicked on the link will I understand that I've been offered
something with derision. The browser has to do something here, all it has
currently is the back button to understand why you've gotten somewhere (and
that doesn't even work consistently in these cases).

------
LoSboccacc
oh boy, it's the semantic web all over again. the lack of citations of the
copious corpus that exists is appalling, as is the fact that it's never named
for what it is.

"what is really lacking — in my view — is research considering the human
factors at play"

there you go, if someone is interested in the topic, some citation back from
2005 which should be enough to find more references and research
[http://kmr.nada.kth.se/papers/SemanticWeb/HSW.pdf](http://kmr.nada.kth.se/papers/SemanticWeb/HSW.pdf)
(they even have a workable concept browser, go figure)

~~~
joesavage
To be clear, I’m familiar with the semantic Web and did a reasonable chunk of
reading about it when doing this research, but view it as only tangentially
related to the ideas I talk about here. If you’re looking for citations around
this work, check the full dissertation — there are plenty.

~~~
adultSwim
Thanks. It's not clear in the article that this is based on another work. Can
you provide a link? Reading the article I too thought linked data was
noticeably absent.

------
jlebrech
One way would be to create a browser/portal combo that only indexes
WebGL/wasm apps, for example. You could still visit the info page about an app
via a normal browser, but would have to install the specific browser to use
it.

The irony is that a walled garden might have a valid use case here (not to
wall us in for a vendor, but to wall us off from old tech).

~~~
lucbocahut
Sounds like we can achieve this with a browser plugin; not sure we need a
whole new web just yet :)

------
jusa_
Every year for the last 25 or 30 years, I see this kind of thinking about
"information processing" show up.

What it represents is a gigantic failure of computer science departments
worldwide to connect their theories of information with education departments'
theories of information.

Most techies who mentally masturbate about how information should be organized
and optimally consumed to maximize the production of good outcomes have never
heard of the word pedagogy.

Without understanding that complex topic, they spend their time busy producing
articles and collecting them in libraries that only they can navigate. They do
this while scratching their heads, wondering why it isn't creating global
enlightenment, forever stuck in some fool's quest for a better magical library
that will inject wisdom automatically into their heads.

After they hear of pedagogy, and after they read a couple of textbooks on how
to turn a first grader into a tenth grader, they finally understand the
difference between a library and a school. They then proceed to think up ways
of converting the web (a library) into a school, most of the time not even
fully aware of what they are attempting.

And that's why it always fails. Schools have already been invented. They
already exist. They are constantly evolving. And they will always be better
than a library at producing information processing in the human mind. Every
first grader knows not to walk into a tenth-grade classroom and try to solve
the problem on the board there. Now step back and take a moment to think about
why that doesn't automatically happen on the web, and what the consequences
are of first graders being constantly exposed to problems of all sorts of
grade levels without any indicator of grade or path to that grade. Naturally
these first graders get it into their heads that there is something very wrong
with the web.

If you want to "improve the web", understand pedagogy.

