
Web of Documents (2019) - rauhl
http://blog.danieljanus.pl/2019/10/07/web-of-documents/
======
vlttnv
I think I understand what the author is trying to say. For me the feeling that
you are browsing a "web of documents" is that you know that you are clicking
around for information. It is very basic. You look at and read a page with
some links. You find something interesting and click on a link. This produces
a deterministic action: another page loads with some links. That's it. It
might look slightly different but it doesn't move around when components are
loading dynamically, no pop-ups, no red "1" badges that try to grab your
attention. It will either load slowly or it will load fast, it won't alternate
between the two while loading.

The main focus is the content of the document and the other content it leads
to. Not the style, the features, the tech that it uses to load the content. I
compare it to books. Most books are the same, predictable. They will have text
and some images on pages. When turning a page it will show another page. It
won't suddenly ask you to log in or go buy another book in the middle of this
one. The covers of the books are slightly different but the basic function is
the same. That's it.

~~~
onion2k
_it doesn't move around when components are loading dynamically_

Images that don't have width and height attributes always did that. Not adding
those attributes to an img tag was considered a sign of a poorly made website
in the very early 2000s. I think it _might_ have been the origin of the term
"layout thrashing".

~~~
joshspankit
Sidetrack: I really miss when designers added explicit height and width. Even
mobile apps do it now, causing the button under your finger to jump as you're
tapping...

Feels like such a step backward

------
scottmotte
Daniel's sentiments resonate with me. I would add 1 additional restraint:

4. Every document is version controlled

That way, as Daniel puts it, "[the document] will not magically alter its
contents tomorrow". Or if it does, I can see a history of what was altered.
Ideally, this would somehow be built into the protocol/browser rather than be
a burden to the publisher.

Also, maybe after a certain amount of time I can no longer modify my document.
If I'm the New York Times, this means when I publish a news article document,
and it contains an advertisement, that same advertisement forever lives on
that document - just like a physical newspaper.

~~~
s_gourichon
Interestingly, the page as I just read it does not (or no longer, I guess)
mention "version control". Probably the author changed their mind in favor of
simplicity. This is funnily, ironically meta.

That said, I agree with the author and discussed the topic with friends for
years.

On the "do you practice what you advocate?" side, my freelance company
website has been, since day one 8 years ago, a totally static (no scripts, no
cookies... except the one my hosting provider added without giving me a
choice; I will quit them some day) collection of documents with readable,
stable URLs.
[https://fidergo.fr/](https://fidergo.fr/) And the smaller English language
[https://fidergo.com/](https://fidergo.com/)

The part detailing dozens of software projects is generated in advance from a
structured data store into static pages uploaded to the server.
[https://fidergo.fr/expertise](https://fidergo.fr/expertise)

Version-controlled, scripted deployment.

Safe for me to serve, safe for you to browse and read.

That is the web of documents.

~~~
nathell
> Interestingly, the page as I just read it does not (or no longer, I guess)
> mention "version control". Probably the author changed their mind in favor
> of simplicity. This is funnily, ironically meta.

Author here. The article has not been edited since it was originally
published. I agree that version control is an important topic, and I applaud
efforts such as IPFS; it is, however, tangential to the main point of
"documents vs applications" that I was trying to make.

~~~
s_gourichon
Oh, sorry, and thanks for the correction. That line was actually an
additional idea from a commenter, not from you. I totally agree with you.
Bundling in a global, visitor-visible version-control system would bring the
whole idea closer to the never-implemented Xanadu project.

------
joppy
The author states that in an ideal "web of documents", we would have (1) only
GET requests, (2) no (java)scripts, and (3) no cookies. While I agree with (1)
and (3), I disagree completely with (2).

Having Javascript enables amazing things like _interactive documents_. For
example I would consider [1] a document, despite the fact that there is
Javascript running. What a wonderful way to interact with and get a better
understanding of a Voronoi diagram! Or how about the wonderful interactive
demonstrations of graph searches at [2]? To me, things like this are part of
what makes the web platform fantastic.

Earlier in the article, the author writes:

"A document is stateless. It exists in and of itself; it is its own microcosm.
It may be experienced interactively, but only insofar as it enables the
experiencer to focus their attention on the part of their own choosing; the
potential state of that interaction is external to the document, not part of
itself."

I think that the two interactive javascript-powered articles I linked comply
with this. Ruling out scripts entirely is too heavy-handed - perhaps a nice
middle ground can be found somewhere.

[1]:
[https://strongriley.github.io/d3/ex/voronoi.html](https://strongriley.github.io/d3/ex/voronoi.html)

[2]:
[https://www.redblobgames.com/pathfinding/a-star/introduction...](https://www.redblobgames.com/pathfinding/a-star/introduction.html)

------
anderspitman
I've been thinking a lot about this. Over the past week or so, I first made my
blog browsable with cURL[0], then plain TCP sockets[1]. I learned a lot
through this process, and relearned some things as well:

1. HTTP/1.1 is valuable. With the move to HTTP/2, we lost the principle of
"simple things should be simple, complex things should be possible" (Alan
Kay). All things in HTTP/2 are complex. I can make an HTTP/1.1 client on the
command line with netcat.

2. Serving documents is incredibly simple[2] and safe (once you handle path
vulnerabilities :D ).

3. We should extract web browsers from the behemoth JavaScript VMs we're all
running now. I'm all for having portable VMs, and I like the direction
WebAssembly is headed, but probably 80% of what I do on the web could be
accomplished using an order of magnitude simpler software, accessing read-only
documents.

4. Markdown and other human-readable formats are awesome. I used to think
writing my blog in HTML had the least dependencies, but now I realize it makes
you depend on a browser to render it, and that's a _huge_ dependency. Markdown
can be read as-is and therefore is almost dependency-free.
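Points 1 and 2 above are concrete enough to sketch. This is not the commenter's actual server (that is the Go project linked below); it's a hypothetical minimal HTTP/1.1 document server over a raw socket, including the path-traversal check point 2 alludes to. Names like `DOCROOT` and `resolve` are made up for illustration:

```python
# Minimal GET-only HTTP/1.1 document server over a raw socket.
# A sketch, not production code: no timeouts, no MIME types.
import socket
from pathlib import Path

DOCROOT = Path("docs").resolve()

def resolve(url_path):
    # Reject any request path that escapes the document root.
    target = (DOCROOT / url_path.lstrip("/")).resolve()
    if DOCROOT not in target.parents and target != DOCROOT:
        return None
    return target

def handle(conn):
    request = conn.recv(4096).decode("latin-1")
    method, path, _ = request.split("\r\n", 1)[0].split(" ", 2)
    target = resolve(path) if method == "GET" else None
    if target and target.is_file():
        body = target.read_bytes()
        head = f"HTTP/1.1 200 OK\r\nContent-Length: {len(body)}\r\n\r\n"
        conn.sendall(head.encode() + body)
    else:
        conn.sendall(b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n")
    conn.close()

def serve(port=8080):
    with socket.socket() as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", port))
        s.listen()
        while True:
            conn, _ = s.accept()
            handle(conn)
```

The whole thing is short enough to read in one sitting, which is rather the point.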

So I agree with the author here, and this is how I think we get there. Someone
should make a stripped down web browser with the following attributes, or
something similar:

1. Only speaks HTTPS

2. Only GET requests

3. A few choice headers, like Range for streaming video.

4. No JavaScript, or any Turing-complete language at all.

5. Minimal CSS, i.e. colors, font sizes, flexbox. No animations (you don't
have a language to trigger them with anyway).

6. Can probably throw away some HTML elements as well.
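Points 2 and 3 in the list imply that such a browser's entire request vocabulary would be one GET line, a Host header, and an optional Range header. A hypothetical helper (the function name and shape are made up for illustration):

```python
# The whole request surface of a GET-only document browser: one method,
# a Host header, and an optional Range header for media streaming.
def build_get(host, path, byte_range=None):
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}", "Connection: close"]
    if byte_range is not None:
        start, end = byte_range
        lines.append(f"Range: bytes={start}-{end}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode()
```

A browser limited to this can't exfiltrate form data or trigger state changes, which is where much of the claimed safety comes from.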

The nice thing about this approach is that sites made to work with this
browser will still work in normal browsers. They'll just be super fast, low on
resources, secure, and private.

[0]
[https://anderspitman.net/17/#curlable](https://anderspitman.net/17/#curlable)

[1]
[https://anderspitman.net/19/#netcatable](https://anderspitman.net/19/#netcatable)

[2] [https://github.com/anderspitman/newb-server-go](https://github.com/anderspitman/newb-server-go)

~~~
dvfjsdhgfv
> With the move to HTTP/2, we lost

Fortunately, we haven't lost anything yet on that front as HTTP 1.1 is still
supported by practically all servers and clients.

~~~
anderspitman
Yes, that's actually what I'm trying to convey: we shouldn't see 1.1 as
something to deprecate. I'm in favor of maintaining that compatibility.

------
dsleno
What the article describes is gopherspace, which believe it or not still
exists. And believe it or not, is experiencing a resurgence. It's too bad
major browsers gave up on gopher:// because as we are seeing now in this and
other laments for the old www, we lost something that we didn't have to lose.
Gopherspace--check it out.

~~~
rossdavidh
best browser for gopherspace, if I were curious? best search engine?

~~~
ChrisSD
If you're just curious, there's a gopher->http gateway:

[https://gopher.floodgap.com/gopher/gw](https://gopher.floodgap.com/gopher/gw)

And a search engine:

[https://gopher.floodgap.com/gopher/gw?=gopher.floodgap.com+7...](https://gopher.floodgap.com/gopher/gw?=gopher.floodgap.com+70+372f76322f7673)
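For the curious, the protocol itself (RFC 1436) is simple enough that a client fits in a few lines. A hedged sketch: the fetch helper needs network access and isn't exercised here, and the menu line in the test is a made-up example of the tab-separated format:

```python
# Gopher (RFC 1436) in a nutshell: the client sends one selector line,
# the server streams the item and closes the connection.
import socket

def gopher_fetch(host, selector="", port=70):
    # Requires network access; shown for shape only.
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(selector.encode() + b"\r\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

def parse_menu_line(line):
    # Menu lines: one item-type character, then tab-separated
    # display string, selector, host, and port.
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return {"type": item_type, "display": display,
            "selector": selector, "host": host, "port": int(port)}
```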

------
archivist1
The case for "forking the web"?

What if we just released a new browser that flat out refused to load any
resource outside this definition?

Would sort of be like the parallel worlds of gopher and the web for a while. I
think it would be interesting to have a "web fork" that was just for documents
not apps.

~~~
WClayFerguson
IPFS and blockchains are the technological solution here.

Many applications are already being developed to create this new Web3.0
document-centric world (I'm creating one of them!) which will provide a "way"
to have the original Tim Berners-Lee web once again be possible.

...and not only that but it'll also be decentralized and 'uncensorable'!

~~~
progval
IPFS is not uncensorable. The protocol is based on a DHT that allows anyone to
find what IP addresses are hosting a particular content, so it's possible to
take them down.

It only appears uncensorable because it's not (yet?) popular enough for states
to care about it.

And for blockchains to be uncensorable, you need the content to be on-chain,
which doesn't scale. (because, if the content is off-chain, then you need an
external solution to host the content, so the blockchain didn't solve the
problem)

~~~
WClayFerguson
You're claiming that if a piece of information has an identity that makes it
reachable, then that makes it censorable too, but you left out what would be
required to make that censorship happen. The gov't would have to require the
IPFS codebase to support a 'blacklist table' of hashes, and then make it
illegal for a server to run a codebase without using the 'updated' latest
table. Then the gov't would have to come up with a mechanism to maintain,
update, and disseminate this blacklist.

Meanwhile, if I want to make a blacklisted doc 'available again' all I'd have
to do is add one bit or byte to the end of it, and the hash would change
completely.

Right now one single person at one single tech company (like a Twitter
employee) can literally ban someone for life, for perfectly legit political
speech.

IPFS is the best-in-class effort to solve all that, and I'm actually a guy
building a node that will go on the IPFS network.

~~~
progval
> You're claiming that if a piece of information has an identity that makes it
> reachable, then that makes it censorable too, but you left out what would be
> required to make that censorship happen. The gov't would have to require the
> IPFS codebase to support a 'blacklist table' of hashes

No, it only has to take down servers hosting its content.

> all I'd have to do is add one bit or byte to the end of it, and the hash
> would change completely.

This is unrelated to my point, but a nitpick: IPFS uses a rolling/chunking
hash, so one could blacklist chunks of a file; so it's not as easy as adding a
byte at the end.

~~~
WClayFerguson
1) Regarding the gov't taking down servers publishing hashes it doesn't like,
yes that's exactly what I said they'd have to do, if you go back and re-read
my prior post. Glad you agree. But like BitTorrent or Bitcoin, they'd have to
basically shut off the whole internet to disable it. Not gonna happen.

2) Regarding chunking in IPFS, the fact that files are chunked actually
increases the burden on any authority attempting to implement a blacklist of
hashes, and so that's in agreement with my points.

Also if someone truly wants to fight censorship you'd just symmetric encrypt
the data using a publicly available key so that it DOES change all hashes of
ALL _chunks_ every time you encrypt with a different key.

And encrypting the same data under many different keys (creating different
copies of the whole file) is also my proposed solution to the problem of
"Proof of Replication" in the IPFS system.

------
Thorentis
> This page is a document. Thank you for reading it.

 _Opens Dev tools, views <head> tags, and "Source" tab looking for javascript_

Nothing there. Well done, author. All too often I read articles like this on
Medium and laugh at the authors perpetuating the problem they want to solve. I
love clean, well styled, beautiful looking static blogs. I'll have to start
one someday.

------
paysonderulo
I appreciate the sentiment, but the prescription is extreme. The technologies
mentioned beyond HTML, namely CSS and JavaScript (yes, even JavaScript), can
be used to enhance the presentation of the document, and more importantly can
do so without altering the document itself (i.e. the HTML). We need to be
encouraging better design and implementation practices, namely graceful
degradation, and we can do so without sacrificing features that improve
readability and accessibility, such as syntax highlighting or dark color
scheme variants.

~~~
buzzert
Can you give an example where JavaScript would “enhance the presentation of
the document”?

You mean like scroll-jacking? Some script that prevents me from copying and
pasting? Something that loads images later for some reason and shifts
everything around?

I don’t mean to be facetious, I am genuinely curious what you mean. In my
experience JS has been nothing but a detriment (for reading documents, that
is), and I have it disabled on all of my computers.

~~~
cetra3
This is a good example where I think JS adds substantially but is still
document-y:
[https://www.redblobgames.com/grids/hexagons/](https://www.redblobgames.com/grids/hexagons/)

~~~
graphpapa
This is amazing

------
bloaf
>Not even to enrich a document, such as syntax-highlight the code snippets.
This one may seem too stringent, but I think it’s better to err on the safe
side, and it’s very easy to enforce.

If the enrichment rules are known ahead of time, there is no reason you can't
just "hard code" the colors or formats into the document.

~~~
fc81
I think that's what the author was going for. Syntax highlighting and other
improvements would be "baked" into the document by the server and the final
document would be served to the client. All without the use of client side
scripting.
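As a sketch of what "baking" could look like on the server: a toy highlighter (not any particular generator's actual code) that hard-codes the colors into static HTML before it is served, so the client needs no scripts. The keyword list and color are arbitrary choices for illustration:

```python
# Server-side "baked in" syntax highlighting: the colors are written
# into the HTML at build time. A toy; a real generator uses a lexer.
import html
import re

KEYWORDS = r"\b(def|return|import|for|if|else)\b"

def bake_highlight(code):
    escaped = html.escape(code)  # escape first, then wrap keywords
    colored = re.sub(KEYWORDS,
                     r'<span style="color:#0033b3">\1</span>', escaped)
    return f"<pre>{colored}</pre>"
```

A static site generator runs something like this once at publish time, and the visitor downloads plain markup.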

~~~
rudedogg
You can do this now with some static site generators.

------
SmooL
Maybe I'm missing something, but I don't understand the author's thesis here.
He seems to have a problem with navigating somewhere and, instead of
receiving an expected block of pure text, getting it along with a whole bunch
of tracking/impressions/paywalls.

Sure, I get it, that's a problem, but it's not a _document_ problem. The web
of applications let you _actually do stuff_. Yeah sure there's a ton of crap,
but cherry picking only the news sites as a reason to remove PUT requests
seems like the total wrong approach.

------
b0rsuk
I sympathize with the author, but who really likes a 100% web of documents?
Me and a couple of other nerds. Is that enough to reach critical mass? The
vast majority of internet users loves the _engaging_, interactive, animated
web and its notifications. It has armies of psychologists working to make it
addictive, then feed users ads and sell their data. We would be quite alone
in a web of documents. A voluntary exile. I would personally like it, but who
would finance it?

~~~
pmlnr
> A voluntary exile. I would personally like it, but who would finance it?

Us. All of us. All hail mesh networks.

------
mxuribe
Such a good post; I really agree with so many of the author's points! The
following tidbit in particular is so powerfully true:

> A document is safe. A book is safe: it will not explode in your hands, it
> will not magically alter its contents tomorrow, and if it happens to be
> illegal to possess, it will not call the authorities to denounce you. You
> can implicitly trust a document by virtue of it being one. An application,
> not so much...

------
defanor
I keep seeing (and mostly sharing) sentiments like this, but a couple of
things to note:

- Sets of proposed features vary (happens to both lightweight WWW and
modernized Gopher proposals). I guess a more viable approach may be a less
strict one than composing such a set. For instance, a search engine for such
websites was mentioned in another comment here, and I keep thinking about a
web directory for that, but perhaps either of those would be more useful if it
was able to detect (and filter by) features required by a website, rather than
a single flag saying whether it's usable with a particular set of features.

- Occasionally it's suggested (in this case both in the article and in the
comments) that browsers only supporting documents would help somehow. Yet
there's a bunch of lightweight web browsers (even I wrote a couple), as well
as combinations such as FF+noscript+uBO+Stylus, which may be nicer to use, but
don't seem to affect the overall situation beyond "please enable JS" and "best
viewed in IE6" messages.

------
theknarf
I think JavaScript would be fine as well, as long as you disallow fetch /
XMLHttpRequest and third-party resources.
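There is an existing, standard mechanism that approximates this without changing the language: the Content-Security-Policy header. `connect-src 'none'` blocks fetch/XMLHttpRequest (and WebSockets) from page scripts, while `'self'` directives keep every other resource first-party. A hypothetical helper showing the header a server might send:

```python
# Response header a "documents plus inert scripts" server might send.
def document_policy_headers():
    csp = "; ".join([
        "default-src 'self'",   # no third-party resources at all
        "script-src 'self'",    # scripts allowed, but first-party only
        "connect-src 'none'",   # no fetch, XHR, WebSocket, EventSource
    ])
    return {"Content-Security-Policy": csp}
```

Scripts still run under this policy; they just cannot phone home or pull in third-party code.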

------
WClayFerguson
I agreed with and enjoyed this 'document'!

I think the new Web3.0, and blockchains, and IPFS are going to lead to a new
web that is more like it was in the old days where people were sharing actual
documents.

I'm the creator of 'quantizr.com', which fully adheres to your sentiments,
but adds the belief that documents themselves should be granular (i.e.
quantized).

The IPFS 'interpretation' of my quantizr concept would be something like
saying a 'document' can have a version stored on IPFS that is a specific
immutable copy.

------
thulecitizen
> We don’t have a Web of Documents anymore. These days, the WWW is mostly a
> Web of Applications.

Yes! It's 'SaaSS', as defined by Stallman, not 'SaaS'.

"On the Internet, proprietary software isn't the only way to lose your
freedom. Service as a Software Substitute, or SaaSS, is another way to give
someone else power over your computing.

The basic point is, you can have control over a program someone else wrote (if
it's free), but you can never have control over a service someone else runs,
so never use a service where in principle a program would do.

SaaSS means using a service implemented by someone else as a substitute for
running your copy of a program. The term is ours; articles and ads won't use
it, and they won't tell you whether a service is SaaSS. Instead they will
probably use the vague and distracting term “cloud”, which lumps SaaSS
together with various other practices, some abusive and some ok. With the
explanation and examples in this page, you can tell whether a service is
SaaSS."

Source: [https://www.gnu.org/philosophy/who-does-that-server-really-s...](https://www.gnu.org/philosophy/who-does-that-server-really-serve.en.html)

------
roca
The author and their sympathizers can easily refrain from using JS and other
technologies they don't like in their own sites, and preferentially give
attention to such sites. No technology changes are required for that, except
that as another commenter noted, it would be helpful to have a search engine
that prioritizes such documents.

The problem the author has is not a technology problem, it is that sites
restricted to just such documents are not especially popular with most users.

People who wish that interactivity etc had never been added to HTML are
misguided. (I'm not sure if the author falls into that category.) HTML
competed with other technologies (e.g. Flash, ActiveX, native apps); when
authors felt overconstrained by HTML they used those instead. If interactive
apps were all Flash and ActiveX and native today, and Web apps didn't exist,
would that be a better world? Certainly the Linux desktop would be far less
usable.

------
z3t4
I think the way forward is an even more powerful and interactive "document".
Imagine for example a car manual that also lets you run diagnostics or tweak
the HUD. And also third-party "web pages" that can do the same thing if you
allow them to. Imagine searching for a baking recipe, then with the click of
a button, the "web page" connects to your baking machine and sets the
ingredients. Imagine a document about some math topic, with interactive
models that help you understand the concepts (those actually exist today).

~~~
TeMPOraL
We hit problems of ownership, control and scope here. I want a tool I can use
to find a baking recipe and configure my baking machine automatically. I want
that tool to be entirely independent of the sites it pulls recipes from, and
of the machines it configures. From this point of view, I want the recipe
sites to be dumb documents, preferably in machine-readable format. Similarly,
I want my baking machine to be a dumb telemetry/configuration endpoint.

In this baking example, there are three pieces of the puzzle. Recipe source,
configuration endpoint, and the brain in the middle that uses one to work on
the other. I want that brain part to be independent - but you can easily see
how the recipe site and the machine vendors would each like to own the brain
part exclusively. That's why we can't have nice things on the Internet -
nobody can accept their role as a _service provider_ , everyone wants to be a
platform, the landowner of their digital sharecrop.

~~~
z3t4
That would be even better. A static document with a URL. Then you "swipe" the
URL from your phone/pad to the baking machine, which fetches the URL and
reads the marked-up recipe. You then confirm the ingredients on the baking
machine, and it starts baking.
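One plausible reading of a "marked up recipe" is schema.org's Recipe vocabulary as JSON-LD. The recipe below is a made-up example, but `recipeIngredient` is the real property name a baking machine could look for with nothing more than a JSON parser:

```python
# A schema.org Recipe as JSON-LD: a static document a machine can read.
# The loaf itself is invented for illustration.
import json

RECIPE_JSONLD = """{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Plain white loaf",
  "recipeIngredient": ["500 g flour", "300 ml water",
                       "7 g yeast", "10 g salt"]
}"""

def ingredients(doc):
    data = json.loads(doc)
    if data.get("@type") != "Recipe":
        raise ValueError("not a Recipe document")
    return data["recipeIngredient"]
```

The document stays dumb and static; all the smarts live in whatever fetches it.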

I however want both "apps" and semantic "documents". Launching an app by
typing a URL is powerful. And on Android you can now add a web app to the
home screen.

We might not want the recipe to have the same capabilities as the "app". Maybe
the apps should be closer to the OS, and "documents" opened in a reader app
(browser).

One problem with the "semantic web" is that the added markup feels alien and
too verbose. It was designed to be used with web page builder apps. But I and
many others still prefer to write the HTML manually, with the help of a
powerful editor (with autocomplete and macros). "Structured data" gets too
hard to read in source form. One idea is to replace the many span and div
elements with actual semantic elements.

In another post I ranted that nothing has really changed for "documents" on
the web in the last 20 years. We are practically standing still, without any
innovation, probably due to browsers being overly complex and hard to
develop. I think the time is ripe for a "document only" web browser, so we
can start innovating again! For example, the enterprise market is still using
Word documents; some use alternatives like Google Docs. But web "documents"
would be far more powerful. Instead of e-mailing files, you could just share
a URL.

I'm currently working on an editor for making web documents (but also for
making web apps). I've put a lot of thought into how enterprises can use my
editor to make web documents instead of Word files and PDFs. Almost everyone
is using the web today, but very few people actually make web "documents".

------
oefrha
Obvious question that’s been asked a million times: who the hell is gonna pay
for this web of “documents”? Producing and distributing “documents” costs
money; this web of “documents” is all well and good for hobbyist stuff (and we
do still have a subweb of documents of personal websites) but breaks down when
people rely on it to put bread on the table.

~~~
qznc
The article does not ask for ad-free documents. Admittedly, personalized ads
do not work well without JavaScript.

~~~
oefrha
The Web of Documents would eliminate subscription models entirely (unless we
resort to HTTP Basic Auth), so that leaves non-personalized advertising as you
said. Unfortunately that seems to only work when your content has a specific
audience that can be profitably targeted without personalization, plus being
notable enough so that advertisers suitable for your audience would notice
your site and partner with you. Doesn't work for most ad-supported websites if
you ask me.

~~~
Tomte
> Unfortunately that seems to only work when your content has a specific
> audience

Advertisers find general magazines and TV still interesting enough, without
any personalization.

~~~
oefrha
There are way more random internet publications than TV stations/channels and
sustainably run physical magazines.

------
mohamedattahri
Not that I dislike the idea of a web of documents, but I find it ironic that
the page (application?) it’s being debated on would not be possible without
some of the capabilities the author is suggesting to restrict.

------
zojirushibottle
really beautiful, simple blog layout! the background color should be a tad bit
less saturated though, imo. also, that blurring effect is a little too much
for my poor eyes. but i love the simplicity of it!

~~~
nathell
Author here, thank you!

I’m not much of a designer so I just reused a color scheme I liked that I
found on ColourLovers.

------
oconnore
I love the idea of a web of documents, but I would love it even more if it was
a web of documents you could edit collaboratively.

First class support for Google Docs/Sheets/Slides style creation, with a clean
default read mode would be perfect. An open web born out of Libreoffice rather
than out of Blink/WebKit/Gecko. Secure, federated document access with easy
long term caching/archival of static versions.

------
nexuist
While I like the idea I feel like it's misplaced. Hand writing documents was
acceptable until we got typewriters. Black and white text was acceptable until
we built color displays. Paper documents were acceptable until we built word
processors. Digital documents were acceptable...until we built the Web of
Applications.

I do not see documents and applications as two separate things. I think an
application is the inevitable next step towards our goal of disseminating
information as efficiently as possible. That's why it's called hypertext -
it's meant to be a better version of plain text documents.

The author's biggest complaint is "applications pretending to be documents,"
such as journalism sites that implement monthly article caps. Sure, but in the
old days you had to pay for the newspaper too, didn't you? Paywalls are not a
result of the "Web of Applications" as much as they are a corporate attempt to
adapt to a new medium. Nobody will publish documents for free unless it is a
hobby or passion; in order for there to be worthwhile documents churned out
every week or so, someone has to get paid. The author seems to presume that
news orgs would simply publish their documents for free if the Web only
enabled GET requests. I have extreme doubts about this.

~~~
WClayFerguson
I think Daniel's ideas are very similar to the sentiments of pretty much
everyone fed up with Social Media and their censorship and the fact that a few
monopoly tech companies control the flow of information nowadays
(FANG+Twitter), rather than the original free "web of documents" Tim
Berners-Lee created.

Web3.0, IPFS, blockchain, etc. are what will end the 'non-document-capable'
web that we all seem to now be experiencing.

For example if you go to someone's blog, it's on some site, and if that site
goes away, that document is gone forever. That's the problem. We need an
internet that is more like a massive public blockchain than a small number of
monopoly sites trying to feed you enough stuff to collect information to sell
about you.

------
StyloW
Don't RSS feeds already provide an experience similar to the one outlined in
the article?

~~~
b0rsuk
RSS is a good illustration of the problem. In theory, it can be used to do
that. In practice, it isn't because >Read the full article<. RSS used to be a
solution for people who liked simple content. But there's no guarantee that
you will get the whole article.

------
mrfinks
Agreed with the poster on many points. I want a web where I can return to a
website at any given time and expect to see the same content that was present
before.

I think a version controlled web would be sweet, like git internet.

Publishing a page ultimately creates a commit hash. Removal of published
commits is not a thing. Only forward changes. Mess something up? New commit.

When referencing a page from another, a commit hash can be specified if
desired, however, linking to a page/accessing a page without a commit hash
resolves the most current (think master branch) page/commit hash.

When accessing a page in browser, you should have the ability to rollback to a
previous version of a page if so desired. Since all dynamic content and
linked-out content is based on commit hashes (resolved at time of publish),
you now have a way to receive all content you did before.
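The resolution rule described here (a hash pins an exact version; no hash resolves to the latest commit) can be sketched as a toy content-addressed store. Everything below is hypothetical illustration, not any real protocol:

```python
# Toy content-addressed page store: publishing yields a hash, old
# versions stay retrievable, a bare name resolves to the latest.
import hashlib

class PageStore:
    def __init__(self):
        self.blobs = {}    # content hash -> content
        self.latest = {}   # page name -> most recent hash

    def publish(self, name, content):
        digest = hashlib.sha256(content.encode()).hexdigest()
        self.blobs[digest] = content   # removal is not a thing
        self.latest[name] = digest     # only the pointer moves forward
        return digest

    def get(self, name, digest=None):
        # With a hash you get that exact version; without, the newest.
        return self.blobs[digest or self.latest[name]]
```

A link carrying a hash behaves like pinning a git commit; a link without one behaves like following the master branch.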

I like the idea of static HTML/CSS-only pages and ditching JS, mostly due to
the crap the web has become filled with, like the author mentions: paywalls
and tracking.

At the same time, I think it all can be accomplished without dynamic
scripting as long as pages are hosted privately (as things like server logs
can be inspected) and websites are able to set/retrieve session storage on
the client side. Not sure how to solve for that.

I always land on the whole thing being decentralized, with everyone
participating holding many pieces, like BitTorrent, where there is hash
authenticity. The hash-to-client lookup could be done by authority-type
servers (like cert authorities/trusted Tor nodes) acting as trackers. The
problem with this is nobody wants to have crap stored on their computer to
use the internet.

Private, trusted nodes seem to me to be the better option, but they would need
to be locked down like a root certificate authority. No logs either. Just
multi-redundant, multi-region, eventually consistent nodes adhering to some
protocol used for passing around (in essence) a huge scalable, partitioned git
repo.

Also this might be complete crazy talk, but that's okay. It's a much better
dream than a world where the internet has become full of paywalls and
tracking.

------
julienreszka
The author would like to limit the web to markdown?

