
So you want to decentralize your website - pfraze
https://macwright.org/2017/07/20/decentralize-your-website.html
======
dispo001
Oddly enough, some complexity also helps with adoption.

One time, long long ago (I'm guessing 2007), I put a many-page HTML website in a torrent:

site name
- index.html
- other-files
  -- c.css
  -- j.js
  -- banana.html
  -- kiwi.html

Then I'd publish a new post by making a new torrent:

site name
- index.html
- other-files
  -- c.css
  -- j.js
  -- banana.html
  -- kiwi.html
  -- pear.html

The fun hax of it was that the new torrent had mostly the same files and mostly the same directory structure, with the same top-level folder name. None of the torrent clients I tested complained unless you tried to overwrite something with something else. JavaScript populated the index. The hypothetical user would simply add an RSS feed to their client for site updates.
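The index-populating part could be sketched like this: a script shipped with the site renders a link for every post listed in a manifest, so adding pear.html only means adding one manifest entry. This is a minimal sketch, not the original implementation; the manifest shape and function name are my own.

```javascript
// Hypothetical manifest of posts shipped alongside index.html.
// Each new torrent adds an entry here plus the new HTML file.
const posts = [
  { file: "other-files/banana.html", title: "banana" },
  { file: "other-files/kiwi.html", title: "kiwi" },
  { file: "other-files/pear.html", title: "pear" },
];

// Build one relative link per post; relative paths keep the site
// working whether it's opened from disk or served from a domain.
function renderIndex(posts) {
  return posts
    .map(p => `<li><a href="${p.file}">${p.title}</a></li>`)
    .join("\n");
}
```

In the page itself, index.html would just drop `renderIndex(posts)` into a `<ul>` on load.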

Some technically unsophisticated users thought it was weird to browse HTML from their own disk. A maybe-solution (that I never implemented) was to have a domain redirect to the local file path. I can think of a bunch of ways to do that.

Anyway, the point I wanted to make: it involved too little work to brag about, and those who knew were not impressed by the obvious. I suppose any torrent client could implement it very easily, but with no bragging and no awe it didn't amount to anything.

The only fun thing the project accomplished was de-paginating torrent websites so that they had a few thousand entries per page. (Replace the link to each torrent page with a magnet URI.) Then I made a search feature that would XHR-load each page looking for the keywords. The dumbest search engine possible, but it worked surprisingly well. The local file system is not fazed by multiple multi-MB file requests, and JavaScript's indexOf worked fine with very large strings at the time. I just chopped out a substring large enough to include the result and split that into an array on "<tr>".
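The search step could be sketched like this, with a hardcoded page string standing in for the XHR-loaded page. The function name and window size are illustrative assumptions, not the original code.

```javascript
// Dumbest-possible search: find the keyword in a big de-paginated page,
// chop out a substring around the hit, and split it on "<tr>" to get
// candidate table rows.
function dumbSearch(pageHtml, keyword, windowSize = 2000) {
  const hit = pageHtml.indexOf(keyword);
  if (hit === -1) return [];
  // Chop out a substring large enough to include the whole result row.
  const start = Math.max(0, hit - windowSize);
  const chunk = pageHtml.slice(start, hit + windowSize);
  // Split into rows and keep only those that contain the keyword.
  return chunk.split("<tr>").filter(row => row.includes(keyword));
}
```

In the real thing, each de-paginated page would be XHR-loaded in turn and scanned the same way.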

