Every time there's a change to the DB you need to download a whole new copy, afaik.
Ideally you'd use a log-based store. To do this you need to fundamentally change your idea of how a website works.
With a distributed log store, chances are you won't have the same data as another person.
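To illustrate the log idea (a minimal sketch in Python, not any real project's API): peers holding an append-only log can sync by exchanging only the entries past the point they already share, instead of re-downloading a whole snapshot.

```python
import hashlib

# Minimal append-only log sketch (hypothetical; not Dat/IPFS/ZeroNet code).
# Each entry's hash chains to its predecessor, so history is verifiable and
# a peer can request just the entries it is missing.

class Log:
    def __init__(self):
        self.entries = []  # list of (chained_hash, data)

    def append(self, data: bytes):
        prev = self.entries[-1][0] if self.entries else b""
        self.entries.append((hashlib.sha256(prev + data).digest(), data))

    def delta_since(self, seq: int):
        """Everything a peer that has `seq` entries still needs."""
        return self.entries[seq:]

# A peer with the first 3 entries fetches only the 2 newer ones.
mine, theirs = Log(), Log()
for i in range(5):
    mine.append(f"change {i}".encode())
theirs.entries = mine.entries[:3]
theirs.entries += mine.delta_since(len(theirs.entries))
assert theirs.entries == mine.entries
```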
> The two systems also have a number of differences. Dat keeps a secure version log of changes to a dataset over time which allows Dat to act as a version control tool. The type of Merkle tree used by Dat lets peers compare which pieces of a specific version of a dataset they each have and efficiently exchange the deltas to complete a full sync. It is not possible to synchronize or version a dataset in this way in IPFS without implementing such functionality yourself, as IPFS provides a CDN and/or filesystem interface but not a synchronization mechanism.
The quote above is from "How is Dat different than IPFS?": https://docs.datproject.org/faq
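The Merkle-tree comparison the FAQ mentions can be sketched roughly like this (hypothetical Python, not Dat's actual hypercore structures; assumes a power-of-two piece count for brevity): two peers walk the tree from the root and only descend into subtrees whose hashes differ, locating missing pieces without transferring the pieces themselves.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build(pieces):
    """Merkle tree as a list of levels, leaf hashes first."""
    levels = [[h(p) for p in pieces]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def diff(a, b, level=None, idx=0):
    """Indices of pieces on which the two trees disagree."""
    if level is None:
        level = len(a) - 1          # start at the root
    if a[level][idx] == b[level][idx]:
        return []                   # identical subtree: skip it entirely
    if level == 0:
        return [idx]                # a leaf that actually differs
    return diff(a, b, level - 1, 2 * idx) + diff(a, b, level - 1, 2 * idx + 1)

mine = build([b"p0", b"p1", b"p2", b"p3"])
theirs = build([b"p0", b"??", b"p2", b"p3"])
print(diff(mine, theirs))  # [1] -> only piece 1 needs to be exchanged
```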
The Dat web can be browsed on Desktop with Beaker Browser https://beakerbrowser.com or on Android with Bunsen Browser https://github.com/bunsenbrowser/bunsen.
* Disclaimer: I'm helping out with the development of Bunsen. You should too!
By putting an SQLite database file (.db) inside a torrent, we can query its content -- by prioritizing pieces based on the SQL query -- and quickly peek at the content of the database without downloading it entirely.
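A rough sketch of how that can work (hypothetical torrent API, not a real library): SQLite reads the .db file in fixed-size pages, each read is a byte range, and each byte range maps to a small set of torrent pieces, so the client can raise the priority of exactly those pieces and satisfy the query long before the full file arrives.

```python
PIECE_SIZE = 256 * 1024  # assumed torrent piece length

def pieces_for_read(offset: int, length: int) -> range:
    """Torrent pieces covering one byte range of the .db file."""
    first = offset // PIECE_SIZE
    last = (offset + length - 1) // PIECE_SIZE
    return range(first, last + 1)

class TorrentBackedFile:
    """File-like reader for SQLite's page requests; `torrent` is a
    hypothetical handle exposing piece priorities and reads."""

    def __init__(self, torrent):
        self.torrent = torrent

    def read(self, offset: int, length: int) -> bytes:
        needed = pieces_for_read(offset, length)
        for idx in needed:
            self.torrent.set_piece_priority(idx, 7)  # 7 = download first
        self.torrent.wait_for_pieces(needed)         # block until fetched
        return self.torrent.read_bytes(offset, length)
```

Hooked up through something like SQLite's VFS layer, a query over the B-tree then only pulls the index and table pages it actually touches.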
"You start serving sites as soon as you visit them"
I'm not comfortable with that at all. Distributing content has very different legal implications than just viewing it, especially when it comes to pirated content.
But that's only after I have already started serving that content to others, is it not?
One way of dealing with it is to make the images "Optional content". This means they are not sent to requestors of the site immediately - they are requested only when specifically wanted. Users can choose to seed optional files - in which case they'll mirror them all - or they only seed the ones they've looked at. Optional files don't count towards site size limits.
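For reference, ZeroNet expresses this in the site's content.json: an "optional" regex decides which files are excluded from the default sync and tracked separately. A rough hand-written example (address, hashes, and sizes are placeholders):

```json
{
  "address": "1ExampleSiteAddressPlaceholder",
  "title": "Example site",
  "optional": ".*\\.(png|jpg|mp4)",
  "files": {
    "index.html": {"sha512": "<hash>", "size": 4821}
  },
  "files_optional": {
    "img/photo.jpg": {"sha512": "<hash>", "size": 92144}
  }
}
```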
All the stated benefits are non-issues for me:
I've never been censored.
* No hosting costs
I serve hundreds of thousands of users per month for something like $20.
* Always accessible
As with hosting costs, my downtime is negligible.
Have you never self-censored?
You might never have been censored, but then you likely never said much that would upset powerful people. Yet you live in a society that is rich and (mostly) free because others have.
File transfer is done over a service that ZeroNet runs on its own port (15441 by default) with its own msgpack-based protocol.
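Roughly, the protocol is msgpack-encoded request/response dicts over TCP. A sketch of a file request (field names follow the documented protocol as I remember it, so treat them as approximate):

```python
import socket
import msgpack

def get_file(peer_ip: str, site: str, inner_path: str, port: int = 15441):
    """Fetch the first chunk of a site file from a ZeroNet peer (sketch)."""
    request = {
        "cmd": "getFile",
        "req_id": 1,
        "params": {"site": site, "inner_path": inner_path, "location": 0},
    }
    with socket.create_connection((peer_ip, port)) as sock:
        sock.sendall(msgpack.packb(request))
        unpacker = msgpack.Unpacker(raw=False)
        while True:
            chunk = sock.recv(65536)
            if not chunk:
                return None
            unpacker.feed(chunk)
            for response in unpacker:
                return response.get("body")  # file bytes; larger files
                                             # continue at a new "location"
```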
IPFS shares the same problem, and in both IPFS and ZeroNet it's difficult to know when it's safe to shut down a node.
Freenet solves this by pushing inserted data directly out to peers immediately. The site isn't stored on the inserting node. Once the insert is done it's safe to close the node, and the content is still available.
I guess there are a lot of opaquenets already, and this one isn't the most feature-rich, but you can have it running in under five minutes.
I was actually using it for some torrents, but ZeroTV is currently down :(
ZeroNet operates at the level of sites. Sites contain multiple files. Seeding of data happens at the site level.
You could implement ZeroNet on top of Freenet but implementing Freenet on top of ZeroNet wouldn't make much sense.
When inserting new data, Freenet pushes data out to nodes; ZeroNet has requestors pull it. With ZeroNet, if you want the data to remain available, you can't disconnect your node until other nodes have a complete copy, and it is difficult to tell when that is. With Freenet the data isn't stored on your own node; it is pushed out to other nodes during the insert. When the insert completes you know you can safely turn off your node, and the data remains available.
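A toy model of the difference (names hypothetical; this only captures the shutdown-safety property, not either network's real behavior):

```python
class PullNetwork:
    """ZeroNet-style: visitors pull from the origin and start seeding."""
    def __init__(self):
        self.copies = {"origin"}     # publisher is the only seed at first

    def peer_visits(self, peer: str):
        self.copies.add(peer)        # visitor fetches, becomes a seed

    def origin_can_shut_down(self) -> bool:
        # Safe only once someone else holds a full copy -- and the origin
        # has no reliable way to observe when that has happened.
        return len(self.copies - {"origin"}) > 0

class PushNetwork:
    """Freenet-style: insertion routes the data onto other nodes up front."""
    def __init__(self):
        self.copies: set[str] = set()

    def insert(self, remote_nodes: set[str]):
        self.copies |= remote_nodes  # stored remotely, not on the origin

    def origin_can_shut_down(self) -> bool:
        return bool(self.copies)     # true the moment the insert finishes
```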
The logic goes: if some information is more valuable shared privately, then the infrastructure should make it harder to access. Conversely, if some information is more valuable shared publicly, the infrastructure should make it easier to access.
To do this effectively and efficiently, information must become a market commodity, with the value of accessing it governed by the consensus of its viewers. This is made difficult by how easily individuals can copy information, by the current state of sharing over social media, and by the problems of groupthink.
I would note that "hate groups" are groups that focus on removing choice for others by introducing thoughts and ideas which spread virally through a population. Double binds are one form of passive coercion that has made a reappearance in the (cheap) social media offerings now available.