> Named data networking (also content-centric networking, content-based networking, data-oriented networking or information-centric networking) is an alternative approach to the architecture of computer networks. Its founding principle is that a communication network should allow a user to focus on the data he or she needs, rather than having to reference a specific, physical location where that data is to be retrieved from. This stems from the fact that the vast majority of current Internet usage (a "high 90% level of traffic") consists of data being disseminated from a source to a number of users.
(Another idea with a Ted Nelson pedigree, btw.) Van Jacobson is currently working on another NSF-funded project in this area: http://named-data.net/
Ted Nelson talks about this in this amazing and inspiring Google Talk video from 2006. He talks about the rise of packet-based networking, and how the future will be content-centric.
Contrast points: IPFS is an implementation at the leaves of the network (rather than calling for routers to change initially), can be mounted as a filesystem, maps to the web (string paths as URLs), and provides a Merkle DAG data model.
Obviously you can't tell that other guy what to do but if IPFS doesn't inform that other effort then I think they are going to be wasting time.
Anyway, I hope you will look very seriously at the various approaches to NDN (which goes by several different names) as a loose category, consider attempting to recruit from, merge with, or interface with other efforts (there are many similar systems), and possibly expand the scope a little if necessary.
We really do need a new internet.
Also what do you think of operational transformations, do they relate in any way to IPFS or future IPFS capabilities?
On OTs: yep! You can implement OTs on a Merkle DAG trivially, so you can build files from OTs. So! Apps can use OTs as first-class data structures and store them directly onto IPFS.
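A minimal sketch of that idea, assuming a hypothetical `Op` type and plain SHA-256 standing in for IPFS's actual multihash/DAG-node format: each operation is an immutable node that links to its parent by hash, so the op history is itself a Merkle DAG and the file is just the fold of that chain.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// Op is a hypothetical insert/delete operation, named for illustration only.
type Op struct {
	Parent string `json:"parent"` // hash of the previous op node ("" for the root)
	Kind   string `json:"kind"`   // "insert" or "delete"
	Pos    int    `json:"pos"`
	Text   string `json:"text,omitempty"`
	Len    int    `json:"len,omitempty"`
}

// hash gives the content address of an op; in IPFS this would be the multihash
// of the serialized DAG node, plain SHA-256 is used here for brevity.
func hash(op Op) string {
	b, _ := json.Marshal(op)
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

// apply replays one op against the current document text.
func apply(doc string, op Op) string {
	switch op.Kind {
	case "insert":
		return doc[:op.Pos] + op.Text + doc[op.Pos:]
	case "delete":
		return doc[:op.Pos] + doc[op.Pos+op.Len:]
	}
	return doc
}

func main() {
	// Each op links to its parent by hash, forming a Merkle DAG of edits;
	// the file is the fold of the op chain.
	root := Op{Kind: "insert", Pos: 0, Text: "hello"}
	next := Op{Parent: hash(root), Kind: "insert", Pos: 5, Text: " world"}

	doc := ""
	for _, op := range []Op{root, next} {
		doc = apply(doc, op)
	}
	fmt.Println(doc)            // "hello world"
	fmt.Println(hash(next)[:8]) // content address of the latest op
}
```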
There is a project to port GNUnet to the browser going on here: https://github.com/amatus/gnunet-web
With numbers like that, and with cloud storage prices sitting very far away from marginal costs, protocols and incentive structures like IPFS and filecoin are going to be of great value to enterprises, governments, and consumers. I would not be surprised if, by 2020, a majority portion of the data online was stored in such a system; with exponential growth, a new majority share is created every ln(2) / rate years. In the case of the "data universe", which grows about 40% annually, doubling time is about 2 years.
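For concreteness, here is the arithmetic behind that last claim (just restating the quoted ~40% annual growth figure, not a new number):

$$ t_{\text{double}} = \frac{\ln 2}{\ln(1 + 0.40)} \approx \frac{0.693}{0.336} \approx 2.1 \text{ years}, $$

or roughly $\ln 2 / 0.40 \approx 1.7$ years if you treat the growth rate as continuous.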
* Content keys are based on hashes of content and don't change (unless a new encryption key is used to re-insert a file).
* Keys that are not requested frequently will fall off the network automatically.
* Keys can be signed so only the holder of the private key can update said key.
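A rough sketch of the two key types described in that list, with plain SHA-256 and ed25519 standing in for the real key formats (which differ in detail):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	// 1. A content key: just the hash of the (possibly encrypted) bytes.
	//    Re-encrypting with a different key yields different bytes, hence a new key.
	content := []byte("some file contents")
	contentKey := sha256.Sum256(content)
	fmt.Println("content key:", hex.EncodeToString(contentKey[:]))

	// 2. A signed (mutable) key: the public key identifies the slot, and only
	//    the holder of the private key can publish a valid update for it.
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	update := append([]byte("points-to:"), contentKey[:]...)
	sig := ed25519.Sign(priv, update)

	// Any node can verify the update against the public key before accepting it.
	fmt.Println("update accepted:", ed25519.Verify(pub, update, sig))
}
```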
The good news, I guess, is that this system might get people more acquainted with content-based networks, so Freenet will become the next logical step.
On the other hand, I just don't see much point in developing a non-anonymous system when we have a good anonymous system already.
There is certainly a use case for a more modern project that would try to implement cleanly the base features of Freenet, with a clear interface.
If you have some spare time, could you solve that power inverter / solar panel challenge that's on the front page currently? Thanks in advance.
People that care about the data's presence will pin objects locally:
- I pin + seed my own files, or files I'm interested in keeping alive.
- I can hire a service (or multiple) to pin + seed my files for me.
- I can use things like http://filecoin.io to incentivize large groups of people to seed files for me. (this is why Filecoin + IPFS are sister protocols)
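For what it's worth, the "pin + seed my own files" case above is already a one-liner against a local IPFS daemon (`ipfs pin add <cid>`). A tiny Go wrapper around that command, with a placeholder CID for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
)

// pin asks the local IPFS daemon to keep a copy of the object so it never
// gets garbage-collected, regardless of how often anyone requests it.
func pin(cid string) error {
	out, err := exec.Command("ipfs", "pin", "add", cid).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// Placeholder CID used only for illustration.
	if err := pin("QmExampleCidGoesHere"); err != nil {
		fmt.Println("pin failed:", err)
	}
}
```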
Of course, that would introduce some privileged nodes, but if anyone can create such a node as part of the protocol and be part of this data-routing network, isn't that still a distributed, egalitarian network?
This opens up the door to censorship too though, so it has to work like the web does now: individuals and groups can select whether to serve things or not depending on their own views. Same for Routing.
Even if, say, someone published a list of known illegally-transmitted copyrighted files, you could still make a strong argument that you don't trust the source of the list and would need to receive and review a specific report of each instance of copyright violation that went through your service, like the current DMCA model. (Again, though, not a lawyer.)
People shouldn't accidentally store illegal things they don't mean to (hence blacklists), but you also don't want to snuff out the freedom of speech of those who understand + are willing to take the risk. The issue is that routing access to it is also considered hosting it (DMCA takedowns for links on the web).
I think that the best thing is to have the default DHT include blacklists that can be updated to handle DMCA requests. Sort of like DNS works today. Definitely something that we'll have to figure out as time goes on.
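A toy sketch of what a node-side blacklist check could look like (the `Blacklist` type and the CIDs are made up; a real implementation would fetch and verify signed lists and hook into block serving and provider advertisement):

```go
package main

import "fmt"

// Blacklist is a hypothetical, locally configured set of content hashes that a
// node refuses to serve or route (e.g. populated from takedown lists it trusts).
type Blacklist map[string]bool

// shouldServe is what a node would consult before answering a block request
// or advertising a provider record for it in the DHT.
func (b Blacklist) shouldServe(cid string) bool {
	return !b[cid]
}

func main() {
	bl := Blacklist{"QmBadExampleHash": true} // made-up entry

	for _, cid := range []string{"QmBadExampleHash", "QmSomeOtherHash"} {
		if bl.shouldServe(cid) {
			fmt.Println("serve/route:", cid)
		} else {
			fmt.Println("refuse (blacklisted):", cid)
		}
	}
}
```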
I didn't read very carefully but I don't see many facilities for permissions control other than having a personal folder.
Did I miss that? Do you have plans for permissions support, like groups or read/write access or ACLs etc.?
Or maybe if you just make a simple type of group so that one account could share access to its personal folder with a set of other people/accounts, that would handle most use cases for permissions.
Other than that, seems like this is a great start on solving everyone's problems.
I think permissions in IPFS should be implemented as encryption + capabilities (see E).
(E.g. grant people access to particular paths by giving them decryption keys. Once they have the blocks they can cache them, but you could do revocation by moving the blocks. This gets away from the dedup benefits of the Merkle DAG, but if you _really want to do revocation_ you sort of can.)
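A sketch of the encryption side of that, under the assumption that a capability is simply (content address, decryption key); AES-GCM here is illustrative, not something IPFS prescribes:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// A "capability", in this sketch, is just "where the ciphertext lives" plus
// "the key that decrypts it". Handing this struct to someone grants read access.
type capability struct {
	contentKey [32]byte // hash of the ciphertext, i.e. the content address
	secret     []byte   // symmetric key needed to read the plaintext
}

// encrypt seals the plaintext under a fresh random key and returns the
// ciphertext (what you'd actually store as blocks) plus the capability for it.
func encrypt(plaintext []byte) ([]byte, capability) {
	var grant capability
	grant.secret = make([]byte, 32)
	rand.Read(grant.secret)

	block, _ := aes.NewCipher(grant.secret)
	gcm, _ := cipher.NewGCM(block)
	nonce := make([]byte, gcm.NonceSize())
	rand.Read(nonce)

	ciphertext := gcm.Seal(nonce, nonce, plaintext, nil)
	grant.contentKey = sha256.Sum256(ciphertext)
	return ciphertext, grant
}

// decrypt recovers the plaintext for anyone holding the capability.
func decrypt(ciphertext []byte, grant capability) ([]byte, error) {
	block, _ := aes.NewCipher(grant.secret)
	gcm, _ := cipher.NewGCM(block)
	nonce, body := ciphertext[:gcm.NonceSize()], ciphertext[gcm.NonceSize():]
	return gcm.Open(nil, nonce, body, nil)
}

func main() {
	ct, grant := encrypt([]byte("members-only document"))
	// Everyone else only ever sees opaque ciphertext blocks; "revocation"
	// means re-encrypting under a fresh key and handing out a new capability.
	pt, _ := decrypt(ct, grant)
	fmt.Println(string(pt))
}
```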
This problem and the proposed solution involve new ideas and changes across the boundaries of several layers of abstraction, so it would be hard to digest in a single sitting. Unless, of course, all this was synthesized in your own head :)
My guess is you would need a more agile version of Douglas Engelbart's "Mother of All Demos": a functional prototype that shows everything working together in synergy to create more than the sum of the parts.
The best case would be if all the components of the puzzle are independently valuable, like Musk's SpaceX model.
Both look pretty cool, I'd like to see some more traction in this area.
Logic in webapps is not covered here; logic is totally dynamic, and that's not what IPFS is for -- HTTP works well, and there are other things in mind for the future. Some attempts to look at are Ethereum and go-circuit. I've got ideas around an Erlang-inspired global VM, but that's a whole can of worms I'm not ready to open yet :)
IPFS says: do your logic however you want, return IPFS links, and fetch data from IPFS directly.
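So an app's API could look like this hypothetical handler: the dynamic logic stays plain HTTP, but the payload it returns is an immutable /ipfs/ path that the client fetches from any node or gateway (the endpoint name and CID below are made up):

```go
package main

import (
	"fmt"
	"net/http"
)

// A hypothetical API endpoint: the dynamic part (auth, queries, whatever)
// stays ordinary HTTP, but the response is just an immutable IPFS path.
// The client then fetches the actual bytes from any IPFS node or gateway.
func latestReport(w http.ResponseWriter, r *http.Request) {
	// Imagine the CID below came from running the app's own logic
	// (e.g. rendering a report and adding it to IPFS).
	cid := "QmExampleCidForIllustrationOnly"
	fmt.Fprintf(w, `{"report": "/ipfs/%s"}`, cid)
}

func main() {
	http.HandleFunc("/api/latest-report", latestReport)
	http.ListenAndServe(":8080", nil)
}
```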
Would you say that there is a way a truly distributed content-centric system could be built, sort of around this, whereby the following would be true:
Assume there is a massive, commonly shared set of content. Users have resource pools (money, storage, bandwidth, whatever); each applies some slice of their resource pool to the system, so that all users contribute to the serving of content that is globally accessed by the group as a whole.
All "static" or mundanely common assets are served from this resource pool.
The regular pinning and seeding applies to all other "niche" content.
As more users access or gain interest in that niche content, it receives more resource units?
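One toy way to model that last point: split a common pool among content items in proportion to how often each is requested (all names and numbers below are made up):

```go
package main

import "fmt"

// allocate divides a shared resource pool among content items in proportion
// to how often each item is requested: common assets soak up most of the pool,
// while niche content earns more units as interest in it grows.
func allocate(pool float64, requests map[string]int) map[string]float64 {
	total := 0
	for _, n := range requests {
		total += n
	}
	shares := make(map[string]float64)
	for item, n := range requests {
		shares[item] = pool * float64(n) / float64(total)
	}
	return shares
}

func main() {
	requests := map[string]int{
		"jquery.min.js (common asset)": 9000,
		"niche-dataset.tar":            900,
		"rarely-read-paper.pdf":        100,
	}
	for item, units := range allocate(10000, requests) {
		fmt.Printf("%-30s %.0f units\n", item, units)
	}
}
```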