There's obviously a lot of work going into this, and it's a lot more interesting than just reinventing BitTorrent and magnet links. However, I reflexively don't like the sound of this part (from the introduction to D5.5):
All main documents are being made publicly available and the partners are committed to open source releases of prototype code. This open approach does not adversely affect the exploitation possibilities of the project partners themselves: as originators of the technology, they have a first mover advantage in the initial adoption of the technology, also retaining the possibility of patenting the key results created by them in the project.
The requested URL contains malicious code that can damage your computer. If you want to access the URL anyway, turn off the avast! web shield and try it again.
This could be huge. I note, however, that peer-to-peer is not mainstream. It either never was, or it ceased to be. BitTorrent is popular, but it's not mainstream. If it were, YouTube would distribute torrents of its videos (it could be done right now by writing a torrent client in JavaScript). The sad reality is, there is a long-lasting, major roadblock:
There's a tradeoff between the extra complexity of engineering a P2P system and the ratio of the marginal cost of resources between homes and data centers. In 2001, when people were going gaga for P2P, it was actually a good idea compared to the status quo at the time (remember that Web 2.0 hadn't been invented yet). Since then, data center storage and bandwidth have gotten exponentially cheaper while broadband has stagnated, so client-server is now cheaper than P2P. For example, before they were driven out of business, cyberlockers delivered better service and had a more sustainable business model than BitTorrent. And that's not even taking into account the network effects of "big data".
I don't think that's necessarily a major roadblock. Depending on the implementation, with peer-to-peer you may connect to many peers, and their combined bandwidth will be good.
Another way of looking at it: if I used 100% of my upload traffic continuously, it would be a lot more total data sent than I generally download with normal use. So as long as a peer-to-peer system ensures QoS, the bandwidth is there. I think.
On a peer-to-peer network, download equals upload. If you arrive late, sure, everyone will upload to you. But if you arrive in the middle of the viral explosion, you won't be the only one who wants to download. Peers will upload to each other, not just to you. So, unless one of the peers happens to be a big fat server with monstrous upload, you won't download the video much faster than the median upload rate.
The idea of using your uplink to the fullest sounds good, but to do that, you need to acquire some content in the first place. And that requires the upload bandwidth of other peers…
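To make that concrete, here's a rough back-of-the-envelope sketch of a closed swarm (every number below is a made-up assumption, purely for illustration):

```python
# Toy model of a closed P2P swarm: what peers can upload bounds what the
# swarm as a whole can download. All numbers are illustrative assumptions.

peers = 1000                 # everyone wants the file at the same time
upload_mbps_per_peer = 1.0   # typical residential uplink (assumed)
seed_upload_mbps = 100.0     # one well-connected seed (assumed)

# Total traffic entering the swarm per second of wall-clock time:
aggregate_upload = peers * upload_mbps_per_peer + seed_upload_mbps

# That same traffic is all there is to download, shared across every peer:
avg_download_mbps = aggregate_upload / peers

print(f"average download per peer: {avg_download_mbps:.2f} Mbps")
# ~1.1 Mbps -- roughly the median uplink, no matter how fast your downlink is.
```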
Which is one reason that Google Fiber is so interesting to me. People, even technical people, tend to see it as overkill, but a fat, symmetrical pipe into every house could be a huge game-changer.
>That would potentially make the internet faster, more efficient, and more capable of withstanding rapidly escalating levels of global user demand. It would also make information delivery almost immune to server crashes, and significantly enhance the ability of users to control access to their private information online.
As I understand it, the plan is essentially to turn the internet into a giant peer-to-peer network. I'm not sure how this fits with the claim that it will "significantly enhance the ability of users to control access to their private information online". If anything, it seems that this model will lead to even greater and equally permanent duplication of online content. This means that user data will be reproduced and recopied even more than it already is... how does that give users greater control over privacy at all?
I don't think I'll touch on the problems of the implementation as it's described, but the following quote:
>the researchers behind the project also argue that by focusing on information rather than the web addresses (URLs) where it is stored, digital content would become more secure. They envisage that by making individual bits of data recognisable, that data could be “fingerprinted” to show that it comes from an authorised source.
really makes it sound like they are working toward making the internet completely search engine based. How else will you find content if the servers are gone? You search for it. It sounds like kind of a terrible idea, but maybe someone else can champion their cause.
"This means that user data will be reproduced and recopied even more than it already is...how does that give users a greater control over privacy at all?"
On the one hand, this just sounds like P2P. On the other hand, I am glad to see people recognizing the value in P2P (wasn't that what was envisioned for the Internet originally?) and trying to head back in that direction, away from centralized, implicitly trusted services run by gatekeepers/middle men.
After digging around on their website, it actually sounds like they see the current internet as P2P and want to replace it (or at least offer an alternative) with something that routes between sources of information rather than between peers/devices.
Like in the video in the article: the guy's heart rate monitor, rather than having a constantly changing IP address as he moves through different networks, would be identified as "the source of information about X's heart rate" (using an anonymous/opaque identifier, I'm sure), and this "future internet" would keep track of how to route that information to the people who have access to it and are subscribed to it.
So the "future internet" routers would be kind of managing pubsub at the IP layer. That's what I gather from these pages, anyway:
While a model such as in IP networking enables a stream of data between two explicitly addressed endpoints (with total transparency as to the information represented in this sent data and the communication surrounding this exchange of data), the model envisioned in our aspiration elevates information onto the level of a first class citizen in the sense that data pieces are explicitly addressed and therefore directly embedded into the network, unlike in today’s IP-like networks.
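To make the idea concrete, here's a toy sketch of "address the data, not the endpoint" (my own illustration, not the actual PURSUIT design; all the names and identifiers are made up):

```python
import hashlib

# Toy "information-centric" forwarder: subscribers register interest in a
# content identifier, publishers push data under that identifier, and the
# network (here, a single node) caches it and delivers it to whoever
# subscribed. No endpoint addresses anywhere -- only names for the data.

class InfoNode:
    def __init__(self):
        self.subscribers = {}   # content id -> list of callbacks
        self.cache = {}         # content id -> latest data

    def subscribe(self, content_id, callback):
        self.subscribers.setdefault(content_id, []).append(callback)
        if content_id in self.cache:            # serve from cache if present
            callback(self.cache[content_id])

    def publish(self, content_id, data):
        self.cache[content_id] = data
        for cb in self.subscribers.get(content_id, []):
            cb(data)

def content_id(label: str) -> str:
    # Flat, opaque identifier derived from a label (stand-in for a real scheme).
    return hashlib.sha256(label.encode()).hexdigest()[:16]

node = InfoNode()
hr_id = content_id("alice/heartrate")
node.subscribe(hr_id, lambda d: print("doctor sees:", d))
node.publish(hr_id, {"bpm": 72})   # the publisher never learns who the doctor is
```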
I'm saying this because I'm working hard to create an equivalent P2P platform, and boy, the devil is in the details.
It's very sophisticated right now, and I also focus on information; I'm doing it using the Chrome engine, so developers will create isolated applications that just inherit the mechanics of the system (pretty much like a browser does).
I didn't publish anything, no paper or idea, because I'm pragmatic (and not backed by any university) and want to publish the real software, so devs can create the applications and users can just use them.
I wonder how many of my own conclusions will overlap with the conclusions of their work.
If anyone knows more details about this and how it is different from P2P and MQTT (which is based on pub/sub), it would be cool to know about them... what the innovative parts are, etc.
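For comparison, broker-based topic pub/sub à la MQTT looks roughly like this (sketch assumes the paho-mqtt 1.x client API and a public test broker; the topic and payload are made up):

```python
import paho.mqtt.client as mqtt

# Minimal MQTT publish/subscribe: routing is by topic string through a
# central broker, which is one obvious difference from a router-level,
# information-centric design.

def on_connect(client, userdata, flags, rc):
    client.subscribe("demo/heartrate/alice")
    client.publish("demo/heartrate/alice", "72 bpm")

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("test.mosquitto.org", 1883, 60)  # public test broker
client.loop_forever()
```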
PS: if anyone wants to know more details about what I'm doing, just let me know.
None of this involves any new technology, but certainly a new way of using it.
Like my Italian grandmother learning how to cook Thai food in her late 80s: she was so uncomfortable with the idea that the same flour and oil could make this weird foreign deliciousness. But she had to be coaxed into trying it before she realized the new flavors were a good thing!
We'll feel the same way - data is data is data, but getting it from some cache a few hops upstream? ISPs and content providers are not going to like losing that control. Very much its own topic, and nowhere near as much fun to talk about!
Some of the methods detailed here would definitely be useful paradigm shifts: digital signing for data, mobile addressing, and broadcast GPRS would all be valuable features of a future internet. But who's going to pay for it?
In one of their documents they offer the idea that there's an incentive for ISPs to cache this information to prevent charges from lots of queries. But even with caching, they still aren't getting any money from anyone to pay for the service. You're not even sure where it's going or what it contains. Somebody's going to have to pony up for the bill, and it's not going to be the ISPs out of the goodness of their hearts. And if it involves different networks (like cellular), you're going to multiply the number of parties who need to get paid to make global data access useful.
I predict that this "future internet" is going to require a subscription-based fee that rivals your cell phone and cable bill combined.
This sounds awesome, but it is also scary. It seems like the end of pirated content to me. Ban it once and it's gone globally? I'm not sure I understand correctly, but it seems like a brilliant way to control property rights and precisely funnel information.
Why wouldn't the interaction be peer-to-peer as well? The point is to remove servers from the system, so there should not even be a concept of an "interactive server."
I mean, for example, an online game. To run it, I would need a computer over which I have control and which runs the game. That computer would be the server. If it goes down, the game goes down. How does that fit into this system?
Can you elaborate on this? I am not sure I understand what you mean.
"Peers may not be online at the same time that you are online."
That is already the case now with social networking, email, IM, etc. I do not really see this being a serious problem -- P2P networks are pretty resilient to people going offline as long as some threshold number of people remain connected.
"Updates are slower / content may not be fresh"
This is certainly a problem, but perhaps not insurmountable.
About the lack of a golden reference: an illustrative but extreme example is Bitcoin. There is no golden transaction history; whatever emerges through consensus from your peers is accepted as the transaction history. If there are network outages and silos form, then content can diverge.
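A toy version of that "consensus instead of a golden copy" idea, just to illustrate the longest-chain rule (not real Bitcoin code; the transactions are placeholder strings):

```python
# Toy longest-chain rule: two partitions each extend their own history; when
# they reconnect, everyone adopts the longest chain sharing the same root,
# and the shorter fork's entries are simply discarded.

chain_a = ["genesis", "tx1", "tx2"]            # history seen in partition A
chain_b = ["genesis", "tx1", "tx3", "tx4"]     # history seen in partition B

def resolve(local, remote):
    # Adopt the remote chain only if it is strictly longer and shares the root.
    if len(remote) > len(local) and remote[0] == local[0]:
        return list(remote)
    return local

# After the network partition heals:
chain_a = resolve(chain_a, chain_b)
print(chain_a)   # ['genesis', 'tx1', 'tx3', 'tx4'] -- 'tx2' was never "official"
```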
About peers being offline: I am not talking about people being offline, but their systems going offline. Say I want to publish a photo to my friends. Their systems may not be available at the moment my system is online.
I think that in both cases, Usenet serves as an example (and an old one) of how the problem might be solved. At worst, what happens is that messages arrive in a different order than they were posted, hence the importance of "References" headers and quoting posts (the rest becomes a UI problem). Digital signing defeats forgery (if the network fractures, then your signed messages will only appear in one of the pieces of the network until the fracture is resolved), and encryption and mix-nets protect privacy.
So let's say you wanted to build a social networking system on top of Usenet. Here is one approach:
* Use attribute-based encryption to distribute keys to your friends; this allows you to control who can see the posts you make
* To post, you encrypt your message, sign it, and encrypt it again; then you send it through a mix-net to alt.anonymous.messages
* Your friends all download from alt.anonymous.messages, decrypting those messages they can decrypt. Your signature tells them who made the post, and ABE ensures that only those who have permission to see the post can decrypt it.
You can imagine extensions for commenting on posts, or creating groups, etc. You can also see that Usenet is not really relevant -- any P2P broadcast system could be used.
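Off-the-shelf ABE libraries are thin on the ground, so here's a rough sketch of just the encrypt-sign-encrypt layering, using ordinary public-key crypto (PyNaCl) for a single friend in place of ABE; the mix-net and the Usenet transport are hand-waved away:

```python
from nacl.public import PrivateKey, SealedBox
from nacl.signing import SigningKey

# One friend's long-term keypair. In the scheme above this would be an
# attribute key issued by the poster; plain public-key crypto stands in here.
friend_key = PrivateKey.generate()
friend_box = SealedBox(friend_key.public_key)

# The poster's signing identity, known to their friends out of band.
author = SigningKey.generate()

post = b"off to the lake this weekend"

# Encrypt -> sign -> encrypt again, then (not shown) push the result through
# a mix-net to alt.anonymous.messages.
inner = friend_box.encrypt(post)           # hides the content
signed = author.sign(inner)                # proves who wrote it
outer = friend_box.encrypt(bytes(signed))  # hides even the author in transit

# A friend pulling from alt.anonymous.messages reverses the layers:
reader_box = SealedBox(friend_key)
layer1 = reader_box.decrypt(outer)
layer2 = author.verify_key.verify(layer1)  # raises BadSignatureError if forged
print(reader_box.decrypt(layer2) == post)  # True
```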
Your example is practical, and something I might even like to use. In theory, each user could run their own Usenet server, and that could be called P2P. But if the default configuration is to use a shared server, it doesn't sound P2P to me.
I will read up more on Usenet though. I have used it but don't know its internals.
I was disappointed when the PURSUIT website turned out to be nothing but design documents and "mission statements". There doesn't seem to be any actual software anywhere...
So like a P2P CDN. Big deal. A CDN like Amazon's is cheap and efficient. For web applications, e.g. dynamic content, a CDN isn't helpful, and neither will this system be. I don't see this as revolutionary.
"For web applications, e.g. dynamic content, a CDN isn't helpful, and neither will this system be."
CDNs are helpful for "web applications". See: http://www.allthingsdistributed.com/2012/05/cloudfront-dynam... Lots of "dynamic" content can be cached for some amount of time. At the very least, the truly static parts of a page can be cached and the dynamic stuff can be AJAXed or SSIed in.
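For instance, a "dynamic" endpoint can still tell an edge cache to hold its output for a short window; a minimal sketch, assuming Flask and a CDN that honours s-maxage (the route and data are made up):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/trending")
def trending():
    # Imagine this is computed from live data on every cache miss.
    resp = jsonify(items=["story-1", "story-2"])
    # Let a shared cache / CDN edge reuse the response for 30 seconds while
    # browsers still revalidate; the "dynamic" page stays fresh enough.
    resp.headers["Cache-Control"] = "public, max-age=0, s-maxage=30"
    return resp

if __name__ == "__main__":
    app.run()
```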
A P2P CDN without Amazon (or any corporation or government) able to control it? It may not be technically revolutionary, but stopping the concentration of power could reduce the threat of the Internet becoming the tool of fascism.
Would having to query a DHT or whatever for each piece of data really be faster than just doing a single DNS lookup at the beginning and then knowing where to get everything?
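Back-of-the-envelope, the comparison looks something like this (the node count is an assumption, and both real DHTs and DNS resolvers cache heavily, so this is a sketch of the concern rather than an answer):

```python
import math

# Rough hop-count comparison; the constants and caching behaviour matter far
# more in practice than this toy math suggests.

nodes = 10_000_000                         # assumed DHT size
dht_hops = math.ceil(math.log2(nodes))     # ~O(log N) steps in Kademlia-style DHTs
dns_lookups = 1                            # one (usually cached) resolution per hostname

print(f"DHT lookup: ~{dht_hops} hops per object")            # ~24
print(f"DNS: {dns_lookups} lookup, then direct fetches from the same host")
```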
http://www.fp7-pursuit.eu/PursuitWeb/?page_id=158