The biggest drawback is that you can only serve static content, although I think many websites today would be fine with this restriction.
You can read more on my website:
Follow along with the development on github here:
Refer to this:
At a high layer of abstraction, neural connections work probabilistically, firing in the path most likely to be a useful pattern; hence, most people will tend to assume "free" means "as in beer". However, a new term like "liberated software" will not trip this pre-existing wiring, forcing a new pathway that doesn't have to compete with the conditioning created by a lifetime of marketing messages ("Free Checking! Free Estimate! Free For The First Year!"... etc)
Good thing 4-clause BSD never caught on in the Linux world.
By the way, have you heard about Freenet? https://freenetproject.org/
You said it downloads the entire site when I open it. Now, this is completely fine for normal text-based blogs, but what if it's an image-heavy site, like theChive? Wouldn't I end up storing several gigabytes (possibly terabytes) of data for that site?
Secondly, if somehow we extend the concept to dynamic content (don't ask me how), wouldn't it effectively mean each of us running a server and hosting the content on it?
Maybe your idea can be implemented on the server level instead of the client level... like proxy servers doing what you want the clients to do and thus the content would be all over the internet instead of a few specific servers. (not saying that's a good idea btw)
I hear what you are saying, but there is also a reason for DRM; otherwise we will have Flash forever. What I am saying is that I, at least, can understand why he did it.
It really is astounding, and bears so much emphasis, that the proposed DRM module system is actually worse than Flash.
I believe that free and open, given some semblance of a level playing field, will always win.
Just to be clear, if they came out and said "no DRM in HTML", I would stand behind that as well. I just accept their judgement that this was necessary at the moment for things to work.
I don't think my view is the only right one, so I might be wrong here, but this is what I believe.
If a company insists on using DRM then they should simply create their own proprietary plugin like Flash. But there is absolutely no reason for us to destroy the open web and make it rely on proprietary binary modules.
Just because it's easy for you to circumvent doesn't mean it's easy for the everyday user.
But totally agreed that it doesn't belong in web standards.
All it takes is for one person to circumvent it and the whole system crumbles. Which is the current state of affairs.
If I sound angry, it's because I am.
Actually, Netflix is easier than pirating, which is why Netflix now consumes more traffic than torrents.
It... actually is already at this point. I can find any song, any TV show, and any film, available to stream instantly on YouTube of all places. This isn't a shady torrent site; it's a Google-owned property.
In the music sphere, Grooveshark has every song you could think of, is alleged not to comply with the copyright owners' licensing requirements, and is being sued over it.
There is little or no reason that a similar service could not pop up for video, aside from the fact that bandwidth is costly, and I'm sure that could be resolved to an extent through some form of torrent-over-WebRTC system.
You don't combat piracy by locking it down and making it a pain to access your content; you compete with piracy by making sure it's always easier and more convenient to obtain your content legitimately, not by punishing those who are legit for the few who will go pirate it anyway. As long as I can purchase a song for $1 DRM-free and have it in 5 seconds at high quality from iTunes or Amazon, I won't ever be tempted to find a torrent for it, even if it's free.
DRM creates artificial scarcity, and scarcity breeds a black market. Those black markets may always exist, but if you make it hard enough to access your product on the device they want at the time they want, they will have more incentive to go to the black markets.
This is false. Those companies could use HTML without DRM with the same success. No company ever needs any DRM.
The BBC needs DRM to be able to give us that content.
"The rights owners" are in a market, and if the market simply refused to buy broken products, they would have to either deliver a non-broken product or go bankrupt, there is no way in which they could force people to buy broken products in order to sustain them.
The majority of people don't know what DRM is and don't care; they just want to watch content.
One of the biggest problems in tech and forums like these is that everyone thinks they are so much more significant than they are; they don't realise they are such a tiny minority of the consumer spectrum.
Not our problem. They can take the web or leave it. Why should we pander to them and screw up a good thing in the process?
The web is a proprietary app controlled by the W3C, which is mainly Microsoft, Google, and Apple.
You can believe it's an open platform, but if that's the case it should be easy for you to block DRM ;) Good luck
The web is not proprietary, not an app, and only to some extent controlled by the W3C, which comprises much more than Microsoft, Google, and Apple.
Well, let's see if DRM makes it into HTML5, then.
Is that the same kind of DRM that enabled businesses to use the printing press, radio broadcasts, TV sets, vinyl records, cassette tapes, audio CDs and MP3s?
You cannot credibly argue for "building new architectures for the web where it's decentralised" (especially not with the arguments he uses with regards to corporate and government power) and at the same time endorse leaving the endpoints open to centralized control.
I can understand why some people who do not like it would still support DRM in HTML. I cannot understand why he does.
I'd rather have a reasonably cross-platform plugin with possible open-source implementations than 100% closed plugins with no cross-platform support that cannot be implemented as open source.
DRM on the open web is all bad. If you compromise on that because you want Netflix in your browser and not an app, that makes you an asshat, and part of the problem.
We need open standards, we need to defend them, and we definitely don't need people undermining them on a factually incorrect basis.
Standardizing DRM extensions is bullshit. No new browser can ever be standards compliant when being compliant requires having the appropriate binary blob as distributed by the approved parties.
If only a select few can implement it, then why call it a standard?
The standard (that is being proposed) in this case is an interface that the actual DRM modules hook into. Anyone can implement the interface.
In this case, the implementations of the DRM hooks are those proprietary DRM modules.
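For what it's worth, the interface being standardized (Encrypted Media Extensions) looks roughly like this from the page's side. A sketch, with a placeholder key-system identifier and license-server URL:

```typescript
// Rough sketch of the proposed EME flow from the page's point of view.
// The JavaScript interface is what the spec standardizes; the actual
// decryption happens inside a proprietary Content Decryption Module (CDM)
// shipped with the browser. Key system name and license URL are placeholders.

async function playProtectedVideo(video: HTMLVideoElement, initData: ArrayBuffer) {
  // Ask the browser for a CDM that handles this (hypothetical) key system.
  const access = await navigator.requestMediaKeySystemAccess("com.example.drm", [
    { initDataTypes: ["cenc"], videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }] },
  ]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  const session = mediaKeys.createSession();
  session.addEventListener("message", async (event) => {
    // The CDM produced an opaque license request; relay it to a license server.
    const response = await fetch("https://license.example.com/", {
      method: "POST",
      body: (event as MediaKeyMessageEvent).message,
    });
    // Hand the (equally opaque) license back to the CDM.
    await session.update(await response.arrayBuffer());
  });
  await session.generateRequest("cenc", initData);
}
```

Note that the page only shuttles opaque blobs between a license server and the CDM; everything that actually matters happens inside the binary module.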
This "standard" shuts everyone except the select few in the dark. Without have the appropriate plug-in, you cannot access the content.
Of course, the amount of work it would take to build such a web and move to it likely rules out the possibility of it ever happening. I mean, how do we go about forming a movement to build this? It only works, really, if everyone's on board.
This is very true. More so if you look at countries like the UK, where ISPs have recently been consolidated into MAFIAA-complying monopolies that censor just about anything that Hollywood and the government ask them to. It's scary how close ISPs are to the governments of the UK and USA. Obama's recent NSA 'reform' proposal even suggested that ISPs be in charge of surveillance and storage of metadata instead of the NSA! WTF? Fascism.
Mesh networks have a big role to play in decentralizing the post-NSA internet. Micropayments through Bitcoin and other cryptocurrencies will encourage individual nodes to provide bandwidth to users. Smartphone apps like OpenGarden and systems like the Athens Wireless Metropolitan Network (AWMN) are the frontrunners of the new internet.
First and foremost: antennas and hardware in general. It's one thing to maintain a multi-channel 500 Mbps connection within 10-20 feet; it's another to do it with multiple devices simultaneously connected to multiple other devices. Even if you scale the speed down, the consumer-grade hardware out there isn't targeting this use case.
Second: Protocols. If we had the hardware, the whole mesh has to be trustless while being affected as little as possible by connectivity and pathing overhead. Plus backwards compatible with existing infrastructure (which is less hard I guess).
Third: Gatekeepers. If you get them on board, creating the mesh as a natural extension of the internet becomes easy. Since they invariably won't be on board, it'll be that much harder.
Naturally, the internet is not a mesh network, because nobody will take on this role of gatekeeper without benefit. If somebody does, it becomes an ISP again.
However, although the current web is messy, we should be able to organize it using distributed network solutions in a peer-to-peer manner, as proposed in the blog here: http://bit.ly/1e8vrLL. It serves as an additional layer on top of the existing internet. It has the protocols built in, the "Gatekeepers" sitting in between the distributed nodes, and a feasible business model to support its execution, rather than the ideal Semantic Web solution requiring DRM.
Thinking further, what if all these services could be plugged into a well-abstracted peer-to-peer network, consisting of every connected device in the world? Services similar to Twitter or Facebook would no longer require a central host. Redundancy would be built in. Uptime would be pretty much guaranteed. Ads would go away. Freedom would be an implicit part of the system; no longer would profit motives sully (or censor!) services that people use and enjoy. And it would be more natural, too: pumping all our data through a few central pipes makes a lot less sense than simply connecting to our neighbors. Our devices would talk to each other just like we talk in the real world, only with the advantage of being able to traverse the global network in a matter of milliseconds.
New technologies like Bitcoin and BitTorrent would crop up naturally as part of this arrangement. It would put power back into the hands of the people.
Sadly, it seems a future of locked-down app stores controlled by a small number of large corporations is much more likely.
(Sorry, I realized this started sounding like a Marxist rant halfway through. And I'm not even a Marxist! It just sucks that this fascinating future is one that we're not likely to experience, even though we're almost at the point where we could actually implement it. I think.)
Unfortunately, they are not quite practical either, due to security (opening the local firewall), duplicates from every node, etc. It's hard to control without a centralized facilitator. However, agent technology provides a solution for that. Check it out here: http://bit.ly/1bvvIJ3
So, in urban areas of the US at least, it really is just a matter of software.
Of course, a large scale wireless mesh network would probably be somewhat lacking in terms of performance, but we also still have wired connections to ISPs and could start setting up wired connections to each other. Peering with your neighbor (wirelessly or not) to share idle bandwidth shouldn't make things any slower than they already are...
The problem of course is that everyone would have to use a compatible mesh protocol. There are several options for a mesh network protocol now, but I don't know of any that scales well while remaining decentralized. Even if someone did come up with a good protocol now, considering how long the switch to IPv6 has been taking, I wouldn't expect any switch away from IPv4 to happen very soon.
Seriously, why prevent people from using a network? If you're relying on WPA for security then you shouldn't be connecting to the Internet in the first place. The most common thing I hear when guests see that my WiFi is open is that people will 'steal the bandwidth', as if the local exchange will magically run faster if there are enough residences connected to it! Also, given the monopoly position of the phone provider, getting everyone to throw more money at them for the privilege of having multiple overlapping, mutually exclusive networks is not a good plan.
And, if you're really zealous about it, the firmware. There was a high-priority FSF project a while back for free wireless router firmware. That listing mentioned something called "orangemesh," which now appears to be on hold. What are some of the other options out there on the software side?
This looks extremely interesting, basically the entire network works like a giant CDN.
Example: Comments on a blog post.
Given a blog post at URL A, comments are created by posting a note on your own site at URL B, then using a webhook to notify the blog that such a comment exists.
Your content cannot go away as long as you host it, and it's not dependent on the blog's cooperation to be available, just to be findable.
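Concretely, the notification step can be as small as one HTTP POST. A minimal sketch, assuming the blog advertises an endpoint for such pings (this is essentially what the Webmention protocol later specified); all URLs here are hypothetical:

```typescript
// Minimal sketch of the cross-site comment notification described above.
// The reply lives on your own site; the blog is merely told it exists.

async function notifyOfReply(blogEndpoint: string, postUrl: string, replyUrl: string) {
  // Tell the blog: "the page at `source` is a comment on your page at `target`".
  await fetch(blogEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ source: replyUrl, target: postUrl }),
  });
}

notifyOfReply(
  "https://blog.example.com/webmention",   // hypothetical notification endpoint
  "https://blog.example.com/posts/42",     // URL A: the blog post
  "https://yoursite.example.net/notes/7"   // URL B: your reply, on your own site
);
```

The blog can then fetch URL B and verify it really links to URL A before listing it.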
That makes someone else's content depend on the availability of your content. In the example of comments, the context and any intermediate replies can be lost if you stop hosting them, leaving behind not a record of a conversation but only indecipherable fragments.
A quick scan through the page at the link posted also seems to indicate the protocol depends a lot on content providers being well behaved. I'm assuming that somewhere in there, there is handling for when content simply goes away without notification?
I think this is true of existing content, except there are much fewer content providers and there's no protocol.
You can get guidance on how to get involved with Mozilla by following the instructions and filling the form at http://www.mozilla.org/en-US/contribute/
There is some hope that WebRTC and other p2p technologies might alleviate the problem but then the task of punching through firewalls with techniques like STUN falls on developers. I'm not seeing an industry-led campaign to standardize a true p2p protocol based on something like UDP, because it would undermine the profits of data producers.
I could go into how the proper layering of TCP should have been IP->UDP->TCP instead of IP->TCP and a whole host of technical mistakes such as inability to set the number of checksum bits, use alternatives to TCP header compression, or even use jumbo frames over broadband, but what it all comes down to is that what we think of as “the internet” (connection-oriented reliable streams and the client/server model) is not really compatible with a distributed content-addressable connectionless internet that can work with high latencies and packet loss.
I think if this network existed, then extending HTTP to operate in a p2p fashion wouldn’t be all that complicated. Probably the way it would work is that you’d request data by hash instead of by address, and local caches would send you the pieces that match like BitTorrent. Most of the complexity will center around security, but I think Bitcoin and other networks of trust point the way to being able to both prove one’s identity and surf anonymously.
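A minimal sketch of that request-by-hash idea, assuming SHA-256 content addressing; `fetchFromAnyPeer` is a stand-in for whatever swarm or local cache supplies the bytes:

```typescript
import { createHash } from "node:crypto";

// Content addressing: you ask the network for a hash, any peer can answer,
// and you verify the bytes yourself, so you never need to trust the host.

declare function fetchFromAnyPeer(hash: string): Promise<Buffer>; // hypothetical transport

async function fetchByHash(expectedHash: string): Promise<Buffer> {
  const data = await fetchFromAnyPeer(expectedHash);
  const actual = createHash("sha256").update(data).digest("hex");
  if (actual !== expectedHash) {
    throw new Error("content does not match its address; discard and try another peer");
  }
  return data; // integrity is guaranteed by the address itself
}
```

Because the address is the hash of the content, integrity comes for free, which is what lets untrusted caches serve it BitTorrent-style.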
Tim Berners-Lee may not be thinking about the problem at this level but we need more people with his clout to get the ball rolling. I wasted a lot of time attempting to write a low-level stream API over UDP for networked games before zeromq or WebRTC existed and ended up failing because there are just too many factors that made it nondeterministic. It’s going to have to be a group effort and will probably require funding or at least a donation of talent from people familiar with the pitfalls to get it right. Otherwise I just don’t think a new standard (one that makes recommendations instead of solving the heart of the problem) is going to catch on.
You know, the speed at which SPDY has caught on has given me hope. Admittedly, that's at the application layer, but I think if Google were to get behind an initiative to create a better TCP, it would actually have a chance at working. Historically, Google's incentives were properly aligned with a decentralized Internet, but Google is more and more a content producer. So I'm not sure.
Until then, we'll have to rely on servers to bootstrap peer-to-peer networks, even Bitcoin, Bittorrent, and yes, WebRTC.
I don't think Google has much interest in a decentralized web. Google and Facebook are two of the prime examples of a centralized web.
Unfortunately, with the existing architecture and infrastructure along with the team resources, they are focusing more on the algorithms of machine intelligence and AI.
You might find QUIC interesting (being implemented in Chrome):
The SPDY application layer is implemented on top of it, and QUIC itself works over UDP, so in theory one can implement other sorts of application layers on top of it.
Not as NAT friendly as you might think. There's a fair bit of discussion in the QUIC design document around how NATs screw up the protocol and how to mitigate the problem.
The WebRTC connection is bootstrapped through a server (STUN and TURN), but that's the case with almost all peer-to-peer software (take a look into how Bitcoin or Bittorrent is bootstrapped if you don't believe me). But anyone can run a STUN and/or TURN server.
Once the connection is bootstrapped, it's entirely peer-to-peer between browsers.
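To illustrate: the standard API lets you point at whatever ICE servers you like (the addresses below are hypothetical), and after the handshake the data path is browser-to-browser:

```typescript
// Nothing ties WebRTC to any particular operator's servers. This ICE
// configuration points at self-hosted STUN/TURN; once candidates are
// exchanged, media and data flow directly between the two browsers.

const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.example.org:3478" },
    {
      urls: "turn:turn.example.org:3478", // relay of last resort behind hostile NATs
      username: "demo",
      credential: "secret",
    },
  ],
});

// A data channel between peers, with no server in the data path:
const channel = pc.createDataChannel("p2p");
channel.onopen = () => channel.send("hello, peer");
```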
It's the current philosophy of "web apps" and not "hypertext" that isn't decentralized.
The problem is not technical, it's social, political and economical.
Two years ago I really became passionate about the problem of decentralizing the consumer internet again. We can see with Git and other tools how distributed workflows are better in many ways than centralized ones. The internet was originally designed to be decentralized, with no single point of failure, but there's a strong tendency for services to crop up and use network effects to amass lots of users, and VC firms have a thesis to invest in such companies. Even so, I believe the future is in distributed computing, like WordPress for blogs or Git for version control.
I started a company two years ago to build a distributed publishing platform. And it's nearly complete.
Soon... it will let people run a distributed social network and publish things over which they have personal control. And I'm open sourcing it:
http://qbixstaging.com/QP <-- coming soon
As a general concept, P2P is easy for users - using bittorrent clients, TPB, etc. is essentially mainstream.
The one thing that would be nice is browser support - I would love it if Chrome had the option to download .torrent files using the Chrome download manager.
Thanks to web seeds, this would allow providers to convert any download to a torrent instead, both increasing use of P2P and decreasing their own server load.
Users wouldn't even need to know that they're using Bittorrent to download files - all they'd see are faster download speeds.
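Web seeding is already specified (BEP 19): the torrent's metainfo simply lists plain HTTP mirrors, so the origin server acts as a permanent seed. A sketch of the relevant structure, with illustrative values (a real file is bencoded, and `pieces` holds the concatenated SHA-1 piece hashes):

```typescript
// Metainfo for a web-seeded torrent (BEP 19). The `url-list` key lets
// clients fetch pieces over plain HTTP from the origin server, so the first
// downloader works even with zero peers, and later downloaders offload the
// server onto the swarm.

const metainfo = {
  announce: "http://tracker.example.com/announce",
  "url-list": ["https://downloads.example.com/app-1.0.iso"], // the web seed
  info: {
    name: "app-1.0.iso",
    "piece length": 262144,                      // 256 KiB pieces
    pieces: "<20-byte SHA-1 hashes, concatenated>",
    length: 734003200,                           // total size in bytes
  },
};
```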
I wish that were true, but I am forced to disagree; it's been my experience (and that of my tech-savvy friends with whom I have discussed this UX shortcoming) that most people don't know how to torrent (much less set up and use freenet). Showing my parents or most of my siblings how to use a torrent client in conjunction with a tracker site is not going to be a fruitful task. If it's not as easy as Youtube or Netflix (click and watch), they aren't going to put in the effort to learn or remember how to do it.
The mainframe->distributed->mainframe cycle will then once again swing to distributed (yes, we now live again in a mainframe age).
Almost every really big, really stable website (or network service) is built on a set of distributed applications. Global data redundancy is just one of the considerations. If you want your application available everywhere, all the time, you have to design it to withstand faults, to distribute data and computation, and to do this over long distances, and still perform the same actions the same way everywhere. That's all a de-centralized web needs to look like, and it's actually already implemented in many places.
What I think Tim is saying is that we need to move away from concepts that centralize data and computation and use existing proven models to make them more stable in a global way. And I'm totally behind that. But if you think we're going to get there with p2p, self-hosted solutions, new protocols, new tools, new paradigms, or a new internet, you've completely missed the boat. We have had everything we need to accomplish what Tim wants for a while. It just has to be used properly [which some companies actually do].
But good luck convincing most companies to spend the time and money doing that...
I cannot agree with realizing it via data redundancy and replication. What you describe is just what Google is doing right now, which is not true distributed computing. Google's architecture relies on replicating services across many different machines to achieve distributed computing/storage and fault tolerance across the world. See the reference here:
The real distributed computing solution for providing a well-organized web has been proposed: build a layer on top of the existing physical infrastructure of the internet to present the web in a decentralized, systematic way. http://bit.ly/MwT4rx
The paper you linked to does describe how queries are resolved by Google, and not quite the whole picture required for a typical distributed computing model. But to say it's not distributed computing is... confusing. Clearly they must support this model over many sites the world over, and we know from other documents that they geo-locate data to make it more convenient, meaning it's portable and partitionable. Sounds like a distributed computing system to me.
A good reference could be from Wikipedia: a distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal.
You can find from the blog page here about distributed computing: http://bit.ly/1jkySqU
"Tim Berners-Lee: we need to re-decentralise the web"
The actual article never mentions the term "re-decentralise" at all.
There are a lot of diy-isps out there. Engage with your community and become part of a diy-isp, e.g. by hosting services there, peering with them, etc.
I have this image of a framework/protocol in my head around allowing various policies to be created and built on top of each other. As a programmer, I pretty much used object-oriented programming as an inspiration, though I guess even OOP had its inspiration from biology. It's a way for an organic system to grow and adapt robustly. I think that, just like computer programs, government needs that too.
Added to that, if the web has a decent self regulation mechanism, that makes the justification of regulating the web even harder.
The one problem is that the advantage "physical" governments have is sovereignty. If Amazon starts doing illegal things, the FBI can arrest Jeff Bezos. The internet doesn't have an equivalent means.
However, I think there is a way to have the essence of sovereignty, and that is through a new cryptocurrency like Bitcoin. Imagine if, to be a part of and receive the benefits of this online sovereign nation, you had to register your receive address. If the sovereign entity had a way to embargo your ability to accept payments in the currency, it might be enough of a penalty to fall back in line with whatever regulations were created. I think it also provides another means to give legitimacy to a cryptocurrency.
An example of something that would be cool is this. You want to create an open source space program, so you propose the idea on your local netizen forum. The idea has overwhelming support, so someone creates a new policy component, and gives permissions to suction tax funds. I can imagine something like that happening in hours. Imagine initiating a public effort going from idea to reality in hours.
What you're talking about is popularly known as science fiction. You should be writing books and not pretending like you know the first thing about politics.
The reason why things take so long in reality is that there is a multitude of voices. Reddit does not promote pluralism, but rather a giant circle-jerk of like-minded individuals. Compromise is the name of the game in politics, something that the goon squad on Reddit has almost no understanding of.
What are you actually saying here? If you're going to berate the parent comment, then you're going to have to (at the very least) back it up with a few reasons why his/her system would fail.
You know what always fails? When a group of nitwits gets together and decides they've got it all figured out and then subjects the rest of society to their hare-brained ideas. In the last 100 years, engineers have played a large role in a number of such regimes. See: Franco, Pinochet, Mussolini.
So yeah, expect a lot of vitriol from students of history when some engineers start getting really excited about playing the role of political radical.
The meritocracy SV loves so much is basically a nightmare when applied to society at large. You'll have designed a system by and for computer engineers, with the premise that all of the unwashed masses will either have to sink or swim in your newly founded utopian system. You're going to fuck it up, and along with that you're going to fuck up millions of lives.
Are there any fucking adults on this forum any more? Do your parents know where you are? If not, you should probably call and let them know; they are probably worried sick.
> Think about how many great things have sprung out of places like Reddit.
To think about what is lacking in the fantasy, let's start with what happens when there is no overwhelming support for anything. How does a whiz-bang networking technology solve fundamental human disagreements?
It's also worth asking if speed of execution is really the defining characteristic of a good government. Plenty of horrible governments have been able to get horrible things done quickly and efficiently.
Some examples which spring to mind are the Snoopers' Charter and Digital Economy Act.
The "motivation to create wealth" justification for inequality is one of the most morally bankrupt ideas ever conceived. Billions of people around the world live in abject poverty; to blame them for their situation takes a profound case of the fundamental attribution error.
I should do a better write up sometime.
It leverages Bitcoin, but not in a way to "suction" taxes away from unwilling participants.
After I published my own rant about the same topic, Dave Winer commented via Twitter and argued that RSS is a very successful tool that decentralizes the web. I didn't understand his point at the time, but today I think he is absolutely right.
His point was that when trying to fix the problem, we should look at what already exists and works. Why did RSS succeed? Probably because it solved a problem for many people. It did not try to reinvent old things in a decentralized way but offered real benefits. Another good example of this is Git vs. SVN (decentralized vs. centralized revision control).
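It's worth remembering how little machinery RSS needs. A feed is just a static XML file anyone can host anywhere, and consuming it takes a few lines; a sketch, with illustrative contents:

```typescript
// A minimal RSS 2.0 feed. The whole "protocol" is: publish this file,
// subscribers poll it. No central service involved.
const feed = `<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://blog.example.com/</link>
    <item>
      <title>Re-decentralising the web</title>
      <link>https://blog.example.com/posts/1</link>
      <pubDate>Mon, 10 Feb 2014 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>`;

// Consuming it in a browser needs nothing beyond the standard DOM parser:
const doc = new DOMParser().parseFromString(feed, "text/xml");
for (const title of doc.querySelectorAll("item > title")) {
  console.log(title.textContent); // "Re-decentralising the web"
}
```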
I miss USENET. Nowadays, my digital identity has been spread to the four corners, across Facebook, Twitter, various unrelated phpBB-based sites, and places such as /. and HN. Even if I wanted to, there is no possible way that I could re-locate every digital utterance I have ever made publicly. Not only that, but I don't truly "own" anything I write anymore. It's stored in some unmarked server somewhere, stuffed into a schema that has no published documentation. I still have e-mail archives going back 10+ years, but if Facebook ceases to exist 10 years into the future, it will be like the (admittedly few) worthwhile written conversations and interactions I had with people on there never happened. There will be no record of it.
There's got to be a better way.
They're not the new Microsoft quite yet, and they may never be, but Google is a long way from their peak of hacker-ish credibility.
Add to that the fact that Google is THE face of the recent tech backlash. Others don't like the whole idea that they employ a tiny fraction of the workforce of someone like GM, yet are "hoovering up public subsidies and then avoiding taxes". If the public backlash continues or increases, we'll all start to face (some) repercussions. So how long will the tech community support Google en masse?
Antitrust accusations? You mean those funded by Microsoft astroturf lobbying groups?
And just listen to your other complaint: Google is the face of rich, privileged techies, and at the same time they're uncool for theoretically pushing down salaries, so these already-hated SF techies should be showered with even more money? If you look at the publicly released transcripts, Steve Jobs was the ringleader of this effort anyway and threatened to "go to war" with some of the players if they poached Apple employees. Why isn't Apple facing similar outrage over this? Steve Jobs tried to price-fix engineering salaries and price-fix eBooks.
Becoming anti-privacy, pro-DRM, and against net neutrality are much more significant betrayals.
The reason Apple doesn't get such a backlash is that they haven't represented themselves as morally superior. Everyone knows Steve Jobs was a ruthless businessman, and that Apple is not a philanthropic organization.
For ADD-ers: http://youtu.be/hSyfZkVgasI
The story: http://www.archive.org/details/paulotlet
Long PR story short: RPC was already prevalent, that is how you control(led) your stuff remotely. Like Tim did on his day job. Today, if you'd ask for a NeXT-like toy, you'd be denied were you an average Eastern member of CERN (n.b. western equivalent). As a true westerner, TBL asked for it and got one. Put the gopher link address ptr in the reserved field of the text font properties (where things like bold and italics properties are stored) and voilà. Mix in a simplified SGML for markup (as it was customary back then for typefaces and layout on printers). One could also hire cheap disposable students to actually write the web clients and servers to be cross platform (the web's virtue/value).
Would the "web" have just ran on NeXT, it would be long extinct, let alone have taken off. Clicking on text is how you used Oberon or even Genera Document Examiner (perhaps even over the network). It was the spirit of the times. Even Jobs demoed a WYSIWYG client-server application in 5 minutes on the NeXT. What an insult.
On linking and hypertext: all the post-war-era Anglo-Saxon stuff is spin (including the "Vannevar and Tim invented the web" myths). The real stuff comes from Belgium, as did Tim's boss, btw.
When asked what he would have done differently!!
So I'm not surprised that people do not put weight on his words, but rather the reason why they don't and the apparent recency of this opinion.
Not only mandatory flashing, but also tracking.
If you want to view the content, it will ensure that you've seen the ad before it renders the content.
There can and will be workarounds, but thanks to lobbying, those workarounds are criminal in many parts of the world.
Also, isn't this decentralization? Not having the data in one mega-American-cloud? Seems to me that Tim is doing a lot of PR for big companies lately, masked as a benefit for the users, just like he's doing with the whole DRM thing, which he actually says would be beneficial for users.
But let's assume he's not trying to be malicious here, and that he has a point. Here's the thing. Yes, I agree that having every country demand companies to host the data locally is going to make it very hard for innovation to spread, and therefore, progress will slow.
HOWEVER, right now it seems we only have a choice between this and allowing the US to get their hands on all the data. I didn't even see Obama mention anything about the NSA not being able to tap the world's fiber cables anymore.
So until the US gets serious about not doing shitty stuff like that to the world's users, I absolutely think all the other countries should try to force companies to host the data locally.
There is one other solution to this that will allow companies to keep the data wherever they want, and that's encrypting everything by default, with even the companies themselves unable to decrypt the data without the user allowing it. So stuff like OTR for chat systems, DarkMail/PGP for e-mail systems, and so on.
Make it so the web is completely trust-less, so other countries don't have to trust Google or the American government to not get their data, because they could be assured there's nothing for them to get other than strongly encrypted data.
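A minimal sketch of the "provider stores only ciphertext" half of that, using the standard Web Crypto API. Symmetric AES-GCM here is a stand-in for the full OTR/PGP-style machinery, which layers key exchange on top:

```typescript
// The key is generated and held client-side (non-extractable), so the
// provider only ever sees ciphertext.

async function makeUserKey(): Promise<CryptoKey> {
  return crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    false,                      // not extractable: cannot be exported to the server
    ["encrypt", "decrypt"]
  );
}

async function encryptForUpload(plaintext: string, key: CryptoKey) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { iv, ciphertext };    // this pair is all the server ever stores
}
```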
Unfortunately, this option isn't even on the table right now with the big companies, and the US government will push against companies trying to do this, too. And the only way to get this option on the table is for them to think that other countries are going to inevitably force them to store the data locally and build data centers locally. Only then might they start preferring this "encrypt everything with the user holding the key" solution as an alternative to storing data in every country or region.
So until that happens, I absolutely support countries demanding data to be stored locally, because I know that minutes before that begins to happen, US companies and the US government will agree to letting the data be fully encrypted and trustless. But not any sooner than that. So in the end, we'll get what we want, and the Internet will be safe.
So not only would you have eth0:
You would have p2p0:
Anonymous, Unique ID, DHT
So you could reach your web server either via 192.168.0.100 or a short unique ID.
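In other words, resolution moves from DNS to a DHT lookup on the node's self-chosen ID. A sketch of what that could look like; every interface here is invented for illustration:

```typescript
// Hypothetical `p2p0`-style overlay resolution: instead of DNS, a node's
// short ID is looked up in a DHT, which returns whatever underlay addresses
// currently reach it.

interface DhtNode {
  get(key: string): Promise<string[]>;        // ID -> current underlay addresses
  put(key: string, value: string): Promise<void>;
}

declare function openConnection(addr: string): Promise<unknown>; // hypothetical transport

async function connectById(dht: DhtNode, shortId: string) {
  const addrs = await dht.get(shortId);       // e.g. ["192.168.0.100:8080", ...]
  for (const addr of addrs) {
    try {
      return await openConnection(addr);
    } catch { /* node moved; try the next advertised address */ }
  }
  throw new Error(`no reachable address for ${shortId}`);
}
```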
SSL communications would not be based on central CAs, because that model has long been compromised.
I think we should build this together. Maybe it's GNUnet?
Tim Berners-Lee needs to withdraw his statement, acknowledge his mistake and publicly apologize before his voice can carry any weight again.
Wow. Understatement of the century.
tl;dw: next time you build a web service, let your users use private Drive, Dropbox, or whatever accounts to host their data (or at least the data that doesn't get JOINed to other tables all over the place). One has to pay for data ownership, but it's not a per-service charge.
The article's kind of a tease. It's easy to say things like "open" and "decentralized," but I guess if I want to know what specific implementations were discussed, I'd have to buy the magazine.
The web meaning basically HTTP websites, on a bunch of servers.