Tim Berners-Lee: We need to re-decentralise the web (wired.co.uk)
585 points by amirmc on Feb 6, 2014 | 200 comments



Perfect timing, I just released a tool that enables a more decentralized web. It uses BitTorrent Sync to distribute files and I plan on implementing Namecoin or colored coins for name resolution. As soon as an open source, stable btsync clone is released I plan on swapping it in.

The biggest drawback is that you can only serve static content, although I think many websites today would be fine with this restriction.

You can read more on my website: http://jack.minardi.org/software/syncnet-a-decentralized-web...

Follow along with the development on github here: https://github.com/jminardi/syncnet
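
Roughly, the name lookup could work something like this (a sketch only: name_show and the "d/" namespace are real Namecoin features, but the record format and the btsync_secret field are assumptions that aren't settled yet, and RPC auth is elided):

    // Resolve a human-readable name to a BTSync share secret via a local
    // namecoind node. The btsync_secret field is hypothetical.
    async function resolveSiteSecret(name: string): Promise<string> {
      const res = await fetch("http://127.0.0.1:8336/", {
        method: "POST",
        body: JSON.stringify({ id: 1, method: "name_show", params: ["d/" + name] }),
      });
      const { result } = await res.json();
      const record = JSON.parse(result.value); // the value is a JSON-encoded string
      return record.btsync_secret;             // handed to BTSync to pull the site's files
    }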


BitTorrent Sync is the wrong foundation for this revolution. It should be open source from top to bottom.


Come on, the guy even said it himself in the post above: "As soon as an open source, stable btsync clone is released I plan on swapping it in." What's the point of re-emphasizing that? He obviously knows about the issue with btsync, but it's fine to start building something on top of it in the meantime, just as a proof of concept.


Free software; the term is free software. In other words, software that respects your freedom.

Refer to this: https://www.gnu.org/philosophy/free-software-for-freedom.htm...


This is a language battle which will never, ever be won, not even with geeks, let alone real people. Invent a new term ("liberated software"), or let it go. It is this sort of ideological pedantry that holds back FOSS adoption.


Provide better justification rather than "it's my opinion".


It's a mental "snap-to-grid" problem. The average American hears the word "free", and they immediately think of $0, absent a specific pre-existing linguistic context (free country, free speech, free range). Because software is something that is frequently paid for as a product on a shelf, a non-enthusiast is always going to hear the $0 meaning in their head, even if they've been educated as to its other meaning.

At a high layer of abstraction, neural connections work probabilistically, firing along the path most likely to be a useful pattern; hence, most people will tend to assume "free" means "as in beer". However, a new term like "liberated software" will not trip this pre-existing wiring, forcing a new pathway that doesn't have to compete with the conditioning created by a lifetime of marketing messages ("Free Checking! Free Estimate! Free For The First Year!", etc.).


And it's not Linux, it's GNU/Linux!


Ordinarily I think Stallman takes that way too far, but in this case the exact motivation behind free software was being described, yet it was still placed under the banner of open source. I think there's a time and a place for both terms, but when you're talking about resisting the NSA, you are (or should be) referring to free software.


That statement would make sense if the distros we are using were mainly just the Linux kernel + GNU tools. If we are going to give everyone their deserved credit, we should call it GNU/Linux/KDE/Apache/LibreOffice/...

Good thing 4-clause BSD never caught on in the Linux world.


open source btsync alternative you say? https://github.com/calmh/syncthing


This is quite nice. You should submit it as a "Show HN". You can be sure that as soon as you have a free software replacement for btsync I'll use it at least to serve my personal website.

By the way, have you heard about Freenet? https://freenetproject.org/


He did a few days ago[0] but it didn't get very much attention unfortunately. I only saw it because I spent way too much time on HN that day.

[0] https://news.ycombinator.com/item?id=7171978


Yeah I did submit it but it didn't get much attention. I don't know the etiquette on resubmitting items.


Try again when you've got a new feature, make a new release, or have something else noteworthy. There's always some timing luck to an HN submission.


You should team up with the Ethereum project: http://ethereum.org


While I love your idea (and am glad you found the context to show your project off again; I read the original post), I do have two nitpicks.

You said it downloads the entire site when I open a site. Now, this is completely fine for normal text-based blogs, but what if it's an image-heavy site, like theChive? Wouldn't I end up storing several gigabytes (possibly terabytes) of data for that site?

Secondly, if somehow we extend the concept to dynamic content (don't ask me how), wouldn't it effectively mean each of us running a server and hosting the site on it?

Maybe your idea could be implemented at the server level instead of the client level... like proxy servers doing what you want the clients to do, so the content would be all over the internet instead of on a few specific servers. (Not saying that's a good idea, btw.)


BTSync supports selective sync, so I could fetch only the resources that are needed to display the page.


Agreed. Like the distributed search engines, which are not quite practical in real use, the concept is cool.


You could consider using Tahoe-LAFS instead of BitTorrent Sync. Tahoe-LAFS is open source and stable, but it isn't a clone of btsync, and it might not fit your purposes.


This is awesome.


It's great that Berners-Lee is saying this, but he'd have retained a lot more of his authority to pronounce on the future of the Web had he not thrown his weight behind the W3C's horrible plan to put DRM into HTML.


Let's crucify him for that :)

I hear what you are saying, but there is also a reason for DRM. Otherwise we will have Flash forever. What I am saying is, I can at least understand why he did it.


Flash and Silverlight are dying anyway. So it's up to the movie industry and not up to us to bend over. And you know at least Flash was separated from the browser and ported to several systems. DRM in HTML5 will mean that Google, Apple, Microsoft will simply bundle their DRM modules and the free software world can fuck off. DRM in HTML5 is far worse than the status quo.


> And you know at least Flash was separated from the browser and ported to several systems

It really is astounding, and bears so much emphasis, that the proposed DRM module system is actually worse than Flash.


I don't completely share your assertions. In order for commercial companies to be able to use html5, they need some form of drm. This is just enabling them to do business.

I believe that free and open, given some semblance of an equal playing field, will always win.

Just to be clear, if they came out and said "no DRM in HTML", I would stand behind that as well. I just accept their judgement that this was necessary at the moment for things to work.

I don't think my view is the only right one, so I might be wrong here, but this is what I believe.


They don't need DRM. DRM does not work. Just go to The Pirate Bay and you'll find every TV series, movie (whether a DVD, BluRay, or streaming release), and album immediately after its release. They think they need DRM. But in reality they don't. It doesn't stop piracy at all. It's a burden only to their legal customers, not to the pirates. It is all about easy and affordable access.

If a company insists on using DRM then they should simply create their own proprietary plugin like Flash. But there is absolutely no reason for us to destroy the open web and make it rely on proprietary binary modules.


It stops plenty of piracy as without it piracy could be as easy as a single click from a place like Netflix, Google Music, etc.

Just because it's easy for you to circumvent doesn't mean it's easy for the everyday user.

But totally agreed that it doesn't belong in web standards.


"Just because it's easy for you to circumvent doesn't mean it's easy for the everyday user."

All it takes is for one person to circumvent it and the whole system crumbles. Which is the current state of affairs.


I don't know about "normal" people, but all the pointless shit they put on BluRays made me go BACK to piracy. I started my first real job 2 years ago, so I felt I should buy my movies. Turns out, BluRays are EVEN WORSE than DVDs. Nonstop trailers that use your bandwidth to download new fucking trailers, a hundred warnings about piracy, and DRM that requires constant annoying updates and crappy PowerDVD software.

If I sound angry, it's because I am.


It is now anyway, even with DRM.

Actually, Netflix is easier than pirating, which is why Netflix now consumes more traffic than torrents.


> It stops plenty of piracy as without it piracy could be as easy as a single click from a place like Netflix, Google Music, etc.

It... actually is already at this point. I can find any song, any TV show, and any film, available to stream instantly on Youtube of all places. This isn't a shady torrent site, it's a Google-owned property.

In the music sphere, Grooveshark has every song you could think of, allegedly fails to comply with the copyright owners' licensing requirements, and is being sued over it.

There is little or no reason that a similar service could not pop up for video, aside from the fact that bandwidth is costly, and I'm sure that could be resolved to an extent through some form of torrent-over-WebRTC system.


Everyday users fail at saving a youtube video. And someone who knows how to rip a youtube video usually also knows about pirate bay or other pirate websites.


Even worse, there's a whole bunch of malware Youtube downloader programs/browser plugins. It's pretty crazy but using TPB is at least easier and safer.


And I agree with you completely. But they believe that they need it and will not release content otherwise.


Then they can keep it and they won't get my money, and everyone will just end up pirating it anyway.

You don't combat piracy by locking down your content and making it a pain to access; you compete with piracy by making sure it's always easier and more convenient to obtain your content legitimately, not by punishing those who are legit for the few that will go pirate it anyway. As long as I can purchase a song for $1 DRM-free and have it in 5 seconds at high quality from iTunes or Amazon, I won't ever be tempted to find a torrent for it, even if it's free.

DRM creates artificial scarcity, and scarcity breeds a black market. Those black markets may always exist, but if you make it hard enough to access your product on the device they want at the time they want, they will have more incentive to go to the black markets.


Wow, what insight! I agree with you.


But we don't have to bend over for their beliefs! They can continue using Flash or Silverlight or write their own crap plugins. Why should we destroy the open web for them?


> In order for commercial companies to be able to use html5, they need some form of drm

This is false. Those companies can use HTML without DRM with the same success. No company ever needs any DRM.


The BBC cannot show the majority of their content on the iPlayer because the rights owners won't let them without DRM.

The BBC needs DRM to be able to give us that content.


No, that's completely the wrong way around. The rights owners could not sell anything to the BBC if the BBC were to simply refuse to buy anything they cannot publish without DRM, and not selling anything is not a particularly good business model in most cases.

"The rights owners" are in a market, and if the market simply refused to buy broken products, they would have to either deliver a non-broken product or go bankrupt, there is no way in which they could force people to buy broken products in order to sustain them.


The BBC fights hard enough to win the rights to show this content; if they didn't, the content owners would go to the other channels (Sky, ITV, Channel 4, Channel 5) or even create their own distribution outlet like HBO.

The majority of people don't know what DRM is. They don't care; they just want to watch content.

One of the biggest problems in tech and forums like these is that everyone thinks they are so much more significant than they are; they don't realise they are such a tiny minority of the consumer spectrum.


The BBC as a distributor still has way more influence than a single individual. By rejecting DRMed content they'd send a strong message to the publishers. It works in other cases, so I don't see why it can't work with video as well.


It doesn't appear that they have much influence: http://www.wired.co.uk/news/archive/2013-02/17/bbc-drm-w3c


It only says that they "don't feel the requirement will change". It says nothing about their potential influence. I'd say they are just chickening out and don't have the guts to stand up against crazy Lysenkoist publishers who insist on DRM. It doesn't mean they can't; they just don't want to. That was the whole point. As long as distributors continue cowardly proliferating DRM, the situation will remain as is.


This is an argument against DRM.


You don't want to view this content?


Not if the price is so high. Just because advertising spends all day every day telling us that we can't live without something, doesn't make it true. In fact, my usual opinion is that the better the advertising, the less I trust what it says.


"In order for commercial companies to be able to use html5, they need some form of drm"

Not our problem. They can take the web or leave it. Why should we pander to them and screw up a good thing in the process?


They currently are leaving it; there is a lot of content I can't get through a web browser that I can through a native app or a plugin player.


And that's fine. If they want DRM they should write a proprietary app or plugin. But we shouldn't turn the web into a proprietary app controlled by Microsoft, Google, and Apple.


I would rather have a seamless experience in the browser than having to switch to native apps all the time.

The web is a proprietary app controlled by the w3c, who are mainly Microsoft, Google and Apple.

You can believe it's an open platform, but if that's the case it should be easy for you to block DRM ;) Good luck


So you'd rather give up on the open web and turn it into a proprietary system? Just for a seamless experience? Sorry to say this. But that's just plain stupid.

The web is not proprietary, not an app, and only to some extent controlled by the w3c, which is much more than Microsoft, Google, and Apple.


> The web is not proprietary, not an app, and only to some extent controlled by the w3c, which is much more than Microsoft, Google, and Apple.

Well, let's see if DRM makes it into HTML5 then.


> In order for commercial companies to be able to use html5, they need some form of drm. This is just enabling them to do business.

Is that the same kind of DRM that enabled businesses to use the printing press, radio broadcasts, TV sets, vinyl records, cassette tapes, audio CDs and MP3s?


They don't really. Any stream can be captured. So if I can play the audio or video, I can capture it in a format that I can share; DRM just makes it a little harder.


Whatever we may WISH, Flash is still everywhere. And still necessary, for example, to play sounds across browsers.


And that has absolutely nothing at all to do with DRM...


Legacy browsers sure, but deciding whether to add DRM to HTML presupposes a browser that plays sound according to the standard.


Indeed, Ireland's Revenue service has a Flash "application" to manage your requests.


Yes, and the reason for DRM is exactly the same as for centralization: central control and exploitation.

You cannot credibly argue for "building new architectures for the web where it's decentralised" (especially not with the arguments he uses with regards to corporate and government power) and at the same time endorse leaving the endpoints open to centralized control.

I can understand why some people who do not like it would still support DRM in HTML. I cannot understand why he does.


> I hear what you are saying, but there is also a reason for DRM. Otherwise we will have Flash forever.

I'd rather have a reasonably cross-platform plugin with possible open-source implementations than 100% closed plugins with no cross-platform support which cannot be implemented in open source.

DRM on the open web is all bad. If you compromise on that because you want Netflix in your browser and not an app, that makes you an asshat, and part of the problem.

We need open standards, we need to defend them, and we definitely don't need people undermining them on a factually incorrect basis.


Yes let's.

Standardizing DRM extensions is bullshit. No new browser can ever be standards compliant when being compliant requires having the appropriate binary blob as distributed by the approved parties.


If "standards compliant" means implementing every feature then no new (or existing) browser will ever be standards complaint anyways. This is not a lever that can be used to force everyone to implement DRM.


But any agreed upon public standard should be implementable by any vendor. DRM extensions do not fit this bill.

If only a select few can implement it, then why call it a standard?


Only a select few can implement specific DRM modules, but nobody is talking about standardizing them.

The standard (that is being proposed) in this case is an interface that the actual DRM modules hook into. Anyone can implement the interface.


The "implementation" of an interface is the code that actually does stuff when that interface is called.

In this case, the implementation of the DRM hooks are those proprietary DRM modules.

This "standard" shuts everyone except the select few in the dark. Without have the appropriate plug-in, you cannot access the content.


There is no reason for DRM whatsoever. At least no valid reason.


If we were sure Flash would stay forever, we could just finish implementing Lightspark. With DRM that's going to be much harder.


You present a false choice. It should be the industry's choice whether to stop using DRM or to keep their DRM on a dying platform that fewer and fewer people will be using.


I was thinking the same thing. Thank you.


Decentralizing the web's software isn't good enough. We need to decentralize the hardware. Right now, connections to the web look like a tree, where a whole bunch of connections get funneled through an ISP. That ISP has the power. The power to throttle, the power to block, the power to record. And that ISP can be pressured by other powers. We need to decentralize so that instead of looking like a hierarchical tree, the internet looks like a graph, with each building forming a node that connects to its neighbors.

Of course, the amount of work it would take to build such a web and move to it likely rules out the possibility of it ever happening. I mean, how do we go about forming a movement to build this? It only works, really, if everyone's on board.


>Right now, connections to the web look like a tree, where a whole bunch of connections get funneled through an ISP

This is very true. More so if you look at countries like the UK, where ISPs have recently been consolidated into MAFIAA-complying monopolies that censor just about anything that Hollywood and the government ask them to. It's scary how close ISPs are to the governments of the UK and USA. Obama's recent NSA 'reform' proposal even suggested that ISPs be in charge of surveillance and storage of metadata instead of the NSA! WTF? Fascism.

Mesh networks have a big role to play in decentralizing the post-NSA internet. Micropayments through Bitcoin and other cryptocurrencies will encourage individual nodes to provide bandwidth to users [1]. Smartphone apps like OpenGarden [2] and systems like the Athens Wireless Metropolitan Network (AWMN) [3] are the frontrunners of the new internet.

[1] https://crypto.cs.virginia.edu/courses/14s-pet/2014/01/30/aw... [2] http://en.wikipedia.org/wiki/Open_Garden [3] http://www.motherjones.com/politics/2013/08/mesh-internet-pr...


I'd think content providers in general would be willing to provide bandwidth under a mesh network. They'd also be the ones most likely to pipe the mesh into the existing infrastructure. Who else besides the likes of Google, Netflix, or Facebook would front heavier-duty APs city-wide? Every ounce of traffic would have to be encrypted, though. The main barriers for mesh networking, as I see them, are:

First and foremost: antennas, and hardware in general. It's one thing to maintain a multi-channel 500Mbps connection within 10-20 feet; it's another to do it with multiple devices simultaneously connected to multiple other devices. Even if you scale the speed down, the consumer-grade hardware out there isn't targeting this use case.

Second: Protocols. If we had the hardware, the whole mesh has to be trustless while being affected as little as possible by connectivity and pathing overhead. Plus backwards compatible with existing infrastructure (which is less hard I guess).

Third: Gatekeepers. If you get them on board, creating the mesh as a natural extension of the internet becomes easy. Since they invariably won't be on board, it'll be that much harder.


You make some valuable points.

Naturally, the internet is not a mesh network, because nobody will take this role of gatekeeper without benefit. If somebody does, it becomes an ISP again.

However, although the current web is messy, we should be able to organize it using distributed network solutions in a peer-to-peer manner, as proposed in the blog here: http://bit.ly/1e8vrLL. It serves as an additional layer on top of the existing internet. It has the protocols built in and the "gatekeepers" sitting between the distributed nodes, plus a feasible business model to support its execution, rather than the ideal Semantic Web solution requiring DRM.


I sometimes wonder about this. Pretty much everyone already has a connected computer in their pocket. Wouldn't it be nice if we could use the phone without a cell provider? The web without an ISP? Connect to our friends without a social network? Exchange money without a bank?

Thinking further, what if all these services could be plugged into a well-abstracted peer-to-peer network, consisting of every connected device in the world? Services similar to Twitter or Facebook would no longer require a central host. Redundancy would be built in. Uptime would be pretty much guaranteed. Ads would go away. Freedom would be an implicit part of the system; no longer would profit motives sully (or censor!) services that people use and enjoy. And it would be more natural, too: pumping all our data through a few central pipes makes a lot less sense than simply connecting to our neighbors. Our devices would talk to each other just like we talk in the real world, only with the advantage of being able to traverse the global network in a matter of milliseconds.

New technologies like Bitcoin and BitTorrent would crop up naturally as part of this arrangement. It would put power back into the hands of the people.

Sadly, it seems a future of locked-down app stores controlled by a small number of large corporations is much more likely.

(Sorry, I realized this started sounding like a Marxist rant halfway through. And I'm not even a Marxist! It just sucks that this fascinating future is one that we're not likely to experience, even though we're almost at the point where we could actually implement it. I think.)


Your idea is similar to the existing distributed search engines, which are up and running on many individual PCs in a P2P network.

Unfortunately, they are not quite practical either, due to security issues (opening the local firewall), duplicate results from every node, etc. It's hard to control without a centralized facilitator. However, agent technology provides a solution for that. Check it out here: http://bit.ly/1bvvIJ3


Where I live at least, nearly everyone has a wireless router, which hardware-wise should have no trouble talking to other wireless routers in nearby buildings. Often, there are even multiple wireless routers in the same building, which can't talk to each other except through the ISPs, which I find quite ridiculous.

So, in urban areas of the US at least, it really is just a matter of software.

Of course, a large scale wireless mesh network would probably be somewhat lacking in terms of performance, but we also still have wired connections to ISPs and could start setting up wired connections to each other. Peering with your neighbor (wirelessly or not) to share idle bandwidth shouldn't make things any slower than they already are...

The problem of course is that everyone would have to use a compatible mesh protocol. There are several options for a mesh network protocol now, but I don't know of any that scales well while remaining decentralized. Even if someone did come up with a good protocol now, considering how long the switch to IPv6 has been taking, I wouldn't expect any switch away from IPv4 to happen very soon.


Unfortunately I've noticed a major obstacle to getting nearby hardware talking to each other: WPA.

Seriously, why prevent people from using a network? If you're relying on WPA for security then you shouldn't be connecting to the Internet in the first place. The most common thing I hear when guests see that my WiFi is open is that people will 'steal the bandwidth', as if the local exchange will magically run faster if there are enough residences connected to it! Also, given the monopoly position of the phone provider, getting everyone to throw more money at them for the privilege of having multiple overlapping, mutually exclusive networks is not a good plan.


> So, in urban areas of the US at least, it really is just a matter of software.

And, if you're really zealous about it, the firmware. There was a high-priority FSF project a while back for free wireless router firmware. That listing mentioned something called "orangemesh," which now appears to be on hold. What are some of the other options out there on the software side?


Yes, probably too difficult to move entirely, but you can build it off of the current web, and just migrate it over, or: You don't move a mountain, you move stones.


Just throwing an idea at the wall here: maybe it could be combined with the decentralization of the power grid, i.e. smart grids? This would (a) give additional incentives to home owners (subsidizing their investment) and power companies (going after the broadband market) and (b) solve the question of where to get the bandwidth from.


One possible solution is a different protocol: http://named-data.net/

This looks extremely interesting, basically the entire network works like a giant CDN.


Yes indeed, NDN is a serious contender for the future Internet architecture. There are still some privacy concerns (due to the ubiquitous caching), and some performance/throughput issues due to signature verification on content packets (which are all signed), but a lot of research has been done on ICN and I really would love for it to be the next Internet.


The IndieWeb movement is pushing the constructs built on top of the web into a decentralized form using self hosted content and webhook notifications.

Example: Comments on a blog post.

Given a blog post at URL A, comments are created by posting a note on your own site at URL B, then using a webhook to notify the blog that such a comment exists.

Your content cannot go away as long as you host it, and it's not dependent on the blog's cooperation to be available, just to be findable.

http://indiewebcamp.com/comment
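
For the curious, the notification step is tiny. A minimal sketch, assuming the blog has already advertised a webmention endpoint (discovery via the rel="webmention" link is skipped, and the URLs are hypothetical):

    // Notify the blog at `target` that a reply exists at `source`.
    // A webmention is a form-encoded POST carrying both URLs.
    async function sendWebmention(endpoint: string, source: string, target: string) {
      const res = await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({ source, target }).toString(),
      });
      if (!res.ok) throw new Error(`webmention rejected: ${res.status}`);
    }

    // e.g. (hypothetical URLs):
    // sendWebmention("https://blog.example/webmention",
    //                "https://mysite.example/notes/42",    // my comment, on my site
    //                "https://blog.example/posts/hello");  // the post being replied to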


Doesn't that introduce the reverse problem though?

That makes someone else's content depend on the availability of your content - in the example of comments, the context and any intermediate replies can be lost if you stop hosting them. Leaving not a record of a conversation behind, but only indecipherable fragments.

A quick scan through the page at the link posted also seems to indicate the protocol depends a lot on content providers being well behaved. I'm assuming that somewhere in there there is handling for when content simply goes away without notification?


"the protocol depends a lot on content providers being well behaved"

I think this is true of existing content, except there are far fewer content providers and there's no protocol.


Why can't your blog periodically verify its links? It seems obvious that you can't depend on notifications that something is no longer available.
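
It could, cheaply. A sketch with hypothetical URLs: HEAD each stored comment source on a schedule and flag the ones that have gone away:

    const commentSources = ["https://theirsite.example/notes/42"]; // stored sources

    // Probe each comment source; a HEAD request is enough to see if it's alive.
    async function verifyLinks(): Promise<void> {
      for (const url of commentSources) {
        try {
          const res = await fetch(url, { method: "HEAD" });
          if (!res.ok) console.warn(`comment gone (${res.status}): ${url}`);
        } catch {
          console.warn(`comment unreachable: ${url}`);
        }
      }
    }

    setInterval(verifyLinks, 24 * 60 * 60 * 1000); // once a day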


so, trackbacks?


Oh yeah, the good old parsing of HTML comments for weird SGML data; that worked out well in the past ;) It is good for everybody that we tried something like that once, but please don't bring it back.


The idea of an open, free and decentralized web created by all for all is one of the main reasons I became a Mozillian. I invite all that think this is a good idea to check out their local Mozilla communities and help carry this mission onwards.

You can get guidance on how to get involved with Mozilla by following the instructions and filling the form at http://www.mozilla.org/en-US/contribute/

=)


Funny you say that. If you study history you will learn that the Mozilla organization was big on pushing proprietary, non-standards-compliant extensions. The rise of IE curbed their enthusiasm because the roles changed. I think it was in one of the interview videos at https://www.coursera.org/course/insidetheinternet


If you mean Netscape that was a different group of people at a different time with different goals. Some of those people carried over to the Mozilla project but the goals have always been different.


The internet is decentralized; the web never has been. The HTTP protocol doesn't support it. But it's time to change that. What we need is the possibility of peer-to-peer connections between browsers.


While I agree that TCP/IP is decentralized, unfortunately due to NAT and corporate firewalls, vast swaths of users have been relegated to second class citizens who can only act as data consumers. Unfortunately this same misguided approach to security, or as I call it suckerity, has been carried forward into IPv6.

There is some hope that WebRTC and other p2p technologies might alleviate the problem but then the task of punching through firewalls with techniques like STUN falls on developers. I'm not seeing an industry-led campaign to standardize a true p2p protocol based on something like UDP, because it would undermine the profits of data producers.

I could go into how the proper layering of TCP should have been IP->UDP->TCP instead of IP->TCP and a whole host of technical mistakes such as inability to set the number of checksum bits, use alternatives to TCP header compression, or even use jumbo frames over broadband, but what it all comes down to is that what we think of as “the internet” (connection-oriented reliable streams and the client/server model) is not really compatible with a distributed content-addressable connectionless internet that can work with high latencies and packet loss.

I think if this network existed, then extending HTTP to operate in a p2p fashion wouldn’t be all that complicated. Probably the way it would work is that you’d request data by hash instead of by address, and local caches would send you the pieces that match like BitTorrent. Most of the complexity will center around security, but I think Bitcoin and other networks of trust point the way to being able to both prove one’s identity and surf anonymously.
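
To make the request-by-hash idea concrete, here is a minimal sketch; the peer list and the /blocks/ endpoint shape are assumptions (in a real network they would come from a DHT keyed by the hash itself):

    import { createHash } from "node:crypto";

    // Hypothetical nearby caches/peers that may hold the block.
    const peers = ["http://cache1.example", "http://cache2.example"];

    async function getByHash(hash: string): Promise<Uint8Array> {
      for (const peer of peers) {
        try {
          const res = await fetch(`${peer}/blocks/${hash}`); // assumed endpoint shape
          if (!res.ok) continue;
          const block = new Uint8Array(await res.arrayBuffer());
          // The address doubles as the proof: verify before trusting any peer.
          const digest = createHash("sha256").update(block).digest("hex");
          if (digest === hash) return block;
        } catch {
          // unreachable peer; try the next one
        }
      }
      throw new Error("no peer had a valid copy");
    }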

Tim Berners-Lee may not be thinking about the problem at this level but we need more people with his clout to get the ball rolling. I wasted a lot of time attempting to write a low-level stream API over UDP for networked games before zeromq or WebRTC existed and ended up failing because there are just too many factors that made it nondeterministic. It’s going to have to be a group effort and will probably require funding or at least a donation of talent from people familiar with the pitfalls to get it right. Otherwise I just don’t think a new standard (one that makes recommendations instead of solving the heart of the problem) is going to catch on.


This is exactly the problem. We need a better transport layer.

You know, the speed at which SPDY has caught on has given me hope. Admittedly, that's at the application layer, but I think if Google were to get behind an initiative to create a better TCP, it would actually have a chance at working. Historically, Google's incentives were properly aligned with a decentralized Internet, but Google is more and more a content producer. So I'm not sure.

Until then, we'll have to rely on servers to bootstrap peer-to-peer networks, even Bitcoin, Bittorrent, and yes, WebRTC.


> but I think if Google were to get behind an initiative to create a better TCP, it would actually have a chance at working.

I don't think Google has much interest in a decentralized web. Google and Facebook are two of the prime examples of a centralized web.


Yeah, you got the point. They are on the list of those "reliant on big companies, and one big server". But "support geo-distributed clusters" remains an open task on Google's radar. See here:

http://highscalability.com/google-architecture

Unfortunately, with the existing architecture and infrastructure along with the team resources, they are focusing more on the algorithms of machine intelligence and AI.


>You know, the speed at which SPDY has caught on has given me hope. Admittedly, that's at the application layer, but I think if Google were to get behind an initiative to create a better TCP, it would actually have a chance at working.

You might find QUIC interesting (being implemented on chrome):

http://blog.chromium.org/2013/06/experimenting-with-quic.htm...

The "SPDY" application layer its implemented on top of it.. working at UDP.. so in theory, one can implement other sorts of applications layers on top of it..


Oh, thank you. I hadn't heard of QUIC. It's tunneled through UDP...does that mean it's NAT-friendly? Presumably so.


>It's tunneled through UDP...does that mean it's NAT-friendly? Presumably so.

Not as NAT friendly as you might think. There's a fair bit of discussion in the QUIC design document[0] around how NATs screw up the protocol and how to mitigate the problem.

[0] https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-saqs...


As far as a decentralized content-addressable network goes, the swift protocol[0] seems pretty close to what you're talking about. I heard about it here a while back, and was quite interested in it, but it doesn't look like it's moved much lately (the page has a copyright date of 2010).

[0] http://libswift.org/


(Co)author here. The libswift effort got converted into IETF PPSP eventually https://datatracker.ietf.org/doc/draft-ietf-ppsp-peer-protoc... Also, folks in Sweden tinker with the codebase, mostly from the video distribution angle.


Or a bunch of university research. This is exactly the kind of complex problem that research groups work on, but they tend not to get much attention until they're fully baked. The group I'm working with is considering what kind of toolstack we need for a distributed future and then building it from the ground up. If you're interested, you can find out more from the link below. One of the pieces hit 1.0 late last year and we're working on the other components too.

http://nymote.org/blog/2013/introducing-nymote/


Why do you say HTTP doesn't support it? If a browser doubled as a server, and each person hosted their own pages, how would that not be decentralized and P2P? Hell, it's been done: Opera Unite had just that model, with web pages and applications hosted by the browser itself: http://www.operasoftware.com/press/releases/general/opera-un...


The big problem with browser-as-server is that users typically want to power off their browsing devices, while they typically want servers to be always-on. Tying it through a cloud-hosted VPS or some local dedicated device (those wall-wart computers, or something else lightweight and low power but sufficiently capable), but then having the browser talk to it smartly (potentially from several devices simultaneously), would seem a better plan.


One possible solution is to distribute the hosted content to peers, so that when one person's browser is offline, hopefully someone else's is online to act as a server instead. That doesn't serve as a guarantee, but with a centralized fallback it could work. There are some very clever people already doing this with WebRTC.


Right, that works great assuming static content.


People interested in privacy and Free Software are already working on it: https://freedomboxfoundation.org/learn/


Right, certainly related - the vision may or may not be precisely what I described (I'm not sure one way or the other about browser integration) but certainly a great starting point to build off of if not entirely what we want.


Okay, technically HTTP does support it, but it's not standardized.


How so? What's the non-standard part?


Peer-to-peer connections are possible right now with Chrome and Firefox using WebRTC peerconnection.

The WebRTC connection is bootstrapped through a server (STUN and TURN), but that's the case with almost all peer-to-peer software (take a look into how Bitcoin or Bittorrent is bootstrapped if you don't believe me). But anyone can run a STUN and/or TURN server.

Once the connection is bootstrapped, it's entirely peer-to-peer between browsers.
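
For reference, the browser side is only a few lines. A sketch, with the signaling transport (how the offer/answer travels between the peers) left out and the STUN server just an example:

    // Open a data channel to another browser; STUN helps both sides
    // discover their public addresses so traffic can flow directly.
    async function connect(): Promise<void> {
      const pc = new RTCPeerConnection({
        iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
      });
      const channel = pc.createDataChannel("p2p");
      channel.onopen = () => channel.send("hello, peer");
      channel.onmessage = (e) => console.log("from peer:", e.data);

      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      // Ship pc.localDescription to the peer over any signaling channel,
      // then apply their answer with pc.setRemoteDescription(...).
    }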


Sure HTTP does. You can cache things, sign things, &c.

It's the current philosophy of "web apps", not "hypertext", that isn't decentralized.


Web apps are not an issue: https://unhosted.org/

The problem is not technical, it's social, political and economical.


Web apps are a bit of an issue, but unhosted web apps are not more of an issue than applications (as opposed to utilities composable in the shell) in general - which is a tremendous improvement.


Well, technically a browser can listen for incoming connections, but in any case the standards don't require it.



Don't be rude to HTTP; there are no limitations on decentralization in HTTP.


I'm curious, what will that enable?


The same thing a CDN achieves, just a lot more complex.


That doesn't seem nearly as important as having a decentralized data store.


I wrote about this as well: http://myownstream.com/blog#2011-05-21

Two years ago I became really passionate about the problem of decentralizing the consumer internet again. We can see with git and other tools how distributed workflows are better in many ways than centralized ones. The internet was originally designed to be decentralized, with no single point of failure, but there's a strong tendency for services to crop up and use network effects to amass lots of users, and VC firms have a thesis to invest in such companies. Even so, the future is in distributed computing, like Wordpress for blogs or Git for version control.

I started a company two years ago to build a distributed publishing platform. And it's nearly complete.

http://qbix.com/blog

http://magarshak.com/blog/?p=135

Soon... it will let people run a distributed social network and publish things over which they have personal control. And I'm open sourcing it:

http://github.com/EGreg/Q

http://qbixstaging.com/QP <-- coming soon


Git is an interesting example, even though the software itself is decentralized it seems to be most commonly used around centralized services. See how upset HN gets when github goes down.


Part of the difficulty of bringing this about is making p2p easy for users. I have high hopes that WebRTC data channels can start tearing these walls down by making it as simple as opening a web app.


> Part of the difficulty of bringing this about is making p2p easy for users.

As a general concept, P2P is easy for users - using bittorrent clients, TPB, etc. is essentially mainstream.

The one thing that would be nice is browser support - I would love it if Chrome had the option to download .torrent files using the Chrome download manager.

Thanks to web seeds[0], this would allow providers to convert any download to a torrent instead, both increasing use of P2P and decreasing their own server load.

Users wouldn't even need to know that they're using Bittorrent to download files - all they'd see are faster download speeds.

[0] https://en.wikipedia.org/wiki/BitTorrent#Web_seeding


> As a general concept, P2P is easy for users - using bittorrent clients, TPB, etc. is essentially mainstream.

I wish that were true, but I am forced to disagree; it's been my experience (and that of my tech-savvy friends with whom I have discussed this UX shortcoming) that most people don't know how to torrent (much less set up and use freenet). Showing my parents or most of my siblings how to use a torrent client in conjunction with a tracker site is not going to be a fruitful task. If it's not as easy as Youtube or Netflix (click and watch), they aren't going to put in the effort to learn or remember how to do it.


Opera has had this for about 8 years. I used Opera heavily back then, but I never felt that this needed to be baked in. Other Opera features I miss much more, most importantly free keyboard hotkey programming. I always had y, x, c (without modifier) mapped to back, forward, and fast forward.


Not only easy, but also fast and reliable enough to compete with centralized services.


Fair point. A significant reason that p2p networks like freenet tend to be slow is because of low adoption - the more users there are, the faster it can be. I submit that making p2p applications easy to use will in turn make them faster than we have seen before, because adoption rates promise to be much higher.


We need IPv6. We need every device to be fully connected, not the half-assed solution we have now. Full connectivity plus ever-increasing distributed and unused computing capacity is the right playground for a new set of network usage paradigms.

The mainframe->distributed->mainframe cycle will then once again swing to distributed (yes, we now live again in a mainframe age).


This actually has nothing to do with the web in general, or the internet, or politics, or personal liberties, or peer to peer networks. This is about designing stable distributed applications.

Almost every really big, really stable website (or network service) is built on a set of distributed applications. Global data redundancy is just one of the considerations. If you want your application available everywhere, all the time, you have to design it to withstand faults, to distribute data and computation, and to do this over long distances, and still perform the same actions the same way everywhere. That's all a de-centralized web needs to look like, and it's actually already implemented in many places.

What I think Tim is saying is that we need to move away from concepts that centralize data and computation and use existing proven models to make them more stable in a global way. And I'm totally behind that. But if you think we're going to get there with p2p, self-hosted solutions, new protocols, new tools, new paradigms, or a new internet, you've completely missed the boat. We have had everything we need to accomplish what Tim wants for a while. It just has to be used properly (which some companies actually do).

But good luck convincing most companies to spend the time and money doing that...


I partially agree with you, in the sense that we don't need a new network with p2p or self-hosted solutions, but we do need stable distributed applications. How?

I cannot agree with you that this is realized via data redundancy and replication. The way you mentioned is just what Google is doing right now, which is not true distributed computing. Google's architecture relies on replicating services across many different machines to achieve distributed computing/storage and fault tolerance across the world. See the reference here: http://static.googleusercontent.com/external_content/untrust...

The real distributed computing solution for providing a well-organized web has been proposed: build a layer on top of the existing physical infrastructure of the internet to present the web in a decentralized manner in a systematic way. http://bit.ly/MwT4rx


As far as I'm aware there's no such thing as "true" or "real" distributed computing. It's a wide field of study with many different overlapping concepts.

The paper you linked to does describe how queries are resolved by Google, and not quite the whole picture required for a typical distributed computing model. But to say it's not distributed computing is... confusing. Clearly they must support this model over many sites the world over, and we know from other documents that they geo-locate data to make it more convenient, meaning it's portable and partitionable. Sounds like a distributed computing system to me.


Yes, I agree that it's hard to define what true distributed computing is. That's only my personal opinion.

A good reference could be from Wikipedia: a distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal.

You can find from the blog page here about distributed computing: http://bit.ly/1jkySqU


I always chuckle when the title of an article was obviously changed after first being put online, because the slug uses a different word:

"Tim Berners-Lee: we need to re-decentralise the web"

vs

"tim-berners-lee-reclaim-the-web"


That doesn't imply the title was changed. Shorter slugs are nicer, and there could be some policy involved in keeping them short. It doesn't mean anything.


"reclaim" vs "re-decentralise"

The actual article never mentions the term "re-decentralise" at all.


Still, it's standard practice. I've done it often for a few news sites I've managed content for. A word doesn't even have to be in TFA; it just needs to sum it up well enough.


It's done all the time to get search engines to rank the page higher for the keywords mentioned and listed.



Help us decentralize the internet: https://www.diyisp.org/dokuwiki/doku.php

There are a lot of diy-isps out there. Engage with your community and become part of a diy-isp, e.g. by hosting services there, peering with them, etc.


I've been trying to think about "rethinking" government for quite a while. The idea of a web-based governance system seems really appealing, in the sense that I think the greatest strength of the web is how quickly ideas can go from concept to execution. It's about as close to a meritocracy as we've ever come, and that's simply because the gatekeeper is thrown away. Think about how many great things have sprung out of places like Reddit. Ordinary people who have a quick flash of a good idea can take 30 minutes to type something up... and sometimes it gets traction. Getting things done in our current system requires an inhuman persistence, and the luck of making a relevant connection. Even if you have a good idea, it's a difficult thing to execute.

I have this image of a framework/protocol in my head for allowing various policies to be created and built on top of each other. As a programmer I pretty much used object-oriented programming as an inspiration, though I guess even OOP had its inspiration from biology. It's a way for an organic system to grow and adapt robustly. I think that, just like computer programs, government needs that too.

Added to that, if the web has a decent self regulation mechanism, that makes the justification of regulating the web even harder.

The one problem is, the advantage "physical" governments have is sovereignty. If Amazon starts doing illegal things, the FBI can arrest Jeff Bezos. The internet doesn't have an equivalent means.

However, I think there is a way to have the essence of sovereignty, and that is through a new crypto-currency like Bitcoin. Imagine if, to take part and receive the benefits of this online sovereign nation, you had to register your receive address. If the sovereign entity had a way to embargo your ability to accept payments in the currency, it might be enough of a penalty to bring you back in line with whatever regulations were created. I think it also provides another means of giving legitimacy to a crypto-currency.

An example of something that would be cool is this: you want to create an open source space program, so you propose the idea on your local netizen forum. The idea has overwhelming support, so someone creates a new policy component, and gives permissions to suction tax funds. I can imagine something like that happening in hours. Imagine a public effort going from idea to reality in hours.


You, like a whole bunch of the new technocratic futurists, have almost no ability to discern fantasy from reality.

What you're talking about is popularly known as science fiction. You should be writing books and not pretending like you know the first thing about politics.

The reason why things take so long in reality is that there is a multitude of voices. Reddit does not promote pluralism, but rather a giant circle-jerk of like-minded individuals. Compromise is the name of the game in politics, something that the goon squad on Reddit has almost no understanding of.


You offer a lot of piss and vinegar, yet little insight. Please elaborate on "fantasy and reality" and "science fiction" instead of throwing up bombastic refutations. Also, the parent post makes no mention of Reddit.

What are you actually saying here? If you're going to berate the parent comment, then you're going to have to (at the very least) back it up with a few reasons why his/her system would fail.


The basic tenet is known as 'conservatism'. You may be surprised to realize just how lacking in radicalism sustainable political systems are. US-style liberal democracy wasn't a large departure from UK-style parliamentary democracy. The president is modeled on the King, the Senate on the House of Lords, and the House of Representatives on the House of Commons. Slowly over time the Senate and the Presidency were made more directly democratic. Our system of common law has a continuous chain of precedent going back to England for many hundreds of years. Capitalism was going on for hundreds of years before it was described by Adam Smith.

You know what always fails? When a group of nitwits gets together, decides they've got it all figured out, and then subjects the rest of society to their harebrained ideas. In the last 100 years, engineers have played a large role in a number of these. See: Franco, Pinochet, Mussolini.

So yeah, expect a lot of vitriol from students of history when some engineers start getting really excited about playing the role of political radical.

The meritocracy SV loves so much is basically a nightmare when applied to society at large. You'll have designed a system by and for computer engineers, with the premise that all of the unwashed masses will have to either sink or swim in your newly founded utopian system. You're going to fuck it up, and along with that you're going to fuck up millions of lives.

Are there any fucking adults on this forum any more? Do your parents know where you are? If not, you should probably call and let them know; they are probably worried sick.


From the parent post:

> Think about how many great things have sprung out of places like Reddit.

To think about what is lacking in the fantasy, let's start with what happens when there is no overwhelming support for anything. How does a whiz-bang networking technology solve fundamental human disagreements?

It's also worth asking if speed of execution is really the defining characteristic of a good government. Plenty of horrible governments have been able to get horrible things done quickly and efficiently.


It's not compromise. Compromise implies you give a fuck about the wills and whims of the other party or parties out of concern or compassion or belief in some objective fairness. That's not how shit works. You aim to get what you can any way you can, or you lose to someone else who does. That's politics.


What usually happens is that a policy will have some ridiculously extreme measures added to it, which no sane person would ever agree to. Then, in the name of "compromise", those extreme measures will be taken out and the policy ends up passing in its original form. As an added bonus, those who oppose the policy's original measures are "paranoid" and "unwilling to compromise".

Some examples which spring to mind are the Snoopers' Charter and Digital Economy Act.


I fear that you are overestimating humanity: "The idea has overwhelming support, so someone creates a new policy component, and gives permissions to suction tax funds"... You are not afraid that this will lead to radical redistribution of wealth eventually resulting in loss of motivation to create wealth? (Yes, the Atlas Shrugged scenario.)


"You are not afraid that this will lead to radical redistribution of wealth eventually resulting in loss of motivation to create wealth?"

The motivation to create wealth justification for inequality is one of the most morally bankrupt ideas ever conceived. Billions of people around the world live in abject poverty; to blame them for their situation takes a profound case of the fundamental attribution error.


Actually, I fully expect people to make terrible decisions. That's why I tried to make the system compartmentalized, with specific rules on what kind of policy can go where. In this way, if people go overboard, the system can be ditched easily and we can try again. Today, failure is not an option, which means it's way too risky to try new things. I want to design the system so failure is easy to recover from.

I should do a better write up sometime.


Anyone who fears the Atlas Shrugged scenario isn't living in reality anyway.


Someone else has similar ideas, but I think I prefer their vision of the future:

http://bitcoinism.blogspot.com/2013/12/lex-cryptographia.htm...

It leverages Bitcoin, but not in a way to "suction" taxes away from unwilling participants.


My own rant on this from 2008 from a different perspective: http://timepedia.blogspot.com/2008/05/decentralizing-web.htm...


Excellent article, and almost 6 years old. There are many projects to mention here, but so far none of them took off as far as I know. Or did they?

After publishing my own rant about the same topic [0], Dave Winer commented via Twitter and argued that RSS is a very successful tool that decentralizes the web. I didn't understand his point at the time but today I think he is absolutely right.

His point was that when trying to fix the problem we should look at what already exists and works. Why did RSS succeed? Probably because it solved a problem for many people. It did not try to reinvent old things in a decentralized way but offered real benefits. Another good example of this is git vs. svn (decentralized vs. centralized revision control).

[0] http://www.soa-world.de/echelon/2011/09/the-decentralized-we...


Fantastic! Thanks for sharing. I've been beating the same drum for years within my circle of friends, and I always tend to get the "huh?" reaction. It's always refreshing to discover somebody else who seems to be running on the same wavelength. :)

I miss USENET. Nowadays, my digital identity has been spread to the four corners, across Facebook, Twitter, various unrelated phpBB-based sites, and places such as /. and HN. Even if I wanted to, there is no possible way that I could re-locate every digital utterance I have ever made publicly. Not only that, but I don't truly "own" anything I write anymore. It's stored in some unmarked server somewhere, stuffed into a schema that has no published documentation. I still have e-mail archives going back 10+ years, but if Facebook ceases to exist 10 years into the future, it will be like the (admittedly few) worthwhile written conversations and interactions I had with people on there never happened. There will be no record of it.

There's got to be a better way.


Given the support Google has from the tech community, I don't see much movement on this in the near future.


Google has support insofar as they still offer several really good products, but after the Reader shutdown, KitKat/Nestle branding, the NSA revelations, etc, I think they're a whole lot less geek-cool.

They're not the new Microsoft quite yet, and they may never be, but Google is a long way from their peak of hacker-ish credibility.


The idea that their intentions are benevolent, that they are somehow more than just a self-interested corporation, is still strongly defended around here.

See: https://news.ycombinator.com/item?id=7184912


Given the years of covert actions slowly coming to light, this is changing. I'm not just talking about relationships with three-letter agencies but also actions like colluding to push down developer salaries, black-boxing their search algorithms, suddenly blacklisting competitors in their search rankings (e.g. Rap Genius), the WiFi and Streetview privacy invasions, etc... How many times have they already faced antitrust accusations?

Add to that the fact that Google is THE face of the recent tech backlash. Others don't like that they employ a tiny fraction of the workforce of a company like GM while "hoovering up public subsidies and then avoiding taxes" [1]. If the public backlash continues or intensifies, we'll all start to face (some) repercussions. So how long will the tech community support Google en masse?

[1] http://www.economist.com/news/21588893-tech-elite-will-join-...


A lot of that is a tempest in a teapot. Rap Genius was blacklisted for good reason: they were clearly gaming the system with blackhat SEO techniques.

Antitrust accusations? You mean those funded by Microsoft astroturf lobbying groups?

And just listen to your other complaint: Google is the face of rich, privileged techies, yet at the same time they're uncool for allegedly pushing down salaries, as if these already-hated SF techies should be showered with even more money? If you look at the publicly released transcripts, Steve Jobs was the ringleader of that effort anyway, and threatened to "go to war" with some of the players if they poached Apple employees. Why isn't Apple facing similar outrage over this? Steve Jobs tried to price-fix engineering salaries and price-fix eBooks.


Yeah - those are indeed pretty weak complaints.

Becoming anti-privacy, pro-DRM, and against net neutrality are much more significant betrayals.

The reason Apple doesn't get such a backlash is that they haven't represented themselves as morally superior. Everyone knows Steve Jobs was a ruthless businessman, and that Apple is not a philanthropic organization.


Why Tim Berners-Lee is wrong about DRM in HTML5

http://craphound.com/?p=4651



And while we're at it, some links to the real origins of the web:

For ADD-ers: http://youtu.be/hSyfZkVgasI

The story: http://www.archive.org/details/paulotlet

Long PR story short: RPC was already prevalent; that is how you controlled your stuff remotely, as Tim did in his day job. Today, if you asked for a NeXT-like toy, you'd be denied were you an average Eastern member of CERN (n.b. its Western equivalent). As a true Westerner, TBL asked for one and got it. Put the gopher link address pointer in the reserved field of the text font properties (where things like the bold and italic properties are stored) and voilà. Mix in a simplified SGML for markup (as was customary back then for typefaces and layout on printers). One could also hire cheap, disposable students to actually write the web clients and servers cross-platform (the web's real virtue/value).

Had the "web" run only on NeXT, it would be long extinct; it would never have taken off in the first place. Clicking on text is how you used Oberon, or even the Genera Document Examiner (perhaps even over the network). It was the spirit of the times. Even Jobs demoed a WYSIWYG client-server application in five minutes on the NeXT. What an insult.

On linking and hypertext: all the post-war-era anglo-saxon stuff is spin (including the myths that Vannevar and Tim invented the web). The real stuff comes from Belgium, as did Tim's boss, by the way.


Tim Berners-Lee: "I would have got rid of the slash slash after the colon. You don't really need it. It just seemed like a good idea at the time."

That was his answer when asked what he would have done differently!


I think that was rather a joke. A good one, though.


Let me know if you'd prefer to write mailto:// or http:www.fish.com


I'm surprised by the number of comments declaring "Berners-Lee supported DRM, so his words are meaningless." Not that I put much weight on his words myself, though not because of some silly DRM issue (frankly, I didn't even remember it), but because I can't really say what his contribution to the web (or anything relevant) has been in the past couple of decades. Looking at his Wikipedia page, it mentions that he founded the W3C in 1994, and that's about it. Everything since seems to be some sort of fluff, often with grandiose names and concepts but very little substance.

So I'm not surprised that people don't put weight on his words; what surprises me is the reason they don't, and how recent that opinion apparently is.


Wow, the giant flashing ad on the left of that article was so distracting I was unable to read the text. I eventually opened up the web inspector and just deleted the damn thing from the DOM.


AdblockPlus - I never even knew there was a giant flashing ad on the left of the article.


"A plugin is needed to display this content." is all i see ;)


Do you not care to use (or perhaps not have the capacity to use) an adblocker?


Digital Restrictions Management will take care of that.

Not only mandatory flashing, but also tracking.


How? How can DRM force me to display the iframe containing an ad?


DRM can force you to do whatever it wants. It's proprietary code running on your system.

If you want to view the content, it will ensure that you've seen the ad before it renders the content.
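To make that concrete, here is a toy sketch in Python of the gate such a plugin could implement. All the names are hypothetical; no real CDM exposes anything like this, it just illustrates the "no ad, no content" logic:

    # Toy illustration only: a plugin-side gate that withholds decryption
    # until the ad has been shown. All names here are hypothetical.
    def xor_decrypt(key, data):
        # Stand-in for a real cipher, just so the example runs.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    class ContentGate:
        def __init__(self):
            self.ad_completed = False

        def on_ad_finished(self):
            # The ad iframe reports back to the plugin, not to the page.
            self.ad_completed = True

        def decrypt(self, key, ciphertext):
            if not self.ad_completed:
                raise PermissionError("ad not viewed; refusing to decrypt")
            return xor_decrypt(key, ciphertext)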


The "on your system" part is what prevents this from happening. However, with features like remote attestation, a DRM platform could be built. If users start accepting PCs locked down like tablets, then this might be an issue.


I'm not sure I follow. The DRM extensions are basically hooks into proprietary plugins. If the plugin decides that you didn't view the ad, it will refuse to play the requested media.

There can and will be workarounds, but thanks to lobbying, those workarounds are criminal in many parts of the world.


At the moment, those proprietary plugins execute inside an OS and on a processor you "fully" control. Nothing really stops you from flipping a jump statement (except it being more complicated than that). I believe that's what the poster was saying: nothing can force him on his own system.
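Purely as an illustration of what "flipping a jump statement" means at the byte level (the file name and offset here are made up, and locating the right offset in a real binary is the actual work):

    # Flip a conditional jump in a binary: x86 JZ (0x74) -> JNZ (0x75),
    # inverting the "did you watch the ad?" branch. OFFSET is hypothetical.
    OFFSET = 0x1A2B

    with open("plugin.bin", "rb") as f:
        blob = bytearray(f.read())

    assert blob[OFFSET] == 0x74, "expected a JZ opcode at this offset"
    blob[OFFSET] = 0x75  # invert the branch

    with open("plugin-patched.bin", "wb") as f:
        f.write(blob)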


Flipping a jump statement is a criminal offense in many countries now.


I've gotten to the point where I just tune those out. I wouldn't have even known there was an advertisement on that page if you hadn't pointed it out.


Advertisers must hate you, you know one weird trick to tune out advertisements.


I'm having a really hard time stomaching anything Tim Berners-Lee says these days, knowing he's such a big proponent of bringing DRM to his "open web" (and inevitably turning it into a more "closed" one in the end).

Also, isn't this decentralization? Not having the data in one mega-American cloud? It seems to me that Tim has been doing a lot of PR for big companies lately, masked as a benefit for the users, just as he's doing with the whole DRM thing, which he actually claims would benefit users.

But let's assume he's not trying to be malicious here, and that he has a point. Here's the thing: yes, I agree that having every country demand that companies host the data locally will make it very hard for innovation to spread, and progress will therefore slow.

HOWEVER, right now it seems we only have a choice between this and allowing the US to get its hands on all the data. I didn't even see Obama mention anything about the NSA no longer being allowed to tap the world's fiber cables.

So until the US gets serious about not doing shitty stuff like that to the world's users, I absolutely think all the other countries should try to force companies to host data locally.

There is one other solution, one that would allow companies to keep the data wherever they want: encrypting everything by default, so that even they cannot decrypt the data without the user allowing it. So things like OTR for chat systems, DarkMail/PGP for e-mail systems, and so on.

Make the web completely trustless, so other countries don't have to trust Google or the American government not to get at their data; they can be assured there is nothing to get other than strongly encrypted data.
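A minimal sketch of that model using the Python cryptography library's Fernet recipe (the upload step is a hypothetical stand-in): the key is generated and kept on the user's device, so the provider only ever stores ciphertext.

    # Client-side encryption: the service only ever sees opaque bytes.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # generated and kept on the user's device
    box = Fernet(key)

    ciphertext = box.encrypt(b"my private message")
    # upload(ciphertext)          # hypothetical: provider stores ciphertext only

    # Only the key holder can ever read it back:
    assert box.decrypt(ciphertext) == b"my private message"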

Unfortunately, this option isn't even on the table with the big companies right now, and the US government will push back against companies that try it, too. The only way to get it on the table is for them to believe that other countries will inevitably force them to store data locally and build local data centers. Only then might they start preferring the "encrypt everything, with the user holding the key" solution as an alternative to storing data in every country or region.

So until that happens, I absolutely support countries demanding that data be stored locally, because I suspect that minutes before that starts happening, US companies and the US government will agree to let the data be fully encrypted and trustless, but not any sooner. So in the end we'll get what we want, and the Internet will be safe.


This talk is very relevant too: http://www.youtube.com/watch?v=QOEMv0S8AcA


I think a powerful thing would be a p2p anonymous network interface that ships by default with the Linux kernel.

So you would not only have eth0; you would also have p2p0: anonymous, with a unique ID, backed by a DHT.

You could then reach your web server either via 192.168.0.100 or via its short unique ID.

SSL communication would not be based on central CAs, because that model has long been compromised.

I think we should build this together. Maybe it's GNUnet?
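As a sketch of how such a unique ID could work without any central registry, here is roughly the idea cjdns uses: hash a node's public key into an address, so the address is self-certifying. Everything here is illustrative and simplified (real cjdns grinds keys until the first byte is naturally 0xfc; this just forces the prefix), and it is not GNUnet's actual scheme:

    # Derive a self-certifying node address from a public key hash,
    # roughly in the spirit of how cjdns maps keys into fc00::/8.
    import hashlib
    import ipaddress

    def node_address(pubkey: bytes) -> ipaddress.IPv6Address:
        digest = hashlib.sha512(hashlib.sha512(pubkey).digest()).digest()
        return ipaddress.IPv6Address(bytes([0xfc]) + digest[1:16])

    print(node_address(b"example public key bytes"))

Because the address is derived from the key, whoever answers at that address can prove ownership by signing with the key; no CA needed.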


"The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs." -- Alan Kay



Why should we care what some guy says when that same guy thinks putting DRM in an open standard makes sense?

Tim Berners-Lee needs to withdraw his statement, acknowledge his mistake and publicly apologize before his voice can carry any weight again.


What we need is Gopher, Tim.


> we need a bit of anti-establishment push back

Wow. Understatement of the century.


Decentralizing the web won't really help when all communication flows through a handful of telecom companies.


In the absence of a relevant plug for a library I've written (shame), here's one for a library someone else wrote:

http://www.infoq.com/presentations/private-backend

tl;dw: Next time you build a web service, let your users bring their own private Drive, Dropbox, or whatever accounts to host their data (or at least the data that doesn't get JOINed to other tables all over the place). One has to pay for data ownership, but it's not a per-service charge.
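Something like this, roughly (the endpoint, token, and helper are all made up; a real app would go through the storage provider's OAuth flow and API):

    # The app writes to storage the *user* controls (Dropbox, a personal
    # server, etc.), authorized by a token the user granted the app.
    import requests

    STORAGE_URL = "https://storage.example.com/alice/myapp"  # hypothetical
    TOKEN = "token-granted-by-the-user"                      # hypothetical

    def save(key: str, data: bytes):
        resp = requests.put(
            f"{STORAGE_URL}/{key}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            data=data,
        )
        resp.raise_for_status()

    save("profile.json", b'{"theme": "dark"}')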

The article's kind of a tease. It's easy to say things like "open" and "decentralized", but I guess if I want to know which specific implementations were discussed, I'd have to buy the magazine.


Maybe he should revert his stance on DRM in HTML, then. Supporting it goes completely against the idea of decentralizing the web, since it gives control to gatekeepers and takes it away from users.


They suggest that the NSA may be actively using a quantum computer to break mainstream encryption (or that it may happen soon).

Puh-leez.


The web was never decentralized. The internet was, and kind of is.

By "the web" I mean, basically, HTTP websites on a bunch of servers.


He needs to get over that slash slash. Really, it's fine. He's hung up on it.


Let's decentralize first by removing the centralized generic resources that consume too much of the traffic, like Facebook, Wikipedia, Amazon and Google.


How do you propose "removing" Amazon, Wikipedia, et al?


I imagine there are some legal ways to do that.



