
Tim Berners-Lee: We need to re-decentralise the web - amirmc
http://www.wired.co.uk/news/archive/2014-02/06/tim-berners-lee-reclaim-the-web
======
doctoboggan
Perfect timing, I just released a tool that enables a more decentralized web.
It uses BitTorrent Sync to distribute files, and I plan on implementing
Namecoin or colored coins for name resolution. As soon as an open source,
stable btsync clone is released I plan on swapping it in.

The biggest drawback is that you can only serve static content, although I
think many websites today would be fine with this restriction.

You can read more on my website: [http://jack.minardi.org/software/syncnet-a-decentralized-web...](http://jack.minardi.org/software/syncnet-a-decentralized-web-browser/)

Follow along with the development on github here:
[https://github.com/jminardi/syncnet](https://github.com/jminardi/syncnet)
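The name-resolution step could be sketched like this (a toy illustration, not SyncNet's actual code; the record format, site name, and secret are all invented for the example):

```python
# Purely illustrative: a Namecoin-style key/value record maps a
# human-readable name to the content key (here, a stand-in for a
# BitTorrent Sync secret) that the browser would then sync files from.
RECORDS = {
    "d/jack-blog": {"sync_secret": "BFAKESECRETFORILLUSTRATIONONLY"},
}

def resolve(name: str) -> str:
    """Look up the sync secret registered under a site name."""
    return RECORDS[name]["sync_secret"]

secret = resolve("d/jack-blog")
# A client would now hand `secret` to the sync backend and render
# whatever static files it receives.
```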

~~~
Sir_Cmpwn
BitTorrent Sync is the wrong foundation for this revolution. It should be open
source from top to bottom.

~~~
chroem
_Free software_; the term is free software. In other words, software that
respects your freedom.

Refer to this: [https://www.gnu.org/philosophy/free-software-for-freedom.htm...](https://www.gnu.org/philosophy/free-software-for-freedom.html)

~~~
Sir_Cmpwn
And it's not Linux, it's GNU/Linux!

~~~
chroem
Ordinarily I think Stallman takes that way too far, but in this case the exact
motivation behind free software was being described, yet it was still placed
under the banner of open source. I think there's a time and a place for both
terms, but when you're talking about resisting the NSA, you are (or should be)
referring to free software.

------
MikeTaylor
It's great that Berners-Lee is saying this, but he'd have retained a lot more
of his authority to pronounce on the future of the Web had he not thrown his
weight behind the W3C's horrible plan to put DRM into HTML.

~~~
desireco42
Let's crucify him for that :)

I hear what you are saying, but there is also a reason for DRM. Otherwise we
will have Flash forever. What I am saying is, I at least can understand why he
did it.

~~~
fafner
Flash and Silverlight are dying anyway. So it's up to the movie industry to
bend over, not us. And you know, at least Flash was separate from the browser
and ported to several systems. DRM in HTML5 will mean that Google, Apple, and
Microsoft will simply bundle their DRM modules and the free software world can
fuck off. DRM in HTML5 is far worse than the status quo.

~~~
desireco42
I don't completely share your assertions. In order for commercial companies to
be able to use HTML5, they need some form of DRM. This is just enabling them
to do business.

I believe that free and open, given some semblance of an equal playing field,
will always win.

Just to be clear, if they came and said, no DRM in HTML, I would stand behind
that as well. I just accept their judgement that this was necessary at the
moment for things to work.

I don't think only my view is right, so I might be wrong here, but this is
what I believe.

~~~
fafner
They don't need DRM. DRM does not work. Just go to pirate bay and you'll find
every TV series, movie (with DVD/BluRay/Stream release), album immediately
after their release. They think they need DRM. But in reality they don't. It
doesn't stop piracy at all. It's a burden only to their legal customers and
not to the pirates. It is all about easy and affordable access.

If a company insists on using DRM then they should simply create their own
proprietary plugin like Flash. But there is absolutely no reason for us to
destroy the open web and make it rely on proprietary binary modules.

~~~
agildehaus
It stops plenty of piracy as without it piracy could be as easy as a single
click from a place like Netflix, Google Music, etc.

Just because it's easy for you to circumvent doesn't mean it's easy for the
everyday user.

But totally agreed that it doesn't belong in web standards.

~~~
fafner
Everyday users fail at saving a youtube video. And someone who knows how to
rip a youtube video usually also knows about pirate bay or other pirate
websites.

~~~
seabee
Even worse, there's a whole bunch of malware Youtube downloader
programs/browser plugins. It's pretty crazy but using TPB is at least easier
and safer.

------
dbingham
Decentralizing the web's software isn't good enough. We need to decentralize
the hardware. Right now, connections to the web look like a tree, where a
whole bunch of connections get funneled through an ISP. That ISP has the
power: the power to throttle, the power to block, the power to record. And
that ISP can be pressured by other powers. We need to decentralize so that
instead of looking like a hierarchical tree, the internet looks like a graph,
with each building forming a node that connects to its neighbors.

Of course, the amount of work it would take to build such a web and move to it
likely rules out the possibility of it ever happening. I mean, how do we go
about forming a movement to build this? It only works, really, if everyone's
on board.

~~~
aij
Where I live at least, nearly everyone has a wireless router, which hardware-
wise should have no trouble talking to other wireless routers in nearby
buildings. Often, there are even multiple wireless routers in the same
building, which can't talk to each other except through the ISPs, which I find
quite ridiculous.

So, in urban areas of the US at least, it really is just a matter of software.

Of course, a large scale wireless mesh network would probably be somewhat
lacking in terms of performance, but we also still have wired connections to
ISPs and could start setting up wired connections to each other. Peering with
your neighbor (wirelessly or not) to share idle bandwidth shouldn't make
things any slower than they already are...

The problem of course is that everyone would have to use a compatible mesh
protocol. There are several options for a mesh network protocol now, but I
don't know of any that scales well while remaining decentralized. Even if
someone did come up with a good protocol now, considering how long the switch
to IPv6 has been taking, I wouldn't expect any switch away from IPv4 to happen
very soon.

~~~
chriswarbo
Unfortunately I've noticed a major obstacle to getting nearby hardware talking
to each other: WPA.

Seriously, why prevent people from using a network? If you're relying on WPA
for security then you shouldn't be connecting to the Internet in the first
place. The most common thing I hear when guests see that my WiFi is open is
that people will 'steal the bandwidth', as if the local exchange will
magically run faster if there are enough residences connected to it! Also,
given the monopoly position of the phone provider, getting everyone to throw
more money at them for the privilege of having multiple overlapping,
mutually-exclusive networks is not a good plan.

------
donpdonp
The IndieWeb movement is pushing the constructs built on top of the web into a
decentralized form using self hosted content and webhook notifications.

Example: Comments on a blog post.

Given a blog post at URL A, comments are created by posting a note on your own
site at URL B, then using a webhook to notify the blog that such a comment
exists.

Your content cannot go away as long as you host it, and it's not dependent on
the blog's cooperation to be available, just to be findable.

[http://indiewebcamp.com/comment](http://indiewebcamp.com/comment)
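The flow above can be sketched roughly like this (a hedged illustration of the webmention-style ping; the URLs are made up, and a real client would first discover the blog's advertised endpoint before posting):

```python
from urllib.parse import urlencode

def build_webmention_body(source: str, target: str) -> bytes:
    """Form-encode the source/target pair for the notification POST."""
    return urlencode({"source": source, "target": target}).encode("utf-8")

# URL B: the comment on your own site; URL A: the blog post it replies to.
body = build_webmention_body(
    source="https://example.com/replies/42",
    target="https://example.org/blog/some-post",
)
# A real client would POST `body` (as application/x-www-form-urlencoded)
# to the blog's webmention endpoint; the blog then fetches `source` to
# verify it actually links to `target` before showing the comment.
```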

~~~
GrinningFool
Doesn't that introduce the reverse problem though?

That makes someone else's content depend on the availability of your content -
in the example of comments, the context and any intermediate replies can be
lost if you stop hosting them, leaving behind not a record of a conversation
but only indecipherable fragments.

A quick scan through the page at the link posted also seems to indicate the
protocol depends a lot on content providers being well behaved. I'm assuming
that somewhere in there there is handling for when content simply goes away
without notification?

~~~
edoloughlin
_the protocol depends a lot on content providers being well behaved_

I think this is true of existing content, except there are far fewer content
providers and there's no protocol.

------
soapdog
The idea of an open, free and decentralized web created by all for all is one
of the main reasons I became a Mozillian. I invite all that think this is a
good idea to check out their local Mozilla communities and help carry this
mission onwards.

You can get guidance on how to get involved with Mozilla by following the
instructions and filling the form at [http://www.mozilla.org/en-
US/contribute/](http://www.mozilla.org/en-US/contribute/)

=)

~~~
rasz_pl
Funny you say that. If you study history you will learn that the Mozilla org
was big on pushing proprietary, non-standards-compliant extensions. The rise
of IE curbed their enthusiasm because the roles had changed. I think it was in
one of the interview videos at
[https://www.coursera.org/course/insidetheinternet](https://www.coursera.org/course/insidetheinternet)

~~~
amaranth
If you mean Netscape, that was a different group of people at a different time
with different goals. Some of those people carried over to the Mozilla
project, but the goals have always been different.

------
nawitus
The internet is decentralized; the web never has been. The HTTP protocol
doesn't support it. But it's time to change that. What we need is the
possibility of peer-to-peer connections between browsers.

~~~
zackmorris
While I agree that TCP/IP is decentralized, unfortunately due to NAT and
corporate firewalls, vast swaths of users have been relegated to second class
citizens who can only act as data consumers. Unfortunately this same misguided
approach to security, or as I call it suckerity, has been carried forward into
IPv6.

There is some hope that WebRTC and other p2p technologies might alleviate the
problem but then the task of punching through firewalls with techniques like
STUN falls on developers. I'm not seeing an industry-led campaign to
standardize a true p2p protocol based on something like UDP, because it would
undermine the profits of data producers.

I could go into how the proper layering of TCP should have been IP->UDP->TCP
instead of IP->TCP and a whole host of technical mistakes such as inability to
set the number of checksum bits, use alternatives to TCP header compression,
or even use jumbo frames over broadband, but what it all comes down to is that
what we think of as “the internet” (connection-oriented reliable streams and
the client/server model) is not really compatible with a distributed content-
addressable connectionless internet that can work with high latencies and
packet loss.

I think if this network existed, then extending HTTP to operate in a p2p
fashion wouldn’t be all that complicated. Probably the way it would work is
that you’d request data by hash instead of by address, and local caches would
send you the pieces that match like BitTorrent. Most of the complexity will
center around security, but I think Bitcoin and other networks of trust point
the way to being able to both prove one’s identity and surf anonymously.
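The "request by hash" idea can be illustrated with a toy content-addressed store (just a sketch of the concept, not any real protocol):

```python
import hashlib

class ContentStore:
    """Toy content-addressed cache: blobs are keyed by their own hash."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        """Store a blob and return its content address (SHA-256 hex)."""
        addr = hashlib.sha256(data).hexdigest()
        self._blobs[addr] = data
        return addr

    def get(self, addr: str) -> bytes:
        """Fetch by hash. The result is self-verifying, so it could just
        as well have come from an untrusted peer's cache."""
        data = self._blobs[addr]
        assert hashlib.sha256(data).hexdigest() == addr
        return data

store = ContentStore()
addr = store.put(b"<html>hello</html>")
assert store.get(addr) == b"<html>hello</html>"
```

Because the address is the hash of the bytes, it doesn't matter which peer answers the request; the client can verify the pieces itself, BitTorrent-style.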

Tim Berners-Lee may not be thinking about the problem at this level but we
need more people with his clout to get the ball rolling. I wasted a lot of
time attempting to write a low-level stream API over UDP for networked games
before zeromq or WebRTC existed and ended up failing because there are just
too many factors that made it nondeterministic. It’s going to have to be a
group effort and will probably require funding or at least a donation of
talent from people familiar with the pitfalls to get it right. Otherwise I
just don’t think a new standard (one that makes recommendations instead of
solving the heart of the problem) is going to catch on.

~~~
jnbiche
This is exactly the problem. We need a better transport layer.

You know, the speed at which SPDY has caught on has given me hope. Admittedly,
that's at the application layer, but I think if Google were to get behind an
initiative to create a better TCP, it would actually have a chance at working.
Historically, Google's incentives were properly aligned with a decentralized
Internet, but Google is more and more a content producer. So I'm not sure.

Until then, we'll have to rely on servers to bootstrap peer-to-peer networks,
even Bitcoin, Bittorrent, and yes, WebRTC.

~~~
rubiquity
> but I think if Google were to get behind an initiative to create a better
> TCP, it would actually have a chance at working.

I don't think Google has much interest in a decentralized web. Google and
Facebook are two of the prime examples of a centralized web.

~~~
dclara
Yeah, you got the point. They are on the list of "reliant on big companies,
and one big server". But "support geo-distributed clusters" remains an open
task on Google's radar. See here:

[http://highscalability.com/google-architecture](http://highscalability.com/google-architecture)

Unfortunately, with the existing architecture and infrastructure along with
the team resources, they are focusing more on the algorithms of machine
intelligence and AI.

------
EGreg
I wrote about this as well:
[http://myownstream.com/blog#2011-05-21](http://myownstream.com/blog#2011-05-21)

Two years ago I really became passionate about the problem of decentralizing
the consumer internet again. We can see with git and other tools how
distributed workflows are better in many ways than centralized ones. The
internet was originally designed to be decentralized, with no single point of
failure, but there's a strong tendency for services to crop up and use network
effects to amass lots of users. VC firms have a thesis to invest in such
companies. Even so, the future is in distributed computing, like Wordpress for
blogs or Git for version control.

I started a company two years ago to build a distributed publishing platform.
And it's nearly complete.

[http://qbix.com/blog](http://qbix.com/blog)

[http://magarshak.com/blog/?p=135](http://magarshak.com/blog/?p=135)

Soon... it will let people run a distributed social network and publish things
over which they have personal control. And I'm open sourcing it:

[http://github.com/EGreg/Q](http://github.com/EGreg/Q)

[http://qbixstaging.com/QP](http://qbixstaging.com/QP) <-- coming soon

~~~
jiggy2011
Git is an interesting example: even though the software itself is
decentralized, it seems to be most commonly used around centralized services.
See how upset HN gets when GitHub goes down.

------
wwwtyro
Part of the difficulty of bringing this about is making p2p easy for users. I
have high hopes that WebRTC data channels can start tearing these walls down
by making it as simple as opening a web app.

~~~
chimeracoder
> Part of the difficulty of bringing this about is making p2p easy for users.

As a general concept, P2P _is_ easy for users - using bittorrent clients, TPB,
etc. is essentially mainstream.

The one thing that would be nice is browser support - I would love it if
Chrome had the option to download .torrent files using the Chrome download
manager.

Thanks to web seeds[0], this would allow providers to convert _any_ download
to a torrent instead, both increasing use of P2P and decreasing their own
server load.

Users wouldn't even need to know that they're using Bittorrent to download
files - all they'd see are faster download speeds.

[0]
[https://en.wikipedia.org/wiki/BitTorrent#Web_seeding](https://en.wikipedia.org/wiki/BitTorrent#Web_seeding)
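As a rough sketch of how a provider could expose an existing HTTP download as a torrent, here's a toy magnet-link builder (the infohash and URLs are made-up placeholders; `ws` is the magnet parameter used to carry a web seed URL):

```python
from urllib.parse import quote

def magnet_with_webseed(infohash: str, name: str, webseed_url: str) -> str:
    """Build a magnet URI carrying both a torrent infohash and a web
    seed URL, so a client can fall back to plain HTTP when no peers
    are available and to peers when they are."""
    return (
        f"magnet:?xt=urn:btih:{infohash}"
        f"&dn={quote(name, safe='')}"
        f"&ws={quote(webseed_url, safe='')}"
    )

link = magnet_with_webseed(
    infohash="0123456789abcdef0123456789abcdef01234567",
    name="release.tar.gz",
    webseed_url="https://downloads.example.com/release.tar.gz",
)
```

From the user's point of view nothing changes: the browser sees one link, and the swarm only kicks in when other downloaders are around.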

~~~
wwwtyro
> As a general concept, P2P is easy for users - using bittorrent clients, TPB,
> etc. is essentially mainstream.

I wish that were true, but I am forced to disagree; it's been my experience
(and that of my tech-savvy friends with whom I have discussed this UX
shortcoming) that most people don't know how to torrent (much less set up and
use freenet). Showing my parents or most of my siblings how to use a torrent
client in conjunction with a tracker site is not going to be a fruitful task.
If it's not as easy as Youtube or Netflix (click and watch), they aren't going
to put in the effort to learn or remember how to do it.

------
sergiosgc
We need IPv6. We need every device to be fully connected, not the half-assed
solution we have now. Full connectivity plus ever-increasing distributed and
unused computing capacity is the right playground for a new set of network
usage paradigms.

The mainframe->distributed->mainframe cycle will then once again swing to
distributed (yes, we now live again in a mainframe age).

------
peterwwillis
This actually has nothing to do with the web in general, or the internet, or
politics, or personal liberties, or peer to peer networks. This is about
designing stable distributed applications.

Almost every really big, really stable website (or network service) is built
on a set of distributed applications. Global data redundancy is just one of
the considerations. If you want your application available everywhere, all the
time, you have to design it to withstand faults, to distribute data and
computation, and to do this over long distances, and still perform the same
actions the same way everywhere. That's all a de-centralized web needs to look
like, and it's actually already implemented in many places.

What I think Tim is saying is that we need to move away from concepts that
centralize data and computation and use existing proven models to make them
more stable in a global way. And I'm totally behind that. But if you think
we're going to get there with p2p, self-hosted solutions, new protocols, new
tools, new paradigms, or a new internet, you've completely missed the boat. We
have had everything we need to accomplish what Tim wants for a while. It just
has to be used properly [which some companies actually do].

But good luck convincing most companies to spend the time and money doing
that...

~~~
dclara
I partially agree with you in the sense that we don't need a new network with
p2p or self-hosted solutions, but we do need stable distributed applications.
How?

I cannot agree with you that we can realize it via data redundancy and
replication. The way you mentioned is just what Google is doing right now,
which is not true
distributed computing. Google's architecture relies on replicating services
across many different machines to achieve distributed computing/storage and
fault-tolerance across the world. See the reference here:
[http://static.googleusercontent.com/external_content/untrust...](http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/archive/googlecluster-ieee.pdf)

The real distributed computing solution for providing a well-organized web has
been proposed to build a layer on top of the existing physical infrastructure
of the internet to present the web in a de-centralized manner under a
systematic way. [http://bit.ly/MwT4rx](http://bit.ly/MwT4rx)

~~~
peterwwillis
As far as I'm aware, there's no such thing as "true" or "real" distributed
computing. It's a wide field of study with many different overlapping
concepts.

The paper you linked to describes how queries are resolved by Google, but not
quite the whole picture required for a typical distributed computing model.
To say it's not distributed computing, though, is... confusing. Clearly they
must support this model over many sites the world over, and we know from other
documents that they geo-locate data to make it more convenient, meaning it's
portable and partitionable. Sounds like a distributed computing system to me.

~~~
dclara
Yes, I agree that it's hard to define what true distributed computing is.
That's only my personal opinion.

A good reference could be from Wikipedia: a distributed system is a software
system in which components located on networked computers communicate and
coordinate their actions by passing messages. The components interact with
each other in order to achieve a common goal.

You can find from the blog page here about distributed computing:
[http://bit.ly/1jkySqU](http://bit.ly/1jkySqU)

------
aaronpk
I always chuckle when the title of an article has obviously been changed after
it was first put online, because the slug uses a different word:

"Tim Berners-Lee: we need to re-decentralise the web"

vs

"tim-berners-lee-reclaim-the-web"

~~~
dkuntz2
That doesn't imply the title was changed. Shorter slugs are nicer, and there
could be some policy involved in keeping them short. It doesn't mean anything.

~~~
aaronpk
"reclaim" vs "re-decentralise"

The actual article never mentions the term "re-decentralise" at all.

~~~
coldtea
Still, it's standard practice. I've done it often for a few news sites I've
managed content for. A word doesn't even have to be in TFA; it just needs to
sum it up well enough.

~~~
thirsteh
It's done all the time to get search engines to rank the page higher for the
keywords mentioned and listed.

------
api
[http://redecentralize.org/](http://redecentralize.org/)

------
freifunk_berlin
Help us decentralize the internet:
[https://www.diyisp.org/dokuwiki/doku.php](https://www.diyisp.org/dokuwiki/doku.php)

There are a lot of diy-isps out there. Engage with your community and become
part of a diy-isp, e.g. by hosting services there, peering with them, etc.

------
swalsh
I've been trying to think about "rethinking" government for quite a while. The
idea of a web-based governance system seems really appealing, in the sense
that I think the greatest strength of the web is how quickly ideas can go from
concept to execution. It's about the closest thing to a meritocracy we've
ever come, and that's simply because the gatekeeper is thrown away. Think
about how many great things have sprung out of places like Reddit. Ordinary
people who have a quick flash of a good idea can take 30 minutes to type
something up... and sometimes it gets traction. Getting things done in our
current system requires an inhuman persistence, and the luck of making a
relevant connection. Even if you have a good idea, it's a difficult thing to
execute.

I have this image of a framework/protocol in my head around allowing various
policies to be created, and built on top of each other. As a programmer I
pretty much used object-oriented programming as an inspiration, though I guess
even OOP had its inspiration in biology. It's a way for an organic system to
grow and adapt robustly. I think just like computer programs, government needs
that too.

Added to that, if the web has a decent self-regulation mechanism, that makes
the justification for regulating the web even harder.

The one problem is, the advantage "physical" governments have is that they
have sovereignty. If Amazon starts doing illegal things, the FBI can arrest
Jeff Bezos. The internet doesn't have an equivalent means.

However I think there is a way to have the essence of sovereignty, and that is
through a new crypto-currency like Bitcoin. Imagine if, to be a part of and
receive the benefits of this online sovereign nation, you had to register your
receive address. If the sovereign entity had a way to embargo your ability to
accept payments in the currency, it might be enough penalty to fall back in
line with whatever regulations were created. I think it also provides another
means to give legitimacy to a crypto-currency.

An example of something that would be cool is this. You want to create an open
source space program, so you propose the idea on your local netizen forum. The
idea has overwhelming support, so someone creates a new policy component, and
gives permissions to suction tax funds. I can imagine something like that
happening in hours. Imagine initiating a public effort going from idea to
reality in hours.

~~~
teekert
I fear that you are overestimating humanity: "The idea has overwhelming
support, so someone creates a new policy component, and gives permissions to
suction tax funds"... You are not afraid that this will lead to radical
redistribution of wealth eventually resulting in loss of motivation to create
wealth? (Yes, the Atlas Shrugged scenario.)

~~~
chongli
_You are not afraid that this will lead to radical redistribution of wealth
eventually resulting in loss of motivation to create wealth?_

The _motivation to create wealth_ justification for inequality is one of the
most morally bankrupt ideas ever conceived. Billions of people around the
world live in abject poverty; to blame them for their situation takes a
profound case of the fundamental attribution error.

------
cromwellian
My own rant on this from 2008 from a different perspective:
[http://timepedia.blogspot.com/2008/05/decentralizing-web.htm...](http://timepedia.blogspot.com/2008/05/decentralizing-web.html)

~~~
oneofthose
Excellent article, and almost 6 years old. There are many projects to mention
here, but so far none of them took off as far as I know. Or did they?

After publishing my own rant about the same topic [0], Dave Winer commented
via Twitter and argued that RSS is a very successful tool that decentralizes
the web. I didn't understand his point at the time but today I think he is
absolutely right.

His point was that when trying to fix the problem we should look at what
already exists and works. Why did RSS succeed? Probably because it solved a
problem for many people. It did not try to reinvent old things in a
decentralized way but offered real benefits. Another good example of this is
git vs svn (decentralized vs centralized revision control).

[0] [http://www.soa-world.de/echelon/2011/09/the-decentralized-we...](http://www.soa-world.de/echelon/2011/09/the-decentralized-web-movement.html)

------
gress
Given the support Google has from the tech community, I don't see much
movement on this in the near future.

~~~
TillE
Google has support insofar as they still offer several really good products,
but after the Reader shutdown, KitKat/Nestle branding, the NSA revelations,
etc, I think they're a whole lot less geek-cool.

They're not the new Microsoft quite yet, and they may never be, but Google is
a long way from their peak of hacker-ish credibility.

~~~
gress
Their benevolent intentions, as somehow more than just a self-interested
corporation, are still strongly defended around here.

See:
[https://news.ycombinator.com/item?id=7184912](https://news.ycombinator.com/item?id=7184912)

------
Create
Why Tim Berners-Lee is wrong about DRM in HTML5

[http://craphound.com/?p=4651](http://craphound.com/?p=4651)

~~~
JetSpiegel
Here's a link to the real text:

[http://www.theguardian.com/technology/blog/2013/mar/12/tim-b...](http://www.theguardian.com/technology/blog/2013/mar/12/tim-
berners-lee-drm-cory-doctorow?CMP=twt_fd)

~~~
Create
and while at it some links to the real origins of the web:

For ADD-ers: [http://youtu.be/hSyfZkVgasI](http://youtu.be/hSyfZkVgasI)

The story:
[http://www.archive.org/details/paulotlet](http://www.archive.org/details/paulotlet)

Long PR story short: RPC was already prevalent; that is how you control(led)
your stuff remotely, like Tim did at his day job. Today, if you asked for a
NeXT-like toy, you'd be denied were you an average Eastern member of CERN
(n.b. western equivalent). As a true Westerner, TBL asked for it and got one.
Put the gopher link address ptr in the reserved field of the text font
properties (where things like bold and italics properties are stored) and
voilà. Mix in a simplified SGML for markup (as was customary back then for
typefaces and layout on printers). One could also hire cheap disposable
students to actually write the web clients and servers to be cross-platform
(the web's virtue/value).

Had the "web" run only on NeXT, it would be long extinct; it would never have
taken off. Clicking on text is how you used Oberon or even the Genera
Document Examiner (perhaps even over the network). It was the spirit of the
times. Even Jobs demoed a WYSIWYG client-server application in 5 minutes on
the NeXT. What an insult.

On linking and hypertext: all post-war era anglo-saxon stuff is spin
(including Vannevar and Tim invented the web myths). The real stuff comes from
Belgium, as did Tim's boss, btw.

------
techaddict009
Tim Berners-Lee: "I would have got rid of the slash slash after the colon.
You don't really need it. It just seemed like a good idea at the time."

When asked what he would have done differently!!

~~~
zupa-hu
I think that was rather a joke. A good one, though.

~~~
clockwerx
Let me know if you'd prefer to write mailto:// or http:www.fish.com

------
zokier
I'm surprised by the amount of comments declaring "Berners-Lee supported DRM
so his words are meaningless". Not that I myself put a lot of weight on his
words, but not over some silly DRM issue (frankly I didn't even remember
that); rather, because I can't really say what his contribution to the web
(or anything relevant) has been in the past couple of decades. Looking at his
Wikipedia page, it mentions that he founded the W3C in 1994, and that's about
it. Everything since seems to be some sort of fluff, often with grandiose
names and concepts but very little substance.

So what surprises me is not that people don't put weight on his words, but
rather the reason why they don't, and the apparent recency of this opinion.

------
munificent
Wow, the giant flashing ad on the left of that article was so distracting I
was unable to read the text. I eventually opened up the web inspector and just
deleted the damn thing from the DOM.

~~~
BlackDeath3
Do you not care to use (or perhaps not have the capacity to use) an adblocker?

~~~
Create
Digital Restrictions Management will take care of that.

Not only mandatory flashing, but also tracking.

~~~
taway2012
How? How can DRM force me to display the iframe containing an ad?

~~~
admax88q
DRM can force you to do whatever it wants. It's proprietary code running on
your system.

If you want to view the content, it will ensure that you've seen the ad before
it renders the content.

~~~
MichaelGG
The "on your system" part is what prevents this from happening. However, with
features like remote attestation, a DRM platform _could_ be built. If users
start accepting PCs locked down like tablets, then this might be an issue.

~~~
admax88q
I'm not sure I follow. The DRM extensions are basically hooks to proprietary
plugins. If the plugin decides that you didn't view the ad, then it will
refuse to play the requested media.

There can and will be workarounds, but thanks to lobbying those workarounds
are criminal in many parts of the world.

~~~
MichaelGG
At the moment, those proprietary plugins execute inside an OS and processor
you "fully" control. Nothing really stops you from flipping a jump statement
(except it being more complicated than that). I believe that's what the
poster was saying: what can _force_ him on his own system?

~~~
admax88q
Flipping a jump statement is a criminal offense in many countries now.

------
higherpurpose
I'm having a really hard time trying to stomach anything Tim Berners-Lee is
saying these days, when I know he's such a big proponent of bringing DRM to
his "open web" (and inevitably turning it into a more "closed" one, in the
end).

Also, isn't _this_ decentralization? Not having the data in one mega-American-
cloud? Seems to me that Tim is doing a lot of PR for big companies lately,
masked as a benefit for the users, just like he's doing with the whole DRM
thing, which he actually says would be beneficial for users.

But let's assume he's not trying to be malicious here, and that he has a
point. Here's the thing. Yes, I agree that having every country demand that
companies host data locally is going to make it very hard for innovation to
spread, and therefore, progress will slow.

_HOWEVER_, right now it seems we only have a choice between this and allowing
the US to get their hands on all the data. I didn't even see Obama _mention_
anything about the NSA not being able to tap the world's fiber cables
anymore.

So until the US gets serious about not doing shitty stuff like that to the
world's users, I _absolutely_ think all the other countries should try to
force companies to host the data locally.

There is one other solution to this, one that would allow companies to keep
the data wherever they want: _encrypting everything by default_, such that
even the companies themselves cannot decrypt the data without the user
allowing it. So stuff like OTR for chat systems, DarkMail/PGP for e-mail
systems, and so on.

Make it so the web is completely trust-less, so other countries don't have to
trust Google or the American government to not get their data, because they
could be assured there's nothing for them to get other than strongly encrypted
data.
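
The "trust-less" idea above can be illustrated with a toy sketch (a one-time
pad, purely for illustration; a real system would use a vetted protocol like
OTR or PGP, as mentioned): the service stores only ciphertext, and the key
never leaves the user's machine, so there is nothing useful to hand over.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with a key byte. With a truly random, single-use,
    # message-length key this is a one-time pad; XOR is its own inverse.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

# The user generates the key locally and never sends it to the server.
message = b"meet me at noon"
key = secrets.token_bytes(len(message))

ciphertext = xor_cipher(message, key)   # this is all the server ever sees
recovered = xor_cipher(ciphertext, key) # only the key holder can do this

assert recovered == message
```

The point is architectural, not cryptographic: if the server only ever holds
`ciphertext`, a government demand to the company yields nothing readable.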

Unfortunately, this option _isn't even on the table_ right now with the big
companies, and the US government will push against companies trying to do
this, too. And the only way to get this option on the table is for them to
think that other countries are going to _inevitably force them to store the
data locally_ and build data-centers locally. Only then might they start
preferring this "encrypt everything with the user having the key" solution as
an alternative to storing data in every country or region.

So until that happens, I absolutely support countries demanding that data be
stored locally, because I know that _minutes_ before that begins to happen,
US companies and the US government will agree to let the data be fully
encrypted and trustless. But not any sooner than that. So in the end, we'll
get what we want, and the Internet will be safe.

------
knocte
This talk is very relevant too:
[http://www.youtube.com/watch?v=QOEMv0S8AcA](http://www.youtube.com/watch?v=QOEMv0S8AcA)

------
acd
I think a powerful thing would be a p2p anonymous net interface that ships by
default with the Linux kernel.

So not only would you have eth0: you would also have p2p0: anonymous, with a
unique ID and a DHT.

So you could reach your web server either via 192.168.0.100 or via its short
unique ID.

SSL communications would not be based on central CAs, because that model has
long been compromised.

I think we should build this together. Maybe it's GNUnet?
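
For the DHT piece, a common approach (e.g. Kademlia, which BitTorrent's
Mainline DHT uses) is to derive each node's ID from a hash of its public key
and route lookups by XOR distance. A minimal sketch of that idea, assuming
SHA-1-sized 160-bit IDs (the key strings are just placeholders):

```python
import hashlib

def node_id(public_key: bytes) -> int:
    # Derive a 160-bit ID from the node's public key (Kademlia-style).
    return int.from_bytes(hashlib.sha1(public_key).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # Kademlia's distance metric: bitwise XOR of the two IDs.
    return a ^ b

# A lookup repeatedly asks the peers whose IDs are closest to the target.
peers = [node_id(pk) for pk in (b"alice-key", b"bob-key", b"carol-key")]
target = node_id(b"my-web-server-key")
closest = min(peers, key=lambda p: xor_distance(p, target))
```

Tying the ID to a public key is also what would let "short unique IDs" replace
CA-signed certificates: the address itself authenticates the key.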

------
StephenFalken
"The Internet was done so well that most people think of it as a natural
resource like the Pacific Ocean, rather than something that was man-made. When
was the last time a technology with a scale like that was so error-free? _The
Web, in comparison, is a joke. The Web was done by amateurs._ " \-- Alan Kay

~~~
yungchin
source: [http://www.drdobbs.com/architecture-and-design/interview-
wit...](http://www.drdobbs.com/architecture-and-design/interview-with-alan-
kay/240003442#)

------
josteink
Why should we care what some guy says when that same guy thinks that putting
DRM in an open standard makes sense?

Tim Berners-Lee needs to withdraw his statement, acknowledge his mistake and
publicly apologize before his voice can carry any weight again.

------
gaius
What we need is Gopher, Tim.

------
rch
> we need a bit of anti-establishment push back

Wow. Understatement of the century.

------
rayiner
Decentralizing the web won't really help when all communication flows through
a handful of telecom companies.

------
auvrw
in the absence of a relevant plug for a library that i've written ( _shame_ ),
here's one for a library that someone else wrote

[http://www.infoq.com/presentations/private-
backend](http://www.infoq.com/presentations/private-backend)

tl;dw: next time you build a webservice, let your users use private drive,
dropbox, or whatever accounts to host their data (or at least the data that
doesn't get JOINed to other tables all over the place). one has to pay for
data ownership, but it's not a per-service charge.
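
roughly, that pattern (my own sketch, not the talk's actual API) is a storage
interface the service codes against, with a per-user backend plugged in behind
it:

```python
from abc import ABC, abstractmethod

class UserStore(ABC):
    # The service codes against this interface; each user plugs in
    # their own backend (Dropbox, Drive, self-hosted, ...).
    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(UserStore):
    # Stand-in backend for testing; a real one would call, say,
    # the Dropbox HTTP API with the user's own credentials.
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> bytes:
        return self._data[key]

store: UserStore = InMemoryStore()
store.put("profile.json", b'{"name": "jack"}')
```

the service stays generic; data ownership moves to whichever backend the user
pays for.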

the article's kind of a tease. it's easy to say things like "open" and
"decentralized," but i guess if i want to know what specific implementations
were discussed, i'd have to buy the magazine.

------
shmerl
Maybe he should reverse his stance on DRM in HTML, then. Supporting it goes
completely against the idea of decentralizing the Web, since it gives control
to gatekeepers and takes it away from users.

------
j2kun
The article suggests that the NSA may be actively using a quantum computer to
break mainstream encryption (or that it may do so soon).

Puh-leez.

------
callesgg
The web was never decentralized. The internet was, and kind of still is.

The web meaning, basically, HTTP websites on a bunch of servers.

------
bariswheel
He needs to get over that slash slash. Really, it's fine. He's hung up on it.

------
fuckpig
Let's decentralize first by removing the centralized generic resources that
consume too much of the traffic, like Facebook, Wikipedia, Amazon and Google.

~~~
wutbrodo
How do you propose "removing" Amazon, Wikipedia, et al?

~~~
fuckpig
I imagine there are some legal ways to do that.

