
This Could be Big: Decentralized Web Standard Under Development by W3C - kerben
http://www.readwriteweb.com/archives/his_could_be_big_decentralized_web_standard_under.php
======
haberman
The W3C is broken: it has spent the last 10 years standardizing _ideas_
instead of _existing practice_. This is totally backwards and leads to
standards that are too complicated, unrealistic, and in many cases not needed
at all.

The W3C was originally created to standardize HTML, which was already being
used by many vendors and users but in incompatible ways. That is exactly the
right situation for creating a standard. It leads to standards that are
realistic and motivated by a demonstrated need.

Unfortunately, almost everything else the W3C has ever done has happened in
the opposite direction: in response to an _idea_ or a _perceived_ need, some
people theorize about the best way to solve the problem and then write a
document that a bunch of vendors are supposed to then implement from scratch.

This is how we ended up with the XML stack, which was designed to solve the
data interchange problem but ended up being a disaster of complexity,
inefficiency, and ad hoc implementations.

Even the case of CSS (which has been quite successful) is sub-optimal IMO,
because it didn't choose to standardize the existing practice of how people
were using tables for layout. The CSS box model makes it stupidly difficult to
do things that are trivial with table-based layouts, like a three-column
layout (which is considered a "holy grail" even by CSS advocates:
<http://www.alistapart.com/articles/holygrail/>).

CSS could have used a table-like layout model that makes it easy to arrange
<div>s into rows and columns, allowing a smooth upgrade from people who were
using the <table> tag. Instead they invented something new that was much more
difficult to design for, creating an unnecessary tension between web standards
advocates and people who just wanted to get things done.

Standards should codify and refine existing practice, not attempt to invent
new things.

~~~
olavk
I agree with you in general, but not in the case of CSS.

The CSS layout model was indeed based on exactly the layout properties of
the "presentational HTML" which people were using for style and layout at
that time. It was just made more fine-grained, and separated from the markup.

For example, the much-hated "float/clear" layout model was based on the
align="left|right" and clear="left|right" attributes which Netscape (I
believe) introduced on img- and table-elements, and which people used a lot.
The layout model of tables was also widely used, and that was codified with
the display:table-* properties, which again provide exactly the same layout
model in CSS as tables provided via markup.
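For the record, the correspondence looks roughly like this (the class names
are made up, the properties are the real CSS2.1 ones):

```css
/* The table layout model via CSS, no <table> markup needed: */
.layout { display: table; width: 100%; }
.row    { display: table-row; }
.col    { display: table-cell; vertical-align: top; }

/* The float/clear model, codified from presentational attributes: */
img.left  { float: left; }   /* was <img align="left"> */
.clearing { clear: both; }   /* was <br clear="all"> */
```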

The problem was basically that IE development was frozen for many years. IE
implemented the float-related CSS before the freeze, but not the table model.
I believe IE 8 was the first version of IE to support the table model, more
than a decade after CSS2 became a recommendation.

This resulted in a period of many years where the float model was supported
via CSS in most browsers, but the table model was not. This in turn resulted
in an amazing amount of confusion, and in people believing that floats are
somehow CSS's substitute for tables.

~~~
flomo
display:table-* always felt more like a hack than like something that was
part of the design.

As in "Oh, we forgot to model certain properties of tables that numerous
websites actually use. Here's a workaround that imitates a table cell until we
get around to figuring it out. PS: tables are still bad."

~~~
olavk
> PS: tables are still bad.

 _Tables are not bad_.

CSS purists state that using table markup purely for layout purposes (e.g. for
positioning a page footer) is bad, because it violates the separation between
content and presentation. In that case display:table-* will give you the same
layout (neither worse nor better!) but without requiring any specific markup
in the html.

But this poses a dilemma for web developers, because display:table-* is not
supported by all browsers, so some specific kinds of layout can _only_ be
achieved with table markup.

Developers had to choose between (1) compromising the layout, (2)
compromising the semantic/style separation, or (3) using complex workarounds
like the CSS frameworks, which tried to emulate the table layout model
without requiring display:table support in the browser. None of these is
optimal.

~~~
mattmanser
I think you're misremembering history. It's not bad because 'it violates the
separation between content and presentation', it's bad because you end up with
tag soup. The table's demise was purely because it was too hard to maintain
in anything more than a trivial layout. When you started getting down to your
third nested table it was a friggin' nightmare. Also cells and rows had weird
quirks as well as missing properties where you'd then have to nest a span in
every cell just to get it behaving properly. Oh and also the occasional
sacrifice to the dark god of 'table-layout:fixed'.

The rest of it though I agree with you.

~~~
olavk
But I think we agree. Layouts with nested tables are hard to maintain
_because_ presentational concerns (like the attributes which define cell
sizes) are spread all over the document and intermingled with the content.
It's
a mess.

------
jamii
I've been thinking about this sort of thing a lot lately. Sugar
(<http://www.sugarlabs.org/>) gets a lot of things right in terms of p2p user
interaction. If you want to, say, edit a document with a friend you just click
invite and Sugar will handle everything else. It will even send them a copy of
the editor app if they don't have it installed.

I think developing small-scale p2p apps (eg IM, 1-1 audio/video, multiplayer
editors ala etherpad) can be made a lot easier than it is today. My rough plan
of attack is to use erl-telehash (<https://github.com/jamii/erl-telehash>) for
addressing and NAT traversal with something similar to bloom
(<http://www.bloom-lang.net/>) for coordination / logging / debugging. Add
libraries for at-least-once messaging, leader election and operational
transform. Maybe piggyback on chromeos or android to get secure p2p app
installation.

I also had some thoughts about the CALM hypothesis
(<http://www.bloom-lang.net/calm/>) which I haven't seen mentioned in the
literature. A monotonic
bloom program is one in which every delivery order for a given set of messages
results in the same state. For many protocols what I actually care about is
that every possible end state is equivalent, for some protocol-specific notion
of equivalence. For example, for leader election all I care about is that
every end state should have exactly one leader. Monotonic programs are easy to
model check and I think explicitly stating the desired equivalence relations
will reduce the state space explosion at points of order. It might be possible
to get good results from a very simple/naive model checker by exploiting this.
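To make that concrete: a naive checker can just enumerate every delivery
order of a fixed message set and compare the end states under a supplied
equivalence relation. This is made-up toy code, not bloom; the "programs"
here are simply folds over messages:

```javascript
// Naive confluence checker: run the program (a fold over messages)
// under every possible delivery order, then check that all end states
// are equivalent under a user-supplied relation.
function permutations(xs) {
  if (xs.length <= 1) return [xs];
  const out = [];
  xs.forEach((x, i) => {
    const rest = xs.slice(0, i).concat(xs.slice(i + 1));
    permutations(rest).forEach(p => out.push([x, ...p]));
  });
  return out;
}

function checkConfluent(step, init, messages, equivalent) {
  const finals = permutations(messages).map(order =>
    order.reduce(step, init));
  return finals.every(s => equivalent(s, finals[0]));
}

// A "leader election"-ish program: keep the highest id seen so far.
// Every delivery order converges to the same leader, so it passes.
const maxId = (leader, msg) => Math.max(leader, msg);
console.log(checkConfluent(maxId, -Infinity, [3, 1, 2],
  (a, b) => a === b)); // true: every order elects 3

// An order-sensitive program: "first message wins" fails the check,
// because the end state depends on which message arrived first.
const firstWins = (s, msg) => (s === null ? msg : s);
console.log(checkConfluent(firstWins, null, [3, 1, 2],
  (a, b) => a === b)); // false
```

The interesting part is the `equivalent` parameter: for real protocols it
would encode the protocol-specific notion of equivalence (e.g. "exactly one
leader") rather than strict state equality.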

Not hugely related to the article, but it's been on my mind a lot lately.

------
swombat
As the comments point out, this is not a generic decentralised web standard to
get around ICE and the like, but just a specification for p2p audio/video/etc
communications for online video conferencing and so on. Not as big as I hoped.

~~~
mcrider
That's a shame. I was just thinking yesterday how great it would be to be
able to host web pages in a P2P fashion (after all, people are building an
economy on the bittorrent infrastructure). Sites like WikiLeaks would get a
great advantage from this sort of protocol.

~~~
bxr
It sounds like you're looking for the Freenet project.

<http://en.wikipedia.org/wiki/Freenet>

------
EGreg
freenet and other services already do this. You can already use your browser
to browse freenet, if you get the freenet program. They recommend using Chrome
in Incognito mode for maximum privacy.

And it is impervious to DNS takedowns and you can even set up a darknet. It's
used in China a lot. Also Perfect Dark is used. They operate on distributed
hash tables. The problem is that without a central server, the only way you
can connect to the hive is by hoping one of the last known hosts is still up.
It also needs to use heuristics for routing.

~~~
fauigerzigerk
The problem I have with freenet is that I don't want to blindly store or
transfer other people's encrypted data.

I think, if we take some of the powers away from the government, it is all the
more important that we are able to make a judgement on the kind of content and
communications that we want to support.

No single entity should be able to control all access to information, not even
a democratic government. But turning the tables completely and making
everyone help anyone spread any kind of information can't be the solution.

I'm totally aware of the dilemma we're in. Knowing what other people transfer
over our machines puts us in a position of "must make judgement and be
liable". Not knowing puts us in the position of "cannot make a judgement even
if we want to".

It's just a difficult problem.

~~~
EGreg
If you don't store other people's data, who will?

If those people store their own data, how will you get it if they disconnect
their computer from the network, or it goes down?

And more importantly, if their computer is the only place to get the data,
then how do you make the "host" of the data untraceable?

~~~
fauigerzigerk
I don't know. I guess the answer to your last question is that I don't want a
system that thwarts all kinds of social filtering or control and at the same
time makes it impossible to trace people who commit horrific crimes. I think
we want to decentralize social control, not abolish it completely.

~~~
Havoc
"horrific crimes": Exactly. That was my 2nd thought on reading the headline.
(1st was Awesome). We need some form of control, to prevent the worst of it.
Perhaps some sort of community voting/blacklisting?

~~~
EGreg
One man's horrific crime against the government is another man's struggle
for freedom. A tool isn't all things to everyone.

------
omouse
Less corporate/company crap, more independent ideas please. Things like
Freenet, GNUNet, etc. have the right idea for decentralization, same with more
specific projects like Appleseed, StatusNet, etc.

------
protagonist_h
Flash supports peer-to-peer communication since Flash Player 10 using RTMFP
(Real Time Media Flow Protocol).

~~~
Fargren
Isn't that still in beta?

~~~
protagonist_h
No, in fact it's been available in FP10 since October 2008.

------
phlux
I wonder how easy man-in-the-middle attacks via node spoofing would be.

You masquerade as a node by re-hosting their content and you capture any other
client that accesses your proxy of that information.

~~~
sp332
Oh, it's simple! All we need is for each node to sign their content with a
private key. Then everyone can check the validity of the content with the
originating node's public key, even if it comes from some untrusted
intermediary.

Hm, now all we need is a centralized server to distribute the public keys
securely. But that's a lot of keys (one for every node), so let's use a
distributed system... oh wait :)

~~~
akdubya
Good background from the bitcoin people:

"A number of Byzantine Generals each have a computer and want to attack the
King's wi-fi by brute forcing the password, which they've learned is a certain
number of characters in length. Once they stimulate the network to generate a
packet, they must crack the password within a limited time to break in and
erase the logs, lest they be discovered. They only have enough CPU power to
crack it fast enough if a majority of them attack at the same time.

They don't particularly care when the attack will be, just that they agree. It
has been decided that anyone who feels like it will announce an attack time,
which we'll call the "plan", and whatever plan is heard first will be the
official plan. The problem is that the network is not instantaneous, and if
two generals announce different plans at close to the same time, some may hear
one first and others hear the other first.

They use a proof-of-work chain to solve the problem. Once each general
receives whatever plan he hears first, he sets his computer to solve a
difficult hash-based proof-of-work problem that includes the plan in its hash.
The proof-of-work is difficult enough that with all of them working at once,
it's expected to take 10 minutes before one of them finds a solution and
broadcasts it to the network. Once received, everyone adjusts the hash in
their proof-of-work computation to include the first solution, so that when
they find the next proof-of-work, it chains after the first one. If anyone was
working on a different plan, they switch to this one, because its proof-of-
work chain is now longer.

After about two hours, the plan should be hashed by a chain of 12 proofs-of-
work. Every general, just by verifying the difficulty of the proof-of-work
chain, can estimate how much parallel CPU power per hour was expended on it
and see that it must have required the majority of the computers to produce in
the allotted time. At the least, most of them had to have seen the plan, since
the proof-of-work is proof that they worked on it. If the CPU power exhibited
by the proof-of-work is sufficient to crack the password, they can safely
attack at the agreed time."

<http://www.bitcoin.org/byzantine.html>

------
benihana
>Opera is always several years ahead of its time

Except in adoption rates.

~~~
billybob
What do you mean? Opera had Opera's 2011 adoption rates way back in 2009.

------
codemechanic
Freedombox foundation is trying to achieve the same end result.

------
codemechanic
Tonido (<http://www.tonido.com>) is a pioneer in this space. They invented
the model well before Opera Unite. The cool thing is that Tonido also
provides a decentralized openid to end users.

Firefox should buy Tonido. If it happens it will change the industry and the
way people share information. I may be a little ahead of things here, but if
you think about it deeply it will make sense.

~~~
zyfo
Do you work for Tonido? If you do, please provide a disclaimer. What you are
doing is borderline spamming. See his comments for yourself [0]

If I'm wrong, I apologize profusely in advance.

EDIT: Parent does work for Tonido [1]. This is spamming.

0: <http://news.ycombinator.com/threads?id=codemechanic>

1: <http://news.ycombinator.com/item?id=643833>

~~~
fragsworth
Even if you were wrong, I don't think you need to apologize. Almost all of his
comments were trying to promote something rather than provide meaningful
discussion.

~~~
codemechanic
It is very relevant to the discussion. If you are not sure, check the Tonido
architecture.

