
We're heading Straight for AOL 2.0 - Rondom
http://jacquesmattheij.com/aol-20
======
guycook
Previously:
[https://news.ycombinator.com/item?id=10008769](https://news.ycombinator.com/item?id=10008769)
(and
[https://news.ycombinator.com/item?id=10160133](https://news.ycombinator.com/item?id=10160133)
with no comments)

~~~
nicklaf
It seems the recency of that discussion was enough for many people to flag
this one -- it's been sent to page 6.

------
jlas
Not sure what you're ranting about here. What closed protocols / technologies
are vendors creating that pose a risk?

On the contrary, over the last few years we've seen vendors start to converge
on web standards (HTML, CSS, JS) [2].

We've seen the development of crucial technologies for rich multimedia /
interactivity in web apps, like WebSockets (RFC 6455 [3]) and WebRTC [4].
Things like an open browser automation protocol are helping devs build cross-
browser apps [5].
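
And either end of RFC 6455 can be implemented by anyone. A minimal client
sketch in Python (the third-party `websockets` package and the echo-server URL
are just placeholders for illustration):

    import asyncio
    import websockets  # pip install websockets

    async def main():
        # Any RFC 6455 compliant server will do; this URL is a placeholder.
        async with websockets.connect("wss://echo.example.com") as ws:
            await ws.send("hello over an open protocol")
            print(await ws.recv())

    asyncio.run(main())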

Also:

1) We're up to RFC 7639 in the official list [1], so that's about 2000 RFCs
since the one you mentioned in your article.

2) RSS is not a protocol.

3) Vendors _are_ building completely open source tools (e.g. Facebook & Google
with React & Angular.js).

4) "companies now deliver one half of their application and some custom
protocol over HTTP and never mind interoperability with other services or
playing nice" -- really confused about this, do you have any examples?

5) Are you suggesting that HTTP is a transport-layer protocol? Because it's
not [6]

    
    
       [1] http://www.ietf.org/download/rfc-index.txt
       [2] http://caniuse.com/
       [3] https://tools.ietf.org/html/rfc6455
       [4] http://www.w3.org/TR/2015/WD-webrtc-20150210/
       [5] http://www.w3.org/TR/webdriver/
       [6] https://en.wikipedia.org/wiki/Internet_protocol_suite

~~~
teacup50
You're missing the point if you're holding up HTML/CSS/JS as a counter-
example.

In the past, we had a more robust network software ecosystem in which high-
level protocol semantics were standardized across vendors, ensuring that:

- Multiple server implementations existed, and

- Multiple client implementations existed, and

- Data (the bits owned by users) were portable across those implementations.

This creates a healthy, more resilient ecosystem, within which there does not
exist a single point of failure, and competition for clients, servers, and
_services providing the same_ is robust.
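
Email is the canonical example: any standards-compliant IMAP client can read
mail from any IMAP server, whoever wrote either one. A minimal sketch using
Python's stdlib (host and credentials are placeholders):

    import imaplib

    # Works against any RFC 3501 compliant server -- Dovecot, Courier, a hosted
    # provider, whatever. The host and credentials below are placeholders.
    conn = imaplib.IMAP4_SSL("imap.example.com")
    conn.login("user@example.com", "app-password")
    conn.select("INBOX", readonly=True)
    status, data = conn.search(None, "ALL")
    print(len(data[0].split()), "messages, readable by any conforming client")
    conn.logout()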

HTML/CSS/JS do standardize semantics, but they're _application platform_
semantics -- we get interoperability and choice across browsers and web
servers, but we're losing network interoperability between the applications
being built on top of them.

This isn't limited to the web, either: closed protocols and DRM on mobile
(e.g. Apple's iMessage, FaceTime) have produced a similar result. Apple
controls the messaging clients, the OS, the network protocol, and the servers.
If you want to leave, you lose access to everyone else using the protocol.
Resiliency is also lost -- a single party controls the entire infrastructure,
including the client software.

------
mattzito
The difference here is that while AOL was a completely walled garden, many of
the various players here have APIs that you can connect to. Then you have
intermediary companies like IFTTT and Zapier that are glue between those
various services.

Not that I don't agree with the main point -- that we're moving from
completely open to little fiefdoms -- but I think it's an imperfect analogy
given the push towards everything having APIs for connectivity between them.

~~~
jfmercer
Your mention of "little fiefdoms" reminded me of a piece that Bruce Schneier
wrote called "Feudal Security":
[https://www.schneier.com/blog/archives/2012/12/feudal_sec.ht...](https://www.schneier.com/blog/archives/2012/12/feudal_sec.html).

------
fossuser
Previous discussion:
[https://news.ycombinator.com/item?id=10008769](https://news.ycombinator.com/item?id=10008769)

Somewhat related: Sandstorm.io is also a pretty cool concept that might help
avoid an AOL 2.0.

[https://sandstorm.io/](https://sandstorm.io/)

------
javajosh
I love a good rant (and I respect Jacques a lot!), but this one falls apart
between the 3rd and 4th paragraphs:

 _>...instead of delivering software to the end users which then implemented
this protocol using executables for the various platforms[,] HTTP allowed to
deliver both the visual part of the application (the user interface) and
(eventually) the rest of the client portion of the application in one go._
(comma added)

 _> The end result of all that is that we’re rapidly moving from an internet
where computers are ‘peers’ (equals) to one where there are consumers and
‘data owners’, silos of end user data that work as hard as they can to stop
you from communicating with other, similar silos._

Both of these may be true, but one doesn't follow from the other. There is a
complex blend of forces that conspire to silo data and separate it from users
-- not the least of which is the desire to a) keep it safe, and b) make it
accessible on all devices. If disposable web software follows from anything,
it's from the vulnerability of client devices -- we give you both the reader
and the data every time because we know you're going to drop your phone and
need a reinstall anyway.

The web is _still maturing_ and I think WebRTC and WebSockets are particularly
interesting, because they are analogs to lower level things that came before,
but they work in that disposable web environment.

------
Afforess
Yep. I think a big point that has not been addressed is how we got into this
situation. Most internet users are locked behind home or corporate networks,
without a real, addressable IP and with every port but 80 and 443 blocked. A
lot of this was done in the name of security (exactly how does blocking a port
or ignoring a ping response make you secure? I've never heard a valid
technical reason for this being more secure, beyond tradition-says-so
answers), but it is really due to cargo-cult sysadmin behavior and bad default
settings on hardware.

Now we've basically ruined IPv4; peer-to-peer communication fails for enough
users to make it useless, forcing the centralized client-server model. It's
not just hardware with bad defaults -- try accessing a (non-localhost) website
hosted on port 6666 in Chrome. You can't; Chrome has a blacklist of "insecure"
ports. I am baffled such a concept even exists: ports aren't secure or
insecure, they are a channel for data. Then I look at WebSockets and laugh; we
rebuilt UDP over HTTP over TCP, just so web developers can send arbitrary data
again...
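
To be concrete about "a channel for data": the port number carries no security
semantics by itself. A minimal sketch of a TCP listener on an arbitrary port,
using Python's stdlib (6666 chosen only because Chrome happens to blacklist
it):

    import socket

    # A socket bound to port 6666 behaves exactly like one bound to 80 or 8080;
    # only convention and blacklists treat the number differently.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 6666))
    srv.listen(1)
    conn, addr = srv.accept()
    conn.sendall(b"just bytes on a channel\n")
    conn.close()
    srv.close()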

I get very upset when I see the same sort of ideas taking hold in IPv6. We
have another chance not to ruin this, and yet we seem to want to force the
broken IPv4 model, with firewalled NATs, blocked ports, and more broken
default settings.

~~~
Pharaoh2
Ping of death has happened before. You do not serve any purpose by opening
ports that you do not intend to use. Closing all unneeded ports is called
minimizing the attack surface.

Ports may not be inherently secure or insecure, but the history of their usage
gives them a bias towards being secure or insecure. Chrome does the right
thing by blocking these ports, as an overwhelmingly large percentage of its
users would probably only hit them in "insecure" situations. Those who need to
use these ports can and will find alternate ways to do it.

~~~
Afforess
What "attack surface"? Almost all your ports, no applications listen to.
Closing them serves no useful purpose.

If an application has problems accepting input on a port it is listening on,
that is an application bug. Closing your ports doesn't solve this; it just
sweeps the problem under the rug, and breaks the Internet in the process.

~~~
Pharaoh2
The firewalls you mentioned were NAT/enterprise level. They do not generally
have absolute control over what programs are running on their internal
computers. Maybe that open port is used by a C&C client installed on one of
the users' computers inside the LAN, thereby possibly compromising the whole
internal network. There is a reason why DMZs exist.

~~~
simoncion
Corporate security is a different sort of beast from almost every other
security environment. What's acceptable in that arena is almost _always_ an
unacceptable management practice for someone who is designing and operating
what is supposed to be a network of peers.

------
brongondwana
Open protocol, HTTP, email:

[http://jmap.io/](http://jmap.io/)

It's not peer-to-peer, but at least it is open. We see the same issue, which
is why we're doing it.
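
For anyone curious, the general shape is just JSON POSTed over HTTPS. A rough
sketch in Python's stdlib (the endpoint, account id and method name are
illustrative placeholders, not necessarily the exact names in the spec):

    import json
    import urllib.request

    # Illustrative only: endpoint, accountId and method name are placeholders.
    payload = {
        "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
        "methodCalls": [["Mailbox/get", {"accountId": "u1"}, "c0"]],
    }
    req = urllib.request.Request(
        "https://mail.example.com/jmap/api",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))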

------
mbesto
> _Please open up your protocols, commit to keeping them open and publish a
> specification. And please never do what twitter did (start open, then close
> as soon as you gain traction)._

If you want to make money, I'm not sure how sound this advice is. Anyone know
an example of a company that has kept protocols open and made any significant
money?

~~~
viraptor
The question is: is it a case of "because", or "despite"? Is there any proof
that using open protocols and making money are mutually exclusive?

------
foota
I think the people that have been making proprietary protocols just moved from
building them directly on top of TCP to building them on top of HTTP. Saying
that if HTTP didn't exist Facebook would have proposed an interoperable status
protocol seems incredibly naive to me.

------
tracker1
I actually expected to see a rant about Facebook. That said, I don't think
that the general issue is so bad regarding protocol development. For the most
part, people are building on the back of HTTP.

------
yskchu
From the article:

> And please never do what twitter did (start open, then close as soon as you
> gain traction).

Charging for access is hardly "closing itself". Is the author faulting them
for moving to a more sustainable model?

------
jessaustin
Somewhat, but not completely, along the same lines:

[https://news.ycombinator.com/item?id=10188667](https://news.ycombinator.com/item?id=10188667)

