
Internet protocols are changing - nmjohn
https://blog.apnic.net/2017/12/12/internet-protocols-changing/
======
jandrese
> When a protocol can’t evolve because deployments ‘freeze’ its extensibility
> points, we say it has ossified. TCP itself is a severe example of
> ossification; so many middleboxes do so many things to TCP — whether it’s
> blocking packets with TCP options that aren’t recognized, or ‘optimizing’
> congestion control.

> It’s necessary to prevent ossification, to ensure that protocols can evolve
> to meet the needs of the Internet in the future; otherwise, it would be a
> ‘tragedy of the commons’ where the actions of some individual networks —
> although well-intended — would affect the health of the Internet overall.

On the other hand, I've done a fair bit of work getting TCP-based
applications to behave properly over high-latency, high-congestion links
(usually satellite or radio), and QUIC makes me nervous. In the old days you
could put a TCP proxy like SCPS in there and most apps would get an
acceptable level of performance, but now I'm not so sure. It seems like
everybody assumes you're on a big fat broadband pipe now and nobody else
matters.

~~~
peterwwillis
> It seems like everybody assumes you're on a big fat broadband pipe now and
> nobody else matters.

This is intentional. The powers that be have an interest in moving everyone to
faster networks, and they effectively control all new web standards, and so
build their protocols to force the apps to require faster, bigger pipes. This
way they are never to blame for the new requirements, yet they get the
intended benefits of fatter pipes: the ability to shove more crap down your
throat.

It's possible to cache static binary assets using encrypted connections, but I
am not aware of a single RFC that seriously suggests its adoption. It is also
to the advantage of the powers that be (who have the financial means to do
this) to simply move content distribution closer to the users. As the powers
that be don't provide internet services over satellite or microwave, they do
not consider them when defining the standards.

~~~
sarah180
There's a technical reason for this. The Internet2 project spent a lot of
time and effort working on things like prioritized traffic to deal with
congested links. They found that it was easier and more cost-effective to
just add more bandwidth than it was to design and roll out protocols that
would deal with a lack of bandwidth.

For more info, read this:
[https://www.webcitation.org/5shCiXna8](https://www.webcitation.org/5shCiXna8)

A noteworthy quote:

> In those few places where network upgrades are not practical, QoS deployment
> is usually even harder (due to the high cost of QoS-capable routers and
> clueful network engineers). In the very few cases where the demand, the
> money, and the clue are present, but the bandwidth is lacking, ad hoc
> approaches that prioritize one or two important applications over a single
> congested link often work well. In this climate, the case for a global,
> interdomain Premium service is dubious.

"Premium service" in this document refers, basically, to an upgraded Internet
with additional rules to provide quality of service for congested links.

(I'm not personally claiming that all these conclusions are correct and that
they still apply today, just that there's some backstory here.)

~~~
wmf
I think more bandwidth is better in every case except geostationary satellites
due to their unavoidable latency. And in theory those satellites are going to
be obsoleted by LEO ISPs.
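
A quick back-of-the-envelope check on why that latency is unavoidable
(propagation delay only, ignoring queuing and processing):

```python
# Why "more bandwidth" can't fix geostationary links: the RTT floor is
# set by physics, not capacity.
C_KM_PER_S = 299_792       # speed of light in vacuum
GEO_ALTITUDE_KM = 35_786   # geostationary orbit altitude

# up + down for the request, then up + down again for the reply
min_rtt_ms = 4 * GEO_ALTITUDE_KM / C_KM_PER_S * 1000
print(f"minimum RTT ~ {min_rtt_ms:.0f} ms")  # ~477 ms before any queuing
```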

------
shalabhc
Another interesting protocol, perhaps underused, is SCTP. It fixes many
issues with TCP; in particular, it offers reliable datagrams with
multiplexing, avoiding head-of-line blocking. I believe QUIC is supposed to
be faster at connection (re)establishment.

[https://en.wikipedia.org/wiki/Stream_Control_Transmission_Pr...](https://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol)
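
A rough sketch of what using it looks like, assuming a Linux host with
kernel SCTP support (socket.IPPROTO_SCTP isn't defined on every platform,
and the peer address is just a documentation-range placeholder):

```python
import socket

# One-to-one (TCP-style) SCTP association.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect(("192.0.2.10", 5000))

# One association can carry many independent streams; a loss on one stream
# doesn't stall delivery on the others, which is how SCTP avoids TCP-style
# head-of-line blocking. (Picking the stream per message needs sendmsg()
# with SCTP ancillary data, omitted here.)
sock.send(b"hello over SCTP")
sock.close()
```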

~~~
dragontamer
SCTP is a superior protocol, but it isn't implemented in many routers or
firewalls. As long as Comcast / Verizon routers don't support it, no one will
use it.

It may be built on top of IP, but the TCP / UDP layers are important for NAT
and such. Too few people use DMZ and other features of routers / firewalls.
It's way easier to just put up with TCP / UDP issues to stay compatible with
most home setups.

~~~
nfoz
Why do the routers involve themselves at the transport layer? Can't they just
route IP packets and leave the transport alone?

Firewalls -- whose firewalls are we talking about here? If a client (say, home
user) tries to initiate an SCTP connection to a server somewhere, what step
will fail?

~~~
zAy0LfpBZLC8mAC
> Why do the routers involve themselves at the transport layer? Can't they
> just route IP packets and leave the transport alone?

Because they have to do NAT, at least on IPv4.

> Firewalls -- whose firewalls are we talking about here? If a client (say,
> home user) tries to initiate an SCTP connection to a server somewhere, what
> step will fail?

Because the connection tracking that's needed to even recognize whether a
packet belongs to an outbound or an inbound connection needs to understand
the protocol.
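
A toy sketch of why (illustrative only; real implementations like Linux
conntrack are far more involved): the flow table is keyed on transport-layer
ports, so the box has to parse the transport header, and an unknown protocol
gives it nothing to key on.

```python
# Ports live in the transport header, so the NAT/firewall needs a parser
# for each transport protocol it supports.
PORT_PARSERS = {
    "tcp": lambda pkt: (pkt["sport"], pkt["dport"]),
    "udp": lambda pkt: (pkt["sport"], pkt["dport"]),
}

flows = {}

def track(pkt):
    parse = PORT_PARSERS.get(pkt["proto"])
    if parse is None:
        return "drop"  # unknown transport: no ports, no flow key, no NAT entry
    sport, dport = parse(pkt)
    flows[(pkt["proto"], pkt["src"], sport, pkt["dst"], dport)] = "ESTABLISHED"
    return "forward"

print(track({"proto": "tcp", "src": "10.0.0.2", "sport": 51234,
             "dst": "93.184.216.34", "dport": 443}))  # forward
print(track({"proto": "sctp", "src": "10.0.0.2",
             "dst": "93.184.216.34"}))                # drop
```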

------
ilaksh
It seems that widely deploying TLS 1.3 and DOH can provide an effective
technical end-run around the dismantling of net neutrality. So we should be
promoting them and trying to deploy them as widely as possible.

Of course, they can still block or throttle by IP, so the next step is to
increase deployment of content-centric networking systems.
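
To illustrate the DOH side, here's a hedged sketch using Google's JSON
resolver API (a convenience API; DOH proper uses the binary DNS wire format
over HTTP, and the endpoint shown is just the one Google documents):

```python
import json
import urllib.request

# To an on-path observer this is just another HTTPS request to Google,
# which is the anti-blocking/anti-throttling property being discussed.
url = "https://dns.google.com/resolve?name=example.com&type=A"
with urllib.request.urlopen(url) as resp:
    reply = json.load(resp)

for rec in reply.get("Answer", []):
    print(rec["name"], rec["data"])
```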

~~~
topspin
It seems to me that all of the changes described in this story will contribute
to thwarting intermediaries and their agendas. HTTP/2 and its "effective"
encryption requirement are proof against things like Comcast's nasty
JavaScript injection[1]. QUIC has mandatory encryption all the way down; even
ACKs are encrypted, obviating some of the traditional throttling techniques.
And, as you say, TLS 1.3 and DOH further protect traffic from analysis and
manipulation by middlemen.

Perhaps our best weapon against Internet rent seekers and spooks is technical
innovation.

It is astonishing to me that Google can invent QUIC, deploy it on their
network+Chrome and boom! 7% of all Internet traffic is QUIC.

Traditional HTTP over TCP and traditional DNS are becoming a ghetto protocol
stack; analysis of such traffic is sold to who knows whom, the content is
subject to manipulation by your ISP, throttling is trivial and likely to
become commonplace with Ajit Pai et al. Time to pull the plug on these
grifters and protect all the traffic.

[1]
[https://news.ycombinator.com/item?id=15890551](https://news.ycombinator.com/item?id=15890551)

~~~
zb3
But I, as a user, want to be able to block domains, inject scripts, and see
what Chrome is sending to Google on my own devices (which is what Google
doesn't want me to do). That's why I can't support these protocols...

~~~
JoshTriplett
You, as a user, absolutely can. An ISP or network administrator who does not
control the endpoints, on the other hand, cannot, by design. That's a feature.

~~~
fiddlerwoaroof
What if I want to use my router to block telemetry domains? Or other malware
sites? It’s looking like the only way forward is running my own CA to mitm all
encrypted traffic.

~~~
aoeusnth1
That seems superior anyway - you could keep blocking domains even when you're
on the go.

~~~
kuschku
Can I? On Android, apps can now decide whether or not to accept
user-installed CAs.

So if an app is hostile (say, all the Google apps), then I have no way to
intercept their traffic anymore.

~~~
pas
You can decide to install the app or not.

And you can put your CA into the system CA store if you have root. (You can
build your own Android image, so technically the requirement is just an
unlocked - or unlockable - bootloader.)

~~~
kuschku
Unlocking the bootloader makes the device permanently fail the strictest
SafetyNet category.

Apps can and will refuse to run in that situation.

Modifying /system will make every SafetyNet check fail; as a result,
Netflix, Snapchat, Google Play Movies, and most banking apps will refuse to
run.

I can decide to install the app or not? How do I go about replacing Google's
system apps with my own, without preventing the above-mentioned apps from
running? I can't. And I can't buy reasonable devices without Google Android,
due to the non-compete clause in the OEM contracts.

~~~
pas
Then don't buy it. Or support efforts like Librem (and LineageOS and Magisk).

[http://www.androidpolice.com/2017/07/16/safetynet-can-detect...](http://www.androidpolice.com/2017/07/16/safetynet-can-detect-magisk-fix-works/)

You can walk into your bank and access the services. Or call them. Or use
their browser-based service, right?

Google and a lot of developers made the choice to restrict user freedom for
more security.

I don't agree with it, but it is what it is. A trade-off.

Of course, you can sign your own images and put the CA into the recovery DB
and relock the bootloader on reasonable devices. (
[https://mjg59.dreamwidth.org/31765.html](https://mjg59.dreamwidth.org/31765.html)
)

Or at least you used to be able to.

------
gumby
Let's just hope that future innovations (and, more perniciously,
"innovations") reinforce the end-to-end principle. A major weakness of the
2017 Internet is its centralization.

The DNS-over-HTTP discussion in this post mentions that in passing, though I
wonder if this treatment might not be worse than the disease.

~~~
ocdtrekkie
The DOH example, in particular, only conveys its benefits if centralized to
something governments are hesitant to block. This is an example of
"innovation" specifically designed to centralize. There are maybe a handful
of companies with enough influence that countries would hesitate to block
them in order to block DOH.

------
feelin_googley
"DOH" is not going to work very well as an anti-censorship protection unless
they also fix the SNI problem in TLS.
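
For context, a minimal illustration of the problem: server_hostname below is
copied into the SNI extension of the TLS ClientHello, which goes out before
encryption starts, so a censor can read (and filter on) the hostname even
though everything after the handshake is encrypted.

```python
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw:
    # The SNI field carrying "example.com" is sent in cleartext.
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())  # the session is encrypted; the SNI was not
```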

------
frut
This is just depressing. Sure, sell us out to big corporations by not
implementing proper features in protocols like HTTP/2 so we can get tracked
for decades to come. Yet, represent freedom by yet another cool way to
"fool" governments. When historians look back at what happened to the
Internet, or even society, they are going to find that organizations like
the IETF were too busy with romantic dreams of their own greatness to serve
the public. It's like people learned nothing from Snowden.

~~~
xingped
What features are missing that should be implemented?

~~~
teddyh
Not the OP, but omitting support for SRV records in HTTP/2 was a terrible
missed opportunity, as I’ve written about here before:

[https://news.ycombinator.com/item?id=8404788](https://news.ycombinator.com/item?id=8404788)

[https://news.ycombinator.com/item?id=8550133](https://news.ycombinator.com/item?id=8550133)

I quote myself: “_It really is no surprise that Google is not interested in
this, since_ Google _does not suffer from any of those problems which using
SRV records for HTTP would solve. It’s only users who could more easily run
their own web servers closer to the edges of the network which would
benefit, not the large companies which have CDNs and BGP AS numbers to fix
any shortcomings the hard way. Google has already done the hard work of
solving this problem for themselves –_ of course _they want to keep the
problem for everybody else._”
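
For readers unfamiliar with SRV: the record maps a service to a target host
and port, with priority and weight, so a small site could do failover and
load balancing in DNS itself. A hypothetical lookup (the _http record below
is made up, since HTTP never got SRV support; dnspython is a third-party
package):

```python
import dns.resolver  # pip install dnspython

# Hypothetical record:
#   _http._tcp.example.com. 3600 IN SRV 10 60 8080 host1.example.net.
# Priority/weight give DNS-level failover and load balancing; the port
# field frees small sites from needing to control port 80/443.
answers = dns.resolver.query("_http._tcp.example.com", "SRV")
for srv in answers:  # use dns.resolver.resolve() on dnspython 2.x
    print(srv.priority, srv.weight, srv.port, srv.target)
```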

~~~
Shoothe
I would also like to see SRV record support in HTTP/2, but IIRC Mozilla did
some telemetry tests and found that a significant number of DNS requests for
SRV records failed for no apparent reason (or probably for the reasons
mentioned in this submission). Unfortunately I can't find a source link for
that claim right now.

~~~
teddyh
I know of two rather large users of SRV records already: Minecraft servers and
(the big one) Microsoft Office 365. I’m less than convinced that resolution of
SRV records is _that_ broken.

~~~
Shoothe
Do you mean accessing Office 365 via browser uses SRV records or something
different?

~~~
dylz
O365 general services (Lync/Skype, Outlook, ... / Exchange autodiscover) use
SRV a fair bit.

365 is not just the browser suite.

~~~
Shoothe
Yeah, but the services that you mentioned are used mostly by enterprises.
It's still possible that SRV lookups are broken for a large number of
consumers who are not enterprises.

~~~
teddyh
And I wonder who that could be, if that is even true.

------
collinmanderson
> Finally, we are in the midst of a shift towards more use of encryption on
> the Internet, first spurred by Edward Snowden’s revelations in 2015.

Personally, I'd say it was first spurred by Firesheep back in 2010, but the
idea of encrypting all websites, even content-only ones, may have been
Snowden-related.

------
signa11
the author is responsible for the 418 teapot incident in the early 2000s.
though I am sure he is a swell guy :)

~~~
stmw
Indeed, he did great work on web services standards back in the day, and
he's still a good writer. ;-)

------
mikevm
Regarding throughput, see UDT
([http://udt.sourceforge.net/](http://udt.sourceforge.net/)) which does
reliable data transfer over UDP.

------
g-clef
I'm really struck by how hostile to enterprise security these proposals are.
Yes, I know that the security folks will adapt (they'll have to), but it still
feels like there's a lot of baby+bathwater throwing going on.

DNS over HTTP is a prime example: blocking outbound DNS for all but a few
resolvers, and monitoring the hell out of the traffic on those resolvers, is
a big win for enterprise networks. What the RFC calls hostile "spoofing" of
DNS responses, enterprise defenders call "sinkholing" of malicious domains.
Rather than adding a layer of validation to DNS that gives the end user
assurance that the DNS answer they got really is for the name they asked for
(and, in theory, allows the enterprise to add their own key to sign sinkhole
answers), DOH just throws the whole thing out... basically telling
enterprise defenders "fuck your security controls, we hate Comcast too much
to allow anyone to rewrite DNS answers."

"Fuck your security controls, we hate Comcast" is, I think, a bad philosophy
for internet-wide protocols. (That's basically what the TLS 1.3 argument boils
down to also...and that's a shame.)

~~~
slrz
As implemented, all these "enterprise security" things are mostly
indistinguishable from malicious attacks. Of course they break when you start
tightening security.

Forging DNS responses is a horrible idea (and already breaks with DNSSEC). I
have a hard time comprehending how this can be considered a reasonable
security measure.

~~~
g-clef
> I have a hard time comprehending how this can be considered a reasonable
> security measure.

OK, let's walk it through.

Task: block access to "attacker.com" and all its subdomains. Reason: maybe
it's a malware command and control, maybe it's being used for DNS tunneling,
whatever. Blocking a domain that's being used for malicious behavior is a
reasonable thing for an enterprise to want to accomplish.

Option 1: Block by IP at the firewall. Problems: Attackers can simply point
the domain at another IP, so you're constantly playing whack-a-mole and
constantly behind the attacker. Also, if it's a DNS tunnel, the DNS answer
is what's interesting, not the traffic to the actual IP. Result: Fail,
doesn't solve the problem.

Option 2: Block by DNS name at the firewall. Problems: Requires the firewall
to understand the protocols involved, which they have shown themselves to be
inconsistent at, at the best of times. Also, running a regex over every DNS
query packet (in order to catch all subdomains) doesn't scale. Result: Fail,
doesn't scale.

Option 3: Block with a local agent. Problems: Tablets, phones, appliances,
and printers can't run a local agent. Result: Fail, not complete coverage.

Option 4: Block outbound DNS except for approved resolvers, and give those
resolvers an RPZ feed of malicious domains. Problem: Clients have to be
configured to use those resolvers, but otherwise none. Result: Pass. It's
standards-compliant, and DNSSEC isn't an issue since the resolver never asks
for the attacker's DNS answer, so they never get the chance to offer DNSSEC.

That's why option 4 (or some variant of it) is popular in enterprises. It
accomplishes the task in a standards-compliant way, and covers the entire
enterprise in a way that scales well.
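
To make option 4 concrete, here's a toy sketch (not any particular product's
implementation) of the suffix check an RPZ-fed resolver effectively performs
before recursing:

```python
# Matching on label suffixes covers "attacker.com" and every subdomain
# without regexes, and the attacker's nameserver is never queried (which
# is also why DNSSEC never comes into play).
BLOCKED_ZONES = {"attacker.com"}  # fed from a threat-intel / RPZ feed

def is_sinkholed(qname: str) -> bool:
    labels = qname.rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKED_ZONES
               for i in range(len(labels)))

assert is_sinkholed("attacker.com")
assert is_sinkholed("c2.deep.attacker.com")
assert not is_sinkholed("notattacker.com")
```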

DOH blows this up. So, the question becomes: in a world with DOH, how is an
enterprise supposed to completely and scalably block access to "attacker.com"
and all its subdomains? So far, the answer has been "you don't." I think that
is a really shitty answer to someone who's trying to accomplish something
reasonable.

~~~
Dylan16807
If the attacker can get new IPs, they can get new domains. Why is pure domain-
blocking a goal in the first place?

The one-size-fits-all answer with DOH is the same as without it: Tell your
devices to use/trust the MitM.

------
wojcikstefan
This reads less like “Internet protocols are changing” and more like “Google
is changing the Internet to their own benefit”.

~~~
kaplun
Do these specific changes from Google impact negatively the community?
Otherwise, IMHO good ideas, are good ideas regardless where they come from.

~~~
ori_b
Yes. Generally they're some combination of technically overcomplicated,
difficult to use without layers and layers of heavy dependencies, poorly
thought out, or aimed at Google-specific use cases.

~~~
zAy0LfpBZLC8mAC
Well, the complexity is a problem, but I don't really see that as Google's
fault. The only chance to evolve the network is by building on stuff that
works despite all the hostile middle boxes, and that necessarily requires
quite a bit of complexity, unfortunately. In the long term, it seems to me
like QUIC is a better idea than everyone individually having to work around
idiocies all over the internet, as that is not exactly a zero-complexity game
either.

------
jedisct1
I'm pretty excited about DNS over TLS. Ahaha no, that's so tacky, I meant DNS
over QUIC of course. Sorry, I meant iQUIC. Ah no, it's not even there, but it
will suck compared to DOH, DNS over HTTPS.

------
provost
No mention of BGP in the article?

------
shawndrost
ELI5: Does DOH threaten the Great Firewall?

------
adictator
Isn't ws:// also a new-ish protocol, one that isn't yet natively supported
by many browsers?

~~~
scott_karana
WebSockets leverages TCP and is based on/compatible with HTTP. Nothing new
about it, at least in the sense the article is discussing.

------
sarmad123
good to see

------
peterwwillis
> For example, if Google was to deploy its public DNS service over DOH on
> www.google.com and a user configures their browser to use it, a network that
> wants (or is required) to stop it would have to effectively block all of
> Google (thanks to how they host their services).

Which will result in all of Google being blocked by schools, businesses, and
entire nations. Which, as Google is relied upon more and more, means less
access to things like mail, documents, news, messaging, video content, the
Android platform, etc.

Thanks.

~~~
jlgaddis
Nah, many of them can't -- won't -- block Google over this.

A huge number of them are absolutely reliant on Google, for things like (org-
wide) Google Mail, Google Docs, ChromeBook deployments, and so on -- not to
mention basic Google search.

~~~
norin
What about China or the EU? They can surely block Google?

~~~
inimino
China has, for many years. The EU is unlikely to.

