
When “Dumb Pipes” Get Too Smart - saurik
http://www.saurik.com/id/14
======
z3t4
I have a 10ms ping to news.ycombinator.com, and a 100ms ping to www.amazon.com.
Yet time to first byte is 20% faster to www.amazon.com. What actually happens
is that my PC connects to Cloudflare, which in turn connects to HN. This is an
unnecessary step, and is highly overrated.
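
Time to first byte is easy enough to measure yourself. A rough stdlib-only sketch (the hostnames are just the ones mentioned above; swap in whatever you want to compare):

```python
# Rough time-to-first-byte measurement over plain HTTP on port 80.
# Illustrative only: real comparisons should also account for TLS
# handshakes, DNS lookup time, and repeated trials.
import socket
import time

def time_to_first_byte(host, port=80, timeout=5.0):
    """Connect, send a minimal HTTP/1.1 request, and return the seconds
    elapsed between sending the request and reading the first byte."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        request = (
            f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        ).encode("ascii")
        start = time.monotonic()
        sock.sendall(request)
        sock.recv(1)  # blocks until the first response byte arrives
        return time.monotonic() - start

if __name__ == "__main__":
    # Network access required for these two; results will vary by location.
    for host in ("news.ycombinator.com", "www.amazon.com"):
        print(host, round(time_to_first_byte(host) * 1000, 1), "ms")
```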

~~~
mjevans
It's needed because CDNs are presently 'protection rackets' for the Internet.

Instead of there being a mechanism by which a website under attack (or simply
under unmonitored heavy load) from one or more hosts can direct the ISPs of
those hosts to stop sending it traffic, CDNs use the present lopsided nature
of peering agreements to simply absorb the hits.

Under the solution described above, either the direct ISP would filter out the
requests before they hit a higher-tier ISP/backbone, or an ISP that does not
provide such filtering would itself be penalized (possibly by blacklisting the
entire client ISP for sufficiently bad behavior).

~~~
Alupis
> Instead of having a mechanism by which a website under attack ... from a
> host or number of hosts can direct the ISPs of those hosts to not send it
> traffic

How is this any different? You're putting the responsibility of being internet
police on the ISPs, which have shown they either do not want that
responsibility or will abuse it.

Not to mention, this would go squarely against the notion of ISPs as "dumb
pipes".

~~~
JoshTriplett
I'd _like_ to see ISPs doing better policing of their customers: disconnecting
(and prosecuting) spammers, blocking users infected with malware until they
get clean, and so on. I'd like a world in which I can reasonably expect to
react to spam by tracking down where it came from and getting the spammer
removed from the Internet.

abuse@ ought to function and produce a rapid response.

(That doesn't mean we shouldn't have anonymous, untrackable services as well,
but using such services means you have to put up with spam too. When you have
a service like email, with all the tracking information readily available in
the headers, it shouldn't be as near-impossible as it currently is to get
something done with that information.)
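
The tracking information in question is mostly the Received: headers that each relaying mail server prepends. Pulling them out with the stdlib is trivial; the sample message below is made up:

```python
# Extracting the relay chain from email headers. Each mail server that
# handles a message prepends a Received: header, so they read newest-first
# and the last one is (nominally) the origin hop. Sample message invented.
from email import message_from_string

raw = """\
Received: from mail.example.net (mail.example.net [203.0.113.7])
\tby mx.example.org with ESMTP; Mon, 27 Feb 2017 10:00:00 +0000
Received: from sender-pc ([198.51.100.23])
\tby mail.example.net with SMTP; Mon, 27 Feb 2017 09:59:58 +0000
From: someone@example.net
To: you@example.org
Subject: hello

body
"""

msg = message_from_string(raw)
for hop in msg.get_all("Received"):
    print(hop.split(";")[0].strip())
```

The caveat, of course, is that a spammer controls every Received: header below the ones added by servers you trust, which is part of why acting on this information is harder than it looks.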

~~~
AnthonyMouse
The problem is there are two sources of spam.

The first is large networks that tolerate spam, and those are irrelevant
because their IP blocks are already on every blacklist in the world.

The second is compromised machines, which are the real problem because they're
ephemeral. Spammers can compromise a hundred new machines a week. You can't
fix them or block them faster than they compromise new ones. The only solution
to that is to improve computer security so they don't get compromised to begin
with, which has nothing to do with ISPs.

Removing people from the internet is never the answer, both because it's too
broad (should the spammers not be able to go to the government's website to
pay their taxes?), and because the nominal current source of the spam isn't
actually where the spammer connects to the internet anyway.

The better solution if you can actually find a real life spammer is to impose
a fine on them that exceeds the profits of spamming.

~~~
kazagistar
Wait, why can't you block spammers as fast as they are created? If your
computer is compromised, it seems fine to disconnect it, or at least severely
rate limit uploading until you fix the problem. It's the responsibility of the
internet user to keep their computer in working order.

Really, if your computer is spamming someone, even if you aren't aware of it,
you are harming them, and ignorance shouldn't be a protection. Maybe fines and
jail are a bit too severe, but rate limiting and possible disconnection are
more than fair.
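
The "severely rate limit" option is usually some form of token bucket applied per subscriber. A minimal sketch of the mechanism (purely illustrative; real traffic shaping happens in the ISP's network gear, not in Python):

```python
# Token-bucket rate limiter: tokens refill at a fixed rate up to a burst
# capacity; each packet/request spends a token, and when the bucket is
# empty, traffic is dropped or delayed.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1):
        """Return True if this request fits in the budget, else False."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# With rate=0 the bucket never refills: the first 3 calls pass, then deny.
bucket = TokenBucket(rate=0, capacity=3)
print([bucket.allow() for _ in range(5)])  # [True, True, True, False, False]
```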

~~~
plttn
So if Joe Shmo's router is compromised, the ISP turns it off. How then is Joe
supposed to look up symptoms to learn that he's been compromised, or download
an antivirus tool to clean it up?

It's a pretty solution to the problem in theory, but when you actually apply
it, it doesn't work out so well.

~~~
jon-wood
My ISP actually did once cut me off when one of the machines on the network
was compromised by malware. I called up their support line and they told me
what had happened, I fixed it, and all was well.

------
fapjacks
> Cloudflare likes to look triumphant.

Yeah, absolutely this. They spin everything they do as some kind of heroic
"for the people!" decision, even when it's just about cutting costs or not
having to solve "hard" problems. One example is DNS "ANY" queries: Cloudflare
just decided to toss the standard out because they aren't up to conforming to
it. As far as I'm concerned, this Cloudbleed thing is karma, and nobody should
believe anything Cloudflare says about itself.

~~~
daenney
To be honest, though, ANY is mostly used for reflection/amplification attacks
or to scrape domain records. Sure, there are legitimate uses for it, but I
can't think of many that need to happen over the open internet rather than
being allowed only from specific trusted parties. And yes, I'm aware that it
can be useful as a debugging tool. There are also ISP recursors that don't
allow ANY queries through for similar reasons, so relying on it will cause
trouble for some, and there are other broken implementations in the wild.
Though ANY can be useful, it shouldn't be assumed that it'll work.

What I'm thinking they could've done in the case of ANY is respond with the
truncated (TC) bit set, forcing the client to retry over TCP, at which point
it becomes much harder to do the spoofing dance and (ab)use ANY as a DNS
amplification attack to DDoS a target. Unfortunately that puts an extra cost
on the DNS server, which might also be undesirable. Still, it would probably
have been a worthwhile tradeoff.
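
The TC bit is just one flag in the 12-byte DNS header. A sketch of what such a truncated response looks like on the wire (field layout per RFC 1035 section 4.1.1; the query ID is arbitrary):

```python
# Minimal DNS response header with the TC (truncated) bit set, which
# signals the client to retry the query over TCP. Header fields, per
# RFC 1035: ID, flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT.
import struct

def truncated_response_header(query_id):
    flags = 0
    flags |= 0x8000  # QR = 1: this is a response, not a query
    flags |= 0x0200  # TC = 1: message truncated, retry over TCP
    # All four record counts zero in this bare sketch.
    return struct.pack("!HHHHHH", query_id, flags, 0, 0, 0, 0)

header = truncated_response_header(0x1234)
tc_bit = (struct.unpack("!H", header[2:4])[0] >> 9) & 1
print(tc_bit)  # 1
```

Because TCP requires a completed three-way handshake, a spoofed source address never sees the follow-up connection succeed, which is what kills the amplification angle.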

There's an IETF draft that attempts to provide some guidance in the area of
the ANY query: [https://tools.ietf.org/html/draft-ietf-dnsop-refuse-any-04](https://tools.ietf.org/html/draft-ietf-dnsop-refuse-any-04)

~~~
fapjacks
Well, two of the three people proposing that document work for Cloudflare.
The other works for Dyn. It's clear what their motivation is. And this is
_not_ the first time they've filed a document just like that, and it will
expire this year just as it has in the past. That document is bullshit. It's
trying to change the standard _after_ they've already broken their DNS
implementations.

~~~
chrisbolt
Your comment doesn't address ANY of the arguments you're replying to. Are ANY
queries typically used outside of attacks, scraping, or diagnosis? Is there a
reason they need to be served over UDP?

> Well, two out of the three people proposing that document work for
> Cloudflare. The other works for Dyn.

Two companies that have some experience with dealing with DNS attacks. Their
motivation may be self-serving, but they're not the ones doing the attacking.

~~~
fapjacks
No, they're the ones tossing the standard and then trying to retroactively
change it. Cloudflare sucks.

------
syncsynchalt
Lovely bit of debugging in this article, I really enjoyed it! Somehow a task
that would be grueling to do myself is so much more enjoyable when read about.

------
dpc_pw
We need to switch dumb pipes to being dumb content-addressable p2p pipes with
MaidSafe, IPFS, Dat, or anything like that, and most of the problems that CDNs
are trying to solve would disappear.
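
The core idea of content addressing fits in a few lines: the "address" of a blob is a hash of its bytes, so any node on the network can verify what it fetched, and a cached copy anywhere is as good as the origin's. A toy sketch with an in-memory store (real systems like IPFS use multihashes and chunked DAGs, not a flat dict):

```python
# Toy content-addressable store: addresses are SHA-256 digests of the
# content itself, so retrieval from an untrusted peer is verifiable.
import hashlib

store = {}

def put(data: bytes) -> str:
    """Store a blob and return its content address."""
    addr = hashlib.sha256(data).hexdigest()
    store[addr] = data
    return addr

def get(addr: str) -> bytes:
    """Fetch a blob and verify it matches its address."""
    data = store[addr]
    # This check is what makes untrusted intermediaries (or CDNs) safe:
    assert hashlib.sha256(data).hexdigest() == addr
    return data

addr = put(b"hello, dumb pipes")
assert get(addr) == b"hello, dumb pipes"
```

The verification step is also why the hash function matters so much, which is the caveat raised below.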

~~~
kevinr
At least until someone finds a collision in the hashing algorithm used for the
content addresses...

~~~
ThisIs_MyName
Crypto agility is a thing :)

~~~
kevinr
Somebody needs to inform Linus. :)

------
lanius
Has the WebCore scheduler improved since?

~~~
edoceo
The bug is in bad JS authored by CF hammering ticks/the event loop.

WebCore is doing what it's told; it's just being told very stupid things
(very, very quickly).

~~~
chainsaw10
> The bug is in bad js authored by CF

No, the bug is definitely in the browser. Web code is untrusted and should not
be able to adversely affect the browser.

~~~
saurik
So, there was (I hope not "is"... I am having a difficult time remembering the
sequence of events involving this bug and what I might have done on the WebKit
side) definitely a bug in the browser; but, just to be explicit: there was
also a bug in the JavaScript (the array returned as an object with no length,
causing the code to loop), and deploying a change that managed to DoS tons of
deployed browsers, and then somehow not noticing and not really caring
afterwards, was extremely careless.

