
Blizzard games were vulnerable to DNS rebinding attack - csmajorfive
https://bugs.chromium.org/p/project-zero/issues/detail?id=1471&desc=2
======
DrJokepu
So basically this is a local web server that is used for IPC? Is there a
reason to do local IPC over TCP/IP, rather than over named pipes / unix pipes,
other than not knowing about the existence of named pipes / unix pipes?

~~~
catfood
That's how it should be done, yeah. Developers on Windows seem to be really
loose in their use of TCP or UDP for IPC. Even my mouse driver opens up a port
on 0.0.0.0.

~~~
DrJokepu
UDP for IPC seems to be especially crazy to me. I mean sending data over TCP
to localhost is basically a memcpy(), what advantages could UDP possibly have
in that context?

~~~
johncolanduoni
Guarantee of preserved message boundaries? It’s the reason Unix domain sockets
have SOCK_SEQPACKET and Windows named pipes have message mode.

------
zajd
Disappointing (lack of) response/fix

Would have assumed they'd do better given how polished their consumer products
are

~~~
ghostbrainalpha
Could someone explain in the most accessible language, how a problem like this
could even be fixed?

~~~
kentonv
Answer:

The service should check that the `Host` header on every request specifies
`localhost`, `127.0.0.1`, or whatever other name is normally used to access
it. If the `Host` header specifies a possibly-attacker-controlled name, the
service should reject the request.

Explanation:

The problem is that the service is relying on same-origin policy to prevent a
web site running in a local browser from making requests to it -- but it does
not actually check what origin requests are addressed to. A random web site
running in a browser normally cannot make requests to a hostname other than
its own. Or, more precisely, it can make requests, but it can't read the
responses. The Blizzard updater therefore forces the client to make an initial
request whose response contains a secret token, and then subsequent requests
contain that token, proving that the client was able to read the response and
is therefore not a web site running in a browser.

However, it's perfectly possible to define an evil hostname whose DNS records
assign it the IP address 127.0.0.1. Thus requests to this evil hostname would
end up reaching the local Blizzard updater service. Unfortunately, the service
will happily respond to these requests because it does not pay attention to
what hostname the client was requesting.

(To actually exploit this, it's necessary to load an evil web site from the
evil host name, which then performs the attack. This means that the evil
hostname can't _only_ map to 127.0.0.1, but must also map to the attack
server. But a hostname can indeed have multiple DNS records, and the browser
will arbitrarily switch between them, allowing an attack to proceed.)
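
A minimal sketch of the fix described above, using only the Python standard
library; the allowed names and the port are assumptions about how such a
local service might normally be addressed, not Blizzard's actual code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Names under which the local service expects to be addressed.
ALLOWED_HOSTS = {"localhost", "127.0.0.1", "[::1]"}

def host_is_local(host_header):
    """Accept only requests whose Host header names the local service."""
    if not host_header:
        return False
    if host_header.startswith("["):
        # Bracketed IPv6 literal, e.g. "[::1]:8080".
        name = host_header.split("]")[0] + "]"
    else:
        # Strip an optional :port suffix, e.g. "localhost:8080".
        name = host_header.rsplit(":", 1)[0]
    return name.lower() in ALLOWED_HOSTS

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A rebinding attack arrives with Host: evil.example.com even
        # though the TCP connection itself comes from 127.0.0.1.
        if not host_is_local(self.headers.get("Host")):
            self.send_error(403, "Forbidden: unexpected Host header")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

def serve():
    # Bind to loopback only; the Host check above is what defends
    # against rebinding, since rebound requests also arrive on loopback.
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```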

~~~
michaelermer
And/or simply implement CORS headers.

~~~
kentonv
No, CORS doesn't apply here. CORS regulates cross-origin requests, but the
attack here makes the browser think the requests are same-origin.

(Also, CORS can only be used to permit access that would normally be denied.
CORS does not offer any way to deny access that is normally permitted.)

------
68c12c16
the DNS rebinding vulnerability exists in applications other than Blizzard
games, such as this one in Bittorrent Transmission also reported by Tavis
Ormandy....

[https://github.com/transmission/transmission/pull/468](https://github.com/transmission/transmission/pull/468)

Seems to be a quite prevalent issue...

~~~
ekimekim
That's because for decades binding to localhost has been taken to mean "only
users on the local machine can access this". Now Chrome is breaking that
assumption through a leaky sandbox, and demanding everyone else change rather
than fixing their own security issues.

~~~
geofft
Can you explain why this is a Chrome-specific issue? I believe that it applies
to all web browsers, including Internet Explorer for UNIX (which I do have
access to and I can test if you would like me to confirm). I remember this
being a vulnerability class with CUPS, which listens on
[http://localhost:631/](http://localhost:631/), about 10 years ago.

In particular, note that the request is _not_ made to localhost, it's made to
a DNS name that simply happens to resolve to 127.0.0.1. Should Chrome and also
all other web browsers add a special case for DNS names that resolve to
127.0.0.1?

~~~
kodablah
How about caching a DNS result for the duration of a tab? It doesn't solve
everything, but it's probably good enough. I don't think it'd break many
things (sure, it may affect some round-robin DNS things for long-open pages
that are Ajax'ing back home frequently, but that should be minimal).

~~~
geofft
I could believe that caching "is it localhost / RFC 1918 space or not" for a
tab would have a sufficiently low false-positive and false-negative rate to be
worth doing. It would still break the average business user who has an email
half-written in OWA via an RFC-1918 address, puts their laptop to sleep, and
goes home and expects OWA to still work over the public internet, but maybe
OWA can figure out a solution there like local storage + location.reload(). It
won't solve all the problems but it'll solve many of them.

I think caching all DNS resolutions is going to make a lot of cloud-native
websites very sad (e.g., I'd imagine something like Slack would break
quickly), and also cause poor performance because you don't get the benefit of
a CDN's DNS server telling you that other servers are closer now. A lot of
people have long-running tabs.

~~~
kodablah
I don't mind breaking legitimate uses of rebinding here. I'm not familiar w/
how OWA changes IPs from private-network to public-network, but I'd say using
DNS is the wrong approach. And Chrome can probably detect a DNS server change
and then evict its known cache anyways. Yeah, a first step is probably just
localhost/private-IP specific.
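
The "localhost/private-IP" test such a pinning policy would need is
straightforward with the Python standard library; a rough sketch (the
function name is made up):

```python
import ipaddress
import socket

def resolves_to_private(hostname):
    """True if any address the name currently resolves to is loopback,
    link-local, or otherwise private (RFC 1918 / ULA) space."""
    for *_, sockaddr in socket.getaddrinfo(hostname, None):
        ip = ipaddress.ip_address(sockaddr[0])
        if ip.is_loopback or ip.is_link_local or ip.is_private:
            return True
    return False
```

Note that a rebinding attack defeats a one-shot check like this by answering
with the public IP first; the result would have to be pinned for the lifetime
of the tab, which is the point of the proposal above.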

~~~
geofft
It's just that OWA is a web app that is often configured to be accessible on
the internal network via an internal IP, and on the external network via an
external IP (so you can get webmail even if you're not in the office), but
it's the same app.

I certainly don't think it's worth supporting this from first principles -
i.e., had the web been designed the way you suggest from day one, that would
have been great - but it's got a very large deployment, and "Chrome silently
loses your Outlook emails because of a security issue in video games and a
BitTorrent client" is a good way to get Chrome removed from enterprises, and
other browsers to decide not to ship the same fix....

------
aplorbust
A mischievous person renting a domain name can list _any_ public or private
IP addresses in her A records; and she can list any nameservers, including
ones with which she has no relationship. She can list an IP address that the
user may be utilising or renting.

I use a fast DNS resolution solution that only queries authoritative servers
and stores IPs in constant, perfect hash databases, then in kdb+. No caches. I
see the IP addresses that are returned in DNS packets not as ephemeral and
inconsequential, but as entries in a database that need to be validated before
insert.

If I see some nonsense like 127.0.0.1 in an A record, let alone a public IP
address that I'm using, it is rejected. I have seen NS records with 127.0.0.1
as well.

Are there DNS rebinding attacks that do not use iframes, Javascript or some
other way to trigger automatic lookups _without user interaction_? In theory
perhaps. But every attack I have seen relies on triggering lookups
automatically.

It's too bad the popular browsers make automatic requests for resources,
automatically follow redirects, and do not allow users to disable this
default behavior.

However users can make use of less complex HTTP clients that do not make such
automatic requests and where redirects can be disabled. These can be used in
tandem with the popular browsers to give users more transparency and control.

Also, isn't it possible to use SSL/TLS for localhost JSON-RPC? Theoretically,
couldn't users make use of client certificates?

Just a thought.

~~~
emmelaich
Yep, I saw 127.0.0.1 for an A record about 20 years ago.

It was discovered when going to that name outside revealed what seemed to be a
copy of our own internal webserver!

What had happened was that our webserver was on the same host as our squid
proxy, so we were in fact seeing our own site under the external name.

Protection for this sort of thing is in the default squid conf these days
(from memory).

~~~
fulafel
It was a standard joke to point warez.yourdomain.country to 127.0.0.1 - I
wouldn't be surprised if there were still some left and could be used for
rebinding attacks!

------
Jyaif
Developer 101: if you want to do a blacklist, do a whitelist instead.

~~~
squiguy7
I think it depends mostly on the context. If you only want to allow a known
subset of items, prefer a whitelist. If you want to avoid a subset of items,
prefer a blacklist.

~~~
vec
Yeah, but the hard-earned wisdom the parent post is trying to impart is that
if you think you want to avoid a subset of items, you're probably wrong.

In an explicitly enumerated category, blacklists and whitelists are logically
equivalent and can be used interchangeably. In almost every other case
blacklists are insufficient because new items can generally be created, either
maliciously or just accidentally as the size of the category grows, which are
not on the blacklist but which share whatever bad trait you were hoping to
protect against.

I'm sure there are a few exceptions, but generally speaking any problem that
can be solved with either a blacklist or a whitelist should use the whitelist,
just to be safe. A problem that can't use a whitelist is probably not actually
solvable by a blacklist either, and trying to use one is likely to fail in the
long run.
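
A toy illustration of that point, in the context of this thread's Host check
(all names hypothetical): the blacklist is incomplete by construction,
because the attacker can register a new name tomorrow, while the whitelist
is a closed set the service itself controls.

```python
# A blacklist can only name the evil hosts you already know about.
BLACKLIST = {"evil.example.com"}

# A whitelist names the finite set of hosts the service answers to.
WHITELIST = {"localhost", "127.0.0.1"}

def blacklist_allows(host):
    return host not in BLACKLIST

def whitelist_allows(host):
    return host in WHITELIST
```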

------
greglindahl
The vulnerable update daemon is running all of the time if you've installed
any Blizzard game in the past, not just if you currently play Blizzard games?

~~~
jrs95
I believe they've patched it already. But if you haven't updated the updater
yet, you're probably still vulnerable.

~~~
greglindahl
The posting says that the patch is not very good, but that's a separate issue.

~~~
jrs95
Edge in particular was missing, I don't think that uses "iexplore.exe".

------
skela224
Would having TLS on the localhost endpoint (without client certificates) make
the attack more difficult? The browser would have to validate the localhost-
returned cert against the attacker.com hostname.

------
dooglius
> Any website can simply create a dns name that they are authorized to
> communicate with, and then make it resolve to localhost.

So if I understand this correctly, websites can now bypass all firewalls and
send traffic to any _local_ port at will? It also seems that this same trick
would apply to local/intranet IPs (e.g. have domains that redirect to
192.168.0.x) allowing interaction with things like printers. While Blizzard
has a bug, it seems to be the browser that has the real vulnerability here.

Edit: The replies have good explanations with more detail why this would be
difficult to fix -- the host doesn't have enough context to differentiate
between "intended" and "unintended" IPs without a bunch of pernicious edge
cases.

~~~
benchaney
Agreed. Not that long ago, there was a post about how a BitTorrent client had
a similar issue. Just like in that case, the real vulnerability was in
browsers. I wonder how many programs are going to be discovered to be
“vulnerable” before people realize that they are missing the point.

~~~
geofft
What do you think browsers should do instead?

~~~
benchaney
There are a few options. IMO the best is saving the IP address that each DNS
name resolves to, for every connection, for the duration of a window's
existence.

~~~
geofft
That doesn't save you from this attack - you can send the index page and some
JS over with a long Cache-Control header, and then trigger the site to get
loaded some days later when DNS has been changed to point to localhost. The JS
will trigger the same XHR and it will succeed this time.

(.. and now that I think about caching, the various "within the same tab"
solutions elsewhere in the comments don't work, because they'd force Gmail /
OWA / Slack / etc. to get reloaded _with a full cache flush_ when they change
IP addresses, which everyone would hate.)

~~~
benchaney
Can you really reload a website, even if it is rejecting connections, just by
using a long Cache-Control header? I find that hard to believe.

> because they'd force Gmail / OWA / Slack / etc. to get reloaded with a full
> cache flush when they change IP addresses

I think security issues should be fixed even if the fix imposes an
inconvenience. Especially since the inconvenience is basically just a
performance issue.

Side note: I don't find the idea that this attack is hard to fix a
particularly strong justification for not fixing it. Not being able to connect
to arbitrary hosts is deliberately not part of the API exposed to javascript
so I think it would be hard to argue that DNS rebinding isn't a bug.

Side note 2: The workaround of "don't trust localhost" doesn't prevent all
DNS rebinding based attacks. For instance, if you took out an ad, you could
then have a botnet to bypass any rate limiter.

------
btym
I assume there's a similar patch for the Mac client, but this offers no
protection for users running the Windows client via WINE.

