DNS Cookies – Identify Related Network Flows (dnscookie.com)
74 points by codezero on June 19, 2019 | 24 comments


Someone linked this in a great thread about how DNS can leak info.

https://news.ycombinator.com/item?id=19828769

The parent thread is really interesting too.

https://news.ycombinator.com/item?id=19828702


It's fun when you check the front page of HN and see your work :).


I'm bummed it didn't stay there longer, not just for my own karma :)


OK, now please tell me: how do I block it?


- Never allow any part of the computing systems you use to cache anything.

- Insist that everything in your life exist in a state of being functionally pure & stateless.

- Eliminate access to all sources of timing data.

- Make sure that all tasks are completed in a pre-determined fixed amount of time regardless of resource contention.

There are so many different side channel attacks, and the computing primitives & API choices we have been making for years make it challenging to build secure systems.

Caches are very deeply embedded in the culture of how computing is done. Making tasks take longer than strictly necessary to avoid leaking information goes against our instincts to optimize system performance.

It's going to take a lot of work and cost a lot of money to get software to a point where we aren't playing whack-a-mole with side channels.

More pragmatically, the current implementation of this technique can be dealt with by being very conscious of how much data your DNS resolver(s) leak and how large the anonymity set of their userbase is.

If you limit DNS cache times and use blinding computation techniques to limit the identity information your DNS resolver has or retains about you, then DNS cookies can be largely mitigated. If you have faith that 1.1.1.1 is operated in the manner that Cloudflare claims, the measures they have taken go a long way to making DNS cookies unusable.

I also pointed out some additional specific mitigations when I reported this issue to the Chromium team in October 2015:

https://bugs.chromium.org/p/chromium/issues/detail?id=546733


What if we designed the resolver to fetch many responses with caching disabled and then cache all of them? In essence, force it to give you as many cookies as your desired anonymity set size, then sample this local store of cookies when calculating the response for the end client.
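
Something like this, as a rough Python sketch (assuming a hypothetical resolve_uncached() helper that performs a single upstream lookup with caching disabled):

    import random

    def build_local_pool(resolve_uncached, name, pool_size=64):
        # Collect many independent upstream answers so the local store holds
        # roughly pool_size distinct cookies/addresses for this name.
        return [resolve_uncached(name) for _ in range(pool_size)]

    def answer_client(pool):
        # Hand each end client a random member of the local store rather than
        # whatever single answer the upstream happened to return this time.
        return random.choice(pool)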


This would make it harder to build a fingerprint, especially if responses were sampled from a number of independent sources.

The next logical step in the arms race would likely involve fingerprinting systems using more bits than strictly necessary and error-correcting codes, i.e. treating the sampling as "noise" to be overcome.

It seems both more straightforward and more effective to build recursion paths that you can trust aren't doing any intentional or unintentional caching.

This of course means the performance benefits of caching go away. This has been a theme in computing lately (e.g. CPU speculative execution leaks such as Meltdown).

A recursor could be built which only uses each query response once, with prefetching used to reduce the performance impact.
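
A minimal sketch of the use-once idea (fetch(name) here is a hypothetical uncached upstream lookup, not a real API):

    import collections

    class UseOnceStore:
        def __init__(self, fetch, target_depth=4):
            self.fetch = fetch                       # uncached upstream lookup
            self.target_depth = target_depth
            self.queues = collections.defaultdict(collections.deque)

        def prefetch(self, name):
            # Top up the store ahead of demand to hide upstream latency.
            while len(self.queues[name]) < self.target_depth:
                self.queues[name].append(self.fetch(name))

        def resolve(self, name):
            if self.queues[name]:
                return self.queues[name].popleft()   # each answer used exactly once
            return self.fetch(name)                  # fall back to a live lookup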

However, the mere fact prefetched responses exist would also leak data.


> It seems both more straightforward and more effective to build recursion paths that you can trust aren't doing any intentional or unintentional caching.

I agree, but as you say, that will take quite some work and time to happen and will be costly. I was thinking of this as a possible temporary mitigation which would retain some benefits of caching. If it was made adaptive[1], it would also have the nice side-effect of being more resource intensive for those servers that attempt to use tracking.

[1] i.e. only fetch many responses if they appear to vary while doing a smaller number of "probing" requests. Continue fetching more responses for your local sample until they stop varying with some degree of confidence.
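
Roughly, the adaptive variant could look like this (again assuming a hypothetical resolve_uncached() helper):

    def adaptive_pool(resolve_uncached, name, probes=3, max_fetches=64):
        # Probe a few times first; only build a large local sample if the
        # probing answers disagree.
        pool = [resolve_uncached(name) for _ in range(probes)]
        if len(set(pool)) == 1:
            return pool                              # answers look stable
        # Keep sampling until the most recent answers stop varying, or we hit a cap.
        while len(pool) < max_fetches and len(set(pool[-probes:])) > 1:
            pool.append(resolve_uncached(name))
        return pool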


It would be difficult to differentiate between responses that vary due to load balancing and responses that vary due to active fingerprinting.

Even when a site only has a single physical location, load balancing might be done in part by having DNS randomly return one of many valid IP addresses. E.g. this is a behaviour supported by Amazon's Route53.

Larger sites frequently use a combination of anycast and DNS based routing to get packets to the closest POP. This introduces both (1) difficulty identifying when fingerprinting is occurring, and (2) still more opportunities for fingerprinting.

Most users will find it impossible to control which POP their packets get routed towards. For someone doing fingerprinting, it could be a very useful signal.


Yes, but variation due to such load balancing would surely be limited in entropy in practical non-tracking scenarios?

Approaching from the other end, it points towards anycast itself (and similar techniques) being incompatible with hard tracking resistance.

I'm glad to see that Firefox containers already mitigate this by using a separate DNS cache for each container.


Do not use third party recursive DNS services, such as Google or OpenDNS.

If running your own DNS recursor, turn off EDNS; only send traditional 512-byte DNS packets.

Use DNS software that does not support EDNS, such as djbdns.
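
For what it's worth, a sketch of what a plain, non-EDNS query looks like, assuming the dnspython package and a placeholder resolver address; make_query() adds no OPT record unless use_edns is set:

    import dns.message
    import dns.query

    # A traditional query: no OPT pseudo-record, so no EDNS options (including
    # the DNS Cookie option) can ride along, and responses stay within the
    # classic 512-byte UDP limit.
    q = dns.message.make_query("example.com", "A")
    resp = dns.query.udp(q, "192.0.2.53", timeout=2)   # placeholder recursor address
    print(resp.answer)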


Third party DNS servers are helpful in one sense - you can share your state with other users.

Turning off EDNS with your own recursor won't really make much difference. Limiting the maximum cache length will help, but will also eliminate much of the benefit of having a local recursor.

The other issue with running your own recursor is nasty networks will transparently proxy DNS and you can end up using a cache you don't even know exists.

DNSCurve, DNSCrypt, and DNS-over-HTTPS solve one set of problems while introducing different ones.


Sharing a cache with other users introduces its own set of problems, e.g., cache poisoning. The problems that arise from shared DNS caches gave rise to "solutions" that in turn introduced further problems.

For transparent proxying, e.g., hotel internet, I use a local forwarder and a remote recursor listening on a non-standard port, and it has worked flawlessly.
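
Not my exact setup, but a minimal illustration of the forwarder idea (addresses and ports are placeholders; binding port 53 needs elevated privileges, and relaying to a non-standard remote port keeps the upstream traffic out of reach of transparent port-53 proxies):

    import socket

    LISTEN = ("127.0.0.1", 53)
    REMOTE = ("192.0.2.10", 5353)   # hypothetical remote recursor, non-standard port

    def forward_once(server):
        query, client = server.recvfrom(512)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as upstream:
            upstream.settimeout(3)
            upstream.sendto(query, REMOTE)
            try:
                reply, _ = upstream.recvfrom(4096)
            except socket.timeout:
                return                   # drop the query on upstream timeout
        server.sendto(reply, client)

    if __name__ == "__main__":
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as server:
            server.bind(LISTEN)
            while True:
                forward_once(server)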

I prefer to serve static address info via authoritative DNS or /etc/hosts. I have other methods of getting DNS data besides querying caches. I have no need for DNS caches. Most websites I visit do not change addresses frequently. I also like to know when they change, if they ever do.

I have not experienced any problems with DNSCurve.


Thanks.

> DNSCurve, DNSCrypt, and DNS-over-HTTPS solve one set of problems while introducing different ones.

You seem to know quite a bit: Are there series of blog posts where you've detailed these issues and/or proposed mitigations?


I appreciate the compliment, but sadly I haven't yet made the time to write much on the subject. There's a lot I'd like to see implemented.


In general, publicly visible DNS cookies are set by a DNS recursive resolver. Typically, multiple IP addresses share one recursive resolver. So it seems to me that a DNS cookie has strictly less information than an IP address.


If this fingerprinting persists across, say, HTTP proxies (or a VPN/Tor network) and a regular network, it may be a way to track users, especially for ad networks.


ELI5 description?


The abstract from RFC 7873:

—— DNS Cookies are a lightweight DNS transaction security mechanism that provides limited protection to DNS servers and clients against a variety of increasingly common denial-of-service and amplification/forgery or cache poisoning attacks by off-path attackers. DNS Cookies are tolerant of NAT, NAT-PT (Network Address Translation - Protocol Translation), and anycast and can be incrementally deployed. (Since DNS Cookies are only returned to the IP address from which they were originally received, they cannot be used to generally track Internet users.) ——

At the end of the day, the data can really be any 8 bytes for the client part and up to 32 bytes for the server section, which you could technically use to store anything you want (or the upstream resolver could).
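
For the curious, the layout is just an 8-byte client cookie optionally followed by an 8-to-32-byte server cookie, wrapped in the usual RFC 6891 option-code/option-length framing; a rough sketch:

    import os
    import struct

    OPTION_CODE_COOKIE = 10   # EDNS0 option code assigned to COOKIE (RFC 7873)

    def build_cookie_option(client_cookie, server_cookie=b""):
        assert len(client_cookie) == 8
        assert len(server_cookie) == 0 or 8 <= len(server_cookie) <= 32
        data = client_cookie + server_cookie
        # OPTION-CODE and OPTION-LENGTH are 16-bit big-endian fields (RFC 6891).
        return struct.pack("!HH", OPTION_CODE_COOKIE, len(data)) + data

    client_only = build_cookie_option(os.urandom(8))   # what a client sends first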

The linked article talks about using it for tracking users, which the abstract ironically says isn’t generally possible.


I published dnscookie.com in late 2015. I googled "dns cookies" and a few other terms, was surprised that the terminology appeared unused, and it seemed suitable for the concept I was describing.

In May 2016, RFC 7873 was published which also uses the term "DNS cookies".

These two things share a name but have different meanings. The naming collision is an unfortunate coincidence.


Oh! Wow! That was a huge assumption on my part.

I had assumed it was the aforementioned RFC. That explains why the site makes no mention of EDNS or OPT records.

TIL...


Specifically, the tracking is done here by randomly choosing IP addresses from a pool and correlating connection attempts to the resolved IP with the original DNS request. To quote the article:

"With 2 IP addresses available in the pool, a 32-bit identifier requires 32 correlated connections. With 256 IP addresses, a 32-bit identifier requires only 4 correlated connections."

IPv6 brings it down to just one.
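
A rough sketch (not the site's actual code) of the arithmetic: a pool of 2^k addresses leaks k bits per lookup, so a 32-bit identifier needs ceil(32/k) correlated connections; pool sizes here are assumed to be powers of two:

    import math

    def connections_needed(id_bits, pool_size):
        bits_per_lookup = int(math.log2(pool_size))
        return math.ceil(id_bits / bits_per_lookup)

    def encode_identifier(identifier, pool, id_bits=32):
        # Return one address per k-bit chunk of the identifier; the server side
        # recovers each chunk by observing which address the client connects to.
        k = int(math.log2(len(pool)))
        return [pool[(identifier >> shift) & (len(pool) - 1)]
                for shift in range(0, id_bits, k)]

    assert connections_needed(32, 2) == 32     # matches the quoted figures
    assert connections_needed(32, 256) == 4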


Like HTTP cookies, but ones that users don't have control over, since they're managed by network infrastructure.


I’m sorry, I really don’t know much about DNS. I still have many questions: Who sets it? How is it transmitted? Who can access it?



